Overview
This course, Deep Learning and Development of Generative AI Models Fundamentals, briefly reviews deep learning concepts and then teaches the development of generative AI models.
Prerequisites
Students should have prior experience developing deep learning models, including architectures such as feed-forward, recurrent, and convolutional artificial neural networks.
Course Outline
Review of Core Python Concepts (**if needed, depending on tool context**)
- Anaconda Computing Environment
- Importing and Manipulating Data with Pandas
- Exploratory Data Analysis with Pandas and Seaborn
- NumPy ndarrays versus Pandas DataFrames
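The ndarray-versus-DataFrame distinction above can be sketched in a few lines; the data and column names here are toy values I've made up for illustration, not course materials:

```python
import numpy as np
import pandas as pd

# A NumPy ndarray is a homogeneous block of numbers with no labels.
arr = np.array([[1.0, 2.0], [3.0, 4.0]])

# A DataFrame wraps the same data with row/column labels and mixed dtypes.
df = pd.DataFrame(arr, columns=["height", "weight"], index=["a", "b"])

# Label-based selection is a DataFrame feature; ndarrays use positions only.
col = df["height"]   # select a column by name
row = arr[0]         # select a row by position

# .to_numpy() recovers the underlying ndarray from a DataFrame.
back = df.to_numpy()
print(back.shape)
```

In practice, pandas is convenient for exploratory work with labeled, heterogeneous data, while most deep learning libraries consume plain numeric arrays.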
Overview of Machine Learning / Deep Learning
- Developing predictive models with ML
- How Deep Learning techniques have extended ML
- Use cases and models for ML and Deep Learning
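As a minimal illustration of "developing predictive models with ML" (a sketch with synthetic data, not the course's lab code), ordinary least squares recovers the weights that generated a noisy target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised-learning setup: features X, noisy linear target y.
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

# A classic ML predictive model: ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Score the model: mean squared error of its predictions.
mse = float(np.mean((X @ w - y) ** 2))
print(np.round(w, 2), mse)
```

Deep learning extends this pattern by replacing the linear model with stacked nonlinear layers trained by gradient descent.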
Hands on Introduction to Artificial Neural Networks (ANNs) and Deep Learning
- Components of Neural Network Architecture
- Evaluate Neural Network Fit on a Known Function
- Define and Monitor Convergence of a Neural Network
- Evaluating Models
- Scoring New Datasets with a Model
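The "evaluate a neural network fit on a known function" exercise can be sketched from scratch in NumPy (the course itself may use a framework such as Keras; the target function, layer sizes, and learning rate below are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Known target function the network should approximate.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(np.pi * X)

# One hidden tanh layer: the minimal feed-forward ANN.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.1
losses = []  # monitoring convergence: track loss per step
for step in range(500):
    h = np.tanh(X @ W1 + b1)          # forward pass
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation through both layers.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)  # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Plotting `losses` against the step number is the usual way to judge whether training has converged; scoring a new dataset is just another forward pass.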
Hands on Deep Learning Model Construction for Prediction
- Preprocessing Tabular Datasets for Deep Learning Workflows
- Data Validation Strategies
- Architecture Modifications for Managing Overfitting
- Regularization Strategies
- Deep Learning Classification Model example
- Deep Learning Regression Model example
- Trustworthy AI Frameworks for this DL prediction context
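One common regularization strategy from the list above is dropout; a minimal inverted-dropout sketch (my own toy implementation, not a framework layer) shows the train/inference distinction:

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(h, rate, training):
    """Inverted dropout: randomly zero units during training only."""
    if not training:
        return h  # at inference time, activations pass through unchanged
    mask = rng.random(h.shape) >= rate
    # Scale survivors by 1/(1 - rate) so the expected activation is unchanged.
    return h * mask / (1.0 - rate)

h = np.ones((4, 8))
train_out = dropout(h, rate=0.5, training=True)
test_out = dropout(h, rate=0.5, training=False)
print(train_out.mean(), test_out.mean())
```

Because each training step sees a different random subnetwork, the model cannot rely on any single unit, which tends to reduce overfitting.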
Generative AI Fundamentals
- Generating new content versus analyzing existing content
- Example use cases: text, music, artwork, code generation
- Ethics of generative AI
Sequential Generation with RNNs
- Recurrent neural networks overview
- Preparing text data
- Setting up training samples and outputs
- Model training with batching
- Generating text from a trained model
- Pros and cons of sequential generation
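The "setting up training samples and outputs" step above amounts to sliding a window over the text: each sample pairs a fixed-length character sequence with the character that follows it. A sketch (the corpus and window length are placeholders; a real lab would load a text file):

```python
# Hypothetical corpus; real training data would be much larger.
text = "hello world"

# Build a character vocabulary and an integer index for each character.
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

seq_len = 4
samples = []
# Each training pair: seq_len input characters, the next character as target.
for i in range(len(text) - seq_len):
    x = [stoi[c] for c in text[i:i + seq_len]]
    y = stoi[text[i + seq_len]]
    samples.append((x, y))

print(len(samples), samples[0])
```

Generation then runs the trained model repeatedly, feeding each predicted character back in as input — which is also why errors can compound, one of the cons discussed above.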
Variational Autoencoders
- What is an autoencoder?
- Building a simple autoencoder from a fully connected layer
- Sparse autoencoders
- Deep convolutional autoencoders
- Applications of autoencoders to image denoising
- Sequential autoencoder
- Variational autoencoders
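The core autoencoder idea — compress through a bottleneck, then reconstruct — can be sketched with purely linear encoder/decoder weights in NumPy (the data dimensions and learning rate are illustrative assumptions; real labs would use a framework and nonlinear layers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Data that genuinely lives on a 2-D subspace of 4-D space.
Z = rng.normal(size=(300, 2))
X = Z @ rng.normal(size=(2, 4))

# Encoder (4 -> 2) and decoder (2 -> 4), both linear for simplicity.
We = rng.normal(0, 0.1, (4, 2))
Wd = rng.normal(0, 0.1, (2, 4))

initial_mse = float(np.mean((X @ We @ Wd - X) ** 2))

lr = 0.01
for _ in range(2000):
    code = X @ We       # bottleneck representation
    recon = code @ Wd   # reconstruction
    err = recon - X
    gWd = code.T @ err / len(X)
    gWe = X.T @ (err @ Wd.T) / len(X)
    Wd -= lr * gWd
    We -= lr * gWe

final_mse = float(np.mean((X @ We @ Wd - X) ** 2))
print(f"reconstruction MSE: {initial_mse:.3f} -> {final_mse:.3f}")
```

A variational autoencoder keeps this encode/decode structure but makes the bottleneck a probability distribution, which is what allows sampling new data from it.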
Generative Adversarial Networks
- Model stacking
- Adversarial examples
- Generative and discriminative networks
- Building a generative adversarial network
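The adversarial objective behind GAN training can be shown without training anything: the discriminator is scored on telling real from fake, the generator on fooling it. The probabilities below are made-up placeholder outputs, not results from a trained model:

```python
import numpy as np

def bce(p, label):
    """Binary cross-entropy for a batch of predicted probabilities."""
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(label * np.log(p) + (1 - label) * np.log(1 - p)))

# Suppose the discriminator outputs these probabilities of "real":
d_real = np.array([0.9, 0.8])  # on real samples
d_fake = np.array([0.2, 0.1])  # on generated samples

# Discriminator loss: push real -> 1 and fake -> 0.
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)

# Generator loss (non-saturating form): push fake -> 1.
g_loss = bce(d_fake, 1.0)

print(round(d_loss, 3), round(g_loss, 3))
```

Training alternates between the two updates — this is the "model stacking" above — and the generator's loss is high exactly when the discriminator is confidently rejecting its samples.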
Transformer Architectures
- The problems with recurrent architectures
- Attention-based architectures
- Positional encoding
- The Transformer: attention is all you need
- Time series classification using transformers
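Scaled dot-product attention and sinusoidal positional encoding — the two building blocks listed above — fit in a short NumPy sketch (shapes and the random input are illustrative; a real transformer adds learned projections, multiple heads, and feed-forward layers):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings: sin on even dims, cos on odd dims."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

rng = np.random.default_rng(0)
# Self-attention: queries, keys, and values all come from the same sequence.
X = rng.normal(size=(5, 8)) + positional_encoding(5, 8)
out, w = attention(X, X, X)
print(out.shape, w.sum(axis=-1))  # each row of attention weights sums to 1
```

Because attention looks at all positions at once, the positional encoding is what reintroduces word order — the information a recurrent architecture got for free, at the cost of sequential processing.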
Overview of current popular large language models (LLMs) and related generative AI services:
- ChatGPT
- DALL-E 2
- Bing AI
Running medium-sized LLMs in your own environment:
- Stanford Alpaca
- Facebook Llama
- Transfer learning with your own data in these contexts