Artificial Intelligence Outline
Course Number: PYTH-158
Duration: 4 days (26 hours)
Format: Live, hands-on
Deep Learning with Python Training Overview
This hands-on, live Deep Learning with Python training course builds on our Comprehensive Data Science with Python class and teaches attendees the fundamentals of Deep Learning and how to implement artificial neural network (ANN) applications using Keras and TensorFlow.
Location and Pricing
Accelebrate offers instructor-led enterprise training for groups of 3 or more online or at your site. Most Accelebrate classes can be flexibly scheduled for your group, including delivery in half-day segments across a week or set of weeks. To receive a customized proposal and price quote for private corporate training on-site or online, please contact us.
In addition, some Programming courses are available as live, online classes for individuals.
Objectives
- Learn the fundamental theory behind neural networks
- Model an arbitrary function using an artificial neural network (ANN)
- Practice interpreting loss metrics and convergence conditions
- Apply a neural net to a regression problem
- Understand regularization within the context of ANNs
- Construct 2D convolutional image classification architectures
- Perform multi-class classification
- Apply Deep Learning to sequential data using recurrent architectures (RNNs, LSTMs, and GRUs)
- Implement dropout and LASSO as network regularization strategies
- Apply Deep Learning to a classification problem
- Implement image processing methods in Python and Keras
- Extend feed-forward network architectures to convolutional layers
- Apply Deep Learning to time series forecasting applications
- Automate ANN architecture selection using AutoKeras
- Understand the concept of Latent Semantic Representations and word embeddings
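To make the first few objectives concrete (modeling a known function with an ANN, interpreting a loss metric, and watching it converge), here is a minimal, library-free sketch in NumPy. The class itself uses Keras and TensorFlow; NumPy is used here only so the forward pass, MSE loss, and back-propagation updates are fully visible. The network, function, and hyperparameters below are illustrative choices, not part of the course materials.

```python
import numpy as np

# Fit a one-hidden-layer network to a known function (sin) with
# plain gradient descent, so the loss can be watched converging.
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

hidden = 16
W1 = rng.normal(0, 0.5, (1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, (hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for epoch in range(2000):
    # Forward pass: tanh hidden layer, linear output.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    loss = np.mean(err ** 2)          # MSE, the loss metric to monitor

    # Backward pass (chain rule), averaged over the batch.
    n = len(X)
    g_pred = 2 * err / n
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_z = g_h * (1 - h ** 2)          # derivative of tanh
    g_W1 = X.T @ g_z
    g_b1 = g_z.sum(axis=0)

    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"final MSE: {loss:.4f}")
```

In Keras, the same model is a two-layer `Sequential` compiled with an MSE loss, and the per-epoch loss printed by `fit()` plays the role of the `loss` variable tracked above.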
Prerequisites
All attendees should have completed the Comprehensive Data Science with Python class or have equivalent experience.
Outline
- Why artificial neural networks? Advantages of ANNs
- Understanding the essential concepts
- Activation functions, optimizers, back-propagation
- Components and architectures of artificial neural networks
- Evaluate the performance of neural networks on a known function
- Define and monitor convergence of a neural network
- Model selection
- Scoring new datasets with a model
- Management and preparation of image data for Deep Learning models
- The dimensionality of image data
- Handling image metadata
- Conversion of images to NumPy arrays
- Python Imaging Library (PIL) and skimage
- Keras’ load_img() function
- Image standardization and resampling
- Augmentation strategies for image data
- Image data is multidimensional
- Overview of convolutional architectures
- Convolution layers act as filters
- Pooling layers reduce computation
- Data augmentation through image transformation for smaller datasets
- Image transformation using the Pillow library
- Applying a model to a multi-class labeled dataset
- Evaluating a confusion matrix for multiple classes
- Identify limitations of feed-forward ANN architectures for sequential data
- Modify model architecture to include recurrent (RNN) components
- Preprocessing time series data for ingestion into RNN models
- Examine improvements to RNNs: The LSTM and GRU networks
- Time series forecasting with recurrent architectures
- Time series forecasting with 1D convolutional architectures
- Text manipulation with TensorFlow
- Categorical representations and word embeddings
- Text embeddings as layers in an ANN
- Word2vec
- Exploiting pre-trained word embedding models
- Visualizing semantic relationships between words using t-SNE
- Exploiting pre-trained models (VGG16) for image classification
- Selecting layers to unlock for specific applications
- Transfer learning and fine tuning
- What is an autoencoder?
- Building a simple autoencoder from a fully connected layer
- Sparse autoencoders
- Deep convolutional autoencoders
- Applications of autoencoders to image denoising
- Sequential autoencoders
- Variational autoencoders
- Adversarial examples
- Generative and discriminative networks
- Building a simple generative adversarial network
- Generating images with a GAN
- The problems with recurrent architectures for sequential data
- Attention-based architectures
- Positional encoding
- The Transformer: attention is all you need
- Time series classification using transformers
- GPT-3 and the future of natural language generation
- OpenAI Codex and the future of programmatic code generation
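Two of the outline's convolutional ideas, "convolution layers act as filters" and "pooling layers reduce computation," can be sketched without Keras. The NumPy example below is an illustrative stand-in, not course material: a hand-built valid (no-padding) convolution applies a vertical-edge filter to a toy image, and a 2x2 max pool then shrinks the resulting feature map, which is what reduces downstream computation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no padding) 2D cross-correlation, the operation Conv2D computes."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def max_pool2d(x, size=2):
    """Non-overlapping max pooling; halves each spatial dimension."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A toy 8x8 "image": left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# A Sobel-style vertical-edge filter: it responds only where
# intensity changes from left to right, i.e. it acts as a filter.
edge_kernel = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

feature_map = conv2d(img, edge_kernel)   # strong response along the edge only
pooled = max_pool2d(feature_map)         # 6x6 -> 3x3: fewer activations downstream
```

A learned Conv2D layer works the same way, except the kernel values are trained rather than hand-chosen, and many kernels run in parallel to produce a stack of feature maps.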

Training Materials
All Deep Learning training students receive comprehensive courseware.
Software Requirements
- Windows, Mac, or Linux with at least 8 GB RAM
- A current version of Anaconda for Python 3.x
- Related lab files that Accelebrate will provide