Generative AI Language Modeling with Transformers

This course is part of multiple programs.

Instructors: Joseph Santarcangelo and 2 others

What you'll learn

  •   Explain the role of attention mechanisms in transformer models for capturing contextual relationships in text
  •   Describe the differences in language modeling approaches between decoder-based models like GPT and encoder-based models like BERT
  •   Implement key components of transformer models, including positional encoding, attention mechanisms, and masking, using PyTorch (see the sketch after this list)
  •   Apply transformer-based models for real-world NLP tasks, such as text classification and language translation, using PyTorch and Hugging Face tools
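
To give a taste of the implementation objectives above, here is a minimal, illustrative PyTorch sketch of sinusoidal positional encoding and causally masked self-attention. The names, shapes, and sizes are arbitrary choices for illustration; this is not taken from the course labs.

    # Illustrative sketch (not from the course labs): sinusoidal positional
    # encoding plus causally masked scaled dot-product self-attention.
    import math
    import torch

    def positional_encoding(seq_len, d_model):
        # Sinusoidal encoding from "Attention Is All You Need".
        position = torch.arange(seq_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(seq_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        return pe

    def causal_self_attention(x):
        # Scaled dot-product attention where each token attends only to
        # itself and earlier tokens (the mask used in GPT-style decoders).
        d_k = x.size(-1)
        scores = x @ x.transpose(-2, -1) / math.sqrt(d_k)
        seq_len = scores.size(-2)
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))
        return torch.softmax(scores, dim=-1) @ x

    # Toy usage: a batch of one sequence, 8 tokens, 16-dim embeddings.
    x = torch.randn(1, 8, 16) + positional_encoding(8, 16)
    print(causal_self_attention(x).shape)  # torch.Size([1, 8, 16])

A real multi-head layer adds learned query/key/value projections and splits the embedding across heads; this sketch keeps only the positional-encoding, scaling, and masking logic.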
Skills you'll gain

  •   PyTorch (Machine Learning Library)
  •   Text Mining
  •   Deep Learning
  •   Applied Machine Learning
  •   Generative AI
  •   Natural Language Processing
  •   Machine Learning Methods
  •   Large Language Modeling
There are 2 modules in this course

The course covers multi-head attention, self-attention, and causal language modeling with GPT for tasks like text generation and translation. You will gain hands-on experience implementing transformer models in PyTorch, including pretraining strategies such as masked language modeling (MLM) and next sentence prediction (NSP). Through guided labs, you'll apply encoder and decoder models to real-world scenarios. This course is designed for learners interested in generative AI engineering and requires prior knowledge of Python, PyTorch, and machine learning. Enroll now to build your skills in NLP with transformers!
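
The two modeling styles contrasted above can be tried in a few lines with Hugging Face pipelines. The sketch below is illustrative only: the checkpoints bert-base-uncased and gpt2 are common public defaults, not necessarily the ones used in the course labs.

    # Hedged sketch: contrasts encoder-style masked LM (BERT) with
    # decoder-style causal LM (GPT). Checkpoints are illustrative choices.
    from transformers import pipeline

    # Encoder (BERT): masked language modeling fills in a blanked-out token.
    fill = pipeline("fill-mask", model="bert-base-uncased")
    print(fill("Transformers use [MASK] to weigh context.")[0]["token_str"])

    # Decoder (GPT): causal language modeling continues the prompt left to right.
    generate = pipeline("text-generation", model="gpt2")
    print(generate("Attention lets a model", max_new_tokens=20)[0]["generated_text"])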

Advanced Concepts of Transformer Architecture
