Generative AI Engineering and Fine-Tuning Transformers

This course is part of multiple programs.

Instructors: Joseph Santarcangelo and 2 others


What you'll learn

  •   Sought-after, job-ready skills businesses need for working with transformer-based LLMs in generative AI engineering
  •   How to perform parameter-efficient fine-tuning (PEFT) using methods like LoRA and QLoRA to optimize model training
  •   How to use pretrained transformer models for language tasks and fine-tune them for specific downstream applications
  •   How to load models, run inference, and train models using the Hugging Face and PyTorch frameworks
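The workflow named above — loading a model and running inference in PyTorch — can be sketched with a toy Transformer encoder. This is a minimal, self-contained illustration with made-up sizes, not code from the course's labs:

```python
import torch
import torch.nn as nn

# Toy sizes chosen for illustration only.
torch.manual_seed(0)
vocab_size, d_model, seq_len, batch = 100, 32, 10, 2

embedding = nn.Embedding(vocab_size, d_model)
encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=4, dim_feedforward=64, batch_first=True
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

tokens = torch.randint(0, vocab_size, (batch, seq_len))  # fake token IDs
with torch.no_grad():                                    # inference only: no gradients
    hidden = encoder(embedding(tokens))

print(hidden.shape)  # one contextual vector per token: (batch, seq_len, d_model)
```

In practice the course uses pretrained checkpoints (e.g., via Hugging Face) rather than randomly initialized weights, but the forward-pass mechanics are the same.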
Skills you'll gain

  •   Large Language Modeling
  •   PyTorch (Machine Learning Library)
  •   Deep Learning
  •   Prompt Engineering
  •   Generative AI
  •   Performance Tuning
  •   Natural Language Processing
There are 2 modules in this course

    In this course, you’ll explore transformers and key model frameworks and platforms, including Hugging Face and PyTorch. You’ll begin with a foundational framework for optimizing LLMs and quickly advance to fine-tuning generative AI models. You’ll also learn advanced techniques such as parameter-efficient fine-tuning (PEFT), low-rank adaptation (LoRA), quantized LoRA (QLoRA), and prompting. The hands-on labs will give you valuable, practical experience, including loading, pretraining, and fine-tuning models using industry-standard tools. These skills are directly applicable in real-world AI roles and are great for showcasing in interviews. If you’re ready to take your AI career to the next level and strengthen your resume with in-demand gen AI competencies, enroll today and start applying your new skills in just one week!
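The core idea behind LoRA, one of the PEFT techniques listed above, can be shown in a few lines of PyTorch: freeze the pretrained weight and train only a low-rank update. The class and parameter names below are hypothetical, a hand-rolled sketch rather than the `peft` library's API:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Sketch of a LoRA adapter around a frozen Linear layer.

    The base weight W is frozen; training updates only the low-rank
    factors A (r x in) and B (out x r), so the effective weight is
    W + (alpha / r) * B @ A.
    """
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Base path plus scaled low-rank update; B starts at zero, so the
        # wrapped layer initially behaves exactly like the pretrained one.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(16, 16), r=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(trainable, total)  # 128 trainable vs 400 total parameters
```

Because only `lora_A` and `lora_B` receive gradients, the optimizer state and gradient memory shrink accordingly; QLoRA pushes this further by also quantizing the frozen base weights.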

    Parameter-Efficient Fine-Tuning (PEFT)


    © 2025 ementorhub.com. All rights reserved.