Instructor: Katharina von der Wense

What you'll learn

  •   Define feedforward networks, recurrent neural networks, attention, and transformers.
  •   Implement and train feedforward networks, recurrent neural networks, attention, and transformers.
  •   Describe the idea behind transfer learning and frequently used transfer learning algorithms.
  •   Design and implement your own neural network architectures for natural language processing tasks.

There are 4 modules in this course

This course can be taken for academic credit as part of CU Boulder’s MS in Data Science or MS in Computer Science degrees offered on the Coursera platform. These fully accredited graduate degrees offer targeted courses, short 8-week sessions, and pay-as-you-go tuition. Admission is based on performance in three preliminary courses, not academic history. CU degrees on Coursera are ideal for recent graduates or working professionals. Learn more:

  •   MS in Data Science: https://www.coursera.org/degrees/master-of-science-data-science-boulder
  •   MS in Computer Science: https://coursera.org/degrees/ms-computer-science-boulder

Sequence to Sequence Models, Attention, Transformers

Transfer Learning

Large Language Models

© 2025 ementorhub.com. All rights reserved.