
Cutting-Edge Curriculum

Training for future GenAI experts poised to transform the world

As Generative AI continues to evolve, computer scientists will need the most sophisticated and cutting-edge skills to enhance the capabilities of AI for organizations. In the Generative AI and Large Language Models graduate certificate, courses cover three distinct areas of knowledge:

  • Theory - in our Large Language Models: Methods and Applications course, you will learn the practical applications of LLMs, what they are, what they can do, and how they work.
  • Data Representation - in our Multimodal Machine Learning course, you will learn how different language modalities are used in various prediction problems.
  • Scalability - in our Large Language Model Systems course, you will learn how to apply and scale LLMs for your organization.

With the ability to understand the use of large language models, train with massive multimodal data sets, and implement scalable systems, you will be ready to maximize the potential of generative AI, all on your own. See more details about our coursework below.

Curriculum Overview

The online Graduate Certificate in Generative AI & Large Language Models includes 3 graduate-level, credit-bearing courses taught by expert Carnegie Mellon University faculty and features the following course progression:

For January 2025 Start:

Semester       Course
Spring 2025    Large Language Models: Methods and Applications
Summer 2025    Multimodal Machine Learning
Fall 2025      Large Language Model Systems

Each course will appear on your Carnegie Mellon transcript with the grade earned. To earn the certificate, you must successfully complete all three courses in the program. If you are interested in only one course, however, you may complete that course alone, and it will still appear on your transcript with the grade earned.

Course Descriptions:

Large Language Models: Methods and Applications

Course Number: 11-967

Number of Units: 12

This course provides a broad foundation for understanding, working with, and adapting existing tools and technologies in the area of Large Language Models like BERT, T5, GPT, and others.

Throughout this course, you will learn:

  • A range of topics including systems, data, data filtering, training objectives, RLHF/instruction tuning, ethics, policy, evaluation, and other human-facing issues.
  • How transformer architectures work and why they outperform LSTM-based seq2seq models, along with decoding strategies. Through readings and hands-on assignments, you will explore techniques such as pretraining, attention, and prompting.
  • How to apply the skills you've learned in a semester-long course project, making use of locally sourced model instances that offer the opportunity to explore behind the curtain of commercial APIs.
  • How to compare and contrast different models in the LLM ecosystem in order to determine the best model for a given task.
  • How to implement and train a neural language model from scratch in PyTorch (see the first sketch after this list).
  • How to utilize open-source libraries to fine-tune and run inference with popular pre-trained language models (see the second sketch after this list).
  • How to apply LLMs in downstream applications and how decisions made during pre-training affect suitability for tasks.
  • How to design new methodologies to leverage existing large-scale language models in novel ways.
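
To give a flavor of what "from scratch" means here, below is a minimal sketch of a tiny transformer language model trained in PyTorch. It is illustrative only, not course material: the toy training text, model dimensions, and training schedule are all placeholder assumptions.

    # Minimal character-level transformer language model in PyTorch.
    # Illustrative sketch only; data and hyperparameters are placeholders.
    import torch
    import torch.nn as nn

    text = "hello world, hello large language models"
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    data = torch.tensor([stoi[c] for c in text])

    class TinyLM(nn.Module):
        def __init__(self, vocab_size, d_model=32, nhead=4, max_len=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.pos = nn.Embedding(max_len, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=64, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, vocab_size)

        def forward(self, x):
            T = x.size(1)
            h = self.embed(x) + self.pos(torch.arange(T, device=x.device))
            # Causal mask: True entries are blocked, so each position attends only to the past.
            causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1)
            return self.head(self.encoder(h, mask=causal))  # next-token logits per position

    model = TinyLM(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Train to predict each next character from the preceding ones.
    x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
    for step in range(200):
        loss = loss_fn(model(x).reshape(-1, len(chars)), y.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

And for the open-source tooling, a minimal inference example using the Hugging Face transformers library; the "gpt2" checkpoint and sampling settings are arbitrary choices for illustration, not a course requirement:

    # Minimal inference with an open-source pre-trained LM via Hugging Face transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("Large language models are", return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))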

Please note: In order to complete homework assignments and activities, students will need to sign up for Amazon Web Services (AWS) or an equivalent service that offers access to A10G or similar GPUs. The AWS cost to complete assignments will range from $150 to $300 and will vary based on your usage. In addition, students will need to sign up for the OpenAI API. The cost to complete the assignments via OpenAI will be up to $25. Instructions for accessing both services will be provided when the class begins.

Multimodal Machine Learning

Course Number: 11-977

Number of Units: 12

In this course, you will learn the fundamental mathematical concepts in machine learning and deep learning that are relevant to the five main challenges in multimodal machine learning: 

  1. Multimodal representation learning
  2. Translation and mapping
  3. Modality alignment
  4. Multimodal fusion
  5. Co-learning 

The mathematical concepts you will learn include, but are not limited to, multimodal auto-encoders, deep canonical correlation analysis, multiple kernel learning, attention models, and multimodal recurrent neural networks.
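
As a concrete illustration of the first of these concepts, here is a minimal sketch of a multimodal auto-encoder in PyTorch: two modality-specific encoders map into a shared latent space, and two decoders reconstruct each modality from it. The dimensions and the fusion-by-averaging choice are illustrative assumptions, not the course's prescribed design.

    # Minimal multimodal auto-encoder sketch in PyTorch.
    # Two modalities (e.g., image and text features) share one latent space.
    # All dimensions are illustrative placeholders.
    import torch
    import torch.nn as nn

    class MultimodalAutoencoder(nn.Module):
        def __init__(self, dim_a=128, dim_b=64, dim_z=32):
            super().__init__()
            self.enc_a = nn.Sequential(nn.Linear(dim_a, dim_z), nn.ReLU())
            self.enc_b = nn.Sequential(nn.Linear(dim_b, dim_z), nn.ReLU())
            self.dec_a = nn.Linear(dim_z, dim_a)
            self.dec_b = nn.Linear(dim_z, dim_b)

        def forward(self, a, b):
            # Fuse by averaging the two modality embeddings (one simple choice among many).
            z = 0.5 * (self.enc_a(a) + self.enc_b(b))
            return self.dec_a(z), self.dec_b(z)

    model = MultimodalAutoencoder()
    a, b = torch.randn(8, 128), torch.randn(8, 64)
    recon_a, recon_b = model(a, b)
    # Reconstructing both modalities from the fused code encourages a shared representation.
    loss = nn.functional.mse_loss(recon_a, a) + nn.functional.mse_loss(recon_b, b)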

You will also review recent papers describing state-of-the-art probabilistic models and computational algorithms for multimodal machine learning and discuss the current and upcoming challenges. Finally, you will study recent applications of multimodal machine learning including multimodal affect recognition, image and video captioning and cross-modal multimedia retrieval.

Please note: The Multimodal Machine Learning course may require Amazon Web Services (AWS) and/or OpenAI or other services to complete assignments with fees up to $300 (subject to change). More details will be available as you get closer to the course start date.

Large Language Model Systems

Course Number: 11-968

Number of Units: 12

LLMs are often very large and require increasingly large data sets to train, which means developing scalable systems is critical for advancing AI. In this course, you will learn the essential skills for designing and implementing scalable LLM systems.

Throughout the course, you will:

  • Learn the approaches for training, serving, fine-tuning, and evaluating LLMs from the systems perspective.
  • Gain familiarity with the sophisticated engineering of modern hardware and software stacks needed to accommodate that scale.
  • Acquire essential skills for designing and implementing LLM systems, including:
    • Algorithms and system techniques to efficiently train LLMs on massive data sets
    • Efficient embedding storage and retrieval (a minimal sketch follows this list)
    • Data-efficient fine-tuning
    • Communication-efficient algorithms
    • Efficient implementation of reinforcement learning from human feedback
    • Acceleration on GPUs and other hardware
    • Model compression for deployment
    • Online maintenance
  • Learn about the latest advances in LLM systems regarding machine learning, natural language processing, and systems research.
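
To make one of those items concrete, the sketch below illustrates embedding storage and retrieval with a brute-force cosine-similarity search in PyTorch. Production systems would typically use an approximate nearest-neighbor index instead, and the corpus size and dimensions here are placeholder assumptions.

    # Brute-force embedding retrieval sketch in PyTorch.
    # Real systems usually rely on approximate nearest-neighbor indexes;
    # the store below is random placeholder data.
    import torch
    import torch.nn.functional as F

    num_docs, dim = 10_000, 256
    doc_embeddings = F.normalize(torch.randn(num_docs, dim), dim=-1)  # pre-normalized store

    def top_k(query: torch.Tensor, k: int = 5):
        """Return cosine scores and indices of the k most similar stored embeddings."""
        q = F.normalize(query, dim=-1)
        scores = doc_embeddings @ q  # cosine similarity via dot product
        return torch.topk(scores, k)

    scores, indices = top_k(torch.randn(dim))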

Please note: The Large Language Model Systems course may require Amazon Web Services (AWS) and/or OpenAI or other services to complete assignments with fees up to $300 (subject to change). More details will be available as you get closer to the course start date.

Meet Our World-Class Faculty


Carolyn Rosé

Professor, Language Technologies & Human-Computer Interaction

Education: Ph.D., Carnegie Mellon University

Research Focus: Dr. Rosé bridges deep theoretical insights from theories of language and interaction with computational modeling paradigms such as deep learning and LLMs. She applies this understanding of language and interaction to the design and orchestration of ensembles of data representations with the needed affordances, along with architectural elements that introduce inductive biases at the algorithmic level.


Daphne Ippolito

Assistant Professor, Language Technologies

Education: Ph.D., University of Pennsylvania

Research Focus: Dr. Ippolito's research focuses on the tradeoffs and limitations of generating text with neural language models, as well as strategies for evaluating natural language generation systems. She also researches how to incorporate AI-in-the-loop language generation into assistive tools for writers. Before starting her role at Carnegie Mellon, Dr. Ippolito worked as a Research Scientist at Google Brain.


Lei Li

Assistant Professor, Language Technologies

Education: Ph.D., Carnegie Mellon University

Research Focus: Through his research, Dr. Li explores topics associated with large language models (e.g. efficient large language model systems), multilingual natural language processing (e.g. speech translation), and AI for science. Before joining Carnegie Mellon, Dr. Li worked as a principal researcher at Baidu's Institute of Deep Learning in Silicon Valley and as the founding director of ByteDance's AI Lab.


Louis-Philippe Morency

Associate Professor, Language Technologies

Education: Ph.D., Massachusetts Institute of Technology

Research Focus: Dr. Morency leads the Multimodal Communication and Machine Learning Laboratory which focuses on building the computational foundations to help computers analyze, recognize and predict subtle human communicative behaviors during social interactions. This lab integrates expertise from fields like machine learning, computer vision, natural language processing, and social psychology.


Yonatan Bisk

Assistant Professor, Language Technologies

Education: Ph.D., University of Illinois at Urbana-Champaign

Research Focus: At Carnegie Mellon, Dr. Bisk leads the CLAW Lab, which includes members from the Language Technologies Institute, Machine Learning Department and Robotics Institute. The lab's research assumes that perception, embodiment, and language cannot exist without one another. Overall, they work to uncover the latent structure of natural language, model the semantics of the physical world, and connect language to perception and control.


Daniel Fried

Assistant Professor, Language Technologies

Education: Ph.D., UC Berkeley

Research Focus: Dr. Fried's lab focuses on building language interfaces that can help people with real-world tasks. They aim to make programming more communicative by creating models, methods, and datasets for producing code from language. Much of their work also takes a multi-agent system perspective on communication, showing that natural language processing agents can be improved by modeling the intents and interpretations people have when they use language.


Graham Neubig

Associate Professor, Language Technologies & Machine Learning

Education: Ph.D., Kyoto University

Research Focus: Dr. Neubig's research focuses on language and its role in human communication, with a long-term goal of breaking down barriers in human-human or human-machine communication through the development of natural language processing (NLP) technologies. This includes technology for machine translation, which helps break down barriers in communication for people who speak different languages, and natural language understanding, which helps computers understand and respond to human language.


The Graduate Certificate in Generative AI & Large Language Models is offered by the Language Technologies Institute (LTI) at Carnegie Mellon University, which is housed within the highly ranked School of Computer Science (SCS). SCS faculty are esteemed in their field, and many of them have collaborated on critical projects that have paved the way for future discoveries in artificial intelligence. Check out some of their work below:

Researchers from Carnegie Mellon's Robotics Institute completed a long-distance autonomous driving test in 1995 called No Hands Across America.

In 2001, SCS Founders University Professor Takeo Kanade and his team created a camera system called EyeVision for Super Bowl XXXV.

In 2007, Faculty Emeritus William “Red” Whittaker led Carnegie Mellon's Tartan Racing team to victory in the DARPA Urban Challenge.

Assistant Research Professor László Jeni used computer vision technology to create a tool that can help people with visual impairment.

The Building Blocks of Our Curriculum

Practical Problem Solving

As a student in Carnegie Mellon's Generative AI online graduate certificate, you will not only master the fundamentals of large language models but also learn how to practically apply this technology in the workplace. Understanding large language models, how they work, and how to build them are vital for success, but knowing how to leverage this knowledge while thinking critically about real-world problems is equally important.

Real-World, Industry-Focused Classes

In this program, you will learn how to approach problems from experts who have been there, done that. You will learn to think strategically about challenges you encounter on the job by considering questions like, ‘What resources are available to me?’ or ‘What limitations do I have?’ Your critical thinking skills, combined with your technical prowess, will empower you to design the most cutting-edge solutions for your organization.

Thoughtfully Designed Coursework

The coursework for this certificate is deliberately designed to highlight Generative AI and large language models from different angles. Each course focuses on one of three distinct concepts: theory, data representations and their affordances, and system scalability. By completing three complementary and complex courses, you will be ready to innovate new and exciting designs that will take your organization to the next level.