Key Concepts in Generative AI with LLMs—Part 1
Learning points and resources from Week 1 of the ‘Generative AI with Large Language Models’ Coursera course by DeepLearning.AI and Amazon Web Services
In collaboration with Amazon Web Services, DeepLearning.AI recently launched the ‘Generative AI with Large Language Models’ course on Coursera, which teaches the fundamentals of how generative AI works and how to deploy it in real-world applications.
This article summarises the key learnings from Week 1, both for my own reference and for readers who want a succinct overview of the essential concepts.
Table of Contents
- Introduction to Generative AI and LLMs
- The Transformer Architecture
- Prompt Engineering
- Generative AI Project Lifecycle
- Computational Challenges of Training LLMs
- Compute-Optimal Models
- Pre-training for Domain Adaptation