Key Concepts in Generative AI with LLMs—Part 1

Learning points and resources from Week 1 of the ‘Generative AI with Large Language Models’ Coursera course by DeepLearning.AI and Amazon Web Services

Zeya LT
10 min read · Oct 11, 2023
Photo by Kelvin Ang on Unsplash

In collaboration with Amazon Web Services, DeepLearning.AI recently developed the ‘Generative AI with Large Language Models’ course on Coursera, aimed at equipping anyone with the fundamentals of how generative AI works and how to deploy it in real-world applications.

This article is my attempt to summarise the key learnings, not just for my own reference but also for readers like you, so that you can grasp the essential concepts in a succinct manner.

Table of Contents

  1. Introduction to Generative AI and LLMs
  2. The Transformer Architecture
  3. Prompt Engineering
  4. Generative AI Project Lifecycle
  5. Computational Challenges of Training LLMs
  6. Compute-Optimal Models
  7. Pre-training for Domain Adaptation

1. Introduction to Generative AI and LLMs
