Large self-supervised (pre-trained) models (such as Large Language Models or LLMs) have transformed various data-driven fields, such as natural language processing (NLP). In this course, students will gain a thorough introduction to self-supervised learning techniques for NLP applications. Through lectures, assignments, and a final project, students will learn the necessary skills to design, implement, and understand their own self-supervised neural network models using the PyTorch framework.

Note: This course is different from 601.771 (offered in the fall semesters), which focuses on advanced topics in recent papers and is geared toward graduate students who want to specialize in the latest developments in self-supervised models.

Prerequisites: (1) Data Structures (601.226). (2) Background in natural language processing and machine learning, or completion of a relevant course such as Machine Learning (CS 475/675), Machine Learning: Deep Learning (CS 482/682), Natural Language Processing (CS 465/665), or Machine Translation (CS 468/668). (3) All class assignments will be in Python/PyTorch. If you don’t know Python or PyTorch but have experience with other programming languages (Java, C++, etc.), you can probably pick them up quickly. (4) Calculus and linear algebra: you should be comfortable with matrix operations (matrix multiplication, transpose, inverse, dot product) and with gradients. (5) Probability: basic probability properties (conditionals, marginals, mean, standard deviation) and common distributions (normal, categorical, etc.).
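
If you would like a quick self-check of the math and PyTorch background above, the short sketch below (provided for convenience; it is not part of any assignment, and every tensor shape in it is made up) runs through the matrix operations, gradients, and probability facts you should feel comfortable with.

    import torch

    # A quick self-check of the prerequisite material: if every line here is
    # unsurprising, your math/PyTorch background for the course should be adequate.

    A = torch.randn(3, 4)          # random 3x4 matrix
    B = torch.randn(4, 2)          # random 4x2 matrix

    C = A @ B                      # matrix multiplication -> shape (3, 2)
    At = A.T                       # transpose -> shape (4, 3)
    dot = torch.dot(A[0], A[1])    # dot product of two rows of A

    # Gradients: the derivative of sum(x^2) with respect to x is 2x.
    x = torch.randn(5, requires_grad=True)
    y = (x ** 2).sum()
    y.backward()
    assert torch.allclose(x.grad, 2 * x.detach())

    # Basic probability: a categorical distribution sums to one, and samples
    # from a normal distribution have (approximately) the stated mean and std.
    probs = torch.softmax(torch.randn(10), dim=0)
    assert torch.isclose(probs.sum(), torch.tensor(1.0))
    samples = torch.normal(mean=0.0, std=1.0, size=(100_000,))
    print(samples.mean().item(), samples.std().item())  # roughly 0.0 and 1.0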

Relevant Courses at Hopkins: This course has some overlap with "Natural Language Processing" (EN.601.465/665) and "Artificial Agents" (EN.601.470/670), though the courses have different focuses.

Logistics




Assignments

The homework is your opportunity to practice doing the thing. The lectures and office hours hopefully provide good intuition, motivation, and justification for the skills we want you to develop, but the best way to develop those skills is to try to solve the problems yourself. The practice is far more important than the solution.

The course has seven roughly weekly assignments, which will improve both your theoretical understanding and your practical skills. All assignments contain both written questions and programming parts (mainly in Python). They will be released on this website, and submissions should be uploaded to Gradescope.

Here is a tentative list of topics for the assignments:

# Focus
#1 Algebra, calculus, probability, optimization (gradient descent) recap, understanding the softmax function, loss functions (cross-entropy, MSE, etc.), a machine learning problem (classification, evaluation)
#2 PyTorch introduction, automatic differentiation, computation graph, basic feedforward network and backpropagation
#3 Neural language model with feedforward network, evaluating language modeling, count-based models, decoding language models
#4 Recurrent neural language model and evaluation; Transformers
#5 Fine-tuning LMs, prompting language models, distributed tuning
#6 Prompt engineering, in-context learning; Retrieval-augmented language models
#7 Alignment with instruction-tuning, alignment with [human] feedback
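
To give a sense of the level the early assignments target, here is a minimal PyTorch sketch touching several of the building blocks named in the table above (cross-entropy loss with an implicit softmax, gradient descent, a small classification problem). It is only an illustration with made-up data and layer sizes, not a solution to, or starter code for, any homework.

    import torch
    import torch.nn as nn

    # Illustrative only: a tiny classifier trained with cross-entropy and
    # gradient descent, roughly the territory of the first two assignments.
    # The data, layer sizes, and hyperparameters below are all made up.

    torch.manual_seed(0)
    X = torch.randn(256, 20)                 # 256 fake examples, 20 features each
    y = torch.randint(0, 3, (256,))          # labels for 3 fake classes

    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))
    loss_fn = nn.CrossEntropyLoss()          # applies log-softmax to the logits internally
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for step in range(200):
        logits = model(X)                    # forward pass builds the computation graph
        loss = loss_fn(logits, y)            # cross-entropy between logits and labels
        optimizer.zero_grad()
        loss.backward()                      # backpropagation via automatic differentiation
        optimizer.step()                     # one gradient-descent update

    accuracy = (model(X).argmax(dim=1) == y).float().mean()
    print(f"final loss {loss.item():.3f}, train accuracy {accuracy.item():.2f}")

Because nn.CrossEntropyLoss combines a log-softmax with the negative log-likelihood, the model outputs raw logits rather than probabilities.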

Midterm exams/quizzes

There will be in-class midterms. The midterm exams will be paper-based and held during the usual class time. These exams aim to evaluate students' progress and understanding of the ideas presented in the first two-thirds of the semester, which serve as a foundation for your project and for the material covered in the final weeks of the class. The exams will assess students' mastery of the topics discussed in the lectures and weekly homework assignments. They will also provide feedback to both the student and the instructor and identify areas that need improvement, informing further learning and teaching.


Final project

The objective of the final project is to make use of what you have learned during this course to solve a hard problem.

The final project milestones include: (1) a project proposal, (2) a midway report, (3) a progress-update presentation, (4) a final report, and (5) a poster summarizing the technical aspects of the project. See the course calendar for the due dates.



Content Schedule

Each session will involve an instructor-led presentation on a focused topic in self-supervised models. There will be weekly assignments related to the class presentations, midterm exams, and a final project.

The current class schedule is below (subject to change):

Date Topic Course Materials Events Deadlines
#1 - Tue Jan 23 Course introduction:
  • Course overview
  • Plan and expectations
[slides: pptx, pdf]
Suggested Reading: Dive into Deep Learning: Linear Algebra in PyTorch
Additional Reading:
  1. Python / Numpy Tutorial (with Jupyter and Colab)
  2. Optimization: Stochastic Gradient Descent
HW1 is released! [tex]
#2 - Thu Jan 25 Language modeling:
  • Definitions and history,
  • Counting and n-grams,
  • Measuring LM quality,
  • Language modeling as a learning problem
[slides: pptx, pdf]
Suggested Reading: Jurafsky & Martin Chapter 3
Additional Reading:
  1. Prediction and Entropy of Printed English (the foundational paper by Shannon on language compression and uncertainty)
  2. Google N-grams (very insightful trends over time)
#3 - Tue Jan 30 Feedforward networks:
  • Definitions
  • Brief history
  • Background (algebra + optimization)
  • Analytical Backprop
[slides: pptx, pdf]
Suggested Reading: Jurafsky & Martin Chapter 7
Additional Reading:
  1. Neural Networks: the Architecture
  2. Dive into Deep Learning: Multilayer Perceptron
  3. PyTorch documentation
  4. These tutorials do a good job of introducing PyTorch.
HW2 is released! [tex] HW1 due
#4 - Thu Feb 1 Feedforward networks:
  • Algebra recap
  • Analytical backprop
  • Backprop in practice
[slides: pptx, pdf]
Suggested Reading: Jurafsky & Martin Chapter 7
Additional Reading:
  1. Neural Networks: Backpropagation
  2. Neural Networks: Training and empirical tips
  3. Neural Networks: data and loss
  4. Computing Neural Network Gradients
  5. Learning representations by back-propagating errors (the original backpropagation paper)
#5 - Tue Feb 6 Feedforward networks:
  • Backprop in practice
  • Practical tips
[slides: pptx, pdf]
Suggested Reading: Deep Learning Tuning Playbook
Additional Reading:
  1. Dive into Deep Learning: Practitioners Guide to Neural Networks
  2. Learning long-term dependencies with gradient descent is difficult (one of the original vanishing gradient papers)
HW3 is released! [tex] HW2 due
#6 - Thu Feb 8 Feeding text to neural networks:
  • Tokenization and subwords
  • Fixed-window MLP LMs
[slides: pptx, pdf]
Suggested Reading: Revisiting Simple Neural Probabilistic Language Models
Additional Reading:
  1. Huggingface tutorials on Tokenization
#7 - Tue Feb 13 Quiz 1:
  • Topics: everything discussed in class until the beginning of class #6
HW4 is released! [tex] HW3 due
#8 - Thu Feb 15 Recurrent Neural LMs:
  • Introducing RNNs
  • Training RNNs
  • RNNs for natural language and language modeling
  • RNNs: Pros and Cons
  • Sampling from LMs
  • Pre-training RNNs
[slides: pptx, pdf]
Suggested Reading: The Curious Case of Neural Text Degeneration
Additional Reading:
  1. CS224N course notes on RNNs
  2. Dive into Deep Learning: Recurrent Neural Networks
  3. The Unreasonable Effectiveness of Recurrent Neural Networks (blog post overview)
  4. Deep contextualized word representations (ELMo paper)
#9 - Tue Feb 20 Recurrent Neural LMs:
  • Sampling from LMs
  • Bonus: Pre-training RNNs
[slides: pptx, pdf]
Transformer LMs:
  • Self-attention (see the minimal PyTorch sketch after the schedule)
  • Transformer LMs
  • Positional embeddings
[slides: pptx, pdf]
Suggested Reading: Attention Is All You Need
Additional Reading:
  1. Dive into Deep Learning: Attention Mechanism
  2. The Illustrated Transformer
  3. The Annotated Transformer
HW5 is released! [tex] HW4 due
#10 - Thu Feb 22 Transformer LMs:
  • Efficiency considerations
  • Architectural variants
  • Notable models
[slides: pptx, pdf]
Suggested Reading: LLAMA 2: Open Foundation and Fine-Tuned Chat Models
Additional Reading:
  1. The Illustrated BERT, ELMo, and co
  2. The Illustrated GPT-2
#11 - Tue Feb 27 Transformer LMs:
  • Notable models
  • Training tips
[slides: pptx, pdf]
Suggested Reading: Language Models are Few-Shot Learners (GPT3 paper)
#12 - Thu Feb 29 Adapting LMs:
  • Adaptation as fine-tuning
  • Parameter-efficient tuning
[slides: pptx, pdf]
Suggested Reading: The Power of Scale for Parameter-Efficient Prompt Tuning
Additional Reading:
  1. Prefix-Tuning: Optimizing Continuous Prompts for Generation
  2. Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts
HW6 is released! [tex] HW5 due
#13 - Tue Mar 5 Adapting LMs:
  • Adaptation as in-context learning
  • ICL: Making sense of it
  • Prompt engineering
  • Multi-step prompting
  • Failures of ICL
[slides: pptx, pdf]
Suggested Reading: Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
#14 - Thu Mar 7 Alignment of LMs:
  • Alignment: definitions
  • Instruction-tuning
[slides: pptx, pdf]
Suggested Reading: Scaling Instruction-Finetuned Language Models (FLAN paper)
Additional Reading:
  1. Generalization via Declarative Instructions on 1600+ NLP Tasks
HW6 due
#15 - Tue Mar 12 Introducing final projects:
  • Defining final projects
  • Tips for a successful project
[slides: pptx, pdf]
Alignment of LMs:
  • RLHF and variants
[slides: pptx, pdf]
Suggested Reading: Training language models to follow instructions with human feedback (GPT3 + RLHF paper)
Additional Reading:
  1. Illustrating Reinforcement Learning from Human Feedback
  2. Learning to summarize from human feedback
  3. Deep reinforcement learning from human preferences (an early RLHF paper)
#16 - Thu Mar 14 Quiz 2:
  • Topics: everything discussed in class until the beginning of class #15
#17 - Tue Mar 19 No Class - Spring Break
#18 - Thu Mar 21 No Class - Spring Break
#19 - Tue Mar 26 Alignment of LMs:
  • Alignment: failures/open questions
  • Simplifying RLHF
  • Alignment with self-generated instructions
  • Value alignment
[slides: pptx, pdf]
Suggested Reading: Direct Preference Optimization: Your Language Model is Secretly a Reward Model
Additional Reading:
  1. Concrete Problems in AI Safety
  2. Scaling Laws for Reward Model Overoptimization
  3. Self-Instruct: Aligning Language Models with Self-Generated Instructions
HW7 released! [tex]
#20 - Thu Mar 28 Feeding lots of things to LMs:
  • Delving into positional encoding
  • Length generalization
[slides: pptx, pdf]
Suggested Reading: Transformer Language Models without Positional Encodings Still Learn Positional Information
Additional Reading:
  1. Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
  2. The Impact of Positional Encoding on Length Generalization in Transformers
Apr 1 Project proposals deadline
#21 - Tue Apr 2 Feeding lots of things to LMs:
  • Retrieval-augmentation
[slides: pptx, pdf]
Connecting language to outside world:
  • Connecting vision and language
[slides: pptx, pdf]
Suggested Reading: Improving language models by retrieving from trillions of tokens
Additional Reading:
  1. REALM: Retrieval-Augmented Language Model Pre-Training
  2. An Efficient Memory-Augmented Transformer for Knowledge-Intensive NLP Tasks
  3. When Not to Trust Language Models: Investigating Effectiveness and Limitations of Parametric and Non-Parametric Memories.
  4. Scaling Laws for Neural Language Models
#22 - Thu Apr 4 Connecting language to outside world:
  • Connecting vision and language
  • Generative vision-language
[slides: pptx, pdf]
Suggested Reading: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding
HW7 due
#23 - Tue Apr 9 Connecting language to outside world:
  • Transformers for Audio/speech
  • LMs for coding
  • LMs and grounded actions
  • Open questions
[slides: pptx, pdf]
Suggested Reading: Evaluating Large Language Models Trained on Code
Additional Reading:
  1. Do As I Can, Not As I Say: Grounding Language in Robotic Affordances
  2. PaLM-E: An Embodied Multimodal Language Model
#24 - Thu Apr 11 Efficiency considerations:
  • Quantization
  • Distillation
  • Distributed training
[slides: pptx, pdf]
Suggested Reading: LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale
Additional Reading:
  1. Data-Free Quantization Through Weight Equalization and Bias Correction
  2. Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
  3. ZeRO: Memory Optimizations Toward Training Trillion Parameter Models
  4. GPipe: Easy Scaling with Micro-Batch Pipeline Parallelism
#25 - Tue Apr 16 Scaling LMs:
  • Thinking about computation cost
  • Optimal scaling
[slides: pptx, pdf]
Suggested Reading: Training Compute-Optimal Large Language Models
Additional Reading:
  1. The FLOPs Calculus of Language Model Training
  2. Scaling Laws for Neural Language Models
#26 - Thu Apr 18 Scaling LMs:
  • Why didn't we scale earlier?
  • When scale does not help
  • Is scale all you need?
[slides: pptx, pdf]
Social concerns about LMs:
  • Bias, fairness and toxic language
[slides: pptx, pdf]
Suggested Reading: Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models
Additional Reading:
  1. UnQovering Stereotyping Biases via Underspecified Questions
  2. Fewer Errors, but More Stereotypes? The Effect of Model Size on Gender Bias
#27 - Tue Apr 23 Social concerns about LMs:
  • Hallucination
  • Truthfulness and veracity
  • Legal considerations and fair use
  • Reflections about future, dangers and misuses
[slides: pptx, pdf]
Suggested Reading: Talkin’ ‘Bout AI Generation: Copyright and the Generative-AI Supply Chain
Additional Reading:
  1. Red Teaming Language Models with Language Models
  2. TruthfulQA: Measuring How Models Mimic Human Falsehoods
  3. Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus
  4. Foundation Models and Fair Use
  5. Copyright and the Generative-AI Supply Chain
Apr 24 Midway reports deadline
#28 - Thu Apr 25 Project progress presentation
#29 - Tue Apr 30 No Class - Reading Days
#30 - Thu May 2 No Class - Reading Days
May 13 Final project reports
May 13 Final project poster session (6-9pm)
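
As a companion to the Transformer sessions (#9-#11), here is a minimal single-head self-attention layer in PyTorch. It follows the scaled dot-product formulation of "Attention Is All You Need" but deliberately omits multi-head projections, masking, and positional embeddings, and every dimension name and size below is illustrative rather than taken from any model covered in class.

    import math
    import torch
    import torch.nn as nn

    class SingleHeadSelfAttention(nn.Module):
        """Minimal scaled dot-product self-attention: one head, no masking."""

        def __init__(self, d_model: int):
            super().__init__()
            self.q_proj = nn.Linear(d_model, d_model)   # query projection
            self.k_proj = nn.Linear(d_model, d_model)   # key projection
            self.v_proj = nn.Linear(d_model, d_model)   # value projection

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq_len, d_model)
            q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
            scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))  # (batch, seq, seq)
            weights = torch.softmax(scores, dim=-1)                   # attention weights per token
            return weights @ v                                        # weighted sum of value vectors

    # Illustrative shapes: a batch of 2 sequences, 5 tokens each, 16-dim embeddings.
    attn = SingleHeadSelfAttention(d_model=16)
    out = attn(torch.randn(2, 5, 16))
    print(out.shape)  # torch.Size([2, 5, 16])

In a full Transformer language model this block is wrapped with residual connections, layer normalization, a feedforward sublayer, and a causal mask so that each position attends only to earlier tokens.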

Reference text

There is no required text, though the following can be useful:


Relevant Resources

Here are several resources available for free:

Besides these resources, we will try our best to satisfy individual needs through discussion.


Code of Conduct

The strength of the university depends on academic and personal integrity. In this course, you must be honest and truthful, abiding by the Computer Science Academic Integrity Policy:

Cheating is wrong. Cheating hurts our community by undermining academic integrity, creating mistrust, and fostering unfair competition. The university will punish cheaters with failure on an assignment, failure in a course, permanent transcript notation, suspension, and/or expulsion. Offenses may be reported to medical, law or other professional or graduate schools when a cheater applies. Violations can include cheating on exams, plagiarism, reuse of assignments without permission, improper use of the Internet and electronic devices, unauthorized collaboration, alteration of graded assignments, forgery and falsification, lying, facilitating academic dishonesty, and unfair competition. Ignorance of these rules is not an excuse.

Academic honesty is required in all work you submit to be graded. Except where the instructor specifies group work, you must solve all homework and programming assignments without the help of others. For example, you must not look at anyone else’s solutions (including program code) to your homework problems. However, you may discuss assignment specifications (not solutions) with others to be sure you understand what is required by the assignment. If your instructor permits using fragments of source code from outside sources, such as your textbook or on-line resources, you must properly cite the source. Not citing it constitutes plagiarism. Similarly, your group projects must list everyone who participated.

In the above paragraph, "outside sources" also includes content produced by an AI assistant such as ChatGPT. This follows either from treating the AI assistant as a person for the purposes of this policy (controversial) or from acknowledging that the AI assistant was trained directly on people's original work. Thus, while you are not forbidden from using these tools, you should consider the above policy carefully and quote where appropriate. Assignments that are in large part quoted from an AI assistant are very unlikely to be evaluated positively. In addition, if a student's work is substantially identical to another student's work, that will be grounds for a plagiarism investigation, regardless of whether the prose was produced by an AI assistant.

Falsifying program output or results is prohibited. Your instructor is free to override parts of this policy for particular assignments. To protect yourself: (1) Ask the instructor if you are not sure what is permissible. (2) Seek help from the instructor, TA or CAs, as you are always encouraged to do, rather than from other students. (3) Cite any questionable sources of help you may have received.

Report any violations you witness to the instructor. You can find more information about university misconduct policies on the web for undergraduate and graduate students.

Johns Hopkins University is committed to equal opportunity for its faculty, staff, and students. To that end, the university does not discriminate on the basis of sex, gender, marital status, pregnancy, race, color, ethnicity, national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, military status, immigration status or other legally protected characteristic. The University's Discrimination and Harassment Policy and Procedures provides information on how to report or file a complaint of discrimination or harassment based on any of the protected statuses listed in the earlier sentence, and the University’s prompt and equitable response to such complaints.