Short course — Large Language Models
Start date not set yet
Are you a developer, data analyst, data scientist, data engineer, entrepreneur, consultant, manager, or bachelor/master student with basic knowledge in programming? Take your understanding of Large Language Models to the next level with our in-depth, hands-on course on LLMs and Deep Learning.
In brief
About the LLM Course
The rise of Artificial Intelligence (AI) is transforming the way companies operate, especially now that Large Language Models (LLMs) are making AI more accessible than ever. But what exactly can you achieve with LLMs, and, just as importantly, what are their limitations? We begin with an introduction to the fundamentals of LLMs before diving into practical implementation. Solid programming skills - preferably in Python - are therefore highly recommended. This course is perfect for:
- professionals with basic programming experience (preferably in Python);
- programmers actively working in the field of data;
- data analysts, data scientists, and data engineers looking to deepen their expertise in working with and implementing LLMs;
- managers, consultants, or entrepreneurs with programming experience, or those who employ programmers for whom a deeper understanding of LLMs could be valuable;
- bachelor's or master's students with programming experience who want to expand their skills in the field of LLMs.
Course Outcome
By the end of this 7-day LLM course, you will have gained valuable insights into both the possibilities and constraints of LLMs. You will also have built your own Retrieval Augmented Generation (RAG) system to "chat" with your documents through an LLM.
Course programme
During this 7-day course, we blend theory with practice. You learn to build a fully functional RAG application.
Week 1: Introduction to Deep Learning and Natural Language Processing
We explore concepts such as: How does deep learning work? How does a computer learn to process language? What are Embeddings?
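The core idea behind embeddings is that words are represented as vectors, so that words with similar meanings end up pointing in similar directions. A minimal sketch, using hand-crafted toy vectors (real embeddings are learned by a model and have hundreds of dimensions):

```python
import math

# Toy 4-dimensional word embeddings, hand-crafted for illustration only.
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.0],
    "queen": [0.9, 0.1, 0.8, 0.0],
    "apple": [0.0, 0.1, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 = more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
# Related words ("king"/"queen") score higher than unrelated ones ("king"/"apple").
```

Comparing vectors with cosine similarity like this is the same operation that later powers semantic search and RAG.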
Week 2: Transformers and Transfer Learning
The Transformer architecture has revolutionized LLMs. This week you learn how these models are constructed and how to transfer knowledge from pretrained models to new tasks.
Week 3: Prompting, API Calls and Dockerization
This week covers how to create effective prompts, launch a model in a Docker environment and make it accessible to other programs via an API call.
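A prompt sent to an LLM, whether directly or via an API call, is usually built from a template. A minimal sketch of such a prompt builder (the template wording here is an illustrative assumption, not the course's template; effective prompts are refined experimentally):

```python
def build_prompt(context: str, question: str) -> str:
    """Fill a simple instruction template for question answering."""
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_prompt(
    context="The course runs for seven weeks.",
    question="How long does the course run?",
)
# The resulting string is what you would send to a model endpoint,
# e.g. via an HTTP request to a model running in a Docker container.
```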
Week 4: Preprocessing, Vector Databases and Semantic Search
In week 4 we get started with the basic ingredients for RAG: chatting with your own documents. This includes preprocessing documents, storing them in a vector database, and using that database for RAG.
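The vector-database idea can be sketched in a few lines: embed each document, store the vectors, and at query time return the documents whose vectors are most similar to the query. The character-frequency "embedding" below is a deliberately crude stand-in for a real sentence-embedding model:

```python
import math

def embed(text):
    """Toy embedding: letter-frequency vector (illustrative only;
    a real system would use a trained embedding model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Minimal in-memory "vector database": a list of (text, vector) pairs.
store = []
for doc in ["the cat sat on the mat",
            "stock prices fell sharply",
            "a kitten sleeps"]:
    store.append((doc, embed(doc)))

def semantic_search(query, k=1):
    """Return the k stored documents most similar to the query."""
    q = embed(query)
    ranked = sorted(store, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

Production vector databases add indexing structures so this nearest-neighbour search stays fast over millions of documents, but the interface is essentially the same.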
Week 5: Retrieval Augmented Generation, Autoencoders and JEPA
In week 5 we dive deeper into the different ways of working with embeddings. In addition to RAG, you explore Autoencoders (for tasks like anomaly detection) and developments in the Joint Embedding Predictive Architecture (JEPA).
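The RAG pipeline itself is a short chain: retrieve relevant documents, paste them into a prompt, and hand the prompt to a model. A sketch with a keyword-overlap retriever standing in for vector search, and a stub in place of a real LLM call:

```python
def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query
    (a stand-in for embedding-based retrieval)."""
    q_words = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

def answer_with_rag(query, documents, llm):
    """Retrieve context, build a prompt, and ask the model."""
    context = "\n".join(retrieve(query, documents))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return llm(prompt)

docs = ["Paris is the capital of France.",
        "The Nile is a river in Africa."]

# The lambda below is a placeholder for a real model call; it just
# echoes the prompt so we can inspect what the model would receive.
result = answer_with_rag("What is the capital of France?", docs, llm=lambda p: p)
```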
Week 6: Automated Knowledge Graphs
Alongside RAG, you also learn how to work with automated knowledge graphs.
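A knowledge graph stores facts as subject-relation-object triples that can be queried explicitly, rather than retrieved by similarity. A minimal sketch (the facts and relation names below are illustrative examples, not course content):

```python
# A tiny knowledge graph as a set of (subject, relation, object) triples.
triples = {
    ("Amsterdam", "capital_of", "Netherlands"),
    ("Amsterdam", "located_in", "Netherlands"),
    ("Netherlands", "part_of", "Europe"),
}

def objects_of(subject, relation):
    """Look up every object linked to a subject via a given relation."""
    return {o for (s, r, o) in triples if s == subject and r == relation}
```

Automating this means having an LLM extract such triples from raw text, after which the graph can serve as a structured complement to RAG retrieval.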
Week 7: User Interfaces
Ultimately, we want an interface that the user can interact with. In the last lesson we bring everything together: from preprocessing and the RAG model to the API and the user interface for interaction.
More information
Get to know the lecturer
Raoul Grouls is a lecturer-researcher at the HAN Research Center for AI & Data Science and holds a degree in Artificial Intelligence from Utrecht University. He has previously worked as a senior data scientist at various organizations such as Business & Decision and the international consulting firm Eraneos.
Contact us
Got a question? Contact us at ASK HAN. We're happy to help!