URL study guide
https://studiegids.vu.nl/en/courses/2025-2026/XM_0083

Course Objective
1. Knowledge and understanding
- Be able to explain the main components of a deep learning architecture: fully-connected layers, convolutional layers, activation functions, LSTM, and GRU.
- Be able to explain and derive the backpropagation algorithm.
- Be able to explain different commonly used deep architectures.
- Be able to explain Variational Autoencoders and unsupervised representation learning.
- Be able to explain the difference between discriminative and generative models.

2. Applying knowledge and understanding
- How the above-mentioned layers work and where to use them.
- How to formulate a neural network for a specific problem.
- How to analyze the performance of a neural network.
- Which neural network fits best for a given problem.

3. Making judgments
- Which deep learning model to use for a given problem (e.g., generative vs. discriminative).
- Which layers of a neural network are suitable for a given problem.

4. Communication skills
- Presenting an analysis in written form (a short report) for each assignment.

5. Learning skills
- Able to read (some) state-of-the-art papers.
- Able to use (to some extent) available deep learning libraries.

Course Content
Deep learning has become the leading learning and modeling paradigm in machine learning. During this course, we will present the basic components of deep learning, such as:
- different layers (e.g., linear layers, convolutional layers, pooling layers, recurrent layers);
- non-linear activation functions (e.g., sigmoid, ReLU);
- backpropagation;
- learning algorithms (e.g., ADAM);
- other techniques (e.g., dropout).
Further, we will show how to build deep architectures like AlexNet and ResNet. We will explain potential pitfalls and possible solutions, e.g., using residual connections and dense architectures. After discussing discriminative models, we will turn to generative models. We will start with linear latent variable models like probabilistic PCA (pPCA). Then we will discuss a non-linear version of pPCA, namely, Variational Auto-Encoders (VAEs). Both pPCA and VAEs are so-called prescribed models that require formulating the likelihood function. Next, we will introduce unsupervised representation learning, different ways to improve VAEs, and how to obtain disentangled representations. At the end of the course, we will outline recent developments in deep learning, namely, Reinforcement Learning and Deep Reinforcement Learning. Finally, we will introduce generalization and explainability, two fundamental topics of deep learning.
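To give a flavor of how the components above fit together, here is a minimal NumPy sketch of a fully-connected layer, a ReLU activation, and one backpropagation step of gradient descent. The network sizes, batch, targets, and learning rate are arbitrary illustrative choices, not part of the course material:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fully-connected network: input dim 4 -> hidden dim 8 -> output dim 1.
W1 = rng.normal(0, 0.1, (4, 8))
W2 = rng.normal(0, 0.1, (8, 1))

def forward(x):
    h = np.maximum(0, x @ W1)  # linear layer followed by ReLU activation
    return h @ W2, h           # output and the cached hidden activation

x = rng.normal(size=(16, 4))   # a batch of 16 dummy inputs
y = rng.normal(size=(16, 1))   # dummy regression targets

y_hat, h = forward(x)
loss_before = np.mean((y_hat - y) ** 2)

# Backpropagation of the mean-squared-error loss, layer by layer.
grad_out = 2 * (y_hat - y) / len(x)   # dL/dy_hat
grad_W2 = h.T @ grad_out              # through the second linear layer
grad_h = grad_out @ W2.T * (h > 0)    # through ReLU (local gradient is 0 or 1)
grad_W1 = x.T @ grad_h                # through the first linear layer

# One plain gradient-descent update (ADAM would adapt this step per weight).
lr = 0.1
W1 -= lr * grad_W1
W2 -= lr * grad_W2

loss_after = np.mean((forward(x)[0] - y) ** 2)
```

With a small step size, the update moves along the negative gradient, so the loss on this batch decreases after the step.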
Teaching Methods
The course consists of two parts: a digital exam and practical assignments. The digital exam is supported by lectures (two or three per week). The assignments are supported by practical sessions led by TAs. The first two assignments are carried out individually, and the remaining assignments are done in small groups. No resit is possible for the practical assignments.

Method of Assessment
Final exam (50%) and practical assignments (50%). There are 4 assignments (2 of them are done individually; 2 of them are done in groups). The final exam must be passed with a sufficient grade (equivalent to a grade of 5.5 or higher). There is a resit for the exam. The practical assignments cannot be redone.

Literature
The literature will be made available on Canvas. Two suggested books:
- Goodfellow, I., Bengio, Y., & Courville, A. (2016). "Deep Learning". MIT Press.
- Tomczak, J. M. (2022). "Deep Generative Modeling". Springer, Cham.

Target Audience
- Master Artificial Intelligence
- Master Business Analytics

Recommended background knowledge
- Calculus
- Linear Algebra
- Statistics & Probability Theory
- Programming (Python)
- Machine Learning

Language of Tuition
- English
Study type
- Master