Study guide URL

https://studiegids.vu.nl/en/courses/2024-2025/XB_0025

Course Objective

1. Knowledge and understanding of:
- Optimization techniques: Derivative-Free Optimization (Hill Climbing, Local Search), Gradient-based Optimization (Gradient Descent and Stochastic Gradient Descent)
- Evolutionary Algorithms
- Neural Networks (Fully-Connected Networks, Convolutional Networks)
- Sampling Methods: Metropolis-Hastings Algorithm, Simulated Annealing
- Reinforcement Learning: Q-learning
- Neuroevolution: Neural Architecture Search, NEAT

2. Applying knowledge and understanding:
- How the aforementioned optimization algorithms work and where to use them.
- How to formulate an evolutionary algorithm for a specific problem.
- How to analyze an optimization algorithm.
- Which neural network fits a given problem best.

3. Making judgments:
- Which optimization algorithm to use for a given optimization problem (either a derivative-free method or a gradient-based method).
- Which layers of a neural network are suitable for a given problem.

4. Communication skills:
- Presenting an analysis in written form (a short report) for each assignment.
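The gradient-based methods named in the objectives can be illustrated in a few lines. The sketch below is our own illustration, not part of the course materials: plain gradient descent on the quadratic f(x) = (x - 3)^2, whose derivative is 2(x - 3).

```python
# Illustrative sketch (not from the course materials): plain gradient
# descent on f(x) = (x - 3)^2, with derivative f'(x) = 2 * (x - 3).

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient, starting from x0."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges towards the minimizer x = 3
```

Stochastic gradient descent follows the same update rule, but evaluates the gradient on a random subsample of the data at each step.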

Course Content

In the course Computational Intelligence, we focus mainly on the computational aspects of Artificial Intelligence, namely optimization algorithms for solving learning problems. Specifically, we consider problems that cannot be solved using gradient information, due to their combinatorial character or the complexity of the objective function (e.g., non-differentiability, or a black-box objective function). Such problems arise throughout computer science and AI, for example in the identification of biological systems, task scheduling on chips, robotics, and finding optimal architectures of neural networks. To tackle these problems, we introduce different classes of algorithms, namely hill climbing and local search, and evolutionary algorithms. Additionally, we explain sampling methods (Markov Chain Monte Carlo) and population-based sampling methods, and indicate how they are linked to evolutionary algorithms.

In the second part of the course, we discuss neural networks as the current state-of-the-art modeling paradigm. We present the basic components of deep learning, such as different layers (e.g., linear, convolutional, pooling, and recurrent layers) and non-linear activation functions (e.g., sigmoid, ReLU), and show how to use them for specific problems. At the end of the course, we touch upon alternative approaches to learning using Reinforcement Learning and Bayesian Optimization. We conclude the course with the recently revived field of neuroevolution, which aims to utilize evolutionary algorithms in training neural networks.
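As a concrete instance of the derivative-free setting described above, the sketch below (a toy illustration under our own assumptions, not taken from the course materials) applies hill climbing to OneMax, a standard black-box combinatorial test problem whose objective simply counts the ones in a bit string:

```python
import random

# Illustrative sketch (not from the course materials): simple hill
# climbing on OneMax. The objective is treated as a black box: the
# algorithm only queries its value, never a gradient.

def onemax(bits):
    return sum(bits)  # hidden objective: number of ones

def hill_climb(n=20, iters=500, seed=0):
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(iters):
        # Neighbor: flip one randomly chosen bit.
        i = rng.randrange(n)
        neighbor = current.copy()
        neighbor[i] ^= 1
        if onemax(neighbor) > onemax(current):
            current = neighbor  # accept strict improvements only
    return current

best = hill_climb()
print(onemax(best))  # on OneMax, this should approach the optimum n
```

Simulated annealing differs from this loop only in occasionally accepting worsening neighbors with a temperature-dependent probability, which helps escape local optima on harder landscapes.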

Teaching Methods

Lectures and practical assignments.

Method of Assessment

The final grade is calculated from the final exam (50 points) and 5 individual practical assignments (10 points each, 50 points in total). To pass the course, students must obtain at least 25 points on the final exam and at least 55 points in total across the exam and assignments. The exam can be retaken (a resit). Solutions to the practical assignments must be submitted by the given deadlines; there is no resit option for the practical assignments.
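The passing rule above can be written down as a small check. The function name and interface below are our own (hypothetical), purely to illustrate the stated thresholds:

```python
# Hypothetical helper (names are ours, not from the study guide) encoding
# the stated rule: pass requires >= 25 exam points AND >= 55 points
# overall (exam max 50, five assignments worth 10 points each).

def passes(exam_points, assignment_points):
    total = exam_points + sum(assignment_points)
    return exam_points >= 25 and total >= 55

print(passes(30, [5, 5, 5, 5, 5]))       # 55 total, exam ok -> True
print(passes(24, [10, 10, 10, 10, 10]))  # 74 total, but exam < 25 -> False
```

Note that both conditions are necessary: full marks on the assignments cannot compensate for an exam score below 25.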

Literature

The literature will be made available on Canvas.

Target Audience

Bachelor Artificial Intelligence (track Intelligent Systems)

Recommended background knowledge

  • Calculus
  • Algebra
  • Statistics
  • Programming in Python

Academic year

1/09/24 – 31/08/25

Course level

6.00 EC

Language of Tuition

  • English

Study type

  • Premaster
  • Bachelor