Study guide URL
https://studiegids.vu.nl/en/courses/2025-2026/XB_0084

Course Objective
At the end of the course, the students will:
- understand the legal, ethical, and societal implications of AI developments.
- be able to write informed opinions about AI legislation.
- be trained to master a set of complex legal AI-related topics within a restricted period of time.
- be able to work together as a team to identify not only the technical but also the legal issues with the design of AI systems.
- be familiar with basic knowledge of the law that applies to AI systems, including law specifically focused on AI systems, such as the EU AI Act.
- be able to identify legal risks when designing AI systems.
- be in a position to design AI systems compliant with the law.
- be able to use legal rules to protect their AI systems.
- be able to reflect critically on the impact of legal rules on technology.
- be able to express their own motivated opinions and contribute to the writing of better legal rules.

Course Content
For a long time, lawyers were often seen by software developers as a nuisance, a sentiment perhaps best illustrated by an incident at a GitHub board meeting around 2010. During the discussion, a lawyer pointed out some legal issues and cautiously began, “I know you don’t want to hear this from a lawyer, but…” Before he could continue, another board member cut him off aggressively: “…then shut the f*** up.” This attitude was common among computer scientists as well. While developers and designers focused on what was possible, lawyers concentrated on what was permissible, often without attempting to understand the potential of the technology.

Today, the situation is no longer black and white, but rather gray. Lawyers understand technology and development processes better, and AI specialists understand the need to comply with the law. Technology and the law are no longer systematically seen as adversaries. Against this backdrop, many governments around the world are launching initiatives to regulate AI. Their goal is often to contain high-risk AI while supporting other AI applications. One such example is the European Union’s Artificial Intelligence Act.

This course explores how AI is governed by legal rules and standards. By the end of the course, students will have a solid grasp of the EU AI Act, including the constraints it imposes on AI development and its key limitations. This will put them in a unique position to contribute to the development of legally compliant AI systems. Most importantly, students will learn how to design AI systems that not only comply with legal requirements but are also protected by them. In short, they will learn how the law works, and how to work with it.
Teaching Methods
Students will be given five lectures focusing mainly on the European Union’s Artificial Intelligence Act. Teachers will use a Socratic method to foster discussion in the classroom. Students will also take part in two seminars to explore the EU AI Act from a practical perspective. They will comment on the AI Act and discuss their contributions with classmates and professors. Students will work in small teams that their professor will assign.

Method of Assessment
One final on-campus exam will account for 100% of the students’ grade. Further details about the exam format will be provided during the first class. A mock exam will also be made available halfway through the course.

Literature
The literature will be made available through Canvas.

Target Audience
Bachelor of Artificial Intelligence (year 2)

Language of Tuition
- English
Study type
- Bachelor