Abstract
In this dissertation, I address some of the challenges of adaptive learning systems through the design of an assessment model. The model aims to estimate a student's mastery level while determining the optimal length of the assessment, tailored to the individual student. This task is not trivial, and it raises four basic questions:

1. What has already been accomplished with models that predict future performance as learners work through a sequence of exercises?
2. How can student responses be translated into a mastery metric for a skill?
3. What is the optimal number of questions to offer, balancing the risk of overwhelming the learner against the need to gather as much information as possible about their mastery level?
4. How can one ensure that the assessment stops soon enough across a wide range of student performances, including students prone to 'wheel-spinning'?

This dissertation tackles each of these questions in turn.
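To make the second and fourth questions concrete, the sketch below shows one common way such systems are built: a standard Bayesian Knowledge Tracing update that turns a sequence of correct/incorrect responses into a mastery probability, paired with a simple stopping rule (stop once the estimate crosses a threshold or an item budget runs out). This is a minimal illustration only; the parameter names, values, and stopping rule are assumptions for exposition and are not the model developed in the dissertation.

```python
def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_learn=0.1):
    """One Bayesian Knowledge Tracing step: posterior given a response,
    followed by a learning transition. Parameter values are illustrative."""
    if correct:
        likelihood_mastered = 1 - p_slip
        likelihood_unmastered = p_guess
    else:
        likelihood_mastered = p_slip
        likelihood_unmastered = 1 - p_guess
    posterior = (p_mastery * likelihood_mastered) / (
        p_mastery * likelihood_mastered + (1 - p_mastery) * likelihood_unmastered
    )
    # Account for the chance of learning the skill on this practice opportunity.
    return posterior + (1 - posterior) * p_learn


def adaptive_assessment(responses, mastery_threshold=0.95, max_items=20, prior=0.3):
    """Stop as soon as the mastery estimate crosses the threshold
    or the item budget is exhausted (a hypothetical stopping policy)."""
    p = prior
    for n, correct in enumerate(responses[:max_items], start=1):
        p = bkt_update(p, correct)
        if p >= mastery_threshold:
            return p, n  # confident of mastery: stop early
    return p, min(len(responses), max_items)


if __name__ == "__main__":
    estimate, items_used = adaptive_assessment([True, True, False, True, True, True, True])
    print(f"mastery estimate {estimate:.2f} after {items_used} items")
```

A fixed threshold like this illustrates the tension in questions 3 and 4: set it too high and weak or 'wheel-spinning' students never reach a stopping point; set it too low and the assessment ends before enough evidence has been gathered.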
Original language | English
---|---
Qualification | PhD
Awarding Institution |
Supervisors/Advisors |
Award date | 24 May 2024
DOIs |
Publication status | Published - 24 May 2024
Keywords
- adaptive assessment
- performance model
- knowledge tracing
- mastery criteria
- stopping policy
- machine learning
- Bayesian
- response time
- online education