How we evaluate postgraduate medical e-learning: Systematic review

Robert De Leeuw, Anneloes De Soet, Sabine Van Der Horst, Kieran Walsh, Michiel Westerman, Fedde Scheele

Research output: Contribution to Journal › Review article › Academic › peer-review

Abstract

Background: Electronic learning (e-learning) in postgraduate medical education has evolved rapidly; however, we tend to evaluate it only on its primary outcome or learning aim, whereas its effectiveness also depends on its instructional design. We believe it is important to have an overview of all the methods currently used to evaluate e-learning design, so that the preferred method can be identified and the next steps needed to continue evaluating postgraduate medical e-learning can be outlined.

Objective: This study aimed to identify and compare the outcomes and methods used to evaluate postgraduate medical e-learning.

Methods: We performed a systematic literature review using the Web of Science, PubMed, Education Resources Information Center, and Cumulative Index of Nursing and Allied Health Literature databases. Studies that used postgraduates as participants and evaluated any form of e-learning were included. Studies without any evaluation outcome (eg, a mere description of e-learning) were excluded.

Results: The initial search identified 5973 articles, of which 418 were included in our analysis. The study types were trials, prospective cohorts, case reports, and reviews. The primary outcomes of the included studies were knowledge, skills, and attitude. A total of 12 instruments were used to evaluate a specific primary outcome, such as laparoscopic skills or training-related stress. The secondary outcomes mainly evaluated satisfaction, motivation, efficiency, and usefulness. We found 13 e-learning design methods across 19 studies (4.5%, 19/418). These methods evaluated usability, motivational characteristics, or the use of learning styles, or were based on instructional design theories, such as Gagné's instructional design, the Heidelberg inventory, Kern's curriculum development steps, and a scale based on cognitive load theory. Finally, 2 instruments attempted to evaluate several aspects of a design, based on the experience of creating e-learning.

Conclusions: Evaluating the effect of e-learning design is complicated. Given the diversity of e-learning methods, there are many ways to carry out such an evaluation, and probably many ways to do so correctly. However, the current literature shows that we have yet to reach any form of consensus on which indicators to evaluate. There is a great need for an evaluation tool that is properly constructed, validated, and tested. Such a tool would allow the effects of e-learning to be compared more consistently and would help the authors of e-learning continue to improve their products.
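
A note on the Methods: searches like the one described are often scripted so they can be rerun and audited. The sketch below shows how the PubMed arm of such a search might be automated with Biopython's Entrez E-utilities wrapper; the contact email and the query string are illustrative placeholders, not the authors' actual search strategy.

# Minimal sketch: scripting the PubMed arm of a systematic search with
# Biopython's Entrez E-utilities wrapper. The query and email address are
# hypothetical placeholders, NOT the search strategy used in this review.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI asks for a contact address

# Illustrative query combining intervention and population terms
query = ('("e-learning" OR "electronic learning") '
         'AND "postgraduate medical education"')

# esearch returns the total hit count and a list of matching PubMed IDs
handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print("Total hits:", record["Count"])        # number of matches for this query
print("First PMIDs:", record["IdList"][:5])  # IDs to feed into title/abstract screening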
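
For clarity, the proportion of included studies that evaluated e-learning design, as reported in the Results, follows directly from the screening counts:

\[ \frac{19}{418} \approx 0.0455 \approx 4.5\% \]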

Original language: English
Article number: e13128
Journal: Journal of Medical Internet Research
Volume: 21
Issue number: 4
DOIs: https://doi.org/10.2196/13128
ISSN: 1438-8871
Publication status: Published - 1 Apr 2019

Keywords

  • Distance education
  • Learning
  • Professional education

Cite this

De Leeuw, R., De Soet, A., Van Der Horst, S., Walsh, K., Westerman, M., & Scheele, F. (2019). How we evaluate postgraduate medical e-learning: Systematic review. Journal of Medical Internet Research, 21(4), [e13128]. https://doi.org/10.2196/13128