TY - JOUR
T1 - Evaluating FAIR maturity through a scalable, automated, community-governed framework
AU - Wilkinson, Mark D.
AU - Dumontier, Michel
AU - Sansone, Susanna-Assunta
AU - Bonino da Silva Santos, Luiz Olavo
AU - Prieto, Mario
AU - Batista, Dominique
AU - McQuilton, Peter
AU - Kuhn, Tobias
AU - Rocca-Serra, Philippe
AU - Crosas, Mercè
AU - Schultes, Erik
PY - 2019/9/20
Y1 - 2019/9/20
N2 - Transparent evaluations of FAIRness are increasingly required by a wide range of stakeholders, from scientists to publishers, funding agencies and policy makers. We propose a scalable, automatable framework to evaluate digital resources that encompasses measurable indicators, open source tools, and participation guidelines, which come together to accommodate domain-relevant, community-defined FAIR assessments. The components of the framework are: (1) Maturity Indicators - community-authored specifications that delimit a specific automatically-measurable FAIR behavior; (2) Compliance Tests - small Web apps that test digital resources against individual Maturity Indicators; and (3) the Evaluator, a Web application that registers, assembles, and applies community-relevant sets of Compliance Tests against a digital resource, and provides a detailed report about what a machine "sees" when it visits that resource. We discuss the technical and social considerations of FAIR assessments, and how these translate to our community-driven infrastructure. We then illustrate how the output of the Evaluator tool can serve as a roadmap to assist data stewards to incrementally and realistically improve the FAIRness of their resources.
UR - http://www.scopus.com/inward/record.url?scp=85072522270&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85072522270&partnerID=8YFLogxK
U2 - 10.1038/s41597-019-0184-5
DO - 10.1038/s41597-019-0184-5
M3 - Article
C2 - 31541130
SN - 2052-4463
VL - 6
SP - 1
EP - 12
JO - Scientific Data
JF - Scientific Data
IS - 1
M1 - 174
ER -