TY - GEN
T1 - LOD Lab
T2 - 12th International Reasoning Web Summer School, RW 2016
AU - Beek, Wouter
AU - Rietveld, Laurens
AU - Ilievski, Filip
AU - Schlobach, Stefan
PY - 2017
Y1 - 2017
AB - With tens, if not hundreds, of billions of logical statements, the Linked Open Data (LOD) Cloud is one of the biggest knowledge bases ever built. As such, it is a gigantic source of information for applications in various domains, but, given its size, heterogeneous nature, and complexity, it is also an ideal test-bed for knowledge representation and reasoning. However, making use of this unique resource has proven next to impossible in the past due to a number of problems, including data collection, quality, accessibility, scalability, availability and findability. The LOD Laundromat and the LOD Lab are recent infrastructures that address these problems in a systematic way, by automatically crawling, cleaning, indexing, analysing and republishing data in a unified manner. Through a family of simple tools, the LOD Lab allows researchers to query, access, analyse and manipulate hundreds of thousands of data documents seamlessly, for example facilitating (reasoning) experiments over hundreds of thousands of (possibly integrated) datasets selected by content and metadata. This chapter provides the theoretical basis and practical skills required for making ideal use of this large-scale experimental platform. First, we study the problems that make it so hard to work with Semantic Web data in its current form. We then propose generic solutions and introduce the tools the reader needs to get started with their own experiments on the LOD Cloud.
UR - http://www.scopus.com/inward/record.url?scp=85014879851&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85014879851&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-49493-7_4
DO - 10.1007/978-3-319-49493-7_4
M3 - Conference contribution
AN - SCOPUS:85014879851
SN - 9783319494920
VL - 9885
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 124
EP - 155
BT - Reasoning Web: Logical Foundation of Knowledge Graph Construction and Query Answering - 12th International Summer School 2016, Tutorial Lectures
PB - Springer Verlag
Y2 - 5 September 2016 through 9 September 2016
ER -