Questioning Big Data: Crowdsourcing crisis data towards an inclusive humanitarian response

Research output: Contribution to Journal › Article › Academic › peer-review

Abstract

The aim of this paper is to critically explore whether crowdsourced Big Data enables an inclusive humanitarian response at times of crisis. We argue that all data, including Big Data, are socially constructed artefacts that reflect the contexts and processes of their creation. To support our argument, we qualitatively analysed the process of ‘Big Data making’ that occurred by way of crowdsourcing through open data platforms, in the context of two specific humanitarian crises, namely the 2010 earthquake in Haiti and the 2015 earthquake in Nepal. We show that the process of creating Big Data from local and global sources of knowledge entails the transformation of information as it moves from one distinct group of contributors to the next. The implication of this transformation is that locally based, affected people and often the original ‘crowd’ are excluded from the information flow, and from the interpretation process of crowdsourced crisis knowledge, as used by formal responding organizations, and are marginalized in their ability to benefit from Big Data in support of their own means. Our paper contributes a critical perspective to the debate on participatory Big Data, by explaining the process of in- and exclusion during data making, towards more responsive humanitarian relief.
Language: English
Pages: 1-13
Journal: Big Data & Society
Volume: 3
Issue number: 2
DOI: 10.1177/2053951716662054
Publication status: Published - 2016

Cite this

@article{44137d7785624d9796b1861c26724636,
  title = "Questioning Big Data: Crowdsourcing crisis data towards an inclusive humanitarian response",
  author = "F. Mulder and J.E. Ferguson and P. Groenewegen and F.K. Boersma and J.J. Wolbers",
  year = "2016",
  doi = "10.1177/2053951716662054",
  language = "English",
  volume = "3",
  pages = "1--13",
  journal = "Big Data & Society",
  issn = "2053-9517",
  publisher = "SAGE Publications",
  number = "2",
}

Questioning Big Data: Crowdsourcing crisis data towards an inclusive humanitarian response. / Mulder, F.; Ferguson, J.E.; Groenewegen, P.; Boersma, F.K.; Wolbers, J.J.

In: Big Data & Society, Vol. 3, No. 2, 2016, p. 1-13.
