In May 2019 the LSE launched its future strategy, LSE 2030, with the following opening statement: “Our strategy lays out the guiding principles and commitments that will help us shape the world’s future…” That is what a good teacher tells their students: that they are not only the future, but that they have the capacity and the responsibility to shape it. In the context of Big Data Ethics this is aptly phrased by Richards & King: “We are building a new digital society, and the values we build or fail to build into our new digital structures will define us.”

Algorithms are an integral part of our digital society. The ever-growing availability of data, combined with incredible computing power, has led to today’s success of algorithms. There is, however, also reason for caution and concern. To mention just a few threats:

- Decisions based on algorithms and profiles for which the decision-maker cannot provide an adequate explanation. For instance, people are denied a loan because the algorithm decided so on the basis of data related to the applicant, or parents are visited by social workers because an algorithm determined that their children are at risk of dropping out of school;
- The use of biometric data, which indelibly connects individuals to their data profiles, such as facial recognition software connecting physical appearance to online information;
- Mass surveillance by both government and business.

Given what algorithms can and might do, we as a society in general, and lawyers in particular, have a responsibility to decide how we want to shape the world we live in: which algorithms do we allow and which do we not, and where we do allow them, under what conditions?
| Place of Publication | https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3392869 |
| Publisher | LSE Law - Policy Briefing Paper |
| Edition | 2019, no. 34 |
| Media of output | Online |
| Publication status | Published - 23 May 2019 |
Keywords: Artificial Intelligence, ethics, algorithms, AI, technology, regulatory technology