I recently had the pleasure of attending a fantastic seminar on 10 Years of Profiling the European Citizen at Vrije Universiteit Brussel (VUB), which was organised by Mireille Hildebrandt, Emre Bayamlıoğlu and their team there. As a result of this seminar, I was asked to develop a short provocative article to present among scholars there. As I have received numerous requests for the article over the last few weeks, I decided to publish it here to make it accessible to a wider audience sooner rather than later. It will be published as part of an edited volume developed from the seminar with Amsterdam University Press later this year. If you have any comments, questions or suggestions, please do not hesitate to contact me: email@example.com.
The Privacy and Sustainable Computing Lab at Vienna University of Economics and Business and the Europa-University Viadrina are organising a 2-day workshop on:
Algorithmic Management: Designing systems which promote human autonomy
on 20-21 September 2018 at WU Vienna, Welthandelsplatz 1, 1020 Vienna, Austria
This workshop is part of a wider research project on Algorithmic Management, which studies the structural role of algorithms as forms of management in work environments, where automated digital platforms such as Amazon, Uber or Clickworker manage the interaction of workers through algorithms. The process of assigning or reordering the individual tasks to be completed is often fully automated. This means that algorithms may partly act like a manager who exercises control over a large number of decentralized workers. The goal of our research project is to investigate the interplay of control and autonomy in such a managerial regime, with a specific focus on the food-delivery sector.
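To make the idea concrete, here is a minimal, purely hypothetical sketch of such an automated assignment rule in Python. The scoring function, worker ratings, and field names are invented for illustration and do not describe any actual platform; the point is only that a fixed rule, not a human manager, decides who gets which task.

```python
from dataclasses import dataclass

# Purely illustrative sketch of algorithmic task assignment; the scoring
# rule and all names are hypothetical, not taken from any real platform.

@dataclass
class Worker:
    name: str
    rating: float        # platform-assigned performance score
    open_tasks: int = 0  # number of tasks currently assigned

def assign(task: str, workers: list[Worker]) -> Worker:
    # The "manager" is this one rule: prefer high ratings, penalize load.
    best = max(workers, key=lambda w: w.rating - 0.5 * w.open_tasks)
    best.open_tasks += 1
    print(f"{task} -> {best.name}")
    return best

workers = [Worker("A", 4.8), Worker("B", 4.9), Worker("C", 4.2)]
for order in ["order-1", "order-2", "order-3"]:
    assign(order, workers)
```

Even in this toy version, the worker never sees why a task was (or was not) routed to them, which is exactly the opacity the project studies.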
The current agenda for the workshop, together with further details about event registration and logistics, can be found here: https://www.privacylab.at/event/algorithmic-management-designing-systems-which-promote-human-autonomy/
Blog post written by Axel Polleres and originally posted on http://doingthingswithdata.wordpress.com/
A while ago, together with my colleagues Sarah Spiekermann-Hoff, Sabrina Kirrane, and Ben Wagner (who joined a bit later), we founded a joint research lab to foster interdisciplinary discussions on how information systems can be built in a private, secure, ethical, value-driven, and ultimately more human-centric manner.
We called this lab the Privacy & Sustainable Computing Lab, to provide a platform to jointly promote and discuss our research and views, and to provide a think-tank, also open to others, on how these goals can be achieved. Since then, we have had many at times heated, but first and foremost always very rewarding, discussions to create mutual understanding between researchers coming from engineering, AI, social science, or legal backgrounds on how to address the challenges around digitization.
Not surprisingly, the first (and maybe still unresolved) discussion was about how to name the lab. Back then, our research was very much focused on privacy, but we all felt that the topic of societal challenges in the context of the digital age needed to be viewed more broadly. Consequently, one of the first suggestions floating around was “Privacy-aware and Sustainable Computing Lab”, emphasizing privacy-awareness as one of the main pillars, but aiming for a broader definition of sustainable computing; we later shortened this to just “Privacy & Sustainable Computing Lab” (merely for reasons of length, if I remember correctly; my co-founders may correct me if I am wrong 😉 ).
Towards defining Sustainable Computing
When we tried to come up with a joint definition of the term “Sustainable Computing” back then, I answered in an internal e-mail thread that
Sustainable Computing for me obviously encompasses:
- individually friendly
- societally friendly
- ecologically friendly
aspects of [the design and usage of] Computing and Information Systems. In fact, in my personal understanding these three aspects are – in some contexts – potentially conflicting, but resolving and discussing these conflicts is one of the reasons why we founded this lab in the first place.
Conflicts add Value(s)
Conflicts can arise, for instance, from individual well-being being weighed higher than ecological impacts (or vice versa), or likewise from how far a society as a whole needs to respect and protect the individual’s rights and needs, and in which cases (if ever) the common well-being should be put above those individual rights.
These are fundamental questions on which I would by no means consider myself an expert, but where obviously, if you think them into the design of systems or into a technology research agenda (which would be more my home turf), doing so both adds value and makes us discuss values as such. That is, making value conflicts explicit, and resolving conflicts about the understanding and importance of these values, is a necessary part of Sustainable Computing. This is why Sarah suggested the addition of
value-based computing as part of the definition.
Overall, we haven’t finished the discussion about a crisp definition of what Sustainable Computing is (which is maybe why you don’t find one yet on our website), but for me this is actually ok: to keep this definition evolving and agile, to stay ready for discussions about it, to keep learning from each other. We also discussed sustainable computing quite extensively in a mission workshop in December 2017, to try to better define what sustainable computing is and how it influences our research.
What I mainly learned is that we as technology experts play a crucial role and carry responsibility in defining Sustainable Computing: by being able to explain the limitations of technology, but also by acting as advocates of the benefits of technologies, in spite of risks and justified skepticism, and by helping to develop technologies that minimize these risks.
Some examples of what, for me, falls under Sustainable Computing:
- Government Transparency through Open Data, and making such Open Data easily accessible to citizens – we try to get closer to this vision in our national research project CommuniData
- Building technical infrastructures to support transparency in personal data processing for data subjects, but also to help companies fulfill the respective requirements under legal regulations such as the GDPR – we are working on such an infrastructure in our EU H2020 project SPECIAL
- Building standard model processes for value-based, ethical system design, as the IEEE P7000 group does (with the involvement of my colleague Sarah Spiekermann).
- Thinking about how AI can support ethics (instead of fearmongering about the risks of AI) – we will shortly publish a special issue with some examples in a forthcoming volume of ACM Transactions on Internet Technology (TOIT)
- Studying phenomena and social behaviours online with the purpose of detecting and pinpointing biases, as for example our colleagues at the Complexity Science Hub Vienna do in their work on Computational Social Sciences, understanding Systemic Risks and Socio-Economic Phenomena
Many more such examples are hopefully coming out of our lab through cross-fertilizing, interdisciplinary research and discussions in the years to come…
After two years of negotiations in the Council of Europe Committee of experts on Internet Intermediaries (MSI-NET), the final documents of the expert group have finally been published. While the negotiations among the experts and governmental representatives in the group were not without difficulty, the final texts are relatively strong for what are still negotiated texts. Of particular interest to experts working on the regulation of algorithms and automation is the Study on Algorithms and Human Rights, which was drafted by Dr. Ben Wagner, one of the members of the lab and the Rapporteur of the Study.
The study takes a broad approach to the human rights implications of algorithms, looking not just at Privacy but also at Freedom of Assembly and Expression and the Right to a Fair Trial in the context of the European Convention on Human Rights. While the suggested regulatory responses focus on both transparency and accountability, they also acknowledge that additional standard-setting measures and ethical frameworks will be required to ensure that human rights are safeguarded in automated technical systems. Here, existing projects at the Lab such as P7000 or SPECIAL can provide an important contribution to the debate and help ensure that not just privacy but all human rights are safeguarded online.
Welcome to the new Privacy and Sustainable Computing Lab blog!
We look forward to having further blog posts listed here in the next few weeks, giving visitors to this website a better insight into what we’re doing. If you have questions about the Lab, please don’t hesitate to contact: firstname.lastname@example.org.