How the Use of ‘Ethical’ Principles Hijacks Fundamental Freedoms: The Austrian Social Media Guidelines on Journalists’ Behaviour

A guest opinion piece by Eliska Pirkova

The recent draft of the Social Media Guidelines targeting journalists working for the public Austrian Broadcasting Corporation (ORF) is a troubling example of how self-regulatory ethical Codes of Conduct may be abused by those who wish to establish stricter control over the press and media freedom in the country. Introduced by the ORF managing director Alexander Wrabetz as a result of strong political pressure, the new draft of the ethical guidelines seeks to ensure the objectivity and credibility of the ORF’s activities on social media. Indeed, ethical guidelines are common practice in media regulatory frameworks across Europe. Their general purpose is already contained in their name: to guide. They mainly contain ethical principles to be followed by journalists when practising their profession. In other words, they serve as the voice of reason, underlining and protecting the professional integrity of journalism.

But the newly drafted ORF Guidelines threaten precisely what their proponents claim to protect: independence and objectivity. As stipulated in the original wording of the Guidelines from 2012, they should be viewed as recommendations and not as commands. Nonetheless, the latest draft released in June 2018 uses a very different tone. The document creates a shadow of hierarchy by forcing every ORF journalist to think twice before they share anything on their social media. First, it specifically stipulates that “public statements and comments in social media should be avoided, which are to be interpreted as approval, rejection or evaluation of utterances, sympathy, antipathy, criticism and ‘polemics’ towards political institutions, their representatives or members.” Every single term used in that sentence, whether it is ‘antipathy’ or ‘polemics,’ is extremely vague at its core. Such vagueness allows any critical personal opinion aimed at the current establishment to be covered, no matter how objective, balanced or well-intended the critique may be.

Second, the Guidelines ask journalists to refrain from “public statements and comments in social media that express a biased, one-sided or partisan attitude, support for such statements and initiatives of third parties and participation in such groups, as far as objectivity, impartiality and independence of the ORF is compromised. The corresponding statements of opinion can be made both by direct statements and indirectly by signs of support / rejection such as likes, dislikes, recommendations, retweets or shares.” Here again, terms such as ‘partisan attitude’ are very problematic. Does criticism of human rights violations, or support for groups fighting climate change, qualify as biased? Under this wording, a chilling effect on the right to freedom of expression is inevitable, as journalists may choose to self-censor in order to avoid difficulties and further insecurity in their workplace. At the same time, securing the neutrality of the country’s main public broadcaster cannot be achieved by excluding the plurality of expressed opinions, especially when the neutrality principle exists precisely to protect that plurality.

Media neutrality is necessary for impartial broadcasting committed to the common good. In other words, it ensures that the media will not be misused for propaganda or other forms of manipulation. For media to remain neutral, the diversity of opinions is therefore absolutely essential, as anything else is simply incompatible with the main principles of journalistic work. The primary duty of the press is to monitor and report on whether the rule of law is intact and fully respected by the elected government. Due to its great importance in preserving democracy, the protection of the free press is enshrined in national constitutions as well as enforced by domestic media laws. Freedom of expression is not only about the right of citizens to write or to say whatever they want; it is mainly about the public’s right to hear and to read what it needs (Joseph Perera & Ors v. Attorney-General). In this vein, the current draft of the Guidelines undermines the core of journalism through its intentionally vague wording and by misusing, or rather twisting, the concept of media neutrality.

Although not a legally binding document, the Guidelines still pose a real threat to democracy. This is a typical example of ethics and soft-law self-regulatory measures becoming a gateway to more restrictive regulation of press freedom and media pluralism. Importantly, the non-binding nature of the Guidelines serves as an excuse for policy makers, who defend its provisions as merely ethical principles for journalists’ conduct rather than legal obligations per se enforced by a state agent. In practice, however, the independent and impartial work of journalists is increasingly jeopardised, as every statement, whether made in their personal or professional capacity, is subjected to much stricter self-censorship in order to avoid further obstacles to their work or even the imposition of ‘ethical’ liability for their conduct. If the current draft is adopted as it stands, it will provide an extra layer of strict control that aims to silence critique and dissent.

From a fundamental rights perspective, the European Court of Human Rights (ECtHR) has stressed on numerous occasions the vital role of the press as a public watchdog (Goodwin v. the United Kingdom). Freedom of the press is instrumental in enabling the public to discover and form opinions about the ideas and attitudes held by their political leaders. At the same time, it provides politicians with the opportunity to react and comment on public opinion. A healthy press freedom is therefore a ‘symptom’ of a functioning democracy. It enables everyone to participate in the free political debate, which is at the very core of the concept of a democratic society (Castells v. Spain). When democracy starts fading away, the weakening of press freedom is the first sign that has to be taken seriously. It is very difficult to justify why restricting journalists’ behaviour, or more precisely, political speech on their private Facebook or Twitter accounts, should be deemed necessary in a democratic society or should pursue any legitimate aim. Constitutional courts that follow and respect the rule of law could never find such a free speech restriction legitimate. It also raises the question of the future independence of the Austrian media, especially when judged against the current government’s ambitious plan to transform the national media landscape.

When the radical populist right Freedom Party (FPÖ) and the conservative ÖVP formed a ruling coalition in 2000, the Austrian government was shunned by European countries and threatened with EU sanctions. But today’s atmosphere in Europe is very different. Authoritarian and populist regimes openly undermining democratic governance are the new normal. Under such circumstances, the human rights of all of us are in danger due to widespread democratic backsliding, present in western countries as much as in the eastern corner of the EU. Without a doubt, journalists and media outlets have a huge responsibility to impartially inform the public on matters of public interest. Ethical Codes of Conduct thus play a crucial role in journalistic work, acknowledging a great responsibility to report accurately while avoiding prejudice or any potential harm to others. However, when journalists’ freedom of expression is violated, the right of all of us to receive and impart information is in danger, and so is democracy. Human rights and ethics are two different things. One cannot be misused to unjustifiably restrict the other.

How Moments of Truth change the way we think about Privacy

Esther Görnemann recently presented her work at the Lab as part of the Privacy & Us doctoral consortium in London. Her work provides an important perspective on the crucial role that the individual experience of Moments of Truth plays in understanding how human beings think about privacy and how, and under which circumstances, they start actively protecting it. Here is a brief overview of her current research as well as a short introductory video.

During preliminary interview sessions, a number of internet and smartphone users talked to me about the surprising experience of realizing that personal information had been collected, processed and applied without their knowledge.
In these interviews and in countless furious online reports, users expressed concern about their device, often stating they felt taken by surprise, patronized or spied upon.

Some examples:

  • In an interview, a 73-year-old man recalled that he was searching for medical treatment of prostate disorders on Google and was immediately confronted with related advertisements on the websites he visited subsequently. Some days later, he also started to receive email spam related to his search. He said “I felt appalled and spied upon” and has ever since begun to consider whether the search he was about to conduct might contain information he would rather keep to himself.

  • A Moment of Truth that made headlines in international news outlets was the story of Danielle from Portland, who in early 2018 contacted a local TV station and reported that her Amazon Echo had recorded a private conversation between her and her husband and had sent it to a random person from the couple’s contact list, who immediately called the couple back to tell them what he had received. The couple turned to Amazon’s customer service, but the company was not immediately able to explain the incident. When she called the TV station, Danielle expressed her feelings: “I felt invaded. A total privacy invasion. I’m never plugging that device in again, because I can’t trust it.” While Amazon later explained the incident, saying the Echo mistakenly picked up several words from the conversation and interpreted them as a series of commands to record and send the audio, Danielle still claims the device had not prompted any confirmation or question.

  • An interview participant recalled how he discovered by chance that his smartphone photo gallery was automatically synchronized with the cloud service Dropbox. He described his reaction with the words “Dropbox automatically uploaded all my pictures in the cloud. It’s like stealing! […] Since then I’m wary. And for sure I will never use Dropbox again.”

Drawing from philosophical and sociological theories, this research project conceptualizes Moments of Truth as the event in which the arrival of new information results in a new interpretation of reality and a fundamental change of perceived alternatives of behavioural responses.

The notion of control or agency is one of several influential factors that mobilize people and is key to understanding reactions to Moments of Truth.

The goal of my research is to construct a model to predict subjects’ affective and behavioural responses to Moments of Truth. A central question is why some people display an increased motivation to protest and claim their rights, convince others, adapt usage patterns and take protective measures. Currently, I am looking at the central role that the perception of illegitimate inequality and the emotional state of anger play in mobilizing people to actively protect their privacy.

https://www.youtube.com/watch?v=jkq5TukhEu4

Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping?

I recently had the pleasure of attending a fantastic seminar on 10 Years of Profiling the European Citizen at Vrije Universiteit Brussel (VUB), organised by Mireille Hildebrandt, Emre Bayamlıoğlu and her team there. As a result of this seminar I was asked to develop a short provocative article to present among the scholars there. As I have received numerous requests for the article over the last few weeks, I decided to publish it here to ensure that it is accessible to a wider audience sooner rather than later. It will be published as part of an edited volume developed from the seminar with Amsterdam University Press later this year. If you have any comments, questions or suggestions, please do not hesitate to contact me: ben.wagner@wu.ac.at.

Ben_Wagner_Ethics as an Escape from Regulation_2018_BW9

Workshop: Algorithmic Management: Designing systems which promote human autonomy

The Privacy and Sustainable Computing Lab at Vienna University of Economics and Business and the Europa-University Viadrina are organising a 2-day workshop on:

Algorithmic Management: Designing systems which promote human autonomy
on 20-21 September 2018 at WU Vienna, Welthandelsplatz 1, 1020 Vienna, Austria

This workshop is part of a wider research project on Algorithmic Management, which studies the structural role of algorithms as forms of management in work environments where automated digital platforms, such as Amazon, Uber or Clickworker, manage the interaction of workers through algorithms. The process of assigning or changing a sequence of individual tasks to be completed is often fully automated. This means that algorithms may partly act like a manager who exercises control over a large number of decentralized workers, as the sketch below illustrates. The goal of our research project is to investigate the interplay of control and autonomy in such a managerial regime, with a specific focus on the food-delivery sector.
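As a rough illustration of what such automated management can look like, here is a minimal sketch added for readers of this announcement; it is not code from the project, and the nearest-courier rule, coordinates and names are assumptions.

```python
# Minimal illustration of an algorithm acting as a "manager": each incoming
# delivery task is assigned to the nearest available worker, fully
# automatically, with no human dispatcher involved. All values are made up.
def assign_task(task_location: tuple, workers: dict) -> str:
    """Pick the available worker closest to the task and mark them as busy."""
    def distance(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    available = {name: loc for name, loc in workers.items() if loc is not None}
    chosen = min(available, key=lambda name: distance(available[name], task_location))
    workers[chosen] = None  # the chosen worker is now busy
    return chosen

couriers = {"courier_a": (0.0, 1.0), "courier_b": (2.0, 2.0), "courier_c": (0.5, 0.2)}
print(assign_task((0.0, 0.0), couriers))  # -> "courier_c"
```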

Here is the current agenda for the workshop:

Further details about event registration and logistics can be found here: https://www.privacylab.at/event/algorithmic-management-designing-systems-which-promote-human-autonomy/ 

Managing security under the GDPR profoundly

An interview with Dr. Alexander Novotny:

The EU General Data Protection Regulation (GDPR) requires organizations to stringently secure personal data. Since penalties under the GDPR loom large, organizations feel uncertain about how to deal with securing personal data processing activities. The Privacy and Sustainable Computing Lab has interviewed the security and privacy expert Dr. Alexander Novotny on how organizations should address security when processing personal data:

Under the GDPR, organizations using personal data will have stringent obligations to secure the processing of personal data. How can organizations meet this challenge?

Organizations’ security obligations when processing personal data are regulated under Article 32 of the EU General Data Protection Regulation. Security is primarily the data controller’s responsibility. The data controller is the organization that determines the purposes and means of the processing of personal data. To ensure appropriate security, controllers and processors of personal data have to take technical and organizational measures, the so-called “TOMs”. Which security measures are appropriate depends on the state of the art and the costs of implementation in relation to the risk. Organizations are only required to implement state-of-the-art technology for securing data processing. Implementing the best available security technologies is not a requirement in most cases, nor is putting in place security technologies that are not yet available on the market or are premature. The nature, scope and context of data processing also need to be taken into account. For processing dozens of IP addresses in an educational context, for example, a different level of protection is adequate than for processing thousands of IP addresses in a healthcare context. When identifying reasonable TOMs, the purposes of processing and the risks for the rights and freedoms of natural persons also need to be considered.

How can the level of risk for the rights and freedoms of natural persons be measured?

The GDPR outlines that the likelihood and the severity of the risk are important factors: the wording in Article 32 of the GDPR points to traditional risk appraisal methods based on probability and impact. These methods are already commonly used in IT security today. Many organizations therefore have classification schemes for likelihood and severity. Often, they categorize these two factors into the classes “low”, “medium” and “high”. Little historical experience regarding the likelihood and severity of security incidents is available. Without such experience, it is very difficult to meaningfully apply rational risk scales, such as scales based on monetary values. ENISA also recommends a similar qualitative risk assessment method in its 2017 handbook on the security of personal data processing. What data controllers especially need to keep in mind is the risk for the data subject in the first place, not the organization’s own risk. Organizations therefore have to take a different viewpoint, in particular organizations that have already done a risk assessment for an ISO 27001 information security management system. These organizations need to extend their risk assessment with the viewpoint of the data subject.
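To make the qualitative appraisal described above a little more tangible, here is a minimal sketch added for illustration (it is not part of the interview); the class labels and the rule for combining them into an overall risk level are assumptions, not something prescribed by the GDPR or ENISA.

```python
# Illustrative sketch only: a qualitative likelihood x severity appraisal,
# taken from the data subject's perspective. The aggregation rule below is
# an assumption for demonstration purposes.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_level(likelihood: str, severity: str) -> str:
    """Combine qualitative likelihood and severity into an overall risk class."""
    score = LEVELS[likelihood] * LEVELS[severity]
    if score >= 6:   # e.g. high/high or high/medium
        return "high"
    if score >= 3:   # e.g. medium/medium or high/low
        return "medium"
    return "low"

# Example: a breach is moderately likely, but its impact on the data
# subjects (e.g. health-related browsing data) would be severe.
print(risk_level("medium", "high"))  # -> "high"
```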

What are these so-called TOMs?

Examples of technical and organizational measures are given in Article 32 of the GDPR. The regulation names pseudonymization and encryption of personal data as well as the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services. Organizations need the ability to restore the availability of and access to personal data in the event of a physical or technical incident. A process for regularly testing and evaluating the effectiveness of technical and organizational measures is also required. Recital 78 of the GDPR refers to additional measures such as internal policies, for instance. What is remarkable here is that TOMs do not only aim to keep personal data confidential and correct. TOMs also target the availability of and access to personal data as well as the resilience of the IT systems used to process personal data. Availability and resilience of IT infrastructure are traditional IT security goals, but from the viewpoint of data protection they have not been given high priority so far. Hence, organizations have to further integrate their data protection efforts with IT security in order to tackle these requirements set out by the GDPR.
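To make the first of these measures concrete, here is a minimal sketch of keyed pseudonymization added to the interview text for illustration; the key handling, field names and example values are assumptions.

```python
# Illustrative sketch of pseudonymization as a TOM: replace a direct
# identifier with a keyed hash (HMAC), so that re-identification requires
# the secret key, which is assumed to be stored separately from the data.
import hmac
import hashlib

SECRET_KEY = b"store-me-separately-from-the-data"  # assumption: kept under strict access control

def pseudonymize(identifier: str) -> str:
    """Return a stable pseudonym for a direct identifier such as an e-mail address."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "alice@example.org", "heart_rate": 72}
record["email"] = pseudonymize(record["email"])
print(record)  # the e-mail address is no longer stored in the clear
```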

How can a controller be sure that the identified and implemented TOMs are actually appropriate?

This is a question that is often asked by organizations complaining that the guidance provided by the GDPR is overly vague and that legal certainty is low. With regard to this question of appropriateness, a clash of cultures is often witnessed: on the one hand, technicians responsible for the implementation of the TOMs and, on the other hand, lawyers keeping an eye on GDPR compliance follow different approaches. Technicians are used to predetermined instructions and requirements. They take a very technological viewpoint and often wish that competent authorities would issue specific, hard-fact lists of TOMs. In contrast, lawyers are used to structurally applying legal criteria for appropriateness and adequacy to real-world cases. Instead of relying on predetermined lists of TOMs, organizations are now required to think in terms of what is best for the data subjects and for themselves when it comes to data security. Of course, predefined lists and templates of TOMs can be helpful to illustrate the state of the art. But organizations are required to make up their own minds about which TOMs are appropriate for them in particular. This is reflected in Article 32 of the GDPR, which states that the nature, scope and context of data processing need to be taken into account to determine appropriate TOMs. To increase legal certainty, organizations are well advised to write down their particular approach to the selection of TOMs. If organizations comprehensively document their risk-based reasoning about which TOMs they implement to address the identified risks, they will likely be on safe legal ground.
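A hypothetical example of what such documented, risk-based reasoning could look like in a structured form; the fields and values below are illustrative assumptions, not a prescribed format.

```python
# Hypothetical example of documenting the risk-based selection of TOMs.
# Field names and example content are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class TomDecision:
    processing_activity: str
    identified_risk: str
    risk_level: str                      # e.g. from a qualitative appraisal as sketched above
    chosen_toms: list = field(default_factory=list)
    reasoning: str = ""

decision = TomDecision(
    processing_activity="Storage of customer IP addresses for fraud detection",
    identified_risk="Unauthorized disclosure allowing tracking of data subjects",
    risk_level="medium",
    chosen_toms=["encryption at rest", "pseudonymization", "access logging"],
    reasoning="State-of-the-art measures proportionate to a medium risk; "
              "best-available but premature technologies were not required.",
)
print(decision)
```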

What should we understand by regularly assessing and evaluating the effectiveness of TOMs?

Practically, this means that controllers need to operate a data protection management system (DMS). Within the scope of such a DMS, regular audits of the effectiveness of the implemented TOMs need to be conducted. Organizations can integrate the DMS into their existing information security management system. With such integration, they can leverage the continual improvement process that is already in place with established management systems. The DMS also supports the required process of regularly testing and evaluating the effectiveness of TOMs.

About the interviewee:

Dr. Alexander Novotny is an information privacy and security specialist. He has been researching privacy and data protection since the EU Commission’s first proposal for the GDPR in 2012. He works as an information security manager for a large international enterprise based in Austria. He holds a certification as a data protection officer, lectures on IoT security and advises EU-funded research and innovation projects on digital security and privacy.

2 days to GDPR: Standards and Regulations will always lag behind Technology – We still need them… A Blog by Axel Polleres

With the European General Data Protection Regulation (GDPR) coming into effect in two days, there is a lot of uncertainty. In fact, many view the now stricter enforcement of data protection and privacy as a late repair of the harm already done, in the context of recent scandals such as the Facebook/Cambridge Analytica breach, which caused a huge discussion about online privacy over the past month, culminating in Mark Zuckerberg’s testimony in front of the Senate.

“I am actually not sure we shouldn’t be regulated” – Mark Zuckerberg in a recent BBC interview.

Like most of us, my first reaction to this statement was that it felt ridiculous: it is in fact already far too late, and an incident such as the Cambridge Analytica scandal was foreseeable (as indicated, for instance, by Tim Berners-Lee’s reaction to his Turing Award back in 2017). So many of us may say or feel that the GDPR is coming too late.

However, regulations and standards have another effect beyond the sheer prevention of such things happening: cleaning up after the mess.

(Source: uploaded by Michael Meding to de.wikipedia)

This is often the role of regulations and, in a similar way, the role of (technology) standards.

Technology standards vs legal regulations – not too different.

Given my own experience of contributing to the standardisation of a Web data query language, SPARQL 1.1, this was very much our task: cleaning up and aligning diverging implementations of needed additional features which had been implemented in different engines to address users’ needs. Work on standards often involves compromises (another parallel to legislation), so whenever I am confronted with this or that not being perfect in the standard we created, that’s normally the only response I have… we’ll have to fix it in the next version of the standard.

Back to the privacy protection regulation: this is also what will need to happen. We now have a standard, call it “GDPR 1.0”, but it will take a while until its implementors, the member states of the EU, have collected enough “implementation experience” to come up with suggestions for improvements.

Over time, hopefully enough such experience will emerge to collect best practices and effective interpretations of the parts of the GDPR that still remain highly vague: take, for instance, what it means that “any information and communication relating to the processing of those personal data be easily accessible and easy to understand” (GDPR, Recital 39).

The EU will need to continue to work towards GDPR 1.1, i.e. to establish best practices and standards that clarify these uncertainties and offer workable, agreed solutions, ideally based on open standards.

Don’t throw out the baby with the bathwater

Yet, there is a risk: voices are already being raised that the GDPR will be impossible to execute in its entirety, individual member states are already trying to implement “softened” interpretations of the GDPR (yes, it is indeed my home country…), and ridiculous business-model ideas such as GDPRShield are mushrooming, e.g. to exclude European customers entirely in order to avoid GDPR compliance.

There are two ways the European Union can deal with this risk:

  • Soften the GDPR or implement it faintheartedly – not a good idea, IMHO, as any loopholes or exceptions around GDPR sanctions will likely put us de facto back into a pre-GDPR state.
  • Stand firmly with the GDPR and strive for full implementation of its principles, while starting to work on GDPR 1.1 in parallel, that is, amending it with best practices and technical standards which make the GDPR work and help companies to implement it.

In our current EU project SPECIAL, which I will also have the opportunity to present again later this year at MyData2018 (in fact, talking about our ideas for standard formats to support GDPR-compliant, interoperable recording of consent and personal data processing), we aim at supporting the latter path. First steps to connect both GDPR legal implementation and work on technical standards towards such a “GDPR 1.1”, supported by standard formats for interoperability and privacy compliance controls, were taken at a recent W3C workshop at my home university in Vienna, hosted by our institute a month ago.

Another example: Net Neutrality

As a side note, earlier in this blog I mentioned the (potentially unintended) detrimental effects that giving up net neutrality could have on democracy and freedom of speech. In my opinion, net neutrality is the next topic we need to think about in terms of regulation in the EU as well; dogmatic rules won’t help. Pure net neutrality is probably no longer feasible; it is a thing of the past, from a time when data traffic was not an issue of necessity. In fact, regulating the distribution of data traffic may be justifiable by commercial (thanks to Steffen Staab for the link) or even by non-commercial interests, for instance optimizing energy consumption: the trade-offs need to be wisely weighed against each other and regulated. But again, throwing out the baby with the bathwater, as may now have happened with the net neutrality repeal in the US, should be avoided.

Javier D. Fernández – Green Big Data

I have a MSc and a PhD degree in Computer Science, and it’s sad (but honest) to say that in all my academic and professional career the word “privacy” was hardly mentioned. We do learn about “security” but as a mere non-functional requirement, as it is called. Don’t get me wrong, I do care about privacy and I envision a future where “ethical systems” are the rule and no longer the exception, but when people suggest, promote or ask for privacy-by-design systems, one should also understand that we engineers (at least my generation) are mostly not yet privacy-by-design educated.

That’s why, caring about privacy, I like so much to read diverse theories and manifestos providing general principles for coming up with ethical, responsible and sustainable designs for our systems, in particular where personal Big Data (and all its variants, e.g. Data Science) is involved. The Copenhagen Letter (promoting open, humanity-centered designs that serve society), the Responsible Data Science principles (fairness, accuracy, confidentiality, and transparency) and the Ethical Design Manifesto (focused on maximizing human rights and human experience and respecting human effort) are good examples, to name but a few.

While acknowledging that these are inspiring works, an engineer might find the aforementioned principles a bit too general to serve as an everyday reference guide for practitioners. In fact, one could argue that they are deliberately open to interpretation, in order to adapt them to each particular use case: they point to the goal(s) and some intermediate stepping stones (e.g. openness or decentralization), while the work of filling in all the gaps is by no means trivial.

Digging a bit to find more fine-grained principles, I thought of the concept of Green Big Data, referring to Big Data made and used in a “green”, healthy fashion, i.e. being human-centered, ethical, sustainable and valuable for society. Interestingly, the closest reference for such a term was a highly cited article from 2003 on “green engineering” [1]. In this article, Anastas and Zimmerman presented 12 principles to serve as a “framework for scientists and engineers to engage in when designing new materials, products, processes, and systems that are benign to human health and the environment”.

Inspired by the 12 principles of green engineering, I started an exercise to map these principles to my idea of Green Big Data. This mapping is by no means complete, and it remains subject to interpretation and discussion. Ben Wagner and my colleagues at the Privacy & Sustainable Computing Lab provided valuable feedback and encouraged me to share these principles with the community in order to start a discussion openly and widely. As an example, Axel Polleres already pointed out that “green” is interpreted here as mostly covering the privacy-aware aspect of sustainable computing, but other concepts such as “transparency-aware” (make data easy to consume) or “environmentally-aware” (avoid wasting energy through people running the same computations over and over again) could be further developed.

You can find the Green Big Data principles below; looking forward to your thoughts!

The 12 Principles of Green Engineering, mapped to the 12 Principles of Green Big Data, together with related topics:

Principle 1
Green Engineering: Designers need to strive to ensure that all material and energy inputs and outputs are as inherently non-hazardous as possible.
Green Big Data: Big Data inputs, outputs and algorithms should be designed to minimize exposing persons to risk.
Related topics: Security, privacy, data leaks, fairness, confidentiality, human-centric

Principle 2
Green Engineering: It is better to prevent waste than to treat or clean up waste after it is formed.
Green Big Data: Design proactive strategies to minimize, prevent, detect and contain personal data leaks and misuse.
Related topics: Security, privacy, accountability, transparency

Principle 3
Green Engineering: Separation and purification operations should be designed to minimize energy consumption and materials use.
Green Big Data: Design distributed and energy-efficient systems and algorithms that require as little personal data as possible, favoring anonymous and personal-independent processing.
Related topics: Distribution, anonymity, sustainability

Principle 4
Green Engineering: Products, processes, and systems should be designed to maximize mass, energy, space, and time efficiency.
Green Big Data: Use the full capabilities of existing resources and monitor that it serves the needs of individuals and the society in general.
Related topics: Sustainability, human-centric, societal challenges, accuracy

Principle 5
Green Engineering: Products, processes, and systems should be “output pulled” rather than “input pushed” through the use of energy and materials.
Green Big Data: Design systems and algorithms to be versatile, flexible and extensible, independently of the scale of the personal data input.
Related topics: Sustainability, scalability

Principle 6
Green Engineering: Embedded entropy and complexity must be viewed as an investment when making design choices on recycle, reuse, or beneficial disposition.
Green Big Data: Treat personal data as a first-class but hazardous citizen, with extreme precautions in third-party personal data reuse, sharing and disposal.
Related topics: Privacy, confidentiality, human-centric

Principle 7
Green Engineering: Targeted durability, not immortality, should be a design goal.
Green Big Data: Define the “intended lifespan” of the system, algorithms and involved data, and design them to be transparent by subjects, who control their data.
Related topics: Transparency, openness, right to amend and to be forgotten, human-centric

Principle 8
Green Engineering: Design for unnecessary capacity or capability (e.g., “one size fits all”) solutions should be considered a design flaw.
Green Big Data: Analyze the expected system/algorithm load and design it to meet the needs and minimize the excess.
Related topics: Sustainability, scalability, data leaks

Principle 9
Green Engineering: Material diversity in multicomponent products should be minimized to promote disassembly and value retention.
Green Big Data: Data and system integration must be carefully designed to avoid further personal data risks.
Related topics: Integration, confidentiality, cross-correlation of personal data

Principle 10
Green Engineering: Design of products, processes, and systems must include integration and interconnectivity with available energy and materials flows.
Green Big Data: Design open and interoperable systems to leverage the full potential of existing systems and data, while maximizing transparency for data subjects.
Related topics: Integration, openness, interoperability, transparency

Principle 11
Green Engineering: Products, processes, and systems should be designed for performance in a commercial “afterlife”.
Green Big Data: Design modularly for the potential system and data obsolescence, maximizing reuse.
Related topics: Sustainability, obsolescence

Principle 12
Green Engineering: Material and energy inputs should be renewable rather than depleting.
Green Big Data: Prefer data, systems and algorithms that are open, well-maintained and sustainable in the long term.
Related topics: Integration, openness, interoperability, sustainability

[1] Anastas, P. & Zimmerman, J. 2003. Design through the 12 principles of green engineering. Environmental Science and Technology 37(5):94A–101A

Axel Polleres: What is “Sustainable Computing”?

Blog post written by Axel Polleres and originally posted on http://doingthingswithdata.wordpress.com/

A while ago, together with my colleagues Sarah Spiekermann-Hoff, Sabrina Kirrane, and Ben Wagner (who joined in a bit later), we founded a joint research lab to foster interdisciplinary discussions on how information systems can be built in a private, secure, ethical, value-driven, and eventually more human-centric manner.

We called this lab the Privacy & Sustainable Computing Lab, a platform to jointly promote and discuss our research and views and to provide a think-tank, also open to others, on how these goals can be achieved. Since then, we have had many at times heated but first and foremost very rewarding discussions, creating mutual understanding between researchers coming from engineering, AI, social science, or legal backgrounds on how to address the challenges around digitization.

Not surprisingly, the first (and maybe still unresolved) discussion was about how to name the lab. Back then, our research was very much focused on privacy, but we all felt that the topic of societal challenges in the context of the digital age needed to be viewed more broadly. Consequently, one of the first suggestions floating around was “Privacy-aware and Sustainable Computing Lab”, emphasizing privacy-awareness as one of the main pillars, but aiming for a broader definition of sustainable computing, which we later shortened to just “Privacy & Sustainable Computing Lab” (merely for length reasons, if I remember correctly; my co-founders may correct me if I am wrong 😉 ).

Towards defining Sustainable Computing

On coming up with a joint definition of the term “Sustainable Computing” back then, I answered in an internal e-mail thread that

Sustainable Computing for me encompasses obviously: 

  1. human-friendly 
  2. ecologically-friendly
  3. societally friendly 

aspects of [the design and usage of] Computing and Information Systems. In fact, in my personal understanding these three aspects are – in some contexts – potentially conflicting, but resolving and discussing these conflicts is one of the reasons why we founded this lab in the first place.

Conflicts add Value(s)

Conflicts can arise for instance from individual well-being being weighed higher than ecologic impacts (or vice versa), or likewise in how much a society as a whole needs to respect and protect the individual’s rights and needs, and in which cases (if at all ever) the common well-being should be put above those individual rights.

These are fundamental questions in neither of which I would by any means consider myself an expert, but where obviously, if you think them into the design of systems or into a technology research agenda (which would be more my home turf), it both adds value and makes us discuss values as such. That is, making value conflicts explicit and resolving conflicts about the understanding and importance of these values is a necessary part of Sustainable Computing. This is why Sarah suggested the addition of

4. value-based

computing, as part of the definition.

Sabrina added that, although sustainable computing is not mentioned in it explicitly, the notion of Sustainable Computing resonates well with what was postulated in the Copenhagen Letter.

Overall, we haven’t finished the discussion about a crisp definition of what Sustainable Computing is (which is maybe why you don’t find one yet on our website), but for me this is actually ok: to keep this definition evolving and agile, to stay ready for discussions about it, and to keep learning from each other. We also discussed sustainable computing quite extensively in a mission workshop in December 2017, trying to better define what sustainable computing is and how it influences our research.

What I mainly learned is that we as technology experts play a crucial role and carry responsibility in defining Sustainable Computing: by being able to explain the limitations of technology, but also by acting as advocates of the benefits of technologies, in spite of risks and justified skepticism, and by helping to develop technologies that minimize these risks.

Some Examples

Some examples of what, for me, falls under Sustainable Computing:

  • Government Transparency through Open Data, and making such Open Data easily accessible to citizens – we try to get closer to this vision in our national research project CommuniData
  • Building technical infrastructures to support transparency in personal data processing for data subjects, but also to help companies to fulfill the respective requirements in terms of legal regulations such as the GDPR – we are working on such an infrastructure in our EU H2020 project SPECIAL
  • Building standard model processes for value-based, ethical system design, as the IEEE P7000 group does it (with involvement of my colleague Sarah Spiekermann).
  • Thinking about how AI can support ethics (instead of fearmongering the risks of AI) – we will shortly publish a special issue on some examples in a forthcoming volume of ACM Transactions on Internet Technologies (TOIT)
  • Studying phenomena and social behaviours online with the purpose of detecting and pinpointing biases as for example our colleagues at the Complexity Science Hub Vienna do in their work on Computational Social Sciences, understanding Systemic Risks and Socio-Economic Phenomena

Many more such examples are hopefully coming out of our lab through cross-fertilizing, interdisciplinary research and discussions in the years to come…

Let’s Switch! Some Simple Steps for Privacy-Activism on the Ground

by Sarah Spiekermann, Professor of Business Informatics & Author,

Vienna University of Economics and Business, Austria

Being an “activist” sounds like the next big hack in order to change society for the better; important work done by really smart and courageous people. But I wonder whether these high standards for activism suffice to really change things on the ground. I think we need more: We need activism on the ground.

What is activism on the ground?

By activism on the ground I mean that all of us need to be involved: anyone who consumes products and services. Anyone who currently does not engage in any of those “rational choices” that economists ascribe to us. Let’s become rational! Me, you, we all can become activists on the ground and make markets move OUR way. How? By switching! Switching away from the products and services that we currently buy and use where we feel that the companies who provide us with these services don’t deserve our money or attention or – most importantly – any information about our private lives.

For the digital service world, I have been thinking about how to switch for quite some time. And in November last year I started a project with my Master Class in Privacy & Security at Vienna University of Economics and Business: we went out and tested the market-leading Internet services that most of us use. We looked into their privacy policies and checked to what extent they give us fair control over our data or – in contrast – hide important information from us. We benchmarked the market leaders against their privacy-friendly competitors. We looked at their privacy defaults and at the information and decision control they give us over our data, to check whether switching to a privacy-friendly alternative is a realistic option. We also compared all services’ user experience (nothing is worse than functional but unusable security…). And guess what? Ethical machines are indeed out there.

So why not switch?

Here is the free benchmark study for download that gives you the overview.

Switching your messenger services

For the messenger world, I can personally recommend Signal, which works just as well as WhatsApp does; only that it is blue instead of green. I actually think that WhatsApp does not deserve to be green, because the company shares our contact network information with anyone interested in buying it. My students found that Signal’s privacy design is not quite as good as Wickr Me’s. I must admit that I had some trouble using Signal on my new GSMK Cryptophone, where I obviously reject the idea of installing Google Play; but for normal phones Signal works just fine.

Switching your social network

When it comes to social networks, I quit Facebook long ago. I thought the content got a bit boring in these past 4-5 years as people have started to become more cautious in posting their really interesting stuff. I am on Twitter and find it really cool, but the company’s privacy settings and controls are not good. We did not test for Twitter addictiveness …

I signed up with diaspora*, which I have known for a long time, because its architecture and early set-up were done by colleagues in the academic community. It builds on a peer-to-peer infrastructure and hence possesses the architecture of choice for a privacy-friendly social network. Not surprisingly, my students found it really good in terms of privacy. I am not fully done with testing it myself. I certainly hate the name “diaspora”, which is associated with displacement from your homeland. The name signals too much negativity for a service that is actually meant to be a safe haven. But other than that I think we should support it more. Interestingly enough, my students also benchmarked Ello, which is really a social network for artists by now. But as Joseph Beuys famously proclaimed, “Everyone is an artist”, right? I really support this idea! And since their privacy settings are ok (just minor default issues…), this is also an alternative for creative social nomads to start afresh.

Switching your maps service

HERE WeGo is my absolute favorite when it comes to a location service. And this bias has a LONG history, because I already knew the guys who built the service in its earliest versions back then in Berlin (at the time the company was called Gate5). Many of this service’s founding fathers were also members of the Chaos Computer Club. And guess what: when hackers build for themselves, they build really well.

For good reasons my students argue that OSMAND is a great company as well. Especially their decisional data control seems awesome. No matter what you do: Don’t waste your time throwing your location data into the capitalist hands of Google and Apple. Get rid of them! And Maps.me and Waze are not any better according to our benchmark. Location services that don’t get privacy right are the worst we can carry around with us, because letting anyone know where we are at any point in time is really stupid. If you don’t switch for the sake of privacy, switch for the sake of activism.

Switching E-Mail services

I remember when a few of my friends started to be beta-users of Gmail. Everyone wanted to have an account. But ever since Google decided not only to scan all our e-mails for advertising purposes but also to combine all this knowledge with everything else we do with them (including search, YouTube, etc.), I have turned away from the company. I do not even search with Google anymore, but use Startpage as a very good alternative.

That said, Gmail is really not the only online mail provider that scans all you write and exchange with others. As soon as you handle your e-mail in the cloud with free providers, you must more or less expect that this is the case. My students therefore recommend switching to Runbox. It is a pay-for e-mail service, but the price is really affordable, starting at € 1,35 per month for the smallest package and staying below € 5 for a really comfortable one. Also: Runbox is a hydropowered e-mail service, so you also do something good for the environment by supporting them. An alternative to Runbox is Tutanota. Its usability was rated a bit weaker in comparison to Runbox, but it is available for free.

Switching Calendar Systems

Calendars are, next to our physical locations and contact data, an important service to care about when it comes to privacy. After all, the calendar tells whether you are at home or not at a certain time. Just imagine an online calendar was hacked and your home broken into while you are not there. These fears were pretty evident in the class discussions I had with my students who created the benchmark study, and we therefore compared calendar apps as well. All the big service providers are really not what you want to use. Simple came up as the service of choice you can use on your phone, at least if you have an Android operating system. If you do not have the calendar on your phone, or no Android, Fruux is the alternative of choice for you.

In conclusion, there are alternatives available and you can make meaningful choices about your privacy. The question is now, will you be willing to do so?

Consent Request

Olha, would you be so kind and introduce yourself and your project?

My name is Olha Drozd. I am a project-related research associate at the Institute of Management Information Systems, working on the SPECIAL (Scalable Policy-aware Linked Data Architecture For Privacy, Transparency and Compliance) project, a Research and Innovation Action funded under the H2020-ICT-2016-1 Big Data PPP call (http://specialprivacy.eu/). At the moment, together with my colleagues, I am working on the development of the user interface (UI) for the consent request that will be integrated into the privacy dashboard.

Would you please explain the privacy dashboard?

With the help of the privacy dashboard, users would be able to access information about what data is or was processed about them, what the purpose of the data processing is or was, and what data processors are or were involved. Users would also be able to request correction and erasure of the data, review the consent they gave for the data processing, and withdraw that consent.

We have two ideas of how this dashboard could be implemented:

  1. Every company could have their own privacy dashboard installed on their infrastructure.
  2. The privacy dashboard could be a trusted intermediary between a company and a user. In that case we would have different companies that are represented in a single dashboard.

As I mentioned in the beginning, I am concentrating on the development of different versions of the UI for the consent request that could be integrated into the dashboard. Our plan is to test multiple UIs with the help of user studies to identify which UIs are better suited to different contexts. At the moment we are planning to develop two UIs for the consent request.

Olha, would you please tell us more about the consent request?

Before a person starts using an online service he/she should be informed about:

  • What data is processed by the service?
  • How is the data processed?
  • What is the purpose for the processing?
  • Is the data shared and with whom?
  • How is the data stored?

All this information is presented in a consent request, because the user not only has to be informed but also has to give his/her consent to the processing of his/her data. We are now aiming to create a dynamic consent request, so that users have flexibility and more control over giving consent compared to the all-or-nothing approach used by companies today. For example, if a person wants to use a wearable health-tracking device (e.g. a Fitbit watch) but does not want an overview of all-day heart rate statistics, just activity heart rate, then he/she could allow collection and processing of the data solely for the purpose of displaying activity heart rate. It should also be possible to show the user only the information relevant to the specific situation. In order to ensure that the user is not overburdened with consent requests, we are planning to group similar requests into categories and ask for consent once per category. Additionally, it should be possible to adjust or revoke the consent at any time.
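To give a rough idea of the kind of fine-grained, revocable consent described above, here is a small sketch added to this interview; the field names, data categories and purposes are illustrative assumptions and not the actual SPECIAL vocabulary or UI data model.

```python
# Rough sketch of a dynamic, per-purpose consent record that can be
# adjusted or revoked at any time. All fields and values are illustrative.
from datetime import datetime, timezone

consent_record = {
    "data_subject": "user-123",
    "service": "wearable-health-tracker",
    "given_at": datetime.now(timezone.utc).isoformat(),
    "choices": [
        {"data_category": "heart-rate", "purpose": "display-activity-heart-rate", "allowed": True},
        {"data_category": "heart-rate", "purpose": "all-day-statistics", "allowed": False},
        {"data_category": "location", "purpose": "route-tracking", "allowed": False},
    ],
}

def revoke(record: dict, data_category: str, purpose: str) -> None:
    """Withdraw a previously given consent choice at any time."""
    for choice in record["choices"]:
        if choice["data_category"] == data_category and choice["purpose"] == purpose:
            choice["allowed"] = False
            choice["revoked_at"] = datetime.now(timezone.utc).isoformat()

revoke(consent_record, "heart-rate", "display-activity-heart-rate")
```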

At the moment, the main issue for the development of the consent request is the amount of information that should be presented to and digested by a user. The General Data Protection Regulation (GDPR) requires that users be presented with every detail. For example, not just the company or the department that processes the information – users should be able to drill down through the information. In the graph below you can see an overview of the data that should be shown to users in our small exemplifying use case scenario, in which a person uses a health-tracking wearable appliance [1]. You can see how much information users have to digest even in this small use case. Maybe for some people this detailed information could be interesting and useful, but if we consider the general public, it is known that people want to use the device or service immediately and not spend an hour reading and selecting which categories of data they allow to be processed for which purposes. In our user studies we want to test what will happen if we give users all this information.

Olha, you have mentioned that you are planning to develop two UIs for the consent request. Would you explain the differences between the two?

One is more technical and innovative (in a graph form) and the other one is more traditional (with tabs, like in a browser). We assume that the more traditional UI might work well with older adults and with people who are not so flexible in adapting to change, new styles and new UIs. And the more innovative one could be more popular with young people.

[1] Bonatti P., Kirrane S., Polleres A., Wenning R. (2017) Transparent Personal Data Processing: The Road Ahead. In: Tonetta S., Schoitsch E., Bitsch F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2017. Lecture Notes in Computer Science, vol 10489. Springer, Cham