Opinion

The Digital Social Contract and e-Legitimacy

New institutions are needed to manage both the potential and risks of digital technologies that will reshape our societies and how they are governed.

Privacy

Date Posted: 17 Jan 2018

Issue 18, 30 Jan 2018

The technologies underlying the so-called “Fourth Industrial Revolution” are altering the relations between states and their populations, as well as the role of the civil service in mediating change.

Better policy requires improved evidence: machine-based learning and analysis1 will offer deep insights into both collective and individual human behaviour, supplanting humans, who suffer from bounded rationality and cognitive biases, as the primary means of collecting, processing, and interpreting information. Digital public empowerment, big data and artificial intelligence (AI) will challenge every government with an unprecedented degree of cognitive dissonance. This will reshape how political, social and economic incentives, interests and ideas are understood, negotiated, and contested.

Profound new opportunities for governments to engage more effectively with citizens are emerging. At the same time, anxieties arising from this impending change are on a scale that is hard to forecast, as is the likely impact on deeply cherished social values.


The Effective e-State

For countries to thrive in the future, adopting new technology is not enough; institutional underpinnings will remain critical. All states gather data on their citizens, but how that data is used will depend on the quality of governance. Public organisations will need to manage the heightened risks to the privacy and security of citizen data created by the collection, analysis, and use of big data.



States are struggling to respond. The methods used to control national territory cannot police a non-physical space like the Internet. The Internet Corporation for Assigned Names and Numbers (ICANN), which coordinates the Internet’s naming and addressing systems, seeks to preserve its “operational stability, reliability, security and global interoperability” in the face of threats to the ‘world-wide’ ideal from government censorship.2 Unauthorised access to digital systems, whether for data manipulation, forgery or theft, can originate anywhere in the world. In response, effective democratic states are evolving a digital, non-territorial fourth dimension.

The political imperative of privacy will increasingly place public administration in tension between joined-up knowledge and confidentiality. Government agencies have devoted much attention to overcoming silos and creating whole-of-government approaches. For such collaboration to work, participating agencies must have sufficient access to each other’s data. At the same time, the state is trying to enforce personal privacy protection by restricting access in compliance with current fragmented mandates. Privacy, big data and AI lie at the shifting and contested intersection of commercial profit, public policy, and cultural attitudes.


The Privacy Paradox

AI makes it easier than ever before to learn more about individuals. Recent research suggests that AI can already guess sexual orientation based on photographs of faces more accurately than humans are able to do. AI could therefore be used to classify people without their consent, and perhaps to identify other possible links between publicly disclosed information and phenomena such as political views, psychological conditions or personality traits.
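
To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python, not drawn from the research cited above: it trains an off-the-shelf scikit-learn classifier on synthetic data standing in for publicly disclosed signals (pages “liked”, word choices, posting times) and shows how readily a correlated sensitive trait can be inferred without the individuals’ consent. All names and numbers are assumptions for illustration only.

```python
# Hypothetical sketch: inferring a sensitive trait from publicly disclosed
# signals. The data is synthetic; no real individuals are involved.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row is a person and each column a publicly visible signal.
# The sensitive label is only weakly correlated with those signals --
# enough, however, for a simple model to exploit.
n_people, n_signals = 5_000, 20
X = rng.normal(size=(n_people, n_signals))
hidden_weights = rng.normal(size=n_signals)
y = (X @ hidden_weights + rng.normal(scale=2.0, size=n_people)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Even this commodity classifier recovers the sensitive trait far better
# than chance, without any consent to that inference being given.
print(f"accuracy on unseen people: {model.score(X_test, y_test):.2f}")
```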

New technologies highlight that cultural attitudes to privacy vary and can often be contradictory. Privacy, as the protection of personally-identifiable data according to fair, moral, legal, and ethical standards, is a social construct.

Governments are being exhorted to deliver better-targeted services to citizens. But this use of big data arouses concerns about privacy and ethics. For instance, the use of AI in police profiling to predict people’s behaviour risks reinforcing stereotypes and social exclusion, subverting individual choice and equal opportunities.

People can be cautious about their personal information being collected and stored by government agencies or corporations. Yet they may readily reveal the same intimate details of their lives in social media posts, blogs, and profiles.

In this, the individual exercises personal choice. However, people are increasingly disclosing, and are required to disclose, personal information over the Internet in order to participate in the modern digital society and economy. Privacy is becoming more complex, negotiated and contested, and is already losing out to technologies that make life easier: smartphones that record our every move, search engines that note our every interest. Citizens collude with this algorithmically driven trend, wanting better customer satisfaction and user experience. Fluid identities in an era of constant technological change have rendered the traditional “right to privacy” an anachronism.

Citizen trust is frequently tied to the issue of privacy. Regulators are urged to implement stringent data protection to ensure that data is only used for the purpose for which it is collected (the “purpose limitation” principle of contemporary data protection rules). Yet the traditional concept of privacy is increasingly outdated. The rights-based approach to the protection of privacy and personal data assumes that the state, through control over its territories, is able to fulfil its role as duty-bearer. The digital world is proving more difficult to fully police. Data can be easily moved across borders, stolen, or recorded without consent. The “Dark Web” is awash with criminal activity, including the sale of stolen personal data.


Pressures on privacy in the era of digital governance:

  • new technologies, including big data and AI
  • whole-of-government data sharing
  • social media: a spurious sense of intimacy and the risk of bias, misinformation and fake news
  • the digital economy
  • hacking, leaks and fraud, facilitated by inter-connectivity and inter-operability

Digital ID systems: attitudes vary

Privacy worries over national ID systems highlight another stark cultural divide.


Tech solutions to privacy concerns

It is, of course, possible to strengthen privacy protection by masking or anonymising the data collected so that it does not reveal specific information about an individual.
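
A minimal sketch of that idea, using only the Python standard library and hypothetical field names, is shown below. It replaces a direct identifier with a keyed hash (pseudonymisation) and coarsens quasi-identifiers such as age and postcode before records are shared; note that this is pseudonymisation rather than full anonymisation, since the remaining fields may still permit re-identification through linkage.

```python
# Illustrative sketch only; field names, salt and record layout are hypothetical.
import hashlib
import hmac

SECRET_SALT = b"rotate-and-store-this-separately"  # hypothetical key, kept apart from the data

def pseudonymise(national_id: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).

    The same person always maps to the same token, so records can still be
    linked for analysis, but the token cannot be reversed without the salt.
    """
    return hmac.new(SECRET_SALT, national_id.encode(), hashlib.sha256).hexdigest()

def mask_record(record: dict) -> dict:
    """Drop or coarsen fields that identify an individual before sharing."""
    return {
        "person_token": pseudonymise(record["national_id"]),
        "age_band": f"{(record['age'] // 10) * 10}s",   # 37 -> "30s"
        "region": record["postcode"][:2],               # keep only the broad area
        "service_used": record["service_used"],         # non-identifying field
    }

if __name__ == "__main__":
    raw = {"national_id": "S1234567A", "age": 37,
           "postcode": "521234", "service_used": "housing-grant"}
    print(mask_record(raw))
```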


Towards a Digital Social Contract

Sir Tim Berners-Lee, inventor of the World Wide Web, has called for a digital Magna Carta, and in the US, similar suggestions have been put forward for a digital Bill of Rights. This would constitute a social contract and a basis of consent on which a state can build its e-legitimacy.3 Such digital constitutions would set out the fundamental principles governing the Internet, the powers of governments to control it and tax e-commerce, and the rights of e-citizens to communicate freely. In the face of cyberattacks, for instance, having citizens’ prior consent to digital defences, which may involve curtailing or suspending e-rights such as digital privacy, becomes all the more important.

Research suggests that trust in government and state legitimacy are not principally created by democracy, the rule of law, or the efficiency and effectiveness of government. Instead, trust and legitimacy are the outcomes of “the impartiality of institutions that exercise government authority.”4 If impartial, and not merely effective, public administration builds trust between the state and citizenry and stimulates markets, the implication is that a digital social contract should be articulated. It would be based on negotiation, with citizens as active partners, over shared e-democratic principles such as sustainability, accountability and openness. Leaders will need to articulate and defend the values of digital society, which are currently poorly formulated by the state. Some countries have made headway: Singapore’s “Smart Nation” approach articulates a vision of people “empowered by technology to lead meaningful and fulfilled lives”,5 while e-Estonia promotes a global vision for digital identities that complement the analogue nation.



Should there be a digital social contract?

In the 1950s, following US President Eisenhower’s “Atoms for Peace” address to the General Assembly of the United Nations, the International Atomic Energy Agency was set up to allay fears while harnessing the huge potential generated by civilian nuclear technology.


A different view of privacy: Social Obligation?

Whereas most countries enforce strict privacy over tax returns, Norway, Sweden and Finland take the opposite approach: individual tax records are made publicly available, treating transparency about what each person earns and pays as a social obligation rather than a private matter.


Conclusion

As machine learning advances and proliferates, an increased focus on privacy and on tools to prevent its misuse will prove futile. Laws and technological protection of privacy will not stop the further erosion of privacy. Rather than reacting with anxiety, citizens should recognise the inevitable and embrace it. Putting complete faith in convoluted but fallible legislation and regulation of the digital “no man’s land” seems unwise. Governments and citizens need to proceed by ensuring that the post-privacy e-state is a tolerant place, buttressed by effective institutions that build citizens’ trust.

The effective use of new technology ultimately depends on effective citizens, officials and entrepreneurs. To avoid the exaggerated expectations and excessive disillusionment over disruptive technology, techno-literacy will prove a key skill for the citizen, politician and official alike. The ultimate purpose of technology is not to build smart cities or smart nations, but to foster smart people—an educated citizenry that cherishes a flourishing society and vibrant economy driven by such attributes as creativity, inclusion, gender equity, humility, intellectual curiosity—and empathy for all intelligent life forms, human or robotic. 


This paper is based on personal reflections from the Disruptive Technologies and Public Service Conference that GCPSE organised in September 2017, and benefited from comments provided by Peter Lovelock, Jane Thomason, Graham Teskey, and Anneke Schmider. The views expressed here are the author’s own and do not necessarily represent the views of UNDP.



ABOUT THE AUTHOR

Max Everest-Phillips is Director of the United Nations Development Programme’s (UNDP) Global Centre for Public Service Excellence (GCPSE) in Singapore.


NOTES

  1. William D. Eggers, David Schatsky and Peter Viechnicki, “How Artificial Intelligence Could Transform Government”, Deloitte Insights, April 26, 2017, accessed October 7, 2017, https://dupress.deloitte.com/dup-us-en/focus/cognitive-technologies/artificial-intelligence-government-summary.html.
  2. ICANN, “Security and Stability”, accessed October 7, 2017, https://www.icann.org/resources/pages/security-stability-2013-06-14-en.
  3. Estonia has already created e-residency. The e-resident’s smart card allows non-Estonians access to government services such as company formation, banking, payment processing, and taxation. It is aimed at attracting the digitally-independent.
  4. Bo Rothstein and Jan Teorell, “What Is Quality of Government? A Theory of Impartial Government Institutions”, Governance 21, no. 2 (April 2008): 165–190.
  5. Smart Nation Singapore, “Smart Nation”, accessed October 7, 2017, https://www.smartnation.sg.
