Vincent Homburg: The responsibility challenge for the digital age
New developments in artificial intelligence will put digital societies to the test. The good news for Estonia? It has the potential to become the world's most responsible digital society — if it chooses to blend can-do pragmatism with responsible innovation, writes Vincent Homburg, ERA chair in e-governance and digital public services at the University of Tartu (TÜ).
If one asks people outside Estonia what they associate Estonia with, chances are that they will mention ubiquitous electronic services and digital governance.
Estonia has made remarkable progress with embracing digital technologies as a core component of the way governments and citizens interact. In the most recent national elections, more than half of voters cast their vote online. The e-Estonia Briefing Center, conveniently located just steps away from Tallinn Airport, welcomes delegations from around the world on a nearly continual basis, immersing government representatives and businesspeople into a delightful world of rogue digital transformation. Clearly, this country has not forgotten to meticulously brand itself as the world's most advanced digital society, and this message has been well received in the rest of the world.
It is highly unlikely that developments will stop here and now. Digital aficionados in government, the ICT industry and academia are rather keen to consider proactive public services that should make citizens' lives easier without any need for beneficiaries to act.
Most likely, the next phase in the digital transformation of government and public service delivery will be the application of artificial intelligence (AI) in all shapes and sizes. In fact, dozens of such applications are already in use in Estonia's public service delivery apparatus — ranging from applications that signal whether police officers should be tasked with traffic regulation at specific locations during specific times of day to algorithms that match job seekers to the right job opportunities. One tantalizing application currently under development is Bürokratt, a virtual assistant that allows citizens to interact with the government via speech, text or sign language without having to specify, or even know, which public service provider to contact.
The darker side of AI
Estonia is not the only country in the world in which governments employ AI to make life easier and allow for the more efficient delivery of public services. And, perhaps sadly, not all experiences necessarily live up to the optimistic expectations one may have of the technology.
A brief look at three recent examples will illustrate this.
The first example relates to a practice with which we as educators are very familiar: exams.
In 2020, teachers and lecturers all over the world were forced to consider how to organize exams during lockdowns. One solution cropped up in the form of proctoring software that uses one or two cameras to monitor students' facial expressions, eye movements, keystrokes and background sounds while they take their exams on computers in remote locations, flagging possible cheating and fraud. This technological solution seemed to provide an acceptable fix — until students of color worldwide noticed that they had difficulties logging into the proctoring software and that, once they did manage to log in, their facial expressions were more frequently flagged by the software as suspicious. Evidently, the software had more trouble monitoring students of color than students with lighter skin.
A second example relates to Robert Julian-Borchak Williams' experiences.
In January 2020, Williams, a Black American man, was contacted by the Detroit Police Department and told to come down to the station to be arrested. Believing he had done nothing wrong, he ignored the request and later drove home to the suburbs, where police arrested him in front of his house. Williams was accused of involvement in a shoplifting case that had been captured on security cameras. His arrest later turned out to be the result of a flawed match from a facial recognition algorithm. Subsequent investigations revealed that the facial recognition technology worked relatively well on white men, but its results were less accurate for other demographics, in part because of a lack of diversity in the images used to develop the underlying databases.
A third example is the use of algorithmic governance in the detection of fraud with childcare benefits schemes in the Netherlands.
In the Netherlands, the Tax and Customs Administration is responsible for the implementation of, among other public services, childcare benefit schemes that provide financial assistance to parents. Following revelations in the press about fraud cases involving Bulgarian families who moved to the Netherlands, applied for benefits and immediately moved away again, it was decided to implement risk detection algorithms that attach risk scores to individuals applying for childcare benefits. The first issue with the machine-learning algorithm was that it relentlessly flagged individuals who had made even the tiniest mistakes in their applications. A second, arguably bigger issue was that the algorithm heuristically inferred that non-Dutch welfare recipients were more prone to fraud. Later investigations revealed that this inference was mainly due to fraud cases that had been added to the algorithm's datasets manually, arbitrarily and piecemeal. Not only did the application of machine learning in fraud detection amplify the initial flaws in the datasets the algorithm was working with; welfare recipients the algorithm predicted to be potential fraudsters also had their benefits discontinued without any ex-post assessment by human tax officials.
These examples indicate that there is much more to tell about algorithmic governance than the perhaps naively optimistic story that technology — and artificial intelligence in particular — can be used to make citizens' lives easier and transactions with the government more efficient.
Public service delivery is not only about nearly effortlessly paying taxes and swiftly receiving licenses or benefits that have been requested by individual citizens. Public service delivery is also about fighting fraud in welfare schemes that have solidarity as a core value; it is about issuing certificates that signal an individual's level of competence as the basis for their future career; about ensuring that individuals who engage in deviant behaviors are caught and their behaviors corrected — and that those who live by the law are exempted from sanctions.
Public services may impact citizens' negative freedoms and touch upon fairness, justice and equality. Sometimes, in the way we discuss and think about electronic public services, we may overemphasize the "electronic" element of ways in which governments and citizens interact and understate the element of "publicness" of interactions between governments and citizens. In other words: electronic public service delivery, and algorithmic governance in particular, is much more "political" than apparent at first sight.
Vigilance, skepticism as guiding principles
Machine learning algorithms — while they are being designed, and certainly once in operation — pose intriguing questions on how the core democratic values of fairness, equality and justice should be addressed in digital societies the world over, including in Estonia.
Since 2016, these questions have prompted the European Parliament to discuss European artificial intelligence policies. The European Commission has been developing requirements for the admission of artificial intelligence applications to the European internal market, treating AI applications in the same way as regular consumer products and food products. The Council of Europe has chosen a more fundamental perspective and is now drafting a convention on the protection of human rights, democracy and the rule of law in relation to the application of AI. Some of these initiatives are already binding, and others soon will be; together they demand more careful consideration of which AI applications are deemed acceptable and admissible, and which should be restricted or even banned. AI will most likely be subject to additional regulation with respect to data quality requirements, as well as norms requiring that automated critical decisions be assessed by humans.
In short, vigilance and skepticism will be the guiding principles for assessing acceptable uses of artificial intelligence.
Plea for responsible pragmatism
Estonia has gained a reputation for its pragmatic approaches to developing digital innovations. It is tempting to dismiss emerging AI skepticism and regulatory initiatives as being at odds with Estonia's digital transformation achievements and its brand image of being the world's most advanced digital society. There is, however, ample reason to be optimistic.
A blend of can-do pragmatism and the acknowledgement of the politics of algorithmic governance can showcase what daring yet responsible AI looks like and provide a realistic and beckoning perspective on a truly democratic information society. For such responsible pragmatism to emerge, there are a couple of key things of which to take note.
The first is to take citizens' expectations of and experiences with AI seriously — beyond merely acknowledging them. And in all fairness, in Estonia there is room for improvement here. The second is to have ongoing conversations between developers, policymakers and academics regarding which uses are desired, and which darlings should be killed.
I can't wait to see how visitors to the e-Estonia Briefing Center will be told all there is to know about how Estonia has cracked the code for developing a democratic information society through responsible pragmatism.
This piece was originally written for the University of Tartu magazine Universitas Tartuensis.
--
Editor: Aili Vahtla