
Automating Society Report 2020: Automation is also advancing in Germany

The non-governmental organization AlgorithmWatch and the Bertelsmann Foundation yesterday published the 2020 edition of the “Automating Society” report, in which they find a sharp increase in automated decision-making systems across Europe.

Across 16 countries, the authors researched more than 100 cases in which algorithms, using machine learning and other statistical methods, make automated decisions or forecasts on socially relevant issues.

The vast majority of these automated decision-making systems tend to put individuals and the general public at a disadvantage. They were introduced quietly, without adequately informing the public or building broad social support for the goals of the programs. Positive examples, by contrast, are extremely rare, the authors state, even though there are many conceivable applications that could benefit citizens and the public.

No waiting for broad social support

Compared with other European countries, Germany has had a relatively lively debate, although here, too, the topic rarely made headlines. One exception is the debate about the trial of facial-recognition video surveillance at Berlin's Südkreuz train station. Public pressure has so far prevented the widespread use of facial recognition in public spaces.

Nevertheless, the report finds a lack of terminological clarity in Germany, which is both a symptom and a cause of the missing social debate. Mixing up terms such as digitization, artificial intelligence, machine learning and automation dilutes the discussion of specific use cases.

More generally, the authors identify a faith in technology that treats the risks of automated decision-making systems as bugs that merely need to be fixed. The question of whether a social problem needs a technical solution at all often goes unasked.

Risk prediction and analysis of asylum seekers

According to the report, automated decision-making systems are also being used more and more frequently at sensitive points in Germany, a trend that affects all areas of society and government and that, the authors predict, will become even more pronounced in the coming years.

Already today, algorithms sometimes set the course for security authorities. The RADAR-iTE tool, for instance, is meant to calculate for the Federal Criminal Police Office (BKA) how high an individual's risk of violence is within the militant Salafist spectrum. The decision on whether further steps are taken against the person being assessed is left to the case workers.

Schools, employers and women's shelters can also use software to assess the violence potential of people suspected of a tendency toward rampage killings or domestic violence.

The Federal Office for Migration and Refugees uses extensive analysis tools in the asylum procedure to check refugees' statements about their country of origin. The contents of their mobile phones are processed automatically, and speech-analysis software is supposed to indicate whether the dialect of their mother tongue matches their statements.

Looking abroad, the Federal Foreign Office keeps an overview with a program that evaluates publicly available data to identify where international crises could arise.

More human or more machine?

The report also cites an example in which a surveillance system could even replace human interaction, with potentially dramatic consequences: a prison trial in North Rhine-Westphalia. There, intelligent video surveillance is supposed to prevent suicides by automatically detecting dangerous situations. If the system works reliably, the human checks currently carried out every 15 minutes could be dispensed with; those checks deprive some inmates of sleep and put them under stress. However, the lack of human contact and the panoptic surveillance could also exacerbate the risk of suicide, the authors warn.

The Central and Contact Point Cybercrime in North Rhine-Westphalia has developed a program with Microsoft for analyzing depictions of child sexual abuse. The software pre-screens the images so that investigators have to carry out this distressing work less often. The cloud-based algorithm is supposed to recognize pornographic content and match faces against police databases; only afterwards does manual police work take over again.

The police in Bavaria, Baden-Württemberg, Hesse, North Rhine-Westphalia and Lower Saxony have also experimented with data-driven predictions of burglaries, a technique generally known as predictive policing.

According to the report, Bavaria still operates software that Baden-Württemberg has already discontinued: the burglary-prediction tool PRECOBS. The problem in Stuttgart: too few burglaries and therefore too little data. A new data-driven system in Baden-Württemberg is therefore to be extended to other types of crime in order to improve the predictions.

Transparency and supervision

The use of automated decision-making systems in public administration is even rarer in Germany. There are individual projects, for example at the Federal Employment Agency, where the relevant data is evaluated automatically to calculate social benefit entitlements, or in Hamburg, where such systems coordinate and bill social services.

This is still a long way from practical applications such as those in Estonia, which has long offered extensive e-government services, for instance child benefit paid out automatically without even filling out a form, or the software-controlled allocation of daycare places.

The overall picture of algorithmic decision-making systems in Europe leads the report's authors to concrete demands. To avert the negative consequences of automated decision-making systems, more transparency about their use is needed, especially in the public sector. This could be achieved with a public register that lists the systems, along with binding rules for access to the data.

In addition, there are no binding rules that clarify responsibility. Algorithms are ultimately created by humans and are “neither neutral nor objective”, according to the authors. The assumptions and beliefs built into them make their creators responsible for the decisions of these “creepy” but “always human” algorithms.

A social debate is needed

For a future in which more and more far-reaching decisions are automated with less direct human influence, civil society must be given the opportunity to voice criticism. The state should not decide unilaterally on the use of systems that can have far-reaching consequences for fundamental rights, such as video surveillance with facial recognition. According to the report, the latter should ideally be banned, because the risk of mass surveillance is too great.

It should not only be experts who are able to engage with these systems. Wherever automation is to be used, the necessary competence must first be built up so that the quality of the decisions can be assessed and the promised human oversight actually takes place.

And finally, public debate should not be brushed aside by framing criticism as hostility to technology. Political funding for research and business development in the field of automation should only be one side of the coin. Politicians should also encourage a broad public debate whenever the digital autonomy of the population is affected, so that people can take part in this change.

About the author

Leonard Kamps

Leonard is an intern with us from October to December 2020. He is a communication scientist by training and is studying "Media and Political Communication" at the Free University of Berlin. He also works in the research group "Politics of Digitization" at the Social Science Research Center in Berlin. His favorite topics are surveillance, content and platform regulation, and media change, with a focus on how the digital transformation is affecting the democratic public sphere. He can be reached by email and on Twitter.
Published 10/29/2020 at 8:00 p.m.