"If the machine starts to fool around, just pull the plug."
In what ways is artificial intelligence (AI) impacting our society? How can we productively handle the population’s reservations towards AI applications? A conversation between the AI expert Fruzsina Molnár-Gábor and the new Secretary General of the Volkswagen Foundation, Georg Schütte.
Dr. Molnár-Gábor, everyone’s talking about it. Today, so are we – but what exactly are we talking about? How would you define "artificial intelligence"?
Molnár-Gábor: Actually, at none of the conferences I have attended have the participants been able to agree on a definition. Some people say that biostatistics was already a form of AI, and that AI is therefore not really anything new. Personally, I think a good definition of AI is that it automatically takes decisions and performs actions that in the past could only be carried out by humans.
Schütte: On the Digital Council of the Federal Government, a young entrepreneur once told me: "Whenever we don't know precisely what we're dealing with, we call it AI. When we do know, we give it a name tag. Examples of this are facial recognition and pattern recognition, for instance of X-ray images in the diagnosis of cancer." I think that characterizes the current debate in society rather well. The word is used in such a general way that it is sometimes no longer tangible.
Is this lack of tangibility perhaps the reason why some people have adopted a critical attitude towards AI?
Schütte: It helps if we move the focus away from the people who are affected to what it is that affects them. I mean, instead of talking about "artificial intelligence" in the abstract, we could be more precise and talk about robots in industrial production, for example. This makes it easier to address people’s fears and classify the real dangers that sometimes do exist. Dangers that begin with the fact that robots no longer operate in cages but work directly alongside humans. So the question is how the world of work is changing and how we can ensure the safety of people at work. Concrete answers can then be given to such questions.
Then let’s make it more precise. Dr. Molnár-Gábor, what are you researching at the moment?
Molnár-Gábor: Let me tell you about a project sponsored by the Volkswagen Foundation that I’m currently working on at the Heidelberg Academy of Sciences and Humanities. This project brings together genome biologists, physicians and legal scholars in Berlin and Heidelberg. In Germany, around 60,000 cases of prostate cancer are registered every year. We support the gathering and processing of information on each individual participating in the study so we can develop tailor-made therapies for them. To this end, the biologists and physicians sequence and decode the genomes of healthy and sick people. Then they link clinical data with further information on lifestyle in order to learn more about the causes of the disease, and try to predict which patients will respond best to which form of treatment. And, very importantly, we are providing a platform on which patients are able to network with each other. All this can be greatly enhanced with the help of AI.
What can you contribute to the project as a jurist?
Molnár-Gábor: My main focus here is liability law: Of all the people involved — patients, physicians, software developers and biologists — who is responsible for what? For many people, AI is still rather like a black box, which makes it difficult to determine who is responsible when something goes wrong. For example, did the attending physician deviate from the medical standard as a result of negligence, or was the error in the algorithm? The goal is to take away people’s feeling of powerlessness towards AI, which is directly caused by unclear terms and classifications.
But is it really just about unclear concepts? Isn’t there a real concern that we are creating a technology that will eventually take over from us humans? This goes far beyond privacy concerns and liability issues.
Molnár-Gábor: My answer to that is to quote a famous saying used by physicists: If the machine starts to fool around, just pull the plug. And in any case, researchers don’t always have to implement everything that is technologically possible.
But isn't that exactly what people are afraid of? That there's always someone somewhere who's prepared to do the unthinkable?
Molnár-Gábor: That may be so, but if there are no concrete application examples, it’s difficult to talk to people about such fears.
Schütte: And it is precisely because we need concrete examples that the Volkswagen Foundation launched a funding initiative located at the interface between technology and the natural sciences on the one hand and the social sciences and humanities on the other, supporting researchers like Dr. Molnár-Gábor. I’m not a psychologist, but I nevertheless dare to say that we certainly cannot take away all of people's primal fears. But if we know more precisely what makes them afraid, then we can deal with it better and the search for solutions can begin.
Perhaps it has less to do with primal fears and more with a deeply Central European, specifically German, angst?
Molnár-Gábor: For many Chinese people, AI applications such as social scoring actually create more trust. And in the US, people are quite prepared to take greater risks in exchange for the benefits they expect from AI, according to the motto: As long as the harmfulness is not proven, we will not intervene at all. In Europe, we always want to consider as many contingencies as possible beforehand. This is reflected in the legal regulations.
Experts say that the EU is being left behind by other regions of the world when it comes to AI.
Schütte: We play in the top league in basic AI research, as the high number of scientific publications shows. We are at the forefront in many fields, for example in the use of artificial intelligence in biology or the life sciences. But one thing we always have to ask ourselves is: Are we setting the right priorities? For example, according to the German Commission of Experts for Research and Innovation, China and the USA are massively promoting research into neural artificial intelligence. The experts estimate that in this particular field we are actually about five years behind. In Germany and Europe we are weak in innovation, that is, at the interface between scientific knowledge and business or social application.
Molnár-Gábor: Fortunately, quite a lot has been going on recently, especially in Germany. Several initiatives have been launched, from regulation to research funding, the provision of additional AI professorships and support for start-ups. These initiatives now need to be bundled, however, before they will trigger new innovations.
Finally, it is often said that the Americans, South Koreans and Chinese are more advanced when it comes to the technology, but that Europe is ahead in developing new ethical positions and legal bases for AI. That sounds rather like a form of self-consolation, doesn’t it?
Molnár-Gábor: I consider this distinction between technological innovations and social aspects to be artificial. Some of the most promising but still largely unexplored fields are interactive AI and so-called "social AI". How do I as a human being communicate with the machine and its algorithms? How can the machine decipher human interaction? These are questions in which ethics can play a decisive role. Of course, it’s easy to be critical and say that ethics doesn’t sell. But I think that is too short-sighted a view.
Schütte: But if we want to remain competitive in Germany and Europe, we have to master the technology. Being the better regulators and leaving the value creation of the technology to others cannot be the answer. Maybe we can go a step further and ask: What do the processes, the products, the services that are based on AI have to offer? What qualities must they exhibit in order to be accepted by people in the area of transport, in the world of work, in medical treatment? The scientific disciplines have been moving towards each other for a long time.
Dr. Schütte, you are a communication scientist, Dr. Molnár-Gábor, you are a legal scholar. From your different professional perspectives, what do you say to computer scientists who claim to have more expertise when it comes to AI?
Molnár-Gábor: Of course computer scientists still have sovereignty over programming. But if we want to find solutions for society, that sovereignty quickly becomes broadly distributed among computer science, the natural sciences, the humanities and the social sciences. My impression is that most computer scientists are now seeking this interaction, from joint teaching projects and courses of study to interdisciplinary research programs. I have only had good experiences in this respect.
Schütte: Traditionally, German computer science has grown largely out of applied mathematics, electrical engineering and the engineering sciences. However, the more computer science has triggered global social trends, the more reason we in Germany have to be self-critical and seek to break new ground. In other words, the pressure exerted by the problems involved has become so great that the disciplines have no choice but to open up to each other. Foundations can support them in this to a certain extent. The challenges presented by AI make this need for networking particularly clear, as if through a magnifying lens. But of course, we need the same degree of interdisciplinarity in other research fields as well.