Berlin’s interdisciplinary AI contribution
The Weizenbaum Institute for the Networked Society
Interview with Dr. Diana Alina Serbanescu
Team Lead of Research Group Criticality of AI-based Systems
The recently founded Weizenbaum Institute investigates the impact of digitalisation across all levels of society. The Institute aims to develop a comprehensive understanding of these changes through rigorous academic analysis that can ultimately inform political and economic strategies.
The institute’s core objective is to conduct high-level, interdisciplinary, problem-oriented basic research that at the same time drives application-oriented projects and stimulates the formulation of new research questions. This interdisciplinarity will be achieved by merging all relevant disciplines into a single research program, providing a more holistic approach to understanding the complex interplay between society and technology. The sharp increase in digitalisation and automation has led to a central social challenge: how to ensure the public’s democratic participation and self-determination. The institute examines the conditions required for social self-determination in six central areas, including self-determined work, digital sovereignty and citizenship, participation, and democracy. Identifying and analysing current developments around these key points will also set the stage for public discussion of future political, economic, and social courses of action.
INTERVIEW Teaser, full article in PLASMA magazine 5, release in May 2019
PLASMA: How long have you been working for the Weizenbaum Institute in Berlin? What’s your educational background?
Serbanescu: I started working at the Weizenbaum Institute in January 2018, leading the research group Criticality of AI-Based Systems. I was one of the first employees hired, as the institute had been founded only recently; the kick-off meeting was just two years ago!

I earned my Ph.D. in computer science engineering, working for eight years under the guidance of Prof. Schieferdecker at the Fraunhofer FOKUS Institute. After finishing my doctorate, though, I made the unusual decision to move to Scotland and study performing arts at the University of the West of Scotland (UWS). I already had a background in this, so it wasn’t completely out of the blue for me. After completing my studies at UWS and returning to Germany, I was eager to apply both of my backgrounds. Seeing the effervescent technological and artistic scene in Berlin, I came to realize that the field of artificial intelligence (AI) was perfect for me.

AI is rooted in technology and co-development, both of which are strongly influenced by culture. Culture shapes the content of our systems, because these are ultimately rooted in human values. I’m lucky enough that I can now explore all these AI topics from both a scientific and a cultural perspective. It’s fascinating!
PLASMA: How does explainability work within the realm of AI?
Serbanescu: As humans, if we want to build a trusting relationship with another human, we have to explain the way we think. This explainability-based trust also applies to our relationship with AI. The machine has to be able to clearly explain its algorithmic reasoning to us, so that we can understand its decision-making process. But sometimes this explainability is not possible because of the subsymbolic nature of the system, as with deep neural networks.
PLASMA: Wait! So this explainability in AI is already happening?
Serbanescu: Yes! In certain applications I’ve seen self-driving cars explain why they stop or why they decide to accelerate. This interfacing with the system is of the utmost importance because of the psychology behind it: it allows us to be aware of what is happening. The challenge lies in developing a user-accessible interface for the explainability algorithms these AI systems use.
This is a very mathematical area of research, but not exclusively. To provide more insight from the artistic standpoint, we are currently collaborating with IMAGINARY to create an interactive AI exhibition. The gallery showcases various AI systems along with explanations of their reasoning. This way, the audience can play with the AI and give us feedback on whether or not they trusted the machines.
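The kind of system-generated explanation described above, such as a car stating why it brakes, is straightforward for symbolic, rule-based systems. The following toy Python sketch (entirely illustrative; the function, thresholds, and scenario are invented for this example and do not come from the interview) shows a decision procedure that reports its reasoning alongside its decision, which is precisely what a subsymbolic network cannot do directly:

```python
def decide_braking(obstacle_distance_m, speed_kmh):
    """Decide whether a hypothetical self-driving car should brake,
    returning the decision together with a human-readable explanation.
    Thresholds are invented for illustration."""
    reasons = []
    if obstacle_distance_m < 20:
        reasons.append(f"obstacle only {obstacle_distance_m} m ahead")
    if speed_kmh > 50:
        reasons.append(f"speed {speed_kmh} km/h exceeds safe limit of 50")
    decision = "brake" if reasons else "maintain speed"
    explanation = "; ".join(reasons) or "no risk factors detected"
    return decision, explanation

decision, why = decide_braking(obstacle_distance_m=15, speed_kmh=60)
print(decision)  # brake
print(why)       # obstacle only 15 m ahead; speed 60 km/h exceeds safe limit of 50
```

Because every decision is traceable to explicit rules, the explanation comes for free here; for deep neural networks, separate explainability algorithms must approximate such reasons after the fact.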
PLASMA: What are your main concerns in relation to AI?
Serbanescu: My main concern is that we need to create systems that are diverse and inclusive, but also accessible to everyone. AI is a powerful tool, and everyone should be granted equal access to it. Only then can these tools benefit society; otherwise, all these technologies will do is widen the inequality gap. We want our work here to help make the world a better place. At the moment it is really hard to predict how these systems are going to evolve and whether they will become self-aware. Regardless, the important questions remain: where do we want to go with this technology, and who will possess it?