How do we build trustworthy AI-based Systems? – An interview with KIT Professor Ali Sunyaev

Which economic sectors are likely to benefit the most from the introduction of AI-based systems, and how is their introduction going to affect us?

The introduction of AI-based systems will certainly affect virtually every economic sector – in some cases, the effects will be tremendous. In fact, AI-based systems are already transforming several industries today, as we speak. Look at the automotive industry and the ongoing shift to semi- or even fully autonomous cars. Some colleagues at KIT are doing genuinely groundbreaking research in this area.

As for our own research, my team and I mainly focus on the healthcare and the Internet industries as two especially promising areas for introducing AI-based systems. For years, the healthcare industry has faced the problem of having to do more with less, even before the current COVID-19 pandemic. By now, AI-based systems can very well support medical professionals. There are, for example, AI-based systems that can help identify certain diseases like COVID-19 on X-ray scans, or detect COVID-19 from coughs recorded by your phone's microphone.


Under which conditions can we, as a society, take full advantage of AI's potential?

Simply put, we need to be able to trust AI-based systems. Trustworthy AI¹ is based on the idea that trust forms the foundation of societies, economies, and sustainable development and, therefore, that we as a society will only ever realize the full potential of AI if we can establish trust in AI-based systems. Imagine, for example, an AI-based clinical decision support system, and suppose nobody – neither the medical experts nor the patients – trusts the AI and the diagnoses or treatment recommendations it makes. In that case, it is unlikely that anybody will follow such a system's recommendations faithfully.²

Currently, there is a lot of debate about how to achieve trustworthy AI-based systems. To achieve this goal, I think AI-based systems especially need to be developed, deployed, and used in a way that adheres to foundational ethics principles, like beneficence, non-maleficence, autonomy, justice, and explicability.


Since explicability is a condition for trust, how can we maintain it for autonomously learning and, therefore, difficult-to-explain AI-based systems?

Most of today’s AI-based systems are indeed complex systems that function as black boxes: they are opaque and difficult for humans to understand. Instead of creating more and more such black-box systems and trying to understand them afterward, a central research challenge for the next decade will be to develop AI-based systems where we know how those systems learn and how they arrive at their recommendations or decisions to begin with.³ Once we achieve this, we will also be better able to understand and maintain explicability for autonomously learning AI-based systems.

Moreover, explicability has multiple facets. It entails not only the development of explainable and interpretable models but also the establishment of accountability. Although we often still lack proper technical and non-technical means (like legislation) to do this, many research efforts are currently directed toward it. My team and I, for example, are actively working on making use of distributed ledger technology (DLT) for trustworthy AI.⁴


How can DLT, which is most commonly known for its use in Bitcoin, help to build trust in AI?

Although DLT is best known for being the foundation of Bitcoin, there are many potential use cases for this technology beyond Bitcoin. DLT is a technology centered on the distribution of trust. In essence, it seeks to replace the need to trust a specific third party with trust in a network and its underlying algorithms and principles.

Likewise, I see many opportunities for using DLT to create trustworthy AI-based systems. DLT can, for example, serve as a tamper-resistant trail for tracking the flow of data within AI-based systems. This data might then be further analyzed to create explainable AI models. Another exciting area for combining AI with DLT is data access. Trustworthy AI-based systems should be free from biases. However, we often still lack access to sufficiently diverse data sets necessary for developing truly unbiased AI-based systems. Here, DLT and the economic incentives it enables could help us create markets for AI training data that potentially also encourage minorities to participate and thus lead to more diverse AI training data sets.
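The core property behind such a tamper-resistant trail can be illustrated without a full DLT stack: each log entry cryptographically commits to its predecessor, so altering any past record invalidates the chain. The following Python sketch is a simplified, hypothetical illustration of that idea (the class and record fields are invented for this example, not part of any actual DLT system):

```python
import hashlib
import json


def _entry_hash(record, prev_hash):
    # Deterministic SHA-256 hash over the record plus the link to the
    # previous entry's hash.
    payload = json.dumps({"record": record, "prev_hash": prev_hash},
                         sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


class AuditTrail:
    """A minimal hash-chained log: every entry commits to the one before
    it, so any later modification of an entry breaks the chain."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []

    def append(self, record):
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        self.entries.append({
            "record": record,
            "prev_hash": prev_hash,
            "hash": _entry_hash(record, prev_hash),
        })

    def verify(self):
        # Recompute every hash and check the chain links in order.
        prev_hash = self.GENESIS
        for entry in self.entries:
            if entry["prev_hash"] != prev_hash:
                return False
            if entry["hash"] != _entry_hash(entry["record"], prev_hash):
                return False
            prev_hash = entry["hash"]
        return True


# Hypothetical data-flow events inside an AI-based system:
trail = AuditTrail()
trail.append({"step": "ingest", "dataset": "xray-batch-01"})
trail.append({"step": "predict", "model": "covid-classifier-v2"})
print(trail.verify())  # the untouched chain verifies as intact
```

In a real DLT setting, the chain would additionally be replicated and validated across a network of independent nodes, so that no single party could rewrite the trail; this single-process sketch only demonstrates the tamper-evidence property itself.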