Privacy and Trustworthy AI

Trustworthy media technologies

Fraunhofer IDMT focuses on the development, use and integration of tools that promote privacy, security, robustness, transparency, explainability and fairness within data-driven applications, especially when dealing with media technologies and media content. We aim to provide a technology toolbox that helps our partners and clients integrate “trust” into their applications and products.

News and upcoming events


Award / 11.10.2024

IEEE Best Poster Award

We congratulate Kay Fuhrmeister and his colleagues on winning the Best Poster Award for their work on improving data protection in Federated Learning tasks using Fully Homomorphic Encryption.


Project

AVATAR network meeting

The AVATAR project consortium and its associated joint research projects convene for their annual meeting at the Fraunhofer Forum Berlin.


New project / 9.5.2023

Data protection for biosignals

The »NEMO« project is exploring anonymisation techniques, using the example of electroencephalograms (EEG).

Research focus

The balance between trust and data analysis

Privacy Enhancing Technologies (PETs) and Trustworthy AI are critical to protect personal and business-critical information, to address legal requirements, and to promote fairness, transparency, robustness and security in today's data-driven applications and systems.

Some believe that trust on the one hand and the utility of data analysis on the other are mutually incompatible; some believe that regulation alone is sufficient to solve all problems; others believe that regulation is not needed at all. In fact, we need both regulation and innovation, and we should aim for solutions that build trust into data-driven systems and AI. The way to do this is

  • to understand the specific requirements for a given application,
  • to understand the potential trade-offs between utility and trust aspects involved,
  • and to know, use and adapt technologies to achieve an optimal trade-off for a given application.

Privacy and trust in the age of AI

When it comes to protecting privacy in data analysis, Privacy-Preserving Data Publishing (PPDP) tools such as data anonymization and pseudonymization, which are based on removing or altering data (e.g. via suppression or generalization), have always played a key role and remain important for many applications. Recent years, however, have brought an increasing need to deal with large amounts of high-dimensional, complex data and with the advanced analysis capabilities of AI. This has led to the emergence of Privacy-Preserving AI (PPAI) tools, which address the amplified risks of unintended exposure of sensitive data in the context of AI, including inference attacks (attacks that analyze data or models to gain knowledge about a subject) and model poisoning (attacks that manipulate data in order to influence or corrupt a model).
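
To make this concrete, here is a minimal Python sketch of generalization and suppression; the table, the column names and the k-anonymity-style group-size threshold are hypothetical illustrations, not part of a specific Fraunhofer IDMT tool:

    from collections import Counter

    # Hypothetical patient table; "zip" and "age" are quasi-identifiers,
    # "diagnosis" is the sensitive attribute to be published.
    records = [
        {"zip": "98112", "age": 34, "diagnosis": "A"},
        {"zip": "98117", "age": 37, "diagnosis": "B"},
        {"zip": "98109", "age": 62, "diagnosis": "A"},
    ]

    def generalize(record):
        """Coarsen quasi-identifiers: truncate the ZIP code, bucket the age."""
        decade = (record["age"] // 10) * 10
        return {
            "zip": record["zip"][:3] + "**",   # generalization
            "age": f"{decade}-{decade + 9}",   # generalization
            "diagnosis": record["diagnosis"],
        }

    generalized = [generalize(r) for r in records]

    # Suppression: drop any record whose quasi-identifier combination is
    # shared by fewer than k records, so no released record is unique.
    k = 2
    groups = Counter((r["zip"], r["age"]) for r in generalized)
    published = [r for r in generalized if groups[(r["zip"], r["age"])] >= k]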

These tools include, for instance, Differential Privacy, a technique that adds noise to the output of computations to protect sensitive data; Homomorphic Encryption, which allows computations over encrypted data; Secure Multi-party Computation, which allows multiple parties to jointly compute a function over their inputs while keeping those inputs private; and Secure Federated Learning, a decentralized machine learning approach where models are trained across multiple participants so that the raw training data never has to be centralized.
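
As an illustration of the first of these techniques, the following sketch releases a count query under the Laplace mechanism; the data, the function name and the parameter choices are hypothetical:

    import numpy as np

    def private_count(values, threshold, epsilon, sensitivity=1.0):
        """Release a differentially private count via the Laplace mechanism.

        Adding or removing one individual changes the true count by at
        most `sensitivity`, so noise with scale sensitivity/epsilon gives
        epsilon-differential privacy for this counting query.
        """
        true_count = sum(v > threshold for v in values)
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return true_count + noise

    # Hypothetical query: how many subjects score above 25?
    scores = [12, 45, 7, 33, 28, 51, 19]
    print(private_count(scores, threshold=25, epsilon=0.5))

The smaller epsilon is chosen, the stronger the privacy guarantee and the noisier the released result.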

Beyond privacy, further aspects of trustworthy AI that we deal with at Fraunhofer IDMT include

  • security and adversarial robustness: providing resilience of AI tools against deceptive inputs and ensuring authenticity (closely related to the topic of media forensics); see the sketch after this list
  • transparency and explainability: making input data and AI processes understandable, thereby fostering trust and effective interaction
  • bias and fairness: reducing sample bias and other biases while remaining aware of the various trade-offs involved (reducing certain biases may increase others); this also includes the interaction between machine and human biases when technologies are applied, e.g. the question of how to reduce the filter bubble effects created by recommendation systems that amplify confirmation bias
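
To illustrate the kind of deceptive input mentioned in the first point, the sketch below applies a fast-gradient-sign-style perturbation to a toy logistic-regression classifier; the model, its weights and the perturbation budget are hypothetical:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy logistic-regression "model" with fixed, hypothetical weights.
    w = np.array([2.0, -1.5, 0.5])
    b = 0.1

    def predict(x):
        """Probability that input x belongs to class 1."""
        return sigmoid(w @ x + b)

    def fgsm_perturb(x, y, eps=0.1):
        """Nudge x in the direction that most increases the cross-entropy
        loss for the true label y in {0, 1} (fast gradient sign method)."""
        grad_x = (predict(x) - y) * w   # d(loss)/dx for this linear model
        return x + eps * np.sign(grad_x)

    x = np.array([0.8, 0.2, -0.4])
    x_adv = fgsm_perturb(x, y=1)
    print(predict(x), predict(x_adv))  # the model's confidence drops

Robustness work then asks how to keep a model's predictions stable under such small, targeted changes to its inputs.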

How to proceed

Understanding the specific requirements for a given application, selecting and adapting the necessary tools, and systematic evaluation are key to achieving the aforementioned goals. We follow a by-design approach: privacy, security, transparency, fairness, and robustness considerations are taken into account from the very beginning of the development process rather than being retroactively applied, thus reducing risks and costs.