When it comes to protecting privacy in data analysis, Privacy-Preserving Data Publishing (PPDP) tools such as data anonymization and pseudonymization, which are based on removing or altering data (e.g., via suppression or generalization), have long played a key role and remain important for many applications. Moreover, recent years have brought an increasing need to deal with large amounts of high-dimensional, complex data and with the advanced analysis capabilities enabled by AI. This has led to the emergence of Privacy-Preserving AI (PPAI) tools that address the amplified risks of unintended exposure of sensitive data in the context of AI, including inference attacks (attacks that analyze data to gain knowledge about a subject) and model poisoning (attacks that manipulate data in order to influence or corrupt a model).
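To make the PPDP side concrete, the following is a minimal sketch of suppression and generalization applied to a single record. The field names and generalization rules are hypothetical illustrations, not a specific PPDP standard or library API.

```python
# Minimal sketch: record-level anonymization via suppression and
# generalization (illustrative fields and rules, not a real standard).

def anonymize(record: dict) -> dict:
    anonymized = dict(record)
    # Suppression: remove direct identifiers entirely.
    anonymized.pop("name", None)
    # Generalization: coarsen quasi-identifiers so each record becomes
    # indistinguishable within a larger group (as in k-anonymity).
    anonymized["zip"] = record["zip"][:3] + "**"          # "10115" -> "101**"
    anonymized["age"] = f"{(record['age'] // 10) * 10}s"  # 34 -> "30s"
    return anonymized

print(anonymize({"name": "Alice", "zip": "10115", "age": 34}))
# {'zip': '101**', 'age': '30s'}
```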
PPAI tools include, for instance, Differential Privacy, a technique that adds calibrated noise to the output of computations to protect sensitive data; Homomorphic Encryption, which allows computations over encrypted data; Secure Multi-Party Computation, which allows multiple parties to jointly compute a function over their inputs while keeping those inputs private; and Secure Federated Learning, a decentralized machine learning approach in which models are trained across multiple participants without sharing the underlying data.
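As an illustration of the first of these techniques, here is a minimal sketch of the Laplace mechanism for Differential Privacy, assuming a simple counting query (sensitivity 1); the epsilon value and data are illustrative placeholders, not recommended settings.

```python
# Minimal sketch: the Laplace mechanism for differential privacy,
# applied to a counting query (illustrative data and epsilon).

import math
import random

def dp_count(values, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one
    # individual changes the count by at most 1, so Laplace noise with
    # scale 1/epsilon yields epsilon-differential privacy.
    scale = 1.0 / epsilon
    # Inverse-transform sample from Laplace(0, scale).
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return len(values) + noise

ages = [34, 29, 41, 52, 38]
print(dp_count(ages, epsilon=0.5))  # true count is 5, plus calibrated noise
```

A smaller epsilon means a tighter privacy budget and therefore more noise, making the trade-off between privacy and accuracy explicit.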
Beyond privacy, further aspects of trustworthy AI that we deal with at Fraunhofer IDMT include
- security and adversarial robustness: providing resilience of AI tools against deceptive inputs and ensuring authenticity (closely related to the topic of media forensics)
- transparency and explainability: making input data and AI processes understandable, thereby fostering trust and effective interaction
- bias and fairness: reducing sample bias and other biases while being aware of the various trade-offs involved (reducing certain biases may increase others); this also includes the interaction between machine and human biases when technologies are applied, e.g., the question of how to reduce the filter-bubble effects created by recommendation systems, which amplify confirmation bias (see the reweighting sketch below)
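As one simple illustration of the sample-bias reduction mentioned above, the sketch below reweights training examples by inverse group frequency so that under-represented groups contribute equally to a model's loss; the group labels and data are hypothetical, and this is only one of many mitigation techniques.

```python
# Minimal sketch: mitigating sample bias by inverse-frequency
# reweighting (hypothetical group labels).

from collections import Counter

def inverse_frequency_weights(groups):
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # Each group's total weight becomes n / k, so every group exerts
    # equal influence on the training objective regardless of its size.
    return [n / (k * counts[g]) for g in groups]

groups = ["a", "a", "a", "b"]  # group "b" is under-represented
print(inverse_frequency_weights(groups))
# [0.666..., 0.666..., 0.666..., 2.0]
```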