Use Case: Acoustic Metaverse for Product Development

Your challenge

Novel acoustic AI methods, e.g., for acoustic event detection (AED), acoustic scene classification (ASC), sound source separation (SSS), or active noise cancellation (ANC), require high-quality data for training and validating the respective application. This data should cover as many future application scenarios as possible. However, capturing, cleaning, and annotating such data in the real world takes considerable effort and poses a major hurdle in fast-moving development cycles. Purely synthetic datasets, on the other hand, often lead to model mismatch and overfitting of AI methods.

Your benefit

We therefore offer the composition and real-time reproduction of three-dimensional sound fields that closely approximate real acoustic environments (ecological validity). Acoustic content can be created manually via a GUI or by script and reproduced using spatial loudspeaker playback methods. This gives you the best of both worlds: parameterizable, reproducible acoustic content combined with realistic acoustic sound fields for verifying your methods under real application conditions.
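To illustrate what "spatial loudspeaker playback" can mean in a script-based workflow, here is a minimal sketch of first-order Ambisonics panning onto a horizontal loudspeaker ring. This is one common spatial reproduction technique, not the specific method or API of the IDMT system; all function names and parameters below are illustrative assumptions.

```python
import numpy as np

def encode_b_format(signal, azimuth_rad):
    """Encode a mono signal into horizontal first-order B-format (W, X, Y)."""
    w = signal / np.sqrt(2.0)          # omnidirectional component
    x = signal * np.cos(azimuth_rad)   # front-back component
    y = signal * np.sin(azimuth_rad)   # left-right component
    return np.stack([w, x, y])

def decode_basic(bformat, speaker_azimuths_rad):
    """Basic (projection) decode of B-format onto a ring of loudspeakers."""
    w, x, y = bformat
    feeds = [np.sqrt(2.0) * w + x * np.cos(phi) + y * np.sin(phi)
             for phi in speaker_azimuths_rad]
    return np.stack(feeds) / len(speaker_azimuths_rad)

# Usage: pan a 1 kHz tone to 45 degrees on an 8-speaker ring.
fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
ring = np.deg2rad(np.arange(8) * 45.0)   # loudspeakers every 45 degrees
feeds = decode_basic(encode_b_format(tone, np.deg2rad(45.0)), ring)
```

In a real deployment, the decoder would be matched to the measured loudspeaker layout and room acoustics; this sketch only shows how a parameterized source position maps deterministically and reproducibly to loudspeaker feeds.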

Extras

You can either use various multi-channel loudspeaker systems at IDMT, including adaptive room acoustics, or take advantage of consulting and installation services for your own systems.

This might also be interesting for you

Research Topic

Acoustic Simulation for AI Validation and Training

Training and validation of AI systems with simulated data and scenarios