Interactive demonstrator for acoustic condition monitoring across locations
The exhibit shows how an AI-based acoustic monitoring system analyzes machine conditions, detects faults, and benefits from cross-location learning. Identical machines operate at each of the system's locations, and their operating sounds are analyzed by an acoustic AI: pre-trained models classify three different operating states.
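A minimal sketch of how such a state classifier might work. Everything here is an illustrative assumption, not the exhibit's actual model: the three state names, the characteristic frequencies, the synthetic "recordings", and the simple nearest-centroid model standing in for a pre-trained acoustic AI.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16_000                      # assumed sampling rate (Hz)
t = np.arange(fs) / fs           # one second of audio

# Synthetic stand-ins for machine sounds: each state gets a characteristic
# tone plus broadband noise. States and frequencies are illustrative only.
STATES = ["normal", "bearing_fault", "imbalance"]
STATE_FREQS = {"normal": 200.0, "bearing_fault": 1500.0, "imbalance": 4000.0}

def record(state: str) -> np.ndarray:
    """Simulate one second of operating sound for a given machine state."""
    tone = np.sin(2 * np.pi * STATE_FREQS[state] * t)
    return tone + 0.05 * rng.standard_normal(fs)

def spectral_features(signal: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Compress a signal into coarse band-energy features of its magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    return np.array([band.mean() for band in np.array_split(spectrum, n_bands)])

class NearestCentroidClassifier:
    """Minimal stand-in for a pre-trained acoustic model: one centroid per state."""

    def fit(self, X: np.ndarray, y: np.ndarray) -> "NearestCentroidClassifier":
        self.centroids_ = np.stack([X[y == k].mean(axis=0)
                                    for k in range(len(STATES))])
        return self

    def predict(self, features: np.ndarray) -> str:
        dists = np.linalg.norm(self.centroids_ - features, axis=1)
        return STATES[int(np.argmin(dists))]

# Train on a handful of recordings per state, then classify a fresh recording.
train_X = np.stack([spectral_features(record(s)) for s in STATES for _ in range(5)])
train_y = np.repeat(np.arange(len(STATES)), 5)
clf = NearestCentroidClassifier().fit(train_X, train_y)
pred = clf.predict(spectral_features(record("bearing_fault")))
```

Because each state's tone lands in a different frequency band, the band-energy centroids are well separated and the fresh "bearing_fault" recording is classified correctly.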
Because faults occur only rarely, the amount of training data available at any single location is limited. This is where distributed learning (federated learning) comes in: instead of sharing confidential audio data directly, the locations exchange only the knowledge the AI models have learned, in the form of model parameters. This improves fault detection across all locations without incurring data-privacy risks.
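The parameter exchange described above can be sketched as federated averaging (FedAvg): each site trains on its own data, and a server combines only the resulting model weights. The site data, the logistic model, and all hyperparameters below are simplified assumptions for illustration, not the exhibit's actual system.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_site_data(n: int):
    """Synthetic local data: two feature clusters standing in for 'OK'/'fault' sounds."""
    X = np.vstack([rng.normal(-1.0, 0.5, size=(n // 2, 2)),
                   rng.normal(+1.0, 0.5, size=(n // 2, 2))])
    y = np.repeat([0.0, 1.0], n // 2)
    return X, y

def local_update(weights, X, y, lr=0.5, epochs=20):
    """One site trains a logistic model on its private data; only weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        probs = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (probs - y) / len(y)
    return w

def federated_average(site_weights, site_sizes):
    """Server step (FedAvg): average parameters, weighted by each site's data volume."""
    total = sum(site_sizes)
    return sum((n / total) * w for w, n in zip(site_weights, site_sizes))

# Two sites with different amounts of data; raw audio features never leave a site.
sites = [make_site_data(100), make_site_data(60)]
global_w = np.zeros(2)
for _ in range(5):                         # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(local_ws, [len(y) for _, y in sites])

# Evaluate the shared model on pooled data (for illustration only).
X_all = np.vstack([X for X, _ in sites])
y_all = np.concatenate([y for _, y in sites])
accuracy = float(((X_all @ global_w > 0) == (y_all > 0.5)).mean())
```

Weighting the average by data volume means locations with more recorded faults contribute proportionally more to the shared model, while no audio ever crosses site boundaries.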
The demonstrator shows an innovative combination of intelligent acoustic condition monitoring and distributed learning, applied here to the classification of engine sounds.
Condition monitoring using airborne sound analysis and AI is conceivable for a wide range of applications in industrial production, whether for continuous monitoring of engines and gearboxes or for monitoring individual production steps such as the welding of battery boxes. Thanks to a careful selection of acoustic sensors and pre-trained AI models, anomalies and faults can be reliably detected even in noisy industrial environments.
This technology sets new standards for efficient and secure AI-based quality assurance in production, at every location.