Robust audio sensing with multi-sound classification
Audio data is a highly rich form of information, often containing patterns with unique acoustic signatures. In pervasive sensing environments, driven by increasingly capable smart devices, there has been growing research interest in sound sensing to detect the ambient environment, recognise users' daily activities, and infer their health conditions. The main challenge, however, is that real-world environments often contain multiple sound sources, which can significantly compromise the robustness of such environment, event, and activity detection applications. In this paper, we explore different approaches to multi-sound classification and propose a stacked classifier based on recent advances in deep learning. We evaluate the proposed approach in a comprehensive set of experiments on both sound-effect and real-world datasets. The results demonstrate that our approach can robustly identify each sound category among mixed acoustic signals, without any a priori knowledge of the number and signatures of the sounds in the mixture.
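As an illustration of the multi-label setting this abstract describes (the paper's actual stacked deep architecture is not detailed here), the sketch below trains one independent binary detector per sound class on synthetic mixtures. Because each output is thresholded separately, the number of active sources never needs to be known in advance. All signatures, sizes, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 sound classes, each with a distinct spectral "signature".
# A mixture's feature vector is (roughly) the sum of its sources' signatures.
n_classes, n_feats = 3, 16
signatures = rng.normal(size=(n_classes, n_feats))

def make_mixtures(n):
    """Random subsets of sources summed together, plus a little noise."""
    labels = rng.integers(0, 2, size=(n, n_classes))
    X = labels @ signatures + 0.1 * rng.normal(size=(n, n_feats))
    return X, labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Binary relevance: one logistic detector per class, trained independently
# with plain full-batch gradient descent.
X, Y = make_mixtures(500)
W = np.zeros((n_classes, n_feats))
b = np.zeros(n_classes)
for _ in range(500):
    P = sigmoid(X @ W.T + b)          # (n, n_classes) per-class probabilities
    G = P - Y                         # gradient of the logistic loss
    W -= 0.1 * (G.T @ X) / len(X)
    b -= 0.1 * G.mean(axis=0)

# Each output is thresholded independently, so any subset of sources
# (including none, or all) can be reported for a given mixture.
Xt, Yt = make_mixtures(200)
pred = (sigmoid(Xt @ W.T + b) > 0.5).astype(int)
accuracy = (pred == Yt).mean()
print(f"per-label accuracy: {accuracy:.2f}")
```

Because the synthetic mixtures are linear in their sources, even this minimal binary-relevance baseline recovers the active classes well; a stacked deep classifier would replace the linear detectors with learned feature hierarchies.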
Acoustic Based Footstep Detection in Pervasive Healthcare
Passive detection of footsteps in domestic settings can enable assistive technologies that monitor the mobility patterns of older adults in their home environment. Acoustic footstep detection is a promising approach for nonintrusive detection of footsteps, but so far there has been limited work on robust acoustic footstep detection systems that can operate in noisy home environments. In this paper, we propose a novel application of an Attention-based Recurrent Deep Neural Network to detect human footsteps in noisy, overlapping audio streams. The model is trained on synthetic data that simulates the acoustic scene in a home environment. To evaluate performance, we reproduced two footstep detection models from the literature and compared them using the recently developed Polyphonic Sound Detection Score (PSDS). Our model achieved the highest PSDS, close to the highest score achieved by generic indoor acoustic event detection (AED) models in the DCASE challenge. The proposed system is designed both to detect and to track footsteps within a home setting, and to enhance state-of-the-art digital healthcare solutions for empowering older adults to live autonomously in their own homes.
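The data-synthesis step the abstract mentions, simulating footsteps in a noisy home acoustic scene, can be sketched as below. The burst shape, sample rate, and frame size are illustrative stand-ins; a real pipeline would mix recorded footstep samples into recorded household background audio.

```python
import numpy as np

rng = np.random.default_rng(1)

SR = 16_000          # sample rate in Hz (assumed)
CLIP_S = 4           # clip length in seconds
FRAME = 400          # 25 ms frames for frame-level labels

def synth_footstep(dur=0.1):
    """Toy footstep: a short decaying broadband burst (stand-in for a real impact)."""
    n = int(dur * SR)
    return rng.normal(size=n) * np.exp(-np.linspace(0, 8, n))

def synth_clip(n_steps=4, snr=0.5):
    """Mix footstep bursts into background noise at random onsets; return
    the waveform plus per-frame activity labels for training a detector."""
    audio = rng.normal(size=SR * CLIP_S) * 0.1        # background noise
    labels = np.zeros(SR * CLIP_S // FRAME, dtype=int)
    for _ in range(n_steps):
        step = synth_footstep() * snr
        onset = rng.integers(0, len(audio) - len(step))
        audio[onset:onset + len(step)] += step        # overlap-add the event
        labels[onset // FRAME:(onset + len(step)) // FRAME + 1] = 1
    return audio, labels

audio, labels = synth_clip()
print(len(audio), int(labels.sum()))
```

The (waveform, frame-labels) pairs produced this way are what a sequence model such as an attention-based recurrent network would be trained on, one prediction per frame.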
Multi-Dimensional Task Recognition for Human-Robot Teaming
Human-robot teams involve humans and robots collaborating to achieve tasks under varying environmental conditions. Successful teaming requires robots to adapt autonomously, in real time, to a human teammate's state. An important element of such adaptation is the robot's ability to infer the tasks performed by its human teammates. Human-robot teams often perform a wide variety of tasks involving multiple activity components, and may even perform two or more tasks concurrently. A robot's ability to recognize composite tasks that occur concurrently is a key requirement for successful collaboration. Existing task recognition algorithms are not viable for human-robot teams, as they detect tasks from only a subset of activity components and rarely detect concurrent, composite tasks. This dissertation developed a multi-dimensional task recognition algorithm capable of detecting concurrent, composite tasks across the cognitive, speech, auditory, visual, gross motor, fine-grained motor, and tactile components, incorporating metrics that are sensitive, versatile, and suitable across human-robot teaming paradigms. The developed algorithm addresses the foundational problem of understanding an individual's task engagement state in human-robot teams operating in dynamic, unstructured environments.
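One way to picture the composite, concurrent recognition this dissertation targets (the task names and component sets below are invented for illustration, not taken from the work) is to treat each composite task as a set of required activity components and report every task whose full set is currently active:

```python
# Hypothetical composite tasks, each defined by the activity components it requires.
# Per-component detectors (not shown) would supply the set of active components.
TASKS = {
    "navigate":   {"visual", "gross_motor"},
    "radio_call": {"speech", "auditory"},
    "assemble":   {"visual", "fine_motor", "tactile"},
}

def recognize(active_components: set) -> list:
    """Return every composite task whose required components are all active;
    multiple tasks may match at once, capturing concurrent task execution."""
    return sorted(t for t, req in TASKS.items() if req <= active_components)

print(recognize({"visual", "gross_motor", "speech", "auditory"}))
# → ['navigate', 'radio_call']: two composite tasks detected concurrently
```

Because matching is set containment rather than a single-label decision, tasks with overlapping or disjoint component sets can be reported simultaneously, which is the property single-task classifiers lack.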