Conference paper

A ROS Framework for Audio-Based Activity Recognition

Abstract

Research on robot perception focuses mostly on visual information analytics, while audio-based perception is largely limited to speech-related information. However, the non-verbal information in the audio channel can be equally important to the perception procedure, or at least play a complementary role. This paper presents a framework for audio signal analysis that follows the ROS architectural principles. Design and implementation details of this workflow are described, and classification results are presented for two use-cases motivated by the task of medical monitoring. The proposed audio analysis framework is provided as an open-source library on GitHub (https://github.com/tyiannak/AUROS).
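
To illustrate the ROS publish/subscribe pattern that such a framework would follow, the sketch below shows a classification node subscribing to a topic of audio feature vectors and publishing predicted activity labels. The node name, topic names, and message types are illustrative assumptions and do not reflect the actual AUROS interface.

#!/usr/bin/env python
# Illustrative sketch only: node name, topic names, and message types are
# assumptions, not the actual AUROS interface.
import rospy
from std_msgs.msg import Float32MultiArray, String

def on_features(msg, label_pub):
    # A real node would run a trained classifier over the feature vector in
    # msg.data; a constant placeholder label is published here instead.
    label_pub.publish(String(data="silence"))

def main():
    rospy.init_node("audio_classifier_node")  # assumed node name
    label_pub = rospy.Publisher("/audio_events", String, queue_size=10)
    rospy.Subscriber("/audio_features", Float32MultiArray,
                     on_features, callback_args=label_pub)
    rospy.spin()  # process incoming feature messages until shutdown

if __name__ == "__main__":
    main()

In such a setup, a separate capture node would publish feature vectors on the assumed /audio_features topic, and downstream monitoring components would subscribe to /audio_events.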

This paper was published in ZENODO.
