Piano Pedaller: A Measurement System for Classification and Visualisation of Piano Pedalling Techniques
date-added: 2017-12-22 18:53:42 +0000
date-modified: 2017-12-22 19:03:05 +0000
keywords: piano gesture recognition, optical sensor, real-time data acquisition, bela, music informatics
local-url: https://pdfs.semanticscholar.org/fd00/fcfba2f41a3f182d2000ca4c05fb2b01c475.pdf
publisher-url: http://homes.create.aau.dk/dano/nime17/
bdsk-url-1: http://www.nime.org/proceedings/2017/nime2017_paper0062.pdf
This paper presents the results of a study of piano pedalling techniques on the sustain pedal using a newly designed measurement system named Piano Pedaller. The system comprises an optical sensor mounted in the piano pedal bearing block and an embedded platform for recording audio and sensor data. This enables recording the pedalling gestures of real players, together with the piano sound, under normal playing conditions. Using the gesture data collected from the system, the task of classifying these data by pedalling technique was undertaken using a Support Vector Machine (SVM). Results can be visualised in an audio-based score-following application to show pedalling together with the player's position in the score.
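As a rough sketch of the classification step the abstract describes, an SVM can be trained on features extracted from windowed pedal-position readings. The windowing, features, and two-class labels below are hypothetical stand-ins for illustration, not the paper's actual pipeline or dataset:

```python
# Hypothetical sketch: classifying pedalling gestures from windowed
# optical-sensor readings with an SVM (scikit-learn). The signals and
# features below are synthetic stand-ins, not the paper's data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def window_features(signal, win=64):
    """Summarise each window of the pedal-position signal as
    (mean depth, depth range, mean absolute velocity)."""
    windows = signal[: len(signal) // win * win].reshape(-1, win)
    depth = windows.mean(axis=1)
    span = np.ptp(windows, axis=1)
    velocity = np.abs(np.diff(windows, axis=1)).mean(axis=1)
    return np.column_stack([depth, span, velocity])

# Synthetic "pedalling" signals: class 0 = quarter pedal, 1 = full pedal.
quarter = 0.25 + 0.05 * rng.standard_normal(64 * 200)
full = 0.90 + 0.05 * rng.standard_normal(64 * 200)
X = np.vstack([window_features(quarter), window_features(full)])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

Scaling before the RBF-kernel SVM matters here, since the depth and velocity features live on very different numeric ranges.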
End-to-End Multiview Gesture Recognition for Autonomous Car Parking System
Hand gestures can be one of the most intuitive human-machine interaction media. Early approaches to hand gesture recognition were device-based: they used mechanical or optical sensors attached to a glove or markers, which hinders natural human-machine communication. Vision-based methods, on the other hand, are not restrictive and allow more spontaneous communication without the need for an intermediary between human and machine. Vision-based gesture recognition has therefore been a popular area of research for the past thirty years.
Hand gesture recognition finds application in many areas, particularly the automotive industry, where designers of advanced automotive human-machine interfaces (HMIs) use gesture recognition to improve driver and vehicle safety. Technology advances, however, go beyond active/passive safety and into convenience and comfort. In this context, one of America's big three automakers has partnered with the Centre of Pattern Analysis and Machine Intelligence (CPAMI) at the University of Waterloo to investigate expanding their product segment through machine learning, providing increased driver convenience and comfort, with the particular application of hand gesture recognition for autonomous car parking.
In this thesis, we leverage state-of-the-art deep learning and optimization techniques to develop a vision-based multiview dynamic hand gesture recognizer for a self-parking system. We propose a 3DCNN gesture model architecture that we train on a publicly available hand gesture database. We apply transfer learning to fine-tune the pre-trained gesture model on a custom-made dataset, which significantly improves the proposed system's performance in real-world environments. We adapt the architecture of the end-to-end solution to expand the state-of-the-art video classifier from a single input (fed by a monocular camera) to a multiview 360° feed provided by a six-camera module. Finally, we optimize the proposed solution to run on a resource-limited embedded platform (Nvidia Jetson TX2) used by automakers for vehicle-based features, without sacrificing the accuracy, robustness, or real-time functionality of the system.
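The transfer-learning step described above (freezing a pretrained feature extractor and retraining only the classifier head on a smaller custom dataset) can be sketched in PyTorch. The tiny 3D CNN, class counts, and random data below are illustrative assumptions, not the thesis's actual model or datasets:

```python
# Illustrative sketch of fine-tuning a pretrained 3D CNN on a small
# custom gesture dataset: copy the feature extractor, replace the head,
# and freeze the copied weights. Architecture and data are stand-ins.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 8, kernel_size=3, padding=1),  # clips: (C, T, H, W)
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(8, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# "Pretrained" model on a larger public gesture vocabulary (25 classes here).
pretrained = Tiny3DCNN(n_classes=25)

# Transfer: reuse the feature extractor for a smaller set of parking
# gestures (5 classes here) and freeze the copied weights.
model = Tiny3DCNN(n_classes=5)
model.features.load_state_dict(pretrained.features.state_dict())
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One fine-tuning step on a fake batch of 4 video clips (3x8x32x32).
clips = torch.randn(4, 3, 8, 32, 32)
labels = torch.randint(0, 5, (4,))
loss = criterion(model(clips), labels)
loss.backward()
optimizer.step()
print(f"fine-tune loss: {loss.item():.3f}")
```

Only the new head receives gradient updates, which is what makes fine-tuning on a small custom dataset practical without overfitting the full network.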
The passive operating mode of the linear optical gesture sensor
The study evaluates the influence of natural light conditions on the effectiveness of a linear optical gesture sensor working in the presence of ambient light only (passive mode). The orientation of the device relative to the light source was varied in order to verify the sensitivity of the sensor. A criterion for differentiating between two states, "possible gesture" and "no gesture", was proposed. Additionally, different light conditions and candidate features were investigated, relevant to the decision of switching between the passive and active modes of the device. The criterion was evaluated using a specificity and sensitivity analysis of the binary ambient-light-condition classifier. The elaborated classifier predicts ambient light conditions with an accuracy of 85.15%. Once the light conditions are known, the hand pose can be detected. The hand-pose classifier, trained on data obtained in the passive mode under favourable light conditions, achieved an accuracy of 98.76%. It was also shown that the passive operating mode of the linear gesture sensor reduces total energy consumption by 93.34%, bringing the current draw down to 0.132 mA. It was concluded that the linear optical sensor can be used efficiently in various lighting conditions.
Comment: 10 pages, 14 figures