Piano Crossing - Walking on a Keyboard
Piano Crossing is an interactive art installation that turns a pedestrian crossing marked with white stripes into a piano keyboard, so that pedestrians generate music by walking over it. A matching tone is generated while a pedestrian is over a particular stripe, or key. A digital camera is directed at the crossing from above, and a dedicated computer vision application maps the stripes of the pedestrian crossing to piano keys and detects over which key the center of gravity of each pedestrian lies at any given moment. Additional black stripes, representing the black piano keys, are added to the crossing. The application consists of two parts: (1) initialization, in which the model of the abstract piano keyboard is mapped to the image of the pedestrian crossing, and (2) detection of pedestrians on the crossing, so that musical tones can be generated according to their locations. The art installation Piano Crossing was presented to the public for the first time during the 51st Jazz Festival in Ljubljana in July 2010.
A gesturally controlled improvisation system for piano
This paper was presented at the Live Interfaces conference 2012. Copyright © 2012 The Authors. This paper presents a gesturally controlled, live-improvisation system, developed for an experimental pianist and used during a performance at the 2011 International Conference on New Interfaces for Musical Expression. We describe the gesture-recognition architecture used to recognize the pianist's real-time gestures, the audio infrastructure developed specifically for this piece, and the core lessons learned over the process of developing this performance system.
Music Maker – A Camera-based Music Making Tool for Physical Rehabilitation
The therapeutic effects of playing music are being recognized increasingly in the field of rehabilitation medicine. People with physical disabilities, however, often do not have the motor dexterity needed to play an instrument. We developed a camera-based human-computer interface called "Music Maker" to provide such people with a means to make music by performing therapeutic exercises. Music Maker uses computer vision techniques to convert the movements of a patient's body part, for example, a finger, hand, or foot, into musical and visual feedback using the open software platform EyesWeb. It can be adjusted to a patient's particular therapeutic needs and provides quantitative tools for monitoring the recovery process and assessing therapeutic outcomes. We tested the potential of Music Maker as a rehabilitation tool with six subjects who responded to or created music in various movement exercises. In these proof-of-concept experiments, Music Maker performed reliably and showed its promise as a therapeutic device.

National Science Foundation (IIS-0308213, IIS-039009, IIS-0093367, P200A01031, EIA-0202067 to M.B.); National Institutes of Health (DC-03663 to E.S.); Boston University Dudley Allen Sargent Research Fund (to A.L.)
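The movement-to-music mapping described above can be illustrated with a minimal Python stand-in (the actual system is built on EyesWeb). The linear position-to-pitch mapping, the note range, and the range-of-motion measure are assumptions chosen for illustration, not details of Music Maker.

```python
# A hedged sketch: convert a tracked body-part trajectory into pitches by
# scaling vertical position into a MIDI note range, and compute a simple
# quantitative measure of the movement. All ranges are assumptions.

def position_to_pitch(y: float, y_min: float, y_max: float,
                      low_note: int = 48, high_note: int = 72) -> int:
    """Linearly map a vertical position to a MIDI pitch. Calibrating
    y_min/y_max per patient mirrors adjusting the tool to individual
    therapeutic needs."""
    y = max(y_min, min(y, y_max))               # clamp to calibrated range
    frac = (y - y_min) / (y_max - y_min)
    return round(low_note + frac * (high_note - low_note))

def range_of_motion(trajectory: list) -> float:
    """One possible quantitative indicator for monitoring recovery."""
    return max(trajectory) - min(trajectory)

traj = [0.2, 0.35, 0.5, 0.4, 0.6]               # normalized positions
pitches = [position_to_pitch(y, 0.0, 1.0) for y in traj]
rom = range_of_motion(traj)
```

Clamping the input before scaling keeps out-of-range tracking noise from producing pitches outside the configured note range.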
Evaluating rules of interaction for object manipulation in cluttered virtual environments
A set of rules is presented for the design of interfaces that allow virtual objects to be manipulated in 3D virtual environments (VEs). The rules differ from other interaction techniques because they focus on the problems of manipulating objects in cluttered spaces rather than open spaces. Two experiments are described that evaluated the effect of different interaction rules on participants' performance in a task known as "the piano mover's problem." This task required participants to move a virtual human through parts of a virtual building while simultaneously manipulating a large virtual object held in the virtual human's hands, resembling the simulation of manual materials handling in a VE for ergonomic design. Throughout, participants viewed the VE on a large monitor from an "over-the-shoulder" perspective. In the most cluttered VEs, the time participants took to complete the task varied by up to 76% across different combinations of rules, indicating the need for flexible forms of interaction in such environments.
Speech Development by Imitation
The Double Cone Model (DCM) is a model of how the brain transforms sensory input into motor commands through successive stages of data compression and expansion. We have tested a subset of the DCM on speech recognition, production, and imitation. The experiments show that the DCM is a good candidate for an artificial speech processing system that can develop autonomously. We show that the DCM can learn a repertoire of speech sounds by listening to speech input. It is also able to link the individual elements of speech into sequences that can be recognized or reproduced, thus allowing the system to imitate spoken language.
A Conceptual Framework for Motion Based Music Applications
Imaginary projections are the core of the framework for motion-based music applications presented in this paper. Their design depends on the space covered by the motion-tracking device, but also on the musical feature involved in the application. They can be considered a very powerful tool because they allow one not only to project the image of a traditional acoustic instrument into the virtual environment, but also to express any spatially defined abstract concept. The system pipeline starts from the musical content and, through a geometrical interpretation, arrives at its projection in physical space. Three case studies involving different motion-tracking devices and different musical concepts are analyzed. The three examined applications have been programmed and already tested by the authors. They aim, respectively, at expressive musical interaction (Disembodied Voices), tonal music knowledge (Harmonic Walk), and 20th-century music composition (Hand Composer).
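One way to picture the "imaginary projection" idea is to project a musical structure onto physical space by partitioning the tracked area into regions, so that a performer's position selects harmonic content (loosely in the spirit of Harmonic Walk). The sketch below is an assumption-laden illustration: the region layout, chord set, and voicings are invented for the example, not taken from the paper.

```python
# Hedged sketch: project chords of a key onto a tracked floor span by
# splitting it into equal-width regions, one region per chord. The chord
# dictionary and the 3-metre span are assumptions for illustration.

C_MAJOR_TRIADS = {  # scale degree -> MIDI notes (assumed voicing)
    "I": [60, 64, 67], "IV": [65, 69, 72], "V": [67, 71, 74],
}

def project_regions(x_extent: float, degrees: list) -> list:
    """Split the tracked span [0, x_extent) into equal-width regions,
    pairing each region with a chord degree."""
    w = x_extent / len(degrees)
    return [(i * w, (i + 1) * w, d) for i, d in enumerate(degrees)]

def chord_at(x: float, regions: list) -> list:
    """Return the chord whose region contains position x (empty if outside)."""
    for left, right, degree in regions:
        if left <= x < right:
            return C_MAJOR_TRIADS[degree]
    return []

regions = project_regions(3.0, ["I", "IV", "V"])   # a 3 m tracked span
chord = chord_at(1.5, regions)                     # middle region -> IV triad
```

The geometrical interpretation here is deliberately trivial (equal-width strips); the paper's point is that any spatially defined musical concept could take the place of this partition.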