Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory
The Pepper robot has become a widely recognised face for the perceived
potential of social robots to enter our homes and businesses. To date,
however, commercial and research applications of the Pepper have been largely
restricted to roles in which the robot can remain stationary. This restriction
stems from a number of technical limitations, including limited sensing
capabilities, and has consequently reduced the range of roles in which use of
the robot can be explored. In this paper, we present our approach to solving
these problems, with the intention of opening up new research applications for
the robot. To demonstrate the applicability of our approach, we frame this
work within the context of providing interactive tours of an open-plan
robotics laboratory.
Comment: 8 pages. Submitted to IROS 2018 (2018 IEEE/RSJ International
Conference on Intelligent Robots and Systems). See
https://bitbucket.org/pepper_qut/ for access to the software.
A Conceptual Framework for Motion Based Music Applications
Imaginary projections are the core of the framework for motion-based
music applications presented in this paper. Their design depends not only
on the space covered by the motion-tracking device, but also
on the musical feature involved in the application. They are a
powerful tool because they allow one not only to project
the image of a traditional acoustic instrument into the virtual environment,
but also to express any spatially defined abstract concept.
The system pipeline starts from the musical content and, through a
geometrical interpretation, arrives at its projection in the physical
space. Three case studies involving different motion-tracking devices
and different musical concepts are analysed. The three
examined applications have already been programmed and tested
by the authors. They aim, respectively, at expressive musical interaction
(Disembodied Voices), tonal music knowledge (Harmonic
Walk) and twentieth-century music composition (Hand Composer).
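The pipeline's first step, projecting a musical feature onto a position in the tracked space, can be sketched minimally as below; the function name, pitch range and room width are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an "imaginary projection": linearly map a musical
# feature (here, MIDI pitch) onto an x-coordinate inside the volume covered
# by the motion-tracking device. Ranges are illustrative assumptions.

def pitch_to_position(midi_pitch, lo=36, hi=84, width_m=3.0):
    """Project a MIDI pitch onto an x-coordinate (metres) across the
    width of the tracked space, clamping out-of-range pitches."""
    clamped = max(lo, min(hi, midi_pitch))
    return (clamped - lo) / (hi - lo) * width_m

# Middle C (MIDI 60) lands at the centre of a 3 m wide tracking area.
print(pitch_to_position(60))  # 1.5
```

Any other spatially definable musical concept (a chord region, a timbre zone) could be substituted for pitch in the same scheme.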
Dynamic urban projection mapping
“Dynamic projection mapping” is a variation of the better-known “projection mapping”. It
considers the perceptual analysis of the urban landscape in which both the video projection and the
observer’s displacement speed are taken into account. The latter, in particular, is variable and may
depend on factors not directly controllable by the driver (slowdowns due to accidents, rallies, etc.).
This speed can be monitored by a number of traffic-flow measurement systems whose
data are available on the internet, such as the Google Maps APIs, and/or by speed sensors located close to the
point of interest. The projected content becomes dynamic and varies according to how the
observer in the vehicle perceives it: slow, medium or fast.
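The speed-dependent content switching described above can be sketched as a simple tier classifier; the thresholds and labels are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: classify the observer's measured speed (e.g. obtained from
# a traffic-flow API or a roadside sensor) into the three perceptual tiers
# and pick the matching projection content. Thresholds are assumptions.

def select_content(speed_kmh):
    """Return the content tier for a given vehicle speed in km/h."""
    if speed_kmh < 20:
        return "slow"    # detailed, slowly evolving imagery
    elif speed_kmh < 60:
        return "medium"
    return "fast"        # bold imagery readable at a glance

print(select_content(45))  # medium
```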
RGB-D datasets using Microsoft Kinect or similar sensors: a survey
RGB-D data has turned out to be a very useful representation of an indoor scene for solving fundamental computer vision problems. It combines the advantages of the color image, which provides appearance information about an object, with those of the depth image, which is immune to variations in color, illumination, rotation angle and scale. With the invention of the low-cost Microsoft Kinect sensor, initially used for gaming and later a popular device for computer vision, high-quality RGB-D data can be acquired easily. In recent years, more and more RGB-D image/video datasets dedicated to various applications have become available, and these are of great importance for benchmarking the state of the art. In this paper, we systematically survey popular RGB-D datasets for different applications, including object recognition, scene classification, hand gesture recognition, 3D simultaneous localization and mapping, and pose estimation. We provide insights into the characteristics of each important dataset, and compare the popularity and the difficulty of those datasets. Overall, the main goal of this survey is to give a comprehensive description of the available RGB-D datasets and thus to guide researchers in the selection of suitable datasets for evaluating their algorithms.
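As a concrete illustration of what the depth channel adds over color alone, the sketch below back-projects a Kinect-style depth map into a 3D point cloud with a pinhole camera model, the basic operation most RGB-D pipelines (recognition, SLAM, pose estimation) build on. The intrinsics are commonly quoted Kinect v1 defaults and are an assumption here.

```python
import numpy as np

# Illustrative sketch: turn a (H, W) metric depth map into (H, W, 3) XYZ
# points via the pinhole model. Intrinsics below are typical published
# Kinect v1 values, used only as an assumption.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def depth_to_points(depth_m):
    """depth_m: (H, W) array of depths in metres; returns (H, W, 3) points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.stack([x, y, depth_m], axis=-1)

pts = depth_to_points(np.full((480, 640), 2.0))  # a flat wall 2 m away
print(pts[240, 320])  # pixel near the principal point -> roughly [0, 0, 2]
```

Pairing each 3D point with the RGB pixel at the same (u, v) yields the colored point clouds many of the surveyed datasets distribute.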
MetaSpace II: Object and full-body tracking for interaction and navigation in social VR
MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple
users can not only see and hear but also interact with each other, grasp and
manipulate objects, walk around in space, and get tactile feedback. MS2 allows
walking in physical space by tracking each user's skeleton in real time, and
lets users feel by employing passive haptics, i.e., when users touch or
manipulate an object in the virtual world, they simultaneously touch or
manipulate a corresponding object in the physical world. To enable these
elements in VR, MS2 creates a correspondence in spatial layout and object
placement by building the virtual world on top of a 3D scan of the real world.
Through this association between the real and virtual worlds, users are able to
walk freely while wearing a head-mounted device, avoid obstacles like walls and
furniture, and interact with people and objects. Most current VR
environments are designed for a single-user experience where interactions
with virtual objects are mediated by hand-held input devices or hand gestures.
Additionally, users are only shown a representation of their hands in VR,
floating in front of the camera as seen from a first-person perspective. We
believe that representing each user as a full-body avatar controlled by the
natural movements of the person in the real world (see Figure 1d) can greatly
enhance believability and a user's sense of immersion in VR.
Comment: 10 pages, 9 figures. Video:
http://living.media.mit.edu/projects/metaspace-ii
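The real-to-virtual correspondence that MS2 builds on top of a 3D scan can be sketched as a rigid transform applied to each tracked joint position; the calibration values (yaw and offset) below are invented for illustration, not taken from the system.

```python
import math

# Minimal sketch of a real-to-virtual mapping: a tracked position in the
# room's coordinate frame is carried into the virtual world by the rigid
# transform (yaw rotation + translation) obtained when the 3D scan is
# registered. YAW and OFFSET are assumed calibration values.

YAW = math.radians(90.0)   # rotation aligning the scan with the virtual world
OFFSET = (1.0, 0.0, -2.0)  # translation of the scan origin, in metres

def real_to_virtual(p):
    """Map a physical (x, y, z) position to virtual-world coordinates."""
    x, y, z = p
    c, s = math.cos(YAW), math.sin(YAW)
    xr, zr = c * x + s * z, -s * x + c * z  # rotate about the vertical axis
    return (xr + OFFSET[0], y + OFFSET[1], zr + OFFSET[2])

# A user standing 1 m along the room's x-axis, head at 1.7 m:
print(real_to_virtual((1.0, 1.7, 0.0)))
```

Because the same transform is applied to every tracked joint and to the scanned geometry, a touch on a physical table lands on its virtual counterpart, which is what makes the passive haptics work.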