Visual Localisation of Mobile Devices in an Indoor Environment under Network Delay Conditions
Current progress in home automation and service robotics has highlighted the
need for interoperability mechanisms that allow standard communication between
the two systems. During the development of the DHCompliant protocol, the
problem of locating mobile devices in an indoor environment was investigated.
Communication between the device and the location service was analysed to
compare the delay introduced by web services with that of raw sockets. Because
real-time location systems depend on timely data, a basic interoperability
tool such as web services can be ineffective in this scenario owing to the
delay added by each service invocation. This paper introduces a web service
that resolves a coordinate request without any significant delay compared
with sockets.
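The latency comparison described above can be sketched with a toy measurement: a coordinate request answered over a raw TCP socket on loopback, timed end to end. The request string, the "x,y,z" reply format, and the mock coordinates are illustrative assumptions, not the DHCompliant protocol itself.

```python
# Minimal sketch: round-trip latency of a coordinate request over a raw
# TCP socket on loopback. The message format is invented for illustration.
import socket
import threading
import time

def coordinate_server(sock):
    conn, _ = sock.accept()
    with conn:
        conn.recv(64)                     # read the request
        conn.sendall(b"1.25,3.40,0.00")   # reply with mock coordinates

def request_coordinates(port):
    t0 = time.perf_counter()
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"GET_COORDS")
        reply = c.recv(64).decode()
    delay = time.perf_counter() - t0
    return [float(v) for v in reply.split(",")], delay

server = socket.socket()
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=coordinate_server, args=(server,), daemon=True).start()

coords, delay = request_coordinates(port)
print(coords, f"{delay * 1000:.2f} ms")
```

A web-service call would add HTTP framing and SOAP/XML parsing on top of this round trip, which is the overhead the paper sets out to measure.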
Tracking Gaze and Visual Focus of Attention of People Involved in Social Interaction
The visual focus of attention (VFOA) has been recognized as a prominent
conversational cue. We are interested in estimating and tracking the VFOAs
associated with multi-party social interactions. We note that in such
situations the participants either look at each other or at an object of
interest, so their eyes are not always visible. Consequently, neither gaze
nor VFOA estimation can be based on eye detection and tracking. We propose a
method that exploits the correlation between eye gaze and head movements. Both
VFOA and gaze are modeled as latent variables in a Bayesian switching
state-space model. The proposed formulation leads to a tractable learning
procedure and to an efficient algorithm that simultaneously tracks gaze and
visual focus. The method is tested and benchmarked using two publicly available
datasets that contain typical multi-party human-robot and human-human
interactions.

Comment: 15 pages, 8 figures, 6 tables
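The filtering idea behind the method can be illustrated with a much simpler toy model: VFOA as a discrete latent variable tracked with an HMM forward pass over noisy head-direction observations. The targets, angles, transition probability, and Gaussian noise model below are all invented for illustration; the paper's actual formulation is a Bayesian switching linear state-space model over both gaze and VFOA.

```python
# Toy sketch: VFOA as a discrete latent state, filtered from
# head-direction observations with an HMM forward pass.
import math

targets = {"person_A": -30.0, "person_B": 30.0, "object": 0.0}  # degrees
names = list(targets)
stay = 0.8  # probability of keeping the same focus between frames

def likelihood(obs_deg, target_deg, sigma=15.0):
    # Gaussian likelihood of a head direction given the focused target
    return math.exp(-0.5 * ((obs_deg - target_deg) / sigma) ** 2)

def forward(observations):
    belief = {n: 1.0 / len(names) for n in names}
    for obs in observations:
        # predict: either stay on the same target or jump uniformly
        pred = {n: stay * belief[n]
                   + (1 - stay)
                   * sum(belief[m] for m in names if m != n)
                   / (len(names) - 1)
                for n in names}
        # update with the head-direction observation, then normalise
        post = {n: pred[n] * likelihood(obs, targets[n]) for n in names}
        z = sum(post.values())
        belief = {n: p / z for n, p in post.items()}
    return belief

belief = forward([-28.0, -31.0, -25.0])
print(max(belief, key=belief.get))
```

The key point the sketch shares with the paper is that the focus of attention is never observed directly; it is inferred from correlated head movements.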
Regulating stepping during fixed-speed and self-paced treadmill walking
Background: Treadmill walking should closely simulate overground walking for research validity and optimal skill transfer. Traditional fixed-speed (FS) treadmill walking may not simulate natural walking because of the fixed belt speed and lack of visual cues. Self-paced (SP) treadmill walking, especially feedback-controlled SP walking, enables close-to-real-time belt-speed changes that follow the user's speed changes, and the sensitivity level of the SP feedback determines how quickly the treadmill responds to those changes. Few studies have examined the differences between FS and SP treadmill walking, or between sensitivity levels of SP treadmills, and their methods were questionable because they averaged kinematic and kinetic parameters and did not examine the treadmill and subject speed data directly. This study compared FS with two SP modes, using variation of treadmill speed and of the user's speed as dependent variables.
Method: Thirteen young healthy subjects participated. Subjects walked on a motorized split-belt treadmill under FS, high-sensitivity SP (SP-H), and low-sensitivity SP (SP-L) conditions at normal walking speed. Root mean square error (RMSE) values for the subject's pelvis global speed (Vpg), pelvis speed with respect to the treadmill (Vpt), and treadmill speed (Vtg) were computed for all trials.
Results: Significant condition effects were found between FS and the two SP modes in all RMSE values (p < 0.001). The two sensitivity levels of SP produced similar speed patterns. Large subject × condition interaction effects were found for all variables (p < 0.001); only small subject effects were found.
Conclusions: The results reveal different walking patterns between FS and SP, whereas the two sensitivity levels differed little. More habituation time may be needed for subjects to learn to respond optimally to the SP algorithm. Future work should include training subjects for more natural responses, applying a feed-forward algorithm, and testing the effect of optic flow on FS and SP speed variation.
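The dependent variable used throughout the study is an RMSE over speed signals; a minimal sketch of that computation is below. The speed samples and target speed are made up for illustration, not the study's data.

```python
# Sketch of the RMSE measure: variability of a speed signal around a
# reference. Values are invented sample data, not the study's recordings.
import math

def rmse(signal, reference):
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(signal, reference))
                     / len(signal))

# e.g. treadmill belt speed (Vtg) around the target walking speed
belt_speed = [1.30, 1.28, 1.33, 1.31, 1.27]   # m/s
target = [1.30] * len(belt_speed)
print(round(rmse(belt_speed, target), 4))
```

The same formula applies to Vpg and Vpt; higher RMSE means more speed variation under a given treadmill condition.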
What Will I Do Next? The Intention from Motion Experiment
In computer vision, video-based approaches have been widely explored for the
early classification and the prediction of actions or activities. However, it
remains unclear whether this modality (as compared to 3D kinematics) can still
be reliable for the prediction of human intentions, defined as the overarching
goal embedded in an action sequence. Since the same action can be performed
with different intentions, this problem is more challenging, yet tractable, as
shown by quantitative cognitive studies that exploit the 3D kinematics
acquired through motion capture systems. In this paper, we bridge cognitive and
computer vision studies, by demonstrating the effectiveness of video-based
approaches for the prediction of human intentions. Specifically, we propose
Intention from Motion, a new paradigm where, without using any contextual
information, we consider instantaneous grasping motor acts involving a bottle
in order to forecast why the bottle has been reached (to pass it, to place it
in a box, to pour, or to drink the liquid inside). We process only the
grasping onsets, casting intention prediction as a classification problem.
Leveraging our multimodal acquisition (3D motion capture data and 2D optical
videos), we compare the most commonly used 3D descriptors from cognitive
studies with state-of-the-art video-based techniques. Since the two analyses
achieve an equivalent performance, we demonstrate that computer vision tools
are effective in capturing the kinematics and facing the cognitive problem of
human intention prediction.

Comment: 2017 IEEE Conference on Computer Vision and Pattern Recognition
Workshop
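"Casting intention prediction as a classification problem" can be sketched with a toy example: grasp-onset feature vectors assigned to one of the four intentions by a nearest-centroid rule. The two features (say, wrist velocity and grip aperture) and all the numbers are invented; the paper compares real 3D motion-capture descriptors against video-based features, not this simple rule.

```python
# Toy sketch: intention prediction as 4-way classification of
# grasp-onset feature vectors, using a nearest-centroid rule.
import math

# invented training examples: [wrist_velocity, grip_aperture] per intention
train = {
    "pass":  [[0.9, 0.2], [0.8, 0.3]],
    "place": [[0.4, 0.8], [0.5, 0.7]],
    "pour":  [[0.2, 0.4], [0.3, 0.5]],
    "drink": [[0.7, 0.9], [0.6, 0.8]],
}

# per-intention centroid of the training vectors
centroids = {label: [sum(col) / len(col) for col in zip(*vecs)]
             for label, vecs in train.items()}

def predict(features):
    # assign the intention whose centroid is nearest in feature space
    return min(centroids, key=lambda lab: math.dist(features, centroids[lab]))

print(predict([0.85, 0.25]))
```

The point of the paradigm is that only the instant of grasping is observed: whatever classifier is used sees no context after the onset, yet must recover the overarching intention.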