30,018 research outputs found
3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit
We present 3DTouch, a novel wearable 3D input device worn on the fingertip
for 3D manipulation tasks. 3DTouch is designed to fill the gap for a 3D
input device that is self-contained, mobile, and works universally across
various 3D platforms. This paper presents a low-cost approach to designing and
implementing such a device, based on a relative positioning technique that
combines an optical laser sensor with a 9-DOF inertial measurement unit.
The device employs touch input for the benefits of passive haptic
feedback and movement stability; touch interaction also makes
3DTouch less fatiguing to use over many hours than 3D spatial
input devices. We propose a set of 3D interaction techniques, including
selection, translation, and rotation, using 3DTouch. An evaluation
demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for
subtle touch interaction in 3D space. Modular solutions like 3DTouch open up a
new design space for interaction techniques to build on.
Comment: 8 pages, 7 figures
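The core idea of the relative positioning technique described above can be sketched as follows: the optical laser sensor reports 2D displacement in the fingertip's local frame, and the IMU's fused orientation rotates that displacement into a world-frame 3D position update. This is a minimal illustrative sketch, not the authors' implementation; the `counts_per_mm` resolution and the quaternion convention `(w, x, y, z)` are assumptions.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    r = np.array([x, y, z])
    # v' = v + 2 r x (r x v + w v), the standard quaternion rotation identity
    return v + 2.0 * np.cross(r, np.cross(r, v) + w * v)

def integrate_position(pos, q_orientation, dxy_sensor, counts_per_mm=400.0):
    """Turn a 2D optical-sensor displacement (in sensor counts) into a
    world-frame 3D position update using the IMU orientation estimate."""
    # The laser sensor only measures planar motion in its own frame.
    d_local = np.array([dxy_sensor[0], dxy_sensor[1], 0.0]) / counts_per_mm
    # Rotate the local displacement into the world frame via the IMU quaternion.
    d_world = quat_rotate(q_orientation, d_local)
    return pos + d_world
```

Accumulating these per-frame updates yields a relative (not absolute) 3D trajectory, which matches the abstract's description of a relative positioning approach.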
Multisensor-based human detection and tracking for mobile service robots
One of the fundamental issues for service robots is human-robot interaction. To perform such tasks and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system uses a new algorithm for laser-based leg detection with the on-board laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be highly discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even while the robot moves. Furthermore, faces are detected using the robot's camera, and this information is fused with the leg positions using a sequential implementation of the Unscented Kalman Filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms.
Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
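The sequential fusion scheme described above can be illustrated with a simplified linear example: the state is the person's 2D position, and each sensor's position estimate is applied as its own measurement update, one after the other. The abstract's system uses an Unscented Kalman Filter over a richer state; this linear sketch, with made-up measurements and noise covariances, only shows the sequential-update structure.

```python
import numpy as np

def kalman_update(x, P, z, R):
    """One linear Kalman measurement update with H = I
    (each sensor directly measures the person's position)."""
    S = P + R                      # innovation covariance
    K = P @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ (z - x)        # corrected state
    P_new = (np.eye(len(x)) - K) @ P
    return x_new, P_new

# Illustrative values (not from the paper):
x = np.array([0.0, 0.0])           # predicted person position (x, y), robot frame
P = np.eye(2) * 1.0                # state covariance after prediction
z_legs = np.array([1.0, 0.2])      # position from the laser leg detector
R_legs = np.eye(2) * 0.05          # laser is assumed more precise
z_face = np.array([1.1, 0.1])      # position from the camera face detector
R_face = np.eye(2) * 0.2           # camera is assumed noisier

# Sequential fusion: apply the leg update first, then the face update.
x, P = kalman_update(x, P, z_legs, R_legs)
x, P = kalman_update(x, P, z_face, R_face)
```

Processing the measurements sequentially gives the same posterior as a single stacked update in the linear case, while letting each sensor arrive and be fused independently.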
Indexing, browsing and searching of digital video
Video is a communications medium that normally brings together moving pictures with a synchronised audio track into a discrete piece or pieces of information. The size of a “piece” of video can variously be referred to as a frame, a shot, a scene, a clip, a programme or an episode, and these are distinguished by their lengths and by their composition. We shall return to the definition of each of these in section 4 of this chapter. In modern society, video is ver
MobiFace: A Novel Dataset for Mobile Face Tracking in the Wild
Face tracking serves as the crucial initial step in mobile applications
that analyse target faces over time in mobile settings. However, this
problem has received little attention, mainly due to the scarcity of dedicated
face tracking benchmarks. In this work, we introduce MobiFace, the first
dataset for single face tracking in mobile situations. It consists of 80
unedited live-streaming mobile videos captured by 70 different smartphone users
in fully unconstrained environments. Bounding boxes are manually
labelled. The videos are carefully selected to cover typical smartphone usage.
The videos are also annotated with 14 attributes, including 6 newly proposed
attributes and 8 commonly seen in object tracking. 36 state-of-the-art
trackers, including facial landmark trackers, generic object trackers and
trackers that we have fine-tuned or improved, are evaluated. The results
suggest that mobile face tracking cannot be solved through existing approaches.
In addition, we show that fine-tuning on the MobiFace training data
significantly boosts the performance of deep learning-based trackers,
suggesting that MobiFace captures the unique characteristics of mobile face
tracking. Our goal is to offer the community a diverse dataset to enable the
design and evaluation of mobile face trackers. The dataset, annotations and the
evaluation server will be made available at \url{https://mobiface.github.io/}.
Comment: To appear at the 14th IEEE International Conference on Automatic Face
and Gesture Recognition (FG 2019)