Vision-based continuous Graffiti™-like text entry system
Recent advances in electronics and the computer industry now make it possible to build real-time, low-cost computer vision systems even on personal computers. For this reason, computer-vision-based human-computer interaction systems have become feasible. A vision-based continuous Graffiti™-like text entry system is presented. The user sketches characters of a Graffiti™-like alphabet in a continuous manner on a flat surface using a laser pointer. The beam of the laser pointer is tracked in the image sequence captured by a camera, and the written word is recognized from the extracted trace of the laser beam. © 2004 Society of Photo-Optical Instrumentation Engineers
This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer
We address the problem of controlling the workspace of a 3-DoF mobile robot.
In a human-robot shared space, robots should navigate in a human-acceptable way
according to the users' demands. For this purpose, we employ virtual borders,
i.e. non-physical borders, that allow a user to restrict the robot's
workspace. To this end, we propose an interaction method based on a laser
pointer to intuitively define virtual borders. This interaction method uses a
previously developed framework based on robot guidance to change the robot's
navigational behavior. Furthermore, we extend this framework to increase the
flexibility by considering different types of virtual borders, i.e. polygons
and curves separating an area. We evaluated our method with 15 non-expert users
concerning correctness, accuracy and teaching time. The experimental results
revealed a high accuracy and linear teaching time with respect to the border
length while correctly incorporating the borders into the robot's navigational
map. Finally, our user study showed that non-expert users can employ our
interaction method.

Comment: Accepted at 2019 Third IEEE International Conference on Robotic Computing (IRC), supplementary video: https://youtu.be/lKsGp8xtyI
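For the polygon type of virtual border mentioned above, the restriction can be sketched with a standard point-in-polygon test. This is an illustrative sketch, not the paper's framework: the polygon vertices and function names are ours, and a goal pose is accepted only if it lies inside the user-defined border.

```python
# Hedged sketch: restrict a robot's workspace to the interior of a
# user-defined polygon border, using ray casting (count how many polygon
# edges a horizontal ray from the point crosses; odd means inside).

def inside_polygon(point, polygon):
    """Ray-casting point-in-polygon test for a list of (x, y) vertices."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge spans the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def goal_allowed(goal, border):
    """Accept a navigation goal only when it respects the virtual border."""
    return inside_polygon(goal, border)
```

A curve-type border separating an area would instead be handled by marking which side of the curve remains navigable.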
Seamless and Secure VR: Adapting and Evaluating Established Authentication Systems for Virtual Reality
Virtual reality (VR) headsets are enabling a wide range of new
opportunities for the user. For example, in the near future users
may be able to visit virtual shopping malls and virtually join
international conferences. These and many other scenarios pose
new questions with regards to privacy and security, in particular
authentication of users within the virtual environment. As a first
step towards seamless VR authentication, this paper investigates
the direct transfer of well-established concepts (PIN, Android
unlock patterns) into VR. In a pilot study (N = 5) and a lab
study (N = 25), we adapted existing mechanisms and evaluated
their usability and security for VR. The results indicate that
both PINs and patterns are well suited for authentication in
VR. We found that the usability of both methods matched the
performance known from the physical world. In addition, the
private visual channel makes authentication harder to observe,
indicating that authentication in VR using traditional concepts
already achieves a good balance in the trade-off between usability
and security. The paper contributes to a better understanding of
authentication within VR environments, by providing the first
investigation of established authentication methods within VR,
and presents the base layer for the design of future authentication
schemes that are used in VR environments only.
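The paper studies PIN and pattern *entry* in VR; as a complement, the verification backend such a system would call can be sketched as below. This is our own illustrative assumption, not part of the paper: the entered PIN is compared in constant time so response timing does not leak how many leading digits matched.

```python
import hmac

# Hedged sketch of PIN verification (illustrative; the paper evaluates the
# entry mechanisms, not the backend). hmac.compare_digest compares in
# constant time with respect to the content of equal-length inputs.

def verify_pin(entered: str, stored: str) -> bool:
    """Constant-time PIN comparison."""
    return hmac.compare_digest(entered.encode(), stored.encode())
```

An Android-style unlock pattern can reuse the same check by encoding the visited grid cells as a digit string (e.g. "03678" for a 3×3 grid).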
Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality
We address the problem of interactively controlling the workspace of a mobile
robot to ensure a human-aware navigation. This is especially of relevance for
non-expert users living in human-robot shared spaces, e.g. home environments,
since they want to keep the control of their mobile robots, such as vacuum
cleaning or companion robots. Therefore, we introduce virtual borders that are
respected by a robot while performing its tasks. For this purpose, we employ a
RGB-D Google Tango tablet as human-robot interface in combination with an
augmented reality application to flexibly define virtual borders. We evaluated
our system with 15 non-expert users concerning accuracy, teaching time and
correctness and compared the results with other baseline methods based on
visual markers and a laser pointer. The experimental results show that our
method features an equally high accuracy while reducing the teaching time
significantly compared to the baseline methods. This holds for different border
lengths, shapes and variations in the teaching process. Finally, we
demonstrated the correctness of the approach, i.e. the mobile robot changes its
navigational behavior according to the user-defined virtual borders.

Comment: Accepted at 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), supplementary video: https://youtu.be/oQO8sQ0JBR
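Incorporating a user-defined border into the robot's navigational map, as described above, can be sketched by rasterizing the border polyline into an occupancy grid. This is an illustrative sketch under our own assumptions (grid layout, cell values, and function names are not from the paper): each segment is traced with Bresenham's line algorithm and the covered cells are marked occupied, so the planner treats the virtual border like a wall.

```python
# Hedged sketch: rasterize a border polyline into an occupancy grid.

def bresenham(p0, p1):
    """Integer grid cells on the segment from p0 to p1 (Bresenham)."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    cells = []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return cells

def add_border_to_grid(grid, border_points, occupied=100):
    """Mark every cell along the border polyline as occupied."""
    for p0, p1 in zip(border_points, border_points[1:]):
        for x, y in bresenham(p0, p1):
            grid[y][x] = occupied
    return grid
```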
3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit
We present 3DTouch, a novel 3D wearable input device worn on the fingertip
for 3D manipulation tasks. 3DTouch is designed to fill the missing gap of a 3D
input device that is self-contained, mobile, and universally working across
various 3D platforms. This paper presents a low-cost solution to designing and
implementing such a device. Our approach relies on a relative positioning
technique using an optical laser sensor and a 9-DOF inertial measurement unit.
3DTouch is self-contained, and designed to universally work on various 3D
platforms. The device employs touch input for the benefits of passive haptic
feedback, and movement stability. On the other hand, with touch interaction,
3DTouch is conceptually less fatiguing to use over many hours than 3D spatial
input devices. We propose a set of 3D interaction techniques including
selection, translation, and rotation using 3DTouch. An evaluation also
demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for
subtle touch interaction in 3D space. Modular solutions like 3DTouch open up a
whole new design space for interaction techniques to build on.

Comment: 8 pages, 7 figures
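The relative-positioning idea behind combining an optical sensor with an IMU can be sketched as below. This is our own illustration, not the paper's implementation (symbols and function names are assumptions): the optical sensor reports a 2D displacement in the touched surface's plane, the IMU supplies the device orientation, and rotating the planar delta by that orientation maps it into a 3D world-frame displacement.

```python
import math

# Hedged sketch: map a 2D optical-sensor displacement into 3D using the
# device orientation. Only yaw is modeled here for brevity; a full system
# would use the complete 9-DOF orientation estimate.

def rot_z(yaw):
    """3x3 rotation matrix about the z axis (yaw, in radians)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def world_delta(sensor_dx, sensor_dy, orientation):
    """Rotate the in-plane sensor delta into the world frame."""
    local = [sensor_dx, sensor_dy, 0.0]
    return [sum(orientation[i][j] * local[j] for j in range(3))
            for i in range(3)]
```

Accumulating these world-frame deltas yields the relative 3D position used for selection, translation, and rotation.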
Open World Assistive Grasping Using Laser Selection
Many people with motor disabilities are unable to complete activities of
daily living (ADLs) without assistance. This paper describes a complete robotic
system developed to provide mobile grasping assistance for ADLs. The system is
comprised of a robot arm from a Rethink Robotics Baxter robot mounted to an
assistive mobility device, a control system for that arm, and a user interface
with a variety of access methods for selecting desired objects. The system uses
grasp detection to allow previously unseen objects to be picked up by the
system. The grasp detection algorithms also allow for objects to be grasped in
cluttered environments. We evaluate our system in a number of experiments on a
large variety of objects. Overall, we achieve an object selection success rate
of 88% and a grasp detection success rate of 90% in a non-mobile scenario, and
success rates of 89% and 72%, respectively, in a mobile scenario.
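As a quick worked example on the reported rates (assuming, purely for illustration, that object selection and grasp detection succeed independently), the end-to-end pick success rate is the product of the stage success rates:

```python
# Illustrative arithmetic only; independence of the two stages is our
# assumption, not a claim from the paper.

def end_to_end(selection_rate, grasp_rate):
    """End-to-end success rate if the two stages are independent."""
    return selection_rate * grasp_rate

static_rate = end_to_end(0.88, 0.90)  # non-mobile scenario, ~0.79
mobile_rate = end_to_end(0.89, 0.72)  # mobile scenario,     ~0.64
```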