Keyframe-based visual–inertial odometry using nonlinear optimization
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
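The idea of a cost that sums landmark measurement residuals and inter-keyframe inertial residuals can be sketched in miniature. The following is a deliberately toy 1-D illustration (two keyframe positions, one weighting, plain gradient descent), not the paper's actual formulation, which uses reprojection errors, full 6-DOF states, and marginalization; all numbers are made up.

```python
# Toy 1-D "visual-inertial" least squares: two keyframe positions x1, x2.
a, b = 0.0, 1.2   # absolute position pseudo-measurements (stand-in for vision terms)
d, w = 1.0, 1.0   # IMU-integrated relative displacement between keyframes, and its weight

# Cost: (x1 - a)^2 + (x2 - b)^2 + w * ((x2 - x1) - d)^2
x1, x2 = 0.0, 0.0
lr = 0.1
for _ in range(500):                  # gradient descent to the joint optimum
    r = (x2 - x1) - d                 # inertial residual between the keyframes
    g1 = 2 * (x1 - a) - 2 * w * r     # d(cost)/dx1
    g2 = 2 * (x2 - b) + 2 * w * r     # d(cost)/dx2
    x1 -= lr * g1
    x2 -= lr * g2
# The optimum trades off the absolute measurements against the inertial constraint:
# x2 - x1 ends up between the measured gap (b - a = 1.2) and the IMU value (d = 1.0).
```

In the real system each such residual is weighted by the inverse covariance of its measurement, and the window is kept bounded by marginalizing out old keyframes rather than discarding them.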
Development of a Novel Handheld Device for Active Compensation of Physiological Tremor
In microsurgery, the human hand imposes certain limitations in accurately positioning the tip of a device such as a scalpel. Any errors in the motion of the hand make microsurgical procedures difficult, and involuntary motions such as hand tremors can make some procedures significantly more difficult to perform. This is particularly true in the case of vitreoretinal microsurgery. The most familiar source of involuntary motion is physiological tremor. Real-time compensation of tremor is, therefore, necessary to assist surgeons in precisely positioning and manipulating the tool-tip to accurately perform a microsurgery. In this thesis, a novel handheld device (AID) is described for compensation of physiological tremor in the hand. MEMS-based accelerometers and gyroscopes have been used for sensing the motion of the hand in six degrees of freedom (DOF). An augmented-state complementary Kalman filter is used to calculate 2-DOF orientation. An adaptive filtering algorithm, the band-limited Multiple Fourier Linear Combiner (BMFLC), is used to calculate the tremor component in the hand in real time. Ionic Polymer Metal Composites (IPMCs) have been used as actuators for deflecting the tool-tip to compensate for the tremor.
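A BMFLC estimates tremor as a weighted sum of sines and cosines over a fixed band of candidate frequencies, with the weights adapted by an LMS update. The sketch below shows that general scheme on a synthetic 9 Hz "tremor" signal; the band, step size, sampling rate, and signal are illustrative choices, not the thesis's tuned parameters.

```python
import math

def bmflc_step(w, t, s, freqs, mu):
    """One BMFLC/LMS update: estimate tremor sample s at time t, adapt weights w."""
    # Basis: sines then cosines at each candidate tremor frequency.
    x = [math.sin(2 * math.pi * f * t) for f in freqs] + \
        [math.cos(2 * math.pi * f * t) for f in freqs]
    y = sum(wi * xi for wi, xi in zip(w, x))              # current tremor estimate
    err = s - y                                           # estimation error
    w = [wi + 2 * mu * err * xi for wi, xi in zip(w, x)]  # LMS weight update
    return w, y, err

freqs = [6.0 + 1.0 * k for k in range(9)]   # candidate band 6-14 Hz, 1 Hz steps
mu, fs = 0.05, 100.0                        # adaptation gain, sample rate (Hz)
w = [0.0] * (2 * len(freqs))
errs = []
for n in range(1000):                       # 10 s of samples
    t = n / fs
    s = math.sin(2 * math.pi * 9.0 * t)     # synthetic 9 Hz tremor
    w, y, e = bmflc_step(w, t, s, freqs, mu)
    errs.append(abs(e))
mean_err_last = sum(errs[-100:]) / 100      # residual error over the final second
```

Once the weights converge, `y` tracks the tremor component, and the device can drive its actuators with the negated estimate to cancel it.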
Quantitative evaluation of overlaying discrepancies in mobile augmented reality applications for AEC/FM
Augmented Reality (AR) is a trending technology that provides a live view of the real, physical environment augmented by virtual elements, enhancing the scene with digital information (sound, video, graphics, text or geo-location). Its application to architecture, engineering and construction, and facility management (AEC/FM) is straightforward and can be very useful for improving on-site work at different stages of a project. However, one of the most important limitations of Mobile Augmented Reality (MAR) is the lack of accuracy when the screen overlays virtual models on the real images captured by the camera. The main reasons are errors related to tracking (positioning and orientation of the mobile device) and to image capture and processing (projection and distortion issues). This paper presents a new methodology for mathematically performing a quantitative evaluation, in world coordinates, of those overlaying discrepancies on the screen, obtaining the real-scale distances from any real point to the sightlines of its virtual projections for any AR application. Additionally, a new utility for filtering built-in sensor signals in mobile devices is presented: the Drift-Vibration-Threshold (DVT) function, a straightforward tool to filter the drift suffered by most sensor-based tracking systems.
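The DVT function itself is only named in this abstract, so the following is merely a guess at the general idea behind threshold-based drift filtering: hold the tracked value while readings stay below a small vibration threshold, so that slow sensor drift does not accumulate, and integrate normally once readings indicate real motion. The function name, threshold, and signals are all hypothetical.

```python
def threshold_drift_filter(rates, dt, threshold):
    """Integrate angular rates, but freeze the output while |rate| < threshold.

    Illustrative only: small readings are treated as drift/vibration noise and
    ignored; readings at or above the threshold are treated as real motion.
    """
    angle = 0.0
    out = []
    for r in rates:
        if abs(r) >= threshold:
            angle += r * dt     # genuine motion: integrate as usual
        out.append(angle)       # below threshold: output held, no drift build-up
    return out

# A constant 0.01 rad/s bias (pure drift) produces no accumulated rotation...
drift_out = threshold_drift_filter([0.01] * 100, dt=0.01, threshold=0.05)
# ...while a real 1 rad/s turn for 0.1 s integrates to ~0.1 rad.
motion_out = threshold_drift_filter([1.0] * 10, dt=0.01, threshold=0.05)
```

The cost of any such scheme is dead-band behaviour: genuine motion slower than the threshold is also suppressed, so the threshold must sit between typical drift magnitudes and the slowest motion of interest.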
Education in the Wild: Contextual and Location-Based Mobile Learning in Action. A Report from the STELLAR Alpine Rendez-Vous Workshop Series
Multimodal, Embodied and Location-Aware Interaction
This work demonstrates the development of mobile, location-aware, eyes-free applications which utilise multiple sensors to provide a continuous, rich and embodied interaction. We bring together ideas from the fields of gesture recognition, continuous multimodal interaction, probability theory and audio interfaces to design and develop location-aware applications and embodied interaction in both a small-scale, egocentric body-based case and a large-scale, exocentric `world-based' case.
BodySpace is a gesture-based application, which utilises multiple sensors and pattern recognition enabling the human body to be used as the interface for an application. As an example, we describe the development of a gesture-controlled music player, which functions by placing the device at different parts of the body. We describe a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based interaction techniques and the use of real-world constraints can shape the gestural interaction.
GpsTunes is a mobile, multimodal navigation system equipped with inertial control that enables users to actively explore and navigate through an area in an augmented physical space, incorporating and displaying uncertainty resulting from inaccurate sensing and unknown user intention. The system propagates uncertainty appropriately via Monte Carlo sampling, and output is displayed both visually and in audio, with audio rendered via granular synthesis. We demonstrate the use of uncertain prediction in the real world and show that appropriate display of the full distribution of potential future user positions with respect to sites-of-interest can improve the quality of interaction over a simplistic interpretation of the sensed data. We show that this system enables eyes-free navigation around set trajectories or paths unfamiliar to the user for varying trajectory width and context. We demonstrate the possibility to create a simulated model of user behaviour, which may be used to gain an insight into the user behaviour observed in our field trials. The extension of this application to provide a general mechanism for highly interactive context-aware applications via density exploration is also presented. AirMessages is an example application enabling users to take an embodied approach to scanning a local area to find messages left in their virtual environment.
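Monte Carlo propagation of position uncertainty, as used in GpsTunes, can be sketched as follows: draw many samples from the noisy heading and speed estimates, roll each forward over the prediction horizon, and report the fraction of futures that land near a site of interest. All the distributions, noise levels, and the site layout below are invented for illustration.

```python
import math
import random

random.seed(42)

def p_reach(site, radius, n=2000):
    """Fraction of Monte Carlo futures ending within `radius` metres of `site`.

    The user starts at the origin heading along +x; heading and speed are
    sampled around noisy estimates (std devs are illustrative guesses).
    """
    t = 10.0                               # prediction horizon (s)
    hits = 0
    for _ in range(n):
        heading = random.gauss(0.0, 0.2)   # heading estimate +/- sensor noise (rad)
        speed = random.gauss(1.4, 0.2)     # walking speed estimate (m/s)
        x = speed * t * math.cos(heading)  # propagated future position
        y = speed * t * math.sin(heading)
        if math.hypot(x - site[0], y - site[1]) < radius:
            hits += 1
    return hits / n

p_ahead = p_reach((14.0, 0.0), 5.0)    # site roughly where the user is heading
p_behind = p_reach((-14.0, 0.0), 5.0)  # site directly behind the user
```

Displaying the whole sampled distribution (rather than a single best-guess position) is what lets the audio display convey how confident the system is about reaching each site.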
Human factors in instructional augmented reality for intravehicular spaceflight activities and How gravity influences the setup of interfaces operated by direct object selection
In human spaceflight, advanced user interfaces are becoming an interesting means to facilitate human-machine interaction, enhancing and safeguarding the sequences of intravehicular space operations. The efforts made to ease such operations have shown strong interest in novel human-computer interaction such as Augmented Reality (AR). The work presented in this thesis is directed towards a user-driven design for AR-assisted space operations, iteratively solving issues arising from the problem space, which also includes the consideration of the effect of altered gravity on handling such interfaces.
Introduction to location-based mobile learning
[About the book]
The report follows on from a two-day workshop funded by the STELLAR Network of Excellence as part of its 2009 Alpine Rendez-Vous workshop series and is edited by Elizabeth Brown, with a foreword by Mike Sharples. Contributors have provided examples of innovative and exciting research projects and practical applications for mobile learning in a location-sensitive setting, including the sharing of good practice and the key findings that have resulted from this work. There is also a debate about whether location-based and contextual learning results in shallower learning strategies, and a section detailing the future challenges for location-based learning.