OSGAR: a scene graph with uncertain transformations
An important problem for augmented reality is registration error. No system can be perfectly tracked, calibrated or modeled; as a result, the overlaid graphics are not aligned perfectly with objects in the physical world, which can be distracting, annoying or confusing. In this paper, we propose a method for mitigating the effects of registration errors that enables application developers to build dynamically adaptive AR displays. Our solution is implemented in a programming toolkit called OSGAR. Built upon OpenSceneGraph (OSG), OSGAR statistically characterizes registration errors, monitors those errors and, when a set of criteria is met, dynamically adapts the display to mitigate their effects. Because the architecture is based on a scene graph, it provides a simple, familiar and intuitive environment for application developers. We describe the components of OSGAR, discuss how several proposed methods for handling registration error can be implemented, and illustrate its use through a set of examples.
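The core idea of a scene graph with uncertain transformations can be illustrated with a minimal sketch: each node carries a local transform plus a covariance, and the uncertainty accumulates along the path to the root. The class and method names below are illustrative assumptions (not OSGAR's actual API), and the model is simplified to pure 2D translations, for which the propagation Jacobians are identity and covariances simply add.

```python
import numpy as np

class UncertainNode:
    """Scene-graph node carrying a 2D translation plus a positional
    covariance, loosely in the spirit of OSGAR's uncertain transforms.
    Hypothetical names; linear-propagation model for translations only."""

    def __init__(self, translation, cov, parent=None):
        self.t = np.asarray(translation, dtype=float)  # local offset (pixels)
        self.cov = np.asarray(cov, dtype=float)        # local 2x2 covariance
        self.parent = parent

    def world_estimate(self):
        """Accumulate translations and covariances up to the root.
        For pure translations the Jacobians are identity, so the
        covariance matrices simply add along the path."""
        t, cov = self.t.copy(), self.cov.copy()
        node = self.parent
        while node is not None:
            t = t + node.t
            cov = cov + node.cov
            node = node.parent
        return t, cov

# A tracked camera frame with 1 px^2 of positional variance per axis,
# and a labeled object 50 px to its right with its own modeling error.
root = UncertainNode([0, 0], np.eye(2) * 1.0)
obj = UncertainNode([50, 0], np.eye(2) * 0.25, parent=root)
t, cov = obj.world_estimate()

# An adaptive display layer could widen a highlight region whenever the
# accumulated standard deviation exceeds a threshold:
sigma = float(np.sqrt(cov.diagonal().max()))
```

This captures the paper's key point: the scene graph structure lets error statistics flow through the same hierarchy the rendering already uses, so an adaptive display can query `sigma` per node and react locally.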
Context-awareness for mobile sensing: a survey and future directions
The evolution of smartphones, together with their increasing computational power, has empowered developers to create innovative context-aware applications that recognize user-related social and cognitive activities in any situation and at any location. Awareness of context gives a mobile device the capability of being conscious of the physical environment or situation around its user, allowing network services to respond proactively and intelligently. The key idea behind context-aware applications is to encourage users to collect, analyze and share local sensory knowledge for large-scale community use by creating a smart network. The desired network is capable of making autonomous logical decisions to actuate environmental objects and to assist individuals. However, many open challenges remain, most of which arise because the middleware services provided on mobile devices have limited resources in terms of power, memory and bandwidth. It therefore becomes critically important to study how these drawbacks can be analyzed and resolved, and at the same time to better understand the opportunities for the research community to contribute to context-awareness. To this end, this paper surveys the literature over the period 1991-2014, from emerging concepts to applications of context-awareness on mobile platforms, providing up-to-date research and future research directions. Moreover, it points out the challenges faced in this regard and addresses them by proposing possible solutions.
The Arena: An indoor mixed reality space
In this paper, we introduce the Arena, an indoor space for mobile mixed reality interaction. The Arena includes a new user tracking system appropriate for AR/MR applications and a new toolkit oriented to the augmented and mixed reality applications developer, the MX Toolkit. This toolkit is defined at a somewhat higher abstraction level, hiding low-level implementation details from the programmer and facilitating AR/MR object-oriented programming. The system handles, uniformly, video input, video output (for headsets and monitors), sound auralisation and multimodal human-computer interaction in AR/MR, including tangible interfaces, speech recognition and gesture recognition.
An Inertial Device-based User Interaction with Occlusion-free Object Handling in a Handheld Augmented Reality
Augmented Reality (AR) is a technology used to merge virtual objects with real environments in real time. In AR, the interaction between the end-user and the AR system has always been a frequently discussed topic. Handheld AR is a newer approach that delivers enriched 3D virtual objects when a user looks through the device's video camera. Among the most widely adopted handheld devices today are smartphones, which are equipped with powerful processors and cameras for capturing still images and video, along with a range of sensors capable of tracking the location, orientation and motion of the user. These modern smartphones offer a sophisticated platform for implementing handheld AR applications. However, handheld displays are often paired with interaction metaphors originally developed for head-mounted displays, and these may be restricted by hardware that is inappropriate for handheld use. Therefore, this paper discusses a proposed real-time inertial device-based interaction technique for 3D object manipulation, and explains the methods used for selection, holding, translation and rotation. It aims to overcome a key limitation in 3D object manipulation by letting the user hold the device with both hands, without needing to stretch out one hand to manipulate the 3D object. The paper also recaps previous work in the fields of AR and handheld AR. Finally, it provides experimental results offering new metaphors for manipulating 3D objects using handheld devices.
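The inertial manipulation idea, rotating a held virtual object from the device's own motion sensors rather than on-screen touch, can be sketched as follows. This is not the paper's implementation; it is a minimal, assumed pipeline in which raw gyroscope rates are integrated into incremental rotations via the axis-angle (Rodrigues) formula. A real handheld-AR system would additionally fuse accelerometer and magnetometer readings to limit drift.

```python
import numpy as np

def rotation_from_gyro(omega, dt):
    """Integrate a gyroscope rate vector (rad/s, device axes) over dt
    seconds into a 3x3 rotation matrix using Rodrigues' formula."""
    omega = np.asarray(omega, dtype=float)
    rate = np.linalg.norm(omega)
    angle = rate * dt
    if angle < 1e-12:                       # negligible motion: identity
        return np.eye(3)
    axis = omega / rate
    K = np.array([[0.0, -axis[2], axis[1]],   # cross-product matrix
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# While the virtual object is "held", each sensor sample rotates it in
# place, so the user keeps both hands on the device instead of reaching
# toward the screen. Here: 10 frames at 60 Hz of yaw at 0.5 rad/s.
object_pose = np.eye(3)
for sample in [(0.0, 0.0, 0.5)] * 10:
    object_pose = rotation_from_gyro(sample, dt=1 / 60) @ object_pose
```

After the loop, `object_pose` encodes a total yaw of 10 × 0.5/60 ≈ 0.083 rad, applied incrementally exactly as the device would deliver sensor frames.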
Bioengineering, augmented reality, and robotic surgery in vascular surgery: A literature review
Biomedical engineering integrates a variety of applied sciences with life sciences to improve human health and reduce the invasiveness of surgical procedures. Technological advances, achieved through biomedical engineering, have contributed to significant improvements in the field of vascular and endovascular surgery. This paper aims to review the most cutting-edge technologies of the last decade involving the use of augmented reality devices and robotic systems in vascular surgery, highlighting benefits and limitations. Accordingly, two distinct literature surveys were conducted through the PubMed database: the first review provides a comprehensive assessment of augmented reality technologies, including the different techniques available for the visualization of virtual content (11 papers reviewed); the second review collects studies with bioengineering content that highlight the research trend in robotic vascular surgery, excluding works focused only on the clinical use of commercially available robotic systems (15 papers reviewed). Technological progress is constant, and further advances in imaging techniques and hardware components will inevitably bring new tools for the clinical translation of innovative therapeutic strategies in vascular surgery.
Requirement analysis and sensor specifications – First version
In this first version of the deliverable, we make the following contributions: to design the WEKIT capturing platform and the associated experience capturing API, we use a methodology for system engineering that is relevant for different domains (such as aviation, space, and medicine) and different professions (such as technicians, astronauts, and medical staff). Furthermore, within this methodology, we explore the system engineering process and how it can be used in the project to support the different work packages and, more importantly, the deliverables that will follow this one.
Next, we provide a mapping of high-level functions or tasks (associated with experience transfer from expert to trainee) to low-level functions such as gaze, voice, video, body posture, hand gestures, bio-signals, fatigue levels, and the location of the user in the environment. In addition, we link the low-level functions to their associated sensors. Moreover, we provide a brief overview of state-of-the-art sensors in terms of their technical specifications, possible limitations, standards, and platforms.
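The function-to-sensor mapping described above can be represented as a simple lookup structure. The sketch below is illustrative only: the sensor names are taken from the deliverable's own recommendations, but the data structure and helper function are assumptions, not part of the WEKIT API.

```python
# Hypothetical mapping from low-level capture functions to candidate
# sensors (sensor names drawn from the deliverable's recommendations).
SENSOR_MAP = {
    "gaze":         ["customised eye tracker"],
    "voice":        ["AR-glasses built-in microphone"],
    "video":        ["Microsoft HoloLens camera"],
    "body_posture": ["Microsoft Kinect", "Lumo Lift"],
    "hand_gesture": ["Leap Motion", "Intel RealSense", "Myo armband"],
    "bio_signals":  ["MyndBand (NeuroSky chipset)"],
}

def sensors_for(functions):
    """Return the sorted union of sensors needed to capture the given
    low-level functions; unknown functions contribute nothing."""
    return sorted({s for f in functions for s in SENSOR_MAP.get(f, [])})
```

A high-level task such as "observe the expert's manual procedure" would then decompose into low-level functions (e.g. `["video", "hand_gesture", "gaze"]`) and resolve to its required sensor set in one call, which is essentially the mapping exercise the deliverable performs on paper.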
We outline a set of recommendations pertaining to the sensors that are most relevant for the WEKIT project, taking into consideration the environmental, technical and human factors described in other deliverables. We recommend the Microsoft HoloLens (augmented reality glasses), the MyndBand with NeuroSky chipset (EEG), the Microsoft Kinect and Lumo Lift (body posture tracking), and the Leap Motion, Intel RealSense and Myo armband (hand gesture tracking). For eye tracking, an existing eye-tracking system can be customised to complement the augmented reality glasses, and the built-in microphone of the augmented reality glasses can capture the expert's voice. We propose a modular approach for the design of the WEKIT experience capturing system, and recommend that the capturing system should have sufficient storage or transmission capabilities.
Finally, we highlight common issues associated with the use of different sensors. We consider that this set of recommendations can be useful for the design and integration of the WEKIT capturing platform and the WEKIT experience capturing API, expediting the selection of the combination of sensors to be used in the first prototype.
Augmented reality device for first response scenarios
A prototype of a wearable computer system is proposed and implemented using commercial off-the-shelf components. The system is designed to allow the user to access location-specific information about an environment, and to provide capability for user tracking. Areas of applicability primarily include first response scenarios, with possible applications in the maintenance or construction of buildings and other structures. Necessary preparation of the target environment prior to the system's deployment is limited to noninvasive labeling using optical fiducial markers. The system relies on computational vision methods for registration of labels and user position. With the system, the user has access to on-demand information relevant to a particular real-world location. Team collaboration is assisted by user tracking and real-time visualizations of team member positions within the environment. The user interface and display methods are inspired by Augmented Reality1 (AR) techniques, incorporating a video see-through head-mounted display (HMD) and a finger-bending sensor glove.*
1Augmented reality (AR) is a field of computer research which deals with the combination of real-world and computer-generated data. At present, most AR research is concerned with the use of live video imagery which is digitally processed and augmented by the addition of computer-generated graphics. Advanced research includes the use of motion tracking data, fiducial marker recognition using machine vision, and the construction of controlled environments containing any number of sensors and actuators. (Source: Wikipedia)
*This dissertation is a compound document (contains both a paper copy and a CD as part of the dissertation). The CD requires the following system requirements: Adobe Acrobat; Microsoft Office; Windows Media Player or RealPlayer.
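The abstract's core loop, sighting a fiducial marker, updating the tracked user position, and surfacing location-specific information, can be sketched as a small state machine. Everything here is hypothetical: the marker IDs, room names, and class names are illustrative, and in the actual prototype the IDs would come from a computer-vision fiducial detector on the HMD camera rather than being passed in directly.

```python
# Hypothetical marker database: fiducial ID -> location-specific info.
MARKERS = {
    17: {"room": "B-204", "info": "Electrical panel; shutoff inside door."},
    23: {"room": "Stairwell C", "info": "Exit route to ground floor."},
}

class ResponderState:
    """Tracks one team member's last confirmed location and serves
    on-demand information when a labeled marker is sighted."""

    def __init__(self):
        self.position = None  # last room confirmed by a marker sighting

    def on_marker(self, marker_id):
        """Update tracked position from a marker sighting and return the
        location-specific info, or None for an unregistered marker."""
        entry = MARKERS.get(marker_id)
        if entry is None:
            return None
        self.position = entry["room"]
        return entry["info"]

responder = ResponderState()
info = responder.on_marker(17)  # camera detected marker 17
```

Broadcasting each `ResponderState.position` to teammates would give the real-time team-position visualization the abstract describes; the noninvasive environment preparation reduces to printing and placing the markers and filling in `MARKERS`.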