
    Towards System Agnostic Calibration of Optical See-Through Head-Mounted Displays for Augmented Reality

    This dissertation examines the development and progress of spatial calibration procedures for Optical See-Through (OST) Head-Mounted Display (HMD) devices for visual Augmented Reality (AR) applications. Rapid developments in commercial AR systems have created an explosion of OST device options, not only for research and industrial purposes but also for the consumer market. This expansion in hardware availability is matched by a need for intuitive, standardized calibration procedures that are easily completed by novice users and readily applicable across the widest range of hardware options. This demand for robust, uniform calibration schemes is the driving motive behind the original contributions offered within this work. A review of prior surveys and canonical descriptions of AR and OST display developments is provided before narrowing the contextual scope to the research questions evolving within the calibration domain. Both established and state-of-the-art calibration techniques and their general implementations are explored, along with prior user study assessments and the prevailing evaluation metrics and practices employed within them. The original contributions begin with a user study comparing and contrasting the accuracy and precision of an established manual calibration method against a state-of-the-art semi-automatic technique. This is the first formal evaluation of any non-manual approach, and it provides insight into the usability limitations of present techniques and the complexities of next-generation methods yet to be solved. The second study investigates the viability of a user-centric approach to OST HMD calibration through a novel adaptation of manual calibration to consumer-level hardware.
Additional contributions describe the development of a complete demonstration application incorporating user-centric methods, a novel strategy for visualizing both calibration results and registration error from the user's perspective, and a robust, intuitive presentation style for binocular manual calibration. The final study further investigates the accuracy differences observed between user-centric and environment-centric methodologies. The dissertation concludes with a summary of the contribution outcomes and their impact on existing AR systems and research endeavors, as well as a short look ahead into future extensions and paths that continued calibration research should explore.

    Review on Augmented Reality in Oral and Cranio-Maxillofacial Surgery: Toward 'Surgery-Specific' Head-Up Displays

    In recent years, there has been increasing interest in augmented reality as applied to the surgical field. We conducted a systematic review of the literature classifying augmented reality applications in oral and cranio-maxillofacial surgery (OCMS) in order to pave the way for future solutions that may ease the adoption of AR guidance in surgical practice. Publications containing the terms 'augmented reality' AND 'maxillofacial surgery', and the terms 'augmented reality' AND 'oral surgery', were searched in the PubMed database. From the selected studies, we performed a preliminary breakdown according to general aspects, such as surgical subspecialty, year of publication, and country of research; then a more specific breakdown was provided according to the technical features of AR-based devices, such as virtual data source, visualization processing mode, tracking mode, registration technique, and AR display type. The systematic search identified 30 eligible publications. Most studies (14) were in orthognathic surgery and the minority (2) concerned traumatology, while 6 studies were in oncology and 8 in general OCMS. In 8 of the 30 studies, the AR systems were based on a head-mounted approach using smart glasses or headsets. In most of these cases (7), a video see-through mode was implemented, while only 1 study described an optical see-through mode. In the remaining 22 studies, the AR content was displayed on 2D displays (10), full-parallax 3D displays (6), and projectors (5); in 1 case the AR display type was not specified. AR applications are of increasing interest and adoption in oral and cranio-maxillofacial surgery; however, the quality of the AR experience is the key requisite for a successful result. Widespread use of AR systems in the operating room may be encouraged by the availability of 'surgery-specific' head-mounted devices that should guarantee the accuracy required for surgical tasks together with optimal ergonomics.

    Augmented Reality Simulation Modules for EVD Placement Training and Planning Aids

    When a novice neurosurgeon performs a psychomotor surgical task (e.g., navigating a tool into brain structures), the risk of damaging healthy tissue and eloquent brain structures is unavoidable. When novices make multiple attempts, a set of undesirable trajectories is created, resulting in the potential for surgical complications. It is therefore important that novices not only aim for a high level of surgical mastery but also receive deliberate training in common neurosurgical procedures and their underlying tasks. Surgical simulators have emerged as an effective method for teaching novices in safe, error-free training environments. The design of neurosurgical simulators requires a comprehensive approach to development and validation. With that in mind, we present a detailed case study in which two Augmented Reality (AR) training simulation modules were designed and implemented through the adoption of Model-Driven Engineering. User performance evaluation is a key aspect of surgical simulation validity. Many AR surgical simulators become obsolete: either they do not support enough surgical scenarios, or they were validated according to subjective assessments that did not meet every need. Accordingly, we demonstrate the feasibility of the AR simulation modules through two user studies, objectively measuring novices' performance with quantitative metrics. Neurosurgical simulators are also prone to perceptual distance underestimation, yet few investigations have been conducted into improving user depth perception in head-mounted-display-based AR systems with perceptual motion cues. Consequently, we report the results of our investigation into whether head motion and perceptual motion cues influenced users' performance.

    Augmented Reality

    Augmented Reality (AR) is a natural development from Virtual Reality (VR), which emerged several decades earlier, and it complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, though it is not free from human-factors issues and other restrictions. AR applications also demand less time and effort, since the entire virtual scene and environment need not be constructed. In this book, several new and emerging application areas of AR are presented and divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in medicine, biology, and the human body. The third and final section contains a number of new and useful applications in daily living and learning.

    Phenomenal regression as a potential metric of veridical perception in virtual environments

    It is known that limitations of the visual presentation and sense of presence in a virtual environment (VE) can result in deficits of spatial perception, such as the documented depth-compression phenomenon. Investigating size and distance percepts in a VE is an active area of research, where different groups have measured the deficit by employing skill-based tasks such as walking, throwing, or simply judging sizes and distances. A psychological trait called phenomenal regression (PR), first identified in the 1930s by Thouless, offers a measure that relies on neither judgement nor skill. PR describes a systematic error made by subjects when asked to match the perspective projections of two stimuli displayed at different distances. Thouless' work found that this error is not mediated by a subject's prior knowledge of its existence, nor can it be consciously manipulated, since it measures an individual's innate reaction to visual stimuli. Furthermore, he demonstrated that, in the real world, PR is affected by the depth cues available for viewing a scene. When applied in a VE, PR therefore potentially offers a direct measure of perceptual veracity that is independent of participants' skill in judging size or distance. Experimental work was conducted, and a statistically significant correlation was found between individuals' measured PR values (their 'Thouless ratio', or TR) for virtual and physical stimuli. A further experiment manipulated focal depth to mitigate the mismatch between accommodation and vergence cues in a VE. The resulting statistically significant effect on TR demonstrates that it is sensitive to changes in viewing conditions in a VE. Both experiments demonstrate key properties of PR that help establish it as a robust indicator of VE quality: TR is temporally stable during the period of testing, and it differs between individuals.
This is advantageous, as it yields empirical values that can be investigated using regression analysis. This work contributes to VE domains in which it is desirable to replicate an accurate perception of space, such as training and telepresence, where PR would be a useful tool for comparing subjective experience between a VE and the real world, or between different VEs.
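    The abstract does not spell out how the Thouless ratio is computed. A common formulation in the perception literature (given here as an assumption, not necessarily the exact definition used in this work) expresses TR on a log scale between a purely projective match (TR = 0) and a full size-constancy match (TR = 1):

    ```python
    import math

    def thouless_ratio(response: float, projective: float, objective: float) -> float:
        """Thouless ratio for a size-matching trial (assumed standard formulation).

        response:   size the observer actually selects as a match
        projective: size predicted by pure perspective projection (retinal match)
        objective:  true physical size (perfect constancy match)

        Returns 0.0 for a purely projective match and 1.0 for full
        constancy; intermediate values quantify phenomenal regression
        toward the real object.
        """
        return (math.log(response) - math.log(projective)) / (
            math.log(objective) - math.log(projective)
        )

    # Hypothetical trial: perspective predicts a 5 cm match, the physical
    # size is 10 cm, and the observer picks 7 cm -- partial regression.
    tr = thouless_ratio(7.0, 5.0, 10.0)
    ```

    Comparing TR values computed this way for the same observer under virtual and physical viewing is what allows the correlation analysis described above.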

    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    Get PDF
    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. It welcomed 30 academic and industry researchers from Europe, the US, and Asia with diverse backgrounds, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite-talk format to reduce the time devoted to talks to half a day and leave the rest of the week for hands-on sessions, group work, general discussion, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, and 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) and “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).