Augmented Reality
Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier, and it complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, though it remains subject to human factors and other constraints. AR also demands less time and effort in application development, since the entire virtual scene and environment need not be constructed. In this book, several new and emerging application areas of AR are presented and divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in medicine, biology, and the human body. The third and final section contains a number of new and useful applications in daily living and learning.
Immersive Insights: A Hybrid Analytics System for Collaborative Exploratory Data Analysis
In the past few years, augmented reality (AR) and virtual reality (VR)
technologies have seen remarkable improvements in both accessibility and
hardware capabilities, encouraging the application of these devices across
various domains. While researchers have demonstrated the possible advantages of
AR and VR for certain data science tasks, it is still unclear how these
technologies would perform in the context of exploratory data analysis (EDA) at
large. In particular, we believe it is important to better understand which
level of immersion EDA would concretely benefit from, and to quantify the
contribution of AR and VR with respect to standard analysis workflows.
In this work, we leverage a Dataspace reconfigurable hybrid reality
environment to study how data scientists might perform EDA in a co-located,
collaborative context. Specifically, we propose the design and implementation
of Immersive Insights, a hybrid analytics system combining high-resolution
displays, table projections, and augmented reality (AR) visualizations of the
data.
We conducted a two-part user study with twelve data scientists, in which we
evaluated how different levels of data immersion affect the EDA process and
compared the performance of Immersive Insights with a state-of-the-art,
non-immersive data analysis system.
Comment: VRST 201
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Due to this interest, an increasing demand for
methods to process these videos, possibly in real time, is expected. Current
approaches combine particular sets of image features and quantitative methods
to accomplish specific objectives such as object detection, activity
recognition, and user-machine interaction. This paper summarizes the evolution
of the state of the art in First Person Vision video analysis between 1997 and
2014, highlighting, among others, the most commonly used features, methods,
challenges, and opportunities within the field.
Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction
A Generic Framework and Library for Exploration of Small Multiples through Interactive Piling
Small multiples are miniature representations of visual information used
generically across many domains. Handling large numbers of small multiples
imposes challenges on many analytic tasks like inspection, comparison,
navigation, or annotation. To address these challenges, we developed a
framework and implemented a library called Piling.js for designing interactive
piling interfaces. Built around the piling metaphor, such interfaces afford
flexible organization, exploration, and comparison of large numbers of small
multiples by interactively aggregating visual objects into piles. Based on a
systematic analysis of previous work, we present a structured design space to
guide the design of visual piling interfaces. To enable designers to
efficiently build their own visual piling interfaces, Piling.js provides a
declarative interface to avoid having to write low-level code and implements
common aspects of the design space. An accompanying GUI additionally supports
the dynamic configuration of the piling interface. We demonstrate the
expressiveness of Piling.js with examples from machine learning,
immunofluorescence microscopy, genomics, and public health.
Comment: - Extended Section 4 to improve the clarity of our rationale
- Expanded Section 7 to elaborate on the intended target user, the lessons learned from implementing the use cases, and the limitations of visual piling interfaces
- Added Figures S1 and S4 and Table S1 to the supplementary material
- Improved the clarity of our writing in several other sections, and corrected grammar and typos
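The core operation behind the piling metaphor described above — aggregating visual objects into piles, each summarized by an aggregate "cover" — can be sketched in a language-neutral way. The following minimal Python sketch is purely illustrative and does not reflect the Piling.js API; the 2-D feature vectors standing in for thumbnails, the greedy threshold-based grouping, and the centroid cover are all assumptions made for the example:

```python
from dataclasses import dataclass, field

# Each "small multiple" is a miniature visual item; here it is reduced to a
# 2-D feature vector (e.g., an embedding of the thumbnail). All names below
# are illustrative and are not part of the Piling.js API.

@dataclass
class Pile:
    items: list = field(default_factory=list)

    def cover(self):
        # Aggregate representation of the pile (here: centroid of features).
        n = len(self.items)
        return tuple(sum(v[i] for v in self.items) / n for i in range(2))

def pile_up(items, threshold=1.0):
    """Greedily aggregate items into piles: an item joins the first pile
    whose cover lies within `threshold`, otherwise it starts a new pile."""
    piles = []
    for item in items:
        for pile in piles:
            cx, cy = pile.cover()
            if (item[0] - cx) ** 2 + (item[1] - cy) ** 2 <= threshold ** 2:
                pile.items.append(item)
                break
        else:
            piles.append(Pile([item]))
    return piles

# Example: four thumbnails forming two tight clusters yield two piles.
piles = pile_up([(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)])
print(len(piles))  # 2
```

In an interactive interface the grouping would typically be user-driven (drag-and-drop, lasso) or based on richer similarity measures rather than a fixed distance threshold; the sketch only shows the aggregate-into-piles step itself.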
Unobtrusive and pervasive video-based eye-gaze tracking
Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.
peer-reviewed