Perception of perspective in augmented reality head-up displays
Augmented Reality (AR) is emerging fast with a wide range of applications, including automotive AR Head-Up Displays (AR HUD). As a result, there is a growing need to understand human perception of depth in AR. Here, we discuss two user studies on depth perception, in particular the perspective cue. The first experiment compares the perception of the perspective depth cue (1) in the physical world, (2) on a flat screen, and (3) on an AR HUD. Our AR HUD setup provided a two-dimensional, vertically oriented virtual image projected at a fixed distance. In each setting, participants were asked to estimate the size of a perspective angle. We found that the perception of angle sizes on an AR HUD differs from perception in the physical world, but not from a flat screen. The underestimation of the physical world's angle size compared to the AR HUD and screen setups might explain the egocentric depth underestimation phenomenon in virtual environments. In the second experiment, we compared perception for different graphical representations of angles that are relevant for practical applications. Graphical alterations of angles displayed on a screen resulted in more variation between individuals' angle size estimations. Furthermore, the majority of the participants tended to underestimate the observed angle size in most conditions. Our results suggest that perspective angles on a vertically oriented, fixed-depth AR HUD display mimic the perception of a screen more accurately than the perception of the 3D environment. On-screen graphical alteration does not help to reduce the underestimation in the majority of cases.
Synchronizing Gestures with Friction Sounds: Work in Progress
This paper presents a work in progress on the sensorimotor relation between auditory perception and graphical movements. An experiment is presented in which subjects were asked to synchronize their gestures with synthetic friction sounds. A first qualitative analysis allowed us to evaluate the influence of different intrinsic sound parameters on the characteristics of the synchronized gesture. This preliminary experiment provides a formal framework for a wider study that aims to evaluate the relation between audition, vision, and gestures.
The Role of Emotion in Visualization
The popular notion that emotion and reason are incompatible is no longer defensible. Recent research in psychology and cognitive science has established emotion as a key element in numerous aspects of perception and cognition, including attention, memory, decision-making, risk perception, and creativity. This dissertation centers around the observation that emotion influences many aspects of perception and cognition that are crucial for effective visualization.
First, I demonstrate that emotion influences accuracy in fundamental visualization tasks by combining a classic graphical perception experiment (from Cleveland and McGill) with emotion induction procedures from psychology (chapter 3). Next, I expand on the experiments in the first chapter to explore additional techniques for studying emotion and visualization, resulting in an experiment that shows that performance differences between primed individuals persist even as task difficulty increases (chapter 4). In a separate experiment, I show how certain emotional states (i.e. frustration and engagement) can be inferred from visualization interaction logs using machine learning (chapter 5). I then discuss a model for individual cognitive differences in visualization, which situates emotion within existing individual differences research in visualization (chapter 6). Finally, I propose a preliminary model for emotion in visualization (chapter 7).
Data-ink Ratio and Task Complexity in Graph Comprehension
Human processing of graphical information is a topic which has wide-reaching implications for decision-making in a variety of contexts. A deeper understanding of the processes of graphical perception can lead to the development of design guidelines which can enhance performance in graphical perception tasks. This study evaluates the data-ink ratio guideline, which recommends the removal of non-data graph elements, resulting in minimalist graph designs. In an experiment, participants answered graph comprehension questions using bar graphs and boxplots with varying data-ink ratios. Participants answered questions with similar levels of accuracy and mental effort. Some participants drew on graphs, reducing the data-ink ratio of high and medium data-ink stimuli. Additionally, expert interviews were conducted regarding graph use, graph creation, and opinions about the data-ink concept and example graphs. Interviewees had a variety of opinions and preferences with regard to graph design, many of which were dependent upon the specific circumstances of presentation. Most interviewees did not think that high data-ink graph designs were superior. These results suggest that data-ink maximization does not improve performance in graph comprehension tasks, and that arguments regarding the data-ink ratio deal with the subjective issue of graph aesthetics.
Improving users’ comprehension of changes with animation and sound: an empirical assessment
Animation or sound is often used in user interfaces as an attempt to improve users' perception and comprehension of evolving situations and to support them in decision-making. However, empirical data establishing their real effectiveness for the comprehension of changes are still lacking. We have carried out an experiment using four combinations of visual and auditory feedback in a split-attention task. The results not only confirm that such feedback improves the perception of changes, but they also demonstrate that animation and sound, used alone or combined, bring major improvements in the comprehension of a changing situation. Based on these results, we propose design guidelines about the most efficient combinations to be used in user interfaces.
Multimodal Hierarchical Dirichlet Process-based Active Perception
In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an MHDP-based active perception method that uses the information gain (IG) maximization criterion and the lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that it is equivalent to minimizing the expected Kullback-Leibler divergence between the final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive an efficient Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the MHDP's graphical model. Therefore, the IG maximization problem reduces to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted one experiment using an upper-torso humanoid robot and a second using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The results support our theoretical outcomes.
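The lazy greedy algorithm the abstract relies on exploits submodularity: once an action's marginal gain has been computed, it can only shrink as the selected set grows, so a stale value is a valid upper bound and most re-evaluations can be skipped. A minimal sketch, assuming a caller-supplied `gain(action, selected)` marginal-gain function (the paper's actual IG estimator is a Monte Carlo approximation not reproduced here):

```python
import heapq

def lazy_greedy(candidates, gain, budget):
    """Select up to `budget` actions greedily for a submodular,
    non-decreasing set function, with lazy re-evaluation of gains."""
    selected = []
    # Max-heap (via negation) of stale marginal gains w.r.t. the empty set.
    heap = [(-gain(a, selected), a) for a in candidates]
    heapq.heapify(heap)
    while heap and len(selected) < budget:
        _, a = heapq.heappop(heap)
        fresh = gain(a, selected)  # recompute against current selection
        if not heap or fresh >= -heap[0][0]:
            # Its fresh gain still beats every stale upper bound:
            # by submodularity it is the true argmax, so take it.
            selected.append(a)
        else:
            heapq.heappush(heap, (-fresh, a))  # reinsert with updated gain
    return selected
```

With the classic (1 - 1/e) approximation guarantee for greedy maximization of monotone submodular functions, the lazy variant returns the same selection as plain greedy while typically evaluating far fewer marginal gains.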
Towards Social Autonomous Vehicles: Efficient Collision Avoidance Scheme Using Richardson's Arms Race Model
Background: Road collisions and casualties pose a serious threat to commuters around the globe. Autonomous Vehicles (AVs) aim to use technology to reduce road accidents. However, most research work on collision avoidance has addressed, separately, rear-end, front-end, and lateral collisions in less congested traffic with large inter-vehicular distances.
Purpose: The goal of this paper is to introduce the concept of a social agent that interacts with other AVs in a social manner, as humans do, with the capability of predicting intentions (mentalizing) and copying the actions of others (mirroring). The proposed social agent is based on human-brain-inspired mentalizing and mirroring capabilities and has been modelled for collision detection and avoidance under congested urban road traffic.
Method: We designed our social agent with mentalizing and mirroring capabilities, and for this purpose we utilized the Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework proposed by Niazi and Hussain.
Results: Our simulation and practical experiments reveal that by embedding Richardson's arms race model within AVs, collisions can be avoided while travelling on congested urban roads in flock-like topologies. The performance of the proposed social agent has been compared at two different levels.
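The abstract names Richardson's arms race model but gives neither its equations nor the mapping onto inter-vehicle dynamics, so purely as an illustration, here is the classic two-party form of the model integrated with forward Euler. The parameter names (`k`, `l` for mutual reaction, `a`, `b` for restraint, `g`, `h` for baseline terms) and values are conventional choices, not taken from the paper:

```python
def richardson_step(x, y, k, l, a, b, g, h, dt):
    # Richardson's coupled linear ODEs:
    #   dx/dt = k*y - a*x + g   (x escalates in reaction to y, decays via a)
    #   dy/dt = l*x - b*y + h
    dx = (k * y - a * x + g) * dt
    dy = (l * x - b * y + h) * dt
    return x + dx, y + dy

def simulate(steps, dt, **params):
    """Integrate from the origin for `steps` Euler steps of size `dt`."""
    x = y = 0.0
    for _ in range(steps):
        x, y = richardson_step(x, y, dt=dt, **params)
    return x, y
```

When a*b > k*l the system settles at the stable equilibrium x* = (b*g + k*h)/(a*b - k*l), y* = (a*h + l*g)/(a*b - k*l); otherwise the mutual reaction terms dominate and the trajectories escalate without bound, the runaway regime a collision-avoidance scheme would presumably be designed to keep the vehicle pair out of.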