Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing during the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
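The spoke-jitter manipulation described above can be sketched as stimulus-generation code. The eccentricity value, the two orientation levels, and the function name below are illustrative assumptions, not parameters reported in the abstract:

```python
import math
import random

def make_displays(n_items=8, change_prob=0.5, jitter_deg=1.0, radius_deg=6.0, rng=None):
    """Generate the two successive displays of the one-shot change task.

    Each item sits on an imaginary spoke from central fixation; the second
    display shifts every item along its spoke by +/- jitter_deg. The
    6-degree eccentricity is an illustrative value, not from the abstract.
    """
    rng = rng or random.Random()
    angles = [2 * math.pi * i / n_items for i in range(n_items)]
    orientations = [rng.choice((0, 90)) for _ in range(n_items)]  # texture-defined rectangles

    first = [(radius_deg * math.cos(a), radius_deg * math.sin(a), o)
             for a, o in zip(angles, orientations)]

    second_orient = list(orientations)
    target = None
    if rng.random() < change_prob:            # 50% chance one item changes orientation
        target = rng.randrange(n_items)
        second_orient[target] = 90 - second_orient[target]

    second = []
    for a, o in zip(angles, second_orient):
        r = radius_deg + rng.choice((-jitter_deg, jitter_deg))  # shift along the spoke
        second.append((r * math.cos(a), r * math.sin(a), o))
    return first, second, target
```

Comparing performance on displays generated this way against the unjittered baseline is the contrast the reported F-test evaluates.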
Perception-driven approaches to real-time remote immersive visualization
In remote immersive visualization systems, real-time 3D perception through RGB-D cameras, combined with modern Virtual Reality (VR) interfaces, enhances the user's sense of presence in a remote scene through 3D reconstruction, particularly when there is a need to visualize, explore, and perform tasks in environments that are inaccessible, too hazardous, or too distant. However, a remote visualization system requires that the entire pipeline, from 3D data acquisition to VR rendering, satisfy demands on speed, throughput, and visual realism. Especially when using point clouds, there is a fundamental quality difference between the acquired data of the physical world and the displayed data, because network latency and throughput limitations negatively impact the sense of presence and provoke cybersickness. This thesis presents state-of-the-art research that addresses these problems by taking the human visual system as inspiration, from sensor data acquisition to VR rendering. The human visual system does not have uniform vision across the field of view: visual acuity is sharpest at the center of the field of view and falls off towards the periphery. Peripheral vision provides lower resolution to guide the eye movements so that central vision visits all the interesting, crucial parts of a scene. As a first contribution, the thesis developed remote visualization strategies that exploit this acuity fall-off to facilitate the processing, transmission, buffering, and VR rendering of 3D reconstructed scenes while simultaneously reducing throughput requirements and latency. As a second contribution, the thesis investigated attentional mechanisms to select specific information from the dynamic spatio-temporal environment and draw user engagement to it.
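The acuity fall-off strategy can be illustrated with a minimal point-cloud subsampling sketch. The linear fall-off model, the keep probabilities, and the function name are assumptions for illustration, not the thesis's actual pipeline:

```python
import math
import random

def foveated_subsample(points, gaze_dir, keep_center=1.0, keep_periphery=0.1,
                       falloff_deg=60.0, rng=None):
    """Keep each point with a probability that falls off with angular
    distance from the gaze direction, mimicking human acuity fall-off.

    points   : list of (x, y, z) in the viewer frame
    gaze_dir : unit vector of the viewing direction
    """
    rng = rng or random.Random()
    kept = []
    for p in points:
        norm = math.sqrt(sum(c * c for c in p)) or 1e-9
        cosang = sum(a * b for a, b in zip(p, gaze_dir)) / norm
        ecc = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
        t = min(ecc / falloff_deg, 1.0)               # 0 at the fovea, 1 in far periphery
        keep_p = keep_center + t * (keep_periphery - keep_center)
        if rng.random() < keep_p:
            kept.append(p)
    return kept
```

Points near the gaze direction all survive, while the far periphery is thinned to roughly a tenth, which is how such a scheme cuts transmission throughput without visibly degrading the foveated region.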
It proposed a strategy to analyze the remote scene in terms of its 3D structure, its layout, and the spatial, functional, and semantic relationships between objects in the scene. The strategy focuses on analyzing the scene with models of human visual perception: it allocates a greater proportion of computational resources to objects of interest and creates a more realistic visualization. As a supplementary contribution, a new volumetric point-cloud density-based Peak Signal-to-Noise Ratio (PSNR) metric is proposed to evaluate the introduced techniques. An in-depth evaluation of the presented systems, a comparative examination of the proposed point-cloud metric, user studies, and experiments demonstrated that the methods introduced in this thesis are visually superior while significantly reducing latency and throughput.
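To fix ideas about point-cloud PSNR, here is a minimal sketch of a generic point-to-point geometry PSNR, with the peak taken as the reference bounding-box diagonal. This is not the density-based variant the thesis proposes, whose exact definition is not given in the abstract:

```python
import math

def point_cloud_psnr(reference, degraded):
    """Geometry PSNR between two point clouds (point-to-point style).

    For each degraded point, take the squared distance to its nearest
    reference point; the peak is the reference bounding-box diagonal.
    Brute-force nearest neighbor, for illustration only.
    """
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    mse = sum(min(sqdist(q, p) for p in reference) for q in degraded) / len(degraded)
    lo = [min(c[i] for c in reference) for i in range(3)]
    hi = [max(c[i] for c in reference) for i in range(3)]
    peak = math.sqrt(sum((h - l) ** 2 for h, l in zip(hi, lo)))  # bbox diagonal
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)
```

A density-based variant would additionally weight the error by local point density, so that sparse and dense regions are compared fairly.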
Virtual Reality to Simulate Visual Tasks for Robotic Systems
Virtual reality (VR) can be used as a tool to analyze the interactions between the visual system
of a robotic agent and the environment, with the aim of designing the algorithms to solve the
visual tasks necessary to behave properly in the 3D world. The novelty of our approach lies
in the use of the VR as a tool to simulate the behavior of vision systems. The visual system of
a robot (e.g., an autonomous vehicle, an active vision system, or a driving assistance system)
and its interplay with the environment can be modeled through the geometrical relationships
between the virtual stereo cameras and the virtual 3D world. Unlike conventional
applications, where VR is used for the perceptual rendering of the visual information to a
human observer, in the proposed approach, a virtual world is rendered to simulate the actual
projections on the cameras of a robotic system. In this way, machine vision algorithms can be
quantitatively validated by using the ground truth data provided by the knowledge of both
the structure of the environment and the vision system.
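The kind of ground truth such a virtual setup provides can be sketched for a rectified stereo pair; the focal length and baseline below are illustrative values, not taken from the paper:

```python
def project_stereo(point, focal_px=800.0, baseline_m=0.1):
    """Project a 3D point (left-camera frame, Z forward) into a rectified
    virtual stereo pair and return the ground-truth disparity.

    Focal length (pixels) and baseline (meters) are illustrative values.
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the cameras")
    ul = focal_px * x / z                      # left image (principal point at 0,0)
    ur = focal_px * (x - baseline_m) / z       # right camera shifted along +x
    v = focal_px * y / z                       # same row in both images: rectified pair
    disparity = ul - ur                        # equals focal_px * baseline_m / z
    return (ul, v), (ur, v), disparity
```

A stereo-matching algorithm run on the rendered image pair can then be scored against this exact disparity, which is the quantitative validation the virtual setup enables.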
Developing Predictive Models of Driver Behaviour for the Design of Advanced Driving Assistance Systems
Worldwide, injuries from vehicle accidents have been on the rise in recent
years, mainly due to driver error. The main objective of this research is to
develop a predictive system for driving maneuvers by analyzing the cognitive
behavior (cephalo-ocular) and the driving behavior of the driver (how the vehicle
is being driven). Advanced Driving Assistance Systems (ADAS) include
different driving functions, such as vehicle parking, lane departure warning,
blind spot detection, and so on. While much research has been performed on
developing automated co-driver systems, little attention has been paid to the
fact that the driver plays an important role in driving events. Therefore, it
is crucial to monitor events and factors that directly concern the driver. As
a goal, we perform a quantitative and qualitative analysis of driver behavior
to find its relationship with driver intentionality and driving-related actions.
We have designed and developed an instrumented vehicle (RoadLAB) that is
able to record several synchronized streams of data, including the surrounding
environment of the driver, vehicle functions and driver cephalo-ocular behavior,
such as gaze/head information. We subsequently analyze and study the
behavior of several drivers to determine whether there is a meaningful relation
between driver behavior and the next driving maneuver.
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
An Introduction to 3D User Interface Design
3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3D interaction design, and on dispelling widely held myths. Finally, we briefly discuss two approaches to 3D interaction design and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.
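As a concrete instance of the selection techniques such surveys cover, here is a minimal ray-casting picker against bounding spheres; the function name and the sphere representation are illustrative, not taken from the paper:

```python
import math

def pick_by_raycast(origin, direction, objects):
    """Ray-casting selection: return the id of the nearest object whose
    bounding sphere the pointing ray hits, or None if nothing is hit.

    objects: list of (obj_id, center, radius); direction need not be unit length.
    """
    norm = math.sqrt(sum(c * c for c in direction))
    d = [c / norm for c in direction]
    best = None
    for obj_id, center, radius in objects:
        oc = [c - o for c, o in zip(center, origin)]
        t = sum(a * b for a, b in zip(oc, d))          # closest approach along the ray
        if t < 0:
            continue                                   # sphere center behind the user
        closest_sq = sum(c * c for c in oc) - t * t    # squared ray-to-center distance
        if closest_sq <= radius * radius and (best is None or t < best[0]):
            best = (t, obj_id)
    return None if best is None else best[1]
```

Ray-casting is the classic baseline for 3D selection because it needs only a 6-DOF pointing device pose; techniques like cone-casting relax the precision it demands for small or distant targets.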
Life-Sized Audiovisual Spatial Social Scenes with Multiple Characters: MARC & SMART-I²
With the increasing use of virtual characters in virtual and mixed reality settings, coordinating realism in audiovisual rendering with expressive virtual characters becomes a key issue. In this paper, we introduce a new system that combines two existing systems to tackle the issue of realism and high quality in audiovisual rendering and life-sized expressive characters. The goal of the resulting SMART-MARC platform is to investigate the impact of realism on multiple levels: spatial audiovisual rendering of a scene, and the appearance and expressive behaviors of virtual characters. Potential interactive applications include mediated communication in virtual worlds, therapy, games, arts, and e-learning. Future experimental studies will focus on 3D audio/visual coherence, social perception, and ecologically valid interaction scenes.