
    Spatial Displays and Spatial Instruments

    The conference proceedings are divided into two main topic areas: (1) issues of spatial and picture perception raised by graphical electronic displays of spatial information; and (2) design questions raised by the practical experience of designers actually defining new spatial instruments for use in new aircraft and spacecraft. Each topic is considered from both a theoretical and an applied direction. Emphasis is placed on the discussion of phenomena and the determination of design principles.

    Evaluation of Perspective and Coplanar Cockpit Displays of Traffic Information to Support Hazard Awareness in Free Flight

    We examined the cockpit display representation of traffic to support the pilot in tactical planning and conflict avoidance. Such displays may support the "free flight" concept, but can also support greater situation awareness in a non-free-flight environment. Two perspective views and a coplanar display were contrasted in scenarios in which pilots needed to navigate around conflicting traffic, either in the absence (low workload) or presence (high workload) of a second intruder aircraft. All three formats were configured with predictive aiding vectors that explicitly represented the predicted point of closest pass and the predicted penetration of an alert zone around ownship. Ten pilots were assigned to each of the display conditions, and each flew a series of 60 conflict maneuvers that varied in workload and in the complexity of the conflict geometry. Results indicated a tendency to choose vertical over lateral maneuvers, a tendency which was amplified with the coplanar display. Vertical maneuvers by the intruder produced an added source of workload. Importantly, on all measures the coplanar display supported performance equal to or better than either of the perspective displays (i.e., fewer predicted and actual conflicts, and less extreme maneuvers). Previous studies that have indicated perspective superiority contrasted perspective displays only with uniplanar displays rather than with the coplanar display used here.
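
    The predictive aiding described above is characterised only at a high level in the abstract; as a rough illustration of the underlying geometry, the sketch below computes the predicted point of closest pass between ownship and an intruder under a constant-velocity assumption. The function name, units, and the 5 NM alert threshold are illustrative assumptions, not details taken from the study.

    # Illustrative sketch (assumed, not the study's implementation): closest point
    # of approach (CPA) prediction for two constant-velocity aircraft tracks.
    import numpy as np

    def predict_cpa(p_own, v_own, p_intruder, v_intruder):
        """Return (time_to_cpa, miss_distance) for two constant-velocity tracks."""
        dp = np.asarray(p_intruder, float) - np.asarray(p_own, float)   # relative position
        dv = np.asarray(v_intruder, float) - np.asarray(v_own, float)   # relative velocity
        speed2 = dv @ dv
        if speed2 == 0.0:                      # identical velocities: separation never changes
            return 0.0, float(np.linalg.norm(dp))
        t_cpa = max(0.0, -(dp @ dv) / speed2)  # time of minimum separation, never in the past
        miss = float(np.linalg.norm(dp + dv * t_cpa))
        return float(t_cpa), miss

    # Example: intruder converging from the right (positions in NM, velocities in NM/min).
    t, miss = predict_cpa(p_own=[0, 0, 0], v_own=[8, 0, 0],
                          p_intruder=[40, -30, 0], v_intruder=[0, 6, 0])
    print(f"CPA in {t:.1f} min at {miss:.1f} NM" + (" -> predicted conflict" if miss < 5.0 else ""))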

    Use of Depth Perception for the Improved Understanding of Hydrographic Data

    This thesis has reviewed how increased depth perception can be used to improve the understanding of hydrographic data. First, visual cues and various visual displays and techniques were investigated. From this investigation, 3D stereoscopic techniques proved to be superior in improving the depth perception and understanding of spatially related data, and a further investigation of current 3D stereoscopic visualisation techniques was carried out. After reviewing how hydrographic data are currently visualised, the chromostereoscopic visualisation technique was selected for further research on selected hydrographic data models. A novel chromostereoscopic application was developed, and the results of its evaluation on the selected hydrographic data models clearly show improved depth perception and understanding of the data models.
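
    As a rough illustration of the chromostereoscopic idea (not the application developed in the thesis), the sketch below colours a bathymetry grid so that, viewed through ChromaDepth-style glasses, shallow areas appear nearer (red) and deep areas farther (blue). The hue range, the synthetic seabed, and the matplotlib rendering are assumptions for illustration only.

    # Illustrative chromostereoscopic colour mapping for hydrographic depths.
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.colors import hsv_to_rgb

    def chromo_stereo_rgb(depths):
        """Map water depths (positive down) to RGB: shallow -> red (near), deep -> blue (far)."""
        d = np.asarray(depths, float)
        rng = np.ptp(d)
        t = (d - d.min()) / (rng if rng else 1.0)          # normalise 0..1 over the grid
        hue = (240.0 / 360.0) * t                          # hue 0 = red ... 2/3 = blue
        hsv = np.stack([hue, np.ones_like(t), np.ones_like(t)], axis=-1)
        return hsv_to_rgb(hsv)

    # Example: a synthetic seabed with a shoal rising from a gently sloping bottom.
    y, x = np.mgrid[0:200, 0:200]
    depths = 10 + 0.05 * y - 12 * np.exp(-((x - 100) ** 2 + (y - 100) ** 2) / 2000.0)
    plt.imshow(chromo_stereo_rgb(depths))
    plt.title("Chromostereoscopic depth colouring (illustrative)")
    plt.show()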

    Contributions of Pictorial and Binocular Cues to the Perception of Distance in Virtual Reality

    We assessed the contribution of binocular disparity and the pictorial cues of linear perspective, texture, and scene clutter to the perception of distance in consumer virtual reality. As additional cues are made available, distance perception is predicted to improve, as measured by a reduction in systematic bias and an increase in precision. We assessed (1) whether space is non-linearly distorted; (2) the degree of size constancy across changes in distance; and (3) the weighting of pictorial versus binocular cues in VR. In the first task, participants positioned two spheres so as to divide the egocentric distance to a reference stimulus (presented between 3 and 11 m) into three equal parts. In the second and third tasks, participants set the size of a sphere, presented at the same distances and at eye height, to match that of a hand-held football. Each task was performed in four environments varying in the available cues. We measured accuracy by identifying systematic biases in responses, and precision as the standard deviation of these responses. While there was no evidence of non-linear compression of space, participants did tend to underestimate distance linearly; this bias was reduced with the addition of each cue. The addition of binocular cues, when rich pictorial cues were already available, reduced both the bias and the variability of estimates. These results show that linear perspective and binocular cues, in particular, improve the accuracy and precision of distance estimates in virtual reality across a range of distances typical of many indoor environments.
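
    The abstract above summarises its analysis in terms of systematic bias, precision, and linear versus non-linear distortion of space. The sketch below illustrates one common way such quantities can be computed, using synthetic data and a power-function fit (perceived = a * distance^b, where b near 1 with a < 1 indicates linear underestimation). The data, variable names, and choice of fit are assumptions, not the authors' analysis.

    # Illustrative bias/precision summary and power-law fit on synthetic distance settings.
    import numpy as np

    rng = np.random.default_rng(0)
    true_d = np.repeat([3.0, 5.0, 7.0, 9.0, 11.0], 20)             # metres, echoing the 3-11 m range
    responses = 0.85 * true_d + rng.normal(0.0, 0.4, true_d.size)  # simulated ~15% linear underestimation

    # Accuracy as systematic bias per distance; precision as the SD of the settings.
    for d in np.unique(true_d):
        r = responses[true_d == d]
        print(f"{d:4.1f} m  bias {r.mean() - d:+.2f} m  precision (SD) {r.std(ddof=1):.2f} m")

    # Fit log(perceived) = log(a) + b * log(distance); b near 1 means no non-linear compression.
    b, log_a = np.polyfit(np.log(true_d), np.log(np.clip(responses, 1e-3, None)), 1)
    print(f"exponent b = {b:.2f} (near 1 -> linear), gain a = {np.exp(log_a):.2f} (< 1 -> underestimation)")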

    An analysis of perceptual errors in perspective displays

    Display dimensionality is one of the most debated issues in the design of cockpit-based displays of air traffic information. Many practitioners agree that presenting air traffic information on an integrated 3D display is preferable, when compared to presenting it on a 2D planar or co-planar display. However, research has shown that operators make errors in estimating the location of objects in 3D displays. This may be because of perceptual distortions caused by the geometric parameters used to generate the image. However, despite the issues identified regarding the locating of objects, some studies have found other performance advantages associated with these 3D displays. Therefore, it seems that attempts should be made to minimise perceptual biases so that these displays can be utilised to present integrated information in 3D environments. The aim of this thesis was to develop a model of distance estimation errors in perspective displays. It was hypothesised that many of the perceptual errors observed in perspective displays, such as azimuth and inter-object distance estimation errors, were related to observers wrongly estimating the distance between themselves and objects in the virtual world. Four experiments examining inter-object distance estimation were conducted. Participants were required to set a perspective image of a box to represent a perfect cube (requiring them to make a distance estimation scaled relative to the frontoparallel plane). Results showed that participants made inter-object distance estimation errors that increased as the distance between the observer and the objects in the display increased. Based on these results, two models explaining inter-object distance estimation errors were developed. The first model postulated that participants underestimated the distance between themselves and objects in the display. The second model suggested that participants used 2D (on-screen) cues to set the box to a cube. These models were applied to azimuth estimation errors observed in studies of perspective displays. It was found that while azimuth estimation error could only be partially modelled as a distance perception error, it could be explained to a greater extent by applying a strategy based on the 2D (on-screen) image. The findings of this study indicated that either distance estimation errors or 2D strategies could account for inter-object distance estimation errors in perspective displays.
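
    As background to why inter-object distance settings might degrade with viewing distance, the sketch below works through the pinhole-projection geometry of a perspective display: a frontoparallel extent of size s at distance D projects to roughly f*s/D, whereas a depth extent dz projects to roughly f*x*dz/D^2, so the image evidence for depth intervals shrinks much faster with distance. This is an illustrative derivation with arbitrary parameter values, not a reproduction of the thesis's two models.

    # Illustrative pinhole-projection geometry for a perspective display.
    f = 0.05   # distance from the projection centre to the image plane, metres (assumed)
    s = 1.0    # frontoparallel edge of the cube stimulus, metres (assumed)
    x = 0.5    # lateral offset of the depth edge being judged, metres (assumed)

    def projected_frontal_extent(D):
        """Image-plane size of a frontoparallel edge of length s at distance D."""
        return f * s / D

    def projected_depth_extent(D, dz=s):
        """Image-plane separation between the near and far ends of a depth edge of length dz."""
        return f * x * (1.0 / D - 1.0 / (D + dz))

    for D in (2.0, 5.0, 10.0, 20.0):
        w = projected_frontal_extent(D)
        d = projected_depth_extent(D)
        print(f"D = {D:5.1f} m: frontal {1000 * w:6.2f} mm, depth {1000 * d:6.3f} mm, ratio {d / w:.3f}")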

    Foveation for 3D visualization and stereo imaging

    Even though computer vision and digital photogrammetry share a number of goals, techniques, and methods, the potential for cooperation between these fields is not fully exploited. In an attempt to help bridge the two, this work takes a well-known computer vision and image processing technique called foveation and introduces it to photogrammetry, creating a hybrid application. The results may be beneficial for both fields, as well as for the general stereo imaging community and virtual reality applications. Foveation is a biologically motivated image compression method that is often used for transmitting videos and images over networks. It is possible to view foveation as an area-of-interest management method as well as a compression technique. While the most common foveation applications are in 2D, there are a number of binocular approaches as well. For this research, the current state of the art in the literature on level of detail, the human visual system, stereoscopic perception, stereoscopic displays, 2D and 3D foveation, and digital photogrammetry was reviewed. After the review, a stereo-foveation model was constructed and an implementation was realized to demonstrate a proof of concept. The conceptual approach is treated as generic, while the implementation was conducted under certain limitations, which are documented in the relevant context. A stand-alone program called Foveaglyph was created in the implementation process. Foveaglyph takes a stereo pair as input and uses an image matching algorithm to find the parallax values. It then calculates the 3D coordinates for each pixel from the geometric relationships between the object and the camera configuration, or via a parallax function. Once 3D coordinates are obtained, a 3D image pyramid is created. Then, using a distance-dependent level of detail function, spherical volume rings with varying resolutions throughout the 3D space are created. The user determines the area of interest. The result of the application is a user-controlled, highly compressed, non-uniform 3D anaglyph image. 2D foveation is also provided as an option. This type of development in a photogrammetric visualization unit is beneficial for system performance. The research is particularly relevant for large displays and head mounted displays, although the implementation, because it is done for a single user, would probably be best suited to a head mounted display (HMD) application. The resulting stereo-foveated image can be loaded moderately faster than the uniform original. Therefore, the program can potentially be adapted to an active vision system and manage the scene as the user glances around, given that an eye tracker determines where exactly the eyes fixate. This exploration may also be extended to robotics and other robot vision applications. Additionally, it can be used for attention management: the viewer can be directed to the object(s) of interest that the demonstrator would like to present (e.g. in 3D cinema). Based on the literature, we also believe this approach should help resolve several problems associated with stereoscopic displays, such as the accommodation-convergence problem and diplopia. While the available literature provides some empirical evidence to support the usability and benefits of stereo foveation, further tests are needed. User surveys related to the human factors of using stereo-foveated images, such as their possible contribution to preventing user discomfort and virtual simulator sickness (VSS) in virtual environments, are left as future work.
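
    The distance-dependent level-of-detail step described above lends itself to a short illustration. The sketch below bins reconstructed 3D points into spherical shells around the user's point of interest and assigns each shell a coarser pyramid level; the shell radii, pyramid depth, and downsampling rule are illustrative assumptions, not Foveaglyph's actual parameters.

    # Illustrative distance-dependent level-of-detail assignment around a fixation point.
    import numpy as np

    def lod_levels(points_xyz, fixation_xyz, shell_radii=(0.5, 1.0, 2.0, 4.0)):
        """Return an integer pyramid level per point: 0 = full resolution at the
        fixation point, increasing (coarser) with distance from it."""
        d = np.linalg.norm(np.asarray(points_xyz, float) - np.asarray(fixation_xyz, float), axis=1)
        return np.searchsorted(np.asarray(shell_radii), d)   # index of the first shell radius >= distance

    # Example: random reconstructed points, fixation at the scene centre.
    rng = np.random.default_rng(1)
    pts = rng.uniform(-5.0, 5.0, size=(10_000, 3))
    levels = lod_levels(pts, fixation_xyz=[0.0, 0.0, 0.0])
    for lvl in range(5):                                      # 4 shell radii -> levels 0..4
        n = int((levels == lvl).sum())
        print(f"level {lvl}: {n:5d} points, drawn at roughly 1/{4 ** lvl} of full detail")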

    Stereoscopic 3D user interfaces : exploring the potentials and risks of 3D displays in cars

    During recent years, rapid advancements in stereoscopic digital display technology have led to the acceptance of high-quality 3D in the entertainment sector and have even created enthusiasm towards the technology. The advent of autostereoscopic displays (i.e., glasses-free 3D) allows 3D technology to be introduced into other application domains, including but not limited to mobile devices, public displays, and automotive user interfaces - the latter of which is the focus of this work. Prior research demonstrates that 3D improves the visualization of complex structures and augments virtual environments. We envision its use to enhance the in-car user interface by structuring the presented information via depth. Thus, content that requires attention can be shown close to the user, and distances, for example to other traffic participants, gain a direct mapping in 3D space.

    Methodology for assessment of cognitive skills in virtual environments

    The client briefing of the proposed building design is usually in the form of drawings and artistic impressions being presented to the client. However, very few clients are able to read a technical drawing, and the artistic impressions are limited and do not aid the client in visualising all aspects of the proposed building. During the client briefing process the client needs to have the experiential quality described in order to fully understand the design of the proposed building. Generally, humans perceive and directly experience architectural space through building qualities such as texture, form, colour, light, scale, and movement. A full-scale model of the proposed building would fully afford these experiential qualities, but in reality it would be impractical and not cost-effective. However, VR technology allows the creation of a sense of space in the user's mind through a minimum of means, yet achieves a maximum impact and affords all the experiential qualities offered by a physical model.

    A virtual model with a high degree of detail, which can be explored by the designer and his clients, will therefore be of significant help. However, to give clients the best possible impression of the proposed design it is important to understand how the dimensions of those designed spaces are perceived. Therefore, a study was carried out focusing on fundamental investigations into the perception of basic architectural dimensions in order to assess the potential usefulness of VR technology in architecture and the client briefing process.

    In two experiments, subjects were required to estimate egocentric and exocentric dimensions in Virtual Environments (VE) and a Real World Setting (RWS). The influence of stimulus orientation was also investigated. In estimating all dimensions a magnitude estimation procedure was employed using a modified free-modulus technique. All participants were pre-tested; psychometric and visual tests were used to choose an experimental group with a fair degree of homogeneity. Two independent subject groups were used. In addition to dimension estimations, recall of a simple layout and the feeling of space were investigated when evaluating the virtual interface.

    The general null hypothesis assumed that people perceive space in VE as well as they do in the real world. The results were statistically significant, and therefore the general hypothesis could be rejected. Overall, participants underestimated the dimensions in both experiments by approximately 20%. Results and limitations of the study are discussed. The results of the experiments indicate that VR technology can be used for simulations of architectural spaces because, despite the underestimation of dimensions, it still performed relatively well when compared with the results of the experiments in the Real World Setting.
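
    As a small numerical illustration of how a roughly 20% underestimation might be expressed (the thesis's actual magnitude-estimation analysis is more involved and is not reproduced here), the sketch below compares made-up dimension estimates against true architectural dimensions for the VE and RWS conditions.

    # Illustrative summary of dimension estimates as a percentage of true size (synthetic data).
    import numpy as np

    true_m = np.array([2.4, 3.0, 4.5, 6.0, 9.0])       # example room dimensions in metres (assumed)
    ve_est = np.array([1.9, 2.4, 3.7, 4.8, 7.1])       # synthetic virtual-environment estimates
    rws_est = np.array([2.3, 2.8, 4.3, 5.7, 8.6])      # synthetic real-world estimates

    for label, est in (("VE ", ve_est), ("RWS", rws_est)):
        ratio = est / true_m
        print(f"{label}: mean estimate = {100 * ratio.mean():.0f}% of true size "
              f"(underestimation {100 * (1 - ratio.mean()):.0f}%)")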

    Quality-controlled audio-visual depth in stereoscopic 3D media

    BACKGROUND: The literature proposes several algorithms that produce “quality-controlled” stereoscopic depth in 3D films by limiting the stereoscopic depth to a defined depth budget. Like stereoscopic displays, spatial sound systems provide the listener with enhanced (auditory) depth cues, and are now commercially available in multiple forms. AIM: We investigate the implications of introducing auditory depth cues to quality-controlled 3D media, by asking: “Is it important to quality-control audio-visual depth by considering audio-visual interactions, when integrating stereoscopic display and spatial sound systems?” MOTIVATION: There are several reports in the literature of such “audio-visual interactions”, in which visual and auditory perception influence each other. We seek to answer our research question by investigating whether these audio-visual interactions could extend the depth budget used in quality-controlled 3D media. METHOD/CONCLUSIONS: The related literature is reviewed before presenting four novel experiments that build upon each other’s conclusions. In the first experiment, we show that content created with a stereoscopic depth budget creates measurable positive changes in audiences’ attitude towards 3D films. These changes are repeatable for different locations, displays and content. In the second experiment we calibrate an audio-visual display system and use it to measure the minimum audible depth difference. Our data are used to formulate recommendations for content designers and systems engineers. These recommendations include the design of an auditory depth perception screening test. We then show that an auditory-visual stimulus with a nearer auditory depth is perceived as nearer. We measure the impact of this effect upon a relative depth judgement, and investigate how the impact varies with audio-visual depth separation. Finally, the size of the cross-modal bias in depth is measured, from which we conclude that sound does have the potential to extend the depth budget by a small, but perceivable, amount.
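
    The "depth budget" concept that the thesis builds on can be illustrated with a simple remapping of scene depth to screen disparity that is clamped to a comfort range. The linear mapping, pixel units, and the +/-30 px budget below are illustrative assumptions rather than a specific algorithm from the literature reviewed in the thesis.

    # Illustrative depth-budget remapping: scene depth -> on-screen disparity within a comfort range.
    import numpy as np

    def disparity_within_budget(depths_m, near_m, far_m, budget_px=(-30.0, 30.0)):
        """Linearly remap scene depth to screen disparity (pixels) inside the budget.
        Negative disparity places content in front of the screen, positive behind it."""
        d = np.clip(np.asarray(depths_m, float), near_m, far_m)
        t = (d - near_m) / (far_m - near_m)            # 0 at the nearest object, 1 at the farthest
        lo, hi = budget_px
        return lo + t * (hi - lo)

    # Example: a scene spanning 1-50 m mapped into a +/-30 px disparity budget.
    print(np.round(disparity_within_budget([1, 2, 10, 50], near_m=1.0, far_m=50.0), 1))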