Characteristics of flight simulator visual systems
The physical parameters of a flight simulator visual system that characterize the system and determine its fidelity are identified and defined. The characteristics of visual simulation systems are discussed in terms of the basic categories of spatial, energy, and temporal properties, corresponding to the three fundamental quantities of length, mass, and time. Each of these parameters is further addressed in relation to its effect, its appropriate units or descriptors, methods of measurement, and its use or importance to image quality.
Perception-driven approaches to real-time remote immersive visualization
In remote immersive visualization systems, real-time 3D perception through RGB-D cameras, combined with modern Virtual Reality (VR) interfaces, enhances the user's sense of presence in a remote scene through 3D reconstruction. This is particularly valuable when there is a need to visualize, explore, and perform tasks in inaccessible environments that are too hazardous or too distant. However, such a system requires that the entire pipeline, from 3D data acquisition to VR rendering, satisfy demands on speed, throughput, and visual realism. Especially when using point clouds, there is a fundamental quality difference between the acquired data of the physical world and the displayed data, because network latency and throughput limitations negatively impact the sense of presence and provoke cybersickness. This thesis presents research that addresses these problems by taking the human visual system as inspiration, from sensor data acquisition to VR rendering. The human visual system does not have uniform vision across the field of view: visual acuity is sharpest at the center of the field of view and falls off towards the periphery. Peripheral vision provides lower resolution to guide eye movements so that central vision visits all the crucial parts of the scene. As a first contribution, the thesis develops remote visualization strategies that exploit this acuity fall-off to facilitate the processing, transmission, buffering, and rendering of 3D reconstructed scenes in VR while simultaneously reducing throughput requirements and latency. As a second contribution, the thesis investigates attentional mechanisms to select and draw user engagement to specific information in the dynamic spatio-temporal environment.
It proposes a strategy to analyze the remote scene with respect to its 3D structure, its layout, and the spatial, functional, and semantic relationships between objects in the scene. The strategy focuses on analyzing the scene with models of human visual perception, allocating a larger share of computational resources to objects of interest and creating a more realistic visualization. As a supplementary contribution, a new volumetric point-cloud density-based Peak Signal-to-Noise Ratio (PSNR) metric is proposed to evaluate the introduced techniques. An in-depth evaluation of the presented systems, a comparative examination of the proposed point-cloud metric, user studies, and experiments demonstrate that the methods introduced in this thesis are visually superior while significantly reducing latency and throughput.
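The thesis does not spell out the metric's definition in this abstract, but the general idea of a density-based point-cloud PSNR can be sketched: voxelize both clouds, compare their per-voxel point densities, and express the error relative to the peak density. All names, the voxel size, and the peak convention below are assumptions for illustration, not the published definition.

```python
import math
from collections import Counter

def voxel_density(points, voxel=0.1):
    """Count points per voxel cell: a crude volumetric density estimate."""
    return Counter((int(x // voxel), int(y // voxel), int(z // voxel))
                   for x, y, z in points)

def density_psnr(reference, degraded, voxel=0.1):
    """PSNR between the voxel-density grids of two point clouds (sketch)."""
    ref = voxel_density(reference, voxel)
    deg = voxel_density(degraded, voxel)
    cells = set(ref) | set(deg)
    mse = sum((ref[c] - deg[c]) ** 2 for c in cells) / len(cells)
    peak = max(ref.values())  # peak density observed in the reference
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)
```

A density-based comparison avoids the point-correspondence search of classical point-to-point metrics, which is one plausible motivation for a volumetric formulation.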
Reproducing reality with a high-dynamic-range multi-focal stereo display
With well-established methods for producing photo-realistic results, the next big challenge of graphics and display technologies is to achieve perceptual realism --- producing imagery indistinguishable from real-world 3D scenes. To deliver all the necessary visual cues for perceptual realism, we built a High-Dynamic-Range Multi-Focal Stereo Display that achieves high resolution, accurate color, a wide dynamic range, and most depth cues, including binocular presentation and a range of focal depths. The display and associated imaging system have been designed to capture and reproduce a small near-eye three-dimensional object and to allow for a direct comparison between virtual and real scenes. To assess our reproduction of realism and demonstrate the capability of the display and imaging system, we conducted an experiment in which the participants were asked to discriminate between a virtual object and its physical counterpart. Our results indicate that the participants can only detect the discrepancy with a probability of 0.44. With such a level of perceptual realism, our display apparatus can facilitate a range of visual experiments that require the highest fidelity of reproduction while allowing for full control of the displayed stimuli.
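The reported detection probability of 0.44 sits close to the 0.5 chance level of a real-versus-virtual discrimination task. Whether such a proportion is statistically distinguishable from chance can be checked with an exact two-sided binomial test; the trial count below (100) is a hypothetical assumption for illustration, since the abstract reports only the proportion.

```python
import math

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: probability, under chance level p,
    of any outcome whose likelihood is at most that of the observed k/n."""
    pmf = lambda i: math.comb(n, i) * p**i * (1 - p)**(n - i)
    pk = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= pk + 1e-12)

# Hypothetical numbers: 44 "detected" out of 100 trials.
p_val = binom_two_sided_p(44, 100)
```

With these assumed numbers the p-value is well above conventional significance thresholds, consistent with the paper's claim that the virtual reproduction is hard to tell from the real object.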
Fusing spatial and temporal components for real-time depth data enhancement of dynamic scenes
The depth images from consumer depth cameras (e.g., structured-light/ToF devices) exhibit a substantial amount of artifacts (e.g., holes, flickering, ghosting) that need to be removed for real-world applications. Existing methods cannot entirely remove them and are too slow for real-time use. This thesis proposes a new real-time spatio-temporal depth-image enhancement filter that completely removes flickering and ghosting and significantly reduces holes. It also presents a novel depth-data capture setup and two data-reduction methods to optimize the performance of the proposed enhancement method.
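The abstract does not detail the filter, but the general shape of a spatio-temporal depth enhancement can be sketched: a per-pixel temporal blend suppresses flicker, while holes in the current frame are filled from the running estimate. This is a minimal illustrative stand-in, not the thesis filter (which also removes ghosting via spatial reasoning).

```python
def enhance_depth(frames, alpha=0.7):
    """Minimal spatio-temporal depth-filter sketch.

    `frames` is a list of 2-D depth images (lists of rows, depth in mm),
    where 0 marks a missing measurement (a hole).  Exponential smoothing
    of valid depths damps flicker; holes inherit the running estimate.
    """
    est = None
    out = []
    for frame in frames:
        if est is None:
            est = [row[:] for row in frame]       # bootstrap from frame 1
        else:
            for r, row in enumerate(frame):
                for c, d in enumerate(row):
                    if d == 0:                    # hole: keep estimate
                        continue
                    if est[r][c] == 0:            # previously unknown
                        est[r][c] = d
                    else:                         # temporal blend
                        est[r][c] = alpha * est[r][c] + (1 - alpha) * d
        out.append([row[:] for row in est])
    return out
```

Real implementations operate per-pixel on the GPU to stay real-time; the nested Python loops here are only for clarity.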
The quality of experience of emerging display technologies
As new display technologies emerge and become part of everyday life, understanding the visual experience they provide becomes more relevant. Perception is the most vital component of visual experience; however, it is not the only cognitive process that contributes to the complex overall experience of the end user. Expectations can create significant cognitive bias that may even override what the user genuinely perceives. Even if a visualization technology is relatively novel, expectations can be fuelled by prior experience with similar displays and, more importantly, even a single word or acronym may induce serious preconceptions, especially if that word suggests excellence in quality. In this interdisciplinary Ph.D. thesis, the effect of minimal, one-word labels on the Quality of Experience (QoE) is investigated in a series of subjective tests. In studies carried out on an ultra-high-definition (UHD) display, UHD video contents were directly compared to their HD counterparts, with and without labels explicitly informing the test participants about the resolution of each stimulus. The experiments on High Dynamic Range (HDR) visualization addressed the effect of the word "premium" on the quality aspects of HDR video, and also how it may affect the perceived duration of stalling events. To support these findings, additional tests were carried out comparing the stalling-detection thresholds of HDR video with those of conventional Low Dynamic Range (LDR) video. The third emerging technology addressed by this thesis is light field visualization. Due to its novel nature and the lack of comprehensive, exhaustive research on the QoE of light field displays and content parameters at the time of this thesis, four phases of subjective studies were performed on light field QoE instead of investigating the labeling effect. The first phase started with fundamental research, and the experiments progressed towards the concept and evaluation of the dynamic adaptive streaming of light field video, introduced in the final phase.
Acceleration of Subtractive Non-contrast-enhanced Magnetic Resonance Angiography
Although contrast-enhanced magnetic resonance angiography (CE-MRA) is widely established as a clinical examination for the diagnosis of human vascular diseases, non-contrast-enhanced MRA (NCE-MRA) techniques have drawn increasing attention in recent years. NCE-MRA is based on the intrinsic physical properties of blood and does not require the injection of any exogenous contrast agents. Subtractive NCE-MRA is a class of techniques that acquires two image sets with different vascular signal intensity, which are later subtracted to generate angiograms.
The long acquisition time is an important drawback of NCE-MRA techniques, which not only limits the clinical acceptance of these techniques but also renders them sensitive to artefacts from patient motion. Another problem for subtractive NCE-MRA is the unwanted residual background signal caused by different static background signal levels on the two raw image sets. This thesis aims at improving subtractive NCE-MRA techniques by addressing both these limitations, with a particular focus on three-dimensional (3D) femoral artery fresh blood imaging (FBI).
The structure of the thesis is as follows:
Chapter 1 describes the anatomy and physiology of the vascular system, including the characteristics of arteries and veins, and the MR properties and flow characteristics of blood. These characteristics are the foundation of NCE-MRA technique development.
Chapter 2 introduces commonly used diagnostic angiographic methods, particularly CE-MRA and NCE-MRA. Current NCE-MRA techniques are reviewed and categorised into different types. Their principles, implementations and limitations are summarised.
Chapter 3 describes imaging acceleration theories including compressed sensing (CS), parallel imaging (PI) and partial Fourier (PF). The Split Bregman algorithm is described as an efficient CS reconstruction method. The SPIRiT reconstruction for PI and homodyne detection for PF are also introduced and combined with Split Bregman to form the basis of the reconstruction strategy for undersampled MR datasets. Four image quality metrics are presented for evaluating the quality of reconstructed images.
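The l1 subproblems inside a Split Bregman iteration have a closed-form solution: an elementwise soft-thresholding (shrinkage) step. A minimal sketch of that operator follows; the quadratic data-consistency solve and the SPIRiT/homodyne couplings described in the chapter are omitted.

```python
def shrink(x, t):
    """Soft-thresholding (shrinkage) operator:
    shrink(x, t) = sign(x) * max(|x| - t, 0).

    In Split Bregman it solves the l1 subproblem in closed form, e.g.
    d = shrink(grad_u + b, 1/lam) for each sparse-domain coefficient.
    """
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0
```

Because shrinkage is applied elementwise, it is trivially parallel, which is part of what makes Split Bregman an efficient CS reconstruction method in practice.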
In Chapter 4, an intensity correction method is proposed to improve background suppression for subtractive NCE-MRA techniques. Residual signals of background tissues are removed by performing a weighted subtraction, in which the weighting factor is obtained by a robust regression method. Image sparsity can also be increased and thereby potentially benefit CS reconstruction in the following chapters.
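The weighted subtraction described above can be sketched in a few lines. The thesis obtains the weighting factor by robust regression; the median-of-ratios estimator below is a simpler robust stand-in used only for illustration, and all names are assumptions.

```python
import statistics

def weighted_subtraction(bright, dark, background_mask):
    """Weighted subtraction with a robust scale estimate (illustrative).

    The weighting factor w is the median intensity ratio of the two image
    sets over background voxels; subtracting w * dark then nulls residual
    background tissue.  Inputs are flat lists of voxel intensities.
    """
    ratios = [b / d for b, d, m in zip(bright, dark, background_mask)
              if m and d != 0]
    w = statistics.median(ratios)
    return [b - w * d for b, d in zip(bright, dark)], w
```

A robust estimator matters here because vessel voxels (or motion) would act as outliers and bias a least-squares fit of the background scale.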
Chapter 5 investigates the optimal k-space sampling patterns for the 3D accelerated femoral artery FBI sequence. A variable density Poisson-disk with a fully sampled centre region and missing partial Fourier fractions is employed for k-space undersampling in the ky-kz plane. Several key parameters in sampling pattern design, such as partial Fourier sampling ratios, fully sampled centre region size and density decay factor, are evaluated and optimised.
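The structure of such a sampling pattern can be sketched: a fully sampled central region of the ky-kz plane plus a sampling probability that decays with distance from the centre. The sketch below uses independent random draws rather than a true Poisson-disk process (which also enforces a minimum gap between samples), and the parameter names and defaults are assumptions.

```python
import random

def vd_mask(ny, nz, center=0.25, decay=2.0, seed=0):
    """Variable-density ky-kz undersampling mask (sketch; ny, nz > 1).

    The central `center` fraction of k-space is fully sampled; outside it,
    the probability of sampling a point falls off as (1 - r)**decay,
    where r is the normalised Chebyshev distance from the k-space centre.
    """
    rng = random.Random(seed)                 # seeded for reproducibility
    cy, cz = (ny - 1) / 2, (nz - 1) / 2
    mask = [[0] * nz for _ in range(ny)]
    for y in range(ny):
        for z in range(nz):
            r = max(abs(y - cy) / cy, abs(z - cz) / cz)
            if r <= center or rng.random() < (1 - r) ** decay:
                mask[y][z] = 1
    return mask
```

The partial Fourier fractions evaluated in the chapter would additionally zero out one side of the ky (and/or kz) axis; that step is omitted here.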
Chapter 6 introduces several reconstruction strategies for accelerated subtractive NCE-MRA. A new reconstruction method, k-space subtraction with phase and intensity correction (KSPIC), is developed. By performing subtraction in k-space, KSPIC can exploit the sparsity of subtracted angiogram data and potentially improve the reconstruction performance. A phase correction procedure is used to restore the polarity of negative signals caused by subtraction. The intensity correction method proposed in Chapter 4 is also incorporated in KSPIC as it improves background suppression and thereby sparsity.
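Two ingredients of KSPIC can be illustrated with toy code: subtraction in k-space is valid because the Fourier transform is linear, and a smooth phase estimate can restore the polarity of negative values that a magnitude image would fold positive. A naive 1-D DFT stands in for the scanner encoding; the scalar phase used below is a hypothetical stand-in for the low-resolution phase map.

```python
import cmath

def dft(x):
    """Naive 1-D DFT, standing in for the scanner's Fourier encoding."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

# Linearity: subtracting the two acquisitions in k-space equals the
# Fourier transform of the subtracted images.
a = [1.0, 3.0, 2.0, 5.0]
b = [0.5, 1.0, 2.5, 1.0]
lhs = [p - q for p, q in zip(dft(a), dft(b))]
rhs = dft([p - q for p, q in zip(a, b)])

def phase_correct(s, phase_est):
    """Rotate by the (smooth) phase estimate and keep the real part,
    restoring the sign of a subtracted signal value."""
    return (s * cmath.exp(-1j * phase_est)).real

# A signed value of -2 with background phase 0.3 rad: |s| would be +2,
# but phase correction recovers the negative value.
restored = phase_correct(-2 * cmath.exp(0.3j), 0.3)
```

This is why KSPIC pairs k-space subtraction with a phase correction step: without it, magnitude reconstruction would rectify the negative signals created by the subtraction.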
The highly accelerated technique can be used not only to reduce the acquisition time, but also to enable imaging at increased resolution with no time penalty. A time-efficient high-resolution FBI technique is proposed in Chapter 7. By employing KSPIC and modifying the flow-compensation/spoiled gradients, the image matrix size can be increased from 256×256 up to 512×512 without prolonging the acquisition time.
Chapter 8 summarises the overall achievements and limitations of this thesis and outlines potential future research directions.
Funding: Cambridge Trust; China Scholarship Council; Addenbrooke's Charitable Trust; National Institute of Health Research, Cambridge Biomedical Research Centre
Enhancing Mobile Capacity through Generic and Efficient Resource Sharing
Mobile computing devices are becoming indispensable in every aspect of human life, but diverse hardware limits keep current mobile devices far from ideal for satisfying the performance requirements of modern mobile applications and for being used anytime, anywhere. Mobile Cloud Computing (MCC) can bypass these limits by enhancing mobile capacity through cooperative resource sharing, but doing so is challenging due to the heterogeneity of mobile devices in both hardware and software. Traditional schemes either restrict sharing to a specific type of hardware resource within individual applications, which requires tremendous reprogramming effort, or disregard the runtime execution pattern and transmit too much unnecessary data, wasting bandwidth and energy. To address these challenges, we present three novel resource-sharing frameworks that utilize system resources from a remote or personal cloud to enhance mobile capacity in a generic and efficient manner. First, we propose a method-level offloading methodology to run mobile computational workloads on a remote cloud CPU. Data transmission during offloading is minimized by identifying and selectively migrating only the memory contexts necessary for the method's execution. Second, we present a systematic framework to maximize mobile graphics-rendering performance with a remote cloud GPU, reusing redundant pixels across consecutive frames to reduce the transmitted frame data. Last, we propose to exploit unified mobile OS services and generically interconnect heterogeneous mobile devices into a personal mobile cloud, in which devices complement each other and flexibly share mobile peripherals (e.g., sensors, cameras).
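The idea behind method-level offloading with selective context migration can be sketched as a wrapper that serialises only the state a method actually needs (here, just its arguments) before shipping it to the remote side. The decorator name is hypothetical, and the pickle round-trip merely simulates the network hop of a real MCC framework.

```python
import pickle

def offloadable(func):
    """Method-level offloading sketch (names hypothetical).

    Instead of migrating the whole process image, only the memory context
    the method needs -- here its arguments -- is serialised and sent to
    the 'remote CPU' (simulated by a pickle round-trip).  Keeping this
    migrated context minimal is what keeps the transmitted data small.
    """
    def wrapper(*args, **kwargs):
        payload = pickle.dumps((args, kwargs))              # ship context
        remote_args, remote_kwargs = pickle.loads(payload)  # remote side
        return func(*remote_args, **remote_kwargs)          # remote exec
    return wrapper

@offloadable
def heavy_workload(n):
    # Stand-in for a compute-heavy method worth running on a cloud CPU.
    return sum(i * i for i in range(n))
```

A production framework must additionally decide at runtime *whether* offloading pays off, weighing transmission cost against the remote speed-up; that decision logic is omitted here.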
A mixed reality telepresence system for collaborative space operation
This paper presents a Mixed Reality system that results from integrating a telepresence system with an application to improve collaborative space exploration. The system combines free-viewpoint video with immersive projection technology to support non-verbal communication, including eye gaze, interpersonal distance, and facial expression. Importantly, these can be interpreted together as people move around the simulation, maintaining natural social distance. The application is a simulation of Mars, within which the collaborators must reach agreement over, for example, where the Rover should land and go.
The first contribution is the creation of a Mixed Reality system supporting the contextualization of non-verbal communication. Two technological contributions are a prototype technique for subtracting a person from a background that may contain physical objects and/or moving images, and a lightweight texturing method for multi-view rendering that balances visual and temporal quality. A practical contribution is the demonstration of pragmatic approaches to sharing space between display systems with distinct levels of immersion. A research-tool contribution is a system that allows comparison of conventionally authored and video-based reconstructed avatars within an environment that encourages exploration and social interaction. Aspects of system quality, including the communication of facial expression and end-to-end latency, are reported.
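The simplest baseline for the person-subtraction problem mentioned above is per-pixel background differencing; the paper's technique is necessarily more sophisticated, since its background may itself contain moving imagery. The sketch below shows only the baseline, on grayscale intensities, with an assumed threshold.

```python
def subtract_person(frame, background, thresh=12):
    """Per-pixel background-differencing sketch (not the paper's method).

    Keeps a pixel only where it differs from the static background model
    by more than `thresh`; rejected pixels become None (transparent).
    """
    return [[px if abs(px - bg) > thresh else None
             for px, bg in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

Against a display wall showing moving images, a static model like this fails, which motivates techniques that model or predict the dynamic background content.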