6 research outputs found

    Foveation for 3D visualization and stereo imaging

    Even though computer vision and digital photogrammetry share a number of goals, techniques, and methods, the potential for cooperation between these fields is not fully exploited. In an attempt to bridge the two, this work takes foveation, a well-known computer vision and image processing technique, and introduces it to photogrammetry, creating a hybrid application. The results may benefit both fields, as well as the general stereo imaging community and virtual reality applications. Foveation is a biologically motivated image compression method that is often used for transmitting videos and images over networks. Foveation can be viewed as an area-of-interest management method as well as a compression technique. While the most common foveation applications are in 2D, there are a number of binocular approaches as well. For this research, the current state of the art in the literature on level of detail, the human visual system, stereoscopic perception, stereoscopic displays, 2D and 3D foveation, and digital photogrammetry was reviewed. After the review, a stereo-foveation model was constructed and an implementation was realized as a proof of concept. The conceptual approach is treated as generic, while the implementation was conducted under certain limitations, which are documented in the relevant context. A stand-alone program called Foveaglyph was created in the implementation process. Foveaglyph takes a stereo pair as input and uses an image matching algorithm to find the parallax values. It then calculates the 3D coordinates for each pixel from the geometric relationships between the object and the camera configuration, or via a parallax function. Once 3D coordinates are obtained, a 3D image pyramid is created. Then, using a distance-dependent level of detail function, spherical volume rings with varying resolutions are created throughout the 3D space. The user determines the area of interest. 
The result is a user-controlled, highly compressed, non-uniform 3D anaglyph image. 2D foveation is also provided as an option. This type of development in a photogrammetric visualization unit benefits system performance. The research is particularly relevant for large displays and head mounted displays, although the implementation, being designed for a single user, is probably best suited to a head mounted display (HMD) application. The resulting stereo-foveated image can be loaded moderately faster than the uniform original. The program can therefore potentially be adapted to an active vision system that manages the scene as the user glances around, given an eye tracker to determine exactly where the eyes fixate. This exploration may also be extended to robotics and other robot vision applications. It can also be used for attention management, directing the viewer to the object(s) of interest the demonstrator would like to present (e.g. in 3D cinema). Based on the literature, we also believe this approach should help resolve several problems associated with stereoscopic displays, such as the accommodation-convergence problem and diplopia. While the available literature provides some empirical evidence supporting the usability and benefits of stereo foveation, further tests are needed. User surveys on the human factors of stereo-foveated images, such as their possible contribution to preventing user discomfort and virtual simulator sickness (VSS) in virtual environments, are left as future work.
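    The pipeline described above — stereo matching, depth from parallax, an image pyramid, and distance-dependent level-of-detail rings around a fixation point — can be sketched in a few lines. This is a minimal illustrative sketch, not Foveaglyph's actual code: function names, the nearest-neighbour pyramid, and the ring-to-level mapping are all assumptions.

```python
import numpy as np

def depth_from_parallax(parallax, focal, baseline):
    # Standard stereo relation for an assumed pinhole rig: Z = f * B / d.
    return focal * baseline / np.maximum(parallax, 1e-6)

def foveate(image, coords_3d, fixation, ring_radii):
    """Render a foveated image: full detail near the fixation point,
    coarser pyramid levels in outer spherical volume rings.

    image      : (H, W, 3) array, the base image
    coords_3d  : (H, W, 3) per-pixel 3D coordinates from stereo matching
    fixation   : (3,) user-selected point of interest
    ring_radii : increasing radii bounding the spherical rings
    """
    n_levels = len(ring_radii) + 1
    # Image pyramid: level k is the image downsampled by 2**k
    # (nearest-neighbour for simplicity; a Gaussian pyramid is typical).
    pyr = [image.astype(float)]
    for _ in range(n_levels - 1):
        pyr.append(pyr[-1][::2, ::2])

    # Re-expand every level to full resolution so levels can be composited.
    H, W = image.shape[:2]
    full = []
    for k, level in enumerate(pyr):
        f = 2 ** k
        up = np.repeat(np.repeat(level, f, axis=0), f, axis=1)
        full.append(up[:H, :W])

    # 3D distance of each pixel from the fixation point; searchsorted
    # buckets the distances into the spherical volume rings.
    dist = np.linalg.norm(coords_3d - np.asarray(fixation, float), axis=-1)
    level_idx = np.searchsorted(ring_radii, dist)

    # Composite: each ring takes its pixels from one pyramid level.
    out = np.empty_like(full[0])
    for k in range(n_levels):
        out[level_idx == k] = full[k][level_idx == k]
    return out.astype(image.dtype)
```

    Because outer rings carry only coarse pyramid data, the composite compresses well, which is the source of the faster loading noted above.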

    Investigation of an emotional virtual human modelling method

    In order to simulate virtual humans more realistically and give them life-like behaviours, exploratory research on emotion calculation, synthetic perception, and the decision-making process is discussed. A series of sub-modules have been designed, and simulation results are presented with discussion. A vision-based synthetic perception system is proposed in this thesis, which allows virtual humans to sense the surrounding virtual environment through a collision-based synthetic vision system. It enables autonomous virtual humans to change their emotion states according to stimuli in real time. The synthetic perception system also allows virtual humans to remember limited information within their own first-in-first-out (FIFO) short-term virtual memory. The new emotion generation method includes a novel hierarchical emotion structure and a group of emotion calculation equations, which enable virtual humans to behave emotionally in real time according to internal and external factors. The emotion calculation equations used in this research were derived from psychological emotion measurements. Virtual humans can use the information in virtual memory together with the emotion calculation equations to generate their own numerical emotion states within the hierarchical emotion structure. These emotion states are important internal references for virtual humans to adopt appropriate behaviours, and also key cues for their decision making. The work introduces a dynamic emotional motion database structure for virtual human modelling. To develop realistic virtual human behaviours, a number of subjects were motion-captured while performing emotional motions, with or without intent. The captured motions were applied to virtual characters and implemented in different virtual scenarios to help evoke and verify design ideas and possible simulation outcomes (such as fire evacuation). 
This work also introduces simple heuristics theory into the decision-making process, in order to make the virtual human's decision making more like a real human's. Emotion values are proposed as a group of key cues for decision making under the simple heuristic structures. A data interface, which connects the emotion calculation and the decision-making structure, has also been designed for the simulation system.
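    The combination of a FIFO short-term memory, numerical emotion states, and an aggregate mood value used as a decision cue can be sketched as follows. This is an illustrative sketch only: the class name, the decay-and-clamp update rule, and the flat two-level hierarchy are assumptions, not the thesis's actual equations or API.

```python
from collections import deque

class EmotionalAgent:
    """Sketch of a virtual human's emotion machinery: a first-in-first-out
    short-term memory plus numerical emotion states updated by stimuli."""

    def __init__(self, memory_size=5, decay=0.9):
        # FIFO short-term virtual memory: the oldest item drops out when full.
        self.memory = deque(maxlen=memory_size)
        self.decay = decay  # how quickly an emotion fades per update
        # Leaf nodes of a (here, flat) hierarchical emotion structure.
        self.emotions = {"joy": 0.0, "fear": 0.0, "anger": 0.0}

    def perceive(self, stimulus, intensities):
        """Record a perceived stimulus and update emotion states in real time."""
        self.memory.append(stimulus)
        for name, delta in intensities.items():
            value = self.emotions[name] * self.decay + delta
            self.emotions[name] = max(-1.0, min(1.0, value))  # clamp to [-1, 1]

    def valence(self):
        """A parent node of the hierarchy: an overall mood value that can
        serve as a key cue for heuristic decision making."""
        return self.emotions["joy"] - 0.5 * (self.emotions["fear"]
                                             + self.emotions["anger"])
```

    In a fire-evacuation scenario, for example, a flame stimulus would raise `fear`, drive `valence()` negative, and thereby bias a simple-heuristics decision module toward fleeing behaviours.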

    Active Visualization in a Multidisplay Immersive Environment

    Building a system to actively visualize extremely large data sets on large tiled displays in a real-time immersive environment involves a number of challenges. First, the system must be completely scalable to support the rendering of large data sets. Second, it must provide fast, constant frame rates regardless of user viewpoint or model orientation. Third, it must output the highest-resolution imagery where it is needed. Fourth, it must have a flexible user interface to control interaction with the display. This paper presents a prototype system that meets all four of these criteria. It details the design of a wireless user interface in conjunction with a foveated vision application for image generation on a tiled display wall. The system exploits the parallel, multidisplay, and multiresolution features of the Metabuffer image composition architecture to produce interactive renderings of large data streams at fast, constant frame rates.
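    One way to combine the multidisplay and multiresolution features the abstract mentions is to assign each tile of the display wall a level of detail based on its distance from the user's gaze, so only the tile under the fovea renders at full resolution. The sketch below is hypothetical (it is not the paper's Metabuffer API); the grid size, tile size, and Chebyshev-distance rule are assumptions.

```python
def tile_lod(gaze_px, grid=(4, 4), tile_px=512):
    """Map each (tx, ty) tile of a tiled display wall to a level of
    detail: 0 (full resolution) for the tile under the gaze, one level
    coarser per tile of Chebyshev distance away from it."""
    gx, gy = gaze_px[0] // tile_px, gaze_px[1] // tile_px
    return {
        (tx, ty): max(abs(tx - gx), abs(ty - gy))
        for ty in range(grid[1])
        for tx in range(grid[0])
    }
```

    Because each tile's workload is bounded by its level, the per-frame cost stays roughly constant as the gaze moves, which is one way to meet the constant-frame-rate requirement.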