
    Life-sized projector-based dioramas


    Detection of Non-Stationary Photometric Perturbations on Projection Screens

    Interfaces based on projection screens have become increasingly popular in recent years, mainly due to the large screen size and resolution they provide, as well as their stereo-vision capabilities. This work presents a local method for real-time detection of non-stationary photometric perturbations in projected images by means of computer vision techniques. The method is based on computing differences between the images in the projector's frame buffer and the corresponding images on the projection screen as observed by a camera. It is robust to spatial variations in the intensity of light emitted by the projector on the projection surface, and also to stationary photometric perturbations caused by external factors. Moreover, we describe the experiments carried out to demonstrate the reliability of the method.
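
    The core idea described above, comparing the expected frame-buffer image against the camera's view of the screen while discounting stationary effects, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm; the baseline map, threshold value, and grayscale inputs are all assumptions.

```python
import numpy as np

def detect_perturbations(framebuffer, observed, baseline, threshold=30.0):
    """Flag pixels where the observed screen deviates from the expected
    projector image beyond a per-pixel stationary baseline.

    framebuffer, observed: grayscale arrays of shape (H, W), values in [0, 255]
    baseline: per-pixel expected difference learned while the scene is
              unperturbed; it absorbs stationary photometric effects such
              as ambient light and spatial projector intensity falloff
    """
    diff = np.abs(observed.astype(np.float32) - framebuffer.astype(np.float32))
    # Subtract the stationary component so only non-stationary
    # perturbations (e.g. a passing shadow or occluder) remain.
    residual = diff - baseline
    return residual > threshold

# Toy example: a small bright occluder appears on an otherwise clean screen.
fb = np.full((8, 8), 100.0)        # image in the projector frame buffer
base = np.full((8, 8), 5.0)        # learned stationary difference map
obs = fb.copy()
obs[2:4, 2:4] += 80.0              # non-stationary perturbation
mask = detect_perturbations(fb, obs, base)   # True only at the 2x2 occluder
```

    In a real system the camera image would first be warped into the projector's coordinate frame and color-corrected; the thresholding step is where robustness to stationary perturbations comes from.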

    Imaging methods for understanding and improving visual training in the geosciences

    Experience in the field is a critical educational component for every student studying geology. However, it is typically difficult to ensure that every student gets the necessary experience because of monetary and scheduling limitations. Thus, we proposed to create a virtual field trip based on an existing 10-day field trip to California taken as part of an undergraduate geology course at the University of Rochester. To assess the effectiveness of this approach, we also proposed to analyze the learning and observation processes of both students and experts during the real and virtual field trips. At sites intended for inclusion in the virtual field trip, we captured gigapixel-resolution panoramas by taking hundreds of images using custom-built robotic imaging systems. We gathered data to analyze the learning process by fitting each geology student and expert with a portable eye-tracking system that records a video of their eye movements and a video of the scene they are observing. An important component of analyzing the eye-tracking data is mapping the gaze of each observer into a common reference frame. We have made progress towards developing a software tool that helps automate this procedure by using image feature tracking and registration methods to map the scene video frames from each eye-tracker onto a reference panorama for each site. For the purpose of creating a virtual field trip, we have a large-scale semi-immersive display system that consists of four tiled projectors, which have been colorimetrically and photometrically calibrated, and a curved widescreen display surface. We use this system to present the previously captured panoramas, which simulates the experience of visiting the sites in person. In terms of broader geology education and outreach, we have created an interactive website that uses Google Earth as the interface for visually exploring the panoramas captured for each site.
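
    Mapping gaze from each eye-tracker's scene video into the common panorama frame, as described above, typically reduces to applying a planar homography estimated by feature matching and registration. The sketch below assumes the homography is already known; the matrix values are hypothetical, standing in for a real feature-based registration result.

```python
import numpy as np

def map_gaze_to_panorama(gaze_xy, H):
    """Project a gaze point from an eye-tracker scene frame into the
    reference panorama using a 3x3 planar homography H.

    In practice H would be estimated per frame by matching image
    features between the scene frame and the panorama (e.g. with
    RANSAC); here it is supplied directly.
    """
    x, y = gaze_xy
    p = H @ np.array([x, y, 1.0])   # homogeneous coordinates
    return p[0] / p[2], p[1] / p[2] # perspective divide

# Hypothetical homography: a uniform 2x scale plus a translation.
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, 20.0],
              [0.0, 0.0, 1.0]])
px, py = map_gaze_to_panorama((100.0, 50.0), H)   # -> (210.0, 120.0)
```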

    Continuous Human Activity Tracking over a Large Area with Multiple Kinect Sensors

    In recent years, researchers have been investigating the use of technology to enhance the healthcare and wellness of patients with dementia. Dementia symptoms are associated with a decline in thinking skills and memory severe enough to reduce a person's ability to pay attention and perform daily activities. Progression of dementia can be assessed by monitoring the daily activities of the patients. This thesis encompasses continuous localization and behavioral analysis of patients' motion patterns over a wide-area indoor living space using multiple calibrated Kinect sensors connected over the network. The skeleton data from all the sensors is transferred to the host computer via TCP sockets into the Unity software, where it is integrated into a single world coordinate system using a calibration technique. Multiple cameras are placed with some overlap in their fields of view to enable successful calibration of the cameras and continuous tracking of the patients. Localization and behavioral data are stored in a CSV file for further analysis.
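
    Integrating skeleton data from several sensors into a single world coordinate system, as described above, amounts to applying each sensor's calibrated rigid transform (rotation and translation) to joint positions reported in that sensor's camera frame. The sketch below uses made-up extrinsics for two sensors; the actual thesis obtains them via its calibration procedure.

```python
import numpy as np

def to_world(joint_cam, R, t):
    """Map a skeleton joint from one Kinect's camera frame into the
    shared world frame using that sensor's extrinsics (R, t)."""
    return R @ joint_cam + t

# Hypothetical extrinsics: sensor A defines the world frame; sensor B
# is rotated 90 degrees about the vertical (y) axis and shifted 3 m
# along x, as might result from calibrating overlapping views.
R_a, t_a = np.eye(3), np.zeros(3)
R_b = np.array([[ 0.0, 0.0, 1.0],
                [ 0.0, 1.0, 0.0],
                [-1.0, 0.0, 0.0]])
t_b = np.array([3.0, 0.0, 0.0])

# A head joint at world position (2.0, 1.5, 1.0) seen by both sensors:
world_truth = np.array([2.0, 1.5, 1.0])
joint_a = world_truth                    # A's frame coincides with world
joint_b = R_b.T @ (world_truth - t_b)    # same point in B's camera frame

# After calibration, both sensors agree on the world coordinate,
# which is what makes continuous tracking across sensors possible.
w_a = to_world(joint_a, R_a, t_a)
w_b = to_world(joint_b, R_b, t_b)
```

    The overlap between fields of view mentioned in the abstract is what lets a tracked person be handed off between sensors without gaps: in the overlap region both sensors report (approximately) the same world position.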

    Highlighted depth-of-field photography: Shining light on focus

    We present a photographic method to enhance intensity differences between objects at varying distances from the focal plane. By combining a unique capture procedure with simple image processing techniques, the detected brightness of an object is decreased in proportion to its degree of defocus. A camera-projector system casts distinct grid patterns onto a scene to generate a spatial distribution of point reflections. These point reflections relay a relative measure of defocus that is utilized in postprocessing to generate a highlighted DOF photograph. Trade-offs between three different projector-processing pairs are analyzed, and a model is developed to help describe a new intensity-dependent depth of field that is controlled by the pattern of illumination. Results are presented for a primary single-snapshot design as well as a scanning method and a comparison method. As an application, automatic matting results are presented.
    Alfred P. Sloan Foundation
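
    The intuition behind attenuating brightness with defocus can be illustrated with a toy thin-lens model: a projected point reflection spreads its energy over the geometric circle of confusion, so its detected peak intensity falls roughly with the inverse of the blur area. This is only an illustrative model with assumed parameter values, not the intensity-dependent depth-of-field model developed in the paper.

```python
def blur_circle_diameter(aperture, focal_len, focus_dist, obj_dist):
    """Geometric circle-of-confusion diameter (metres) for a thin lens
    focused at focus_dist, for a point at obj_dist from the lens."""
    return (aperture * focal_len * abs(obj_dist - focus_dist)
            / (obj_dist * (focus_dist - focal_len)))

def peak_intensity(i0, coc, pixel=1e-5):
    """A point reflection's energy spreads over the blur circle, so the
    detected peak brightness falls roughly as 1 / blur area."""
    d = max(coc, pixel)              # clamp at one pixel width when in focus
    return i0 * (pixel / d) ** 2

# A point on the focal plane keeps full brightness; points farther from
# the focal plane produce larger blur circles and dimmer reflections.
in_focus = peak_intensity(1.0, blur_circle_diameter(0.01, 0.05, 2.0, 2.0))
defocused = peak_intensity(1.0, blur_circle_diameter(0.01, 0.05, 2.0, 3.0))
```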

    A Virtual Testbed for Fish-Tank Virtual Reality: Improving Calibration with a Virtual-in-Virtual Display

    With the development of novel calibration techniques for multimedia projectors and curved projection surfaces, volumetric 3D displays are becoming easier and more affordable to build. The basic requirements include a display shape that defines the volume (e.g. a sphere, cylinder, or cuboid) and a tracking system to provide each user's location for perspective-corrected rendering. When coupled with modern graphics cards, these displays are capable of high resolution, low latency, high frame rates, and even stereoscopic rendering; however, as many previous studies have shown, every component must be precisely calibrated for a compelling 3D effect. While human perceptual requirements have been extensively studied for head-tracked displays, most studies featured seated users in front of a flat display. It remains unclear whether results from these flat-display studies are applicable to newer, walk-around displays with enclosed or curved shapes. To investigate these issues, we developed a virtual testbed for volumetric head-tracked displays that can measure the calibration accuracy of the entire system in real time. We used this testbed to investigate visual distortions of prototype curved displays, improve existing calibration techniques, study the importance of stereo to performance and perception, and validate perceptual calibration with novice users. Our experiments show that stereo is important for task performance but requires more accurate calibration, and that novice users can make effective use of perceptual calibration tools. We also propose a novel real-time calibration method that can be used to fine-tune an existing calibration using perceptual feedback. The findings from this work can be used to build better head-tracked volumetric displays with an unprecedented amount of 3D realism and intuitive calibration tools for novice users.