3,697 research outputs found

    Photo-Realistic Scenes with Cast Shadows Show No Above/Below Search Asymmetries for Illumination Direction

    Full text link
    Visual search is extended from the domain of polygonal figures presented on a uniform field to photo-realistic scenes containing target objects in dense, naturalistic backgrounds. The target in a trial is a computer-rendered rock protruding in depth from a "wall" of rocks of roughly similar size but different shapes. Subjects responded "present" when one rock appeared closer than the rest, owing to occlusions or cast shadows, and "absent" when all rocks appeared to be at the same depth. Results showed that cast shadows can significantly decrease reaction times compared to scenes with no cast shadows, in which the target was revealed only by occlusions of the rocks behind it. A control experiment showed that cast shadows can be utilized even in displays involving rocks of several achromatic surface colors (dark through light), in which the shadow cast by the target rock was not the darkest region in the scene. Finally, in contrast with reports of experiments by others involving polygonal figures, we found no evidence for an effect of illumination direction (above vs. below) on search times. Office of Naval Research (N00014-94-1-0597, N00014-95-1-0409)

    Virtual image out-the-window display system study. Volume 2 - Appendix

    Get PDF
    Virtual image out-the-window display system imaging techniques and simulation devices - appendices containing background material

    VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera

    Full text link
    We present the first real-time method to capture the full global 3D skeletal pose of a human in a stable, temporally consistent manner using a single RGB camera. Our method combines a new convolutional neural network (CNN) based pose regressor with kinematic skeleton fitting. Our novel fully-convolutional pose formulation regresses 2D and 3D joint positions jointly in real time and does not require tightly cropped input frames. A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton. This makes our approach the first monocular RGB method usable in real-time applications such as 3D character control; thus far, the only monocular methods for such applications employed specialized RGB-D cameras. Our method's accuracy is quantitatively on par with the best offline 3D monocular RGB pose estimation methods. Our results are qualitatively comparable to, and sometimes better than, results from monocular RGB-D approaches, such as the Kinect. However, we show that our approach is more broadly applicable than RGB-D solutions, i.e., it works for outdoor scenes, community videos, and low quality commodity RGB cameras. Comment: Accepted to SIGGRAPH 2017
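
    The two-stage pipeline the abstract describes (per-frame CNN regression of joint positions, followed by kinematic skeleton fitting for temporal stability) can be outlined as a minimal Python sketch. This is an illustration only, not the authors' implementation: the joint count, the random stand-in regressor, and the simple temporal blend in `fit_kinematic_skeleton` are all hypothetical placeholders for the real network and the real skeleton optimization.

    ```python
    import numpy as np

    NUM_JOINTS = 21  # hypothetical joint count; the abstract does not specify the skeleton

    def cnn_pose_regressor(frame):
        """Stand-in for the fully-convolutional pose regressor: a real network
        would regress 2D heatmaps and 3D location maps per joint from the RGB
        frame; here we return random estimates so the sketch runs end to end."""
        joints_2d = np.random.rand(NUM_JOINTS, 2)        # normalized image coordinates
        joints_3d = np.random.rand(NUM_JOINTS, 3) - 0.5  # root-relative 3D positions
        return joints_2d, joints_3d

    def fit_kinematic_skeleton(joints_3d, prev_pose, alpha=0.8):
        """Toy temporal filter standing in for kinematic skeleton fitting:
        blends the per-frame CNN estimate with the previous pose. The actual
        method optimizes the joint angles of a coherent kinematic skeleton
        against both the 2D and 3D CNN evidence."""
        if prev_pose is None:
            return joints_3d
        return alpha * joints_3d + (1.0 - alpha) * prev_pose

    def track(frames):
        """Two-stage pipeline from the abstract: per-frame CNN regression,
        then skeleton fitting for a temporally stable global 3D pose."""
        poses, prev = [], None
        for frame in frames:
            _, joints_3d = cnn_pose_regressor(frame)
            prev = fit_kinematic_skeleton(joints_3d, prev)
            poses.append(prev)
        return poses

    if __name__ == "__main__":
        video = [np.zeros((368, 368, 3), dtype=np.uint8) for _ in range(10)]  # dummy RGB frames
        poses = track(video)
        print(len(poses), poses[0].shape)
    ```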

    Multiple multimodal mobile devices: Lessons learned from engineering lifelog solutions

    Get PDF
    For lifelogging, or the recording of one’s life history through digital means, to be successful, a range of separate multimodal mobile devices must be employed. These include smartphones such as the N95, the Microsoft SenseCam (a wearable passive photo-capture device), and wearable biometric devices. Each collects a facet of the bigger picture through, for example, personal digital photos, mobile messages and document-access history, but unfortunately they operate independently and unaware of each other. This creates significant challenges for the practical application of these devices, the use and integration of their data, and their operation by a user. In this chapter we discuss the software engineering challenges, and their implications for individuals working on the integration of data from multiple ubiquitous mobile devices, drawing on our experiences working with such technology over the past several years to develop integrated personal lifelogs. The chapter serves as an engineering guide to those considering working in the domain of lifelogging and, more generally, to those working with multiple multimodal devices and the integration of their data
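
    One concrete instance of the integration problem the chapter discusses is merging independently recorded per-device event streams into a single chronological timeline. The sketch below is a minimal, assumed data model, not the chapter's actual design: the `LifelogEvent` type, the device names, and `merge_streams` are illustrative, and it assumes device clocks have already been reconciled, which the chapter notes is itself a hard engineering problem.

    ```python
    import heapq
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass(order=True)
    class LifelogEvent:
        """One event from one device; only the timestamp participates in ordering."""
        timestamp: datetime
        source: str = field(compare=False)   # e.g. "SenseCam", "N95", "biometric"
        payload: dict = field(compare=False)

    def merge_streams(*streams):
        """Merge independently captured, per-device streams into a single
        chronological timeline. Assumes each stream is already sorted by
        timestamp and that device clocks have been synchronized."""
        return list(heapq.merge(*streams))

    # Usage: two toy streams from different devices
    photos = [LifelogEvent(datetime(2008, 5, 1, 9, 0), "SenseCam", {"img": "0001.jpg"})]
    messages = [LifelogEvent(datetime(2008, 5, 1, 8, 30), "N95", {"sms": "running late"})]
    for ev in merge_streams(photos, messages):
        print(ev.timestamp, ev.source, ev.payload)
    ```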

    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 341)

    Get PDF
    This bibliography lists 133 reports, articles and other documents introduced into the NASA Scientific and Technical Information System during September 1990. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance

    Bridging the Domain-Gap in Computer Vision Tasks

    Get PDF

    Experience Design for Virtual Reality. From Illusion to Agency.

    Get PDF
    Virtual Reality (VR) allows viewers to inhabit and interact with virtual spaces in a way that has the potential to be much more compelling than any other medium, breaking through the barrier between merely watching a situation or environment and actually experiencing it. It has an experiential quality, integrating the domains of interactive video games, filmmaking, storytelling and immersion: a balancing act between narrative design, digital placemaking and user agency. In this article, written from a practitioner’s perspective, I propose and demonstrate strategies for how immersive experiences can utilise multiple modes of representation, such as omnidirectional stereoscopic video and real-time 3D-rendered geometry, to form a coherent spatial narrative environment for a viewer in VR. Particular emphasis is placed on factors in visual perception; experience design, including narration, scenography and user agency; and the technical conditions of the medium. This insight emerged from a series of recent VR projects which are fundamentally different in terms of content, design and production techniques, but this diversity is an opportunity to lay the foundations for a classification system for VR experiences and to establish a common language for this exciting new medium