
    Flash-lag chimeras: the role of perceived alignment in the composite face effect

    Spatial alignment of different face halves results in a configuration that mars the recognition of the identity of either face half. What would happen to the recognition performance for face halves that were aligned on the retina but were perceived as misaligned, or were misaligned on the retina but were perceived as aligned? We used the 'flash-lag' effect to address these questions. We created chimeras consisting of a stationary top half-face initially aligned with a moving bottom half-face. Flash-lag chimeras were better recognized than their stationary counterparts. However, when flashed face halves were presented physically ahead of the moving halves, thereby nulling the flash-lag effect, recognition was impaired. This counters the notion that relative movement between the two face halves per se is sufficient to explain the better recognition of flash-lag chimeras. Thus, the perceived spatial alignment of face halves (despite retinal misalignment) impairs recognition, while perceived misalignment (despite retinal alignment) does not.

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by systems designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, which contains a description of the program and instructions for its use.

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low-latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
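
    As a rough illustration of the working principle summarized above (not drawn from the survey itself), the sketch below models event generation as per-pixel log-brightness changes crossing a contrast threshold. The threshold value and the function name generate_events are illustrative assumptions, and a real sensor updates each pixel's reference level asynchronously rather than frame to frame.

        import numpy as np

        # Toy model of an event camera's output: a pixel fires an event when its
        # log-brightness changes by more than a contrast threshold C since the
        # previous reference frame. C and the function name are assumptions.
        C = 0.15  # contrast threshold, in log-intensity units

        def generate_events(prev_log, curr_log, t, C=C):
            """Return (t, x, y, polarity) events between two log-intensity frames."""
            events = []
            diff = curr_log - prev_log
            ys, xs = np.nonzero(np.abs(diff) >= C)
            for y, x in zip(ys, xs):
                polarity = 1 if diff[y, x] > 0 else -1  # sign of the brightness change
                events.append((t, int(x), int(y), polarity))
            return events

        # Usage: one brightening pixel in a 4x4 scene yields a single positive event.
        prev = np.log(np.full((4, 4), 100.0))
        curr = prev.copy()
        curr[2, 1] += 0.3  # brightness increase at pixel (x=1, y=2)
        print(generate_events(prev, curr, t=0.001))  # -> [(0.001, 1, 2, 1)]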

    Vision-Based Production of Personalized Video

    In this paper we present a novel vision-based system for the automated production of personalized video souvenirs for visitors in leisure and cultural heritage venues. Visitors are visually identified and tracked through a camera network. The system produces a personalized DVD souvenir at the end of a visitor’s stay, allowing visitors to relive their experiences. We analyze how we identify visitors by fusing facial and body features, how we track visitors, how the tracker recovers from failures due to occlusions, as well as how we annotate and compile the final product. Our experiments demonstrate the feasibility of the proposed approach.
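
    As a hypothetical sketch of the kind of face-plus-body fusion the abstract describes (not the paper's actual method), the snippet below combines a face-similarity score with a body-appearance score by a weighted sum and picks the best-scoring enrolled visitor; the weights, threshold, and function names are assumptions.

        # Score-level fusion of face and body-appearance similarities (illustrative only).
        def fuse_identity_scores(face_score, body_score, w_face=0.7, w_body=0.3):
            """Combine two similarity scores in [0, 1] into a single match score."""
            return w_face * face_score + w_body * body_score

        def best_match(query_scores, gallery_ids, threshold=0.6):
            """Return the enrolled visitor with the highest fused score above the threshold."""
            best_id, best = None, threshold
            for visitor_id, (face_s, body_s) in zip(gallery_ids, query_scores):
                score = fuse_identity_scores(face_s, body_s)
                if score > best:
                    best_id, best = visitor_id, score
            return best_id  # None means the detection matches no enrolled visitor

        # Usage: visitor 'v2' wins with fused score 0.7*0.9 + 0.3*0.8 = 0.87.
        print(best_match([(0.4, 0.9), (0.9, 0.8)], ['v1', 'v2']))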

    How Hume and Mach Helped Einstein Find Special Relativity

    In recounting his discovery of special relativity, Einstein recalled a debt to the philosophical writings of Hume and Mach. I review the path Einstein took to special relativity and urge that, at a critical juncture, he was aided decisively not by any specific doctrine of space and time, but by a general account of concepts that Einstein found in Hume and Mach’s writings. That account required that concepts, used to represent the physical, must be properly grounded in experience. In so far as they extended beyond that grounding, they were fictional and to be abjured (Mach) or at best tolerated (Hume). Einstein drew a different moral. These fictional concepts revealed an arbitrariness in our physical theorizing and may still be introduced through freely chosen definitions, as long as these definitions do not commit us to false presumptions. After years of failed efforts to conform electrodynamics to the principle of relativity and with his frustration mounting, Einstein applied this account to the concept of simultaneity. The resulting definition of simultaneity provided the reconceptualization that solved the problem in electrodynamics and led directly to the special theory of relativity.
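
    For reference, the definition of simultaneity this account turns on can be stated as Einstein's standard 1905 clock-synchronization convention (a textbook formulation, not quoted from the paper under review): a light signal leaves clock A at time t_A, is reflected at clock B at time t_B, and returns to A at time t'_A; the two clocks are defined to be synchronous when

        \[
          t_B - t_A = t'_A - t_B ,
          \qquad \text{equivalently} \qquad
          t_B = t_A + \tfrac{1}{2}\,(t'_A - t_A).
        \]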