
    The Perception of Globally Coherent Motion

    How do human observers perceive a coherent pattern of motion from a disparate set of local motion measures? Our research has examined how ambiguous motion signals along straight contours are spatially integrated to obtain a globally coherent perception of motion. Observers viewed displays containing a large number of apertures, with each aperture containing one or more contours whose orientations and velocities could be independently specified. The total pattern of the contour trajectories across the individual apertures was manipulated to produce globally coherent motions, such as rotations, expansions, or translations. For displays containing only straight contours extending to the circumferences of the apertures, observers' reports of global motion direction were biased whenever the sampling of contour orientations was asymmetric relative to the direction of motion. Performance was improved by the presence of identifiable features, such as line ends or crossings, whose trajectories could be tracked over time. The reports of our observers were consistent with a pooling process involving a vector average of measures of the component of velocity normal to contour orientation, rather than with the predictions of the intersection-of-constraints analysis in velocity space.
    Supported by the Air Force Office of Scientific Research (90-0175, 89-0016), the National Science Foundation (BNS-8908426), and the Office of Naval Research.
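
    The contrast between the two pooling rules in this abstract is easy to state computationally. Below is a minimal sketch (assuming NumPy; the function names are illustrative, not from the paper) of how a vector average of normal-flow components is biased by an asymmetric sample of contour orientations, while an intersection-of-constraints solution recovers the true velocity.

```python
import numpy as np

def unit_normals(orientations):
    """Unit normals to contours with the given orientations (radians)."""
    return np.stack([-np.sin(orientations), np.cos(orientations)], axis=1)

def vector_average(orientations, true_v):
    """Pool the normal-flow vectors (v . n_i) n_i by simple averaging."""
    n = unit_normals(orientations)
    speeds = n @ true_v                  # measurable normal components
    return (speeds[:, None] * n).mean(axis=0)

def intersection_of_constraints(orientations, true_v):
    """Least-squares intersection of the constraint lines v . n_i = s_i."""
    n = unit_normals(orientations)
    speeds = n @ true_v
    v, *_ = np.linalg.lstsq(n, speeds, rcond=None)
    return v

true_v = np.array([1.0, 0.0])                       # rightward translation
thetas = np.deg2rad([10.0, 20.0, 30.0, 40.0])       # asymmetric orientation sample
print(vector_average(thetas, true_v))               # biased away from (1, 0)
print(intersection_of_constraints(thetas, true_v))  # recovers (1, 0) exactly
```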

    Automatic lip tracking: Bayesian segmentation and active contours in a cooperative scheme

    An algorithm for speaker's lip contour extraction is presented in this paper. A color video sequence of the speaker's face is acquired under natural lighting conditions and without any particular make-up. First, a logarithmic color transform is performed from RGB to HI (hue, intensity) color space. A Bayesian approach segments the mouth area using Markov random field modelling. Motion is combined with red-hue lip information into a spatiotemporal neighbourhood. Simultaneously, a region of interest and relevant boundary points are automatically extracted. Next, an active contour using spatially varying coefficients is initialised with the results of the preprocessing stage. Finally, an accurate lip shape with inner and outer borders is obtained, with good-quality results in this challenging situation.
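
    The pipeline's two main stages, a hue transform that emphasises lip red followed by a snake fit, can be sketched as follows, assuming scikit-image. The log-ratio hue and the fixed snake coefficients are illustrative stand-ins; the paper's exact HI transform, MRF labelling, and spatially varying coefficients are not reproduced here.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def log_hue_intensity(rgb):
    """Map an RGB frame (floats in [0, 1]) to a log-ratio red hue and intensity."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-6
    hue = np.log(r + eps) - np.log(g + eps)   # large where pixels are redder
    intensity = (r + g + b) / 3.0
    return hue, intensity

def fit_lip_snake(hue_map, centre, radius, n_points=120):
    """Initialise a closed snake around the mouth ROI and let it settle."""
    t = np.linspace(0, 2 * np.pi, n_points)
    init = np.stack([centre[0] + radius * np.sin(t),   # (row, col) coordinates
                     centre[1] + radius * np.cos(t)], axis=1)
    smooth = gaussian(hue_map, sigma=3, preserve_range=True)
    return active_contour(smooth, init, alpha=0.015, beta=10.0, gamma=0.001)
```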

    Graphics for uncertainty

    Graphical methods such as colour shading and animation, which are widely available, can be very effective in communicating uncertainty. In particular, the idea of a ‘density strip’ provides a conceptually simple representation of a distribution, and this is explored in a variety of settings, including a comparison of means, regression and models for contingency tables. Animation is also a very useful device for exploring uncertainty, particularly in the context of flexible models expressed as curves and surfaces whose structure is of particular interest. Animation can further provide a helpful mechanism for exploring data in several dimensions, illustrated here in the simple but very important setting of spatiotemporal data.
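
    A density strip is straightforward to draw with standard tools: grey level is mapped to estimated density, so the strip darkens where values are more plausible. A minimal sketch, assuming NumPy, SciPy and matplotlib (the data and names are illustrative, not from the paper):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# Bootstrap the sampling distribution of a mean, then shade it as a strip.
rng = np.random.default_rng(1)
sample = rng.normal(loc=10.0, scale=2.0, size=50)
boot_means = rng.choice(sample, size=(2000, sample.size)).mean(axis=1)

grid = np.linspace(boot_means.min(), boot_means.max(), 400)
density = gaussian_kde(boot_means)(grid)

fig, ax = plt.subplots(figsize=(6, 1.5))
# One row of pixels; darkness proportional to the estimated density.
ax.imshow(density[None, :], aspect='auto', cmap='Greys',
          extent=[grid[0], grid[-1], 0, 1])
ax.set_yticks([])
ax.set_xlabel('mean')
plt.show()
```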

    Visual Importance-Biased Image Synthesis Animation

    Present ray-tracing algorithms are computationally intensive, requiring hours of computing time for complex scenes. Our previous work has dealt with the development of an overall approach to the application of visual attention to progressive and adaptive ray-tracing techniques. The approach facilitates large computational savings by modulating the supersampling rates in an image by the visual importance of the region being rendered. This paper extends the approach by incorporating temporal changes into the models and techniques developed, as it is expected that further efficiency savings can be reaped for animated scenes. Applications for this approach include entertainment, visualisation and simulation.
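
    The core mechanism, modulating per-pixel supersampling rates by a visual importance map, can be sketched briefly. Everything below is illustrative (the `trace_ray` callback and the sample-count bounds are assumptions); it is not the authors' renderer.

```python
import numpy as np

def samples_per_pixel(importance, min_spp=1, max_spp=16):
    """Allocate more rays to regions the attention model marks important.

    importance: per-pixel saliency map with values in [0, 1].
    """
    spp = min_spp + importance * (max_spp - min_spp)
    return np.rint(spp).astype(int)

def render(width, height, importance, trace_ray):
    """trace_ray(x, y) is assumed to return an RGB triple for one sample."""
    image = np.zeros((height, width, 3))
    spp = samples_per_pixel(importance)
    for y in range(height):
        for x in range(width):
            # Jittered sub-pixel samples, averaged; cheap where importance is low.
            offsets = np.random.rand(spp[y, x], 2)
            rays = [trace_ray(x + dx, y + dy) for dx, dy in offsets]
            image[y, x] = np.mean(rays, axis=0)
    return image
```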

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al, 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) the result gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
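
    The spoke-shift manipulation is a small geometric computation: each rectangle sits on an imaginary spoke from fixation, and its eccentricity is jittered by ±1 degree of visual angle in the second presentation. A hypothetical sketch in NumPy (the layout values are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def spoke_shift(eccentricities_deg, angles_rad, jitter_deg=1.0):
    """Return (x, y) positions in degrees after a +/-1 deg shift along each spoke."""
    shift = rng.choice([-jitter_deg, jitter_deg], size=len(eccentricities_deg))
    r = eccentricities_deg + shift
    return r * np.cos(angles_rad), r * np.sin(angles_rad)

# Eight rectangles evenly spaced around fixation at an assumed 5 deg eccentricity.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
x, y = spoke_shift(np.full(8, 5.0), angles)
```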

    Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars

    Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the best out of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large-scale event-camera dataset (~1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras.
    Comment: 9 pages, 8 figures, 6 tables. Video: https://youtu.be/_r_bsjkJTH
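
    At a high level, such a pipeline converts the asynchronous event stream into frame-like tensors and regresses a steering angle with a CNN. A toy sketch in PyTorch, assuming a two-channel polarity histogram as input; the paper adapts much deeper ResNet-style architectures, and every name below is illustrative.

```python
import numpy as np
import torch
import torch.nn as nn

def events_to_frame(events, height, width):
    """Accumulate events (rows of x, y, polarity) into a two-channel frame."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, p in events:
        frame[0 if p > 0 else 1, int(y), int(x)] += 1.0
    return frame

class SteeringNet(nn.Module):
    """Tiny stand-in for the paper's adapted convolutional architectures."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # regressed steering angle

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

net = SteeringNet()
frame = torch.from_numpy(events_to_frame(np.array([[10, 20, 1]]), 64, 64))
angle = net(frame.unsqueeze(0))        # batch of one time window
```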

    Multiframe Temporal Estimation of Cardiac Nonrigid Motion

    A robust, flexible system for tracking the point-to-point nonrigid motion of the left ventricular (LV) endocardial wall in image sequences has been developed. This system is unique in its ability to model motion trajectories across multiple frames. The foundation of this system is an adaptive transversal filter based on the recursive least-squares algorithm. This filter facilitates the integration of models for periodicity and proximal smoothness as appropriate, using a contour-based description of the object’s boundaries. A set of correspondences between contours and an associated set of correspondence quality measures comprise the input to the system. Frame-to-frame relationships from two different frames of reference are derived and analyzed using synthetic and actual images. Two multiframe temporal models, both based on a sum of sinusoids, are derived. Illustrative examples of the system’s output are presented for quantitative analysis. Validation of the system is performed by comparing computed trajectory estimates with the trajectories of physical markers implanted in the LV wall. Sample case studies of marker trajectory comparisons are presented. Ensemble statistics from comparisons with 15 marker trajectories are acquired and analyzed. A multiframe temporal model without spatial periodicity constraints was determined to provide excellent performance with the least computational cost. A multiframe spatiotemporal model provided the best performance based on statistical standard deviation, although at significant computational expense.
    Supported by the National Heart, Lung, and Blood Institute (R01HL44803); the Air Force Office of Scientific Research (F49620-99-1-0481, F49620-99-1-0067); the National Science Foundation (MIP-9615590); and the Office of Naval Research (N00014-98-1-054).
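
    The heart of the method, a recursive least-squares (RLS) update driving a sum-of-sinusoids trajectory model, can be sketched compactly. This is a generic RLS harmonic fit, not the paper's full spatiotemporal system; the harmonic count and forgetting factor are illustrative choices.

```python
import numpy as np

def harmonics(t, omega, K):
    """Regressor: a constant plus [cos(k w t), sin(k w t)] for k = 1..K."""
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols += [np.cos(k * omega * t), np.sin(k * omega * t)]
    return np.stack(cols, axis=-1)

def rls_fit(times, positions, omega, K=3, lam=0.99):
    """Track sum-of-sinusoids coefficients across frames with forgetting."""
    n = 2 * K + 1
    theta = np.zeros(n)                # model coefficients
    P = np.eye(n) * 1e3                # inverse correlation matrix
    for t, y in zip(times, positions):
        phi = harmonics(np.atleast_1d(t), omega, K)[0]
        k = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + k * (y - phi @ theta)     # innovation update
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

# Example: recover a periodic 1-D trajectory sampled over two cycles.
t = np.linspace(0, 2.0, 60)
y = 1.0 + 0.5 * np.sin(2 * np.pi * t) + 0.1 * np.cos(4 * np.pi * t)
coeffs = rls_fit(t, y, omega=2 * np.pi)
```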