5 research outputs found

    Three-Dimensional Motion Estimation of Objects for Video Coding

    Three-dimensional (3-D) motion estimation is applied to the problem of motion compensation for video coding. We suppose that the video sequence consists of the perspective projections of a collection of rigid bodies which undergo rototranslational motion. Motion compensation can be performed on the sequence once the shape of the objects and the motion parameters are determined. We show that the motion equations of a rigid body can be formulated as a nonlinear dynamic system whose state is represented by the motion parameters and by the scaled depths of the object feature points. An extended Kalman filter is used to estimate both the motion and the object shape parameters simultaneously. The inclusion of the shape parameters in the estimation procedure adds a set of constraints to the filter equations that appear to be essential for reliable motion estimation. Our experiments show that the proposed approach offers two advantages. First, the filter can give more reliable estimates in the presence of measurement noise than motion estimators that compute motion and structure separately. Second, the filter can efficiently track abrupt motion changes. Moreover, the structure imposed by the model means that the reconstructed motion is very natural, as opposed to that produced by more common block-based schemes. Also, the parameterization of the model allows for very efficient coding of the motion information.
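    The joint motion-and-structure estimation described above rests on the standard extended Kalman filter predict/update cycle. The following is a minimal, generic sketch of that cycle with numerically approximated Jacobians, not the paper's actual filter; the state, transition function `f`, and measurement function `h` are placeholders for the motion parameters, scaled feature depths, and perspective projection the abstract describes.

    ```python
    import numpy as np

    def ekf_step(x, P, f, h, z, Q, R, eps=1e-6):
        """One predict/update cycle of an extended Kalman filter.

        x : state estimate (e.g. motion parameters plus scaled feature depths)
        P : state covariance
        f : state transition function, h : measurement function
        z : observed measurement (e.g. projected feature coordinates)
        Q, R : process and measurement noise covariances
        """
        n = x.size

        def jac(fn, v):
            # Numerical Jacobian of fn at v (forward differences).
            y0 = fn(v)
            J = np.zeros((y0.size, v.size))
            for i in range(v.size):
                dv = v.copy()
                dv[i] += eps
                J[:, i] = (fn(dv) - y0) / eps
            return J

        # Predict: propagate state and covariance through the dynamics.
        F = jac(f, x)
        x_pred = f(x)
        P_pred = F @ P @ F.T + Q

        # Update: correct the prediction with the measurement residual.
        H = jac(h, x_pred)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
        x_new = x_pred + K @ (z - h(x_pred))
        P_new = (np.eye(n) - K @ H) @ P_pred
        return x_new, P_new
    ```

    In the paper's setting, `f` would encode the rigid-body rototranslation of the state and `h` the perspective projection of the feature points; here they are left abstract so the filter skeleton stands on its own.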

    Models for Motion Perception

    As observers move through the environment or shift their direction of gaze, the world moves past them. In addition, there may be objects that are moving differently from the static background, either rigid-body motions or nonrigid (e.g., turbulent) ones. This dissertation discusses several models for motion perception. The models rely on first measuring motion energy, a multi-resolution representation of motion information extracted from image sequences. The image flow model combines the outputs of a set of spatiotemporal motion-energy filters to estimate image velocity, consonant with current views regarding the neurophysiology and psychophysics of motion perception. A parallel implementation computes a distributed representation of image velocity that encodes both a velocity estimate and the uncertainty in that estimate. In addition, a numerical measure of image-flow uncertainty is derived. The egomotion model poses the detection of moving objects and the recovery of depth from motion as sensor fusion problems that necessitate combining information from different sensors in the presence of noise and uncertainty. Image sequences are segmented by finding image regions corresponding to entire objects that are moving differently from the stationary background. The turbulent flow model utilizes a fractal-based model of turbulence, and estimates the fractal scaling parameter of fractal image sequences from the outputs of motion-energy filters. Some preliminary results demonstrate the model's potential for discriminating image regions based on fractal scaling.
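    The motion-energy measurement at the core of these models can be illustrated with a quadrature pair of spatiotemporal Gabor filters, in the spirit of energy models of motion: the squared responses of an even and an odd filter are summed, giving a phase-independent measure that is large for stimuli drifting at the filter's preferred velocity. This is a toy sketch, not the dissertation's implementation; the filter parameters and the 1-D space plus time stimulus below are illustrative choices.

    ```python
    import numpy as np

    def gabor_st(xs, ts, fx, ft, sigma_x, sigma_t, phase):
        """Spatiotemporal Gabor filter over one spatial dimension and time.

        The carrier cos(2*pi*(fx*x + ft*t) + phase) is oriented in (x, t),
        so the filter prefers stimuli drifting at speed -ft/fx.
        """
        X, T = np.meshgrid(xs, ts, indexing="ij")
        env = np.exp(-(X**2) / (2 * sigma_x**2) - (T**2) / (2 * sigma_t**2))
        return env * np.cos(2 * np.pi * (fx * X + ft * T) + phase)

    def motion_energy(stim, fx, ft, xs, ts, sigma_x=3.0, sigma_t=3.0):
        """Motion energy: summed squared responses of a quadrature pair."""
        even = gabor_st(xs, ts, fx, ft, sigma_x, sigma_t, 0.0)
        odd = gabor_st(xs, ts, fx, ft, sigma_x, sigma_t, np.pi / 2)
        r_even = np.sum(stim * even)
        r_odd = np.sum(stim * odd)
        return r_even**2 + r_odd**2
    ```

    Applied to a rightward-drifting grating, a filter tuned to rightward motion returns far more energy than its leftward-tuned mirror image, which is the directional signal the image flow model pools across a filter bank.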

    Motion Analysis I: Basic Theorems, Constraints, Equations, Principles and Algorithms

    Coordinated Science Laboratory was formerly known as the Control Systems Laboratory.

    Proceedings of the Augmented VIsual Display (AVID) Research Workshop

    The papers, abstracts, and presentations were delivered at a three-day workshop focused on sensor modeling and simulation, and on image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.

    An examination of the causes of heading bias in computer simulated self-motion

    A series of experiments was devised to examine aspects of human visual performance during simulated self-motion. The experimental stimuli were computer simulations of observer translational motion through a 3-D random dot cloud. Experiments were specifically designed to obtain data regarding the problem of bias in judgments of heading, and to determine the influence of various experimental factors upon that bias. A secondary aim was to use these results to develop a workable computer model to predict such bias in heading estimation. Heading bias has been known for many years, but it is generally assumed only to be a problem for complex observer motion. However, the current work involved simple observer translation, and still found a significant amount of heading bias. A wide variety of experimental factors was examined, and it was found that scene depth and speed had the greatest effect upon the accuracy of heading estimates, with a faster speed or smaller depth reducing bias. It was proposed that yaw eye movements, driven by the rotational component of radial flow, were responsible for the bias. An adaptation of the Perrone (1992) model of heading was used to model this, and a highly significant correlation was obtained between the experimental data and the predictions of the model.
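    For pure observer translation, the heading direction corresponds to the focus of expansion (FOE) of the radial flow field, and in the ideal, noise-free case it can be recovered by least squares: each flow vector points radially away from the FOE, so the FOE lies on the line through each image point along its flow vector. The sketch below is an illustrative baseline estimator under those assumptions, not the Perrone (1992) model used in the dissertation, and it does not reproduce the eye-movement-driven bias the work investigates.

    ```python
    import numpy as np

    def estimate_foe(points, flows):
        """Least-squares focus of expansion from a radial flow field.

        points : (N, 2) image coordinates of tracked dots
        flows  : (N, 2) flow vectors at those points

        Each flow vector constrains the FOE to the line through its point
        along the flow direction; stacking the perpendicular constraints
        n_i . (foe - p_i) = 0 gives an overdetermined linear system.
        """
        # Unit normals perpendicular to each flow vector.
        n = np.stack([-flows[:, 1], flows[:, 0]], axis=1)
        n /= np.linalg.norm(n, axis=1, keepdims=True)
        A = n
        b = np.sum(n * points, axis=1)
        foe, *_ = np.linalg.lstsq(A, b, rcond=None)
        return foe
    ```

    With simulated translation through a random dot cloud, where flow magnitude falls off with dot depth, this recovers the true heading exactly; the experimental finding above is precisely that human observers deviate from this ideal, with the deviation shrinking at faster speeds or smaller scene depths.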