14 research outputs found

    Motion from "X" by Compensating "Y"

    This paper analyzes the geometry of the visual motion estimation problem in relation to transformations of the input (images) that stabilize particular output functions, such as the motion of a point, a line, or a plane in the image. By casting the problem within the popular "epipolar geometry", we provide a common framework for including constraints such as point, line, or plane fixation by simply considering "slices" of the parameter manifold. The models we provide can be used for estimating motion from a batch using the preferred optimization techniques, or for defining dynamic filters that estimate motion from a causal sequence. We discuss methods for performing the necessary compensation either by controlling the support of the camera or by pre-processing the images. The compensation algorithms may also be used for recursively fitting a plane in 3-D, either from point features or directly from brightness. Conversely, they may be used for estimating motion relative to the plane independently of its parameters.
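
    As a concrete illustration (not the paper's algorithm), the epipolar constraint this family of models is built on can be checked numerically. All quantities below are made up for the sketch: a rotation R and translation t relating two calibrated views, and one 3-D point seen in both.

```python
import numpy as np

def skew(t):
    # Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v)
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical rigid motion between the two views
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, 0.0])
E = skew(t) @ R  # essential matrix

# One 3-D point, projected into both views (calibrated coordinates)
X1 = np.array([0.3, -0.1, 4.0])
x1 = X1 / X1[2]
X2 = R @ X1 + t
x2 = X2 / X2[2]

# Epipolar constraint: x2^T E x1 = 0 for corresponding points
residual = x2 @ E @ x1
```

    Since X2 = R X1 + t, the triple product (R X1 + t) · (t × R X1) vanishes identically, so the residual is zero up to floating-point error regardless of the point's depth.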

    Reducing "Structure From Motion": a General Framework for Dynamic Vision - Part 1: Modeling

    The literature on recursive estimation of structure and motion from monocular image sequences comprises a large number of different models and estimation techniques. We propose a framework that allows us to derive and compare all models by following the idea of dynamical system reduction. The "natural" dynamic model, derived from the rigidity constraint and the perspective projection, is first reduced by explicitly decoupling structure (depth) from motion. Then implicit decoupling techniques are explored, which consist of imposing that some function of the unknown parameters is held constant. By appropriately choosing such a function, not only can we account for all models seen so far in the literature, but we can also derive novel ones.

    Reducing “Structure from Motion”: a general framework for dynamic vision. 1. Modeling

    The literature on recursive estimation of structure and motion from monocular image sequences comprises a large number of apparently unrelated models and estimation techniques. We propose a framework that allows us to derive and compare all models by following the idea of dynamical system reduction. The “natural” dynamic model, derived from the rigidity constraint and the projection model, is first reduced by explicitly decoupling structure (depth) from motion. Then, implicit decoupling techniques are explored, which consist of imposing that some function of the unknown parameters is held constant. By appropriately choosing such a function, not only can we account for all models seen so far in the literature, but we can also derive novel ones.
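
    A minimal sketch of the kind of "natural" model the reduction starts from (notation and numbers are illustrative, not taken from the paper): 3-D points evolve under rigid motion, and only their perspective projections are observed — the quantities a recursive filter would compare against image measurements.

```python
import numpy as np

def rigid_step(X, omega, v, dt=1.0):
    # First-order update X <- X + (omega x X + v) * dt for points on a
    # rigid scene moving with angular velocity omega and velocity v
    return X + (np.cross(omega, X) + v) * dt

def project(X):
    # Perspective projection (unit focal length): (x, y) = (X/Z, Y/Z)
    return X[:, :2] / X[:, 2:3]

# Hypothetical scene: two points in front of the camera, constant motion
X = np.array([[0.1, 0.2, 3.0],
              [-0.3, 0.1, 5.0]])
omega = np.array([0.0, 0.01, 0.0])
v = np.array([0.02, 0.0, 0.1])

X = rigid_step(X, omega, v)   # state propagation (structure + motion)
y = project(X)                # measurement equation (image coordinates)
```

    Explicit decoupling then amounts to re-parameterizing each point by its measured image coordinates times an unknown depth, leaving depth and motion as the only states.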

    Reducing “Structure from Motion”: a general framework for dynamic vision. 2. Implementation and experimental assessment

    For Part 1 see ibid., pp. 933-942 (1998). A number of methods have been proposed in the literature for estimating scene structure and ego-motion from a sequence of images using dynamical models. Despite the fact that all methods may be derived from a “natural” dynamical model within a unified framework, from an engineering perspective there are a number of trade-offs that lead to different strategies depending upon the applications and the goals one is targeting. We want to characterize and compare the properties of each model so that the engineer may choose the one best suited to the specific application. We analyze the properties of filters derived from each dynamical model under a variety of experimental conditions, assessing the accuracy of the estimates, their robustness to measurement noise, their sensitivity to initial conditions and visual angle, the effects of the bas-relief ambiguity and occlusions, and the dependence upon the number of image measurements and their sampling rate.

    Reducing "Structure From Motion": a General Framework for Dynamic Vision - Part 2: Experimental Evaluation

    A number of methods have been proposed in the literature for estimating scene structure and ego-motion from a sequence of images using dynamical models. Although all methods may be derived from a "natural" dynamical model within a unified framework, from an engineering perspective there are a number of trade-offs that lead to different strategies depending upon the specific applications and the goals one is targeting. Which one is the winning strategy? In this paper we analyze the properties of the dynamical models that originate from each strategy under a variety of experimental conditions. For each model we assess the accuracy of the estimates, their robustness to measurement noise, their sensitivity to initial conditions and visual angle, the effects of the bas-relief ambiguity and occlusions, and the dependence upon the number of image measurements and their sampling rate.
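
    One of the effects assessed here, the bas-relief ambiguity, can be illustrated with the standard first-order image-motion model (the flow equations and numbers below are a textbook approximation, not taken from the paper): over a narrow visual angle, a small rotation about an axis in the image plane produces almost the same image motion as a lateral translation at a matched depth.

```python
import numpy as np

# Horizontal flow at y = 0 for a calibrated camera (focal length 1):
#   rotation omega_y about the y-axis:  u = -omega_y * (1 + x**2)
#   lateral translation v_x at depth Z: u = -v_x / Z

x = np.linspace(-0.05, 0.05, 11)   # narrow field of view (~±3 degrees)
Z = 10.0
omega_y = 0.01
v_x = omega_y * Z                  # translation matched to the rotation

u_rot = -omega_y * (1.0 + x**2)
u_trans = -v_x / Z * np.ones_like(x)

# The two fields differ only by omega_y * x**2, tiny over a narrow field
max_gap = np.max(np.abs(u_rot - u_trans))
```

    Here the gap is at most 2.5e-5 against flow magnitudes of about 0.01, so with realistic measurement noise the two motions are nearly indistinguishable — exactly the regime probed by the visual-angle experiments.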

    A variational technique for three-dimensional reconstruction of local structure

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 1999. Includes bibliographical references (leaves 66-70). By Eric Raphaël Amram.

    Humanistic Computing: WearComp as a New Framework and Application for Intelligent Signal Processing

    Humanistic computing is proposed as a new signal processing framework in which the processing apparatus is inextricably intertwined with the natural capabilities of our human body and mind. Rather than trying to emulate human intelligence, humanistic computing recognizes that the human brain is perhaps the best neural network of its kind, and that there are many new signal processing applications (within the domain of personal technologies) that can make use of this excellent but often overlooked processor. The emphasis of this paper is on personal imaging applications of humanistic computing, to take a first step toward an intelligent wearable camera system that can allow us to effortlessly capture our day-to-day experiences, help us remember and see better, provide us with personal safety through crime reduction, and facilitate new forms of communication through collective connected humanistic computing. The author’s wearable signal processing hardware, which began as a cumbersome backpack-based photographic apparatus in the 1970s and evolved into a clothing-based apparatus in the early 1980s, currently provides the computational power of a UNIX workstation concealed within ordinary-looking eyeglasses and clothing. Thus it may be worn continuously during all facets of ordinary day-to-day living, so that, through long-term adaptation, it begins to function as a true extension of the mind and body.

    Creating virtual environment by 3D computer vision techniques.

    Lao Tze Kin Jackie. Thesis (M.Phil.), Chinese University of Hong Kong, 2000. Includes bibliographical references (leaves 83-87). Abstracts in English and Chinese.
    Contents:
    Chapter 1 Introduction
      1.1 3D Modeling using Active Contour
      1.2 Rectangular Virtual Environment Construction
      1.3 Thesis Contribution
      1.4 Thesis Outline
    Chapter 2 Background
      2.1 Panoramic Representation
        2.1.1 Static Mosaic
        2.1.2 Advanced Mosaic Representation
        2.1.3 Panoramic Walkthrough
      2.2 Active Contour Model
        2.2.1 Parametric Active Contour Model
      2.3 3D Shape Estimation
        2.3.1 Model Formation with both Intrinsic and Extrinsic Parameters
        2.3.2 Model Formation with only Intrinsic Parameters and Epipolar Geometry
    Chapter 3 3D Object Modeling using Active Contour
      3.1 Point Acquisition Through Active Contour
      3.2 Object Segmentation and Panorama Generation
        3.2.1 Object Segmentation
        3.2.2 Panorama Construction
      3.3 3D Modeling and Texture Mapping
        3.3.1 Texture Mapping From Parameterization
      3.4 Experimental Results
        3.4.1 Experimental Error
        3.4.2 Comparison between Virtual 3D Model and Actual Model
        3.4.3 Comparison with Existing Techniques
      3.5 Discussion
    Chapter 4 Rectangular Virtual Environment Construction
      4.1 Rectangular Environment Construction using Traditional (Horizontal) Panoramic Scenes
        4.1.1 Image Manipulation
        4.1.2 Panoramic Mosaic Creation
        4.1.3 Measurement of Panning Angles
        4.1.4 Estimate Side Ratio
        4.1.5 Wireframe Modeling and Cylindrical Projection
        4.1.6 Experimental Results
      4.2 Rectangular Environment Construction using Vertical Panoramic Scenes
      4.3 Building Virtual Environments for Complex Scenes
      4.4 Comparison with Existing Techniques
      4.5 Discussion and Future Directions
    Chapter 5 System Integration
    Chapter 6 Conclusion
    Bibliography
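
    The cylindrical projection used for panoramic mosaics and the wireframe model (Chapter 4.1.5) follows the standard warp. This sketch assumes a pinhole camera with focal length f in pixels and coordinates measured from the principal point, none of which are specified in the thesis record:

```python
import numpy as np

def to_cylinder(x, y, f):
    # Standard cylindrical warp: an image point (x, y) maps to an angle
    # theta around a unit-radius cylinder and a height h on its surface.
    theta = np.arctan2(x, f)
    h = y / np.sqrt(x**2 + f**2)
    return theta, h

# Hypothetical points: the principal point itself, and one 100 px to
# the right and 50 px up, with an assumed focal length of 500 px
x = np.array([0.0, 100.0])
y = np.array([0.0, 50.0])
theta, h = to_cylinder(x, y, f=500.0)
```

    Warping every source image this way before stitching makes pure camera panning a horizontal shift on the cylinder, which is what allows the panning angles between frames (Chapter 4.1.3) to be measured from mosaic offsets.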
