
    Radon-based Structure from Motion Without Correspondences

    We present a novel approach for the estimation of 3D motion directly from two images using the Radon transform. We assume a similarity function defined on the cross product of the two images which assigns a weight to every feature pair. This similarity function is integrated over all feature pairs that satisfy the epipolar constraint, which is equivalent to filtering the similarity function with a Dirac function embedding the epipolar constraint. The result of this convolution is a function of the five unknown motion parameters, with maxima at the positions of compatible rigid motions. The key realization is that the Radon transform is a filtering operator: if we assume that the images are defined on spheres and the epipolar constraint is a group action of two rotations on two spheres, then the Radon transform becomes a convolution/correlation integral. We propose a new algorithm to compute this integral from the spherical harmonics of the similarity and Dirac functions. The resulting resolution in the motion space depends on the bandwidth retained from the spherical transform. The strength of the algorithm lies in avoiding any commitment to correspondences, making it robust to erroneous feature detection, outliers, and multiple motions. The algorithm has been tested on sequences of real omnidirectional images, where it outperforms correspondence-based structure from motion.
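
    The spherical-harmonics algorithm itself is not reproduced here, but the scoring integral it accelerates can be illustrated directly. The sketch below is a brute-force, discretized stand-in: it integrates the pairwise similarity over feature pairs that approximately satisfy the epipolar constraint, replacing the Dirac filter with a narrow Gaussian, and picks the candidate motion with the highest score. All names, the candidate-set search, and the Gaussian relaxation are illustrative assumptions, not the paper's method.

```python
import numpy as np

def epipolar_residual(p, q, R, t):
    """Residual of the epipolar constraint q^T ([t]_x R) p = 0 for unit bearing vectors."""
    E = np.cross(np.eye(3), t) @ R            # essential matrix [t]_x R
    return q @ E @ p

def motion_score(P, Q, S, R, t, sigma=0.02):
    """Integrate similarity S[i, j] over all feature pairs compatible with motion (R, t).

    P, Q  : (N, 3) and (M, 3) unit bearing vectors from the two (spherical) images.
    S     : (N, M) similarity weights on the cross product of features.
    sigma : width of the Gaussian standing in for the Dirac epipolar filter.
    """
    score = 0.0
    for i, p in enumerate(P):
        for j, q in enumerate(Q):
            r = epipolar_residual(p, q, R, t)
            score += S[i, j] * np.exp(-0.5 * (r / sigma) ** 2)
    return score

def best_motion(P, Q, S, candidates):
    """Pick the candidate (R, t) with the maximal integrated similarity (Radon-style maximum)."""
    return max(candidates, key=lambda Rt: motion_score(P, Q, S, *Rt))
```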

    Structure from Articulated Motion: Accurate and Stable Monocular 3D Reconstruction without Training Data

    Recovery of articulated 3D structure from 2D observations is a challenging computer vision problem with many applications. Current learning-based approaches achieve state-of-the-art accuracy on public benchmarks but are restricted to the specific types of objects and motions covered by the training datasets. Model-based approaches do not rely on training data but show lower accuracy on these datasets. In this paper, we introduce a model-based method called Structure from Articulated Motion (SfAM), which can recover multiple object and motion types without training on extensive data collections. At the same time, it performs on par with state-of-the-art learning-based approaches on public benchmarks and outperforms previous non-rigid structure from motion (NRSfM) methods. SfAM is built upon a general-purpose NRSfM technique and integrates a soft spatio-temporal constraint on the bone lengths. We use an alternating optimization strategy to recover the optimal geometry (i.e., bone proportions) together with the 3D joint positions by enforcing bone-length consistency over a series of frames. SfAM is highly robust to noisy 2D annotations, generalizes to arbitrary objects, and does not rely on training data, as shown in extensive experiments on public benchmarks and real video sequences. We believe that it brings a new perspective to the domain of monocular 3D recovery of articulated structures, including human motion capture. Comment: 21 pages, 8 figures, 2 tables.
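
    To make the soft bone-length idea concrete, here is a minimal sketch, not the authors' SfAM implementation: the penalty below keeps every bone's per-frame length close to a shared proportion, the proportions have a closed-form update (the mean length over frames), and the two steps alternate with whatever NRSfM structure-refinement step is plugged in. The function names, weights, and the callback interface are assumptions made for illustration.

```python
import numpy as np

def bone_length_energy(X, bones, lengths):
    """Soft spatio-temporal bone-length term.

    X       : (F, J, 3) 3D joint positions over F frames.
    bones   : list of (parent, child) joint-index pairs.
    lengths : (B,) current estimate of the shared bone lengths (proportions).
    """
    e = 0.0
    for b, (i, j) in enumerate(bones):
        d = np.linalg.norm(X[:, i] - X[:, j], axis=1)   # per-frame length of bone b
        e += np.sum((d - lengths[b]) ** 2)
    return e

def update_lengths(X, bones):
    """Best shared length of each bone in the least-squares sense: its mean length over frames."""
    return np.array([np.linalg.norm(X[:, i] - X[:, j], axis=1).mean() for i, j in bones])

def alternate(X0, bones, refine_structure, n_iter=10, w_bone=1.0):
    """Alternating optimization: structure refinement step, then bone-proportion step.

    refine_structure(X, lengths, w_bone) is assumed to return joint positions that trade off
    2D reprojection error against the bone-length penalty above (the plugged-in NRSfM step).
    """
    X = X0.copy()
    lengths = update_lengths(X0, bones)
    for _ in range(n_iter):
        X = refine_structure(X, lengths, w_bone)
        lengths = update_lengths(X, bones)
    return X, lengths
```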

    Controlling interactions in motion control systems

    Design of motion control systems should take into account (a) unconstrained motion, performed without interaction with the environment or other systems, and (b) constrained motion, performed with some functional interaction with the environment or another system. Control in both cases can be formulated in terms of maintaining a desired system configuration, which leads to essentially the same structure for common tasks: trajectory tracking, interaction force control, compliance control, etc. It will be shown that the same design approach can be used for systems that maintain certain functional relations, such as parallel robots.
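
    As a rough illustration of why these tasks can share one structure, the sketch below expresses each task as a configuration (task) function that should be driven to zero and reuses a single PD-type law on that function. The specific task functions, gains, and interfaces are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def tracking_error(q, q_des):
    """Trajectory tracking: the configuration function is q - q_des."""
    return q - q_des

def force_error(f, f_des):
    """Interaction force control: the configuration function is f - f_des."""
    return f - f_des

def compliance_error(q, q_des, f, stiffness):
    """Compliance control: the position deviation should match the measured force over stiffness."""
    return (q - q_des) - f / stiffness

def config_controller(phi, dphi, kp=100.0, kd=20.0):
    """Common controller structure: a PD-type law on whichever configuration function is chosen."""
    return -kp * phi - kd * dphi
```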

    Function based control for bilateral systems in tele-micromanipulation

    Design of a motion control system should take into account (a) unconstrained motion, performed without interaction with the environment or any other system, and (b) constrained motion, with the system in contact with the environment or other systems. Control in both cases can be formulated in terms of maintaining a desired system configuration, which leads to essentially the same structure for common tasks: trajectory tracking, interaction force control, compliance control, etc. The same design approach can be used to formulate control in bilateral systems that aim to maintain desired functional relations between the human and the environment through the master and slave motion systems. The methodology is currently being implemented on a custom-built tele-micromanipulation setup, and preliminary results on force/position tracking and transparency between master and slave are demonstrated.
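
    The functional relations such a bilateral controller maintains can be written down compactly. The sketch below uses one common convention for scaled tele-micromanipulation: scaled position agreement between master and slave, and scaled force balance for transparency. The scaling factors and names are assumptions for illustration, not parameters of the actual setup.

```python
import numpy as np

def bilateral_errors(x_m, x_s, f_m, f_s, alpha=10.0, beta=1000.0):
    """Errors of the functional relations for scaled bilateral control.

    x_m, x_s : master and slave positions
    f_m, f_s : operator force on the master and reaction force at the slave/environment
    alpha    : motion scaling (slave motion is 1/alpha of the master motion)
    beta     : force scaling (environment forces are reflected beta times larger)
    """
    e_pos = x_m - alpha * x_s     # position relation: x_m - alpha * x_s -> 0
    e_force = f_m - beta * f_s    # force relation (transparency): f_m - beta * f_s -> 0
    return e_pos, e_force
```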

    SMC-based bilateral control

    Design of a motion control system should take into account (a) unconstrained motion, performed without interaction with the environment or other systems, and (b) constrained motion, in which the system is in contact with the environment or another system, or has some functional interaction with another system. Control in both cases can be formulated in terms of maintaining a desired system configuration, which leads to essentially the same structure for common tasks: trajectory tracking, interaction force control, compliance control, etc. It will be shown that the same design approach can be used for systems that maintain some functional relation, such as bilateral or multilateral systems, relations among mobile robots, or the control of haptic systems.
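
    In the spirit of the title, a sliding-mode flavored way to enforce such a functional relation is sketched below: define a sliding surface on the relation error (for example x_m - x_s in a bilateral pair) and drive it to zero with a smoothed reaching law. The gains and the boundary-layer saturation are illustrative assumptions rather than the paper's design.

```python
import numpy as np

def sliding_surface(e, de, c=20.0):
    """sigma = de + c * e, where e is the error of the functional relation being enforced."""
    return de + c * e

def smc_control(e, de, k=50.0, eps=0.01):
    """Reaching law u = -k * sat(sigma / eps); the saturation band limits chattering."""
    sigma = sliding_surface(e, de)
    return -k * np.clip(sigma / eps, -1.0, 1.0)
```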

    A decomposition method for non-rigid structure from motion with orthographic cameras

    In this paper, we propose a new approach to non-rigid structure from motion based on the trajectory basis method, decomposing the problem into two sub-problems. The existing trajectory basis method requires the number of trajectory basis vectors to be specified beforehand, and then the camera motion and the non-rigid structure are recovered simultaneously. However, we observe that the camera motion can be derived from a mean shape without recovering the non-rigid structure. Hence, the camera motion can be recovered as a sub-problem by optimizing an error indicator, without a full recovery of the non-rigid structure or the need to pre-define the number of basis vectors required to describe it. With the camera motion recovered, the non-rigid structure can then be solved in a second sub-problem, together with the determination of the number of basis vectors, by minimizing another error indicator. The solutions to these two sub-problems combine to solve the non-rigid structure from motion problem automatically, without any need to pre-define the number of basis vectors. Experiments show that the proposed method improves the reconstruction quality of both the non-rigid structure and the camera motion.
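
    The two-sub-problem decomposition can be illustrated with a simple stand-in pipeline: first estimate camera motion from the mean shape alone (here via a plain rank-3 factorization of the centered measurement matrix), then, with the cameras fixed, fit a DCT trajectory basis of increasing size and keep the size whose reprojection-error indicator is smallest. This is an assumption-laden sketch, not the paper's exact algorithm; in particular the factorization omits the metric upgrade and the error indicators are simplified.

```python
import numpy as np

def dct_basis(F, K):
    """First K vectors of a DCT-II trajectory basis over F frames (columns, unit norm)."""
    f = np.arange(F)
    B = np.stack([np.cos(np.pi * (2 * f + 1) * k / (2 * F)) for k in range(K)], axis=1)
    return B / np.linalg.norm(B, axis=0)

def cameras_from_mean_shape(W):
    """W: (2F, P) centered orthographic measurements. Rank-3 factor ~ stacked 2x3 camera rows."""
    U, s, _ = np.linalg.svd(W, full_matrices=False)
    return U[:, :3] * np.sqrt(s[:3])

def fit_structure(W, M, K):
    """With cameras M fixed, least-squares fit of per-point trajectories in a K-vector basis."""
    F = W.shape[0] // 2
    B = dct_basis(F, K)
    A = np.zeros((2 * F, 3 * K))
    for f in range(F):
        A[2 * f:2 * f + 2, :] = np.kron(B[f], M[2 * f:2 * f + 2, :])
    coeffs, *_ = np.linalg.lstsq(A, W, rcond=None)
    return A @ coeffs                     # reprojected 2D points, shape (2F, P)

def choose_basis_size(W, M, sizes=range(2, 12)):
    """Pick the basis size minimizing a simple reprojection-error indicator."""
    errors = {K: np.linalg.norm(W - fit_structure(W, M, K)) for K in sizes}
    return min(errors, key=errors.get)
```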

    Stabilized Structure from Motion without Disparity Induces Disparity Adaptation

    3D structures can be perceived based on the patterns of 2D motion signals [1, 2]. With orthographic projection of a 3D stimulus onto a 2D plane, the kinetic information can give a vivid impression of depth, but the depth order is intrinsically ambiguous, resulting in bistable or even multistable interpretations [3]. For example, an orthographic projection of dots on the surface of a rotating cylinder is perceived as a rotating cylinder with an ambiguous direction of rotation [4]. We show that the bistable rotation can be stabilized by adding information, not to the dots themselves, but to their spatial context. More interestingly, the stabilized bistable motion can generate consistent rotation aftereffects. The rotation aftereffect can only be observed when the adapting and test stimuli are presented at the same stereo depth and the same retinal location, and it is not due to attentional tracking. The observed rotation aftereffect is likely due to direction-contingent disparity adaptation, implying that stimuli with kinetic depth may activate neurons tuned to different disparities even though the stimuli have zero relative disparity. Stereo depth and kinetic depth may therefore be supported by a common neural mechanism at an early stage in the visual system.
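
    The ambiguous stimulus described above is easy to reproduce: dots placed on a transparent rotating cylinder and projected orthographically yield only horizontal sinusoidal dot motion, so the rotation direction is not specified by the image. The sketch below generates such 2D dot positions; the radius, rotation speed, and dot count are illustrative assumptions.

```python
import numpy as np

def cylinder_dots(n_dots=200, radius=1.0, height=2.0, omega=np.pi / 2, seed=0):
    """Return a function t -> (x, y): orthographic projection of dots on a rotating cylinder."""
    rng = np.random.default_rng(seed)
    theta0 = rng.uniform(0.0, 2.0 * np.pi, n_dots)     # dot angles on the cylinder surface
    y = rng.uniform(-height / 2, height / 2, n_dots)   # dot heights, unchanged by rotation

    def project(t):
        # Orthographic projection drops depth: only x = r * sin(theta) varies over time.
        x = radius * np.sin(theta0 + omega * t)
        return x, y

    return project

frame = cylinder_dots()
x, y = frame(0.5)   # 2D dot positions half a second into the rotation
```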