
    Dynamic balance and walking control of biped mechanisms

    The research presented here focuses on the development of feedback control systems for locomotion of two- and three-dimensional, dynamically balanced biped mechanisms. The main areas discussed are: development of equations of motion for multibody systems, balancing control, walking-cycle generation, and interactive computer graphics. Additional topics include optimization, interface devices, manual control methods, and ground contact force generation.

    Planar (2D) and spatial (3D) multibody system models are developed in this thesis to handle all allowable ground support conditions without system reconfiguration. All models consist of lower-body segments only; head and arm segments are not included. Model parameters for segment length, mass, and moments of inertia are adjustable. A ground-contact foot model simulates compression compliance and allows for non-uniform surfaces. In addition to flat surfaces with variable friction coefficients, the systems can adapt to inclines and steps.

    Control techniques are developed that range from manual torque input to automatic control for several types of balancing, walking, and transitioning modes. Balancing-mode control algorithms can deal with several types of initial conditions, including falling and jumping onto various types of surfaces. Walking-control state machines allow selection of steady-state velocity, step size, and/or step frequency.

    The real-time interactive simulation software developed during this project allows the user to operate the biped systems within a 3D virtual environment. In addition to presenting algorithms for interactive biped locomotion control, this work offers insights into the levels of user effort required for tasks involving systems controlled by simultaneous user inputs.

    Position and ground reaction force data obtained from human walking studies are compared to walking data generated by one of the more complex biped models developed for this project.
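    The "compression compliance" foot model described above is commonly realized as a one-sided spring-damper. The sketch below is an illustrative assumption of that pattern, not the thesis's actual implementation; the function name and the stiffness/damping constants are hypothetical.

```python
# Hypothetical sketch of a compliant ground-contact force model
# (one-sided spring-damper). Constants are illustrative, not from the thesis.

def contact_force(penetration, penetration_rate, k=50000.0, c=500.0):
    """Normal ground reaction force at one foot contact point.

    penetration: depth of the foot point below the surface (m); <= 0 means
                 the point is above the ground and no force is generated.
    penetration_rate: rate of change of penetration (m/s).
    k, c: assumed spring stiffness (N/m) and damping coefficient (N*s/m).
    """
    if penetration <= 0.0:
        return 0.0                      # no contact, no force
    f = k * penetration + c * penetration_rate
    return max(f, 0.0)                  # ground can push but never pull
```

    The `max(f, 0.0)` clamp matters during lift-off: a rapidly retracting foot would otherwise receive a non-physical pulling force from the damper term.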

    Hierarchical structure-and-motion recovery from uncalibrated images

    This paper addresses the structure-and-motion problem, which requires recovering camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows images to be processed without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. Comment: Accepted for publication in CVI
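    The complexity advantage of a hierarchical pipeline comes from merging partial reconstructions pairwise up a balanced tree, so each image participates in O(log n) merges rather than being appended to one growing model. This toy sketch (not the Samantha pipeline itself; the function names and the set-union "merge" are placeholders) shows only that tree-reduction pattern.

```python
# Illustrative bottom-up pairwise reduction, as used conceptually by
# hierarchical structure-and-motion. `merge` stands in for the real
# (and much more involved) alignment of two partial reconstructions.

def hierarchical_merge(models, merge):
    """Reduce a list of partial models by repeated pairwise merging."""
    while len(models) > 1:
        next_level = []
        for i in range(0, len(models) - 1, 2):
            next_level.append(merge(models[i], models[i + 1]))
        if len(models) % 2:             # odd leftover is carried up a level
            next_level.append(models[-1])
        models = next_level
    return models[0]

# Toy usage: "models" are just sets of image ids, "merge" is set union.
result = hierarchical_merge([{1}, {2}, {3}, {4}, {5}], lambda a, b: a | b)
```

    With n leaves the tree has depth O(log n), which is where the provably lower complexity claimed in the abstract comes from.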

    BodyNet: Volumetric Inference of 3D Human Body Shapes

    Human shape estimation is an important task for video editing, animation, and the fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing, and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of these yields a performance improvement, as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric body-part segmentation. Comment: Appears in: European Conference on Computer Vision 2018 (ECCV 2018). 27 pages
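    The three loss terms (i)-(iii) are combined into one training objective. The following is a hedged sketch of that kind of multi-term loss; the binary-cross-entropy form, the weights, and all names are illustrative assumptions, not BodyNet's exact formulation.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy over voxels/pixels (pred in [0, 1])."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred)
                          + (1 - target) * np.log(1 - pred)))

def multi_term_loss(voxels_pred, voxels_gt,
                    reproj_pred, reproj_gt,
                    aux_losses, w_vol=1.0, w_reproj=0.1, w_aux=0.1):
    """Combine (i) a volumetric 3D loss, (ii) a multi-view re-projection
    loss on predicted silhouettes, and (iii) intermediate-supervision
    losses (2D pose, part segmentation, 3D pose). Weights are assumed."""
    l_vol = bce(voxels_pred, voxels_gt)      # (i) volumetric occupancy
    l_reproj = bce(reproj_pred, reproj_gt)   # (ii) re-projected silhouettes
    l_aux = sum(aux_losses)                  # (iii) intermediate tasks
    return w_vol * l_vol + w_reproj * l_reproj + w_aux * l_aux
```

    A perfect prediction drives every term toward zero; in practice the weights are tuned so no single term dominates the gradient.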

    Learning to Predict Image-based Rendering Artifacts with Respect to a Hidden Reference Image

    Image metrics predict the perceived per-pixel difference between a reference image and its degraded (e.g., re-rendered) version. In several important applications, the reference image is not available and image metrics cannot be applied. We devise a neural network architecture and training procedure that allows predicting the MSE, SSIM, or VGG16 image difference from the distorted image alone, without observing the reference. This is enabled by two insights: the first is to inject sufficiently many undistorted natural image patches, which can be found in arbitrary amounts and are known to have no perceivable difference to themselves. This avoids false positives. The second is to balance the learning, carefully ensuring that all image errors are equally likely, avoiding false negatives. Surprisingly, we observe that the resulting no-reference metric can, subjectively, even perform better than the reference-based one, as it had to become robust against misalignments. We evaluate the effectiveness of our approach in an image-based rendering context, both quantitatively and qualitatively. Finally, we demonstrate two applications which reduce light field capture time and provide guidance for interactive depth adjustment. Comment: 13 pages, 11 figures
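    The two data-side tricks above can be sketched as one dataset-construction step. This is an assumed illustration of the idea only: the bin edges, sampling scheme, and all names are hypothetical, not the paper's procedure.

```python
import random

def build_balanced_set(distorted, clean, n_bins=4, per_bin=100, seed=0):
    """Sketch of the two insights: (1) inject undistorted patches labelled
    with zero error, so the metric learns that clean inputs score zero
    (avoids false positives); (2) resample so every error level is equally
    frequent in training (avoids false negatives).

    distorted: list of (patch, error) pairs with error in [0, 1].
    clean: list of patches whose error is 0.0 by definition.
    """
    rng = random.Random(seed)
    bins = [[] for _ in range(n_bins)]
    for patch, err in distorted:
        bins[min(int(err * n_bins), n_bins - 1)].append((patch, err))
    bins[0].extend((p, 0.0) for p in clean)     # (1) inject zero-error data
    balanced = []
    for b in bins:
        balanced.extend(rng.choices(b, k=per_bin))  # (2) equalise bins
    return balanced
```

    Sampling with replacement per bin is one simple way to make "all image errors equally likely"; any reweighting scheme with the same effect would serve.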