3,553 research outputs found

    Cage-based Motion Recovery using Manifold Learning

    We present a flexible model-based approach for the recovery of parameterized motion from a sequence of 3D meshes without temporal coherence. Unlike previous model-based approaches using skeletons, we embed the deformation of a reference mesh template within a low-polygon representation of the mesh, namely the cage, using Green Coordinates. The advantage is a less constrained model that adapts more robustly to noisy observations while still providing structured motion information, as required by several applications. The cage is parameterized with a set of 3D features dedicated to the description of human morphology. This allows us to formalize a novel representation of 3D meshed, articulated characters, the Oriented Quads Rigging (OQR). To regularize the tracking, the OQR space is subsequently constrained to plausible poses using manifold learning. Results are shown for sequences of meshes, with and without temporal coherence, obtained from multi-view videos preprocessed by visual hull reconstruction. Motion recovery applications are illustrated with motion transfer encoding and the extraction of trajectories of anatomical joints. Validation is performed on the HumanEva II database.
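
    A minimal sketch of the cage-deformation idea referenced above, assuming generalized cage coordinates have already been computed on the reference pose (Green Coordinates additionally include per-face normal terms, omitted here for brevity); all names and the toy data are illustrative:

```python
# Cage-based deformation: each mesh vertex is a fixed linear combination of
# cage vertices, so a new pose of the coarse cage drives the dense mesh.
import numpy as np

def deform_with_cage(phi, cage_vertices_deformed):
    """
    phi:                    (n_mesh_vertices, n_cage_vertices) coordinates,
                            precomputed once on the reference pose.
    cage_vertices_deformed: (n_cage_vertices, 3) cage positions for the new pose.
    returns:                (n_mesh_vertices, 3) deformed mesh vertices.
    """
    return phi @ cage_vertices_deformed

# Usage: a toy four-vertex cage driving two mesh vertices.
phi = np.array([[0.25, 0.25, 0.25, 0.25],
                [0.10, 0.20, 0.30, 0.40]])
cage = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.2]])   # one cage vertex moved upward
print(deform_with_cage(phi, cage))
```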

    More than a Million Ways to Be Pushed: A High-Fidelity Experimental Dataset of Planar Pushing

    Pushing is a motion primitive useful for handling objects that are too large, too heavy, or too cluttered to be grasped. It is at the core of much of robotic manipulation, in particular when physical interaction is involved. It seems reasonable, then, to wish for robots to understand how pushed objects move. In reality, however, robots often rely on approximations that yield models that are computable, but also restricted and inaccurate. Just how close are those models? How reasonable are the assumptions they are based on? To help answer these questions, and to get a better experimental understanding of pushing, we present a comprehensive, high-fidelity dataset of planar pushing experiments. The dataset contains timestamped poses of a circular pusher and a pushed object, as well as the forces at the interaction. We vary the push interaction along 6 dimensions: surface material, shape of the pushed object, contact position, pushing direction, pushing speed, and pushing acceleration. An industrial robot automates the data capture along precisely controlled position-velocity-acceleration trajectories of the pusher, which yield dense samples of positions and forces of uniform quality. We finish the paper by characterizing the variability of friction and evaluating the most common assumptions and simplifications made by models of frictional pushing in robotics.
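
    A hypothetical sketch of how one trial of such a dataset could be represented in code; the field names and grouping below are assumptions for illustration, not the dataset's actual schema:

```python
# Illustrative containers for one planar-pushing trial: per-sample state
# (poses and forces) plus the six varied experimental dimensions.
from dataclasses import dataclass

@dataclass
class PushSample:
    t: float                  # timestamp [s]
    pusher_pose: tuple        # (x, y) position of the circular pusher [m]
    object_pose: tuple        # (x, y, theta) planar pose of the pushed object
    force: tuple              # (fx, fy) contact force at the interaction [N]

@dataclass
class PushTrial:
    surface_material: str     # e.g. a named surface plate
    object_shape: str         # identifier of the pushed object
    contact_position: float   # where along the object edge the pusher touches
    push_direction: float     # [rad], relative to the edge normal
    push_speed: float         # [m/s]
    push_acceleration: float  # [m/s^2]
    samples: list             # list[PushSample], densely and uniformly sampled
```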

    A Revisit of Shape Editing Techniques: from the Geometric to the Neural Viewpoint

    3D shape editing is widely used in a range of applications such as movie production, computer games, and computer-aided design. It is also a popular research topic in computer graphics and computer vision. Over the past decades, researchers have developed a series of editing methods to make the editing process faster, more robust, and more reliable. Traditionally, the deformed shape is determined by finding the optimal transformations and weights for an energy term. With the increasing availability of 3D shapes on the Internet, data-driven methods were proposed to improve the editing results. More recently, as deep neural networks became popular, many deep-learning-based editing methods, which are naturally data-driven, have been developed. We survey recent research, from the geometric viewpoint to emerging neural deformation techniques, and categorize the methods into organic shape editing and man-made model editing. Both traditional methods and recent neural-network-based methods are reviewed.
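
    As an illustration of the traditional energy-minimization formulation mentioned above (not any specific surveyed method), a minimal least-squares Laplacian editing sketch with soft handle constraints:

```python
# Minimize ||L V' - L V||^2 + w^2 ||V'[handles] - targets||^2:
# preserve surface detail (Laplacian/delta coordinates) while pulling a few
# handle vertices toward user-specified positions.
import numpy as np

def laplacian_edit(V, L, handle_ids, handle_targets, w=10.0):
    """
    V:              (n, 3) rest-pose vertices.
    L:              (n, n) mesh Laplacian (e.g. uniform or cotangent weights).
    handle_ids:     indices of constrained vertices.
    handle_targets: (len(handle_ids), 3) desired handle positions.
    w:              constraint weight; larger means stiffer handles.
    """
    n = V.shape[0]
    S = np.zeros((len(handle_ids), n))
    S[np.arange(len(handle_ids)), handle_ids] = w
    A = np.vstack([L, S])                       # stacked least-squares system
    b = np.vstack([L @ V, w * handle_targets])
    V_new, *_ = np.linalg.lstsq(A, b, rcond=None)
    return V_new
```

    In practice such systems are built with sparse cotangent Laplacians and sparse solvers, or with hard constraints; the dense solve here only keeps the sketch short.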

    Persistence of neuronal representations through time and damage in the hippocampus

    How do neurons encode long-term memories? Bilateral imaging of neuronal activity in the mouse hippocampus reveals that, from one day to the next, ~40% of neurons change their responsiveness to cues, but thereafter only 1% of cells change per day. Despite these changes, neuronal responses are resilient to a lack of exposure to a previously completed task or to hippocampus lesions. Unlike individual neurons, whose responses change after a few days, groups of neurons with inter- and intrahemispheric synchronous activity show stable responses for several weeks. The likelihood that a neuron maintains its responsiveness across days is proportional to the number of neurons with which its activity is synchronous. Information stored in individual neurons is relatively labile, but it can be reliably stored in networks of synchronously active neurons.
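
    As a purely illustrative sketch (not taken from the study), one simple way such pairwise synchrony could be quantified is to correlate binned activity traces and count, per neuron, the partners above a threshold; the binning and threshold are assumptions:

```python
# Count, for each neuron, how many other neurons have binned activity that
# correlates with it above a chosen threshold.
import numpy as np

def synchronous_partner_counts(activity, threshold=0.3):
    """
    activity:  (n_neurons, n_time_bins) binned activity traces.
    returns:   (n_neurons,) number of partners correlating above `threshold`.
    """
    corr = np.corrcoef(activity)      # pairwise Pearson correlations
    np.fill_diagonal(corr, 0.0)       # ignore self-correlation
    return (corr > threshold).sum(axis=1)
```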

    Video-based methodology for markerless human motion analysis

    This study presents a video-based experiment for the markerless study of human motion. Silhouettes are extracted from a multi-camera video system to reconstruct a 3D mesh for each frame using a visual-hull-based reconstruction method. For comparison with traditional motion analysis results, we set up an experiment combining video recordings from 8 cameras with a marker-based motion capture system (Vicon™). Our preliminary data provide the distances between the 3D trajectories from the Vicon system and the 3D meshes extracted from the video cameras. In the long term, the main ambition of this method is to provide measurements of skeleton motion for human motion analysis while eliminating markers.
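
    A minimal sketch of the comparison step described above, approximating the marker-to-mesh distance by the nearest mesh vertex in each frame (a true point-to-triangle distance would be slightly smaller); function and variable names are illustrative:

```python
# Per-frame distance between a mocap marker trajectory and the reconstructed
# mesh, using the nearest-vertex approximation.
import numpy as np

def marker_to_mesh_distances(marker_traj, mesh_vertices_per_frame):
    """
    marker_traj:             (n_frames, 3) marker positions from the mocap system.
    mesh_vertices_per_frame: list of (n_i, 3) vertex arrays, one per frame.
    returns:                 (n_frames,) nearest-vertex distance per frame.
    """
    dists = []
    for p, V in zip(marker_traj, mesh_vertices_per_frame):
        dists.append(np.linalg.norm(V - p, axis=1).min())
    return np.array(dists)
```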

    Sensors for Vital Signs Monitoring

    Sensor technology for monitoring vital signs is an important topic for various service applications, such as entertainment and personalization platforms and Internet of Things (IoT) systems, as well as for traditional medical purposes, such as disease indication and prediction. Vital signs to monitor include respiration and heart rates, body temperature, blood pressure, oxygen saturation, electrocardiogram, blood glucose concentration, brain waves, etc. Gait and walking length can also be regarded as vital signs because they indirectly indicate human activity and status. Sensing technologies include contact sensors such as electrocardiography (ECG), electroencephalography (EEG), and photoplethysmography (PPG); non-contact sensors such as ballistocardiography (BCG); and invasive/non-invasive sensors for diagnosing variations in blood characteristics or body fluids. Radar, vision, and infrared sensors can also be useful for detecting vital signs from the movement of humans or their organs. Signal processing, extraction, and analysis techniques are important in industrial applications, along with hardware implementation techniques. Battery management and wireless power transmission technologies, the design and optimization of low-power circuits, and systems for continuous monitoring and data collection/transmission should also be considered together with the sensor technologies. In addition, machine-learning-based diagnostic techniques can be used to extract meaningful information from continuous monitoring data.
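
    As a small illustrative example of the signal-processing side mentioned above (assumed sampling rate, filter band, and peak spacing; not from the text), heart rate can be estimated from a PPG trace by band-pass filtering and peak detection:

```python
# Estimate heart rate from a PPG signal: band-pass around cardiac frequencies,
# detect systolic peaks, and convert the mean peak interval to beats per minute.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_from_ppg(ppg, fs=100.0):
    """
    ppg: 1-D PPG signal samples.
    fs:  sampling rate [Hz].
    returns: estimated heart rate in beats per minute.
    """
    # Band-pass over typical cardiac frequencies (0.5-4 Hz, i.e. 30-240 bpm).
    b, a = butter(2, [0.5 / (fs / 2), 4.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ppg)
    # Require peaks at least 0.3 s apart (below 200 bpm).
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs))
    if len(peaks) < 2:
        return float("nan")
    mean_interval = np.mean(np.diff(peaks)) / fs   # seconds per beat
    return 60.0 / mean_interval
```

    In continuous-monitoring systems, such per-window estimates are typically smoothed over time before being reported.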