3 research outputs found

    Object Localization and Recognition for Mobile Robots with Online Learning based on Mixture of Projected Gaussian

    One of the primary capabilities required of autonomous robots is recognizing the surrounding environment with high responsiveness, often in combination with object recognition and grasping tasks. Moreover, robots acting in mutable scenarios are also required to learn new object models online. Along with its peculiar requirements, robotics offers the object recognition task some unique advantages, such as the robot's ability to move through the environment. In addition, an autonomous robot can usually relax the recognition precision required at the beginning of its exploration and favour the speed at which results are obtained. The aim of the work presented in this thesis is to explore a new object recognition method able to exploit these advantages in order to fulfil the requirements of autonomous robotics. To enhance pose estimation, the proposed algorithm prioritizes preserving the geometrical information carried by the objects' shape and texture. Since the object models also need to be as lightweight as possible, the algorithm relies on the extraction of local 6 DoF features to describe the object's appearance without loading the final model with unnecessary information. Once the 6 DoF keypoints are obtained, the proposed method uses a specifically designed probability distribution, the Mixture of Projected Gaussians (MoPG), to learn their spatial distribution. A Bag of Words (BoW) technique is applied after feature detection to make the feature descriptors more invariant to small appearance changes due to lighting conditions or perspective distortions. The choice of the MoPG distribution rests on an algebraic property of the Gaussian function, namely its closure under convolution. In this thesis this property is exploited to obtain a closed-form formula for the cross-correlation of MoPGs.
The recognition algorithm uses the cross-correlation between MoPGs to both identify and localize objects in the scene. The recognition and localization performance of the proposed technique was validated on two publicly available datasets, the RGB-D Dataset and the BigBIRD Dataset. An analysis of both category and instance recognition results is presented, and the advantages and issues of the proposed technique are discussed. The localization error (2 degrees) and the instance recognition rate (91%) proved to be aligned with the state of the art, justifying further exploration of the proposed method. The topics presented in this thesis were further explored in related works. In particular, a collaboration with the Intelligent Systems Research Institute (Sungkyunkwan University, Republic of Korea) led to an adapted version of the proposed method that has been successfully integrated into an autonomous domestic robot.
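The closure property above can be illustrated in the simpler one-dimensional Euclidean case (the thesis' MoPG additionally handles projected orientation components, which this sketch omits): the cross-correlation of two Gaussian mixtures is itself available in closed form as a double sum over component pairs, since the integral of a product of two Gaussians is again a Gaussian evaluated at the shift.

```python
import numpy as np

def gauss_pdf(x, mu, var):
    """Evaluate a 1-D Gaussian density N(x; mu, var)."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def gmm_cross_correlation(w1, mu1, var1, w2, mu2, var2, shift=0.0):
    """Closed-form cross-correlation of two 1-D Gaussian mixtures at a shift t.

    Uses closure under convolution:
        integral N(x; m1, v1) * N(x + t; m2, v2) dx = N(t; m2 - m1, v1 + v2),
    so the mixture cross-correlation is a double sum over component pairs.
    """
    score = 0.0
    for wi, mi, vi in zip(w1, mu1, var1):
        for wj, mj, vj in zip(w2, mu2, var2):
            score += wi * wj * gauss_pdf(shift, mj - mi, vi + vj)
    return score
```

The recognition score used for matching would be this quantity evaluated over candidate shifts; the thesis' contribution is deriving the analogous closed form for the projected (orientation-aware) case.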

    GMM-based Single-joint Angle Estimation using EMG signals

    In this paper we explored the possibility of using electromyography (EMG) to train a Gaussian Mixture Model (GMM) to estimate the bending angle of a single human joint. In particular, EMG signals from eight leg muscles and the knee joint angle were acquired during a kick task from three different subjects. The GMM was validated on new, unseen data, and the estimation performance was compared with respect to the number of EMG channels and the number of collected trials used during the training phase. Results showed that our framework achieves high performance even with few EMG channels (normalized mean square error: 0.96, 0.98, 0.98 for the three subjects, respectively) and with a small training dataset, opening new and interesting perspectives for applications to humanoid robots and exoskeletons.
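The abstract does not detail the regression scheme, but a common way to map EMG features to a joint angle with a GMM is Gaussian Mixture Regression: fit the joint density over [EMG features, angle], then condition on the EMG to get the expected angle. A minimal sketch, assuming scikit-learn is available; the array shapes and component count are illustrative, not taken from the paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(X_emg, y_angle, n_components=3, seed=0):
    """Fit a GMM over the joint space [EMG features, joint angle]."""
    Z = np.column_stack([X_emg, y_angle])
    return GaussianMixture(n_components=n_components,
                           covariance_type="full",
                           random_state=seed).fit(Z)

def predict_angle(gmm, X_emg):
    """Gaussian Mixture Regression: E[angle | EMG] from the joint GMM."""
    d = X_emg.shape[1]  # EMG feature dimension; angle is the last axis
    preds = np.zeros(len(X_emg))
    for k, x in enumerate(X_emg):
        num, den = 0.0, 0.0
        for c in range(gmm.n_components):
            mu, S = gmm.means_[c], gmm.covariances_[c]
            mu_x, mu_y = mu[:d], mu[d:]
            Sxx, Sxy = S[:d, :d], S[:d, d:]
            inv = np.linalg.inv(Sxx)
            diff = x - mu_x
            # responsibility of component c for x (up to a shared constant)
            w = (gmm.weights_[c] * np.exp(-0.5 * diff @ inv @ diff)
                 / np.sqrt(np.linalg.det(Sxx)))
            cond_mean = (mu_y + Sxy.T @ inv @ diff)[0]  # E[angle | x, c]
            num += w * cond_mean
            den += w
        preds[k] = num / den
    return preds
```

With a single component this reduces to linear regression; additional components let the same machinery capture nonlinear EMG-to-angle relationships.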

    Towards Remote Gait Analysis: Combining Physics and Probabilistic Models for Estimating Human Joint Mechanics

    The connected health movement and remote patient monitoring promise to revolutionize patient care in multiple clinical contexts. In orthopedics, continuous monitoring of human joint and muscle tissue loading in free-living conditions will enable novel insight concerning musculoskeletal disease etiology. These developments are necessary for comprehensive patient characterization, progression monitoring, and personalized therapy. This vision has motivated many recent advances in wearable sensor-based algorithm development that aim to perform biomechanical analyses traditionally restricted to confined laboratory spaces. However, these techniques have not translated to practical deployment for remote monitoring. Several barriers to translation have been identified, including complex sensor arrays. Thus, the aim of this work was to lay the foundation for remote gait analysis and techniques for estimating clinically relevant biomechanics with a reduced sensor array. The first step in this process was to develop an open-source platform that generalized the processing pipeline for automated remote biomechanical analysis. The clinical utility of the platform was demonstrated for monitoring patient gait following knee surgery using continuous recordings of thigh-worn accelerometer data and rectus femoris electromyograms (EMG) during free-living conditions. Individual walking bouts were identified, from which strides were extracted and characterized for patient evaluation. A novel, multifactorial asymmetry index was proposed based on temporal, EMG, and kinematic descriptors of gait that was able to differentiate between patients at different stages of recovery and that was more sensitive to recovery time than were indices of cumulative physical activity. The remainder of the work focused on algorithms for estimating joint moment and simulating muscle contraction dynamics using a reduced sensor array.
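The exact form of the multifactorial asymmetry index is not given in the abstract; a plausible sketch combines per-descriptor symmetry indices over the temporal, EMG, and kinematic descriptors named above into a single score. The descriptor names and the unweighted averaging below are illustrative assumptions, not the thesis' definition:

```python
import numpy as np

def symmetry_index(left, right):
    """Classic symmetry index: absolute left/right difference as a
    percentage of the left/right mean (0 = perfectly symmetric)."""
    return 100.0 * abs(left - right) / (0.5 * (left + right))

def multifactorial_asymmetry(descriptors):
    """Combine per-descriptor symmetry indices into one score.

    `descriptors` maps a descriptor name (e.g. 'stance_time',
    'emg_rms', 'knee_rom' -- hypothetical names) to a (left, right)
    pair of stride-averaged values. Here the per-descriptor indices
    are simply averaged; the thesis' exact weighting is not specified
    in the abstract.
    """
    scores = [symmetry_index(l, r) for l, r in descriptors.values()]
    return float(np.mean(scores))
```

A single scalar of this kind is what allows tracking a patient's recovery trajectory over weeks of free-living recordings.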
A hybrid technique was proposed that combined both physics and probabilistic models in a complementary fashion. Specifically, the notion of a muscle synergy function was introduced that describes the mapping between excitations from a subset of muscles and excitations from other synergistic muscles. A novel model of these synergy functions was developed that enabled estimation of unmeasured muscle excitations using a measured subset. Data from thigh- and shank-worn inertial sensors were used to estimate segment kinematics and muscle-tendon unit (MTU) lengths using physics-based techniques and a model of the musculoskeletal geometry. These estimates of muscle excitation and MTU length were used as inputs for EMG-driven simulation of muscle contraction. Estimates of muscle force, power, and work as well as net joint moment from the proposed hybrid technique were compared to estimates from laboratory-based techniques. This constitutes the first sensor-only (four EMG and two inertial sensors) simulation of muscle contraction dynamics and joint-moment estimation, with machine learning used only to estimate unmeasured muscle excitations. This work provides the basis for automated remote biomechanical analysis with reduced sensor arrays, from raw sensor recordings to estimates of muscle force and power and net joint moment. The proposed hybrid technique requires data from only four EMG and two inertial sensors, and work has begun to seamlessly integrate these sensors into a knee brace for monitoring patients following knee surgery. Future work should build on these developments, including further validation and the design of methods utilizing remotely and longitudinally observed biomechanics for prognosis and for optimizing patient-specific interventions.
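The synergy-function idea (predicting unmeasured muscle excitations from a measured subset) can be sketched with a simple linear least-squares stand-in for the thesis' probabilistic model: learn the mapping during a fully instrumented laboratory session, then apply it with the reduced sensor set. The linear form and the physiological [0, 1] clipping are assumptions for illustration:

```python
import numpy as np

def fit_synergy_function(E_measured, E_unmeasured):
    """Least-squares linear map from measured to unmeasured excitations.

    A simplified stand-in for the thesis' synergy-function model:
    during a fully instrumented session, learn W (with intercept) such
    that E_unmeasured ~= [E_measured, 1] @ W.
    """
    X = np.column_stack([E_measured, np.ones(len(E_measured))])
    coef, *_ = np.linalg.lstsq(X, E_unmeasured, rcond=None)
    return coef  # shape: (n_measured + 1, n_unmeasured)

def predict_excitations(coef, E_measured):
    """Estimate unmeasured excitations from the measured subset,
    clipped to the physiological [0, 1] excitation range."""
    X = np.column_stack([E_measured, np.ones(len(E_measured))])
    return np.clip(X @ coef, 0.0, 1.0)
```

In the hybrid pipeline, these estimated excitations would feed the EMG-driven contraction simulation alongside the inertial-sensor-derived MTU lengths.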