
    Hands-on experience with active appearance models


    Segmentation of Radiographs of Hands with Joint Damage Using Customized Active Appearance Models

    This paper is part of a project that investigates the possibilities of automating the assessment of joint damage in hand radiographs. Our goal is to design a robust segmentation algorithm for the hand skeleton. The algorithm is based on active appearance models (AAM) [1], which have been used for hand segmentation before [2]. The results will be used in the future for radiographic assessment of rheumatoid arthritis and the early detection of joint damage. New in this work with respect to [2] is the use of multiple object warps for each individual bone in a single AAM. This method prevents the modelling and reconstruction defects that arise when warping overlapping objects, which makes the algorithm more robust in cases where joint damage is present. The current implementation of the model includes the metacarpals, the phalanges, and the carpal region. For a first experimental evaluation, a collection of 50 hand radiographs was gathered and split into a training set (40 images) and a test set (10 images) in order to evaluate the algorithm's performance. First results show that in 8 images from the test set the bone contours are detected correctly to within 1.3 mm (1 SD) at a resolution of 15 pixels/cm. In two images not all contours are detected correctly, possibly because of extreme deviations in these images that have not yet been incorporated in the model due to the limited training set. More training examples are needed to optimize the AAM and improve the quality and reliability of the results.
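
    For orientation, the sketch below illustrates the shape component that underlies an AAM of the kind described above: aligned landmark sets are reduced with PCA, new contours are generated as the mean shape plus weighted modes of variation, and fit quality is reported in millimetres at a given resolution. The function names, the use of scikit-learn, and the 98% variance cut-off are assumptions for illustration only, not the authors' implementation (which additionally models texture and uses per-bone warps).

```python
# Minimal sketch of the shape part of an active appearance model (AAM).
# Assumes `training_shapes` is an (n_images, n_landmarks * 2) array of
# landmark coordinates that have already been aligned (e.g. by Procrustes).
import numpy as np
from sklearn.decomposition import PCA

def build_shape_model(training_shapes, variance_kept=0.98):
    """Fit a PCA shape model: mean shape plus principal modes of variation."""
    pca = PCA(n_components=variance_kept)  # keep ~98% of shape variance
    pca.fit(training_shapes)
    return pca

def synthesize_shape(pca, mode_weights):
    """Generate a plausible bone contour from a few low-dimensional weights."""
    weights = np.zeros(pca.n_components_)
    weights[: len(mode_weights)] = mode_weights
    return pca.mean_ + weights @ pca.components_

def mean_contour_error_mm(fitted, ground_truth, pixels_per_cm=15):
    """Average landmark error in millimetres, as in the 1.3 mm figure above."""
    diff = fitted.reshape(-1, 2) - ground_truth.reshape(-1, 2)
    err_px = np.linalg.norm(diff, axis=1)
    return err_px.mean() * 10.0 / pixels_per_cm  # pixels -> mm
```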

    Matching hand radiographs

    Biometric verification and identification methods applied to medical images can be used to find possible inconsistencies in patient records, and may also be useful for forensic research. In this work we present a method for identifying patients by their hand radiographs. We use the active appearance model representations presented before [1] to extract 64 shape features per bone from the metacarpals, the proximal phalanges, and the middle phalanges. The number of features was reduced to 20 by applying principal component analysis. Subsequently, a likelihood-ratio classifier [2] determines whether an image potentially belongs to another patient in the data set. First, to study the symmetry between both hands, we used the likelihood-ratio classifier to match 45 left-hand images to a database of 44 (matching) right-hand images and vice versa. We found an average equal-error probability of 6.4%, which indicates that the two hand shapes are highly symmetrical. Therefore, to increase the number of samples per patient, the distinction between left and right hands was dropped. Second, we ran multiple experiments with randomly selected training images from 24 patients; for several patients multiple image pairs were available. Test sets were created from the images of three different patients plus 10 other images from patients that were in the training set. We estimated the equal error rate at 0.05%. Our experiments suggest that the shapes of the hand bones contain biometric information that can be used to identify persons.
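
    A minimal sketch of this kind of verification pipeline is given below: shape features are reduced to 20 dimensions with PCA and scored with a Gaussian likelihood-ratio classifier. The diagonal-covariance assumption, the variable names, and the structure of the "world" model are illustrative assumptions, not the exact classifier of reference [2].

```python
# Sketch of PCA feature reduction plus a Gaussian likelihood-ratio score.
import numpy as np
from sklearn.decomposition import PCA

def fit_models(features, n_components=20):
    """Reduce shape features and fit a 'world' (impostor) model on all patients."""
    pca = PCA(n_components=n_components).fit(features)
    reduced = pca.transform(features)
    world_mean = reduced.mean(axis=0)
    world_var = reduced.var(axis=0) + 1e-9  # diagonal covariance, regularized
    return pca, world_mean, world_var

def log_likelihood_ratio(pca, world_mean, world_var, enrolled, probe, subject_var):
    """Log ratio of p(probe | same patient as `enrolled`) to p(probe | anyone).

    `subject_var` is the within-patient variance, which would be estimated from
    patients with multiple images, as mentioned in the abstract.
    """
    x = pca.transform(probe.reshape(1, -1)).ravel()
    mu = pca.transform(enrolled.reshape(1, -1)).ravel()
    same = -0.5 * np.sum((x - mu) ** 2 / subject_var + np.log(2 * np.pi * subject_var))
    world = -0.5 * np.sum((x - world_mean) ** 2 / world_var + np.log(2 * np.pi * world_var))
    return same - world  # accept a match when this exceeds a threshold
```

    The equal error rate quoted in the abstract corresponds to the threshold at which the false-accept and false-reject rates of such a score coincide.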

    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis techniques. In the case of known objects, we concentrate on approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations. Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics.

    Embodied Precision: Intranasal Oxytocin Modulates Multisensory Integration

    Multisensory integration processes are fundamental to our sense of self as embodied beings. Bodily illusions, such as the rubber hand illusion (RHI) and the size-weight illusion (SWI), allow us to investigate how the brain resolves conflicting multisensory evidence during perceptual inference in relation to different facets of body representation. In the RHI, synchronous tactile stimulation of a participant's hidden hand and a visible rubber hand creates illusory body ownership; in the SWI, the perceived size of the body can modulate the estimated weight of external objects. According to Bayesian models, such illusions arise as an attempt to explain the causes of multisensory perception and may reflect the attenuation of somatosensory precision, which is required to resolve perceptual hypotheses about conflicting multisensory input. Recent hypotheses propose that the precision of sensorimotor representations is determined by modulators of synaptic gain, such as dopamine, acetylcholine, and oxytocin. However, these neuromodulatory hypotheses have not been tested in the context of embodied multisensory integration. The present double-blind, placebo-controlled, crossover study (N = 41 healthy volunteers) aimed to investigate the effect of intranasal oxytocin (IN-OT) on multisensory integration processes, tested by means of the RHI and the SWI. Results showed that IN-OT enhanced the subjective feeling of ownership in the RHI only when synchronous tactile stimulation was involved. Furthermore, IN-OT increased an embodied version of the SWI (quantified as estimation error during a weight estimation task). These findings suggest that oxytocin might modulate processes of visuotactile multisensory integration by increasing the precision of top-down signals against bottom-up sensory input.
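
    The abstract's appeal to precision can be made concrete with the standard Gaussian cue-combination rule, in which each sensory cue is weighted by its precision (inverse variance), so attenuating somatosensory precision shifts the fused estimate toward the visual cue. The toy function and numbers below are illustrative assumptions, not part of the study.

```python
# Toy illustration of precision-weighted multisensory integration.
def integrate(cue_means, cue_variances):
    """Maximum-likelihood fusion of independent Gaussian cues."""
    precisions = [1.0 / v for v in cue_variances]
    total = sum(precisions)
    mean = sum(m * p for m, p in zip(cue_means, precisions)) / total
    return mean, 1.0 / total  # fused estimate and its (reduced) variance

# Vision places the hand at 0 cm, touch at 10 cm (a rubber-hand-style conflict).
print(integrate([0.0, 10.0], [1.0, 1.0]))  # equal precision -> estimate 5.0
print(integrate([0.0, 10.0], [1.0, 4.0]))  # attenuated touch -> estimate 2.0
```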

    Multisensory integration across exteroceptive and interoceptive domains modulates self-experience in the rubber-hand illusion

    Identifying with a body is central to being a conscious self. The now classic “rubber hand illusion” demonstrates that the experience of body ownership can be modulated by manipulating the timing of exteroceptive (visual and tactile) body-related feedback. Moreover, the strength of this modulation is related to individual differences in sensitivity to internal bodily signals (interoception). However, the interaction of exteroceptive and interoceptive signals in determining the experience of body ownership within an individual remains poorly understood. Here, we demonstrate that this depends on the online integration of exteroceptive and interoceptive signals by implementing an innovative “cardiac rubber hand illusion” that combined computer-generated augmented reality with feedback of interoceptive (cardiac) information. We show that both subjective and objective measures of virtual-hand ownership are enhanced by cardio-visual feedback in time with the actual heartbeat, as compared to asynchronous feedback. We further show that these measures correlate with individual differences in interoceptive sensitivity, and are also modulated by the integration of proprioceptive signals instantiated using real-time visual remapping of finger movements to the virtual hand. Our results demonstrate that interoceptive signals directly influence the experience of body ownership via multisensory integration, and they lend support to models of conscious selfhood based on interoceptive predictive coding.