    SHREC 2022 track on online detection of heterogeneous gestures

    This paper presents the outcomes of a contest organized to evaluate methods for the online recognition of heterogeneous gestures from sequences of 3D hand poses. The task is the detection of gestures belonging to a dictionary of 16 classes characterized by different pose and motion features. The dataset features continuous sequences of hand-tracking data in which the gestures are interleaved with non-significant motions. The data were captured using the HoloLens 2 finger-tracking system in a realistic use case of mixed-reality interaction. The evaluation considers not only detection performance but also latency and false positives, making it possible to assess the feasibility of practical interaction tools based on the proposed algorithms. The outcomes of the contest's evaluation demonstrate the need for further research to reduce recognition errors, while the computational cost of the proposed algorithms is sufficiently low. Comment: Accepted in the Computers & Graphics journal.
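    The latency/false-positive trade-off evaluated in the track can be illustrated with a minimal sliding-window sketch. This is not any contest entrant's method: the per-frame `classify` model, the majority-vote rule, and all names here are hypothetical, assuming only a stream of hand-pose frames and a frame-level classifier with a non-gesture class.

    ```python
    import numpy as np
    from collections import deque

    def online_detect(frames, classify, window=30, threshold=0.8):
        """Sliding-window online gesture detection over a hand-pose stream.

        classify: hypothetical per-frame model mapping a pose frame to a
        class id (0 = non-gesture). A gesture is reported once one non-zero
        class fills at least `threshold` of the window; the window length
        sets the detection latency, the threshold controls false positives.
        """
        buf = deque(maxlen=window)
        detections = []
        armed = True  # re-arm only after the dominant class drops out
        for i, frame in enumerate(frames):
            buf.append(classify(frame))
            if len(buf) < window:
                continue
            labels, counts = np.unique(np.array(buf), return_counts=True)
            top = counts.argmax()
            if labels[top] != 0 and counts[top] / window >= threshold:
                if armed:
                    detections.append((i, int(labels[top])))
                    armed = False
            else:
                armed = True
        return detections
    ```

    Growing the window suppresses spurious firings at the cost of later detections, which is exactly the tension the track's evaluation protocol measures.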

    One Network to Segment Them All: A General, Lightweight System for Accurate 3D Medical Image Segmentation

    Many recent medical segmentation systems rely on powerful deep learning models to solve highly specific tasks. To maximize performance, it is standard practice to evaluate numerous pipelines with varying model topologies, optimization parameters, pre- and post-processing steps, and even model cascades. It is often not clear how the resulting pipeline transfers to different tasks. We propose a simple and thoroughly evaluated deep learning framework for segmentation of arbitrary medical image volumes. The system requires no task-specific information and no human interaction, and is based on a fixed model topology and a fixed hyperparameter set, eliminating the process of model selection and its inherent tendency to cause method-level over-fitting. The system is available as open source and does not require deep learning expertise to use. Without task-specific modifications, the system performed better than or similarly to highly specialized deep learning methods across 3 separate segmentation tasks. In addition, it ranked 5th and 6th in the first and second rounds of the 2018 Medical Segmentation Decathlon, comprising another 10 tasks. The system relies on multi-planar data augmentation, which facilitates the application of a single 2D architecture based on the familiar U-Net. Multi-planar training combines the parameter efficiency of a 2D fully convolutional neural network with a systematic train- and test-time augmentation scheme, which allows the 2D model to learn a representation of the 3D image volume that fosters generalization.
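    The multi-planar idea can be sketched with an axis-aligned simplification (the published system also samples oblique view planes; restricting to the three orthogonal axes is my assumption for brevity): slice the 3D volume along every axis so that a single 2D network sees all plane orientations of the same anatomy.

    ```python
    import numpy as np

    def multiplanar_views(volume):
        """Slice a 3D volume along its three orthogonal axes.

        Axis-aligned sketch of multi-planar augmentation: every 2D
        slice, from every orientation, can feed the same 2D network,
        so the model implicitly learns a 3D representation.
        """
        slices = []
        for axis in range(3):
            for i in range(volume.shape[axis]):
                slices.append(np.take(volume, i, axis=axis))
        return slices
    ```

    At test time, per-slice predictions from the different orientations would be fused (e.g. averaged) back into a single 3D segmentation.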

    Dynamic Multi-object Gaussian Process Models

    Statistical shape models (SSMs) are state-of-the-art medical image analysis tools for extracting and explaining shape across a set of biological structures. A combined analysis of shape and pose variation would provide additional utility in medical image analysis tasks such as automated multi-organ segmentation and completion of partial data. However, a principled and robust way to combine shape and pose features has been elusive due to three main issues: 1) non-homogeneity of the data (data with linear and non-linear natural variation across features), 2) non-optimal representation of the 3D Euclidean motion (rigid transformation representations that are not proportional to the kinetic energy that moves an object from one position to the other), and 3) artificial discretization of the models. Here, we propose a new dynamic multi-object statistical modelling framework for the analysis of human joints in a continuous domain. Specifically, we propose to normalise shape and dynamic spatial features in the same linearized statistical space, permitting the use of linear statistics; and we adopt an optimal 3D Euclidean motion representation for more accurate rigid transformation comparisons. The method affords an efficient generative dynamic multi-object modelling platform for biological joints. We validate the method using controlled synthetic data. The shape-pose prediction results suggest that the novel concept may have utility for a range of medical image analysis applications including management of human joint disorders.
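    Linearizing pose so it can share a statistical space with shape can be illustrated with the standard SO(3) log map, which turns a rotation matrix into an axis-angle vector. This is an illustrative stand-in only: the paper argues for a motion representation proportional to kinetic energy, and the `joint_feature` helper below is hypothetical.

    ```python
    import numpy as np

    def so3_log(R):
        """Log map of a 3x3 rotation matrix to its axis-angle vector,
        a standard way to linearize a rotation for linear statistics."""
        theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
        if np.isclose(theta, 0.0):
            return np.zeros(3)
        w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
        return theta / (2 * np.sin(theta)) * w

    def joint_feature(points, R, t):
        # Hypothetical combined vector: flattened landmarks next to
        # linearized pose (log-rotation + translation), so shape and
        # pose live in one space where linear statistics apply.
        return np.concatenate([points.ravel(), so3_log(R), t])
    ```

    Because the log map is a vector space locally, means and principal components of such combined vectors are well defined, which is the property the linearized statistical space relies on.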

    Learning Shape Priors from Pieces

    Point Distribution Models (PDMs) require a dataset in which point-to-point correspondence between the individual shapes has been established. However, in the medical domain, minimising radiation exposure and pathological deformations are reasons why healthy anatomies are often only available as partial observations. To exploit partial shapes for learning shape models, previous methods required at least a few complete shapes and either a robust registration method or a robust learning algorithm. Our proposed method implements the idea of multiple imputation from Bayesian statistics. We learn a PDM from a dataset consisting of only incomplete shapes and a single full template. For this, we first estimate the posterior distribution of point-to-point registrations for each partial observation. Then we construct the PDM from the set of registration distributions. We quantitatively evaluate our method on a 2D dataset of hands and a 3D dataset of femurs with known ground truth. Furthermore, we showcase how to use our method on only partial clinical data to build a 3D statistical model of the human skull. The code is made open source and the synthetic dataset publicly available.
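    For readers unfamiliar with the model being learned: a classical PDM is just PCA over corresponded, flattened landmark vectors. The sketch below assumes correspondence is already solved (establishing it from partial data is precisely the paper's contribution); the helper names are mine.

    ```python
    import numpy as np

    def build_pdm(shapes):
        """Build a Point Distribution Model from corresponded shapes.

        shapes: (n_samples, n_points * dim) matrix of flattened
        landmarks, assumed already in point-to-point correspondence.
        """
        mean = shapes.mean(axis=0)
        centered = shapes - mean
        # PCA via SVD: rows of Vt are the principal modes of variation
        _, s, Vt = np.linalg.svd(centered, full_matrices=False)
        variances = s ** 2 / (len(shapes) - 1)
        return mean, Vt, variances

    def sample_shape(mean, modes, variances, coeffs):
        # New shape = mean + modes weighted by coeffs (in std. devs.)
        return mean + (coeffs * np.sqrt(variances)) @ modes
    ```

    The multiple-imputation idea then amounts to building such a model not from one registration per partial shape, but from samples of each observation's posterior distribution over registrations.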