3 research outputs found

    CubeNet: Equivariance to 3D Rotation and Translation

    3D Convolutional Neural Networks are sensitive to transformations applied to their input. This is a problem because a voxelized version of a 3D object and its rotated clone will look unrelated to each other after passing through the final layer of a network. Instead, an idealized model would preserve a meaningful representation of the voxelized object while explaining the pose difference between the two inputs. An equivariant representation vector has two components: the invariant identity part, and a discernible encoding of the transformation. Models that cannot explain pose differences risk "diluting" the representation in pursuit of optimizing a classification or regression loss function. We introduce a Group Convolutional Neural Network with linear equivariance to translations and right-angle rotations in three dimensions. We call this network CubeNet, reflecting its cube-like symmetry. By construction, this network helps preserve a 3D shape's global and local signature as it is transformed through successive layers. We apply this network to a variety of 3D inference problems, achieving state-of-the-art results on the ModelNet10 classification challenge and comparable performance on the ISBI 2012 Connectome Segmentation Benchmark. To the best of our knowledge, this is the first 3D rotation-equivariant CNN for voxel representations.
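    As a minimal sketch of the equivariance property the abstract describes (not the authors' CubeNet implementation), the snippet below checks f(rotate(x)) == rotate(f(x)) for a toy 3D layer whose averaging kernel is symmetric under right-angle rotations; an ordinary learned, non-symmetric kernel would generally fail this check, which is the gap group convolutions are designed to close.

import numpy as np
from scipy.ndimage import convolve

def toy_layer(x):
    # 3x3x3 averaging kernel: symmetric under every right-angle rotation of the cube,
    # so the layer commutes with those rotations (the equivariance property).
    kernel = np.ones((3, 3, 3)) / 27.0
    return convolve(x, kernel, mode="constant")

def rot90_3d(x, axes=(0, 1)):
    # One right-angle rotation of a voxel grid about a coordinate axis.
    return np.rot90(x, k=1, axes=axes)

rng = np.random.default_rng(0)
voxels = rng.random((16, 16, 16))      # toy voxelized 3D object

lhs = toy_layer(rot90_3d(voxels))      # rotate the input, then apply the layer
rhs = rot90_3d(toy_layer(voxels))      # apply the layer, then rotate the output

print(np.allclose(lhs, rhs))           # True: equivariant to this rotation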

    Detecting dressing failures using temporal–relational visual grammars

    Evaluation of dressing activities is essential in assessing the performance of patients with psycho-motor impairments. However, the current practice of monitoring dressing activity (performed by the patients in front of the therapist) has a number of disadvantages, given the personal nature of dressing as well as inconsistencies between the recorded performance of the activity and performance of the same activity carried out in the patients’ natural environment, such as their home. As such, a system that can evaluate dressing activities automatically and objectively would alleviate some of these issues. However, a number of challenges arise, including difficulties in correctly identifying garments, their position on the body (partially or fully worn) and their position in relation to other garments. To address these challenges, we have developed a novel method based on visual grammars to automatically detect dressing failures and explain the type of failure. Our method is based on the analysis of image sequences of dressing activities and requires only a video recording device. The analysis relies on a novel technique which we call a temporal–relational visual grammar; it can reliably recognize temporal dressing failures, while also detecting spatial and relational failures. Our method achieves 91% precision in detecting dressing failures performed by 11 subjects. We explain these results and discuss the challenges encountered during this work.
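    As a purely hypothetical illustration of one kind of rule such a grammar could encode (the paper's actual temporal–relational visual grammar is not reproduced here), the sketch below flags a temporal dressing failure when a garment that should go on later is observed fully worn before the garment that should precede it; the event format and garment names are invented for the example.

from typing import List, Optional, Tuple

# Each observation is (frame_index, garment, state), e.g. (12, "shirt", "worn").
Observation = Tuple[int, str, str]

def first_worn_frame(events: List[Observation], garment: str) -> Optional[int]:
    # Earliest frame in which the garment is observed as fully worn, if any.
    frames = [t for t, g, s in events if g == garment and s == "worn"]
    return min(frames) if frames else None

def temporal_failure(events: List[Observation], before: str, after: str) -> bool:
    # Failure if `after` is fully worn earlier than `before` (e.g. jacket before shirt).
    a = first_worn_frame(events, before)
    b = first_worn_frame(events, after)
    return a is not None and b is not None and b < a

events = [(5, "jacket", "worn"), (20, "shirt", "worn")]
print(temporal_failure(events, before="shirt", after="jacket"))  # True: temporal failure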