200 research outputs found

    Code Prediction by Feeding Trees to Transformers

    We advance the state of the art in the accuracy of code prediction (next-token prediction) used in autocomplete systems. First, we report that the recently proposed Transformer architecture, even out of the box, outperforms previous neural and non-neural systems for code prediction. We then show that by making the Transformer architecture aware of the syntactic structure of code, we further increase the margin by which a Transformer-based system outperforms previous systems. With this, it outperforms the accuracy of an RNN-based system (similar to Hellendoorn et al., 2018) by 18.3%, the Deep3 system (Raychev et al., 2016) by 14.1%, and an adaptation of Code2Seq (Alon et al., 2018) for code prediction by 14.4%. We present in the paper several ways of communicating the code structure to the Transformer, which is fundamentally built for processing sequence data. We provide a comprehensive experimental evaluation of our proposal, along with alternative design choices, on a standard Python dataset as well as on a Facebook internal Python corpus. Our code and data preparation pipeline will be made available in open source.
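    The key idea in this abstract is communicating the syntactic structure of code to a model built for sequences. As a minimal, hypothetical illustration of that general idea (not the paper's actual encoding), the sketch below linearizes a Python AST by pre-order traversal into (node-type, value) tokens that a Transformer-style model could consume.

```python
import ast

def linearize(node):
    """Pre-order (depth-first) traversal of a Python AST.

    Yields simple (node_type, value) tokens. This is only an illustrative way
    of exposing syntactic structure to a sequence model, not the encoding
    proposed in the paper.
    """
    value = None
    if isinstance(node, ast.Name):
        value = node.id
    elif isinstance(node, ast.Constant):
        value = repr(node.value)
    elif isinstance(node, ast.Attribute):
        value = node.attr
    yield (type(node).__name__, value)
    for child in ast.iter_child_nodes(node):
        yield from linearize(child)

tokens = list(linearize(ast.parse("total = price * quantity")))
print(tokens)
# [('Module', None), ('Assign', None), ('Name', 'total'), ('Store', None),
#  ('BinOp', None), ('Name', 'price'), ('Load', None), ('Mult', None), ...]
```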

    Convolutional Sparse Kernel Network for Unsupervised Medical Image Analysis

    The availability of large-scale annotated image datasets and recent advances in supervised deep learning methods enable the end-to-end derivation of representative image features that can impact a variety of image analysis problems. Such supervised approaches, however, are difficult to implement in the medical domain, where large volumes of labelled data are difficult to obtain due to the complexity of manual annotation and inter- and intra-observer variability in label assignment. We propose a new convolutional sparse kernel network (CSKN), a hierarchical unsupervised feature learning framework that addresses the challenge of learning representative visual features in medical image analysis domains where there is a lack of annotated training data. Our framework has three contributions: (i) we extend kernel learning to identify and represent invariant features across image sub-patches in an unsupervised manner; (ii) we initialise our kernel learning with a layer-wise pre-training scheme that leverages the sparsity inherent in medical images to extract initial discriminative features; (iii) we adapt a multi-scale spatial pyramid pooling (SPP) framework to capture subtle geometric differences between learned visual features. We evaluated our framework in medical image retrieval and classification on three public datasets. Our results show that our CSKN had better accuracy than other conventional unsupervised methods and comparable accuracy to methods that used state-of-the-art supervised convolutional neural networks (CNNs). Our findings indicate that our unsupervised CSKN provides an opportunity to leverage unannotated big data in medical imaging repositories.
    Comment: Accepted by Medical Image Analysis (with a new title, 'Convolutional Sparse Kernel Network for Unsupervised Medical Image Analysis'). The manuscript is available from the following link: https://doi.org/10.1016/j.media.2019.06.005
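    For readers unfamiliar with spatial pyramid pooling (contribution iii above), the sketch below shows the general idea in plain NumPy: pooling a response map over progressively finer grids and concatenating the results into one fixed-length descriptor. The grid levels and the max-pooling operator are assumptions for illustration; the CSKN paper adapts SPP to its own learned kernel features.

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Pool a 2-D feature map over a pyramid of grids and concatenate.

    Illustrative sketch only: the grid levels and the use of max-pooling are
    assumptions here, not the exact configuration used by CSKN.
    """
    h, w = feature_map.shape
    pooled = []
    for n in levels:
        # Split the map into an n x n grid and take the maximum of each cell.
        rows = np.linspace(0, h, n + 1, dtype=int)
        cols = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[rows[i]:rows[i + 1], cols[j]:cols[j + 1]]
                pooled.append(cell.max())
    return np.array(pooled)

# A 32x32 response map yields a descriptor of length 1 + 4 + 16 = 21.
descriptor = spatial_pyramid_pool(np.random.rand(32, 32))
print(descriptor.shape)  # (21,)
```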

    Extreme leg motion analysis of professional ballet dancers via MRI segmentation of multiple leg postures

    Purpose: Professional ballet dancers are subject to constant extreme motion, which is known to be at the origin of many articular disorders. To analyze this extreme motion, we exploit a unique magnetic resonance imaging (MRI) protocol, denoted 'dual-posture' MRI, which scans the subject in both the normal (supine) and extreme (split) postures. However, due to inhomogeneous tissue intensities and image artifacts in these scans, coupled with the unique acquisition protocol (split posture), segmentation of these scans is difficult. We present a novel algorithm that exploits the correlation between scans (bone shape invariance, appearance similarity) to automatically segment the dancer MRI images. Methods: While validated segmentation algorithms are available for standard supine MRI, these algorithms cannot be applied to the split scan, which exhibits a unique posture and strong inter-subject variations. In this study, the supine MRI is segmented with a deformable models method. The appearance and shape of the segmented supine models are then re-used to segment the split MRI of the same subject. The models are first registered to the split image using a novel constrained global optimization, before being refined with the deformable models technique. Results: Experiments on 10 dual-posture MRI datasets, segmenting the left and right femur bones, yielded accurate and robust results (mean distance error: 1.39 ± 0.31 mm). Conclusions: Using segmented models from the supine posture to assist the split-posture segmentation was found to be as accurate and consistent as the supine results. Our results suggest that dual-posture MRI can be efficiently and robustly segmented.
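    The reported figure of 1.39 ± 0.31 mm is a surface-distance error between the automatic and reference segmentations. As a hedged sketch of how such a metric can be computed (the paper's exact surface sampling and averaging protocol is not specified here and is assumed), the snippet below estimates a symmetric mean surface distance between two point clouds with SciPy.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(pred_pts, ref_pts):
    """Symmetric mean surface distance (in mm) between two surface point sets.

    Illustrative only: the paper's evaluation protocol (surface extraction,
    point sampling, symmetrisation) is assumed, not reproduced.
    """
    d_pred_to_ref, _ = cKDTree(ref_pts).query(pred_pts)
    d_ref_to_pred, _ = cKDTree(pred_pts).query(ref_pts)
    return 0.5 * (d_pred_to_ref.mean() + d_ref_to_pred.mean())

# Example with synthetic femur-surface points (coordinates in mm).
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 50.0, size=(2000, 3))
predicted = reference + rng.normal(scale=1.0, size=reference.shape)
print(f"mean surface distance: {mean_surface_distance(predicted, reference):.2f} mm")
```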