
    LEGION-based image segmentation by means of spiking neural networks using normalized synaptic weights implemented on a compact scalable neuromorphic architecture

    The LEGION (Locally Excitatory, Globally Inhibitory Oscillator Network) topology has demonstrated good capabilities in scene segmentation applications. However, the LEGION algorithm requires high-performance machines to process a set of complex differential equations, which limits its use in practical real-time applications. Recently, several authors have proposed alternative methods based on spiking neural networks (SNN) to create oscillatory neural networks with low computational complexity that are feasible to implement on digital hardware for adaptive image segmentation. Nevertheless, existing SNNs with a LEGION configuration focus on the membrane model and leave aside the behavior of the synapses, even though synapses play an important role in synchronizing segments by self-adapting their weights. In this work, we propose an SNN-LEGION configuration with normalized synaptic weights that allows the network to self-adapt and synchronize several segments of any size and shape at the same time. The proposed SNN-LEGION method involves a global inhibitor, which is in charge of separating, in time, the segmentation of objects with different sizes and shapes. To validate the proposal, the SNN-LEGION method is implemented on an optimized scalable neuromorphic architecture. Our preliminary results demonstrate that the proposed normalization of the synaptic weights, together with the SNN-LEGION configuration, preserves the capacity of the LEGION network to separate segments in time, which can be useful in video processing applications such as vision systems for mobile robots, while offering lower computational complexity and area consumption than previously reported solutions. The authors would like to thank the Consejo Nacional de Ciencia y Tecnologia (CONACyT) and the IPN for financial support under project SIP-20180251. This work was also supported in part by the Spanish Ministry of Science and Innovation and the European Social Fund (ESF) under projects TEC2011-27047 and TEC2015-67278-R.
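    The abstract does not give the normalization formula; as a minimal sketch of one common choice, scaling each neuron's incoming excitatory weights so they sum to a fixed total (so that synchronization does not depend on how many neighbours a pixel's neuron has), one might write the following, where normalize_synaptic_weights and total_input are hypothetical names:

        import numpy as np

        def normalize_synaptic_weights(adjacency, total_input=1.0):
            """Scale each neuron's incoming excitatory weights to a fixed sum,
            so the total drive a neuron receives is independent of segment
            size and shape (one plausible reading of the paper's idea)."""
            weights = adjacency.astype(float).copy()
            for i in range(weights.shape[0]):
                fan_in = weights[i].sum()
                if fan_in > 0:
                    weights[i] *= total_input / fan_in
            return weights

        # 4-neuron chain: border neurons (1 neighbour) and interior neurons
        # (2 neighbours) receive the same total drive after normalization.
        adj = np.array([[0, 1, 0, 0],
                        [1, 0, 1, 0],
                        [0, 1, 0, 1],
                        [0, 0, 1, 0]])
        print(normalize_synaptic_weights(adj).sum(axis=1))  # -> [1. 1. 1. 1.]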

    Automated classification of total knee replacement prosthesis on plain film radiograph using a deep convolutional neural network

    The identification of the make and model of a total knee replacement (TKR) is a necessary step prior to revision surgery for periprosthetic fracture, loosening, wear or infection. Current methods may fail to correctly identify the implant up to 10% of the time. This study presents the training of a convolutional neural network (CNN) to automatically identify the make and model of seven TKR implants, or the absence of a TKR, on plain-film radiographs. Our dataset consists of 588 anteroposterior (AP) X-rays of the knee, randomly divided into training, validation and test sets with a 50:25:25 split. A CNN based on the ResNet-18 architecture was trained, with the best model selected using validation results and then evaluated on the hold-out test set. The trained network demonstrated perfect accuracy in classifying the hold-out test X-rays into one of the eight labelled classes. Saliency maps demonstrated that the outlines of the implants are key to a given prediction. Further research will benefit from larger datasets with more complete coverage of the possible implants. The ability to recognize implants outside the network's training distribution is essential for such an algorithm to operate safely in clinical practice. With these issues and limitations addressed, such an algorithm could save clinicians time and reduce instances where implants are not identified pre-operatively, simplifying re-operative cases and improving clinical outcomes.
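    The abstract specifies only the ResNet-18 backbone and the eight-class output; a minimal PyTorch sketch of such a classifier might look as follows (pretrained weights, the Adam optimizer and the learning rate are assumptions, not details from the paper):

        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_CLASSES = 8  # seven TKR implant models plus a "no implant" class

        # ImageNet-pretrained ResNet-18 with the final fully connected
        # layer replaced by an 8-way classification head.
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

        def train_step(images, labels):
            # images: N x 3 x 224 x 224 radiograph batch; labels: N class ids.
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            return loss.item()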

    Phase synchrony facilitates binding and segmentation of natural images in a coupled neural oscillator network

    Synchronization has been suggested as a mechanism for binding distributed feature representations, facilitating the segmentation of visual stimuli. Here we investigate this concept through unsupervised learning on natural visual stimuli. We simulate dual-variable neural oscillators with separate activation and phase variables; the binding of a set of neurons is coded by synchronized phase variables. The network of tangential synchronizing connections, learned from the induced activations, exhibits small-world properties and allows binding even over larger distances. We evaluate the resulting dynamic phase maps using segmentation masks labeled by human experts. Our simulation results show continuously increasing phase synchrony between neurons within the labeled segmentation masks. The evaluation of the network dynamics shows that the synchrony between network nodes establishes a relational coding of the natural image inputs. This demonstrates that the concept of binding by synchrony is applicable in the context of unsupervised learning with natural visual stimuli.
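    The article's full dual-variable dynamics are not reproduced here; as a rough Kuramoto-style sketch of the core idea, phase coupling gated by the units' activations so that only stimulus-driven units synchronize (phase_step is a hypothetical helper):

        import numpy as np

        def phase_step(phases, activations, coupling, dt=0.01, omega=1.0):
            """One Euler step of activation-gated phase dynamics: each unit
            is pulled toward the phases of coupled, co-active neighbours."""
            n = len(phases)
            dphi = np.full(n, omega)
            for i in range(n):
                pull = coupling[i] * activations * np.sin(phases - phases[i])
                dphi[i] += activations[i] * pull.sum()
            return (phases + dt * dphi) % (2 * np.pi)

        rng = np.random.default_rng(0)
        phases = rng.uniform(0, 2 * np.pi, 10)      # random initial phases
        activations = np.ones(10)                   # all units stimulus-driven
        coupling = np.ones((10, 10)) / 10           # uniform tangential coupling
        for _ in range(2000):
            phases = phase_step(phases, activations, coupling)
        # Relative phase spread is near zero: the group has synchronized.
        print(np.std(np.angle(np.exp(1j * (phases - phases[0])))))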

    Detection of Proximal Caries at The Molar Teeth Using Edge Enhancement Algorithm

    Panoramic X-ray produces the most common oral digital radiographic image used in dentistry practice, and the digital image can further improve accuracy compared to an analog one. This study aims to establish proximal caries edges on enhanced images so they can be easily recognized. The images were obtained from the Department of Radiology, General Hospital of M. Djamil Padang, Indonesia; a total of 101 images were tested. First, the images were analyzed by dentists practicing at Segment Padang Hospital, Indonesia, who concluded that proximal caries was present in 30 molar teeth. The images were then processed using Matlab software with the following steps: cropping, enhancement, edge detection, and edge enhancement. The detection accuracy of the edge-enhanced images, compared against the dentists' analysis, was 73.3%: proximal caries edges could be found conclusively in 22 teeth and only ambiguously in eight. The results of this study indicate that edge-enhanced images can be recommended to assist dentists in detecting proximal caries.
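    The study used Matlab, and the abstract names the steps but not the specific operators; a rough Python/OpenCV equivalent of the enhancement and edge stages (CLAHE and Sobel are assumed choices, not necessarily the paper's) might look like this:

        import cv2

        def enhance_edges(gray, clip=2.0, grid=(8, 8)):
            """Contrast enhancement, edge detection, then edge enhancement
            by overlaying the edge map on the enhanced image."""
            clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=grid)
            enhanced = clahe.apply(gray)
            # Sobel gradient magnitude as the edge map.
            gx = cv2.Sobel(enhanced, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(enhanced, cv2.CV_32F, 0, 1, ksize=3)
            edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
            return cv2.addWeighted(enhanced, 1.0, edges, 0.5, 0)

        # Usage on a cropped molar region of a panoramic radiograph:
        # roi = cv2.imread("molar_crop.png", cv2.IMREAD_GRAYSCALE)
        # out = enhance_edges(roi)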

    Head and Neck Cancer Primary Tumor Auto Segmentation Using Model Ensembling of Deep Learning in PET/CT Images

    Auto-segmentation of primary tumors in oropharyngeal cancer using PET/CT images is an unmet need that has the potential to improve radiation oncology workflows. In this study, we develop a series of deep learning models based on a 3D Residual Unet (ResUnet) architecture that can segment oropharyngeal tumors with high performance, as demonstrated through internal and external validation of large-scale datasets (training size = 224 patients, testing size = 101 patients) as part of the 2021 HECKTOR Challenge. Specifically, we leverage ResUNet models with either 256 or 512 bottleneck layer channels that demonstrate internal validation (10-fold cross-validation) mean Dice similarity coefficient (DSC) up to 0.771 and median 95% Hausdorff distance (95% HD) as low as 2.919 mm. We employ label fusion ensemble approaches, including Simultaneous Truth and Performance Level Estimation (STAPLE) and a voxel-level threshold approach based on majority voting (AVERAGE), to generate consensus segmentations on the test data by combining the segmentations produced through different trained cross-validation models. We demonstrate that our best performing ensembling approach (256 channels AVERAGE) achieves a mean DSC of 0.770 and median 95% HD of 3.143 mm through independent external validation on the test set. Our DSC and 95% HD test results are within 0.01 and 0.06 mm of the top ranked model in the competition, respectively. Concordance of internal and external validation results suggests our models are robust and can generalize well to unseen PET/CT data. We advocate that ResUNet models coupled to label fusion ensembling approaches are promising candidates for auto-segmentation of oropharyngeal primary tumors in PET/CT. Future investigations should target the ideal combination of channel widths and label fusion strategies to maximize segmentation performance.
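    STAPLE fusion typically relies on a dedicated implementation (e.g. in SimpleITK); the simpler AVERAGE approach described above, a voxel-level threshold on the mean of the fold predictions, can be sketched directly (majority_vote is a hypothetical helper):

        import numpy as np

        def majority_vote(masks, threshold=0.5):
            """Fuse binary segmentations from several cross-validation models:
            a voxel is foreground when more than `threshold` of the models
            predict foreground there."""
            stacked = np.stack([m.astype(float) for m in masks])  # folds x Z x Y x X
            return (stacked.mean(axis=0) > threshold).astype(np.uint8)

        # consensus = majority_vote([pred for pred in fold_predictions])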

    C2FTrans: Coarse-to-Fine Transformers for Medical Image Segmentation

    Convolutional neural networks (CNN), the most prevalent architecture for deep-learning-based medical image analysis, are still functionally limited by their intrinsic inductive biases and inadequate receptive fields. The transformer, designed to address this issue, has drawn explosive attention in natural language processing and computer vision due to its remarkable ability to capture long-range dependencies. However, most recent transformer-based methods for medical image segmentation directly apply vanilla transformers as an auxiliary module in CNN-based methods, resulting in severe detail loss due to the rigid patch-partitioning scheme in transformers. To address this problem, we propose C2FTrans, a novel multi-scale architecture that formulates medical image segmentation as a coarse-to-fine procedure. C2FTrans mainly consists of a cross-scale global transformer (CGT), which addresses local contextual similarity in CNNs, and a boundary-aware local transformer (BLT), which overcomes the boundary uncertainty brought by rigid patch partitioning. Specifically, CGT builds global dependency across three different small-scale feature maps to obtain rich global semantic features at an acceptable computational cost, while BLT captures mid-range dependency by adaptively generating windows around boundaries, under the guidance of entropy, to reduce computational complexity and minimize detail loss on large-scale feature maps. Extensive experimental results on three public datasets demonstrate the superior performance of C2FTrans against state-of-the-art CNN-based and transformer-based methods with fewer parameters and lower FLOPs. We believe the design of C2FTrans will further inspire future work on efficient and lightweight transformers for medical image segmentation. The source code of this paper is publicly available at https://github.com/xianlin7/C2FTrans
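    The BLT module itself is more involved; as a rough sketch of the entropy-guided idea, one might place refinement windows where a coarse softmax map is most uncertain, i.e. near segment boundaries (entropy_window_centers is a hypothetical helper, and real window placement would also suppress adjacent picks):

        import numpy as np

        def entropy_window_centers(probs, num_windows=4, eps=1e-8):
            """Return the positions of highest predictive entropy in a
            C x H x W softmax map; uncertainty peaks near boundaries."""
            entropy = -(probs * np.log(probs + eps)).sum(axis=0)  # H x W
            top = np.argsort(entropy.ravel())[::-1][:num_windows]
            return [np.unravel_index(i, entropy.shape) for i in top]

        # Toy 2-class map: foreground probability rises left to right,
        # so entropy (and the selected centers) peak where p is near 0.5.
        p_fg = np.linspace(0.0, 1.0, 64).reshape(8, 8)
        probs = np.stack([1 - p_fg, p_fg])
        print(entropy_window_centers(probs, num_windows=2))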