
    Automatic semantic segmentation of the lumbar spine: Clinical applicability in a multi-parametric and multi-center study on magnetic resonance images

    Significant difficulties in medical image segmentation include the high variability of images caused by their origin (multi-center), the acquisition protocols (multi-parametric), the variability of human anatomy, illness severity, the effects of age and gender, and other notable factors. This work addresses problems associated with the automatic semantic segmentation of lumbar spine magnetic resonance images using convolutional neural networks. We aimed to assign a class label to each pixel of an image, with classes defined by radiologists corresponding to structural elements such as vertebrae, intervertebral discs, nerves, blood vessels, and other tissues. The proposed network topologies are variants of the U-Net architecture, defined using several complementary blocks: three types of convolutional blocks, spatial attention models, deep supervision, and a multilevel feature extractor. Here, we describe the topologies and analyze the results of the neural network designs that obtained the most accurate segmentations. Several proposed designs outperform the standard U-Net used as a baseline, primarily when used in ensembles, where the outputs of multiple neural networks are combined according to different strategies.

    This work was partially supported by the Regional Ministry of Health of the Valencian Region under the MIDAS project from BIMCV; by Generalitat Valenciana under grant agreement ACIF/2018/285; and by the DeepHealth project (Deep-Learning and HPC to Boost Biomedical Applications for Health), which has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 825111. The authors thank the Bioinformatics and Biostatistics Unit of the Príncipe Felipe Research Center (CIPF) for providing access to the cluster co-funded by European Regional Development Funds (FEDER) in the Valencian Community 2014-2020, and the Biomedical Imaging Mixed Unit of Fundació per al Foment de la Investigació Sanitària i Biomèdica (FISABIO) for providing access to the cluster openmind, also co-funded by FEDER in the Valencian Community 2014-2020.

    Sáenz-Gamboa, JJ.; Doménech, J.; Alonso-Manjarrés, A.; Gomez, J.; De La Iglesia-Vayá, M. (2023). Automatic semantic segmentation of the lumbar spine: Clinical applicability in a multi-parametric and multi-center study on magnetic resonance images. Artificial Intelligence in Medicine. 140. https://doi.org/10.1016/j.artmed.2023.102559
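
    The ensembling strategy mentioned above, combining the outputs of several U-Net variants, can be illustrated with a short sketch. The snippet below is not the authors' implementation; it simply assumes each trained model returns a per-pixel softmax probability map and shows two common combination strategies, probability averaging and majority voting.

    import numpy as np

    def ensemble_segmentation(prob_maps, strategy="average"):
        """Combine per-pixel class probabilities predicted by several models.

        prob_maps: list of arrays of shape (H, W, n_classes), one per model,
                   assumed to be softmax outputs.
        Returns an (H, W) array of predicted class labels.
        """
        stacked = np.stack(prob_maps, axis=0)            # (n_models, H, W, C)
        if strategy == "average":
            # Average the probability maps, then take the most likely class per pixel.
            return stacked.mean(axis=0).argmax(axis=-1)
        if strategy == "vote":
            # Each model votes with its own argmax; the most frequent label wins.
            votes = stacked.argmax(axis=-1)              # (n_models, H, W)
            n_classes = stacked.shape[-1]
            counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)], axis=-1)
            return counts.argmax(axis=-1)
        raise ValueError(f"unknown strategy: {strategy}")

    # Random maps stand in for the outputs of three hypothetical U-Net variants
    # over a 64x64 image with 5 tissue classes.
    rng = np.random.default_rng(0)
    maps = [rng.dirichlet(np.ones(5), size=(64, 64)) for _ in range(3)]
    labels = ensemble_segmentation(maps, strategy="vote")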

    Functional signatures in non-small-cell lung cancer: a systematic review and meta-analysis of sex-based differences in transcriptomic studies

    While studies have established the existence of differences in the epidemiological and clinical patterns of lung adenocarcinoma between male and female patients, we know relatively little about the molecular mechanisms underlying such sex-based differences. In this study, we explore these differences through a meta-analysis of transcriptomic data. We performed a meta-analysis of the functional profiling of nine public datasets that included 1366 samples from the Gene Expression Omnibus and The Cancer Genome Atlas databases. Meta-analysis results from data merged, normalized, and corrected for batch effect show an enrichment of Gene Ontology terms and Kyoto Encyclopedia of Genes and Genomes pathways related to the immune response, nucleic acid metabolism, and purinergic signaling. We discovered the overrepresentation of terms associated with the immune response, particularly the acute inflammatory response, and with purinergic signaling in female lung adenocarcinoma patients, which could influence reported clinical differences. Further evaluation of the identified differential biological processes and pathways could lead to the discovery of new biomarkers and therapeutic targets. Our findings also emphasize the relevance of sex-specific analyses in biomedicine, which represents a crucial aspect influencing biological variability in disease.

    This work was supported by Fondo de Investigación Sanitaria (ISCIII PI15-00209), GV/2020/186, and ISCIII PT17/0009/0015 FEDER.
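
    As a rough illustration of the cross-study combination that a meta-analysis performs, the sketch below pools per-study effect sizes for a single functional term with a DerSimonian-Laird random-effects model. It is a simplified, hypothetical stand-in rather than the published pipeline; the function name and toy numbers are invented, and the real analysis worked on effect sizes computed per Gene Ontology term and KEGG pathway across the nine datasets.

    import numpy as np

    def dersimonian_laird(effects, variances):
        """Pool study-level effect sizes with a random-effects model.

        effects:   effect size of one functional term in each study.
        variances: the corresponding within-study variances.
        Returns the pooled effect, its standard error, and the between-study
        variance estimate tau^2.
        """
        effects = np.asarray(effects, dtype=float)
        variances = np.asarray(variances, dtype=float)
        w = 1.0 / variances                               # fixed-effect weights
        fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed) ** 2)            # Cochran's Q heterogeneity
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
        w_star = 1.0 / (variances + tau2)                 # random-effects weights
        pooled = np.sum(w_star * effects) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return pooled, se, tau2

    # Toy example: one hypothetical term measured in four studies.
    pooled, se, tau2 = dersimonian_laird(
        effects=[0.42, 0.31, 0.55, 0.10],
        variances=[0.02, 0.05, 0.03, 0.04],
    )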

    A comparison of Covid-19 early detection between convolutional neural networks and radiologists

    Background: The role of chest radiography in COVID-19 disease has changed since the beginning of the pandemic, from a diagnostic tool when microbiological resources were scarce to one focused on detecting and monitoring COVID-19 lung involvement. Early detection of the disease using chest radiographs is still helpful in resource-poor environments. However, the sensitivity of a chest radiograph for diagnosing COVID-19 is modest, even for expert radiologists. In this paper, the performance of a deep learning algorithm on the first clinical encounter is evaluated and compared with a group of radiologists with different years of experience.

    Methods: The algorithm uses an ensemble of four deep convolutional networks, Ensemble4Covid, trained to detect COVID-19 on frontal chest radiographs. The algorithm was tested using images from the first clinical encounter of positive and negative cases. Its performance was compared with five radiologists on a smaller test subset of patients. The algorithm's performance was also validated using the public dataset COVIDx.

    Results: Compared to the consensus of five radiologists, the Ensemble4Covid model achieved an AUC of 0.85, whereas the radiologists achieved an AUC of 0.71. Compared with other state-of-the-art models, a single model of our ensemble achieved nonsignificant differences on the public dataset COVIDx.

    Conclusion: The results show that the use of images from the first clinical encounter significantly drops the detection performance of COVID-19. The performance of Ensemble4Covid under these challenging conditions is considerably higher than that of a consensus of five radiologists. Artificial intelligence can be used for the fast diagnosis of COVID-19.

    The project "Chest screening for patients with COVID-19" (COV2000750, Special COVID-19 resolution) was funded by the Instituto de Salud Carlos III. The project DIRAC (INNVA1/2020/42) was funded by the Agencia Valenciana de la Innovación, Generalitat Valenciana.

    Albiol Colomer, A.; Albiol, F.; Paredes Palacios, R.; Plasencia-Martínez, JM.; Blanco Barrio, A.; García Santos, JM.; Tortajada, S.... (2022). A comparison of Covid-19 early detection between convolutional neural networks and radiologists. Insights into Imaging. 13(1):1-12. https://doi.org/10.1186/s13244-022-01250-3
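
    The headline comparison amounts to averaging the probability output by each network in the ensemble and computing an AUC on first-encounter images. The snippet below is a minimal sketch of that evaluation, not the actual Ensemble4Covid code: roc_auc_score comes from scikit-learn, and the scores and labels are placeholders rather than real model outputs.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def ensemble_score(per_model_scores):
        """Average the COVID-19 probability predicted by each network.

        per_model_scores: array of shape (n_models, n_images) holding each
        model's probability that the radiograph shows COVID-19 involvement.
        """
        return np.asarray(per_model_scores).mean(axis=0)

    # Placeholder scores from 4 models on 6 first-encounter radiographs.
    scores = np.array([
        [0.9, 0.2, 0.7, 0.4, 0.8, 0.1],
        [0.8, 0.3, 0.6, 0.5, 0.9, 0.2],
        [0.7, 0.1, 0.8, 0.3, 0.7, 0.3],
        [0.9, 0.2, 0.5, 0.4, 0.8, 0.1],
    ])
    labels = np.array([1, 0, 1, 0, 1, 0])                # ground-truth COVID-19 status
    auc = roc_auc_score(labels, ensemble_score(scores))  # area under the ROC curve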

    Cov-caldas: A new COVID-19 chest X-Ray dataset from state of Caldas-Colombia

    The emergence of COVID-19 as a global pandemic forced researchers worldwide, in various disciplines, to investigate and propose efficient strategies and/or technologies to prevent COVID-19 from spreading further. One of the main challenges to be overcome is the fast and efficient detection of COVID-19 using deep learning approaches and medical images such as chest Computed Tomography (CT) and chest X-ray images. To contribute to this challenge, a new dataset was collected in collaboration with “S.E.S Hospital Universitario de Caldas” (https://hospitaldecaldas.com/) in Colombia and organized following the Medical Imaging Data Structure (MIDS) format. The dataset contains 7,307 chest X-ray images, divided into 3,077 COVID-19-positive and 4,230 COVID-19-negative images. Images were subjected to a selection and anonymization process to allow the scientific community to use them freely. Finally, different convolutional neural networks were used to perform technical validation. This dataset contributes to the scientific community by tackling significant limitations regarding data quality and availability for the detection of COVID-19. © 2022, The Author(s).
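
    As a rough sketch of the kind of technical validation mentioned above, the snippet below fine-tunes an off-the-shelf CNN on a binary COVID-19 positive/negative split. It assumes the images have first been exported into a plain folder-per-class layout (the xrays/train, covid, and no_covid names are hypothetical); it does not reproduce the dataset's actual MIDS directory structure or the authors' training setup.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # Hypothetical folder-per-class export of the X-ray images:
    #   xrays/train/covid/*.png and xrays/train/no_covid/*.png
    preprocess = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),   # radiographs are single channel
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    train_set = datasets.ImageFolder("xrays/train", transform=preprocess)
    loader = DataLoader(train_set, batch_size=32, shuffle=True)

    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, 2)      # two classes: COVID-19 / no COVID-19
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for images, targets in loader:                     # one pass over the training split
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()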

    The Open Brain Consent: Informing research participants and obtaining consent to share brain imaging data

    Having the means to share research data openly is essential to modern science. For human research, a key aspect of this endeavor is obtaining consent from participants, not just to take part in a study, which is a basic ethical principle, but also to share their data with the scientific community. To ensure that participants' privacy is respected, national and/or supranational regulations and laws are in place. It is, however, not always clear to researchers what their implications are, nor how to comply with them. The Open Brain Consent (https://open-brain-consent.readthedocs.io) is an international initiative that aims to provide researchers in the brain imaging community with information about data sharing options and tools. We present here a short history of this project and its latest developments, and share pointers to consent forms, including a template consent form that is compliant with the EU General Data Protection Regulation (GDPR). We also share pointers to an associated data user agreement that is useful not only in the EU context, but also for any researchers dealing with personal (clinical) data elsewhere.

    The Past, Present, and Future of the Brain Imaging Data Structure (BIDS)

    The Brain Imaging Data Structure (BIDS) is a community-driven standard for the organization of data and metadata from a growing range of neuroscience modalities. This paper is meant as a history of how the standard has developed and grown over time. We outline the principles behind the project, the mechanisms by which it has been extended, and some of the challenges being addressed as it evolves. We also discuss the lessons learned through the project, with the aim of enabling researchers in other domains to learn from the success of BIDS.

    Development of the BIDS standard has been supported by the International Neuroinformatics Coordinating Facility, the Laura and John Arnold Foundation, the National Institutes of Health (R24MH114705, R24MH117179, R01MH126699, R24MH117295, P41EB019936, ZIAMH002977, R01MH109682, RF1MH126700, R01EB020740), the National Science Foundation (OAC-1760950, BCS-1734853, CRCNS-1429999, CRCNS-1912266), Novo Nordisk Fonden (NNF20OC0063277), the French National Research Agency (ANR-19-DATA-0023, ANR-19-DATA-0021), Digital Europe TEF-Health (101100700), EU H2020 Virtual Brain Cloud (826421), the Human Brain Project (SGA2 785907, SGA3 945539), the European Research Council (Consolidator 683049), the German Research Foundation (SFB 1436/425899996, SFB 1315/327654276, SFB 936/178316478, SFB-TRR 295/424778381), SPP Computational Connectomics (RI 2073/6-1, RI 2073/10-2, RI 2073/9-1), the European Innovation Council PHRASE Horizon (101058240), the Berlin Institute of Health & Foundation Charité, the Johanna Quandt Excellence Initiative, ERAPerMed Pattern-Cog, and the Virtual Research Environment at the Charité Berlin, a node of EBRAINS Health Data Cloud.
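
    To make concrete what the standard organizes, the sketch below writes out the skeleton of a minimal BIDS-style dataset with one subject and one anatomical scan. The dataset name, version string, and participant values are placeholders, and the empty file standing in for the NIfTI image would not pass validation; it only illustrates the naming pattern.

    import json
    from pathlib import Path

    # Skeleton of a minimal BIDS-style dataset: one subject, one anatomical scan.
    root = Path("my_bids_dataset")
    (root / "sub-01" / "anat").mkdir(parents=True, exist_ok=True)

    # Required top-level metadata describing the dataset as a whole.
    (root / "dataset_description.json").write_text(json.dumps({
        "Name": "Example dataset",
        "BIDSVersion": "1.8.0",
    }, indent=2))

    # Tabular participant-level metadata.
    (root / "participants.tsv").write_text("participant_id\tage\tsex\nsub-01\t34\tF\n")

    # Imaging data follow the sub-<label>/<datatype>/<entities>_<suffix> pattern;
    # an empty file stands in for the actual NIfTI image here.
    (root / "sub-01" / "anat" / "sub-01_T1w.nii.gz").touch()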