10 research outputs found

    Investigation of hospital discharge cases and SARS-CoV-2 introduction into Lothian care homes

    Background The first epidemic wave of severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) in Scotland resulted in high case numbers and mortality in care homes. In Lothian, over one-third of care homes reported an outbreak, while there was limited testing of hospital patients discharged to care homes. Aim To investigate patients discharged from hospitals as a source of SARS-CoV-2 introduction into care homes during the first epidemic wave. Methods A clinical review was performed for all patient discharges from hospitals to care homes from 1st March 2020 to 31st May 2020. Episodes were ruled out based on coronavirus disease 2019 (COVID-19) test history, clinical assessment at discharge, whole-genome sequencing (WGS) data and an infectious period of 14 days. Clinical samples were processed for WGS, and the consensus genomes generated were analysed using the Cluster Investigation and Virus Epidemiological Tool software. Patient timelines were obtained from electronic hospital records. Findings In total, 787 patients discharged from hospitals to care homes were identified. Of these, 776 (99%) were ruled out for subsequent introduction of SARS-CoV-2 into care homes. However, for 10 episodes the results were inconclusive, as there was low genomic diversity in the consensus genomes or no sequencing data were available. Only one discharge episode had a genomic, temporal and location link to positive cases during hospital admission, leading to 10 positive cases in the care home. Conclusion The majority of patients discharged from hospitals were ruled out for introduction of SARS-CoV-2 into care homes, highlighting the importance of screening all new admissions when faced with a novel emerging virus and no available vaccine.
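    The study's rule-out criteria (negative status at discharge, and a 14-day infectious window between discharge and the care home's first positive case) can be sketched as a small decision function. This is a hypothetical illustration of the epidemiological logic only, not the authors' actual review protocol; the function name and date fields are assumptions.

```python
from datetime import date, timedelta

INFECTIOUS_PERIOD = timedelta(days=14)  # infectious window used in the study

def rule_out_episode(discharge_date, first_positive_in_home, negative_at_discharge):
    """Return True if a discharge episode can be ruled out as the source
    of a care-home introduction: the resident was negative at discharge,
    the home had no outbreak, or the home's first positive case fell
    outside the 14-day infectious window after discharge."""
    if negative_at_discharge:
        return True
    if first_positive_in_home is None:
        return True  # no outbreak occurred in that care home
    return first_positive_in_home - discharge_date > INFECTIOUS_PERIOD

# A discharge on 1 March with the home's first case on 20 March falls
# outside the 14-day window, so the episode is ruled out.
print(rule_out_episode(date(2020, 3, 1), date(2020, 3, 20), False))  # True
```

    Episodes that cannot be ruled out this way are the ones the study escalated to WGS-based cluster analysis.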

    Acceleration of brain cancer detection algorithms during surgery procedures using GPUs

    The HypErspectraL Imaging Cancer Detection (HELICoiD) European project aims at developing a methodology for tumor tissue classification through hyperspectral imaging (HSI) techniques. This paper describes a parallel implementation of the Support Vector Machine (SVM) algorithm employed for the classification of hyperspectral (HS) images of in vivo human brain tissue. SVMs have demonstrated high accuracy in the supervised classification of biological tissues, and especially in the classification of human brain tumors. In this work, both the training and the classification stages of the SVM were accelerated using Graphics Processing Units (GPUs). Accelerating the training stage makes it possible to incorporate new samples during the surgical procedure and create new mathematical models for the classifier. Results show that the developed system is capable of efficient training and real-time-compliant classification.
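    The pipeline described above (train an SVM on labelled spectra, then classify every pixel of a hyperspectral cube) can be sketched on CPU with scikit-learn; the paper's contribution is moving both `fit` and `predict` onto GPUs. The cube dimensions and random labels below are placeholder assumptions, not the HELICoiD data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical hyperspectral cube: 32x32 pixels, 128 spectral bands,
# flattened so each pixel becomes one feature vector.
cube = rng.random((32, 32, 128))
pixels = cube.reshape(-1, 128)

# Hypothetical labelled training spectra (0 = healthy, 1 = tumor).
train_X = rng.random((200, 128))
train_y = rng.integers(0, 2, 200)

clf = SVC(kernel="rbf")      # the paper accelerates both stages on GPU
clf.fit(train_X, train_y)    # training stage (updatable intraoperatively)
labels = clf.predict(pixels).reshape(32, 32)  # per-pixel classification map
print(labels.shape)  # (32, 32)
```

    Re-running `fit` with newly labelled intraoperative samples is what the accelerated training stage enables during surgery.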

    Towards Real-Time Computing of Intraoperative Hyperspectral Imaging for Brain Cancer Detection Using Multi-GPU Platforms

    Several factors make brain cancer identification a challenging task for neurosurgeons during the surgical procedure. The surgeon's naked eye is sometimes not enough to accurately delineate the brain tumor's location and extension because of its diffuse nature, which infiltrates the surrounding healthy tissue. For this reason, a support system that provides accurate cancer delimitation is essential in order to improve surgical outcomes and hence the patient's quality of life. The brain cancer detection system developed as part of the 'HypErspectraL Imaging Cancer Detection' (HELICoiD) European project meets this requirement by exploiting a non-invasive technique suitable for medical diagnosis: hyperspectral imaging (HSI). A crucial constraint that this system has to satisfy is providing a real-time response so as not to prolong the surgery. The large amount of data that characterizes hyperspectral images, and the complex computations performed by the classification system, make High Performance Computing (HPC) systems essential for real-time processing. The most efficient implementation developed in this work, which exploits Graphics Processing Unit (GPU) technology, is able to classify the largest image in the database (the worst case) in less than three seconds, comfortably satisfying the real-time constraint of 1 minute set for surgical procedures, and making it a potential solution for implementing hyperspectral video processing in the near future.
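    The multi-GPU scheme above amounts to partitioning the pixel matrix across devices and checking the end-to-end time against the surgical budget. A minimal serial sketch of that structure (the chunk classifier and worker count are placeholders, not the paper's kernels):

```python
import time
import numpy as np

REAL_TIME_BUDGET_S = 60.0  # 1-minute surgical constraint from the paper

def classify_chunk(chunk):
    # Placeholder per-pixel classifier standing in for the GPU kernel:
    # here simply a thresholded mean over the spectral bands.
    return (chunk.mean(axis=1) > 0.5).astype(np.uint8)

def classify_multi_device(pixels, n_devices=4):
    """Split the (pixels x bands) matrix across n_devices workers
    (executed serially here) and concatenate the partial maps."""
    chunks = np.array_split(pixels, n_devices)
    return np.concatenate([classify_chunk(c) for c in chunks])

pixels = np.random.default_rng(1).random((100_000, 128))
start = time.perf_counter()
labels = classify_multi_device(pixels)
elapsed = time.perf_counter() - start
print(labels.shape, elapsed < REAL_TIME_BUDGET_S)
```

    In the real system each chunk would be dispatched to a separate GPU, so the wall-clock time approaches that of the slowest device rather than the sum.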

    A Novel Negative Abundance‐Oriented Hyperspectral Unmixing Algorithm

    Spectral unmixing is a popular technique for analyzing remotely sensed hyperspectral data sets with subpixel precision. Over the last few years, many algorithms have been developed for each of the main processing steps involved in spectral unmixing (SU) under the linear mixture model (LMM) assumption: 1) estimation of the number of endmembers; 2) identification of the spectral signatures of the endmembers; and 3) estimation of the abundance of endmembers in the scene. Although this general processing chain has proven to be effective for unmixing certain types of hyperspectral images, it also has some drawbacks. The first comes from the fact that the output of each stage is the input of the following one, which favors the propagation of errors along the unmixing chain. A second problem is the huge variability of the results obtained when estimating the number of endmembers of a hyperspectral scene with different state-of-the-art algorithms, which influences the rest of the process. A third issue is the computational complexity of the whole process. To address these issues, this paper develops a novel negative abundance-oriented SU algorithm that covers, for the first time in the literature, all the main steps involved in traditional hyperspectral unmixing chains. The proposed algorithm can also be easily adapted to a scenario in which the number of endmembers is known in advance, and two additional variations of the algorithm are provided to deal with high-noise scenarios and to significantly reduce its execution time, respectively. Our experimental results, conducted using both synthetic and real hyperspectral scenes, indicate that the presented method is highly competitive (in terms of both unmixing accuracy and computational performance) with other SU techniques with similar requirements, while providing a fully self-contained unmixing chain without the need for any input parameters.
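    The third step of the classical chain, abundance estimation under the LMM, can be illustrated with non-negative least squares: each pixel spectrum y is modelled as E @ a with a >= 0, where E holds the endmember signatures. This is the textbook step the paper reworks, not the paper's negative abundance-oriented algorithm itself; the band and endmember counts are arbitrary.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
E = rng.random((50, 3))             # 50 bands, 3 hypothetical endmembers
true_a = np.array([0.6, 0.3, 0.1])  # ground-truth abundances
y = E @ true_a                      # noiseless mixed-pixel spectrum

# Abundance estimation: solve min ||E a - y|| subject to a >= 0.
a, residual = nnls(E, y)
print(np.round(a, 3))               # recovers the true abundances
```

    In the noiseless, well-conditioned case NNLS recovers the abundances exactly; with real scenes, noise and endmember estimation errors propagate into this step, which is the error-propagation drawback the paper targets.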

    Early detection of change by applying scale-space methodology to hyperspectral images

    Given an object of interest that evolves in time, one often wants to detect possible changes in its properties. The first changes may be small and occur at different scales, and it may be crucial to detect them as early as possible. Examples include identification of potentially malignant changes in skin moles or the gradual onset of food quality deterioration. Statistical scale-space methodologies can be very useful in such situations, since exploring the measurements at multiple resolutions can help identify even subtle changes. We extend a recently proposed scale-space methodology into a technique that successfully detects such small changes while keeping false alarms at a very low level. The potential of the novel methodology is first demonstrated with hyperspectral skin mole data artificially distorted to include a very small change. Our real data application considers hyperspectral images used for food quality detection. In these experiments the performance of the proposed method is either superior to or on par with a standard approach such as principal component analysis.
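    The core scale-space idea (smooth both measurements at several resolutions and flag a change only when the smoothed difference clearly exceeds a noise estimate) can be sketched in one dimension. The scales, threshold, and noise estimator below are illustrative assumptions, not the statistical machinery of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Toy 1-D "spectrum" with a small localized change added at a later time.
x = np.linspace(0, 1, 200)
baseline = np.sin(2 * np.pi * x)
changed = baseline.copy()
changed[90:110] += 0.05            # subtle change, easy to miss unsmoothed

def detect_change(a, b, scales=(1, 4, 16), z=3.0):
    """Flag a change if, at any smoothing scale, the max absolute
    smoothed difference exceeds z times a robust (MAD-based) noise
    estimate of that difference."""
    for s in scales:
        diff = gaussian_filter1d(b, s) - gaussian_filter1d(a, s)
        noise = np.median(np.abs(diff - np.median(diff))) + 1e-12
        if np.max(np.abs(diff)) > z * noise:
            return True
    return False

print(detect_change(baseline, changed))   # True
print(detect_change(baseline, baseline))  # False
```

    Checking several scales is what lets a change that is invisible at fine resolution still stand out once the noise is averaged away.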

    Neural Networks-Based On-Site Dermatologic Diagnosis through Hyperspectral Epidermal Images

    Cancer originates from the uncontrolled growth of healthy cells into a mass. Chromophores, such as hemoglobin and melanin, characterize the spectral properties of skin, allowing the classification of lesions into different etiologies. Hyperspectral imaging systems gather skin-reflected and transmitted light across several wavelength ranges of the electromagnetic spectrum, enabling potential skin-lesion differentiation through machine learning algorithms. Challenged by limited data availability and tiny inter- and intra-tumoral variability, here we introduce a pipeline based on deep neural networks to diagnose hyperspectral skin cancer images, targeting a handheld device equipped with a low-power graphics processing unit for routine clinical testing. Enhanced by data augmentation, transfer learning, and hyperparameter tuning, the proposed architectures aim to meet and improve on the well-known dermatologist-level detection performance in both benign-malignant and multiclass classification tasks, diagnosing hyperspectral data under real-time constraints. Experiments show 87% sensitivity and 88% specificity for benign-malignant classification, and specificity above 80% in the multiclass scenario. AUC measurements suggest classification performance above 90% with adequate thresholding. Concerning binary segmentation, we measured skin DICE and IoU scores higher than 90%. We estimated that segmenting the epidermal lesions with the U-Net++ architecture takes at most 1.21 s while consuming 5 W, meeting the imposed time limit. Hence, we can diagnose hyperspectral epidermal data under real-time constraints.
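    The metrics quoted above (sensitivity, specificity, DICE, IoU) are standard and easy to state explicitly; a minimal sketch of their definitions on toy label vectors and masks (the example data are invented, not the study's results):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return float(tp / (tp + fn)), float(tn / (tn + fp))

def dice_iou(mask_true, mask_pred):
    """DICE = 2|A∩B|/(|A|+|B|); IoU = |A∩B|/|A∪B| for binary masks."""
    inter = np.sum(mask_true & mask_pred)
    dice = 2 * inter / (mask_true.sum() + mask_pred.sum())
    iou = inter / np.sum(mask_true | mask_pred)
    return float(dice), float(iou)

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 1])
print(sensitivity_specificity(y_true, y_pred))  # sensitivity 2/3, specificity 4/5
```

    Note that DICE is always at least as large as IoU for the same masks, which is worth remembering when comparing segmentation papers that report different metrics.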

    Porting a PCA-based hyperspectral image dimensionality reduction algorithm for brain cancer detection on a manycore architecture

    This paper presents a study of the parallelism of a Principal Component Analysis (PCA) algorithm and its adaptation to a manycore MPPA (Massively Parallel Processor Array) architecture, which gathers 256 cores distributed among 16 clusters. The study focuses on porting hyperspectral image processing onto manycore platforms by optimizing the processing to fulfill real-time constraints fixed by the image capture rate of the hyperspectral sensor. Real-time is a challenging objective for hyperspectral image processing, as hyperspectral images consist of extremely large volumes of data, and this problem is often addressed by reducing image size before the processing itself begins. To tackle the challenge, this paper analyzes the intrinsic parallelism of the different stages of the PCA algorithm, with the objective of exploiting the parallelization possibilities offered by an MPPA manycore architecture. Furthermore, the impact on internal communication when increasing the level of parallelism is also analyzed. Experimenting with medical images obtained from two different surgical use cases, an average speedup of 20 is achieved. Internal communications are shown to rapidly become the bottleneck that limits the achievable speedup offered by the PCA parallelization. As a result of this study, PCA processing time is reduced to less than 6 s, a time compatible with the targeted brain surgery application, which requires one frame per minute.
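    The PCA stages whose parallelism the paper analyzes (centering, covariance computation, eigendecomposition, projection) can be written out compactly; this is a plain NumPy sketch of the algorithm being ported, not the MPPA implementation, and the pixel/band counts are arbitrary.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Reduce the band dimensionality of hyperspectral pixels X
    (pixels x bands): centre the data, eigendecompose the band
    covariance, and project onto the leading eigenvectors."""
    Xc = X - X.mean(axis=0)                    # centering stage
    cov = Xc.T @ Xc / (X.shape[0] - 1)         # covariance stage
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigendecomposition (ascending)
    top = eigvecs[:, ::-1][:, :n_components]   # leading components
    return Xc @ top                            # projection stage

rng = np.random.default_rng(0)
pixels = rng.random((1000, 128))               # 1000 pixels, 128 bands
reduced = pca_reduce(pixels, 3)
print(reduced.shape)  # (1000, 3)
```

    The covariance and projection stages are the matrix products whose data movement across the 16 clusters drives the communication bottleneck the paper reports.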

    Super-resolution: a comprehensive survey
