551 research outputs found

    Biomedical Data Classification with Improvised Deep Learning Architectures

    With the rise of very powerful hardware and the evolution of deep learning architectures, healthcare data analysis and its applications have been drastically transformed. These transformations mainly aim to aid healthcare personnel with the diagnosis and prognosis of a disease or abnormality at any point of the routine healthcare workflow. For instance, the detection of many cancer metastases depends on pathological tissue procedures and pathologist reviews. Severity classifications vary among pathologists, which in turn leads to different treatment options for a patient. This labor-intensive work can lead to errors or mistreatment, resulting in high healthcare costs. With the help of machine learning and deep learning modules, some of these traditional diagnosis techniques can be improved, aiding a doctor's decision making with an unbiased view. Such modules can help reduce the cost, the shortage of expertise, and the time needed to identify a disease. Beyond medical images, many other datapoints are available, such as omics data, biomarker measurements, and patient demographics and history. All of these datapoints can enhance disease classification or the prediction of progression with the help of machine learning and deep learning modules. However, it is very difficult to find a comprehensive dataset with all the different modalities and features in a healthcare setting, due to privacy regulations. Hence, in this thesis, we explore medical imaging data with clinical datapoints, as well as genomics datasets, separately, for classification tasks using combinational deep learning architectures. We use deep neural networks on 3D volumetric structural magnetic resonance images from an Alzheimer's disease dataset to classify the disease. A separate study examines classification based on clinical datapoints using machine learning algorithms. For bioinformatics applications, sequence classification is a crucial step in many metagenomics applications; however, it requires substantial preprocessing, such as sequence assembly or sequence alignment, before raw whole-genome sequencing data can be used, making it time consuming, especially for bacterial taxonomy classification. Only a few approaches to sequence classification exist, mainly involving convolutions and deep neural networks. We develop a novel method that exploits the intrinsic nature of recurrent neural networks for 16S rRNA sequence classification and can be adapted to use read sequences directly. For this classification task, accuracy is improved using optimization techniques with a hybrid neural network.
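
    As a rough illustration of the recurrent approach described above, here is a minimal, hedged sketch in Python of an RNN-based taxon classifier over raw reads. The encoding, layer sizes, and all names are assumptions made for this example; the thesis's actual hybrid architecture and optimization techniques are not specified here.

        # Minimal sketch of an RNN classifier for 16S rRNA reads (illustrative;
        # the thesis's hybrid architecture and optimizer settings are assumptions).
        import torch
        import torch.nn as nn

        BASES = {"A": 0, "C": 1, "G": 2, "T": 3}  # index map for one-hot DNA encoding

        def encode(seq: str) -> torch.Tensor:
            """One-hot encode a DNA read into a (length, 4) float tensor."""
            idx = torch.tensor([BASES[b] for b in seq if b in BASES])
            return nn.functional.one_hot(idx, num_classes=4).float()

        class RnnTaxonClassifier(nn.Module):
            """Bidirectional LSTM over raw reads with a linear taxon head."""
            def __init__(self, num_taxa: int, hidden: int = 128):
                super().__init__()
                self.lstm = nn.LSTM(input_size=4, hidden_size=hidden,
                                    batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, num_taxa)

            def forward(self, x):            # x: (batch, length, 4)
                _, (h, _) = self.lstm(x)     # final hidden state per direction
                return self.head(torch.cat([h[0], h[1]], dim=-1))

        # Example: score one toy read against 30 hypothetical genera.
        model = RnnTaxonClassifier(num_taxa=30)
        read = encode("ACGTGGCTACGT" * 10).unsqueeze(0)  # batch of one read
        logits = model(read)                             # shape: (1, 30)

    Because the recurrent network consumes one base at a time, such a model can be applied to read sequences directly, without assembly or alignment, which is the property the thesis exploits.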

    A review on a deep learning perspective in brain cancer classification

    A World Health Organization (WHO) report from February 2018 shows that the mortality rate due to brain or central nervous system (CNS) cancer is highest in the Asian continent. It is critically important that cancer be detected early so that many of these lives can be saved. Cancer grading is an important aspect of targeted therapy. As cancer diagnosis is highly invasive, time consuming, and expensive, there is an immediate need to develop non-invasive, cost-effective, and efficient tools for brain cancer characterization and grade estimation. Brain scans using magnetic resonance imaging (MRI), computed tomography (CT), and other imaging modalities are fast and safer methods for tumor detection. In this paper, we summarize the pathophysiology of brain cancer, the imaging modalities for brain cancer, and automatic computer-assisted methods for brain cancer characterization within the machine and deep learning paradigm. Another objective of this paper is to identify the open issues in existing engineering methods and to project a future paradigm. Further, we highlight the relationship between brain cancer and other brain disorders, such as stroke, Alzheimer's, Parkinson's, and Wilson's disease, leukoaraiosis, and other neurological disorders, in the context of machine learning and the deep learning paradigm.

    Micro-, Meso- and Macro-Connectomics of the Brain

    Neurosciences, Neurology

    From nanometers to centimeters: Imaging across spatial scales with smart computer-aided microscopy

    Microscopes have been an invaluable tool throughout the history of the life sciences, as they allow researchers to observe the minuscule details of living systems in space and time. However, modern biology studies complex and non-obvious phenotypes and their distributions in populations, and thus requires that microscopes evolve from visual aids for anecdotal observation into instruments for objective and quantitative measurement. To this end, many cutting-edge developments in microscopy are fuelled by innovations in the computational processing of the generated images. Computational tools can be applied in the early stages of an experiment, where they allow for the reconstruction of images with higher resolution and contrast, or more colors, than the raw data. In the final analysis stage, state-of-the-art image analysis pipelines seek to extract interpretable and humanly tractable information from the high-dimensional space of images. In the work presented in this thesis, I performed super-resolution microscopy and wrote image analysis pipelines to derive quantitative information about multiple biological processes. I contributed to studies on the regulation of DNMT1 by implementing machine learning-based segmentation of replication sites in images, and performed quantitative statistical analysis of the recruitment of multiple DNMT1 mutants. To study the spatiotemporal distribution of the DNA damage response, I performed STED microscopy and could provide a lower bound on the size of the elementary spatial units of DNA repair. In this project, I also wrote image analysis pipelines and performed statistical analysis to show a decoupling of DNA density and heterochromatin marks during repair. More on the experimental side, I helped establish a protocol for many-fold color multiplexing by iterative labelling of diverse structures via DNA hybridization. Turning from small-scale details to the distribution of phenotypes in a population, I wrote a reusable pipeline for fitting models of cell-cycle-stage distributions and inhibition curves to high-throughput measurements, to quickly quantify the effects of innovative antiproliferative antibody-drug conjugates. The main focus of the thesis is BigStitcher, a tool for the management and alignment of terabyte-sized image datasets. Such enormous datasets are nowadays generated routinely with light-sheet microscopy and sample preparation techniques such as clearing or expansion. Their sheer size, high dimensionality, and unique optical properties pose a serious bottleneck for researchers and require specialized processing tools, as the images often do not fit into the main memory of most computers. BigStitcher primarily allows for fast registration of such many-dimensional datasets on conventional hardware using optimized multi-resolution alignment algorithms. The software can also correct a variety of aberrations such as fixed-pattern noise, chromatic shifts, and even complex sample-induced distortions. A defining feature of BigStitcher, as well as of the various image analysis scripts developed in this work, is their interactivity. A central goal was to leverage the user's expertise at key moments and bring innovations from the big-data world to the lab, with its smaller and much more diverse datasets, without replacing scientists with automated black-box pipelines. To this end, BigStitcher was implemented as a user-friendly plug-in for the open-source image processing platform Fiji and provides users with a nearly instantaneous preview of the aligned images and opportunities for manual control of all processing steps. With its powerful features and ease of use, BigStitcher paves the way to the routine application of light-sheet microscopy and other methods producing equally large datasets.
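
    BigStitcher itself is a Java plug-in for Fiji, so the following Python sketch is only a hedged illustration of the core translational registration step in tile stitching: phase correlation between two overlapping tiles. The function name and toy data are assumptions for this example; the actual software additionally performs multi-resolution processing and globally optimizes the alignment across all tiles.

        # Minimal phase-correlation sketch for estimating the translation between
        # two overlapping image tiles (illustrative; not BigStitcher's own code).
        import numpy as np

        def phase_correlation_shift(tile_a: np.ndarray, tile_b: np.ndarray):
            """Return the integer (dy, dx) shift that aligns tile_b to tile_a."""
            fa = np.fft.fft2(tile_a)
            fb = np.fft.fft2(tile_b)
            cross_power = fa * np.conj(fb)
            cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase
            correlation = np.fft.ifft2(cross_power).real
            peak = np.unravel_index(np.argmax(correlation), correlation.shape)
            # Peaks past the midpoint correspond to negative shifts (FFT wrap).
            return tuple(int(p - s) if p > s // 2 else int(p)
                         for p, s in zip(peak, correlation.shape))

        # Toy check: shift an image by (5, -3) and recover the offset.
        rng = np.random.default_rng(0)
        a = rng.random((128, 128))
        b = np.roll(a, shift=(5, -3), axis=(0, 1))
        print(phase_correlation_shift(b, a))   # expected: (5, -3)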

    AI in Medical Imaging Informatics: Current Challenges and Future Directions

    This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today, across the healthcare continuum, for both radiology and digital pathology applications. The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.
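
    As a concrete, hedged illustration of the encoder-decoder segmentation architectures the review points to as the de facto approach, here is a toy U-Net-style model in Python. The reviewed methods vary widely, and this sketch reproduces none of them in particular; all sizes and names are assumptions.

        # Toy U-Net-style segmentation network (illustrative only).
        import torch
        import torch.nn as nn

        def block(cin, cout):
            """Two 3x3 convolutions with ReLU, the basic U-Net building unit."""
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(),
            )

        class TinyUNet(nn.Module):
            """One-level encoder-decoder with a skip connection."""
            def __init__(self, classes=2):
                super().__init__()
                self.enc = block(1, 16)
                self.down = nn.MaxPool2d(2)
                self.mid = block(16, 32)
                self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
                self.dec = block(32, 16)               # 16 skip + 16 upsampled
                self.head = nn.Conv2d(16, classes, 1)  # per-pixel class scores

            def forward(self, x):
                e = self.enc(x)                        # full-resolution features
                m = self.mid(self.down(e))             # half-resolution features
                u = self.up(m)                         # back to full resolution
                return self.head(self.dec(torch.cat([e, u], dim=1)))

        # Example: segment a single 1-channel 64x64 scan into 2 tissue classes.
        logits = TinyUNet()(torch.randn(1, 1, 64, 64))  # shape: (1, 2, 64, 64)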

    Brain Tumor Growth Modelling

    Predicting the growth of glioblastoma tumors is a hard task due to the lack of medical data, which stems from patients' privacy concerns, the cost of collecting large medical datasets, and the limited availability of expert annotations. In this thesis, we study and propose a Synthetic Medical Image Generator (SMIG) that generates synthetic data based on a Generative Adversarial Network in order to provide anonymized data. In addition, to predict glioblastoma multiforme (GBM) tumor growth, we developed a Tumor Growth Predictor (TGP) based on an end-to-end convolutional neural network architecture, which allows training on a public dataset from The Cancer Imaging Archive (TCIA) combined with the generated synthetic data. We also highlight the impact of using synthetic data generated by SMIG as a data augmentation tool. Despite the small size of the TCIA dataset, the obtained results demonstrate valuable tumor growth prediction accuracy.
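
    A minimal, hedged sketch in Python of the adversarial setup a generator like SMIG builds on: a generator maps noise to synthetic patches while a discriminator learns to tell them from real ones. The patch size, layer sizes, and names are assumptions for illustration; the thesis's actual SMIG and TGP architectures are not reproduced here.

        # Minimal GAN skeleton (illustrative assumptions: 64x64 grayscale
        # patches, fully connected layers; not the thesis's architecture).
        import torch
        import torch.nn as nn

        LATENT = 100  # dimensionality of the generator's noise input (assumed)

        generator = nn.Sequential(          # noise -> 64x64 synthetic patch
            nn.Linear(LATENT, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64), nn.Tanh(),
        )
        discriminator = nn.Sequential(      # patch -> probability of being real
            nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

        bce = nn.BCELoss()
        g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
        d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

        def train_step(real_batch: torch.Tensor):
            """One adversarial update on a batch of real patches, flattened to 4096."""
            n = real_batch.size(0)
            fake = generator(torch.randn(n, LATENT))

            # Discriminator: real patches labelled 1, generated patches labelled 0.
            d_opt.zero_grad()
            d_loss = bce(discriminator(real_batch), torch.ones(n, 1)) + \
                     bce(discriminator(fake.detach()), torch.zeros(n, 1))
            d_loss.backward()
            d_opt.step()

            # Generator: try to make the discriminator call its output real.
            g_opt.zero_grad()
            g_loss = bce(discriminator(fake), torch.ones(n, 1))
            g_loss.backward()
            g_opt.step()
            return d_loss.item(), g_loss.item()

    Patches sampled from the trained generator can then be mixed into the growth predictor's training set as anonymized augmentation data, as the abstract describes.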

    Development of novel multimodal light-sheet fluorescence microscopes for in-vivo imaging of vertebrate organisms

    The observation of biological processes in their native environment is of critical importance for the life sciences. While substantial information can be derived from the examination of in-vitro biological samples, in-vivo studies are necessary to reveal the complexity of the dynamics happening in real time within a living organism. Among the possible biological models, vertebrates represent an important family due to the many characteristics they share with the human organism. The development of an embryo, the effect of a drug, the interaction between the immune system and pathogens, and the activities of the cellular machinery are all examples of highly relevant applications requiring in-vivo observation of broadly used vertebrate models such as the zebrafish and the mouse. Appropriate devices have been devised to perform such observations. Fluorescence microscopy is one of the main approaches through which specific sample structures can be detected and registered in high-contrast images. Through micro-injections or transgenic lines, a living specimen can express fluorescence and can be imaged with such microscopes. Various fluorescence microscopy techniques have been developed, such as Widefield Microscopy (WM) and Laser Scanning Confocal Microscopy (LSCM). In WM the entire sample is visualized in a single 2D image, losing depth information, while LSCM can recover the 3D information of the sample but with inherent limitations, such as phototoxicity and limited imaging speed. In the last two decades, Light-Sheet Fluorescence Microscopy (LSFM) emerged as a technique providing fast 3D imaging while minimizing collateral damage to the specimen. However, due to the particular configuration of the microscope's components, LSFM setups are normally optimized for a single application. Sample management is also non-trivial, as controlling the specimen's position and keeping it alive for a long time within the microscope requires dedicated environmental conditioning. In this thesis, I aimed at advancing the imaging flexibility of LSFM, with particular attention to sample management. Combining these aspects enabled novel observations and applications on living vertebrate samples. In Chapter 1, a brief review of the concepts employed within this thesis is presented, also pointing to the main challenges that the thesis aims to solve. In Chapter 2, a new design for multimodal LSFM is presented, which enables performing different experiments with the same instrument. High-throughput studies in particular would benefit from this imaging paradigm, as it combines the need for fast and reproducible mounting of multiple samples with the opportunity to image them in 3D. Additionally, a transportable setup has been implemented based on this design. With these systems, I studied the dynamics of the yolk's microtubule network in zebrafish embryos, describing novel features and underlining the importance of live imaging for obtaining a complete view of the sample's peculiarities. This is described in Chapter 3. Further applications on challenging live samples have been implemented, monitoring macrophage recruitment in zebrafish and the development of mouse embryos. For these applications, described in Chapter 4, I devised specific mounting protocols for the samples, keeping them alive during the imaging sessions.
In Chapter 5, an additional LSFM system is described, which allows recording of the sub-cellular machinery in a living vertebrate sample while avoiding damage to it thanks to the devised sample mounting. Through this, single-molecule microscopy (SMM) studies, normally performed on cultured cells, can be extended to the nuclei of living zebrafish embryos, which better recapitulate the native environment where biological processes take place. Finally, Chapter 6 recapitulates the conclusions, the impact, future integrations, and the experimental procedures that would be enabled by the work summarized in this thesis.