
    Deep Learning for Automatic Pneumonia Detection

    Pneumonia is the leading cause of death among young children and one of the top causes of mortality worldwide. Pneumonia detection is usually performed through examination of chest X-ray radiographs by highly trained specialists. This process is tedious and often leads to disagreement between radiologists. Computer-aided diagnosis systems have shown potential for improving diagnostic accuracy. In this work, we develop a computational approach for detecting pneumonia regions based on single-shot detectors, squeeze-and-excitation deep convolutional neural networks, augmentations, and multi-task learning. The proposed approach was evaluated in the Radiological Society of North America Pneumonia Detection Challenge, achieving one of the best results in the challenge.
    Comment: to appear in CVPR 2020 Workshops proceedings
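The squeeze-and-excitation mechanism named above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation; the channel count, reduction ratio, and random weights are arbitrary assumptions:

```python
import numpy as np

def se_block(feature_map, w1, b1, w2, b2):
    """Squeeze-and-excitation: reweight the channels of a (C, H, W) feature map.

    w1: (C//r, C) and w2: (C, C//r) are the two FC layers of the excitation step,
    where r is the channel-reduction ratio.
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid, giving weights in (0, 1)
    s = np.maximum(w1 @ z + b1, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))
    # Scale: reweight each channel of the input
    return feature_map * s[:, None, None]

# Toy usage: 8 channels, reduction ratio r = 4
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 5, 5))
w1, b1 = rng.standard_normal((2, 8)) * 0.1, np.zeros(2)
w2, b2 = rng.standard_normal((8, 2)) * 0.1, np.zeros(8)
y = se_block(x, w1, b1, w2, b2)
```

Because the sigmoid outputs lie strictly in (0, 1), the block can only attenuate channels, never amplify them; the network learns which channels to suppress.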

    FPGA Implementation of Convolutional Neural Networks with Fixed-Point Calculations

    Neural network-based methods for image processing are becoming widely used in practical applications. Modern neural networks are computationally expensive and require specialized hardware, such as graphics processing units. Since such hardware is not always available in real-life applications, there is a compelling need to design neural networks for mobile devices. Mobile neural networks typically have a reduced number of parameters and require a relatively small number of arithmetic operations. However, they are usually still executed at the software level and use floating-point calculations. The use of mobile networks without further optimization may not provide sufficient performance when high processing speed is required, for example, in real-time video processing (30 frames per second). In this study, we suggest optimizations to speed up computations in order to efficiently use already trained neural networks on a mobile device. Specifically, we propose an approach for speeding up neural networks by moving computation from software to hardware and by using fixed-point calculations instead of floating-point ones. We propose a number of methods for neural network architecture design that improve performance with fixed-point calculations. We also show an example of how existing datasets can be modified and adapted for the recognition task at hand. Finally, we present the design and implementation of a field-programmable gate array-based device to solve the practical problem of real-time handwritten digit classification from a mobile camera video feed.
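The move from floating-point to fixed-point arithmetic can be illustrated with a minimal NumPy sketch. This is a generic quantization scheme, not the paper's hardware design; the bit widths and the Q8.8-style format are arbitrary assumptions:

```python
import numpy as np

def to_fixed_point(x, frac_bits=8, total_bits=16):
    """Quantize a float array to signed fixed-point with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int64)

def fixed_dot(qa, qb, frac_bits=8):
    """Integer-only dot product: products carry 2*frac_bits fractional bits,
    so shift right by frac_bits to return to the working format."""
    acc = np.sum(qa * qb)
    return int(acc >> frac_bits)

# Toy usage: quantize weights and inputs, compute the dot product in integers
w = np.array([0.5, -0.25, 1.5])
x = np.array([1.0, 2.0, -0.5])
qw, qx = to_fixed_point(w), to_fixed_point(x)
approx = fixed_dot(qw, qx) / (1 << 8)   # back to float only for inspection
exact = float(w @ x)                    # 0.5*1 - 0.25*2 + 1.5*(-0.5) = -0.75
```

On an FPGA the shift-and-accumulate maps directly onto integer DSP blocks, which is where the speedup over software floating-point comes from.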

    Angiodysplasia Detection and Localization Using Deep Convolutional Neural Networks

    Accurate detection and localization of angiodysplasia lesions is an important problem in the early-stage diagnostics of gastrointestinal bleeding and anemia. The gold standard for angiodysplasia detection and localization is wireless capsule endoscopy. This pill-like device can produce thousands of sufficiently high-resolution images during one passage through the gastrointestinal tract. In this paper we present our winning solution for the MICCAI 2017 Endoscopic Vision SubChallenge: Angiodysplasia Detection and Localization, along with further improvements over the state-of-the-art results, using several novel deep neural network architectures. It addresses the binary segmentation problem, where every pixel in an image is labeled as angiodysplasia lesion or background. We then analyze the connected components of each predicted mask. Based on this analysis we developed a classifier that predicts the presence of angiodysplasia lesions (a binary variable) and a detector for their localization (the center of a component). In this setting, our approach outperforms other methods in every task subcategory for angiodysplasia detection and localization, thereby providing state-of-the-art results for these problems. The source code for our solution is publicly available at https://github.com/ternaus/angiodysplasia-segmentatio
    Comment: 12 pages, 6 figures
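The post-processing step described here, connected-component analysis of a predicted binary mask yielding a presence classifier and per-component centers, can be sketched as follows. This is a hand-rolled illustration; `min_size` is a hypothetical filtering threshold, not the authors' setting:

```python
import numpy as np

def connected_components(mask):
    """Label 4-connected components of a binary mask; return a list of pixel lists."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    comps = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, pixels = [(i, j)], []
                seen[i, j] = True
                while stack:                     # iterative flood fill
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                comps.append(pixels)
    return comps

def detect(mask, min_size=2):
    """Classifier: is a lesion present? Detector: centre of each kept component."""
    comps = [c for c in connected_components(mask) if len(c) >= min_size]
    centres = [tuple(np.mean(c, axis=0)) for c in comps]
    return len(comps) > 0, centres

# Toy usage: a 6x6 mask containing one 2x2 "lesion"
mask = np.zeros((6, 6), dtype=int)
mask[1:3, 1:3] = 1
present, centres = detect(mask)
```

Dropping components below a minimum size is a common way to suppress single-pixel false positives from the segmentation network.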

    Hypothesis: Caco‐2 cell rotational 3D mechanogenomic turing patterns have clinical implications to colon crypts

    Colon crypts are recognized as a mechanical and biochemical Turing patterning model. A colon epithelial Caco-2 cell monolayer demonstrated 2D Turing patterns via force analysis of apical tight junction live-cell imaging, which illuminated the actomyosin meshwork linking the actomyosin networks of individual cells. Actomyosin forces act in a mechanobiological manner that alters cell/nucleus/tissue morphology. We observed rotational motion of the nucleus in Caco-2 cells that appears to be driven by actomyosin during the formation of a differentiated confluent epithelium. Single- to multi-cell ring/torus-shaped genomes were observed prior to complex fractal Turing patterns extending from a rotating torus centre in a spiral pattern consistent with a gene morphogen motif. These features may contribute to the well-described differentiation from stem cells at the crypt base to the luminal colon epithelium along the crypt axis. This observation may be useful for studying the role of mechanogenomic processes and the underlying molecular mechanisms as determinants of cellular and tissue architecture in space and time, which is the focal point of the 4D Nucleome initiative. Mathematical and bioengineering modelling of gene circuits and cell shapes may provide a powerful algorithm that will contribute to future precision medicine relevant to a number of common medical disorders.
    Peer Reviewed
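The Turing patterning invoked above arises from reaction-diffusion dynamics, which can be simulated in a few lines. This is a textbook 1D Gray-Scott model, not the mechanogenomic model of the paper; all parameters are standard demonstration values:

```python
import numpy as np

def gray_scott_1d(n=100, steps=5000, f=0.04, k=0.06, du=0.16, dv=0.08, seed=0):
    """1D Gray-Scott reaction-diffusion on a ring -- a classic Turing-pattern system."""
    rng = np.random.default_rng(seed)
    u = np.ones(n)                      # substrate concentration
    v = np.zeros(n)                     # activator concentration
    mid = slice(n // 2 - 5, n // 2 + 5)
    u[mid], v[mid] = 0.5, 0.25          # local perturbation seeds the pattern
    v += 0.01 * rng.random(n)           # small noise breaks symmetry
    for _ in range(steps):
        # Discrete Laplacian with periodic boundaries (the "ring")
        lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u
        lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
        uvv = u * v * v
        u += du * lap_u - uvv + f * (1 - u)
        v += dv * lap_v + uvv - (f + k) * v
    return u, v

u, v = gray_scott_1d()
```

Diffusion-driven instability requires the two species to diffuse at different rates (here du > dv), which is the defining ingredient of a Turing mechanism.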

    Reproducible image-based profiling with Pycytominer

    Technological advances in high-throughput microscopy have facilitated the acquisition of cell images at a rapid pace, and data pipelines can now extract and process thousands of image-based features from microscopy images. These features represent valuable single-cell phenotypes that contain information about cell state and biological processes. The use of these features for biological discovery is known as image-based or morphological profiling. However, these raw features need processing before use, and image-based profiling lacks scalable and reproducible open-source software. Inconsistent processing across studies makes it difficult to compare datasets and processing steps, further delaying the development of optimal pipelines, methods, and analyses. To address these issues, we present Pycytominer, an open-source software package with a vibrant community that establishes an image-based profiling standard. Pycytominer has a simple, user-friendly Application Programming Interface (API) that implements image-based profiling functions for processing high-dimensional morphological features extracted from microscopy images of cells. Establishing Pycytominer as a standard image-based profiling toolkit ensures consistent data processing pipelines with data provenance, minimizing potential inconsistencies and enabling researchers to confidently derive accurate conclusions and discover novel insights from their data, thus driving progress in our field.
    Comment: 13 pages, 4 figures
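Two core processing steps of image-based profiling, collapsing single-cell features into per-sample profiles and normalizing them, can be sketched in plain pandas. This is a hand-rolled illustration of the idea, not Pycytominer's actual API; the column names and values are hypothetical:

```python
import pandas as pd

# Hypothetical single-cell feature table: one row per cell
cells = pd.DataFrame({
    "well":      ["A01", "A01", "A02", "A02"],
    "AreaShape": [100.0, 110.0, 200.0, 190.0],
    "Intensity": [0.5, 0.7, 0.9, 1.1],
})

# Aggregate: median-collapse single cells into one profile per well
profiles = cells.groupby("well").median().reset_index()

# Normalize: z-score each feature across profiles
features = ["AreaShape", "Intensity"]
profiles[features] = (
    profiles[features] - profiles[features].mean()
) / profiles[features].std()
```

Standardizing exactly these kinds of steps (aggregation method, normalization scope, feature selection) is what makes profiles comparable across studies.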

    SOCRAT: A Dynamic Web Toolbox for Interactive Data Processing, Analysis and Visualization

    Many systems for exploratory and visual data analytics require platform-dependent software installation, coding skills, and analytical expertise. Rapid advances in data acquisition, web-based information, and communication and computation technologies have promoted the explosive growth of online services and tools implementing novel solutions for interactive data exploration and visualization. However, web-based solutions for visual analytics remain scattered and relatively problem-specific. This leads to per-case re-implementations of common components, system architectures, and user interfaces, rather than a focus on innovation and building sophisticated applications for visual analytics. In this paper, we present the Statistics Online Computational Resource Analytical Toolbox (SOCRAT), a dynamic, flexible, and extensible web-based visual analytics framework. The SOCRAT platform is designed and implemented using multi-level modularity and declarative specifications, which enables easy integration of components for data management, analysis, and visualization. SOCRAT benefits from the diverse landscape of existing in-browser solutions by combining them with flexible template modules into a unique, powerful, and feature-rich visual analytics toolbox. The platform integrates a number of independently developed tools for data import, display, storage, interactive visualization, statistical analysis, and machine learning. Various use cases demonstrate the unique features of SOCRAT for visual and statistical analysis of heterogeneous types of data.

    Breast Tumor Cellularity Assessment Using Deep Neural Networks

    © 2019 IEEE. Breast cancer is one of the main causes of death worldwide. Histopathological cellularity assessment of residual tumors in post-surgical tissues is used to analyze a tumor's response to a therapy. Correct cellularity assessment increases the chances of getting an appropriate treatment and facilitates the patient's survival. In current clinical practice, tumor cellularity is manually estimated by pathologists; this process is tedious and prone to errors or low agreement rates between assessors. In this work, we evaluated three strong novel Deep Learning-based approaches for automatic assessment of tumor cellularity from post-treated breast surgical specimens stained with hematoxylin and eosin. We validated the proposed methods on the BreastPathQ SPIE challenge dataset, which consisted of 2395 image patches selected from whole-slide images acquired from 64 patients. Compared to expert pathologist scoring, our best-performing method yielded a Cohen's kappa coefficient of 0.69 (vs. 0.42 previously reported in the literature) and an intra-class correlation coefficient of 0.89 (vs. 0.83). Our results suggest that Deep Learning-based methods have significant potential to alleviate the burden on pathologists, enhance the diagnostic workflow, and thereby facilitate better clinical outcomes in breast cancer treatment.
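Cohen's kappa, the headline agreement metric here, corrects the raw agreement rate between two raters for agreement expected by chance. A toy NumPy implementation follows (the ratings are made up for illustration, not data from the study):

```python
import numpy as np

def cohens_kappa(a, b, n_classes):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    a, b = np.asarray(a), np.asarray(b)
    conf = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):                      # build the confusion matrix
        conf[i, j] += 1
    n = conf.sum()
    p_obs = np.trace(conf) / n                  # observed agreement
    p_exp = (conf.sum(0) @ conf.sum(1)) / n**2  # chance agreement from marginals
    return (p_obs - p_exp) / (1 - p_exp)

# Toy example: two raters scoring 10 cases into 3 cellularity bins
r1 = [0, 0, 1, 1, 2, 2, 0, 1, 2, 1]
r2 = [0, 0, 1, 2, 2, 2, 0, 1, 2, 0]
kappa = cohens_kappa(r1, r2, 3)   # raw agreement 0.8, chance 0.32 -> kappa ~0.706
```

A kappa of 0.69, as reported here, is conventionally read as substantial agreement, while the raw percent agreement alone would overstate it.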
