
    Detecting cells and analyzing their behaviors in microscopy images using deep neural networks

    “Computer-aided analysis of medical images has attracted considerable attention over the past decade. The goal of computer-vision-based medical image analysis is to provide automated tools that relieve the burden on human experts such as radiologists and physicians. More specifically, these computer-aided methods help identify, classify, and quantify patterns in medical images. Recent advances in machine learning, specifically in deep learning, have substantially boosted the performance of various medical applications. The fundamental core of these advances is the exploitation of hierarchical feature representations learned by deep models, instead of handcrafted features based on domain-specific knowledge. In the work presented in this dissertation, we are particularly interested in exploring the power of deep neural networks for Circulating Tumor Cell detection and mitosis event detection. We introduce Convolutional Neural Networks and a designed training methodology for Circulating Tumor Cell detection, as well as a Hierarchical Convolutional Neural Network model and a Two-Stream Bidirectional Long Short-Term Memory model for mitosis event detection and stage localization in phase-contrast microscopy images”--Abstract, page iii

    Ethyl methanesulfonate mutant library construction in Neopyropia yezoensis to provide germplasm resources for next-generation genome-selection breeding

    With the development of the laver industry, germplasm depression has become a serious issue, and current cultivars cannot adapt to different aquaculture regions. To increase genetic diversity and develop more germplasm sources, it is urgent and reasonable to construct a mutant library containing new germplasms. In this research, a mutant library was constructed from ethyl methanesulfonate (EMS)-mutagenized archeospores, and the optimal treatment procedure was determined by testing different mutagen concentrations and treatment times; the optimum was 2.25% EMS for 30 min. A total of 1860 haploid thalli were produced as the M1 mutant population and further cultured into conchocelis clones for the preservation of germplasm resources. Among these, 667 individual thalli were evaluated for their phenotypic traits, including thallus length, thallus width, length/width ratio, thallus shape, photosynthetic ability, thallus color, thallus margin, and specific growth rate (SGR). The mutation frequency of the length/width ratio was 17.39%; the frequencies for Fv/Fm and NPQ were 21.84% and 29.35%, respectively; and the frequency for SGR was 13.59%. The mutation frequency of thallus color was 0.91%. This work may not only provide a practical reference guide for EMS-based mutant library construction in other seaweeds but, more importantly, also serve as a valuable resource for functional genomics research and laver breeding.
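    The mutation frequencies above are simple proportions of the evaluated population. A minimal sketch of the arithmetic (the function name and the example counts are illustrative assumptions, not from the paper):

```python
def mutation_frequency(n_mutants, n_evaluated):
    """Percentage of evaluated thalli whose trait deviates from wild type."""
    if n_evaluated == 0:
        raise ValueError("no thalli evaluated")
    return 100.0 * n_mutants / n_evaluated

# With 667 thalli evaluated, a 17.39% frequency for the length/width
# ratio corresponds to roughly 116 mutant individuals.
```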

    A Hierarchical Convolutional Neural Network for Mitosis Detection in Phase-Contrast Microscopy Images

    We propose a Hierarchical Convolutional Neural Network (HCNN) for mitosis event detection in time-lapse phase-contrast microscopy. Our method contains two stages: first, we extract candidate spatio-temporal patch sequences from the input image sequences that potentially contain mitosis events; then, we identify whether each patch sequence contains a mitosis event using a hierarchical convolutional neural network. In the experiments, we validate the design of our proposed architecture and evaluate the mitosis event detection performance. Our method achieves 99.1% precision and 97.2% recall on very challenging image sequences of multipolar-shaped C3H10T1/2 mesenchymal stem cells and outperforms other state-of-the-art methods. Furthermore, the proposed method does not depend on hand-crafted feature design or cell tracking, so it can be straightforwardly adapted to event detection for other cell types.
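    The two-stage design can be sketched in outline. This is a simplified stand-in, assuming threshold-based temporal-change candidate extraction and an arbitrary scoring callable in place of the actual HCNN:

```python
def extract_candidates(frames, patch=3, thresh=4):
    """Stage 1: slide a window over consecutive frames and keep
    spatio-temporal patches whose total temporal change exceeds a
    threshold (a stand-in for the paper's candidate extraction).
    frames is a list of 2-D intensity grids (lists of lists)."""
    candidates = []
    for t in range(len(frames) - 1):
        for r in range(len(frames[0]) - patch + 1):
            for c in range(len(frames[0][0]) - patch + 1):
                diff = sum(abs(frames[t + 1][r + i][c + j] - frames[t][r + i][c + j])
                           for i in range(patch) for j in range(patch))
                if diff >= thresh:
                    candidates.append((t, r, c))
    return candidates

def classify(candidate, scorer):
    """Stage 2: a classifier (the HCNN in the paper; here any callable
    returning a mitosis probability) accepts or rejects each candidate."""
    return scorer(candidate) >= 0.5
```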

    Training a Scene-Specific Pedestrian Detector using Tracklets

    A generic pedestrian detector trained on generic datasets cannot handle all the variations across different scenes, so its performance may fall short of a scene-specific detector. In this paper, we propose a new approach to automatically train scene-specific pedestrian detectors based on tracklets (chains of tracked samples). First, a generic pedestrian detector is applied to the specific scene, which generates many false positives and missed detections. Second, we treat multi-pedestrian tracking as a data association problem and link detected samples into tracklets. Third, tracklet features are extracted to label tracklets as positive, negative, or uncertain, and uncertain tracklets are further labeled by comparing them with the positive and negative pools. By using tracklets, we extract more reliable features than from individual samples, and the informative uncertain samples near the classification boundary are labeled by label propagation within individual tracklets and among different tracklets. The labeled samples from the specific scene are combined with generic datasets to train scene-specific detectors. We test the proposed approach on three datasets. Our approach outperforms the state-of-the-art scene-specific detector and shows its effectiveness in adapting to specific scenes without human annotations.
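    Labeling an uncertain tracklet by comparison with the positive and negative pools can be sketched as a nearest-centroid decision. This is an illustrative simplification, assuming plain feature vectors; the paper's actual features and propagation scheme are richer:

```python
def label_tracklet(tracklet, pos_pool, neg_pool):
    """Assign an uncertain tracklet the label of the closer pool,
    comparing the tracklet's mean feature with each pool's centroid
    (a simplified stand-in for the paper's label propagation)."""
    def centroid(samples):
        n = len(samples)
        return [sum(s[d] for s in samples) / n for d in range(len(samples[0]))]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    mean = centroid(tracklet)
    d_pos = dist(mean, centroid(pos_pool))
    d_neg = dist(mean, centroid(neg_pool))
    return "positive" if d_pos < d_neg else "negative"
```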

    Who Missed the Class? -- Unifying Multi-Face Detection, Tracking and Recognition in Videos

    We investigate the problem of checking class attendance by detecting, tracking, and recognizing multiple student faces in classroom videos taken by instructors. Instead of recognizing each individual face independently, we first perform multi-object tracking to associate detected faces (including false positives) into face tracklets (each tracklet contains multiple instances of the same individual, with variations in pose, illumination, etc.) and then cluster the face instances in each tracklet into a small number of clusters, achieving a sparse face representation with less redundancy. We then formulate a unified optimization problem to (a) identify false-positive face tracklets; (b) link broken face tracklets that belong to the same person but were separated by long occlusions; and (c) recognize the group of faces simultaneously under spatial and temporal context constraints in the video. We test the proposed method on the Honda/UCSD database and in real classroom scenarios. The high recognition performance achieved by recognizing a group of multi-instance tracklets simultaneously demonstrates that multi-face recognition is more accurate than recognizing each face independently.
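    Constraint (b), linking face tracklets broken by occlusion, can be sketched as a greedy merge of temporally close, visually similar tracklets. This is an illustrative simplification of the joint optimization; the gap and similarity thresholds are assumptions:

```python
def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def link_broken_tracklets(tracklets, max_gap=30, sim_thresh=0.8):
    """Greedily link tracklets separated by a short temporal gap whose
    appearance features agree.  Each tracklet is a tuple
    (start_frame, end_frame, feature)."""
    tracklets = sorted(tracklets)
    merged = []
    for t in tracklets:
        if merged:
            s, e, f = merged[-1]
            gap = t[0] - e
            if 0 < gap <= max_gap and cosine(f, t[2]) >= sim_thresh:
                merged[-1] = (s, t[1], f)  # extend the previous tracklet
                continue
        merged.append(t)
    return merged
```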

    Cell Mitosis Event Analysis in Phase Contrast Microscopy Images using Deep Learning

    In this paper, we solve the problem of mitosis event localization and stage localization in time-lapse phase-contrast microscopy images. Our method contains three steps: first, we formulate a Low-Rank Matrix Recovery (LRMR) model to find salient regions in microscopy images and extract candidate patch sequences that potentially contain mitosis events; second, we classify each candidate patch sequence with our proposed Hierarchical Convolutional Neural Network (HCNN) using visual appearance and motion cues; third, for the detected mitosis sequences, we further segment them into four temporal stages with our proposed Two-Stream Bidirectional Long Short-Term Memory (TS-BLSTM). In the experiments, we validate our system (LRMR, HCNN, and TS-BLSTM) and evaluate the mitosis event localization and stage localization performance. The proposed method outperforms the state of the art, achieving 99.2% precision and 98.0% recall for mitosis event localization and an average error of 0.62 frames for mitosis stage localization on five challenging image sequences.
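    The 0.62-frame average error for stage localization suggests a metric of the following shape; the exact definition is an assumption, sketched here as the mean absolute difference between predicted and annotated stage-transition frames:

```python
def mean_frame_error(predicted, ground_truth):
    """Average absolute difference, in frames, between predicted and
    annotated stage-transition times (illustrative metric; the names
    and exact definition are assumptions, not from the paper)."""
    if len(predicted) != len(ground_truth):
        raise ValueError("sequences must have equal length")
    return sum(abs(p - g) for p, g in zip(predicted, ground_truth)) / len(predicted)
```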

    Iteratively Training Classifiers for Circulating Tumor Cell Detection

    The number of Circulating Tumor Cells (CTCs) in blood provides an indication of disease progression and tumor response to chemotherapeutic agents. Hence, routine detection and enumeration of CTCs in clinical blood samples have significant applications in early cancer diagnosis and treatment monitoring. In this paper, we investigate two classifiers for image-based CTC detection: (1) a Support Vector Machine (SVM) with hand-crafted Histograms of Oriented Gradients (HoG) features; and (2) a Convolutional Neural Network (CNN) with automatically learned features. For both classifiers, we present an effective and efficient training algorithm, by which the most representative negative samples are iteratively collected to accurately define the classification boundary between positive and negative samples. The two iteratively trained classifiers are validated on a challenging dataset and achieve high performance.
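    The iterative collection of representative negatives can be sketched as a hard-negative mining loop. This is a generic sketch, assuming placeholder `train_fn`/`classify_fn` callables rather than the paper's SVM or CNN:

```python
def train_iteratively(train_fn, classify_fn, positives, negative_pool,
                      rounds=3, seed_size=10):
    """Start with a small negative set, then in each round add the pool
    negatives the current model misclassifies (false positives near the
    boundary are the most representative ones).  train_fn(pos, neg)
    returns a model; classify_fn(model, x) returns True for 'positive'."""
    negatives = list(negative_pool[:seed_size])
    model = train_fn(positives, negatives)
    for _ in range(rounds):
        hard = [n for n in negative_pool
                if classify_fn(model, n) and n not in negatives]
        if not hard:
            break  # no remaining false positives: boundary is settled
        negatives.extend(hard)
        model = train_fn(positives, negatives)
    return model
```

With a toy one-dimensional "classifier" (a decision threshold midway between class means), the loop shifts the threshold toward the hard negatives it discovers.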

    A Deep Convolutional Neural Network Trained on Representative Samples for Circulating Tumor Cell Detection

    The number of Circulating Tumor Cells (CTCs) in blood indicates the tumor response to chemotherapeutic agents and disease progression. Routine detection and enumeration of CTCs in clinical blood samples therefore have significant applications in early cancer diagnosis and treatment monitoring. In this paper, we design a Deep Convolutional Neural Network (DCNN) with automatically learned features for image-based CTC detection. We also present an effective training methodology that finds the most representative training samples to define the classification boundary between positive and negative samples. In the experiments, we compare the performance of the features automatically learned by the DCNN with hand-crafted features; the DCNN outperforms the hand-crafted features. We also show that the proposed training methodology is effective in improving the performance of DCNN classifiers.