3,750 research outputs found

    Recognizing white blood cells with local image descriptors

    Automatic and reliable classification of images of white blood cells is desirable for inexpensive, quick and accurate health diagnosis worldwide. In contrast to previous approaches, which tend to rely on image segmentation and a careful choice of ad hoc (geometric) features, we explore the possibilities of local image descriptors: they are a simple approach, require no explicit segmentation, and have been shown to be quite robust against background distraction in a number of visual tasks. Despite its potential, this methodology remains unexplored for this problem. In this work, images are therefore characterized with the well-known visual bag-of-words approach. Three keypoint detectors and five regular sampling strategies are studied and compared. The results are encouraging: both the sparse keypoint detectors and the dense regular sampling strategies perform reasonably well (mean accuracies of about 80% are obtained) and are competitive with segmentation-based approaches. Two of the main findings are as follows. First, for sparse points, the detector that localizes keypoints on the cell contour (oFAST) performs somewhat better than the other two (SIFT and CenSurE). Second, interestingly, and partly contrary to our expectations, the regular sampling strategies including hierarchical spatial information, multi-resolution encoding, or foveal-like sampling clearly outperform the two simpler uniform-sampling strategies considered. From the broader perspective of expert and intelligent systems, the relevance of the proposed approach is that, being very general and problem-agnostic, it makes it unnecessary to elicit human expertise in the form of explicit visual cues; only the cell-type labels are required from human domain experts.
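    The bag-of-visual-words pipeline this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's code: synthetic random vectors stand in for real SIFT/oFAST/CenSurE descriptors, the vocabulary is built with a short k-means loop, and each image becomes a normalized histogram of visual-word assignments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for local descriptors (e.g. 128-D SIFT vectors) extracted at
# keypoints or on a dense grid; a real pipeline would get these from a
# detector such as oFAST, SIFT, or CenSurE.
def extract_descriptors(n_points, dim=128):
    return rng.normal(size=(n_points, dim))

# Build a visual vocabulary by k-means clustering of training descriptors.
def build_vocabulary(descriptors, k=16, iters=10):
    centroids = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(descriptors[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

# Encode one image as a normalized histogram of visual-word assignments;
# this histogram is what a classifier would consume.
def bow_histogram(descriptors, vocabulary):
    d = np.linalg.norm(descriptors[:, None] - vocabulary[None], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

training_desc = extract_descriptors(500)
vocab = build_vocabulary(training_desc, k=16)
image_hist = bow_histogram(extract_descriptors(80), vocab)
```

The same encoding works whether the descriptors come from sparse keypoints or dense regular sampling, which is what lets the paper compare the two families directly.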

    Organs on chip approach: A tool to evaluate cancer-immune cells interactions

    In this paper we discuss the applicability of numerical descriptors and statistical physics concepts to characterize complex biological systems observed at the microscopic level through an organ-on-chip approach. To this end, we employ data collected on a microfluidic platform in which leukocytes can move through suitably built channels toward their target. Leukocyte behavior is recorded by standard time-lapse imaging. In particular, we analyze three groups of human peripheral blood mononuclear cells (PBMC): heterozygous mutants (in which only one copy of the FPR1 gene is normal), homozygous mutants (in which both alleles encoding FPR1 are loss-of-function variants) and cells from ‘wild type’ donors (with normal expression of FPR1). We characterize the migration of these cells, providing a quantitative confirmation of the essential role of FPR1 in the cancer chemotherapy response. Indeed, wild-type PBMC perform biased random walks toward chemotherapy-treated cancer cells, establishing persistent interactions with them. Conversely, heterozygous mutants present a weaker bias in their motion and homozygous mutants perform rather uncorrelated random walks, both failing to engage with their targets. We next focus on wild-type cells and study the interactions of leukocytes with cancerous cells, developing a novel heuristic procedure inspired by Lyapunov stability in dynamical systems.
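    The contrast between biased and uncorrelated random walks can be quantified with a simple chemotactic index (net displacement toward the target divided by total path length). The sketch below is our own toy model, not the paper's procedure: the drift strength `bias` plays the role of FPR1-dependent chemotactic sensing.

```python
import numpy as np

rng = np.random.default_rng(1)

TARGET = np.array([1000.0, 0.0])  # a distant chemoattractant source

# Simulate a 2-D random walk with a tunable drift toward the target.
# bias > 0 mimics wild-type chemotaxis; bias = 0 gives the uncorrelated
# walk described for homozygous mutants.
def simulate_walk(n_steps, bias):
    pos = np.zeros((n_steps + 1, 2))
    for t in range(n_steps):
        direction = TARGET - pos[t]
        direction /= np.linalg.norm(direction)
        pos[t + 1] = pos[t] + rng.normal(size=2) + bias * direction
    return pos

# Chemotactic index: 1 means a straight run at the target, ~0 an unbiased walk.
def chemotactic_index(path):
    toward = TARGET - path[0]
    toward /= np.linalg.norm(toward)
    net = (path[-1] - path[0]) @ toward
    total = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()
    return net / total

wild_type = chemotactic_index(simulate_walk(400, bias=0.5))
mutant = chemotactic_index(simulate_walk(400, bias=0.0))
```

Comparing `wild_type` and `mutant` reproduces the qualitative ordering reported in the abstract: the biased walker scores well above zero, the unbiased one near zero.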

    Prospects for Theranostics in Neurosurgical Imaging: Empowering Confocal Laser Endomicroscopy Diagnostics via Deep Learning

    Confocal laser endomicroscopy (CLE) is an advanced optical fluorescence imaging technology that has the potential to increase intraoperative precision, extend resection, and tailor surgery for malignant invasive brain tumors because of its subcellular dimension resolution. Despite its promising diagnostic potential, interpreting the gray-tone fluorescence images can be difficult for untrained users. In this review, we provide a detailed description of a bioinformatical analysis methodology for CLE images that begins to assist the neurosurgeon and pathologist in rapidly connecting on-the-fly intraoperative imaging, pathology, and surgical observation into a conclusionary system within the concept of theranostics. We present an overview and discuss deep learning models for automatic detection of diagnostic CLE images, and discuss the effect of various training regimes and ensemble modeling on the power of deep learning predictive models. Two major approaches reviewed in this paper include models that can automatically classify CLE images into diagnostic/nondiagnostic, glioma/nonglioma, or tumor/injury/normal categories, and models that can localize histological features on the CLE images using weakly supervised methods. We also briefly review advances in the deep learning approaches used for CLE image analysis in other organs. Significant advances in the speed and precision of automated diagnostic frame selection would augment the diagnostic potential of CLE and improve operative workflow and integration into brain tumor surgery. Such technology and bioinformatics analytics lend themselves to improved precision, personalization, and theranostics in brain tumor treatment.
    Comment: See the final version published in Frontiers in Oncology here: https://www.frontiersin.org/articles/10.3389/fonc.2018.00240/ful
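    The ensemble-modeling idea mentioned above reduces, in its simplest form, to soft voting: averaging per-model class probabilities and taking the argmax. The snippet below is a generic illustration with fixed toy probabilities standing in for the outputs of trained networks; it is not drawn from the reviewed papers.

```python
import numpy as np

# Toy ensemble of three "models": each row gives the class probabilities
# (e.g. diagnostic vs. nondiagnostic) that one model assigns to two frames.
model_probs = [
    np.array([[0.9, 0.1], [0.4, 0.6]]),
    np.array([[0.8, 0.2], [0.3, 0.7]]),
    np.array([[0.7, 0.3], [0.6, 0.4]]),
]

# Soft-voting ensemble: average the per-model probabilities, then take the
# argmax as the ensemble prediction for each frame.
ensemble = np.mean(model_probs, axis=0)
predictions = ensemble.argmax(axis=1)
```

Averaging probabilities (rather than hard votes) lets a confident model outvote two uncertain ones, which is one reason ensembles often sharpen diagnostic frame selection.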

    Improved White Blood Cells Classification based on Pre-trained Deep Learning Models

    Leukocytes, or white blood cells (WBCs), are microscopic cells that fight infectious disease, bacteria, viruses, and other pathogens. The manual method to classify and count WBCs is tedious, time-consuming and may produce inaccurate results, whereas automated methods are costly. The objective of this work is to automatically identify and classify WBCs in a microscopic image into four types with higher accuracy. The dataset used in this study is BCCD, a scaled-down blood cell detection dataset. BCCD is first pre-processed through several steps such as segmentation and augmentation, then passed to the proposed model. Our model combines the strength of deep models in automatically extracting features with the higher classification accuracy of traditional machine learning classifiers. The proposed model consists of two main layers: a shallow-tuned pre-trained model and a traditional machine learning classifier on top of it. Ten different pre-trained models with six different machine learning classifiers are used in this study. Moreover, the fully connected network (FCN) of the pre-trained models is used as a baseline classifier for comparison. The evaluation shows that the hybrid of MobileNet-224 as feature extractor with logistic regression as classifier achieves the highest rank-1 accuracy, 97.03%. The proposed hybrid model also outperformed the baseline FCN by 25.78% on average.
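    The hybrid architecture described here amounts to freezing a CNN as a feature extractor and training a classical classifier on its outputs. A minimal sketch, with Gaussian blobs standing in for MobileNet-224 features and a hand-rolled logistic-regression head in place of a library classifier (all parameter choices below are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for deep features: in the paper these come from a frozen
# pre-trained CNN such as MobileNet-224; here we draw two separable
# Gaussian blobs so the classifier head has something to learn.
n, dim = 200, 32
features = np.vstack([rng.normal(-1.0, 1.0, (n // 2, dim)),
                      rng.normal(+1.0, 1.0, (n // 2, dim))])
labels = np.repeat([0, 1], n // 2)

# Minimal logistic-regression head trained with batch gradient descent,
# playing the role of the "traditional classifier on top".
w, b = np.zeros(dim), 0.0
for _ in range(300):
    z = np.clip(features @ w + b, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.5 * (features.T @ (p - labels)) / n
    b -= 0.5 * (p - labels).mean()

z = np.clip(features @ w + b, -30, 30)
preds = (1.0 / (1.0 + np.exp(-z)) > 0.5).astype(int)
accuracy = (preds == labels).mean()
```

Training only the small head is what makes the hybrid cheap: the expensive feature extractor is reused as-is, and only a linear model is fit per task.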

    Random subwindows and extremely randomized trees for image classification in cell biology

    Background: With the improvements in biosensors and high-throughput image acquisition technologies, life science laboratories are able to perform an increasing number of experiments that involve the generation of a large number of images at different imaging modalities/scales. This stresses the need for computer vision methods that automate image classification tasks. Results: We illustrate the potential of our image classification method in cell biology by evaluating it on four datasets of images related to protein distributions or subcellular localizations, and red blood cell shapes. Accuracy results are quite good without any specific pre-processing or incorporation of domain knowledge. The method is implemented in Java and available upon request for evaluation and research purposes. Conclusion: Our method is directly applicable to any image classification problem. We foresee the use of this automatic approach as a baseline method and first try on various biological image classification problems.
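    The core data preparation step in this method is extracting many random subwindows of random sizes and rescaling them to a fixed side before they are voted on by an ensemble of extremely randomized trees. A sketch of that extraction step (our own simplified version, using nearest-neighbour rescaling; the tree ensemble itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Extract random square subwindows of random sizes from an image and rescale
# each to a fixed out_side x out_side patch. In the full method, the rescaled
# patches are classified by extremely randomized trees and their votes are
# aggregated per image.
def random_subwindows(image, n_windows, out_side=8):
    h, w = image.shape
    windows = []
    for _ in range(n_windows):
        side = rng.integers(out_side, min(h, w) + 1)
        y = rng.integers(0, h - side + 1)
        x = rng.integers(0, w - side + 1)
        patch = image[y:y + side, x:x + side]
        # Nearest-neighbour rescale: pick evenly spaced rows and columns.
        idx = np.arange(out_side) * side // out_side
        windows.append(patch[np.ix_(idx, idx)])
    return np.stack(windows)

image = rng.random((64, 64))
wins = random_subwindows(image, n_windows=10)
```

Because subwindows are drawn at many positions and scales, the classifier sees translated and rescaled views of the same content, which is what gives the method its robustness without task-specific pre-processing.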

    Towards Developing Computer Vision Algorithms and Architectures for Real-world Applications

    Computer vision technology automatically extracts high-level, meaningful information from visual data such as images or videos, and object recognition and detection algorithms are essential in most computer vision applications. In this dissertation, we focus on developing algorithms for real-life computer vision applications, presenting innovative algorithms for object segmentation and feature extraction for object and action recognition in video data, sparse feature selection algorithms for medical image analysis, and automated feature extraction using convolutional neural networks for blood cancer grading. To detect and classify objects in video, the objects have to be separated from the background, and then discriminant features are extracted from the region of interest before being fed to a classifier. Effective object segmentation and feature extraction are often application specific, and pose major challenges for object detection and classification tasks. In this dissertation, we present an effective flow-based ROI generation algorithm for segmenting moving objects in video data, which can be applied in surveillance and self-driving vehicle areas. Optical flow can also be used as a feature in human action recognition, and we present the use of optical flow features in a pre-trained convolutional neural network to improve the performance of human action recognition algorithms. Both algorithms outperformed the state of the art at their time. Medical images and videos pose unique challenges for image understanding, mainly because tissues and cells are often irregularly shaped, colored, and textured, and hand-selecting the most discriminant features is often difficult; thus an automated feature selection method is desired. Sparse learning is a technique to extract the most discriminant and representative features from raw visual data.
    However, sparse learning with L1 regularization only takes sparsity in the feature dimension into consideration; we improve the algorithm so that it also selects the type of features: less important or noisy feature types are entirely removed from the feature set. We demonstrate this algorithm on endoscopy images to detect unhealthy abnormalities in the esophagus and stomach, such as ulcers and cancer. Besides the sparsity constraint, other application-specific constraints and prior knowledge may also need to be incorporated into the loss function of sparse learning to obtain the desired results. We demonstrate how to incorporate a similar-inhibition constraint and gaze and attention priors in sparse dictionary selection for gastroscopic video summarization, enabling intelligent key frame extraction from gastroscopic video data. With recent advances in multi-layer neural networks, automatic end-to-end feature learning has become feasible. Convolutional neural networks mimic the mammalian visual cortex and can extract the most discriminant features automatically from training samples. We present the use of a convolutional neural network with a hierarchical classifier to grade the severity of follicular lymphoma, a type of blood cancer; it reaches 91% accuracy, on par with analysis by expert pathologists. Developing real-world computer vision applications is more than developing core vision algorithms to extract and understand information from visual data; it is also subject to many practical requirements and constraints, such as hardware and computing infrastructure, cost, robustness to lighting changes and deformation, and ease of use and deployment. The general processing pipelines and system architectures of computer vision based applications share many similar design principles.
    We developed common processing components and a generic framework for computer vision applications, and a versatile scale-adaptive template matching algorithm for object detection. We demonstrate the design principles and best practices by developing and deploying a complete computer vision application in real life, building a multi-channel water level monitoring system, where the techniques and design methodology can be generalized to other real-life applications. General software engineering principles, such as modularity, abstraction, robustness to requirement changes, and generality, are all demonstrated in this research.
    Dissertation/Thesis: Doctoral Dissertation, Computer Science, 201
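    The L1-regularized sparse learning the dissertation builds on can be illustrated with a plain Lasso solved by ISTA (iterative soft-thresholding): a gradient step on the squared loss followed by a shrinkage step that drives most coefficients to exactly zero. This is a generic sketch of the baseline technique, not the dissertation's group-structured extension; data and parameters below are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse setup: only the first 3 of 20 features carry signal, mimicking
# the selection of a few discriminant features from many raw ones.
n, dim = 100, 20
X = rng.normal(size=(n, dim))
true_w = np.zeros(dim)
true_w[:3] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.01 * rng.normal(size=n)

# ISTA for the Lasso: minimize 0.5*||Xw - y||^2 + lam*n*||w||_1.
def ista(X, y, lam=0.1, iters=500):
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = w - step * (X.T @ (X @ w - y))             # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam * len(y), 0.0)  # shrink
    return w

w = ista(X, y)
selected = np.flatnonzero(np.abs(w) > 1e-6)
```

The group-wise variant the dissertation describes applies the same shrinkage to whole blocks of coefficients at once, so an entire feature type is kept or discarded together rather than coefficient by coefficient.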

    Robust localization and identification of African clawed frogs in digital images

    We study the automatic localization and identification of African clawed frogs (Xenopus laevis sp.) in digital images taken in a laboratory environment. We propose a novel and stable frog body localization and skin pattern window extraction algorithm, and show that it compensates for scale and rotation changes very well. Moreover, it is able to localize and extract highly overlapping regions (pattern windows) even in the cases of intense affine transformations, blurring, Gaussian noise, and intensity transformations. The frog skin pattern (i.e. texture) provides a unique feature for the identification of individual frogs. We investigate the suitability of five different feature descriptors (Gabor filters, area granulometry, HoG, dense SIFT, and raw pixel values) to represent frog skin patterns, and compare the robustness of the features based on their identification performance using a nearest neighbor classifier. Our experiments show that, among the five features we tested, the best-performing feature against rotation, scale, and blurring modifications was the raw pixel feature, whereas the SIFT feature was the best-performing one against affine and intensity modifications.
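    The identification protocol reduces to 1-nearest-neighbour matching: each probe pattern is assigned the identity of the closest gallery pattern under Euclidean distance. A toy sketch with flattened raw-pixel vectors standing in for real skin-pattern windows (the data here is synthetic, ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy gallery: one reference "skin pattern" feature vector per frog
# (flattened raw pixel values), plus noisy probe versions standing in for
# new photos of the same individuals.
n_frogs, dim = 5, 64
gallery = rng.random((n_frogs, dim))
probes = gallery + 0.05 * rng.normal(size=(n_frogs, dim))

# 1-nearest-neighbour identification: each probe gets the identity of the
# closest gallery pattern under Euclidean distance.
dists = np.linalg.norm(probes[:, None] - gallery[None], axis=2)
predicted = dists.argmin(axis=1)
identification_rate = (predicted == np.arange(n_frogs)).mean()
```

Swapping the raw-pixel vectors for Gabor, granulometry, HoG, or dense-SIFT features changes only the feature extraction step; the nearest-neighbour comparison, and hence the robustness comparison in the paper, stays the same.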