
    A fully automated end-to-end process for fluorescence microscopy images of yeast cells: From segmentation to detection and classification

    In recent years, an enormous number of fluorescence microscopy images have been collected in high-throughput lab settings. Analyzing and extracting relevant information from all of these images in a short time is almost impossible. Detecting tiny individual cell compartments is one of many challenges faced by biologists. This paper addresses this problem by building an end-to-end process that employs methods from the deep learning field to automatically segment, detect and classify cell compartments in fluorescence microscopy images of yeast cells. To this end, we used Mask R-CNN to automatically segment and label a large amount of yeast cell data, and YOLOv4 to automatically detect and classify individual yeast cell compartments in these images. This fully automated end-to-end process is intended to be integrated into an interactive e-Science server in the PerICo1 project, which biologists can use to complete their various classification tasks with minimal human effort in training and operation. In addition, we evaluated the detection and classification performance of the state-of-the-art YOLOv4 on data from the NOP1pr-GFP-SWAT yeast-cell data library. Experimental results show that, by dividing the original images into four quadrants, YOLOv4 produces good detection and classification results, with an F1-score of 98%, at good speed; this quadrant division is optimally suited to the native resolution of the microscope and to current GPU memory sizes. Although the application domain is optical microscopy of yeast cells, the method is also applicable to multiple-cell images in medical applications.
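    The quadrant tiling mentioned in the abstract is straightforward to sketch. Below is a minimal illustration, assuming a hypothetical detect() function that stands in for YOLOv4 inference and an image stored as a NumPy array; it splits the image into four quadrants, runs detection on each tile, and maps the boxes back to full-image coordinates.

```python
# Minimal sketch of quadrant-wise detection. The detect() function is a
# placeholder for a YOLOv4-style detector; the box format is assumed to be
# (x, y, width, height, class_id, score) in tile coordinates.
import numpy as np

def detect(tile: np.ndarray):
    """Placeholder detector: replace with real YOLOv4 inference."""
    return []

def detect_in_quadrants(image: np.ndarray):
    """Run detection on each of the four quadrants and shift the resulting
    boxes back into full-image coordinates."""
    h, w = image.shape[:2]
    h2, w2 = h // 2, w // 2
    quadrants = [(0, h2, 0, w2), (0, h2, w2, w), (h2, h, 0, w2), (h2, h, w2, w)]
    detections = []
    for top, bottom, left, right in quadrants:
        tile = image[top:bottom, left:right]
        for x, y, bw, bh, cls, score in detect(tile):
            detections.append((x + left, y + top, bw, bh, cls, score))
    return detections
```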

    One-vs-One classification for deep neural networks

    For performing multi-class classification, deep neural networks almost always employ a One-vs-All (OvA) classification scheme with as many output units as there are classes in a dataset. The problem of this approach is that each output unit requires a complex decision boundary to separate examples from one class from all other examples. In this paper, we propose a novel One-vs-One (OvO) classification scheme for deep neural networks that trains each output unit to distinguish between a specific pair of classes. This method increases the number of output units compared to the One-vs-All classification scheme but makes learning correct decision boundaries much easier. In addition to changing the neural network architecture, we changed the loss function, created a code matrix to transform the one-hot encoding to a new label encoding, and changed the method for classifying examples. To analyze the advantages of the proposed method, we compared the One-vs-One and One-vs-All classification methods on three plant recognition datasets (including a novel dataset that we created) and a dataset with images of different monkey species using two deep architectures. The two deep convolutional neural network (CNN) architectures, Inception-V3 and ResNet-50, are trained either from scratch or from pre-trained weights. The results show that the One-vs-One classification method outperforms the One-vs-All method on all four datasets when training the CNNs from scratch. However, when using the two classification schemes for fine-tuning pre-trained CNNs, the One-vs-All method leads to the best performance, presumably because the CNNs had been pre-trained using the One-vs-All scheme.
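    As a rough sketch of the encoding and decoding mechanics described above (the paper's actual loss function and target values may differ), the NumPy-only code below builds a One-vs-One code matrix with one output unit per class pair and classifies examples by pairwise voting; the function names are illustrative.

```python
# Illustrative One-vs-One (OvO) code matrix and pairwise-voting decoder.
from itertools import combinations
import numpy as np

def ovo_code_matrix(num_classes: int) -> np.ndarray:
    """Rows are classes, columns are class pairs: +1 / -1 mark the two
    classes of a pair, 0 marks pairs the class does not take part in."""
    pairs = list(combinations(range(num_classes), 2))
    matrix = np.zeros((num_classes, len(pairs)))
    for j, (a, b) in enumerate(pairs):
        matrix[a, j] = 1.0
        matrix[b, j] = -1.0
    return matrix

def ovo_decode(outputs: np.ndarray, num_classes: int) -> np.ndarray:
    """Each of the K*(K-1)/2 output units votes for one class of its pair;
    the class with the most votes wins."""
    pairs = list(combinations(range(num_classes), 2))
    votes = np.zeros((outputs.shape[0], num_classes))
    for j, (a, b) in enumerate(pairs):
        votes[:, a] += outputs[:, j] > 0
        votes[:, b] += outputs[:, j] <= 0
    return votes.argmax(axis=1)
```

    For a four-class problem this yields six output units instead of four, and the row of the code matrix belonging to an example's class serves as its new label encoding.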

    CentroidNetV2: A hybrid deep neural network for small-object segmentation and counting

    This paper presents CentroidNetV2, a novel hybrid Convolutional Neural Network (CNN) that has been specifically designed to segment and count many small and connected object instances. This complete redesign of the original CentroidNet uses a CNN backbone to regress a field of centroid-voting vectors and border-voting vectors. The segmentation masks of the individual object instances are produced by decoding centroid votes and border votes. A loss function that combines cross-entropy loss and Euclidean-distance loss achieves high-quality centroids and borders of object instances. Several backbones and loss functions are tested on three different datasets ranging from precision agriculture to microbiology and pathology. CentroidNetV2 is compared to the state-of-the-art networks You Only Look Once Version 3 (YOLOv3) and Mask Region-based Convolutional Neural Network (MRCNN). On two out of three datasets CentroidNetV2 achieves the highest F1 score, and on all three datasets it achieves the highest recall. CentroidNetV2 demonstrates the best ability to detect small objects, although the best segmentation masks for larger objects are produced by MRCNN.
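    The centroid-voting idea can be illustrated with a small NumPy-only sketch that covers only the accumulation of per-pixel centroid votes into a voting map, whose peaks indicate instance centroids; the actual CentroidNetV2 decoder additionally uses border votes to produce segmentation masks, and the function name here is hypothetical.

```python
# Accumulate per-pixel centroid votes into a voting map (illustrative only).
import numpy as np

def accumulate_centroid_votes(vote_field: np.ndarray) -> np.ndarray:
    """vote_field: (H, W, 2) array of per-pixel (dy, dx) vectors pointing to
    the predicted centroid. Returns an (H, W) map of vote counts; strong
    peaks correspond to predicted instance centroids."""
    h, w, _ = vote_field.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ty = np.clip(np.round(ys + vote_field[..., 0]), 0, h - 1).astype(int)
    tx = np.clip(np.round(xs + vote_field[..., 1]), 0, w - 1).astype(int)
    votes = np.zeros((h, w), dtype=np.int32)
    np.add.at(votes, (ty.ravel(), tx.ravel()), 1)  # unbuffered accumulation
    return votes
```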

    Predicting Eye Fixations on Complex Visual Stimuli Using Local Symmetry

    Most bottom-up models that predict human eye fixations are based on contrast features. The saliency model of Itti, Koch and Niebur is an example of such contrast-saliency models. Although the model has been successfully compared to human eye fixations, we show that it lacks preciseness in predicting fixations on mirror-symmetrical forms. The contrast model gives a high response at the borders of these forms, whereas human observers consistently look at their symmetrical center. We propose a saliency model that predicts eye fixations using local mirror symmetry. To test the model, we performed an eye-tracking experiment in which participants viewed complex photographic images, and compared the data with our symmetry model and the contrast model. The results show that our symmetry model predicts human eye fixations significantly better on a wide variety of images, including many that were not selected for their symmetrical content. Moreover, our results show that especially the early fixations fall on highly symmetrical areas of the images. We conclude that symmetry is a strong predictor of human eye fixations and that it can be used as a predictor of the order of fixations.
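    As a toy illustration of local symmetry as a saliency cue (not the gradient-based mirror-symmetry operator used in the paper), the sketch below scores each pixel by how closely its surrounding patch matches its left-right mirrored version; the function name and window size are illustrative.

```python
# Toy left-right symmetry score per pixel (higher = more symmetric locally).
import numpy as np

def horizontal_symmetry_map(gray: np.ndarray, radius: int = 8) -> np.ndarray:
    """gray: 2-D float image. Pixels whose (2*radius+1)-wide neighbourhood
    matches its mirror image get scores close to 0; asymmetric neighbourhoods
    get strongly negative scores. Border pixels are left at -inf."""
    h, w = gray.shape
    scores = np.full((h, w), -np.inf)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = gray[y - radius:y + radius + 1, x - radius:x + radius + 1]
            scores[y, x] = -np.mean((patch - patch[:, ::-1]) ** 2)
    return scores
```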

    Prediction of human eye fixations using symmetry

    Humans are very sensitive to symmetry in visual patterns. Reaction-time experiments show that symmetry is detected and recognized very rapidly, which suggests that symmetry is a highly salient feature. Existing computational models of saliency, however, have mainly focused on contrast as a measure of saliency. In this paper, we discuss local symmetry as a measure of saliency. We propose a number of symmetry models and perform an eye-tracking study with human participants viewing photographic images to test the models. The performance of our symmetry models is compared with the contrast-saliency model of Itti, Koch and Niebur (1998). The results show that the symmetry models match the human data better than the contrast model does, which indicates that symmetry can be regarded as a salient feature.

    Using Symmetrical Regions-of-Interest to Improve Visual SLAM

    Simultaneous Localization and Mapping (SLAM) based on visual information is a challenging problem. One of the main problems in visual SLAM is finding good-quality landmarks that can be detected despite noise and small changes in viewpoint. Many approaches use SIFT interest points as visual landmarks. The problem with the SIFT interest-point detector, however, is that it produces a large number of points, many of which are not stable across observations. We propose the use of local symmetry to find regions of interest instead. Symmetry is a stimulus that occurs frequently in the everyday environments in which our robots operate, making it useful for SLAM. Furthermore, symmetrical forms are inherently redundant and can therefore be detected more robustly. By using regions instead of points of interest, the landmarks are more stable. To test the performance of our model, we recorded a SLAM database with a mobile robot and annotated it by manually adding ground-truth positions. The results show that symmetrical regions-of-interest are less susceptible to noise, are more stable, and, above all, result in better SLAM performance.

    The Relation between Pen Force and Pen Point . . .

    This study investigates the spectral coherence and time-domain correlation between pen pressure (axial pen force, APF) and several kinematic variables in drawing simple patterns and in writing cursive script. Two types of theories are prevalent: "biomechanical" and "central" explanations for the force variations during writing. Findings show that overall coherence is low (< 0.5) and decreases with pattern complexity, attaining its lowest value in cursive script. Looking at the subjects separately, it is found that only in a small minority of writers does "biomechanical coupling" between force and displacement take place in cursive handwriting, as indicated by moderate to high negative overall correlations. The majority of subjects display low coherence and correlation between kinematics and APF. However, APF patterns in cursive script show moderate to high replicability, supporting the notion of a "centrally" controlled pen pressure. The sign of the weak residual average correlation between APF and finger displacement, and between APF and wrist displacement, is negative. This indicates that small biomechanical effects may be present, a relatively higher APF corresponding to finger flexion and wrist radial abduction. On the whole, however, variance in APF cannot be explained by kinematic variables. A motor task demanding mechanical-impedance control, such as handwriting, apparently introduces a complexity that is not easily explained in terms of a passive mass-spring model of skeleto-muscular movement.
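    The kind of analysis described above can be sketched with SciPy's coherence estimator; the signals, sampling rate, and segment length below are illustrative stand-ins rather than the study's actual recordings or settings.

```python
# Illustrative spectral-coherence and correlation analysis between axial pen
# force (APF) and a kinematic signal. Random noise stands in for real data.
import numpy as np
from scipy.signal import coherence

fs = 100.0                               # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
apf = np.random.randn(t.size)            # stand-in for recorded pen force
displacement = np.random.randn(t.size)   # stand-in for finger/wrist displacement

# Magnitude-squared coherence per frequency; its mean gives a single
# "overall coherence" figure comparable to the one discussed in the abstract.
freqs, cxy = coherence(apf, displacement, fs=fs, nperseg=256)
overall_coherence = cxy.mean()

# Time-domain (Pearson) correlation between the two signals.
correlation = np.corrcoef(apf, displacement)[0, 1]
print(f"overall coherence: {overall_coherence:.2f}, correlation: {correlation:.2f}")
```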