73 research outputs found

    Conference of Advance Research and Innovation (ICARI-2014)

    With the advent of highly advanced optics and imaging systems, biological research has reached a stage where scientists can study biological entities and processes at the molecular and cellular level in real time. However, a single experiment can involve hundreds of thousands of parameters to be recorded and a large population of microscopic objects to be tracked, making manual inspection of such events practically impossible. This calls for computer-vision-based automated tracking and monitoring of cells in biological experiments. The technology promises to revolutionize research in cellular biology and medical science, including the discovery of diseases by tracking processes within cells, the development of therapies and drugs, and the study of microscopic biological elements. This article surveys the recent literature in the area of computer-vision-based automated cell tracking. It discusses the latest trends and successes in the development and introduction of automated cell tracking techniques and systems.

    Biological cell tracking and lineage inference via random finite sets

    Automatic cell tracking has long been a challenging problem due to the uncertainty of cell dynamics and the observation process, where the detection probability and clutter rate are unknown and time-varying. This is compounded when cell lineages are also to be inferred. In this paper, we propose a novel biological cell tracking method based on the labeled Random Finite Set (RFS) approach to study cell migration patterns. Our method tracks cells with lineage by using a Generalized Labeled Multi-Bernoulli (GLMB) filter with object spawning, and a robust Cardinalized Probability Hypothesis Density (CPHD) filter to address the unknown and time-varying detection probability and clutter rate. The proposed method is capable of quantifying the certainty level of the tracking solutions. The capability of the algorithm for population dynamics inference is demonstrated on a migration sequence of breast cancer cells.
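The GLMB/CPHD machinery above is quite involved; the core data-association step it generalizes can be illustrated with a much simpler baseline. The sketch below is a greedy nearest-neighbour linker with track births, not the authors' RFS filter; all names and the distance threshold are illustrative.

```python
import numpy as np

def link_frames(prev, curr, max_dist=20.0):
    """Greedily link cell centroids in consecutive frames by nearest
    neighbour. Unmatched current cells start new tracks (births);
    unmatched previous cells end (deaths). Illustrative only -- a real
    RFS filter also models clutter and detection probability."""
    links = {}   # index in curr -> index in prev
    used = set()
    # pairwise distances between all previous and current centroids
    d = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    for j in np.argsort(d.min(axis=0)):       # closest candidates first
        i = int(np.argmin(d[:, j]))
        if i not in used and d[i, j] <= max_dist:
            links[int(j)] = i
            used.add(i)
    births = [j for j in range(len(curr)) if j not in links]
    return links, births

prev = np.array([[0.0, 0.0], [10.0, 10.0]])
curr = np.array([[1.0, 0.0], [10.0, 11.0], [50.0, 50.0]])
links, births = link_frames(prev, curr)   # third cell is a new track
```

A labeled RFS filter replaces this greedy step with a joint posterior over all association hypotheses, which is what allows it to quantify the certainty of its solutions.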

    A graph-based cell tracking algorithm with few manually tunable parameters and automated segmentation error correction

    Automatic cell segmentation and tracking makes it possible to gain quantitative insights into the processes driving cell migration. To investigate new data with minimal manual effort, cell tracking algorithms should be easy to apply and should reduce manual curation time by automatically correcting segmentation errors. Current cell tracking algorithms, however, are either easy to apply to new data sets but lack automatic segmentation error correction, or have a vast set of parameters that needs manual tuning or annotated data for parameter tuning. In this work, we propose a tracking algorithm with only a few manually tunable parameters and automatic segmentation error correction. Moreover, no training data is needed. We compare the performance of our approach to three well-performing tracking algorithms from the Cell Tracking Challenge on data sets with simulated, degraded segmentation, including false negatives and over- and under-segmentation errors. Our tracking algorithm can correct false negatives and over- and under-segmentation errors, as well as a mixture of these segmentation errors. On data sets with under-segmentation errors or a mixture of segmentation errors our approach performs best. Moreover, without requiring additional manual tuning, our approach ranks in the top 3 several times in the 6th edition of the Cell Tracking Challenge.
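The heart of a graph-based tracker like the one described is a min-cost assignment between detections in consecutive frames, with extra nodes so that cells may appear or disappear. The sketch below shows that core step using `scipy.optimize.linear_sum_assignment`; the padding constant `miss_cost` and all data are illustrative, and the paper's full tracking graph (with error-correcting edges) is considerably richer.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_with_gaps(cost, miss_cost=15.0):
    """Frame-to-frame linking as a rectangular min-cost matching.
    Padding the cost matrix with a constant lets any cell stay
    unmatched (appearance/disappearance) whenever no real link is
    cheaper. Sketch only, not the paper's full algorithm."""
    n_prev, n_curr = cost.shape
    # extra rows absorb births, extra columns absorb deaths
    padded = np.full((n_prev + n_curr, n_curr + n_prev), miss_cost)
    padded[:n_prev, :n_curr] = cost
    rows, cols = linear_sum_assignment(padded)
    return [(int(r), int(c)) for r, c in zip(rows, cols)
            if r < n_prev and c < n_curr]

cost = np.array([[1.0, 30.0],
                 [30.0, 2.0],
                 [40.0, 40.0]])   # third previous cell has no good match
pairs = match_with_gaps(cost)     # it is left unmatched (track ends)
```

Note that only `miss_cost` (and whatever enters the cost matrix) needs tuning here, which is the sense in which such graph formulations can get by with few manual parameters.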

    Multiple Object Tracking in Light Microscopy Images Using Graph-based and Deep Learning Methods

    Multi-object tracking (MOT) is an image analysis problem that comprises localizing objects in an image sequence and linking them over time, with numerous applications in areas such as autonomous driving, robotics, and surveillance. Beyond technical application areas, there is also a great need for MOT in biomedical applications. For example, experiments recorded with light microscopy over several hours or days can contain hundreds or even thousands of similar-looking objects, making manual analysis impossible. To draw reliable conclusions from the tracked objects, however, the predicted trajectories must be of high quality. Domain-specific MOT approaches are therefore needed that can account for the particularities of light microscopy data. In this thesis, two novel methods for the MOT problem in light microscopy images are developed, along with approaches for comparing tracking methods. To separate the performance of a tracking method from the quality of the segmentation, an approach is proposed that makes it possible to analyze the tracking method independently of the segmentation, which also allows the robustness of tracking methods under degraded segmentation data to be studied. Furthermore, a graph-based tracking method is proposed that bridges the gap between easy-to-apply but less performant tracking methods and performant tracking methods with many hard-to-tune parameters. The proposed tracking method has only a few manually tunable parameters and is easy to apply to 2D and 3D data sets. By modeling prior knowledge about the shape of the tracking graph, the proposed tracking method is also able to correct certain types of segmentation errors automatically.
In addition, a deep-learning-based approach is proposed that learns the tasks of instance segmentation and object tracking simultaneously in a single neural network. The proposed approach also learns to predict representations that are understandable to humans. To demonstrate the performance of the two proposed tracking methods in comparison with other current, domain-specific tracking approaches, they are applied to a domain-specific benchmark. Furthermore, additional evaluation criteria for tracking methods are introduced and used to compare the two proposed tracking methods.

    Deep Learning for Detection and Segmentation in High-Content Microscopy Images

    High-content microscopy has led to many advances in biology and medicine. This fast-emerging technology is transforming cell biology into a big-data-driven science. Computer vision methods are used to automate the analysis of microscopy image data. In recent years, deep learning became popular and had major success in computer vision. Most of the available methods are developed to process natural images. Compared to natural images, microscopy images pose domain-specific challenges such as small training datasets, clustered objects, and class imbalance. In this thesis, new deep learning methods for object detection and cell segmentation in microscopy images are introduced. For particle detection in fluorescence microscopy images, a deep learning method based on a domain-adapted Deconvolution Network is presented. In addition, a method for mitotic cell detection in heterogeneous histopathology images is proposed, which combines a deep residual network with Hough voting. The method is used for grading of whole-slide histology images of breast carcinoma. Moreover, a method for both particle detection and cell detection based on object centroids is introduced, which is trainable end-to-end. It comprises a novel Centroid Proposal Network, a layer for ensembling detection hypotheses over image scales and anchors, an anchor regularization scheme which favours prior anchors over regressed locations, and an improved algorithm for Non-Maximum Suppression. Furthermore, a novel loss function based on Normalized Mutual Information is proposed which can cope with strong class imbalance and is derived within a Bayesian framework. For cell segmentation, a deep neural network with an increased receptive field to capture rich semantic information is introduced. Moreover, a deep neural network is proposed which combines both paradigms of multi-scale feature aggregation of Convolutional Neural Networks and iterative refinement of Recurrent Neural Networks.
To increase the robustness of training and improve segmentation, a novel focal loss function is presented. In addition, a framework for black-box hyperparameter optimization for biomedical image analysis pipelines is proposed. The framework has a modular architecture that separates hyperparameter sampling from hyperparameter optimization. A visualization of the loss function based on infimum projections is suggested to obtain further insights into the optimization problem. Also, a transfer learning approach is presented, which uses only one color channel for pre-training and performs fine-tuning on more color channels. Furthermore, an approach for unsupervised domain adaptation for histopathological slides is presented. Finally, Galaxy Image Analysis is presented, a platform for web-based microscopy image analysis. Galaxy Image Analysis workflows have been developed for cell segmentation in cell cultures, particle detection in mouse brain tissue, and MALDI/H&E image registration. The proposed methods were applied to challenging synthetic as well as real microscopy image data from various microscopy modalities and yield state-of-the-art or improved results. The methods were benchmarked in international image analysis challenges and used in various cooperation projects with biomedical researchers.
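The thesis proposes its own focal loss variant, whose details are not given here. As a reference point, the sketch below implements the standard binary focal loss of Lin et al. (2017), which addresses exactly the class-imbalance problem named above by down-weighting easy examples; the data values are illustrative.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Standard binary focal loss (Lin et al., 2017), shown only as a
    reference point -- the thesis proposes its own variant. p is the
    predicted foreground probability, y the 0/1 label. The factor
    (1 - p_t)**gamma shrinks the loss of well-classified examples."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    p_t = np.where(y == 1, p, 1 - p)              # prob. of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)  # class-balance weight
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

y = np.array([1, 1, 0])
p = np.array([0.9, 0.5, 0.1])   # easy positive, hard positive, easy negative
losses = focal_loss(p, y)       # the hard example dominates the loss
```

With gamma = 0 and alpha = 0.5 this reduces to a scaled cross-entropy, so gamma directly controls how strongly the abundant easy background pixels are suppressed.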

    Computer vision for sequential non-invasive microscopy imaging cytometry with applications in embryology

    Many in vitro cytometric methods require the sample to be destroyed in the process. Using image analysis of non-invasive microscopy techniques, it is possible to monitor samples undisturbed in their natural environment, providing new insights into cell development, morphology, and health. As the effect on the sample is minimized, imaging can be sustained for long uninterrupted periods of time, making it possible to study temporal events as well as individual cells over time. These methods are applicable in a number of fields and are of particular importance in embryological studies, where no sample interference is acceptable. Using long-term image capture and digital image cytometry of growing embryos, it is possible to perform morphokinetic screening, automated analysis, and annotation using proper software tools. Based on the literature, one such framework is suggested, and the required methods are developed and evaluated. Results are shown for tracking embryos, embryo cell segmentation, analysis of internal cell structures, and profiling of cell growth and activity. Two related extensions of the framework, into three-dimensional embryo analysis and adherent cell monitoring, are described.

    BIOLOGICAL CONSEQUENCES OF CHROMATIN LOOPING IN PERICENTRIC CHROMATIN

    During mitosis, replicated sister chromatids are attached to opposite sides of a microtubule spindle at their centromeres in a process called biorientation. The proteinaceous structure that links centromeres, a region of the chromosome, to the spindle is called the kinetochore. Tension within kinetochores of bioriented chromosomes is thought to be crucial for accurate chromosome segregation. The simplest method of generating tension within kinetochores of bioriented chromosomes would be sister chromatid cohesion at the centromere. However, across phylogeny, sister centromeres are separated by 800-1000 nm. Using Saccharomyces cerevisiae, we explore how pericentric chromatin, the 20-50 kb region surrounding the centromere, is organized to allow tension to be generated and regulated at the centromere during mitosis. Pericentric chromatin is enriched in the ring-like protein complexes condensin and cohesin. We find that the pericentromeric region contains several chromatin loops formed by condensin and cross-linked by cohesin. Simulations of chromatin loops recapitulate the experimental observation that fluorescently labeled regions within pericentric chromatin appear as compact foci radially displaced from, i.e. above or below, the sister kinetochores. Live-cell imaging experiments with a dicentric plasmid, a circular double-stranded DNA molecule that can biorient without replication due to the presence of two centromeres, illustrated that the mitotic spindle has sufficient force to extend chromatin during metaphase. Simulations revealed that chromatin loops isolate tension to a geometric subset of chromatin that is directly in between, not above or below, sister kinetochores. Thus, the majority of pericentric chromatin, which is contained in compact loops, is under reduced tension. Additionally, chromatin loops explain the distributions of pericentric cohesin and condensin.
Cohesin’s radial, barrel-like distribution is due to its ability to diffuse to the radial tips of the loops. Condensin’s ability to form chromatin loops requires condensin to bind to the high-tension chromatin on either side of the low-tension loop, forcing condensin to colocalize with the axial, high-tension chromatin. Chromatin loops recapitulate experimental observations of pericentric chromatin and provide an elegant mechanism for tension modulation at the centromere during mitosis.

    Cellular forces: adhering, shaping, sensing and dividing

    Life’s building block is the cell. Different cell types are distinguished by specific functional properties: a white blood cell, for instance, can get rid of bacteria, and many muscle cells contract together for proper muscle function. Deformation and force exertion play important roles in these processes: bacteria have to be physically engulfed by the white blood cell, and the muscle cell has to contract in the right way. In this research we measured how much force cells exert and simultaneously visualized specific proteins. A newly developed technique enabled visualization of the nanometer-scale structure of cellular adhesions. We also examined the relationship between cellular shape and the orientation of an intracellular protein network (actin). We discovered that signaling by yet another protein (p130Cas) alters the mechanical behavior of the cell when the stiffness outside the cell changes. Finally, we also examined the structure of other proteins (tubulin and H2B) during cell division. In all these processes we measured how much force a cell exerts on its environment. The results provide important insights into the mechanical component of cellular function and its role in life.

    Image Analysis for the Life Sciences - Computer-assisted Tumor Diagnostics and Digital Embryomics

    Current research in the life sciences involves the analysis of such a huge amount of image data that automation is required. This thesis presents several ways in which pattern recognition techniques may contribute to improved tumor diagnostics and to the elucidation of vertebrate embryonic development. Chapter 1 studies an approach for exploiting spatial context for improved estimation of metabolite concentrations from magnetic resonance spectroscopy imaging (MRSI) data, with the aim of more robust tumor detection, and compares it against a novel alternative. Chapter 2 describes a software library for training, testing and validating classification algorithms that estimate tumor probability based on MRSI. It allows flexible adaptation to changed experimental conditions, classifier comparison and quality control without the need for expertise in pattern recognition. Chapter 3 studies several models for learning tumor classifiers that allow for the common unreliability of human segmentations. For the first time, models are used for this task that additionally employ the objective image information. Chapter 4 encompasses two contributions to an image analysis pipeline for automatically reconstructing zebrafish embryonic development based on time-resolved microscopy: two approaches for nucleus segmentation are experimentally compared, and a procedure for tracking nuclei over time is presented and evaluated.

    Learning Object Recognition and Object Class Segmentation with Deep Neural Networks on GPU

    As cameras become ubiquitous and internet storage abundant, the need for computers to understand images is growing rapidly. This thesis is concerned with two computer vision tasks: recognizing objects and their locations, and segmenting images according to object classes. We focus on deep learning approaches, which in recent years have had a tremendous influence on machine learning in general and computer vision in particular. The thesis presents our research into deep learning models and algorithms. It is divided into three parts. The first part describes our GPU deep learning framework. Its hierarchical structure allows transparent use of the GPU, facilitates the specification of complex models and model inspection, and constitutes the implementation basis of the later chapters. Components of this framework were used in a real-time GPU library for random forests, which we present and evaluate. In the second part, we investigate greedy learning techniques for semi-supervised object recognition. We improve the feature learning capabilities of restricted Boltzmann machines (RBM) with lateral interactions and of auto-encoders with additional hidden layers, and offer empirical insight into the evaluation of RBM learning algorithms. The third part of this thesis focuses on object class segmentation. Here, we incrementally introduce novel neural network models and training algorithms, successively improving the state of the art on multiple datasets. Our novel methods include supervised pre-training, histogram-of-oriented-gradients DNN inputs, depth normalization and recurrence. All contribute towards improving segmentation performance beyond what is possible with competitive baseline methods. We further demonstrate that pixelwise labeling combined with a structured loss function can be utilized to localize objects. Finally, we show how transfer learning in combination with object-centered depth colorization can be used to identify objects.
We evaluate our proposed methods on the publicly available MNIST, MSRC, INRIA Graz-02, NYU-Depth, Pascal VOC, and Washington RGB-D Objects datasets.
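Among the inputs named above are histograms of oriented gradients (HOG). The sketch below computes an orientation histogram for a single image cell, the basic building block of HOG, in plain numpy; it omits block normalization and is only a minimal illustration of the feature type, not the thesis's exact pipeline.

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Magnitude-weighted orientation histogram for one image cell --
    a minimal sketch of a HOG building block (unsigned 0-180 degree
    bins, no block normalization)."""
    gy, gx = np.gradient(patch.astype(float))       # image gradients
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())      # accumulate by bin
    return hist / (hist.sum() + 1e-12)              # L1-normalize

# a vertical edge gives horizontal gradients, i.e. orientation near 0 deg
patch = np.tile(np.array([0.0, 0.0, 1.0, 1.0]), (4, 1))
hist = hog_cell_histogram(patch)                    # mass lands in bin 0
```

Because such histograms summarize local edge structure independently of absolute intensity, they make a compact, illumination-robust input channel for a segmentation network.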