Multiclass Weighted Loss for Instance Segmentation of Cluttered Cells
We propose a new multiclass weighted loss function for instance segmentation of cluttered cells. We are primarily motivated by the need of developmental biologists to quantify and model the behavior of blood T-cells, which may help in understanding their regulation mechanisms and ultimately aid the development of effective immunotherapy cancer treatments. Segmenting individual touching cells in cluttered regions is challenging because the feature distributions on shared borders and on the cell foreground are similar, making it difficult to assign pixels to the proper classes. We present two novel weight maps, applied to the weighted cross-entropy loss function, that take into account both class imbalance and cell geometry. Binary ground-truth training data are augmented so that the learning model can handle not only foreground and background but also a third, touching class. This framework allows training with U-Net. Experiments with our formulations show superior results compared to other similar schemes, outperforming binary-class models with significant improvements in boundary adequacy and instance detection. We validate our results on manually annotated microscope images of T-cells.
Comment: Submitted to IEEE ICIP 201
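The loss described above can be sketched as a per-pixel weighted cross-entropy over three classes (background, foreground, touching). This is a minimal illustration under stated assumptions, not the paper's exact formulation: the `weight_map` argument is assumed to be precomputed elsewhere from class frequencies and cell geometry.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, weight_map):
    """Per-pixel weighted cross-entropy over three classes:
    0 = background, 1 = cell foreground, 2 = touching border.

    probs      : (H, W, 3) softmax probabilities
    labels     : (H, W)    integer class map
    weight_map : (H, W)    per-pixel weights encoding class
                           imbalance and cell geometry (assumed given)
    """
    h, w = labels.shape
    # Probability assigned to the true class at every pixel
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    # Weighted negative log-likelihood, averaged over all pixels
    return float(np.mean(-weight_map * np.log(p_true + 1e-12)))
```

Pixels on shared borders would receive larger entries in `weight_map`, so misclassifying the touching class is penalized more heavily than misclassifying abundant background.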
Bioimage informatics in the context of Drosophila research
Modern biological research relies heavily on microscopic imaging. The advanced genetic toolkit of Drosophila makes it possible to label molecular and cellular components with an unprecedented level of specificity, necessitating the application of the most sophisticated imaging technologies. Imaging in Drosophila spans all scales, from single molecules to entire populations of adult organisms, and from electron microscopy to live imaging of developmental processes. As imaging approaches become more complex and ambitious, there is an increasing need for quantitative, computer-mediated image processing and analysis to make sense of the imagery. Bioimage informatics is an emerging research field that covers all aspects of biological image analysis, from data handling through processing to quantitative measurement, analysis, and data presentation. Some of the most advanced, large-scale projects combining cutting-edge imaging with complex bioimage informatics pipelines are realized in the Drosophila research community. In this review, we discuss current research in biological image analysis specifically relevant to the type of systems-level image datasets that are uniquely available for the Drosophila model system. We focus on how state-of-the-art computer vision algorithms are impacting the ability of Drosophila researchers to analyze biological systems in space and time. We pay particular attention to how these algorithmic advances from computer science are made usable to practicing biologists through open-source platforms, and how biologists can themselves participate in their further development.
Statistical properties of 3D cell geometry from 2D slices
Although cell shape can reflect the mechanical and biochemical properties of
the cell and its environment, quantification of 3D cell shapes within 3D
tissues remains difficult, typically requiring digital reconstruction from a
stack of 2D images. We investigate a simple alternative technique to extract
information about the 3D shapes of cells in a tissue; this technique connects
the ensemble of 3D shapes in the tissue with the distribution of 2D shapes
observed in independent 2D slices. Using cell vertex model geometries, we find
that the distribution of 2D shapes allows clear determination of the mean value
of a 3D shape index. We analyze the errors that may arise in practice in the
estimation of the mean 3D shape index from 2D imagery and find that typically
only a few dozen cells in 2D imagery are required to reduce uncertainty below
only a few dozen cells in 2D imagery are required to reduce uncertainty below
2%. This framework could be naturally extended to estimate additional 3D
geometric features and quantify their uncertainty in other materials.
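The estimation idea can be illustrated with a small sketch: compute the dimensionless 2D shape index, perimeter divided by the square root of area, for each sliced cell, and bootstrap the uncertainty of its sample mean. The function names are hypothetical, and this does not reproduce the paper's vertex-model mapping from 2D observations to the 3D shape index.

```python
import numpy as np

def shape_index_2d(perimeter, area):
    # Dimensionless 2D shape index p / sqrt(A) of a cell cross-section
    return perimeter / np.sqrt(area)

def mean_index_uncertainty(indices, n_boot=2000, seed=0):
    """Bootstrap standard error of the mean 2D shape index
    measured from a sample of cells seen in 2D slices."""
    rng = np.random.default_rng(seed)
    # Resample the observed indices with replacement, n_boot times
    samples = rng.choice(indices, size=(n_boot, len(indices)), replace=True)
    # Spread of the bootstrap means estimates the uncertainty
    return samples.mean(axis=1).std()
```

With a few dozen sampled cells, the relative uncertainty of the mean index falls to the percent level, consistent with the abstract's claim.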
Random Ferns for Semantic Segmentation of PolSAR Images
Random Ferns, a lesser-known ensemble learning method, have been successfully
applied in many computer vision applications, ranging from keypoint matching
to object detection. This paper extends the Random Fern framework to
the semantic segmentation of polarimetric synthetic aperture radar images. By
using internal projections that are defined over the space of Hermitian
matrices, the proposed classifier can be directly applied to the polarimetric
covariance matrices without the need to explicitly compute predefined image
features. Furthermore, two distinct optimization strategies are proposed: the
first is based on pre-selection and grouping of internal binary features
before the classifier is created, and the second on iteratively improving the
properties of a given Random Fern. Both strategies boost performance by
filtering features that are either redundant or carry little information, and
by grouping correlated features to best satisfy the independence assumptions
made by the Random Fern classifier. Experiments show results similar to those
of a more complex Random Forest model and competitive with a deep learning
baseline.
Comment: This is the author's version of the article as accepted for publication in IEEE Transactions on Geoscience and Remote Sensing, 2021. Link to original: https://ieeexplore.ieee.org/document/962798
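A minimal Random Fern classifier over generic feature vectors (not the Hermitian-matrix projections used in the paper) looks roughly like this: each fern evaluates a small set of binary pairwise comparisons, and the ensemble sums class-conditional log-probabilities under the semi-naive Bayes independence assumption. All names and the comparison-based binary test are illustrative assumptions.

```python
import numpy as np

class RandomFern:
    """One fern: S binary tests whose joint outcome indexes a 2^S-bin
    table of class-conditional probabilities (semi-naive Bayes)."""
    def __init__(self, n_tests, n_classes, rng):
        self.n_tests = n_tests
        self.n_classes = n_classes
        self.rng = rng
        self.pairs = None   # feature-index pairs, drawn in fit()
        self.table = None   # per-bin class probabilities

    def _index(self, X):
        # Each test compares two feature dimensions; bits form the bin index
        bits = (X[:, self.pairs[:, 0]] > X[:, self.pairs[:, 1]]).astype(int)
        return bits @ (1 << np.arange(self.n_tests))

    def fit(self, X, y):
        self.pairs = self.rng.integers(0, X.shape[1], size=(self.n_tests, 2))
        idx = self._index(X)
        counts = np.ones((2 ** self.n_tests, self.n_classes))  # Laplace prior
        np.add.at(counts, (idx, y), 1.0)
        self.table = counts / counts.sum(axis=1, keepdims=True)
        return self

    def log_proba(self, X):
        return np.log(self.table[self._index(X)])

def fern_ensemble_predict(ferns, X):
    # Sum fern log-posteriors (independence assumption) and take the argmax
    return np.argmax(sum(f.log_proba(X) for f in ferns), axis=1)
```

The paper's optimization strategies would act on this structure by pre-selecting and grouping the binary tests, or by iteratively replacing weak tests within a fern.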
Vision-based retargeting for endoscopic navigation
Endoscopy is a standard procedure for visualising the human gastrointestinal tract. With the advances in biophotonics, imaging techniques such as narrow band imaging, confocal laser endomicroscopy, and optical coherence tomography can be combined with normal endoscopy for assisting the early diagnosis of diseases, such as cancer. In the past decade, optical biopsy has emerged to be an effective tool for tissue analysis, allowing in vivo and in situ assessment of pathological sites with real-time feature-enhanced microscopic images. However, the non-invasive nature of optical biopsy leads to an intra-examination retargeting problem, which is associated with the difficulty of re-localising a biopsied site consistently throughout the whole examination. In addition to intra-examination retargeting, retargeting of a pathological site is even more challenging across examinations, due to tissue deformation and changing tissue morphologies and appearances. The purpose of this thesis is to address both the intra- and inter-examination retargeting problems associated with optical biopsy. We propose a novel vision-based framework for intra-examination retargeting. The proposed framework is based on combining visual tracking and detection with online learning of the appearance of the biopsied site. Furthermore, a novel cascaded detection approach based on random forests and structured support vector machines is developed to achieve efficient retargeting. To cater for reliable inter-examination retargeting, the solution provided in this thesis is achieved by solving an image retrieval problem, for which an online scene association approach is proposed to summarise an endoscopic video collected in the first examination into distinctive scenes. A hashing-based approach is then used to learn the intrinsic representations of these scenes, such that retargeting can be achieved in subsequent examinations by retrieving the relevant images using the learnt representations. 
For performance evaluation of the proposed frameworks, extensive phantom, ex vivo, and in vivo experiments have been conducted, with results demonstrating the robustness and potential clinical value of the proposed methods.
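The inter-examination retrieval step can be sketched with sign-of-random-projection hashing as a stand-in for the learnt hash functions described in the thesis. The scene descriptors, bit count, and nearest-neighbour rule here are illustrative assumptions, not the thesis's method.

```python
import numpy as np

class RandomProjectionHasher:
    """Sign-of-random-projection hashing: similar scene descriptors map
    to nearby binary codes, so retargeting across examinations reduces
    to a Hamming-distance nearest-neighbour lookup."""
    def __init__(self, n_bits, dim, seed=0):
        rng = np.random.default_rng(seed)
        # One random hyperplane per code bit
        self.planes = rng.standard_normal((n_bits, dim))

    def encode(self, descriptors):
        # One bit per hyperplane: which side each descriptor falls on
        return (descriptors @ self.planes.T > 0).astype(np.uint8)

def retrieve(query_code, db_codes):
    # Index of the database scene with the smallest Hamming distance
    return int(np.argmin((db_codes != query_code).sum(axis=1)))
```

In the thesis's setting, `db_codes` would be built from the distinctive scenes summarised in the first examination, and `retrieve` would re-localise a biopsied site in a subsequent one.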