Medial Features for Superpixel Segmentation
Image segmentation plays an important role in computer vision and human scene perception. Image oversegmentation is a common technique for coping with the large number of pixels and the reasoning among them. Specifically, a local, coherent cluster that covers a statistically homogeneous region is called a superpixel. In this paper we propose a novel algorithm that segments an image into superpixels using a new kind of shape-centered feature, based on Gradient Vector Flow (GVF) fields [14], which serves as a set of seed points for segmentation. The features are located at image positions with salient symmetry. We compare our algorithm to state-of-the-art superpixel algorithms and demonstrate a performance increase on the standard Berkeley Segmentation Dataset.
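The Gradient Vector Flow field the abstract builds on can be sketched with the standard Xu–Prince diffusion iteration. This is a minimal illustration of GVF itself, not the authors' seed-point extraction; the function name and parameters are my own.

```python
# Minimal sketch of Gradient Vector Flow (Xu & Prince): diffuse the edge-map
# gradient into a smooth vector field (u, v) whose sinks/symmetry points can
# serve as seeds. Not the paper's implementation.
import numpy as np
from scipy.ndimage import laplace

def gvf(edge_map, mu=0.2, iters=80):
    """Iteratively smooth the gradient field of an edge map."""
    fy, fx = np.gradient(edge_map.astype(float))
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2          # squared gradient magnitude
    for _ in range(iters):
        # diffusion term pulls toward smoothness, data term toward the gradient
        u += mu * laplace(u) - mag2 * (u - fx)
        v += mu * laplace(v) - mag2 * (v - fy)
    return u, v
```

Away from edges the data term vanishes, so the field diffuses smoothly into homogeneous regions, which is what makes it usable for locating symmetry seeds.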
A Framework for Symmetric Part Detection in Cluttered Scenes
The role of symmetry in computer vision has waxed and waned in importance
during the evolution of the field from its earliest days. At first figuring
prominently in support of bottom-up indexing, it fell out of favor as shape
gave way to appearance and recognition gave way to detection. With a strong
prior in the form of a target object, the role of the weaker priors offered by
perceptual grouping was greatly diminished. However, as the field returns to
the problem of recognition from a large database, the bottom-up recovery of the
parts that make up the objects in a cluttered scene is critical for their
recognition. The medial axis community has long exploited the ubiquitous
regularity of symmetry as a basis for the decomposition of a closed contour
into medial parts. However, today's recognition systems are faced with
cluttered scenes, and the assumption that a closed contour exists, i.e. that
figure-ground segmentation has been solved, renders much of the medial axis
community's work inapplicable. In this article, we review a computational
framework, previously reported in Lee et al. (2013), Levinshtein et al. (2009,
2013), that bridges the representation power of the medial axis and the need to
recover and group an object's parts in a cluttered scene. Our framework is
rooted in the idea that a maximally inscribed disc, the building block of a
medial axis, can be modeled as a compact superpixel in the image. We evaluate
the method on images of cluttered scenes.
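The "maximally inscribed disc" building block mentioned above is exactly what the medial axis transform yields. A small sketch using scikit-image (an illustration of the concept, not the reviewed framework):

```python
# Sketch: medial axis of a binary figure, with the inscribed-disc radius at
# each axis pixel. Illustrates the building block discussed above; the
# rectangle mask is a toy stand-in for a segmented part.
import numpy as np
from skimage.morphology import medial_axis

mask = np.zeros((21, 21), dtype=bool)
mask[5:16, 3:18] = True                 # a filled rectangle as the "figure"

skel, dist = medial_axis(mask, return_distance=True)
radii = dist * skel                     # inscribed-disc radius on the axis
```

Each skeleton pixel together with its radius defines a maximally inscribed disc; the framework's insight is that such a disc can be approximated bottom-up by a compact superpixel, without a closed contour.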
Adaptive Segmentation of Knee Radiographs for Selecting the Optimal ROI in Texture Analysis
The purposes of this study were to investigate: 1) the effect of placement of
region-of-interest (ROI) for texture analysis of subchondral bone in knee
radiographs, and 2) the ability of several texture descriptors to distinguish
between the knees with and without radiographic osteoarthritis (OA). Bilateral
posterior-anterior knee radiographs were analyzed from the baselines of the OAI and
MOST datasets. A fully automatic method to locate the most informative region
from subchondral bone using adaptive segmentation was developed. We used an
oversegmentation strategy for partitioning knee images into compact regions
that follow natural texture boundaries. LBP, Fractal Dimension (FD), Haralick
features, Shannon entropy, and HOG methods were computed within the standard
ROI and within the proposed adaptive ROIs. Subsequently, we built logistic
regression models to identify and compare the performances of each texture
descriptor and each ROI placement method using a 5-fold cross-validation setting.
Importantly, we also investigated the generalizability of our approach by
training the models on OAI and testing them on the MOST dataset. We used the area under
the receiver operating characteristic (ROC) curve (AUC) and average precision
(AP) obtained from the precision-recall (PR) curve to compare the results. We
found that the adaptive ROI improves the classification performance (OA vs.
non-OA) over the commonly used standard ROI (up to a 9% increase in AUC).
We also observed that, from all texture parameters, LBP yielded the best
performance in all settings with the best AUC of 0.840 [0.825, 0.852] and
associated AP of 0.804 [0.786, 0.820]. Compared to the current state-of-the-art
approaches, our results suggest that the proposed adaptive ROI approach in
texture analysis of subchondral bone can increase the diagnostic performance
for detecting the presence of radiographic OA.
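The evaluation protocol described above (LBP features within an ROI, a logistic regression classifier, AUC and average precision) can be sketched end to end. The radiograph data are not available here, so synthetic smooth vs. coarse textures stand in for the two classes; everything else mirrors the stated setup.

```python
# Sketch of the texture-classification protocol: uniform-LBP histograms per
# ROI, logistic regression, scored by ROC AUC and average precision.
# Synthetic textures replace the OAI/MOST radiographs, which are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split

def lbp_histogram(roi, P=8, R=1):
    """Normalized histogram of uniform LBP codes over one ROI."""
    codes = local_binary_pattern(roi, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
# Two toy texture classes: heavily smoothed noise vs. raw noise.
rois = [gaussian_filter(rng.normal(size=(64, 64)), 2) for _ in range(50)]
rois += [rng.normal(size=(64, 64)) for _ in range(50)]
X = np.array([lbp_histogram(r) for r in rois])
y = np.array([0] * 50 + [1] * 50)

Xtr, Xte, ytr, yte = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
scores = clf.predict_proba(Xte)[:, 1]
auc = roc_auc_score(yte, scores)
ap = average_precision_score(yte, scores)
```

Because LBP depends only on the sign of neighbor differences, it is invariant to monotonic intensity changes, one reason it suits radiographs acquired under varying exposure.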
Superpixels: An Evaluation of the State-of-the-Art
Superpixels group perceptually similar pixels to create visually meaningful
entities while heavily reducing the number of primitives for subsequent
processing steps. Owing to these properties, superpixel algorithms have received
much attention since their naming in 2003. By today, publicly available
superpixel algorithms have turned into standard tools in low-level vision. As
such, and due to their quick adoption in a wide range of applications,
appropriate benchmarks are crucial for algorithm selection and comparison.
Until now, the rapidly growing number of algorithms as well as varying
experimental setups hindered the development of a unifying benchmark. We
present a comprehensive evaluation of 28 state-of-the-art superpixel algorithms
utilizing a benchmark focusing on fair comparison and designed to provide new
insights relevant for applications. To this end, we explicitly discuss
parameter optimization and the importance of strictly enforcing connectivity.
Furthermore, by extending well-known metrics, we are able to summarize
algorithm performance independent of the number of generated superpixels,
thereby overcoming a major limitation of available benchmarks. In addition, we
discuss runtime, robustness against noise, blur and affine transformations,
implementation details as well as aspects of visual quality. Finally, we
present an overall ranking of superpixel algorithms which redefines the
state-of-the-art and enables researchers to easily select appropriate
algorithms and the corresponding implementations which themselves are made
publicly available as part of our benchmark at
davidstutz.de/projects/superpixel-benchmark/
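One of the standard metrics such benchmarks report is boundary recall: the fraction of ground-truth boundary pixels matched by a superpixel boundary within a small tolerance. A minimal sketch (function names are my own, not the benchmark's API):

```python
# Sketch of boundary recall, a common superpixel benchmark metric: the share
# of ground-truth boundary pixels that lie within `tol` pixels of a
# superpixel boundary. Not the benchmark's own implementation.
import numpy as np
from scipy.ndimage import binary_dilation

def boundaries(labels):
    """Mark pixels whose label differs from a right or lower neighbor."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return b

def boundary_recall(sp_labels, gt_labels, tol=2):
    gt = boundaries(gt_labels)
    sp = binary_dilation(boundaries(sp_labels), iterations=tol)
    return (gt & sp).sum() / max(gt.sum(), 1)
```

Summarizing such a metric independently of the number of generated superpixels, as the evaluation above does, matters because recall trivially rises as superpixels shrink.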
Deep learning for digitized histology image analysis
“Cervical cancer is the fourth most frequent cancer affecting women worldwide. Assessment of cervical intraepithelial neoplasia (CIN) through histopathology remains the standard for absolute determination of cancer. The examination of tissue samples under a microscope requires considerable time and effort from expert pathologists. There is a need for an automated tool to assist pathologists in digitized histology slide analysis. Cervical precancer is generally determined by examining CIN, which is the growth of atypical cells from the basement membrane (bottom) to the top of the epithelium. It is graded as Normal, CIN1, CIN2, or CIN3. In this research, different facets of an automated digitized histology epithelium assessment pipeline have been explored to mimic the pathologist's diagnostic approach. The entire pipeline from slide to epithelium CIN grade has been designed and developed using deep learning models and imaging techniques to analyze the whole slide image (WSI). The process is as follows: 1) identification of epithelium by filtering the regions extracted from a low-resolution image with a binary classifier network; 2) epithelium segmentation; 3) deep regression for pixel-wise segmentation of epithelium by patch-based image analysis; 4) attention-based CIN classification with localized sequential feature modeling. Deep learning-based nuclei detection by superpixels was performed as an extension of this research. Results indicate improved performance of CIN assessment over state-of-the-art methods for nuclei segmentation, epithelium segmentation, and CIN classification, as well as the development of a prototype WSI-level tool”--Abstract, page iv
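Step 1 of the pipeline above, filtering low-resolution regions through a binary classifier, can be sketched as patch tiling plus a keep/discard decision. The `is_epithelium` heuristic below is a placeholder for the trained network, which is not available here.

```python
# Sketch of pipeline step 1: tile a low-resolution slide into patches and
# keep those a binary classifier flags as epithelium. `is_epithelium` is a
# stand-in heuristic, not the authors' trained CNN.
import numpy as np

def iter_patches(image, size=64, stride=64):
    """Yield (position, patch) pairs over a non-overlapping grid."""
    H, W = image.shape[:2]
    for y in range(0, H - size + 1, stride):
        for x in range(0, W - size + 1, stride):
            yield (y, x), image[y:y + size, x:x + size]

def is_epithelium(patch):
    # Placeholder for the binary classifier network: intensity threshold.
    return patch.mean() > 0.5

def epithelium_regions(image):
    return [pos for pos, patch in iter_patches(image) if is_epithelium(patch)]
```

The retained patch positions would then feed the segmentation and CIN-grading stages of the pipeline.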
Deep learning enables spatial mapping of the mosaic microenvironment of myeloma bone marrow trephine biopsies
Bone marrow trephine biopsy is crucial for the diagnosis of multiple myeloma. However, the complexity of bone marrow cellular, morphological, and spatial architecture preserved in trephine samples hinders comprehensive evaluation. To dissect the diverse cellular communities and mosaic tissue habitats, we developed a superpixel-inspired deep learning method (MoSaicNet) that adapts to complex tissue architectures and a cell imbalance aware deep learning pipeline (AwareNet) to enable accurate detection and classification of rare cell types in multiplex immunohistochemistry images. MoSaicNet and AwareNet achieved an area under the curve of >0.98 for tissue and cellular classification on separate test datasets. Application of MoSaicNet and AwareNet enabled investigation of bone heterogeneity and thickness as well as spatial histology analysis of bone marrow trephine samples from monoclonal gammopathies of undetermined significance (MGUS) and from paired newly diagnosed and post-treatment multiple myeloma. The most significant difference between MGUS and newly diagnosed multiple myeloma (NDMM) samples was not related to cell density but to spatial heterogeneity, with reduced spatial proximity of BLIMP1+ tumor cells to CD8+ cells in MGUS compared with NDMM samples. Following treatment of multiple myeloma patients, there was a reduction in the density of BLIMP1+ tumor cells, effector CD8+ T cells, and T regulatory cells, indicative of an altered immune microenvironment. Finally, bone heterogeneity decreased following treatment of MM patients. In summary, deep-learning based spatial mapping of bone marrow trephine biopsies can provide insights into the cellular topography of the myeloma marrow microenvironment and complement aspirate-based techniques
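The spatial-proximity finding above (BLIMP1+ tumor cells relative to CD8+ cells) implies a distance statistic between two detected cell populations. One plausible such statistic, sketched here with a k-d tree (my own formulation, not necessarily the paper's measure):

```python
# Sketch: mean nearest-neighbor distance from one cell population to another,
# a simple proximity statistic of the kind the spatial analysis above relies
# on. The choice of statistic is an assumption, not the paper's definition.
import numpy as np
from scipy.spatial import cKDTree

def mean_nn_distance(cells_a, cells_b):
    """Mean distance from each cell in A to its nearest cell in B."""
    tree = cKDTree(cells_b)
    dists, _ = tree.query(cells_a, k=1)
    return float(dists.mean())
```

Comparing such statistics across disease stages separates genuine spatial reorganization from mere density changes, the distinction the abstract draws between MGUS and NDMM samples.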