Multi texture analysis of colorectal cancer continuum using multispectral imagery
Purpose
This paper proposes to characterize the continuum of colorectal cancer (CRC) using multiple texture features extracted from multispectral optical microscopy images. Three types of pathological tissues (PT) are considered: benign hyperplasia, intraepithelial neoplasia and carcinoma.
Materials and Methods
In the proposed approach, the region of interest containing PT is first extracted from multispectral
images using active contour segmentation. This region is then encoded using texture features based on the Laplacian-of-Gaussian (LoG) filter, discrete wavelets (DW) and gray level co-occurrence matrices (GLCM). To assess the significance of textural differences between PT types, a statistical analysis based on the Kruskal-Wallis test is performed. The usefulness of texture features is then evaluated quantitatively in terms of their ability to predict PT types using various classifier models.
Results
Preliminary results show significant texture differences between PT types, for all texture features (p-value < 0.01). Individually, GLCM texture features outperform LoG and DW features in terms of PT type prediction. However, a higher performance can be achieved by combining all texture features, resulting in a mean classification accuracy of 98.92%, sensitivity of 98.12%, and specificity of 99.67%.
Conclusions
These results demonstrate the efficiency and effectiveness of combining multiple texture features for characterizing the continuum of CRC and discriminating between pathological tissues in multispectral images.
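As a concrete illustration of one of the feature families above, a gray level co-occurrence matrix can be computed in a few lines. This is a minimal NumPy sketch with an assumed offset, quantization, and a small Haralick-style feature set; the paper's exact GLCM configuration is not stated in the abstract.

```python
import numpy as np

def glcm_features(img, dx=1, dy=0, levels=8):
    """GLCM plus a few Haralick-style statistics for one grayscale patch."""
    # Quantize intensities to a small number of gray levels, as is common
    # for co-occurrence analysis.
    q = (img.astype(float) / (img.max() + 1) * levels).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm += glcm.T                    # make the matrix symmetric
    glcm /= glcm.sum()                # normalize to a joint probability
    i, j = np.indices(glcm.shape)
    return {
        "contrast":    float(((i - j) ** 2 * glcm).sum()),
        "homogeneity": float((glcm / (1.0 + np.abs(i - j))).sum()),
        "energy":      float(np.sqrt((glcm ** 2).sum())),
    }

# Hypothetical 8-bit patch standing in for one spectral band of a
# segmented region of interest.
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64))
feats = glcm_features(patch)
print(feats)
```

In practice one would compute such statistics over several offsets and spectral bands and concatenate them with the LoG and wavelet features before classification.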
Statistical methods for tissue array images - algorithmic scoring and co-training
Recent advances in tissue microarray technology have allowed
immunohistochemistry to become a powerful medium-to-high throughput analysis
tool, particularly for the validation of diagnostic and prognostic biomarkers.
However, as study size grows, the manual evaluation of these assays becomes a
prohibitive limitation; it vastly reduces throughput and greatly increases
variability and expense. We propose an algorithm - Tissue Array Co-Occurrence
Matrix Analysis (TACOMA) - for quantifying cellular phenotypes based on
textural regularity summarized by local inter-pixel relationships. The
algorithm can be easily trained for any staining pattern, is absent of
sensitive tuning parameters and has the ability to report salient pixels in an
image that contribute to its score. Pathologists' input via informative
training patches is an important aspect of the algorithm that allows the
training for any specific marker or cell type. With co-training, the error rate
of TACOMA can be reduced substantially for a very small training sample (e.g.,
with size 30). We give theoretical insights into the success of co-training via
thinning of the feature set in a high-dimensional setting when there is
"sufficient" redundancy among the features. TACOMA is flexible, transparent and
provides a scoring process that can be evaluated with clarity and confidence.
In a study based on an estrogen receptor (ER) marker, we show that TACOMA is
comparable to, or outperforms, pathologists' performance in terms of accuracy
and repeatability. Comment: Published at http://dx.doi.org/10.1214/12-AOAS543 in the Annals
of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
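The co-training loop the abstract relies on can be sketched generically: two redundant feature views, each of whose learners pseudo-labels the unlabeled points it is most confident about, starting from a small labeled pool. The nearest-centroid learner and Gaussian toy data below are stand-ins, not TACOMA's actual co-occurrence features.

```python
import numpy as np

def centroids(X, y):
    # Trivial per-view learner: one centroid per class (0 and 1).
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(c, X):
    d0 = np.linalg.norm(X - c[0], axis=1)
    d1 = np.linalg.norm(X - c[1], axis=1)
    return (d1 < d0).astype(int), np.abs(d0 - d1)   # label, confidence

# Two redundant "views" of the same two-class problem, standing in for
# two subsets of a redundant feature set.
rng = np.random.default_rng(1)
n = 400
y_true = rng.integers(0, 2, n)
Xa = y_true[:, None] * 2.0 + rng.normal(0.0, 1.0, (n, 5))
Xb = y_true[:, None] * 2.0 + rng.normal(0.0, 1.0, (n, 5))

labels = {i: int(y_true[i]) for i in range(30)}   # tiny labeled pool (size 30)

for _ in range(10):                               # co-training rounds
    idx = np.array(sorted(labels))
    yy = np.array([labels[i] for i in idx])
    rest = np.array([i for i in range(n) if i not in labels])
    if len(rest) == 0:
        break
    for X in (Xa, Xb):
        # Each view pseudo-labels the unlabeled points it is most sure about.
        pred, conf = predict(centroids(X[idx], yy), X[rest])
        for k in np.argsort(-conf)[:10]:
            labels[int(rest[k])] = int(pred[k])

idx = np.array(sorted(labels))
yy = np.array([labels[i] for i in idx])
pred, _ = predict(centroids(Xa[idx], yy), Xa)
acc = float((pred == y_true).mean())
print(f"accuracy after co-training from 30 labels: {acc:.2f}")
```

The "sufficient redundancy" condition in the abstract corresponds to each view being informative enough on its own for confident pseudo-labels to be mostly correct.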
Second-order Democratic Aggregation
Aggregated second-order features extracted from deep convolutional networks
have been shown to be effective for texture generation, fine-grained
recognition, material classification, and scene understanding. In this paper,
we study a class of orderless aggregation functions designed to minimize
interference or equalize contributions in the context of second-order features.
We show that they can be computed just as efficiently as their first-order
counterparts and that they have favorable properties over aggregation by summation.
Another line of work has shown that matrix power normalization after
aggregation can significantly improve the generalization of second-order
representations. We show that matrix power normalization implicitly equalizes
contributions during aggregation thus establishing a connection between matrix
normalization techniques and prior work on minimizing interference. Based on
the analysis we present {\gamma}-democratic aggregators that interpolate
between sum ({\gamma}=1) and democratic pooling ({\gamma}=0) outperforming both
on several classification tasks. Moreover, unlike power normalization, the
{\gamma}-democratic aggregations can be computed in a low dimensional space by
sketching that allows the use of very high-dimensional second-order features.
This results in state-of-the-art performance on several datasets.
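The γ-democratic idea can be sketched with a Sinkhorn-style fixed-point iteration over contribution targets. The solver below is an illustrative assumption rather than the authors' exact algorithm, and it pools plain feature vectors for brevity rather than second-order outer products.

```python
import numpy as np

def gamma_democratic(X, gamma=0.5, iters=200):
    # Solve for weights a_i such that each feature's contribution to the
    # pooled vector, a_i * x_i^T (sum_j a_j x_j), equals (sum_j k_ij)^gamma,
    # where k_ij = x_i^T x_j. gamma=1 leaves a_i = 1 (plain sum pooling);
    # gamma=0 equalizes all contributions (democratic pooling).
    K = X @ X.T
    target = np.abs(K.sum(axis=1)) ** gamma
    a = np.ones(len(X))
    for _ in range(iters):
        contrib = a * (K @ a)                        # current contributions
        a *= np.sqrt(target / np.maximum(contrib, 1e-12))
    return a @ X                                     # pooled descriptor

# Toy example: 16 nonnegative 8-dim features (e.g., ReLU activations),
# so all inner products are nonnegative and the iteration stays stable.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(16, 8)))
pooled_sum = gamma_democratic(X, gamma=1.0)   # identical to X.sum(axis=0)
pooled_dem = gamma_democratic(X, gamma=0.0)   # contributions equalized
```

Intermediate γ values interpolate between the two extremes, which is what the abstract reports as outperforming both endpoints.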
Weighted Point Cloud Augmentation for Neural Network Training Data Class-Imbalance
Recent developments in the field of deep learning for 3D data have
demonstrated promising potential for end-to-end learning directly from point
clouds. However, many real-world point clouds contain a large class imbalance
due to the uneven distribution of objects observed in nature. For example, a 3D scan
of an urban environment will consist mostly of road and facade, whereas other
objects such as poles will be under-represented. In this paper we address this
issue by employing a weighted augmentation to increase classes that contain
fewer points. By mitigating the class imbalance present in the data we
demonstrate that a standard PointNet++ deep neural network can achieve higher
performance at inference on validation data. This was observed as an increase
in F1 score of 19% and 25% on two benchmark test datasets, ScanNet and
Semantic3D respectively, where no class-imbalance pre-processing had been
performed. Our networks performed better on both highly-represented and
under-represented classes, which indicates that the network is learning more
robust and meaningful features when the loss function is not overly exposed to
only a few classes. Comment: 7 pages, 6 figures, submitted for the ISPRS Geospatial Week
conference 201
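The weighted augmentation described above can be illustrated by oversampling rare classes with jittered copies of their points. The inverse-frequency weighting and jitter scale below are assumptions for the sketch, not the paper's exact scheme.

```python
import numpy as np

def weighted_augment(points, labels, rng=None):
    """Oversample under-represented classes in a labeled point cloud.

    Each class receives duplicated, slightly jittered copies, with the
    number of copies inversely proportional to its frequency.
    """
    rng = rng or np.random.default_rng(0)
    classes, counts = np.unique(labels, return_counts=True)
    out_pts, out_lbl = [points], [labels]
    for c, n in zip(classes, counts):
        copies = int(round(counts.max() / n)) - 1   # rarer class -> more copies
        mask = labels == c
        for _ in range(copies):
            jitter = rng.normal(0, 0.01, points[mask].shape)  # small perturbation
            out_pts.append(points[mask] + jitter)
            out_lbl.append(labels[mask])
    return np.concatenate(out_pts), np.concatenate(out_lbl)

# Toy urban scene: "road" (class 0) dominates, "pole" (class 1) is rare.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
lbl = np.where(np.arange(1000) < 950, 0, 1)   # 950 road, 50 pole points
aug_pts, aug_lbl = weighted_augment(pts, lbl, rng)
print(np.bincount(aug_lbl))
```

The augmented cloud would then feed a standard segmentation network such as PointNet++, giving the loss function balanced exposure to all classes.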
Data-Driven Shape Analysis and Processing
Data-driven methods play an increasingly important role in discovering
geometric, structural, and semantic relationships between 3D shapes in
collections, and applying this analysis to support intelligent modeling,
editing, and visualization of geometric data. In contrast to traditional
approaches, a key feature of data-driven approaches is that they aggregate
information from a collection of shapes to improve the analysis and processing
of individual shapes. In addition, they are able to learn models that reason
about properties and relationships of shapes without relying on hard-coded
rules or explicitly programmed instructions. We provide an overview of the main
concepts and components of these techniques, and discuss their application to
shape classification, segmentation, matching, reconstruction, modeling and
exploration, as well as scene analysis and synthesis, through reviewing the
literature and relating the existing works with both qualitative and numerical
comparisons. We conclude our report with ideas that can inspire future research
in data-driven shape analysis and processing. Comment: 10 pages, 19 figures