Unsupervised 3D Learning for Shape Analysis via Multiresolution Instance Discrimination
Although unsupervised feature learning has demonstrated its advantages in
reducing the workload of data labeling and network design in many fields,
existing unsupervised 3D learning methods still cannot offer a generic network
for various shape analysis tasks with performance competitive with supervised
methods. In this paper, we propose an unsupervised method for learning a
generic and efficient shape encoding network for different shape analysis
tasks. The key idea of our method is to jointly encode and learn shape and
point features from unlabeled 3D point clouds. For this purpose, we adapt
HR-Net to octree-based convolutional neural networks for jointly encoding shape
and point features with fused multiresolution subnetworks and design a
simple-yet-efficient Multiresolution Instance Discrimination (MID) loss for
jointly learning the shape and point features. Our network takes a 3D point
cloud as input and outputs both shape and point features. After training, the
network is concatenated with simple task-specific back-end layers and
fine-tuned for different shape analysis tasks. We evaluate the efficacy and
generality of our method and validate our network and loss design with a set of
shape analysis tasks, including shape classification, semantic shape
segmentation, as well as shape registration tasks. With simple back-ends, our
network demonstrates the best performance among all unsupervised methods and
achieves performance competitive with supervised methods, especially on tasks
with small labeled datasets. For fine-grained shape segmentation, our method
even surpasses existing supervised methods by a large margin.
Comment: Accepted by AAAI 2021. Code: https://github.com/microsoft/O-CNN/blob/master/docs/unsupervised.m
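As a rough illustration of the joint shape-and-point instance discrimination idea (not the released O-CNN code), the sketch below treats each training shape and each point patch as its own class and sums the two cross-entropy terms; the class name, tensor shapes, and weighting are assumptions for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MIDLoss(nn.Module):
    """Minimal sketch of a multiresolution instance discrimination loss.

    Hypothetical simplification: shape-level and point-level discrimination
    are both plain linear classifiers over instance IDs, i.e. every training
    shape and every point patch is treated as its own class.
    """
    def __init__(self, feat_dim, num_shapes, num_patches, point_weight=1.0):
        super().__init__()
        self.shape_head = nn.Linear(feat_dim, num_shapes, bias=False)
        self.point_head = nn.Linear(feat_dim, num_patches, bias=False)
        self.point_weight = point_weight

    def forward(self, shape_feat, point_feat, shape_id, patch_id):
        # shape_feat: (B, C) global feature per shape
        # point_feat: (B, N, C) per-point features
        # shape_id:   (B,)   instance label of each shape
        # patch_id:   (B, N) instance label of the patch each point belongs to
        shape_loss = F.cross_entropy(self.shape_head(shape_feat), shape_id)
        point_logits = self.point_head(point_feat).flatten(0, 1)
        point_loss = F.cross_entropy(point_logits, patch_id.flatten())
        return shape_loss + self.point_weight * point_loss
```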
Histopathological image analysis: a review
Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe.
3D Shape Understanding and Generation
In recent years, Machine Learning techniques have revolutionized solutions to longstanding image-based problems, like image classification, generation, semantic segmentation, object detection and many others. However, if we want to be able to build agents that can successfully interact with the real world, those techniques need to be capable of reasoning about the world as it truly is: a three-dimensional space. There are two main challenges in handling 3D information in machine learning models. First, it is not clear what the best 3D representation is. For images, convolutional neural networks (CNNs) operating on raster images yield the best results in virtually all image-based benchmarks. For 3D data, the best combination of model and representation is still an open question. Second, 3D data is not available on the same scale as images – taking pictures is a common procedure in our daily lives, whereas capturing 3D content is an activity usually restricted to specialized professionals. This thesis is focused on addressing both of these issues. Which model and representation should we use for generating and recognizing 3D data? What are efficient ways of learning 3D representations from a few examples? Is it possible to leverage image data to build models capable of reasoning about the world in 3D?
Our research findings show that it is possible to build models that efficiently generate 3D shapes as irregularly structured representations. Those models require significantly less memory while generating higher quality shapes than the ones based on voxels and multi-view representations. We start by developing techniques to generate shapes represented as point clouds. This class of models leads to high quality reconstructions and better unsupervised feature learning. However, since point clouds are not amenable to editing and human manipulation, we also present models capable of generating shapes as sets of shape handles -- simpler primitives that summarize complex 3D shapes and were specifically designed for high-level tasks and user interaction. Despite their effectiveness, those approaches require some form of 3D supervision, which is scarce. We present multiple alternatives to this problem. First, we investigate how approximate convex decomposition techniques can be used as self-supervision to improve recognition models when only a limited number of labels are available. Second, we study how neural network architectures induce shape priors that can be used in multiple reconstruction tasks -- using both volumetric and manifold representations. In this regime, reconstruction is performed from a single example -- either a sparse point cloud or multiple silhouettes. Finally, we demonstrate how to train generative models of 3D shapes without using any 3D supervision by combining differentiable rendering techniques and Generative Adversarial Networks.
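A common reconstruction objective for point-cloud generators like those described above is the Chamfer distance between predicted and target point sets. The minimal PyTorch sketch below is an illustrative assumption rather than the thesis code; it brute-forces the pairwise distances, which is fine for small clouds.

```python
import torch

def chamfer_distance(pred, target):
    """Symmetric Chamfer distance between two batches of point clouds.

    pred:   (B, N, 3) predicted xyz coordinates
    target: (B, M, 3) reference xyz coordinates
    Returns the sum of mean nearest-neighbour squared distances in both
    directions, a standard reconstruction loss for point-cloud generators.
    """
    diff = pred.unsqueeze(2) - target.unsqueeze(1)   # (B, N, M, 3)
    dist = (diff ** 2).sum(-1)                       # (B, N, M)
    pred_to_target = dist.min(dim=2).values.mean()   # each pred point -> nearest target
    target_to_pred = dist.min(dim=1).values.mean()   # each target point -> nearest pred
    return pred_to_target + target_to_pred
```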
One-shot learning of object categories
Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.
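To make the prior-versus-likelihood trade-off concrete, here is a toy one-dimensional Gaussian example (not the paper's constellation models): with a single observation, a conjugate prior learned from earlier categories keeps the estimate informative, whereas maximum likelihood simply returns the lone data point. All numbers are illustrative assumptions.

```python
import numpy as np

prior_mean, prior_var = 0.0, 1.0   # assumed prior over the new category's mean
noise_var = 0.5                    # assumed observation noise
x = np.array([2.3])                # a single training example of the new category

# Maximum-likelihood estimate: the sample mean (here, the one observation).
ml_mean = x.mean()

# Conjugate Gaussian update: posterior over the category mean.
post_var = 1.0 / (1.0 / prior_var + len(x) / noise_var)
post_mean = post_var * (prior_mean / prior_var + x.sum() / noise_var)

print(f"ML estimate:        {ml_mean:.2f}")
print(f"Posterior estimate: {post_mean:.2f} (variance {post_var:.2f})")
```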
Texture analysis and its applications in biomedical imaging: a survey
Texture analysis describes a variety of image analysis techniques that quantify the variation in intensity and pattern. This paper provides an overview of several texture analysis approaches, addressing the rationale supporting them, their advantages, drawbacks, and applications. This survey’s emphasis is on collecting and categorising over five decades of active research on texture analysis. Brief descriptions of different approaches are presented along with application examples. From a broad range of texture analysis applications, this survey’s final focus is on biomedical image analysis. An up-to-date list of biological tissues and organs in which disorders produce texture changes that may be used to spot disease onset and progression is provided. Finally, the role of texture analysis methods as biomarkers of disease is summarised.
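As a concrete example of the kind of technique such surveys cover, the snippet below computes classic grey-level co-occurrence matrix (GLCM) descriptors with scikit-image. The random patch and the chosen properties are illustrative assumptions, and older scikit-image versions spell the functions greycomatrix/greycoprops.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Toy 8-bit patch; in practice this would be a region of a biomedical image.
patch = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)

# Gray-level co-occurrence matrix at distance 1 for four directions.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Classic Haralick-style texture descriptors, averaged over directions.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```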
In-painting Radiography Images for Unsupervised Anomaly Detection
We propose space-aware memory queues for in-painting and detecting anomalies
from radiography images (abbreviated as SQUID). Radiography imaging protocols
focus on particular body regions, and therefore produce highly similar images
with recurrent anatomical structures across patients. To
exploit this structured information, our SQUID consists of a new Memory Queue
and a novel in-painting block in the feature space. We show that SQUID can
taxonomize the ingrained anatomical structures into recurrent patterns and, at
inference, identify anomalies (unseen/modified patterns) in the
image. SQUID surpasses the state of the art in unsupervised anomaly detection
by over 5 points on two chest X-ray benchmark datasets. Additionally, we have
created a new dataset (DigitAnatomy), which synthesizes the spatial correlation
and consistent shape in chest anatomy. We hope DigitAnatomy can prompt the
development, evaluation, and interpretability of anomaly detection methods,
particularly for radiography imaging.
Comment: Main paper with appendix
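To make the memory-and-in-painting idea more tangible, here is a minimal, hypothetical sketch of memory-based anomaly scoring in feature space. It is not the SQUID architecture (which uses a learned in-painting block and a gated memory queue); the class name, queue size, and cosine-similarity lookup are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

class FeatureMemoryQueue:
    """Toy memory-based anomaly scoring in feature space (not the SQUID code).

    A queue of feature vectors from normal training images is maintained; at
    test time each feature is "in-painted" by its nearest memory entry, and
    the residual serves as an anomaly score.
    """
    def __init__(self, dim, size=1024):
        self.memory = torch.zeros(size, dim)
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, feats):                     # feats: (B, C) from normal images
        n = feats.shape[0]
        idx = torch.arange(self.ptr, self.ptr + n) % self.memory.shape[0]
        self.memory[idx] = feats
        self.ptr = (self.ptr + n) % self.memory.shape[0]

    @torch.no_grad()
    def anomaly_score(self, feats):               # feats: (B, C) from a test image
        sim = F.normalize(feats, dim=1) @ F.normalize(self.memory, dim=1).T
        nearest = self.memory[sim.argmax(dim=1)]  # best-matching normal pattern
        return (feats - nearest).pow(2).mean(dim=1)  # large residual => anomaly
```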