Blockout: Dynamic Model Selection for Hierarchical Deep Networks
Most deep architectures for image classification, even those trained to classify a large number of diverse categories, learn shared image representations with a single model. Intuitively, however, categories that are more similar should share more information than those that are very different.
While hierarchical deep networks address this problem by learning separate
features for subsets of related categories, current implementations require
simplified models using fixed architectures specified via heuristic clustering
methods. Instead, we propose Blockout, a method for regularization and model
selection that simultaneously learns both the model architecture and
parameters. A generalization of Dropout, our approach gives a novel
parametrization of hierarchical architectures that allows for structure
learning via back-propagation. We evaluate Blockout on the CIFAR and ImageNet datasets, demonstrating improved classification accuracy, better regularization performance, faster training, and the clear emergence of hierarchical network structures.
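As a rough illustration of the idea, the sketch below implements a Blockout-style linear layer in PyTorch: cluster-membership probabilities for the input and output units are learnable parameters, Bernoulli masks are sampled from them at training time (as in Dropout), and a weight is retained only if its two endpoint units share a cluster. The class name `BlockoutLinear`, the sigmoid parametrization, and the straight-through gradient trick are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class BlockoutLinear(nn.Module):
    """Sketch of a Blockout-style layer (illustrative, not the paper's exact
    formulation): a linear map whose weight matrix is gated by block-structured
    masks derived from learnable cluster-membership probabilities."""

    def __init__(self, in_features, out_features, n_clusters):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Logits of each unit's membership probability for each cluster;
        # learned jointly with the weights via back-propagation.
        self.in_logits = nn.Parameter(torch.zeros(in_features, n_clusters))
        self.out_logits = nn.Parameter(torch.zeros(out_features, n_clusters))

    def forward(self, x):
        p_in = torch.sigmoid(self.in_logits)    # (in_features, K)
        p_out = torch.sigmoid(self.out_logits)  # (out_features, K)
        if self.training:
            # Sample Bernoulli memberships as in Dropout; the straight-through
            # trick keeps gradients flowing into the membership logits.
            m_in = torch.bernoulli(p_in.detach()) + p_in - p_in.detach()
            m_out = torch.bernoulli(p_out.detach()) + p_out - p_out.detach()
        else:
            m_in, m_out = p_in, p_out  # soft masks as an approximation
        # A weight is kept iff its input and output units share a cluster:
        # count shared clusters, then clamp to a 0/1-style gate.
        mask = torch.clamp(m_out @ m_in.t(), max=1.0)  # (out, in)
        return nn.functional.linear(x, self.linear.weight * mask,
                                    self.linear.bias)
```

Stacking such layers and training normally would let the cluster memberships, and hence the induced hierarchical block structure, emerge from back-propagation alone.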
A Review of Codebook Models in Patch-Based Visual Object Recognition
The codebook model-based approach, while ignoring any structural aspect in vision, nonetheless provides state-of-the-art performance on current datasets. The key role of a visual codebook is to provide a way to map low-level features into a fixed-length vector in histogram space to which standard classifiers can be directly applied. The discriminative power of such a visual codebook determines the quality of the codebook model, whereas the size of the codebook controls the complexity of the model. Thus, the construction of a codebook is an important step, usually done by cluster analysis. However, clustering is a process that retains regions of high density in a distribution, and it follows that the resulting codebook need not have discriminant properties. It is also recognised as a computational bottleneck of such systems. In our recent work, we proposed a resource-allocating codebook: a one-pass design procedure that constructs a discriminant codebook and slightly outperforms more traditional approaches at drastically reduced computing times. In this review we survey several approaches proposed over the last decade, covering their feature detectors, descriptors, codebook construction schemes, choice of classifiers for recognising objects, and the datasets used to evaluate the proposed methods.
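For concreteness, a minimal bag-of-visual-words pipeline of the kind the review surveys might look as follows: a codebook is built by k-means cluster analysis over pooled local descriptors, and each image's descriptor set is then mapped to a fixed-length histogram over the codewords. The function names and the choice of scikit-learn's k-means are assumptions for illustration; the resource-allocating codebook mentioned above replaces this clustering step with a one-pass design.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors, k=256, seed=0):
    """Standard codebook construction by cluster analysis: k-means over a
    pool of local descriptors; the centroids become the visual words."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(descriptors)
    return km.cluster_centers_

def encode(image_descriptors, codebook):
    """Map a variable-size set of local descriptors to a fixed-length
    histogram over visual words, suitable for a standard classifier."""
    # Assign each descriptor to its nearest codeword (hard assignment).
    d2 = ((image_descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)  # L1-normalise
```

A standard classifier (e.g. an SVM) would then be trained directly on the resulting histograms.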
Dynamic quantum clustering: a method for visual exploration of structures in data
A given set of data-points in some feature space may be associated with a
Schrödinger equation whose potential is determined by the data. This is known
to lead to good clustering solutions. Here we extend this approach into a
full-fledged dynamical scheme using a time-dependent Schrödinger equation.
Moreover, we approximate this Hamiltonian formalism by a truncated calculation
within a set of Gaussian wave functions (coherent states) centered around the
original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploring relationships among data-points by observing varying dynamical distances among points and their convergence into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition or feature filtering.
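As a point of reference, the static quantum-clustering potential that this dynamical scheme builds on can be sketched as below: a Gaussian Parzen estimator plays the role of the wave function psi, and requiring psi to solve the time-independent Schrödinger equation yields a potential V whose minima mark cluster centres. The function name, the sigma parameter, and the grid-evaluation interface are illustrative assumptions.

```python
import numpy as np

def qc_potential(X, grid, sigma=1.0):
    """Static quantum-clustering potential (the starting point that DQC
    extends with time evolution). Minima of V mark cluster centres.
    X: (n, d) data points; grid: (m, d) evaluation points."""
    d = X.shape[1]
    diff = grid[:, None, :] - X[None, :, :]   # (m, n, d)
    sq = (diff ** 2).sum(-1)                  # squared distances
    w = np.exp(-sq / (2 * sigma ** 2))        # Gaussian Parzen terms
    psi = w.sum(axis=1)                       # wave function psi(x)
    # V(x) = E - d/2 + (1 / (2 sigma^2 psi)) * sum_i |x - x_i|^2 w_i,
    # obtained by requiring psi to solve the Schrodinger equation.
    V = -d / 2 + (sq * w).sum(axis=1) / (2 * sigma ** 2 * psi)
    return V - V.min()  # shift so min(V) = 0 (fixes the constant E)
```

The dynamical extension then evolves Gaussian coherent states centred on the data points under the Hamiltonian defined by this potential.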
Joint Modeling and Registration of Cell Populations in Cohorts of High-Dimensional Flow Cytometric Data
In systems biomedicine, an experimenter encounters different potential
sources of variation in data such as individual samples, multiple experimental
conditions, and multi-variable network-level responses. In multiparametric
cytometry, which is often used for analyzing patient samples, such issues are
critical. While computational methods can identify cell populations in
individual samples, without the ability to automatically match them across
samples, it is difficult to compare and characterize the populations in typical experiments, such as those responding to various stimulations or characteristic of particular patients or time-points, especially when there are many samples.
Joint Clustering and Matching (JCM) is a multi-level framework for simultaneous
modeling and registration of populations across a cohort. JCM models every
population with a robust multivariate probability distribution. Simultaneously,
JCM fits a random-effects model to construct an overall batch template, which is used to register populations across samples and to classify new samples. By tackling systems-level variation, JCM supports practical biomedical applications involving large cohorts.
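A much-simplified sketch of the two levels involved follows: each sample's cell populations are modeled with a mixture (a plain Gaussian mixture stands in here for JCM's robust multivariate distributions), and components are then registered across samples by matching them to a template (here, crudely, one sample's component means via Hungarian matching, in place of JCM's random-effects batch template). All names and choices below are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.optimize import linear_sum_assignment

def fit_populations(samples, k):
    """Model each sample's cell populations with a k-component mixture.
    (JCM uses robust multivariate distributions; a GMM stands in here.)"""
    return [GaussianMixture(n_components=k, random_state=0).fit(s)
            for s in samples]

def register_to_template(models):
    """Register populations across samples: take one sample's component
    means as a crude template and align every other sample's components
    to it by minimum-cost (Hungarian) matching of the means. JCM instead
    builds the template with a random-effects model over the cohort."""
    template = models[0].means_
    alignments = []
    for m in models:
        cost = ((template[:, None, :] - m.means_[None, :, :]) ** 2).sum(-1)
        _, perm = linear_sum_assignment(cost)
        alignments.append(perm)  # perm[j] = component matched to template j
    return template, alignments
```

Once components are matched across samples, corresponding populations can be compared between conditions, patients, or time-points, and a new sample can be classified by how well its aligned components fit the template.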