Manifold interpolation and model reduction
One approach to parametric and adaptive model reduction is via the
interpolation of orthogonal bases, subspaces or positive definite system
matrices. In all these cases, the sampled inputs stem from matrix sets that
feature a geometric structure and thus form so-called matrix manifolds. This
work will be featured as a chapter in the upcoming Handbook on Model Order
Reduction (P. Benner, S. Grivet-Talocia, A. Quarteroni, G. Rozza, W.H.A.
Schilders, L.M. Silveira, eds., to appear with DE GRUYTER) and reviews the
numerical treatment of the most important matrix manifolds that arise in the
context of model reduction. Moreover, the principal approaches to data
interpolation and Taylor-like extrapolation on matrix manifolds are outlined
and complemented by algorithms in pseudo-code.
Comment: 37 pages, 4 figures; featured chapter of the upcoming "Handbook on Model Order Reduction".
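One of the constructions the abstract alludes to, interpolating subspaces sampled at different parameters, can be sketched with a standard textbook technique: geodesic interpolation of two orthonormal bases on the Grassmann manifold via its logarithm and exponential maps. The NumPy sketch below is an illustrative assumption, not code from the chapter; it assumes the two subspaces are close enough that `U0.T @ U1` is invertible (principal angles below pi/2).

```python
import numpy as np

def grassmann_interpolate(U0, U1, t):
    """Move from span(U0) toward span(U1) along the Grassmann geodesic.

    Log map at U0: (I - U0 U0^T) U1 (U0^T U1)^{-1} = Q S V^T (thin SVD),
    with principal angles Sigma = arctan(S).
    Exp map:        U(t) = U0 V cos(t Sigma) V^T + Q sin(t Sigma) V^T.
    Assumes U0, U1 have orthonormal columns and U0^T U1 is invertible.
    """
    A = U0.T @ U1
    B = U1 - U0 @ A                     # component of U1 orthogonal to U0
    Q, S, Vt = np.linalg.svd(B @ np.linalg.inv(A), full_matrices=False)
    Sigma = np.arctan(S)                # principal angles between subspaces
    return (U0 @ Vt.T @ np.diag(np.cos(t * Sigma))
            + Q @ np.diag(np.sin(t * Sigma))) @ Vt

rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.standard_normal((8, 2)))
U1, _ = np.linalg.qr(rng.standard_normal((8, 2)))

Um = grassmann_interpolate(U0, U1, 0.5)
# The interpolant is again an orthonormal basis, i.e. it stays on the manifold:
print(np.allclose(Um.T @ Um, np.eye(2)))        # True
# The endpoint reproduces the sampled subspace (compared as projectors):
U_end = grassmann_interpolate(U0, U1, 1.0)
print(np.allclose(U_end @ U_end.T, U1 @ U1.T))  # True
```

The point of the geodesic formulation, as opposed to interpolating the matrix entries directly, is that every intermediate `U(t)` remains a valid orthonormal basis, which naive entrywise interpolation does not guarantee.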
Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation and manifold learning.
The GIST of Concepts
A unified general theory of human concept learning, based on the idea that humans detect invariance patterns in categorical stimuli as a necessary precursor to concept formation, is proposed and tested. In GIST (generalized invariance structure theory) invariants are detected via a perturbation mechanism of dimension suppression referred to as dimensional binding. Structural information acquired by this process is stored as a compound memory trace termed an ideotype. Ideotypes inform the subsystems that are responsible for learnability judgments, rule formation, and other types of concept representations. We show that GIST is more general (e.g., it works on continuous, semi-continuous, and binary stimuli) and makes much more accurate predictions than the leading models of concept learning difficulty, such as those based on a complexity reduction principle (e.g., number of mental models, structural invariance, algebraic complexity, and minimal description length) and those based on selective attention and similarity (GCM, ALCOVE, and SUSTAIN). GIST unifies these two key aspects of concept learning and categorization. Empirical evidence from three experiments corroborates the predictions made by the theory and its core model, which we propose as a candidate law of human conceptual behavior.