Data-adaptive harmonic spectra and multilayer Stuart-Landau models
We consider harmonic decompositions of multivariate time series, adopting an
integral-operator approach with periodic semigroup kernels.
Spectral decomposition theorems are derived that cover the important cases of
two-time statistics drawn from a mixing invariant measure.
The corresponding eigenvalues can be grouped per Fourier frequency, and are
actually given, at each frequency, as the singular values of a cross-spectral
matrix depending on the data. Furthermore, these eigenvalues obey a variational
principle that naturally defines a multidimensional power spectrum.
The eigenmodes, for their part, exhibit a data-adaptive character manifested in
their phase, which in turn allows us to define a multidimensional phase
spectrum.
The resulting data-adaptive harmonic (DAH) modes reduce the data-driven
modeling effort to elemental models stacked per frequency, coupled across
frequencies only by a common noise realization. In particular,
the DAH decomposition extracts time-dependent coefficients stacked by Fourier
frequency which can be efficiently modeled---provided the decay of temporal
correlations is sufficiently well-resolved---within a class of multilayer
stochastic models (MSMs) tailored here on stochastic Stuart-Landau oscillators.
Applications to the Lorenz 96 model and to a stochastic heat equation driven
by space-time white noise are considered. In both cases, the DAH
decomposition allows for an extraction of spatio-temporal modes revealing key
features of the dynamics in the embedded phase space. The multilayer
Stuart-Landau models (MSLMs) are shown to successfully model the typical
patterns of the corresponding time-evolving fields, as well as their statistics
of occurrence.

Comment: 26 pages, double columns; 15 figures
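The per-frequency construction described above (eigenvalues obtained, at each Fourier frequency, as singular values of a data-dependent cross-spectral matrix) can be illustrated with a minimal sketch. The code below is a generic Welch-style cross-spectral estimate followed by an SVD per frequency, offered as an illustration of the idea only, not the authors' DAH algorithm:

```python
import numpy as np

def per_frequency_singular_values(X, n_segments=8):
    """Singular values of an averaged cross-spectral matrix at each
    Fourier frequency (a Welch-style estimate; illustration only)."""
    n, d = X.shape
    seg_len = n // n_segments
    n_freq = seg_len // 2 + 1
    S = np.zeros((n_freq, d, d), dtype=complex)
    for k in range(n_segments):
        seg = X[k * seg_len:(k + 1) * seg_len]
        F = np.fft.rfft(seg - seg.mean(axis=0), axis=0)  # (n_freq, d)
        # accumulate the outer products F(f) F(f)^H across segments
        S += np.einsum('fi,fj->fij', F, F.conj())
    S /= n_segments
    # singular values of the cross-spectral matrix at each frequency
    return np.array([np.linalg.svd(S[f], compute_uv=False)
                     for f in range(n_freq)])

rng = np.random.default_rng(0)
t = np.arange(4096)
# two channels sharing one oscillation (period 32) plus noise
x = np.sin(2 * np.pi * t / 32) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * t / 32 + 0.7) + 0.3 * rng.standard_normal(t.size)
sv = per_frequency_singular_values(np.column_stack([x, y]))
print(sv.shape)  # d singular values per Fourier frequency
```

On this two-channel series, the leading singular value across frequencies peaks at the bin of the shared oscillation, giving the multidimensional-power-spectrum reading of the eigenvalues.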
Categorical invariance and structural complexity in human concept learning
An alternative account of human concept learning based on an invariance measure of the categorical
stimulus is proposed. The categorical invariance model (CIM) characterizes the degree of structural
complexity of a Boolean category as a function of its inherent degree of invariance and its cardinality or
size. To do this, we introduce a mathematical framework based on the notion of a Boolean differential
operator on Boolean categories that generates the degrees of invariance (i.e., the logical manifold) of the
category with respect to its dimensions. Using this framework, we propose that the structural complexity
of a Boolean category is inversely proportional to its degree of categorical invariance and directly
proportional to its cardinality or size. Consequently, the notions of complexity and invariance are formally
unified to account for concept learning difficulty. Beyond developing the above unifying mathematical
framework, the CIM is significant in that: (1) it precisely predicts the key learning difficulty ordering of
the SHJ [Shepard, R. N., Hovland, C. L., & Jenkins, H. M. (1961). Learning and memorization of classifications.
Psychological Monographs: General and Applied, 75(13), 1-42] Boolean category types consisting of three
binary dimensions and four positive examples; (2) it is, in general, a good quantitative predictor of the
degree of learning difficulty of a large class of categories (in particular, the 41 category types studied
by Feldman [Feldman, J. (2000). Minimization of Boolean complexity in human concept learning. Nature,
407, 630-633]); (3) it is, in general, a good quantitative predictor of parity effects for this large class of
categories; (4) it does all of the above without free parameters; and (5) it is cognitively plausible (e.g.,
cognitively tractable).
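To make the invariance idea concrete, here is a small sketch under one plausible reading of the framework: the logical manifold is taken as the per-dimension fraction of category members left invariant when that dimension is flipped, and a hypothetical score rises with cardinality and falls with the manifold's norm. The scoring function is an illustrative surrogate; the paper's exact formulas may differ:

```python
from itertools import product

def logical_manifold(category, dims):
    """For each dimension, the fraction of category members that remain
    members when that dimension is flipped (one reading of the Boolean
    differential operator; the exact CIM construction may differ)."""
    members = set(category)
    p = len(members)
    lam = []
    for i in range(dims):
        invariant = sum(
            1 for m in members
            if tuple(b ^ (j == i) for j, b in enumerate(m)) in members)
        lam.append(invariant / p)
    return lam

def complexity_score(category, dims):
    """Illustrative score: grows with category size, shrinks with the
    norm of the logical manifold (hypothetical surrogate, not CIM itself)."""
    lam = logical_manifold(category, dims)
    norm = sum(v * v for v in lam) ** 0.5
    return len(set(category)) / (1.0 + norm)

# SHJ type I (varies on one dimension only) vs. type VI (parity-like):
type_I  = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]   # x1 = 0
type_VI = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # even parity
print(complexity_score(type_I, 3), complexity_score(type_VI, 3))
```

Even this crude surrogate orders the extremes of the SHJ types correctly: type I (high invariance) scores lower than the parity-like type VI (zero invariance), the hardest type to learn.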
Generalization of form in visual pattern classification.
Human observers were trained to criterion in classifying compound Gabor signals with symmetry relationships, and were then tested with each of 18 blob-only versions of the learning set. Generalization to dark-only and light-only blob versions of the learning signals, as well as to dark-and-light blob versions, was found to be excellent, thus implying virtually perfect generalization of the ability to classify mirror-image signals. The hypothesis that the learning signals are internally represented in terms of a 'blob code' with explicit labelling of contrast polarities was tested by predicting observed generalization behaviour in terms of various types of signal representations (pixelwise, Laplacian pyramid, curvature pyramid, ON/OFF, local maxima of Laplacian and curvature operators) and a minimum-distance rule. Most representations could explain generalization for dark-only and light-only blob patterns but not for the high-thresholded versions thereof. This led to the proposal of a structure-oriented blob-code. Whether such a code could be used in conjunction with simple classifiers or should be transformed into a propositional scheme of representation operated upon by a rule-based classification process remains an open question.
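The minimum-distance rule used to test the candidate representations can be sketched generically: represent each learning signal in the chosen feature space and assign a test pattern to the class of the nearest prototype. The pixelwise representation and the toy one-dimensional signals below are illustrative stand-ins, not the study's stimuli:

```python
import numpy as np

def min_distance_classify(test, prototypes):
    """Assign `test` to the class of the nearest prototype under
    Euclidean distance in the chosen representation (pixelwise here;
    the study also compares Laplacian-pyramid, curvature, ON/OFF, ...)."""
    labels = list(prototypes)
    dists = [np.linalg.norm(test - prototypes[c]) for c in labels]
    return labels[int(np.argmin(dists))]

# toy 1-D 'signals': two classes with mirror-image profiles (hypothetical data)
x = np.linspace(-1, 1, 64)
protos = {'A': np.exp(-(x - 0.3) ** 2 / 0.05),
          'B': np.exp(-(x + 0.3) ** 2 / 0.05)}
probe = protos['A'] + 0.05 * np.random.default_rng(1).standard_normal(64)
print(min_distance_classify(probe, protos))
```

Swapping in a different representation only changes how `test` and the prototypes are encoded before the distance computation, which is exactly the comparison the study performs across candidate codes.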
The GIST of Concepts
A unified general theory of human concept learning based on the idea that humans detect invariance patterns in categorical stimuli as a necessary precursor to concept formation is proposed and tested. In GIST (generalized invariance structure theory) invariants are detected via a perturbation mechanism of dimension suppression referred to as dimensional binding. Structural information acquired by this process is stored as a compound memory trace termed an ideotype. Ideotypes inform the subsystems that are responsible for learnability judgments, rule formation, and other types of concept representations. We show that GIST is more general (e.g., it works on continuous, semi-continuous, and binary stimuli) and makes much more accurate predictions than the leading models of concept learning difficulty, such as those based on a complexity reduction principle (e.g., number of mental models, structural invariance, algebraic complexity, and minimal description length) and those based on selective attention and similarity (GCM, ALCOVE, and SUSTAIN). GIST unifies these two key aspects of concept learning and categorization. Empirical evidence from three experiments corroborates the predictions made by the theory and its core model, which we propose as a candidate law of human conceptual behavior.
Parametric bootstrap approximation to the distribution of EBLUP and related prediction intervals in linear mixed models
The empirical best linear unbiased prediction (EBLUP) method uses a linear
mixed model to combine information from different sources. This
method is particularly useful in small area problems. The variability of an
EBLUP is traditionally measured by the mean squared prediction error (MSPE),
and interval estimates are generally constructed using estimates of the MSPE.
Such methods suffer from shortcomings such as under-coverage, over-coverage,
excessive length, and lack of interpretability. We propose a parametric bootstrap approach
to estimate the entire distribution of a suitably centered and scaled EBLUP.
The bootstrap histogram is highly accurate, and differs from the true EBLUP
distribution by only O(d^3 n^{-3/2}), where d is the number of parameters
and n is the number of observations. This result is used to obtain highly
accurate prediction intervals. Simulation results demonstrate the superiority
of this method over existing techniques of constructing prediction intervals in
linear mixed models.

Comment: Published at http://dx.doi.org/10.1214/07-AOS512 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
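The parametric bootstrap idea can be sketched on a simple area-level (Fay-Herriot-type) mixed model: fit the model, regenerate data from the fitted model many times, track the prediction error of the re-fitted EBLUP, and invert its quantiles into an interval. The moment estimator and the specific setup below are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_fay_herriot(y, D):
    """Moment estimates for the area-level model y_i = mu + v_i + e_i,
    v_i ~ N(0, A), e_i ~ N(0, D_i) (a simple stand-in for ML/REML)."""
    mu = y.mean()
    A = max(((y - mu) ** 2 - D).mean(), 0.0)
    return mu, A

def eblup(y, D, mu, A):
    """EBLUP of theta_i: shrink the direct estimate toward the mean."""
    gamma = A / (A + D)
    return gamma * y + (1 - gamma) * mu

# simulate one data set (hypothetical setup)
m, mu_true, A_true = 40, 5.0, 1.0
D = np.full(m, 0.5)                      # known sampling variances
theta = mu_true + rng.normal(0, np.sqrt(A_true), m)
y = theta + rng.normal(0, np.sqrt(D))

mu_hat, A_hat = fit_fay_herriot(y, D)
pred = eblup(y, D, mu_hat, A_hat)

# parametric bootstrap: resample from the fitted model, track the
# prediction error for area 0, then invert its quantiles
B, errs = 2000, []
for _ in range(B):
    th_b = mu_hat + rng.normal(0, np.sqrt(A_hat), m)
    y_b = th_b + rng.normal(0, np.sqrt(D))
    mu_b, A_b = fit_fay_herriot(y_b, D)
    errs.append(eblup(y_b, D, mu_b, A_b)[0] - th_b[0])
lo, hi = np.quantile(errs, [0.025, 0.975])
interval = (pred[0] - hi, pred[0] - lo)
print(interval)
```

Because the endpoints come from the bootstrap distribution of the prediction error itself, rather than from a normal approximation with an estimated MSPE plugged in, the interval can correct the under- or over-coverage the abstract describes.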