Functional Regression
Functional data analysis (FDA) involves the analysis of data whose ideal
units of observation are functions defined on some continuous domain, and the
observed data consist of a sample of functions taken from some population,
sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the
development of this field, which has accelerated in the past 10 years to become
one of the fastest-growing areas of statistics, fueled by the growing number of
applications yielding this type of data. One unique characteristic of FDA is
the need to combine information both across and within functions, which Ramsay
and Silverman called replication and regularization, respectively. This article
will focus on functional regression, the area of FDA that has received the most
attention in applications and methodological development. First comes an
introduction to basis functions, the key building blocks for regularization in
functional regression, followed by an overview of functional regression
methods, split into three types: [1] functional predictor regression
(scalar-on-function), [2] functional response regression (function-on-scalar),
and [3] function-on-function regression. For each, the role of replication and
regularization will be discussed and the methodological development described
in a roughly chronological manner, at times deviating from the historical
timeline to group together similar methods. The primary focus is on modeling
and methodology, highlighting the modeling structures that have been developed
and the various regularization approaches employed. At the end is a brief
discussion describing potential areas of future development in this field.
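As a rough illustration of the basis-function machinery the article builds on, the sketch below fits a scalar-on-function regression (type [1]) by expanding the coefficient function in a Fourier basis and penalizing the basis coefficients. The basis choice, grid, penalty, and simulated data are all assumptions for illustration, not examples from the article.

```python
import numpy as np

# A minimal sketch of scalar-on-function regression (type [1] above).
# The functional coefficient beta(t) is expanded in a Fourier basis;
# a ridge penalty on the basis coefficients supplies the regularization.
# All names and settings here are illustrative, not from the article.

rng = np.random.default_rng(0)
n, m, K = 200, 101, 7                 # curves, grid points, basis functions
t = np.linspace(0.0, 1.0, m)

# Fourier basis evaluated on the grid: 1, sin(2*pi*j*t), cos(2*pi*j*t), ...
def fourier_basis(t, K):
    cols = [np.ones_like(t)]
    j = 1
    while len(cols) < K:
        cols.append(np.sin(2 * np.pi * j * t))
        if len(cols) < K:
            cols.append(np.cos(2 * np.pi * j * t))
        j += 1
    return np.column_stack(cols)      # shape (m, K)

Phi = fourier_basis(t, K)

# Simulated functional predictors and a "true" coefficient function
X = rng.normal(size=(n, m)).cumsum(axis=1) / np.sqrt(m)   # rough sample paths
beta_true = np.sin(2 * np.pi * t)
dt = t[1] - t[0]
y = X @ beta_true * dt + rng.normal(scale=0.1, size=n)    # y_i = int X_i beta + noise

# Design matrix: Z[i, k] = int X_i(t) phi_k(t) dt (Riemann approximation)
Z = X @ Phi * dt

# Ridge-penalized least squares for the basis coefficients
lam = 1e-4
b = np.linalg.solve(Z.T @ Z + lam * np.eye(K), Z.T @ y)
beta_hat = Phi @ b                    # estimated coefficient function on the grid

print("max abs error in beta(t):", np.abs(beta_hat - beta_true).max())
```

A richer basis (e.g., B-splines) or a roughness penalty on the second derivative of beta(t) would slot into the same structure; only the basis matrix Phi and the penalty matrix change.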
Dynamic Linear Discriminant Analysis in High Dimensional Space
High-dimensional data that evolve dynamically are a prominent feature of the
modern data era. In partial response, recent years have seen increasing
emphasis on addressing the dimensionality challenge; however, the non-static
nature of these datasets is largely ignored. This paper addresses
both challenges by proposing a novel yet simple dynamic linear programming
discriminant (DLPD) rule for binary classification. Different from the usual
static linear discriminant analysis, the new method is able to capture the
changing distributions of the underlying populations by modeling their means
and covariances as smooth functions of covariates of interest. Under an
approximate sparsity condition, we show that the conditional misclassification
rate of the DLPD rule converges to the Bayes risk in probability uniformly over
the range of the variables used for modeling the dynamics, when the
dimensionality is allowed to grow exponentially with the sample size. The
minimax lower bound of the estimation of the Bayes risk is also established,
implying that the misclassification rate of our proposed rule is minimax-rate
optimal. The promising performance of the DLPD rule is illustrated via
extensive simulation studies and the analysis of a breast cancer dataset.
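To make the "smooth functions of covariates" idea concrete, here is a minimal kernel-smoothing sketch of a dynamic discriminant rule: class means and the pooled covariance are estimated locally in a covariate u, and a Fisher-type linear rule is formed at each u. This is a plain kernel-smoothed Fisher rule, not the authors' sparse linear-programming estimator; the bandwidth, ridge term, and data are illustrative assumptions.

```python
import numpy as np

# A hedged sketch of a dynamic discriminant rule: class means and the pooled
# covariance are estimated as smooth functions of a covariate u by
# Nadaraya-Watson kernel smoothing, and a linear rule is formed at each u.
# This is NOT the authors' sparse linear-programming estimator; bandwidth h
# and all data are illustrative.

rng = np.random.default_rng(1)
n, p, h = 400, 5, 0.15

u = rng.uniform(size=n)                               # dynamic covariate
labels = rng.integers(0, 2, size=n)
shift = np.outer(np.sin(2 * np.pi * u), np.ones(p))   # mean drifts smoothly in u
X = rng.normal(size=(n, p)) + labels[:, None] * shift

def gaussian_kernel(d, h):
    return np.exp(-0.5 * (d / h) ** 2)

def classify_at(u0, x0):
    w = gaussian_kernel(u - u0, h)
    mus, cov = [], np.zeros((p, p))
    for g in (0, 1):
        wg = w * (labels == g)
        mu = (wg @ X) / wg.sum()                      # local class mean at u0
        mus.append(mu)
        R = X - mu
        cov += (wg[:, None] * R).T @ R / wg.sum()     # local class covariance
    cov /= 2                                          # pooled across classes
    # Fisher linear discriminant at covariate value u0 (small ridge for stability)
    delta = np.linalg.solve(cov + 1e-6 * np.eye(p), mus[1] - mus[0])
    score = (x0 - (mus[0] + mus[1]) / 2) @ delta
    return int(score > 0)

# Classify a new point observed at covariate value u0 = 0.3
x_new = rng.normal(size=p) + np.sin(2 * np.pi * 0.3)
print("predicted class:", classify_at(0.3, x_new))
```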
The probabilistic neural network architecture for high speed classification of remotely sensed imagery
In this paper we discuss a neural network architecture (the Probabilistic Neural Network, or PNN) that, to the best of our knowledge, has not previously been applied to remotely sensed data. The PNN is a supervised, non-parametric classification algorithm, in contrast to the Gaussian maximum likelihood classifier (GMLC). The PNN works by fitting a Gaussian kernel to each training point. The width of the Gaussian is controlled by a tuning parameter called the window width. If very small widths are used, the method is equivalent to the nearest-neighbor method; for large windows, the PNN behaves like the GMLC. The basic implementation of the PNN requires no training time at all. In this respect it is far better than the commonly used backpropagation neural network, which can be shown to take O(N^6) time for training, where N is the dimensionality of the input vector. In addition, the PNN can be implemented in a feed-forward mode in hardware. The disadvantage of the PNN is that it requires all the training data to be stored. Some solutions to this problem are discussed in the paper. Finally, we discuss the accuracy of the PNN with respect to the GMLC and the backpropagation neural network (BPNN). The PNN is shown to be better than the GMLC and not as good as the BPNN with regard to classification accuracy.
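Since the abstract spells out the PNN decision rule, a compact sketch follows: each class density is a mixture of Gaussian kernels centered at that class's training points, and the window width sigma interpolates between nearest-neighbor and GMLC-like behavior. The data and sigma values are illustrative assumptions, not from the paper.

```python
import numpy as np

# A minimal sketch of the Probabilistic Neural Network (PNN) rule described
# above: each class density is a sum of Gaussian kernels, one centered at
# every training point, with a single window width sigma. Data and sigma
# values below are illustrative.

rng = np.random.default_rng(2)

def pnn_predict(X_train, y_train, X_test, sigma):
    classes = np.unique(y_train)
    scores = np.empty((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        # squared distances from every test point to every training point in c
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        # kernel density estimate (normalizing constants cancel in the argmax)
        scores[:, j] = np.exp(-d2 / (2 * sigma**2)).mean(axis=1)
    return classes[scores.argmax(axis=1)]

# Two Gaussian blobs as stand-ins for two land-cover classes
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4))])
y = np.repeat([0, 1], 100)

# Small sigma ~ nearest-neighbor behavior; large sigma ~ GMLC-like behavior
for sigma in (0.01, 0.5, 50.0):
    acc = (pnn_predict(X, y, X, sigma) == y).mean()
    print(f"sigma={sigma:>5}: training accuracy = {acc:.2f}")
```

Note there is no training phase: "fitting" amounts to storing the training set, which is exactly the storage disadvantage the abstract mentions.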
The NWRA Classification Infrastructure: Description and Extension to the Discriminant Analysis Flare Forecasting System (DAFFS)
A classification infrastructure built upon Discriminant Analysis has been
developed at NorthWest Research Associates for examining the statistical
differences between samples of two known populations. Originating to examine
the physical differences between flare-quiet and flare-imminent solar active
regions, we describe herein some details of the infrastructure including:
parametrization of large datasets, schemes for handling "null" and "bad" data
in multi-parameter analysis, application of non-parametric multi-dimensional
Discriminant Analysis, an extension through Bayes' theorem to probabilistic
classification, and methods invoked for evaluating classifier success. The
classifier infrastructure is applicable to a wide range of scientific questions
in solar physics. We demonstrate its application to the question of
distinguishing flare-imminent from flare-quiet solar active regions, updating
results from the original publications that were based on different data and
much smaller sample sizes. Finally, as a demonstration of "Research to
Operations" efforts in the space-weather forecasting context, we present the
Discriminant Analysis Flare Forecasting System (DAFFS), a near-real-time,
operationally running solar flare forecasting tool that was developed from the
research-directed infrastructure.
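The Bayes'-theorem extension to probabilistic classification mentioned above can be sketched in a few lines: with kernel density estimates for the two populations and their prior proportions, the forecast is the posterior probability of the flaring class. The densities, priors, and bandwidth below are placeholder assumptions, not DAFFS parameters.

```python
import numpy as np

# A hedged sketch of probabilistic classification via Bayes' theorem: given
# nonparametric (kernel) density estimates for the flare-imminent and
# flare-quiet populations, the forecast is the posterior P(flare | x).
# Densities, priors, and bandwidths here are illustrative placeholders.

rng = np.random.default_rng(3)

def kde(x_query, sample, h):
    """1-D Gaussian kernel density estimate at each point of x_query."""
    d = (x_query[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

# Synthetic stand-ins for one active-region parameter in each population
quiet = rng.normal(0.0, 1.0, size=5000)     # flare-quiet regions
flare = rng.normal(1.5, 1.2, size=500)      # flare-imminent regions
prior_flare = len(flare) / (len(flare) + len(quiet))

x = np.linspace(-3, 6, 7)
f1 = kde(x, flare, h=0.3) * prior_flare
f0 = kde(x, quiet, h=0.3) * (1 - prior_flare)
posterior = f1 / (f1 + f0)                  # Bayes' theorem

for xi, pi in zip(x, posterior):
    print(f"parameter = {xi:+.2f}  ->  P(flare) = {pi:.3f}")
```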
Supervised Classification: Quite a Brief Overview
The original problem of supervised classification considers the task of
automatically assigning objects to their respective classes on the basis of
numerical measurements derived from these objects. Classifiers are the tools
that implement the actual functional mapping from these measurements---also
called features or inputs---to the so-called class label---or output. The
fields of pattern recognition and machine learning study ways of constructing
such classifiers. The main idea behind supervised methods is that of learning
from examples: given a number of example input-output relations, to what extent
can the general mapping be learned that takes any new and unseen feature vector
to its correct class? This chapter provides a basic introduction to the
underlying ideas of how to come to a supervised classification problem. In
addition, it provides an overview of some specific classification techniques,
delves into the issues of object representation and classifier evaluation, and
(very) briefly covers some variations on the basic supervised classification
task that may also be of interest to the practitioner
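As a minimal companion to the learning-from-examples idea, the sketch below trains a nearest-mean classifier on labeled feature vectors and evaluates the learned mapping on held-out data. The classifier choice and data are illustrative, standing in for the many techniques the chapter surveys.

```python
import numpy as np

# A minimal sketch of "learning from examples": fit a nearest-mean classifier
# on labeled feature vectors, then evaluate the learned mapping on unseen
# data. Data and classifier choice are illustrative only.

rng = np.random.default_rng(4)

# Labeled examples: feature vectors (inputs) with class labels (outputs)
X = np.vstack([rng.normal(-1, 1, (150, 2)), rng.normal(1, 1, (150, 2))])
y = np.repeat([0, 1], 150)

# Hold out unseen examples to estimate generalization, not training fit
idx = rng.permutation(len(y))
train, test = idx[:200], idx[200:]

# "Training": the learned mapping is distance-to-class-mean
means = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])

# "Prediction": assign each unseen feature vector to the nearest class mean
d = ((X[test][:, None, :] - means[None, :, :]) ** 2).sum(-1)
y_hat = d.argmin(axis=1)

print("test accuracy:", (y_hat == y[test]).mean())
```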
Classification methods for Hilbert data based on surrogate density
Unsupervised and supervised classification approaches for Hilbert random
curves are studied. Both rest on the use of a surrogate of the probability
density which is defined, in a distribution-free mixture context, from an
asymptotic factorization of the small-ball probability. That surrogate density
is estimated by a kernel approach from the principal components of the data.
The focus is on the illustration of the classification algorithms and the
computational implications, with particular attention to the tuning of the
parameters involved. Some asymptotic results are sketched. Applications on
simulated and real datasets show how the proposed methods work.
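A hedged sketch of the overall pipeline: project the curves onto their leading principal components, estimate a Gaussian-kernel density per class on the scores, and classify a new curve by the larger estimated density. The number of components and the bandwidth, the tuning parameters the paper emphasizes, are set arbitrarily here; the simulated curves are not the paper's data.

```python
import numpy as np

# A hedged sketch of the surrogate-density idea: project curves onto leading
# principal components, estimate a kernel density on the scores per class,
# and classify a new curve by the larger estimated class density. The number
# of components q and bandwidth h are illustrative, not the paper's tuning.

rng = np.random.default_rng(5)
n, m, q, h = 150, 100, 2, 0.5          # curves per class, grid, PCs, bandwidth
t = np.linspace(0, 1, m)

# Two classes of random curves with different mean functions
A = np.sin(2 * np.pi * t) + rng.normal(0, 0.5, (n, m))
B = np.cos(2 * np.pi * t) + rng.normal(0, 0.5, (n, m))
X = np.vstack([A, B])

# Principal components of the pooled, centered curves (FPCA via SVD)
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
scores = (X - mean) @ Vt[:q].T          # (2n, q) principal-component scores

def class_density(z, S, h):
    """Gaussian-kernel density of score vector z under class scores S."""
    d2 = ((z - S) ** 2).sum(axis=1) / h**2
    return np.exp(-0.5 * d2).mean()

# Classify a new curve drawn from class A
x_new = np.sin(2 * np.pi * t) + rng.normal(0, 0.5, m)
z = (x_new - mean) @ Vt[:q].T
dens = [class_density(z, scores[:n], h), class_density(z, scores[n:], h)]
print("assigned class:", "A" if dens[0] > dens[1] else "B")
```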