Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction
It is difficult to find the optimal sparse solution of a manifold learning
based dimensionality reduction algorithm. The lasso- or elastic-net-penalized
manifold learning problem is not directly a lasso-penalized least squares
problem, and thus least angle regression (LARS) (Efron et al.), one of the most
popular algorithms in sparse learning, cannot be applied. Therefore, most
current approaches take indirect routes or impose strict settings, which can be
inconvenient in applications. In this paper, we propose the manifold elastic
net, or MEN for short. MEN
incorporates the merits of both the manifold learning based dimensionality
reduction and the sparse learning based dimensionality reduction. By using a
series of equivalent transformations, we show that MEN is equivalent to a
lasso-penalized least squares problem, and thus LARS can be adopted to obtain the optimal
sparse solution of MEN. In particular, MEN has the following advantages for
subsequent classification: 1) the local geometry of samples is well preserved
for low dimensional data representation, 2) both the margin maximization and
the classification error minimization are considered for sparse projection
calculation, 3) the sparsity of the MEN projection matrix improves
computational parsimony, 4) the elastic net penalty reduces over-fitting, and
5) the projection matrix of MEN can be interpreted psychologically and
physiologically. Experimental evidence on face recognition over various
popular datasets suggests that MEN is superior to top-level dimensionality
reduction algorithms.
Comment: 33 pages, 12 figures
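To illustrate the kind of equivalence the abstract alludes to (an editorial sketch under assumptions, not MEN's actual transformations): an elastic-net-penalized least squares problem can be rewritten as a pure lasso problem by augmenting the data with a scaled identity block, after which any lasso solver, such as coordinate descent or LARS, applies. The function names below are hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the scalar lasso solution."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net_via_lasso(X, y, lam1, lam2, n_iter=200):
    """Solve min ||y - Xw||^2 + lam1*||w||_1 + lam2*||w||^2 by rewriting
    it as a pure lasso problem on augmented data (stacking sqrt(lam2)*I
    under X absorbs the ridge term into the least-squares term), then
    running coordinate descent on the resulting lasso problem."""
    n, p = X.shape
    Xa = np.vstack([X, np.sqrt(lam2) * np.eye(p)])
    ya = np.concatenate([y, np.zeros(p)])
    w = np.zeros(p)
    col_sq = (Xa ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed.
            r = ya - Xa @ w + Xa[:, j] * w[j]
            w[j] = soft_threshold(Xa[:, j] @ r, lam1 / 2) / col_sq[j]
    return w
```

The augmentation step is the standard elastic-net-to-lasso reduction; the abstract's "series of equivalent transformations" is a more involved construction specific to the manifold learning objective.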
How to Solve Classification and Regression Problems on High-Dimensional Data with a Supervised Extension of Slow Feature Analysis
Supervised learning from high-dimensional data, e.g., multimedia data, is a challenging task. We propose an extension of slow feature analysis (SFA) for supervised dimensionality reduction called graph-based SFA (GSFA). The algorithm extracts a label-predictive low-dimensional set of features that can be post-processed by typical supervised algorithms to generate the final label or class estimate. GSFA is trained with a so-called training graph, in which the vertices are the samples and the edges represent similarities of the corresponding labels. A new weighted SFA optimization problem is introduced, generalizing the notion of slowness from sequences of samples to such training graphs. We show that GSFA computes an optimal solution to this problem in the considered function space, and propose several types of training graphs. For classification, the most straightforward graph yields features equivalent to those of (nonlinear) Fisher discriminant analysis. The emphasis is on regression, where four different graphs were evaluated experimentally on a subproblem of face detection in photographs. The proposed method is particularly promising when linear models are insufficient, as well as when feature selection is difficult.
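A minimal linear sketch of the graph-based slowness idea (a toy under simplifying assumptions, not the GSFA implementation): minimize the edge-weighted squared differences of a linear feature over a training graph, normalized by the degree-weighted variance, which reduces to a generalized eigenproblem involving the graph Laplacian.

```python
import numpy as np

def slowest_graph_feature(X, W):
    """Return the linear feature w minimizing the graph slowness
    sum_ij W_ij (w.x_i - w.x_j)^2, normalized by the degree-weighted
    variance: a linear toy version of the weighted-slowness objective.
    X: (n, d) centered samples; W: (n, n) symmetric edge weights."""
    D = np.diag(W.sum(axis=1))
    L = D - W                          # graph Laplacian
    A = X.T @ L @ X                    # slowness quadratic form
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])  # normalization term
    # Generalized eigenproblem A w = lam B w via B^{-1/2} whitening.
    evals_B, evecs_B = np.linalg.eigh(B)
    Bihalf = evecs_B @ np.diag(evals_B ** -0.5) @ evecs_B.T
    evals, evecs = np.linalg.eigh(Bihalf @ A @ Bihalf)
    return Bihalf @ evecs[:, 0]        # smallest eigenvalue = slowest feature
```

On a chain graph (a plain sample sequence) this recovers linear SFA; other edge weightings encode label similarity as in the training graphs described above.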
A Comparison of Relaxations of Multiset Canonical Correlation Analysis and Applications
Canonical correlation analysis is a statistical technique that is used to
find relations between two sets of variables. An important extension in pattern
analysis is to consider more than two sets of variables. This problem can be
expressed as a quadratically constrained quadratic program (QCQP), commonly
referred to as Multi-set Canonical Correlation Analysis (MCCA). This is a
non-convex problem and so greedy algorithms converge to local optima without
any guarantees on global optimality. In this paper, we show that, despite the
problem being highly structured, finding the optimal solution is NP-hard. This motivates our
relaxation of the QCQP to a semidefinite program (SDP). The SDP is convex, can
be solved reasonably efficiently, and comes with both absolute and
output-sensitive approximation-quality guarantees. In addition to these theoretical guarantees,
we do an extensive comparison of the QCQP method and the SDP relaxation on a
variety of synthetic and real-world data. Finally, we present two useful
extensions: incorporating kernel methods and computing multiple sets of
canonical vectors.
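For intuition, here is a sketch of the kind of greedy local method the abstract contrasts with its SDP relaxation: a Horst-style block power iteration for MCCA on whitened views. This is a hypothetical illustration, and, as the abstract notes, such iterations converge only to a local optimum.

```python
import numpy as np

def whiten(X):
    """Center and whiten one view so its covariance is the identity."""
    X = X - X.mean(axis=0)
    C = X.T @ X / len(X)
    ev, E = np.linalg.eigh(C)
    return X @ E @ np.diag(ev ** -0.5) @ E.T

def mcca_horst(views, n_iter=200, seed=0):
    """Greedy Horst-style iteration for multi-set CCA on whitened views:
    maximize sum_{i<j} w_i^T C_ij w_j subject to ||w_i|| = 1 per view.
    Each block update is exact coordinate ascent, so the objective is
    monotone, but only a local optimum is guaranteed."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    C = [[Xi.T @ Xj / n for Xj in views] for Xi in views]
    w = [rng.standard_normal(X.shape[1]) for X in views]
    w = [v / np.linalg.norm(v) for v in w]
    for _ in range(n_iter):
        for i in range(len(views)):
            g = sum(C[i][j] @ w[j] for j in range(len(views)) if j != i)
            w[i] = g / np.linalg.norm(g)
    return w
```

The SDP relaxation studied in the paper replaces the rank-one unknowns with a positive semidefinite matrix, trading this cheap local search for convexity and approximation guarantees.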
High-Dimensional Stochastic Design Optimization by Adaptive-Sparse Polynomial Dimensional Decomposition
This paper presents a novel adaptive-sparse polynomial dimensional
decomposition (PDD) method for stochastic design optimization of complex
systems. The method entails an adaptive-sparse PDD approximation of a
high-dimensional stochastic response for statistical moment and reliability
analyses; a novel integration of the adaptive-sparse PDD approximation and
score functions for estimating the first-order design sensitivities of the
statistical moments and failure probability; and standard gradient-based
optimization algorithms. New analytical formulae are presented for the design
sensitivities that are simultaneously determined along with the moments or the
failure probability. Numerical results stemming from mathematical functions
indicate that the new method provides more computationally efficient design
solutions than the existing methods. Finally, stochastic shape optimization of
a jet engine bracket with 79 variables was performed, demonstrating the power
of the new method to tackle practical engineering problems.Comment: 18 pages, 2 figures, to appear in Sparse Grids and
Applications--Stuttgart 2014, Lecture Notes in Computational Science and
Engineering 109, edited by J. Garcke and D. Pfl\"{u}ger, Springer
International Publishing, 201
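The score-function idea can be shown in miniature, independent of the PDD machinery (a hedged sketch with a hypothetical helper, not the paper's method): for X ~ N(mu, sigma^2), d/dmu E[h(X)] = E[h(X) (X - mu)/sigma^2], so a statistical moment and its design sensitivity are estimated from the same Monte Carlo sample.

```python
import numpy as np

def moment_and_sensitivity(h, mu, sigma, n=200_000, seed=0):
    """Monte Carlo estimate of E[h(X)] and its design sensitivity
    d/dmu E[h(X)] for X ~ N(mu, sigma^2), via the score function
    (X - mu) / sigma^2. Both estimates reuse the SAME sample, which is
    the practical appeal of score-function sensitivity analysis."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, n)
    hx = h(x)
    score = (x - mu) / sigma ** 2
    return hx.mean(), (hx * score).mean()
```

For h(x) = x^2 with mu = 1.5 and sigma = 1, the exact values are E[h(X)] = mu^2 + sigma^2 = 3.25 and d/dmu E[h(X)] = 2*mu = 3, so both estimates can be checked directly. The paper replaces the raw Monte Carlo evaluation of h with an adaptive-sparse PDD surrogate to make this affordable in high dimensions.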
Metaheuristic design of feedforward neural networks: a review of two decades of research
Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged from FNN optimization practices, such as evolving neural networks (NN), cooperative coevolution NN, complex-valued NN, deep learning, extreme learning machines, quantum NN, etc. Additionally, it provides interesting research challenges for future research to cope with the present information processing era.
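As a toy illustration of metaheuristic FNN weight optimization (an editorial sketch, not any particular surveyed algorithm): a (1+1) evolution strategy that mutates a flat weight vector of a small tanh network with Gaussian noise and keeps a mutation only if the loss does not increase. No gradients are used, which is precisely the appeal over backpropagation when gradients are unavailable or unhelpful.

```python
import numpy as np

def mlp_forward(params, X, hidden=4):
    """Tiny one-hidden-layer tanh network; params is one flat vector."""
    d = X.shape[1]
    W1 = params[:d * hidden].reshape(d, hidden)
    b1 = params[d * hidden:d * hidden + hidden]
    w2 = params[d * hidden + hidden:d * hidden + 2 * hidden]
    b2 = params[-1]
    return np.tanh(X @ W1 + b1) @ w2 + b2

def evolve(X, y, hidden=4, steps=5000, sigma=0.3, seed=0):
    """(1+1) evolution strategy with slowly annealed mutation strength:
    keep one parent parameter vector, propose a Gaussian mutation, and
    accept it if the mean squared error does not increase."""
    rng = np.random.default_rng(seed)
    n_params = X.shape[1] * hidden + 2 * hidden + 1
    p = 0.5 * rng.standard_normal(n_params)
    loss = lambda q: float(np.mean((mlp_forward(q, X, hidden) - y) ** 2))
    best = loss(p)
    for t in range(steps):
        cand = p + sigma * (0.999 ** t) * rng.standard_normal(n_params)
        c = loss(cand)
        if c <= best:
            p, best = cand, c
    return p, best
```

On the XOR problem, which no linear model can fit below a mean squared error of 0.25, this gradient-free search drives the loss well below that baseline.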
Model reduction for the dynamics and control of large structural systems via neural network processing direct numerical optimization
Three neural network processing approaches in a direct numerical optimization model reduction scheme are proposed and investigated. Large structural systems, such as large space structures, offer new challenges to both structural dynamicists and control engineers. One such challenge is that of dimensionality. Indeed these distributed parameter systems can be modeled either by infinite dimensional mathematical models (typically partial differential equations) or by high dimensional discrete models (typically finite element models) often exhibiting thousands of vibrational modes usually closely spaced and with little, if any, damping. Clearly, some form of model reduction is in order, especially for the control engineer who can actively control but a few of the modes using system identification based on a limited number of sensors. Inasmuch as the amount of 'control spillover' (in which the control inputs excite the neglected dynamics) and/or 'observation spillover' (where neglected dynamics affect system identification) is to a large extent determined by the choice of particular reduced model (RM), the way in which this model reduction is carried out is often critical
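A compact sketch of the classical baseline such schemes build on, modal truncation (the neural network processing itself is beyond a short example; names here are illustrative): keep the k lowest vibration modes of M x'' + K x = 0 and project the dynamics onto them.

```python
import numpy as np

def modal_truncation(K, M, k):
    """Reduce M x'' + K x = 0 by keeping the k lowest vibration modes.
    Returns the modal basis Phi (n, k) and the k lowest natural
    frequencies. Solves the generalized symmetric eigenproblem
    K v = w^2 M v via a Cholesky factorization of the mass matrix."""
    Lc = np.linalg.cholesky(M)
    Linv = np.linalg.inv(Lc)
    A = Linv @ K @ Linv.T
    w2, V = np.linalg.eigh(A)          # eigenvalues in ascending order
    Phi = Linv.T @ V[:, :k]            # mass-orthonormal mode shapes
    return Phi, np.sqrt(np.maximum(w2[:k], 0.0))
```

Which modes to keep, and how the resulting spillover behaves, is exactly the reduced-model choice the abstract calls critical; the retained basis Phi satisfies Phi^T M Phi = I, so the reduced equations stay decoupled.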
Cluster-based feedback control of turbulent post-stall separated flows
We propose a novel model-free, self-learning, cluster-based control strategy
as a general nonlinear feedback flow control technique, benchmarked on
high-fidelity simulations of post-stall separated flows over an airfoil. The
present approach partitions the flow trajectories (force measurements) into
clusters, which correspond to characteristic coarse-grained phases in a
low-dimensional feature space. A feedback control law is then sought for each
cluster state through iterative evaluation and downhill simplex search to
minimize power consumption in flight. Unsupervised clustering of the flow
trajectories for in-situ learning and optimization of coarse-grained control
laws are implemented in an automated manner as key enablers. Re-routing the
flow trajectories, the optimized control laws shift the cluster populations to
the aerodynamically favorable states. Utilizing a limited number of sensor
measurements for both clustering and optimization, these feedback laws were
determined in only a handful of iterations. The objective of the present work
is not necessarily to suppress flow separation but to minimize a prescribed
cost function to achieve enhanced aerodynamic performance. The present control
approach is applied to the control of two- and three-dimensional separated
flows over a NACA 0012 airfoil with large-eddy simulations at a fixed angle of
attack, Reynolds number, and free-stream Mach number. The optimized control
laws effectively minimize the flight power
consumption enabling the flows to reach a low-drag state. The present work aims
to address the challenges associated with adaptive feedback control design for
turbulent separated flows at moderate Reynolds number.
Comment: 32 pages, 18 figures
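A highly simplified sketch of the cluster-based control loop (hypothetical names; plain k-means plus a coarse per-cluster gain search stands in for the paper's unsupervised clustering and downhill simplex optimization):

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain k-means with deterministic farthest-point initialization:
    partition measurements into coarse-grained cluster states."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(0)
    return centers, labels

def fit_cluster_laws(X, labels, cost, gains):
    """For each cluster state, pick the candidate gain with the lowest
    mean cost over that cluster's samples (a coarse grid search standing
    in for the iterative downhill simplex evaluation)."""
    k = int(labels.max()) + 1
    return np.array([min(gains, key=lambda g: cost(X[labels == c], g).mean())
                     for c in range(k)])
```

The real method evaluates each candidate law on the flow itself (power consumption in flight) rather than on a known cost function, which is why the in-situ, automated evaluation loop is the key enabler.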