Hyperparameter Estimation for Sparse Bayesian Learning Models
Sparse Bayesian Learning (SBL) models are extensively used in signal
processing and machine learning for promoting sparsity through hierarchical
priors. The hyperparameters in SBL models are crucial for the model's
performance, but they are often difficult to estimate due to the non-convexity
and the high-dimensionality of the associated objective function. This paper
presents a comprehensive framework for hyperparameter estimation in SBL models,
encompassing well-known algorithms such as the expectation-maximization (EM),
MacKay, and convex bounding (CB) algorithms. These algorithms are cohesively
interpreted within an alternating minimization and linearization (AML)
paradigm, distinguished by their unique linearized surrogate functions.
Additionally, a novel algorithm within the AML framework is introduced, showing
enhanced efficiency, especially at low signal-to-noise ratios. This is further
improved by a new alternating minimization and quadratic approximation (AMQ)
paradigm, which includes a proximal regularization term. The paper
substantiates these advancements with thorough convergence analysis and
numerical experiments, demonstrating the algorithm's effectiveness in various
noise conditions and signal-to-noise ratios.
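The EM and MacKay updates mentioned above have well-known fixed-point forms. As a rough illustration only (a minimal sketch of the classical updates, not the paper's AML/AMQ algorithms; the names Phi, y, sigma2 and the iteration count are assumptions), the per-iteration structure looks like this:

```python
# Classical EM and MacKay hyperparameter updates for a Sparse Bayesian Learning
# model y = Phi @ x + noise, with x_i ~ N(0, gamma_i). Illustrative sketch only.
import numpy as np

def sbl_updates(Phi, y, sigma2=0.1, n_iter=50, rule="em"):
    n, m = Phi.shape
    gamma = np.ones(m)                      # hyperparameters (prior variances)
    for _ in range(n_iter):
        # Posterior of x given current hyperparameters (E-step)
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ Phi.T @ y / sigma2
        diagS = np.diag(Sigma)
        # Hyperparameter update
        if rule == "em":                    # EM fixed point
            gamma = mu**2 + diagS
        else:                               # MacKay fixed point
            gamma = mu**2 / np.maximum(1.0 - diagS / gamma, 1e-12)
        gamma = np.maximum(gamma, 1e-12)    # keep variances positive
    return mu, gamma

# Usage on a synthetic sparse recovery problem
rng = np.random.default_rng(0)
Phi = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]
y = Phi @ x_true + 0.05 * rng.standard_normal(50)
mu, gamma = sbl_updates(Phi, y, sigma2=0.05**2)
```

The two rules differ only in how the posterior moments are turned into new prior variances, which is exactly the kind of surrogate choice the AML framework makes explicit.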
The continuous-time pre-commitment KMM problem in incomplete markets
This paper studies the continuous-time pre-commitment KMM problem proposed by
Klibanoff, Marinacci and Mukerji (2005) in incomplete financial markets, which
concerns portfolio selection under smooth ambiguity. The decision
maker (DM) is uncertain about the dominated priors of the financial market,
which are characterized by a second-order distribution (SOD). The KMM model
separates risk attitudes from ambiguity attitudes, and the aim of the DM is
to maximize the two-fold utility of terminal wealth, which does not belong to
the classical subjective utility maximization problem. By constructing the
efficient frontier, the original KMM problem is first simplified to a one-fold
expected utility problem on the second-order space. In order to solve the
equivalent simplified problem, this paper imposes an assumption and introduces
a new distorted Legendre transformation to establish the bipolar relation and
the distorted duality theorem. Then, under a further assumption that the
asymptotic elasticity of the ambiguity attitude is less than 1, the existence
and uniqueness of the solution to the KMM problem are established and we obtain the
semi-explicit forms of the optimal terminal wealth and the optimal strategy.
Explicit forms of optimal strategies are presented for CRRA, CARA and HARA
utilities in the case of Gaussian SOD in a Black-Scholes financial market,
which show that a DM with higher ambiguity aversion is more concerned
about extreme market conditions with larger bias. At the end of this work,
numerical comparisons with DMs who ignore ambiguity are presented to
illustrate the effects of ambiguity on the optimal strategies and value
functions.
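For reference, the "two-fold utility" in the KMM smooth-ambiguity framework is a utility of a conditional expected utility. A sketch in assumed notation (u for the risk attitude, \phi for the ambiguity attitude, \mu for the second-order distribution over candidate priors P^\theta; this is the generic KMM criterion, not the paper's precise formulation) is:

```latex
% Generic KMM smooth-ambiguity (two-fold utility) objective, notation assumed
\max_{W_T}\;\int_{\Theta} \phi\!\Big(\mathbb{E}^{P^{\theta}}\big[\,u(W_T)\,\big]\Big)\,\mu(\mathrm{d}\theta)
```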
Generalized Sparse Bayesian Learning and Application to Image Reconstruction
Image reconstruction based on indirect, noisy, or incomplete data remains an important yet challenging task. While methods such as compressive sensing have demonstrated high-resolution image recovery in various settings, there remain issues of robustness due to parameter tuning. Moreover, since the recovery is limited to a point estimate, it is impossible to quantify the uncertainty, which is often desirable. Due to these inherent limitations, a sparse Bayesian learning approach is sometimes adopted to recover a posterior distribution of the unknown. Sparse Bayesian learning assumes that some linear transformation of the unknown is sparse. However, most of the methods developed are tailored to specific problems, with particular forward models and priors. Here, we present a generalized approach to sparse Bayesian learning. It has the advantage that it can be used for various types of data acquisition and prior information. Some preliminary results on image reconstruction/recovery indicate its potential use for denoising, deblurring, and magnetic resonance imaging.
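As a generic illustration of the hierarchy alluded to above (the symbols F for the forward operator, L for the sparsifying transform, and \theta for the hyperparameters are assumptions, not the paper's notation), such a model can be sketched as:

```latex
% Generic hierarchical sparse Bayesian model with a sparsifying transform L
y = F x + e,\qquad e \sim \mathcal{N}(0,\sigma^{2} I),\qquad
(Lx)_i \mid \theta_i \sim \mathcal{N}(0,\theta_i),\qquad
\theta_i \sim p(\theta_i),
```

so that estimating the posterior of x together with the hyperparameters \theta yields both a sparse reconstruction and a quantification of its uncertainty.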
Fast Multiscale Functional Estimation in Optimal EMG Placement for Robotic Prosthesis Controllers
Electromyography (EMG) signals play a significant role in decoding muscle
contraction information for robotic hand prosthesis controllers. Widely applied
decoders require a large number of EMG sensors, resulting in complicated
calculations and unsatisfactory predictions. Owing to the biomechanics of
single degree-of-freedom human hand movements, only a few EMG signals are
essential for accurate predictions. Recently, a novel predictor of hand
movements adopts a multistage Sequential, Adaptive Functional Estimation (SAFE)
method based on a historical Functional Linear Model (FLM) to select important
EMG signals and provide precise predictions.
However, SAFE repeatedly performs matrix-vector multiplications with a dense
representation matrix of the integral operator for the FLM, which is
computationally expensive. Noting that with a properly chosen basis, the
representation of the integral operator concentrates on a few bands of the
basis, the goal of this study is to develop a fast Multiscale SAFE (MSAFE)
method aiming at reducing computational costs while preserving (or even
improving) the accuracy of the original SAFE method. Specifically, a multiscale
piecewise polynomial basis is adopted to discretize the integral operator for
the FLM, resulting in an approximately sparse representation matrix, and then
the matrix is truncated to a sparse one. This approach not only accelerates
computations but also improves robustness against noise. When applied to real
hand movement data, MSAFE saves 85%-90% of the computing time compared with SAFE,
while producing better sensor selection and comparable accuracy. In a
simulation study, MSAFE shows stronger stability in sensor selection and
prediction accuracy against correlated noise than SAFE.
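The speedup described above comes from replacing dense matrix-vector products with sparse ones once the representation matrix has been truncated. A minimal sketch of that truncation step (using a generic magnitude threshold and scipy.sparse; the multiscale basis construction and the paper's actual truncation rule are not reproduced) might look as follows:

```python
# Truncate an approximately sparse representation matrix and store it in a
# sparse format so that repeated matrix-vector products become cheap.
# The threshold and the test matrix below are placeholders, not MSAFE's choices.
import numpy as np
from scipy import sparse

def truncate_to_sparse(A, tol=1e-8):
    """Zero out entries of A with magnitude below tol; return a CSR matrix."""
    A_trunc = np.where(np.abs(A) >= tol, A, 0.0)
    return sparse.csr_matrix(A_trunc)

# Usage: fast repeated products with the truncated operator
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 2000)) * (rng.random((2000, 2000)) < 0.01)
A_sparse = truncate_to_sparse(A, tol=1e-8)
v = rng.standard_normal(2000)
w = A_sparse @ v  # sparse matrix-vector product
```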
Reproducing Kernel Banach Spaces with the l1 Norm
Targeting sparse learning, we construct Banach spaces B of functions on an
input space X with the properties that (1) B possesses an l1 norm in the sense
that it is isometrically isomorphic to the Banach space of integrable functions
on X with respect to the counting measure; (2) point evaluations are continuous
linear functionals on B and are representable through a bilinear form with a
kernel function; (3) regularized learning schemes on B satisfy the linear
representer theorem. Examples of kernel functions admissible for the
construction of such spaces are given.
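One natural way to realize properties (1)-(3) (the notation below is assumed for illustration; the admissibility conditions on the kernel K that make the construction well defined are the subject of the paper) is to take:

```latex
% Sketch of an l1-norm reproducing kernel Banach space; notation assumed
\mathcal{B} := \Big\{ f = \sum_{z \in X} c_z\, K(\cdot, z) \;:\; \|f\|_{\mathcal{B}} := \sum_{z \in X} |c_z| < \infty \Big\},
\qquad f(x) = \sum_{z \in X} c_z\, K(x, z),
```

so that the norm mirrors the l1 norm of the coefficient sequence and point evaluation is given by the bilinear pairing with K.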