Contributions to Nonparametric Predictive Inference for Bernoulli Data with Applications in Finance
Imprecise probability is a generalization of precise probability theory with many advantages for uncertainty quantification. Many statistical methodologies within the imprecise probability framework have been developed to date, one of which is nonparametric predictive inference (NPI). NPI has been developed to handle various data types and has been applied successfully in a range of fields.
This thesis first develops NPI for Bernoulli data further to address two outstanding challenges: computing the imprecise expectation of a general function of observations at multiple future stages, and handling imprecise Bernoulli data. For the former, we introduce the concept of a mass function, drawing on Weichselberger's axiomatization of imprecise probability theory and Dempster-Shafer's notion of a basic probability assignment. Based on the mass function, an algorithm is proposed to find the imprecise expectation of a general function of a finite random variable. We then construct mass functions for single and multiple future stages of observations in NPI for Bernoulli data via its underlying latent variable representation, which makes the proposed algorithm applicable to NPI for Bernoulli data. For the latter, we extend the original NPI path-counting method in the underlying lattice representation. This leads to mass functions and imprecise probabilities for NPI with imprecise Bernoulli data, whose properties are illustrated with a numerical example.
Subsequently, under the binomial tree model, NPI for precise and imprecise Bernoulli data is applied to trading assets and European options, and NPI for Bernoulli data is applied to portfolio assessment. The performance of both applications is evaluated via simulations. The predictive nature of NPI for precise and imprecise Bernoulli data, and its ability to recognize noise, are validated, and the viability of applying NPI to portfolio assessment is confirmed.
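For orientation, the base case of NPI for precise Bernoulli data has simple closed-form bounds: after observing s successes in n trials, the NPI lower and upper probabilities that the next observation is a success are s/(n+1) and (s+1)/(n+1). A minimal sketch of these bounds (the function name and example values are ours):

    from fractions import Fraction

    def npi_bernoulli_next(n, s):
        # NPI lower/upper probability that observation n+1 is a success,
        # given s successes among the first n observations.
        return Fraction(s, n + 1), Fraction(s + 1, n + 1)

    low, up = npi_bernoulli_next(10, 7)
    print(low, up)  # 7/11 8/11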
Zero attracting recursive least squares algorithms
The l1-norm sparsity constraint is a widely used technique for constructing sparse models. In this contribution, two zero-attracting recursive least squares algorithms, referred to as ZA-RLS-I and ZA-RLS-II, are derived by employing an l1-norm constraint on the parameter vector to promote model sparsity. In order to achieve a closed-form solution, the l1-norm of the parameter vector is approximated by an adaptively weighted l2-norm, in which the weighting factors are set to the inverses of the absolute values of the associated parameter estimates, which are readily available in the adaptive learning environment. ZA-RLS-II is computationally more efficient than ZA-RLS-I, as it exploits known results from linear algebra as well as the sparsity of the system. The proposed algorithms are proven to converge, and adaptive sparse channel estimation is used to demonstrate the effectiveness of the proposed approach.
Locality Preserving Projections for Grassmann manifold
Learning on the Grassmann manifold has become popular in many computer vision tasks, owing to its strong capability to extract discriminative information from image sets and videos. However, such learning algorithms, particularly on high-dimensional Grassmann manifolds, typically incur significantly high computational cost, which seriously limits their applicability in wider areas. In this research, we propose an unsupervised dimensionality reduction algorithm on the Grassmann manifold based on the Locality Preserving Projections (LPP) criterion. LPP is a commonly used dimensionality reduction algorithm for vector-valued data, aiming to preserve the local structure of the data in the dimension-reduced space. The strategy is to construct a mapping from a high-dimensional Grassmann manifold to a relatively low-dimensional one with stronger discriminative capability. The proposed method can be optimized as a basic eigenvalue problem. Its performance is assessed on several classification and clustering tasks, and the experimental results show clear advantages over other Grassmann-based algorithms.
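For reference, the vector-valued LPP criterion that this work lifts to the Grassmann setting reduces to the generalized eigenvalue problem X L X^T a = lambda X D X^T a; on the manifold, Euclidean distances between vectors would be replaced by distances between subspaces (e.g. via a projection embedding). A sketch of the standard vector case (parameter names and defaults are ours):

    import numpy as np
    from scipy.linalg import eigh
    from scipy.spatial.distance import cdist

    def lpp(X, dim, k=5, t=1.0):
        # X is d x n with one sample per column. Build a heat-kernel
        # affinity over the k nearest neighbours, then solve
        # X L X^T a = lambda X D X^T a for the smallest eigenvalues.
        d, n = X.shape
        D2 = cdist(X.T, X.T, 'sqeuclidean')
        W = np.exp(-D2 / t)
        idx = np.argsort(D2, axis=1)[:, k + 1:]   # non-neighbours (col 0 is self)
        for i in range(n):
            W[i, idx[i]] = 0.0
        W = np.maximum(W, W.T)                    # symmetrise the graph
        Dg = np.diag(W.sum(axis=1))
        L = Dg - W                                # graph Laplacian
        A = X @ L @ X.T
        B = X @ Dg @ X.T + 1e-9 * np.eye(d)       # small ridge for stability
        vals, vecs = eigh(A, B)
        return vecs[:, :dim]                      # columns = projection directions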
l1-norm penalized orthogonal forward regression
An l1-norm penalized orthogonal forward regression (l1-POFR) algorithm is proposed based on the concept of the leave-one-out mean square error (LOOMSE), by defining a new l1-norm penalized cost function in the constructed orthogonal space and associating each orthogonal basis with an individually tunable regularization parameter. Owing to orthogonality, the LOOMSE can be computed analytically without actually splitting the data set; moreover, a closed form of the optimal regularization parameter is derived by greedily minimizing the LOOMSE incrementally. We also propose a simple formula for adaptively detecting insignificant regressors and moving them to an inactive set, so that the computational cost of the algorithm is significantly reduced. Examples are included to demonstrate the effectiveness of this new l1-POFR approach.
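The analytic LOOMSE rests on the fact that, with orthogonal regressors and per-basis regularization, the hat matrix is a sum of rank-one terms, so deleted residuals follow from ordinary residuals and leverages without any data splitting. A sketch of this evaluation under our own notation (the paper additionally derives the optimal lambda in closed form):

    import numpy as np

    def loomse_orthogonal(W, y, lams):
        # W is n x M with orthogonal columns w_k; lams holds one
        # regularization parameter per basis. Leverage H_ii reduces to
        # sum_k w_{k,i}^2 / (w_k^T w_k + lambda_k).
        col_energy = (W ** 2).sum(axis=0)                 # w_k^T w_k
        g = (W.T @ y) / (col_energy + lams)               # regularized weights
        resid = y - W @ g                                 # ordinary residuals
        h = ((W ** 2) / (col_energy + lams)).sum(axis=1)  # leverages H_ii
        loo = resid / (1.0 - h)                           # deleted residuals
        return np.mean(loo ** 2)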
Sparse density estimation on the multinomial manifold
A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion for the finite mixture model. Since the constraint on the mixing coefficients of the finite mixture model places them on the multinomial manifold, we use the well-known Riemannian trust-region (RTR) algorithm to solve this problem. The first- and second-order Riemannian geometry of the multinomial manifold is derived and utilized in the RTR algorithm. Numerical examples demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with that of existing kernel density estimators.
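To make the geometric constraint concrete: the mixing coefficients live on the probability simplex, and a standard device maps the simplex to the unit sphere via beta_k = alpha_k^2. The sketch below pairs that map with plain Riemannian gradient descent as a simple first-order stand-in for RTR, on the integrated-square-error criterion beta^T Q beta - 2 b^T beta, which is available in closed form for Gaussian kernels (the 1-D setting, plug-in cross term, names, and step sizes are our assumptions):

    import numpy as np

    def gauss(D2, s2):
        # 1-D Gaussian density with variance s2, from squared distances.
        return np.exp(-D2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

    def sparse_kde_simplex(x, sigma=0.3, steps=500, lr=0.1):
        # Kernels centred on the data; Q_kl = int K_k K_l has the
        # closed form K_{sqrt(2)*sigma}(x_k, x_l), and b_k estimates
        # int p K_k by the sample mean of K_sigma(x_i, x_k).
        n = len(x)
        D2 = (x[:, None] - x[None, :]) ** 2
        Q = gauss(D2, 2 * sigma ** 2)
        b = gauss(D2, sigma ** 2).mean(axis=1)
        alpha = np.full(n, 1.0 / np.sqrt(n))     # ||alpha|| = 1 => beta on simplex
        for _ in range(steps):
            beta = alpha ** 2
            grad_beta = 2 * (Q @ beta - b)       # gradient of the ISE criterion
            g = 2 * alpha * grad_beta            # chain rule through beta = alpha^2
            g -= (g @ alpha) * alpha             # project to sphere tangent space
            alpha = alpha - lr * g
            alpha /= np.linalg.norm(alpha)       # retract back onto the sphere
        return alpha ** 2                        # mixing coefficients (nonnegative, sum to 1)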