406 research outputs found

    Estimation of high-dimensional low-rank matrices

    Suppose that we observe entries or, more generally, linear combinations of entries of an unknown $m\times T$ matrix $A$ corrupted by noise. We are particularly interested in the high-dimensional setting where the number $mT$ of unknown entries can be much larger than the sample size $N$. Motivated by several applications, we consider estimation of matrix $A$ under the assumption that it has small rank. This can be viewed as a dimension reduction or sparsity assumption. In order to shrink toward a low-rank representation, we investigate penalized least squares estimators with a Schatten-$p$ quasi-norm penalty term, $p\leq 1$. We study these estimators under two possible assumptions: a modified version of the restricted isometry condition and a uniform bound on the ratio "empirical norm induced by the sampling operator/Frobenius norm." The main results are stated as nonasymptotic upper bounds on the prediction risk and on the Schatten-$q$ risk of the estimators, where $q\in[p,2]$. The rates that we obtain for the prediction risk are of the form $rm/N$ (for $m=T$), up to logarithmic factors, where $r$ is the rank of $A$. The particular examples of multi-task learning and matrix completion are worked out in detail. The proofs are based on tools from the theory of empirical processes. As a by-product, we derive bounds for the $k$th entropy numbers of the quasi-convex Schatten class embeddings $S_p^M\hookrightarrow S_2^M$, $p<1$, which are of independent interest.
    Comment: Published at http://dx.doi.org/10.1214/10-AOS860 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
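
    For the best-known special case $p=1$ (the nuclear norm), the penalized least squares estimator can be computed by proximal gradient descent with singular value soft-thresholding. Here is a minimal sketch on a matrix completion instance; the function names, step size, regularization weight and toy data are illustrative assumptions, not taken from the paper.

        import numpy as np

        def svt(X, tau):
            # Singular value soft-thresholding: the proximal operator
            # of tau * nuclear norm (the Schatten-1 penalty).
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def nuclear_norm_ls(Y, mask, lam, step=1.0, iters=200):
            # Penalized least squares for matrix completion:
            # minimize 0.5 * ||mask * (A - Y)||_F^2 + lam * ||A||_*.
            # step=1 is a valid step size since mask entries lie in {0, 1}.
            A = np.zeros_like(Y)
            for _ in range(iters):
                grad = mask * (A - Y)  # gradient of the data-fit term
                A = svt(A - step * grad, step * lam)
            return A

        # Toy example: rank-2 ground truth observed on ~40% of entries.
        rng = np.random.default_rng(0)
        m, T, r = 50, 50, 2
        A_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, T))
        mask = (rng.random((m, T)) < 0.4).astype(float)
        Y = mask * (A_true + 0.1 * rng.standard_normal((m, T)))
        A_hat = nuclear_norm_ls(Y, mask, lam=1.0)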

    Quadruple Neutrosophic Theory And Applications Volume I

    The neutrosophic set derives from a new branch of philosophy, namely neutrosophy. It is capable of dealing with uncertain, indeterminate and inconsistent information, and neutrosophic approaches are well suited to modeling problems involving such information, in which human knowledge and human evaluation are needed. Neutrosophic set theory was first proposed in 1998 by Florentin Smarandache, who also developed the concept of the single valued neutrosophic set, oriented toward real-world scientific and engineering applications. Since then, single valued neutrosophic set theory has been extensively studied by many authors around the world in books and monographs introducing neutrosophic sets and their applications. An international journal, Neutrosophic Sets and Systems, started its journey in 2013. Smarandache also introduced, for the first time, the neutrosophic quadruple numbers (of the form a + bT + cI + dF) and the refined neutrosophic quadruple numbers.
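
    As a small illustration of the quadruple structure, here is a sketch of the number a + bT + cI + dF with componentwise addition; multiplication of neutrosophic quadruple numbers additionally requires choosing a prevalence order among T, I, F, so it is omitted here.

        from dataclasses import dataclass

        @dataclass
        class NeutrosophicQuadruple:
            # A neutrosophic quadruple number a + bT + cI + dF, where T, I, F
            # stand for the truth, indeterminacy and falsehood components.
            a: float
            b: float
            c: float
            d: float

            def __add__(self, other):
                # Addition is componentwise.
                return NeutrosophicQuadruple(self.a + other.a, self.b + other.b,
                                             self.c + other.c, self.d + other.d)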

    Multivariate Shortfall Risk Allocation and Systemic Risk

    The ongoing concern about systemic risk since the onset of the global financial crisis has highlighted the need for risk measures at the level of sets of interconnected financial components, such as portfolios, institutions or members of clearing houses. The two main issues in systemic risk measurement are the computation of an overall reserve level and its allocation to the different components according to their systemic relevance. We develop here a pragmatic approach to systemic risk measurement and allocation based on multivariate shortfall risk measures, where acceptable allocations are first computed and then aggregated so as to minimize costs. We analyze the sensitivity of the risk allocations to various factors and highlight its relevance as an indicator of systemic risk. In particular, we study the interplay between the loss function and the dependence structure of the components. Moreover, we address the computational aspects of risk allocation. Finally, we apply this methodology to the allocation of the default fund of a CCP on real data.
    Comment: Code, results and figures can also be consulted at https://github.com/yarmenti/MSR
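
    A multivariate shortfall risk measure of this kind can be phrased as minimizing the total reserve subject to an acceptability constraint E[l(X - m)] <= 0, with the allocation read off from the minimizer m. Below is a minimal Monte Carlo sketch of that optimization, not the authors' algorithm; the Gaussian sample and the separable exponential loss are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        # Monte Carlo sample of losses for d interconnected components.
        rng = np.random.default_rng(1)
        d, n = 3, 10_000
        cov = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.3], [0.2, 0.3, 1.0]])
        X = rng.multivariate_normal(np.zeros(d), cov, size=n)

        def loss(x):
            # A multivariate loss function l (increasing, convex, l(0) = 0);
            # this separable exponential form is chosen only for illustration.
            return np.sum(np.exp(x) - 1.0, axis=1)

        # Shortfall risk: minimize the total reserve sum(m) subject to the
        # acceptability constraint E[l(X - m)] <= 0.
        res = minimize(
            fun=lambda m: m.sum(),
            x0=np.ones(d),
            constraints=[{"type": "ineq", "fun": lambda m: -loss(X - m).mean()}],
        )
        print("total reserve:", res.x.sum(), "allocation:", res.x)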

    Elastic-Net Regularization in Learning Theory

    Within the framework of statistical learning theory we analyze in detail the so-called elastic-net regularization scheme proposed by Zou and Hastie for the selection of groups of correlated variables. To investigate the statistical properties of this scheme, and in particular its consistency properties, we set up a suitable mathematical framework. Our setting is random-design regression where we allow the response variable to be vector-valued and we consider prediction functions which are linear combinations of elements (features) in an infinite-dimensional dictionary. Under the assumption that the regression function admits a sparse representation on the dictionary, we prove that there exists a particular "elastic-net representation" of the regression function such that, as the number of data increases, the elastic-net estimator is consistent not only for prediction but also for variable/feature selection. Our results include finite-sample bounds and an adaptive scheme to select the regularization parameter. Moreover, using convex analysis tools, we derive an iterative thresholding algorithm for computing the elastic-net solution which is different from the optimization procedure originally proposed by Zou and Hastie.
    Comment: 32 pages, 3 figures
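
    A minimal sketch of a generic iterative soft-thresholding scheme for the elastic-net objective, in the finite-dimensional linear regression case; this is not necessarily the exact iteration derived in the paper, and the scaling of the two penalties is an assumption.

        import numpy as np

        def soft_threshold(w, tau):
            # Proximal operator of tau * ||w||_1.
            return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

        def elastic_net_ista(X, y, lam1, lam2, iters=500):
            # Iterative soft-thresholding for
            # min_w (1/n)||y - Xw||^2 + lam1*||w||_1 + lam2*||w||^2.
            n, p = X.shape
            # Lipschitz constant of the gradient of the smooth part.
            L = 2.0 * (np.linalg.norm(X, 2) ** 2 / n + lam2)
            w = np.zeros(p)
            for _ in range(iters):
                grad = 2.0 / n * X.T @ (X @ w - y) + 2.0 * lam2 * w
                w = soft_threshold(w - grad / L, lam1 / L)
            return w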

    Learning weights in the generalized OWA operators

    This paper discusses the identification of parameters of generalized ordered weighted averaging (GOWA) operators from empirical data. Similarly to ordinary OWA operators, GOWA operators are characterized by a vector of weights, as well as by the power to which the arguments are raised. We develop optimization techniques which allow one to fit such operators to the observed data. We also generalize these methods to functionally defined GOWA operators and generalized Choquet integral-based aggregation operators.
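
    As a hedged sketch of what fitting GOWA parameters from data can look like (the optimization techniques in the paper may differ), the weights can be estimated by least squares under the standard OWA constraints, with the power p held fixed; function names and defaults here are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        def gowa(x, w, p):
            # Generalized OWA: sort arguments in descending order, then take
            # a weighted power mean with power p (assumes arguments in (0, 1]
            # and p != 0).
            xs = np.sort(x, axis=-1)[..., ::-1]
            return (xs ** p @ w) ** (1.0 / p)

        def fit_gowa_weights(X, y, p):
            # Fit the weight vector to observed input/output pairs (X, y) by
            # least squares under the OWA constraints w >= 0, sum(w) = 1.
            n = X.shape[1]
            res = minimize(
                fun=lambda w: np.sum((gowa(X, w, p) - y) ** 2),
                x0=np.full(n, 1.0 / n),
                bounds=[(0.0, 1.0)] * n,
                constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
            )
            return res.x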

    Machine Learning (ML) module

    Lecture notes for the machine learning content of the course TOML (Topics on Optimization and Machine Learning) in the Master in Innovation and Research in Informatics (MIRI) at FIB, UPC, 2023/2024.

    Double Backpropagation with Applications to Robustness and Saliency Map Interpretability

    This thesis is concerned with works connected to double backpropagation, a phenomenon that arises when first-order optimization methods are applied to a neural network's loss function and that loss itself contains derivatives. Its connection to robustness and saliency map interpretability is explained.
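
    A minimal PyTorch sketch of the double-backpropagation mechanism, assuming an input-gradient penalty as the derivative term in the loss; the model, data and penalty weight are illustrative, not taken from the thesis. Differentiating a loss that contains a gradient requires building a graph through the first backward pass (create_graph=True), so the subsequent backward() differentiates through it a second time.

        import torch

        model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                                    torch.nn.Linear(32, 1))
        x = torch.randn(8, 10, requires_grad=True)
        y = torch.randn(8, 1)

        loss = torch.nn.functional.mse_loss(model(x), y)
        # First backward pass: gradient of the loss w.r.t. the inputs,
        # kept differentiable so it can appear inside the total loss.
        (input_grad,) = torch.autograd.grad(loss, x, create_graph=True)
        penalty = input_grad.pow(2).sum()

        total = loss + 0.01 * penalty  # the 0.01 weight is illustrative
        total.backward()               # second backward pass through the graph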