
    A discrepancy principle for the Landweber iteration based on risk minimization

    In this paper we propose a criterion based on risk minimization to stop the Landweber algorithm when estimating the solution of a linear system with noisy data. Under the hypothesis of white Gaussian noise, we provide an unbiased estimator of the risk and use it to define a variant of the classical discrepancy principle. Moreover, we prove that the proposed variant satisfies the regularization property in expectation. Finally, we perform numerical simulations in which the signal formation model is given by a convolution or a Radon transform, showing that the proposed method is numerically reliable and furnishes slightly better solutions than classical estimators based on the predictive risk, namely the Unbiased Predictive Risk Estimator and Generalized Cross Validation.
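
    As a rough illustration of this family of stopping rules, the sketch below runs the Landweber iteration and stops at the iterate minimizing the classical Unbiased Predictive Risk Estimator (one of the baselines named in the abstract), evaluated through the SVD filter factors. It is a minimal sketch under the white-Gaussian-noise assumption, not the authors' proposed discrepancy-principle variant, and the function name landweber_upre is ours.

        import numpy as np

        def landweber_upre(A, y, sigma, max_iter=500):
            # Landweber iteration x_{k+1} = x_k + tau * A^T (y - A x_k),
            # stopped at the iterate minimizing the UPRE baseline
            # (a sketch, not the paper's risk-based discrepancy principle).
            m, n = A.shape
            tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size < 2 / ||A||^2
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            x = np.zeros(n)
            best_upre, best_x = np.inf, x
            for k in range(1, max_iter + 1):
                x = x + tau * A.T @ (y - A @ x)
                # Filter factors after k steps with x_0 = 0:
                # f_i = 1 - (1 - tau s_i^2)^k, so the trace of the
                # influence matrix (y -> A x_k) is sum(f).
                f = 1.0 - (1.0 - tau * s ** 2) ** k
                upre = (np.sum((A @ x - y) ** 2)
                        + 2 * sigma ** 2 * np.sum(f) - m * sigma ** 2)
                if upre < best_upre:
                    best_upre, best_x = upre, x.copy()
            return best_x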

    Compression, Generalization and Learning

    A compression function is a map that slims down an observational set into a subset of reduced size while preserving its informational content. In multiple applications, the condition that one new observation makes the compressed set change is interpreted to mean that this observation brings in extra information; in learning theory, this corresponds to misclassification or misprediction. In this paper, we lay the foundations of a new theory that allows one to keep control over the probability of change of compression (which maps into the statistical "risk" in learning applications). Under suitable conditions, the cardinality of the compressed set is shown to be a consistent estimator of the probability of change of compression (without any upper limit on the size of the compressed set); moreover, unprecedentedly tight finite-sample bounds to evaluate the probability of change of compression are obtained under a generally applicable condition of preference. All results are usable in a fully agnostic setup, i.e., without requiring any a priori knowledge of the probability distribution of the observations. Not only do these results offer valid support for developing trust in observation-driven methodologies, they also play a fundamental role in learning techniques as a tool for hyper-parameter tuning.
    Comment: https://www.jmlr.org/papers/v24/22-0605.htm
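
    For concreteness, here is one hand-picked example of a compression function: mapping a labeled sample to the support vectors of a (nearly) hard-margin SVM, whose cardinality then yields the risk estimate described in the abstract. The choice of SVM, the sklearn calls, and the synthetic data are our illustrative assumptions, not the paper's construction.

        import numpy as np
        from sklearn.svm import SVC

        def compression_set(X, y):
            # One example of a compression function: keep only the
            # support vectors of an (approximately) hard-margin SVM.
            clf = SVC(kernel="linear", C=1e6).fit(X, y)
            return clf.support_     # indices of the compressed subset

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 2))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)
        k = len(compression_set(X, y))
        # Per the abstract, |compressed set| / N is (under suitable
        # conditions) a consistent estimator of the probability that a
        # new observation changes the compression.
        print(f"estimated probability of change of compression: {k / len(X):.3f}")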

    Non-convex scenario optimization

    Scenario optimization is an approach to data-driven decision-making that was introduced some fifteen years ago and has grown fast ever since. Its most remarkable feature is that it blends the heuristic nature of data-driven methods with a rigorous theory that allows one to gain factual, reliable insight into the solution. The usability of the scenario theory, however, has so far been restrained by the obstacle that most results rest on the assumption of convexity. With this paper, we aim to free the theory from this limitation. Specifically, we focus on the body of results known under the name of “wait-and-judge” and show that its fundamental achievements remain valid in a non-convex setup. While optimization is a major center of attention, this paper travels beyond it and into data-driven decision making. Adopting such a broad framework opens the door to building a new theory of truly vast applicability.
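
    A toy scenario program makes the vocabulary concrete: Chebyshev regression over N sampled scenarios, plus a brute-force count of the support scenarios (those whose removal improves the solution), which is the quantity the wait-and-judge theory attaches its risk certificate to. This example is deliberately convex for readability; the paper's contribution is precisely that the guarantees survive without convexity. The function names and the LP formulation are ours.

        import numpy as np
        from scipy.optimize import linprog

        def scenario_chebyshev(X, y):
            # Scenario program: min over (theta, g) of g, subject to
            # |x_i^T theta - y_i| <= g for every observed scenario i,
            # written as a linear program in the variables [theta, g].
            N, d = X.shape
            c = np.zeros(d + 1)
            c[-1] = 1.0
            A_ub = np.vstack([np.hstack([X, -np.ones((N, 1))]),
                              np.hstack([-X, -np.ones((N, 1))])])
            b_ub = np.concatenate([y, -y])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] * (d + 1))
            return res.x[:d], res.x[-1]

        def support_scenarios(X, y, tol=1e-7):
            # Scenarios whose removal strictly improves the optimal
            # value; their count feeds the wait-and-judge certificate.
            _, g = scenario_chebyshev(X, y)
            idx = []
            for i in range(len(y)):
                mask = np.arange(len(y)) != i
                _, gi = scenario_chebyshev(X[mask], y[mask])
                if gi < g - tol:
                    idx.append(i)
            return idx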

    Lattice Simulation of Nuclear Multifragmentation

    Motivated by the decade-long debate over the issue of criticality supposedly observed in nuclear multifragmentation, we propose a dynamical lattice model to simulate the phenomenon. Its Ising Hamiltonian mimics a short-range attractive interaction that competes with a thermal-like dissipative process. The results presented here, generated through an event-by-event analysis, are in agreement both with experiment and with those produced by a percolative (non-dynamical) model.
    Comment: 8 pages, 3 figures
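
    A generic flavor of such a simulation: Metropolis dynamics on a 2D Ising lattice, followed by an event-by-event reading of the "fragments" as connected spin clusters. This is a textbook sketch, not the paper's specific Hamiltonian with its dissipative process; lattice size, temperature, and function names are our assumptions.

        import numpy as np
        from scipy.ndimage import label

        def metropolis_ising(L=32, T=2.5, sweeps=50, rng=None):
            # Standard single-spin-flip Metropolis dynamics for a 2D
            # Ising model with periodic boundaries (J = 1, k_B = 1).
            rng = rng or np.random.default_rng(0)
            s = rng.choice([-1, 1], size=(L, L))
            for _ in range(sweeps):
                for _ in range(L * L):
                    i, j = rng.integers(L, size=2)
                    nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                          + s[i, (j + 1) % L] + s[i, (j - 1) % L])
                    dE = 2 * s[i, j] * nb
                    if dE <= 0 or rng.random() < np.exp(-dE / T):
                        s[i, j] *= -1
            return s

        def fragment_sizes(s):
            # Event-by-event "fragments": sizes of connected clusters
            # of up spins (4-connectivity, boundaries not wrapped).
            labeled, n = label(s == 1)
            return np.bincount(labeled.ravel())[1:]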

    Sign-Perturbed Sums (SPS) with Asymmetric Noise: Robustness Analysis and Robustification Techniques

    Sign-Perturbed Sums (SPS) is a recently developed finite-sample system identification method that can build exact confidence regions for linear regression problems under mild statistical assumptions. The regions are well-shaped, e.g., they are centred around the least-squares (LS) estimate, star-convex and strongly consistent. One of the main assumptions of SPS is that the distribution of the noise terms is symmetric about zero. This paper analyses how robust SPS is with respect to the violation of this assumption and how it can be robustified against non-symmetric noise. First, some alternative solutions are reviewed; then a robustness analysis is performed, resulting in a robustified version of SPS. We also suggest a modification of SPS, called LAD-SPS, which builds exact confidence regions around the least-absolute-deviations (LAD) estimate instead of the LS estimate. LAD-SPS requires fewer assumptions, as the noise only needs to have a conditionally zero median (w.r.t. the past). Furthermore, this approach can be robustified using ideas similar to those in the LS-SPS case. Finally, some numerical experiments are presented.
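
    The core SPS construction is compact enough to sketch: compare a reference sum built from the actual residuals against sums built from randomly sign-flipped residuals, and accept theta if the reference is not among the q largest of the m values. The sketch below follows the standard LS-based SPS recipe (confidence probability 1 - q/m under symmetric noise); it omits the tie-breaking randomization of the full method, and the paper's robustified and LAD variants modify this recipe.

        import numpy as np

        def sps_indicator(X, y, theta, m=100, q=5, rng=None):
            # Returns True if theta lies in the SPS confidence region
            # of probability 1 - q/m (ties ignored in this sketch).
            rng = rng or np.random.default_rng()
            n, d = X.shape
            eps = y - X @ theta                 # residuals at theta
            R = X.T @ X / n                     # shaping matrix
            Rinv_half = np.linalg.inv(np.linalg.cholesky(R))

            def stat(signs):
                # Squared norm of the (shaped) sign-perturbed sum
                # R^{-1/2} (1/n) X^T D eps, with D = diag(signs).
                return np.sum((Rinv_half @ (X.T @ (signs * eps)) / n) ** 2)

            z0 = stat(np.ones(n))               # reference sum (no flips)
            zs = [stat(rng.choice([-1.0, 1.0], size=n)) for _ in range(m - 1)]
            # Accept theta iff z0 is not among the q largest of all m values,
            # i.e. at least q perturbed sums exceed the reference.
            return sum(z > z0 for z in zs) >= q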