
    Distribution-Free Model-Agnostic Regression Calibration via Nonparametric Methods

    In this paper, we consider the uncertainty quantification problem for regression models. Specifically, we consider an individual calibration objective for characterizing the quantiles of the prediction model. While such an objective is well motivated by downstream tasks such as the newsvendor cost, existing methods have been largely heuristic and lack statistical guarantees in terms of individual calibration. We show via simple examples that existing methods focusing on population-level calibration guarantees, such as average calibration or sharpness, can lead to harmful and unexpected results. We propose simple nonparametric calibration methods that are agnostic to the underlying prediction model and enjoy both computational efficiency and statistical consistency. Our approach enables a better understanding of the possibility of individual calibration, and we establish matching upper and lower bounds for the calibration error of the proposed methods. Technically, our analysis combines nonparametric analysis with a covering-number argument from parametric analysis, which advances the existing theoretical analyses in the literature on nonparametric density estimation and quantile bandit problems. Importantly, the nonparametric perspective sheds new theoretical insight into regression calibration in terms of the curse of dimensionality, and reconciles the existing results on the impossibility of individual calibration. To our knowledge, we make the first effort to achieve both individual calibration and a finite-sample guarantee with minimal assumptions in the sense of conformal prediction. Numerical experiments show the advantage of such a simple approach under various metrics, and also under covariate shift. We hope our work provides a simple benchmark and a starting theoretical ground for future research on regression calibration. Comment: Accepted at NeurIPS 2023; camera-ready version with additional experiments and literature review.
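The flavor of a simple, model-agnostic nonparametric calibration step can be sketched with a k-nearest-neighbor conditional quantile estimate of the residuals. This is an illustrative sketch of the general idea, not the paper's exact estimator; the function name, `k`, and the residual-based setup are assumptions.

```python
import numpy as np

def knn_quantile(x_cal, r_cal, x_new, q=0.9, k=50):
    """Estimate the conditional q-quantile of residuals r at x_new
    from the k nearest calibration points (illustrative sketch only,
    not the paper's exact method)."""
    d = np.linalg.norm(x_cal - x_new, axis=1)   # distances to the query point
    idx = np.argsort(d)[:k]                     # indices of the k nearest neighbors
    return np.quantile(r_cal[idx], q)           # local empirical quantile
```

Because the estimate uses only residuals from a held-out calibration set, the underlying prediction model can be any black box; the local neighborhood is what makes the quantile individual rather than population-level.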

    Predict-then-Calibrate: A New Perspective of Robust Contextual LP

    Contextual optimization, also known as predict-then-optimize or prescriptive analytics, considers an optimization problem in the presence of covariates (context or side information). The goal is to learn, from the training data, a prediction model that predicts the objective function from the covariates, and then, in the test phase, to solve the optimization problem with the covariates but without observing the objective function. In this paper, we consider a risk-sensitive version of the problem and propose a generic algorithm design paradigm called predict-then-calibrate. The idea is to first develop a prediction model without concern for the downstream risk profile or robustness guarantee, and then utilize calibration (or recalibration) methods to quantify the uncertainty of the prediction. While existing methods suffer from either a restricted choice of prediction model or strong assumptions on the underlying data, we show that disentangling the prediction model from the calibration/uncertainty quantification has several advantages. First, it imposes no restriction on the prediction model and thus fully unleashes the potential of off-the-shelf machine learning methods. Second, through a data-splitting idea, the derivation of the risk and robustness guarantees can be made independent of the choice of the prediction model. Third, the predict-then-calibrate paradigm applies to both (risk-sensitive) robust and (risk-neutral) distributionally robust optimization (DRO) formulations. Theoretically, it yields new generalization bounds for the contextual LP problem and sheds light on existing DRO results for contextual LP. Numerical experiments further reinforce the advantage of the predict-then-calibrate paradigm: an improvement in either the prediction model or the calibration model leads to better final performance. Comment: 30 pages, 8 figures.
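The data-splitting idea described above can be sketched in a few lines: fit any predictor on one half of the data, then use the hold-out half to compute a distribution-free residual quantile that serves as the radius of an uncertainty set. All names and the specific conformal-style quantile here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def predict_then_calibrate(X, y, fit, alpha=0.1, seed=0):
    """Two-stage sketch: (1) fit an arbitrary model on one split,
    (2) calibrate a residual quantile on the hold-out split.
    Illustrative only; names and details are assumptions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    half = len(X) // 2
    tr, cal = idx[:half], idx[half:]
    model = fit(X[tr], y[tr])                   # step 1: any off-the-shelf predictor
    resid = np.abs(y[cal] - model(X[cal]))      # step 2: hold-out residuals
    n = len(resid)
    level = min(1.0, np.ceil((1 - alpha) * (n + 1)) / n)
    q = np.quantile(resid, level)               # calibrated uncertainty radius
    return model, q
```

Because step 2 touches only the hold-out residuals, the guarantee on `q` does not depend on how the model in step 1 was trained, which is exactly the disentangling the abstract emphasizes.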

    Rhubarb alleviates hyperoxia induced lung injury in neonatal rats with bronchopulmonary dysplasia by inhibiting inflammation

    Purpose: To investigate the effect of rhubarb on hyperoxia-induced lung injury in neonatal rats with bronchopulmonary dysplasia (BPD), and the underlying mechanism. Methods: Sixty 4-day-old neonatal rats were assigned to air control, BPD, and rhubarb intervention groups, with 20 rats in each group. Immunoblotting was employed to assay NF-κB expression. Levels of malondialdehyde (MDA) and superoxide dismutase (SOD) were determined spectrophotometrically, while ELISA was used to measure serum levels of IL-6, IL-8 and TNF-α. Results: The peripheral blood levels of TNF-α, IL-8 and IL-1β were markedly higher in BPD-exposed rats than in air control rats, and were reduced in rhubarb intervention rats relative to BPD-exposed rats. SOD activity was markedly lower in the lung tissue of BPD rats than in that of air control rats, while the MDA level was markedly elevated in BPD rats (p < 0.05). NF-κB p65 expression was markedly up-regulated in BPD-exposed rats relative to air control rats, but was markedly lower in rhubarb intervention rats than in hyperoxia model rats (p < 0.05). Conclusion: Rhubarb mitigated hyperoxia-induced inflammation, oxidative stress and lung injury in a neonatal rat model of BPD by inhibiting oxidative stress and reducing the levels of inflammatory factors.

    Maximum Optimality Margin: A Unified Approach for Contextual Linear Programming and Inverse Linear Programming

    In this paper, we study the predict-then-optimize problem, where the output of a machine learning prediction task is used as the input of some downstream optimization problem, say, the objective coefficient vector of a linear program. The problem is also known as predictive analytics or contextual linear programming. The existing approaches largely suffer from either (i) optimization intractability (a non-convex objective function) and statistical inefficiency (a suboptimal generalization bound) or (ii) requiring strong conditions such as the absence of constraints or loss calibration. We develop a new approach to the problem, called maximum optimality margin, which designs the machine learning loss function from the optimality condition of the downstream optimization. The max-margin formulation enjoys both computational efficiency and good theoretical properties for the learning procedure. More importantly, our new approach needs only observations of the optimal solution in the training data rather than the objective function, which makes it a new and natural approach to the inverse linear programming problem under both contextual and context-free settings; we also analyze the proposed method under both offline and online settings, and demonstrate its performance using numerical experiments. Comment: to be published in ICML 202
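The max-margin idea can be illustrated with a hinge-type surrogate: the predicted cost vector should make the observed optimal solution beat every competing feasible point by at least a margin. The sketch below works over an explicit vertex list for a minimization problem; this simplified formulation and all names are assumptions, not the paper's exact loss.

```python
import numpy as np

def margin_loss(c_hat, vertices, opt_idx, margin=1.0):
    """Hinge surrogate on the optimality margin: for a minimization LP,
    the observed optimal vertex should have objective value at least
    `margin` below every other vertex under the predicted cost c_hat
    (illustrative sketch over an explicit vertex list)."""
    vals = vertices @ c_hat                      # objective value at each vertex
    gaps = np.delete(vals - vals[opt_idx], opt_idx)  # margins vs. the optimum
    return np.maximum(0.0, margin - gaps).sum()  # penalize violated margins
```

The loss is zero exactly when the observed optimum is optimal with slack `margin`, and it is convex and piecewise-linear in `c_hat`, which is what makes margin-based training computationally attractive. Note the loss uses only the observed optimal solution, never the true objective vector.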

    Decomposed Soft Prompt Guided Fusion Enhancing for Compositional Zero-Shot Learning

    Compositional Zero-Shot Learning (CZSL) aims to recognize novel concepts formed from states and objects that are known during training. Existing methods either learn a combined state-object representation, which hinders generalization to unseen compositions, or design two classifiers to identify state and object separately from image features, which ignores the intrinsic relationship between them. To address both issues and construct a more robust CZSL system, we propose a novel framework termed Decomposed Fusion with Soft Prompt (DFSP), which leverages vision-language models (VLMs) for unseen composition recognition. Specifically, DFSP constructs a vector combination of learnable soft prompts with state and object to establish their joint representation. In addition, a cross-modal decomposed fusion module is designed between the language and image branches, which decomposes state and object among the language features instead of the image features. Notably, once fused with the decomposed features, the image features become more expressive for learning the relationships with states and objects, respectively, improving the response to unseen compositions in the pair space and hence narrowing the domain gap between the seen and unseen sets. Experimental results on three challenging benchmarks demonstrate that our approach significantly outperforms other state-of-the-art methods by large margins. Comment: 10 pages including references, conference.
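The "vector combination of learnable soft prompts with state and object" can be pictured as concatenating a learnable prefix with the state and object embeddings into one joint prompt sequence. This is a minimal shape-level sketch under assumed dimensions, not DFSP's actual architecture.

```python
import numpy as np

def compose_prompt(prefix, state_emb, obj_emb):
    """Stack learnable prefix tokens with one state and one object
    embedding into a single joint prompt sequence (shapes and names
    are illustrative, not DFSP's exact design)."""
    return np.concatenate(
        [prefix, state_emb[None, :], obj_emb[None, :]], axis=0
    )  # shape: (num_prefix_tokens + 2, embed_dim)
```

Enumerating this composition over all candidate state-object pairs yields one joint prompt per pair, so the VLM scores compositions jointly rather than classifying state and object independently.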

    Numerical study of stall inception in a transonic axial compressor rotor based on the throttle model

    The goal of the current paper is to investigate the inner flow behavior at stall inception in a transonic compressor rotor. The stall inception process is numerically simulated by unsteady 3-D computations based on the throttle model. The study shows that stall starts from the tip of the blade, and the stall cell extends in the axial, circumferential and radial directions. Comparison of the flow transition characteristics at different flow rates shows that the interface between the incoming flow and the tip clearance flow shifts upstream as the mass flow decreases. Eventually, the shock detaches from the blade leading edge and the tip clearance flow spills into the adjacent blade passage, so that stall occurs in the affected blade passages.
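A throttle model of the kind referenced above commonly prescribes the exit back pressure as a quadratic function of mass flow, so tightening the throttle coefficient moves the operating point along the speedline toward stall. The sketch below shows only this generic quadratic form; the function name and parameters are illustrative, not the paper's specific boundary condition.

```python
def throttle_back_pressure(m_dot, p_ref, k):
    """Generic quadratic throttle model used as an exit boundary
    condition in compressor simulations: back pressure grows with the
    square of mass flow; increasing the throttle coefficient k drives
    the operating point toward lower mass flow (illustrative form)."""
    return p_ref + k * m_dot ** 2
```

In an unsteady simulation, `k` is raised step by step and the flow field is advanced in time at each setting until the mass flow can no longer settle, which is how the stall inception point is approached numerically.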