
    Variational Analysis of Constrained M-Estimators

    We propose a unified framework for establishing existence of nonparametric M-estimators, computing the corresponding estimates, and proving their strong consistency when the class of functions is exceptionally rich. In particular, the framework addresses situations where the class of functions is complex, involving information and assumptions about shape, pointwise bounds, location of modes, height at modes, location of level-sets, values of moments, size of subgradients, continuity, distance to a "prior" function, multivariate total positivity, and any combination of the above. The class might be engineered to perform well in a specific setting even when little data is available. The framework views the class of functions as a subset of a particular metric space of upper semicontinuous functions under the Attouch-Wets distance. In addition to allowing a systematic treatment of numerous M-estimators, the framework yields consistency of plug-in estimators of modes of densities, maximizers of regression functions, level-sets of classifiers, and related quantities, and also enables computation by means of approximating parametric classes. We establish consistency through a one-sided law of large numbers, here extended to sieves, that relaxes assumptions of uniform laws, while ensuring global approximations even under model misspecification.
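    The estimator described in the abstract can be sketched schematically (the symbols below are illustrative, not the paper's notation): with a generic criterion function $\psi$ and sieve classes $F^{\nu}$ approximating the constrained class $F$,

```latex
\hat f^{\,\nu} \in \operatorname*{argmin}_{f \in F^{\nu}}
\frac{1}{\nu}\sum_{i=1}^{\nu} \psi(X_i; f),
\qquad
F^{\nu} \approx F \subset \bigl(\mathrm{usc\text{-}fcns}(\mathbb{R}^d),\, d_{\mathrm{AW}}\bigr),
```

    where $d_{\mathrm{AW}}$ denotes the Attouch-Wets distance; consistency $\hat f^{\,\nu} \to f^{\star}$ then rests on a one-sided law of large numbers rather than a uniform law.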

    Log-Concave Duality in Estimation and Control

    In this paper we generalize the estimation-control duality that exists in the linear-quadratic-Gaussian setting. We extend this duality to maximum a posteriori estimation of the system's state, where the measurement and dynamical system noise are independent log-concave random variables. More generally, we show that a problem that induces a convex penalty on noise terms will have a dual control problem. We provide conditions for strong duality to hold, and then prove relaxed conditions for the piecewise linear-quadratic case. The results have applications in estimation problems with nonsmooth densities, such as log-concave maximum likelihood densities. We conclude with an example reconstructing optimal estimates from solutions to the dual control problem, which has implications for sharing solution methods between the two types of problems.
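    As an illustrative sketch (linear dynamics are assumed here for concreteness; the paper's setting may be more general), the MAP problem with log-concave noise densities $p_w$, $p_v$ and prior $p_0$ is the convex program

```latex
\min_{x,\,w,\,v}\;
-\log p_0(x_0) \;-\; \sum_{t=0}^{T-1}\log p_w(w_t) \;-\; \sum_{t=1}^{T}\log p_v(v_t)
\quad \text{s.t.} \quad
x_{t+1} = A_t x_t + w_t,\;\; y_t = C_t x_t + v_t,
```

    and since the negative logarithm of a log-concave density is convex, Lagrangian/Fenchel duality turns this into an optimal control problem in the dual variables, recovering the classical LQG duality when the densities are Gaussian.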

    Solving equilibrium problems in economies with financial markets, home production, and retention

    We propose a new methodology to compute equilibria for general equilibrium problems on exchange economies with real financial markets, home production, and retention. We demonstrate that equilibrium prices can be determined by solving a related maxinf-optimization problem. We incorporate the non-arbitrage condition for financial markets into the equilibrium formulation and establish the equivalence between solutions to both problems. This reduces the complexity of the original problem by eliminating the need to directly compute financial contract prices, allowing us to calculate equilibria even in cases of incomplete financial markets. We also introduce a Walrasian bifunction that captures the imbalances and show that maxinf-points of this function correspond to equilibrium points. Moreover, we demonstrate that every equilibrium point can be approximated by a limit of maxinf-points for a family of perturbed problems, by relying on the notion of lopsided convergence. Finally, we propose an augmented Walrasian algorithm and present numerical examples to illustrate the effectiveness of this approach. Our methodology allows for efficient calculation of equilibria in a variety of exchange economies and has potential applications in finance and economics.
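    Schematically (notation illustrative), the approach replaces the equilibrium conditions by a maxinf problem for a Walrasian bifunction $W$:

```latex
\bar p \in \operatorname*{argmax}_{p \in \Delta}\; \inf_{q \in Q}\, W(p, q),
```

    where maxinf-points $\bar p$ correspond to equilibrium prices, and lopsided convergence of perturbed bifunctions $W^{\nu} \to W$ justifies approximating equilibria by maxinf-points of the perturbed problems.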

    Fusion of Hard and Soft Information in Nonparametric Density Estimation

    This article discusses univariate density estimation in situations where the sample (hard information) is supplemented by “soft” information about the random phenomenon. These situations arise broadly in operations research and management science where practical and computational reasons severely limit the sample size, but problem structure and past experience could be brought in. In particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum likelihood estimator that incorporates any, possibly random, soft information through an arbitrary collection of constraints. We illustrate the breadth of possibilities by discussing soft information about shape, support, continuity, smoothness, slope, location of modes, symmetry, density values, neighborhood of a known density, moments, and distribution functions. The maximization takes place over spaces of extended real-valued semicontinuous functions and therefore allows us to consider essentially any conceivable density as well as convenient exponential transformations. The infinite dimensionality of the optimization problem is overcome by approximating splines tailored to these spaces. To facilitate the treatment of small samples, the construction of these splines is decoupled from the sample. We discuss existence and uniqueness of the estimator, examine consistency under increasing hard and soft information, and give rates of convergence. Numerical examples illustrate the value of soft information, the ability to generate a family of diverse densities, and the effect of misspecification of soft information.
    U.S. Army Research Laboratory and the U.S. Army Research Office grants 00101-80683, W911NF-10-1-0246, and W911NF-12-1-0273.
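    In outline (symbols illustrative), the estimator maximizes the log-likelihood over a constrained class of semicontinuous functions, often via an exponential transformation $h = \exp g$:

```latex
\hat h_n \in \operatorname*{argmax}_{h \in H}\;
\frac{1}{n}\sum_{i=1}^{n} \log h(x_i),
\qquad
H = \Bigl\{\, h \ge 0 \;:\; \textstyle\int h = 1,\;
h \text{ satisfies the soft-information constraints} \Bigr\},
```

    with $H$ encoding shape, support, mode, moment, and related constraints, and the infinite-dimensional maximization handled by sample-independent spline approximations.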

    Sublinear upper bounds for stochastic programs with recourse

    Separable sublinear functions are used to provide upper bounds on the recourse function of a stochastic program. The resulting problem's objective involves the inf-convolution of convex functions. A dual of this problem is formulated to obtain an implementable procedure to calculate the bound. Function evaluations for the resulting convex program only require a small number of single integrations, in contrast with previous upper bounds that require a number of function evaluations that grows exponentially in the number of random variables. The sublinear bound can often be used when other suggested upper bounds are intractable. Computational results indicate that the sublinear approximation provides good, efficient bounds on the stochastic program objective value.
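    The separability that makes the bound cheap can be sketched as follows (notation illustrative): if the recourse function $\psi$ is bounded above by a sum of univariate sublinear functions,

```latex
\psi(\chi) \;\le\; \sum_{i=1}^{m} u_i(\chi_i),
\qquad
u_i(\lambda \chi_i) = \lambda\, u_i(\chi_i)\ \ (\lambda \ge 0),\quad u_i \text{ convex},
```

    then $\mathbb{E}\bigl[\sum_i u_i(\chi_i)\bigr] = \sum_i \mathbb{E}[u_i(\chi_i)]$ requires only $m$ one-dimensional integrals, rather than a number of recourse-function evaluations growing exponentially in the number of random variables.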

    Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs

    We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm as it runs. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. We report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
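    One plausible reading of the bound (symbols illustrative): with scenario probabilities $p_s$, scenario costs $f_s$, and weights $w_s$ satisfying $\sum_s p_s w_s = 0$, the separable dual value

```latex
D(w) \;=\; \sum_{s} p_s \,\min_{x_s \in X_s}\bigl( f_s(x_s) + w_s^{\top} x_s \bigr)
\;\le\; z^{\ast}
```

    lower-bounds the optimal value $z^{\ast}$; since the PHA weights satisfy the zero-mean condition at every iteration, the bound can be evaluated from quantities already computed during the run.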

    A new approximation method for generating day-ahead load scenarios

    Unit commitment decisions made in the day-ahead market and resource adequacy assessment processes are based on forecasts of load, which depends strongly on weather. Two major sources of uncertainty in the load forecast are the errors in the day-ahead weather forecast and the variability in temporal patterns of electricity demand that is not explained by weather. We develop a stochastic model for hourly load on a given day, within a segment of similar days, based on a weather forecast available on the previous day. Identification of similar days in the past is based on weather forecasts and temporal load patterns. Trends and error distributions for the load forecasts are approximated by optimizing within a new class of functions specified by a finite number of parameters. Preliminary numerical results are presented based on data corresponding to a U.S. independent system operator.

    Toward scalable stochastic unit commitment. Part 1: load scenario generation

    Unit commitment decisions made in the day-ahead market and during subsequent reliability assessments are critically based on forecasts of load. Traditional, deterministic unit commitment is based on point or expectation-based load forecasts. In contrast, stochastic unit commitment relies on multiple load scenarios, with associated probabilities, that in aggregate capture the range of likely load time-series. The shift from point-based to scenario-based forecasting necessitates a shift in forecasting technologies, to provide accurate inputs to stochastic unit commitment. In this paper, we discuss a novel scenario generation methodology for load forecasting in stochastic unit commitment, with application to real data associated with the Independent System Operator for New England (ISO-NE). The accuracy of the expected scenario generated using our methodology is consistent with that of point forecasting methods. The resulting sets of realistic scenarios serve as input to rigorously test the scalability of stochastic unit commitment solvers, as described in the companion paper. The scenarios generated by our method are available as an online supplement to this paper, as part of a novel, publicly available large-scale stochastic unit commitment benchmark.

    Toward scalable stochastic unit commitment. Part 2: Solver Configuration and Performance Assessment

    In this second portion of a two-part analysis of a scalable computational approach to stochastic unit commitment, we focus on solving stochastic mixed-integer programs in tractable run-times. Our solution technique is based on Rockafellar and Wets' progressive hedging algorithm, a scenario-based decomposition strategy for solving stochastic programs. To achieve high-quality solutions in tractable run-times, we describe critical, novel customizations of the progressive hedging algorithm for stochastic unit commitment. Using a variant of the WECC-240 test case with 85 thermal generation units, we demonstrate the ability of our approach to solve realistic, moderate-scale stochastic unit commitment problems with reasonable numbers of scenarios in no more than 15 minutes of wall clock time on commodity compute platforms. Further, we demonstrate that the resulting solutions are high-quality, with costs typically within 1-2.5% of optimal. For larger test cases with 170 and 340 thermal generators, we are able to obtain solutions of identical quality in no more than 25 minutes of wall clock time. A major component of our contribution is the public release of the optimization model, associated test cases, and algorithm results, in order to establish a rigorous baseline for both solution quality and run times of stochastic unit commitment solvers.
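    The solve-average-update structure of progressive hedging can be sketched on a toy continuous problem (a hypothetical example, not the paper's unit commitment model; closed-form subproblem solves stand in for the scenario MIP solves, and the penalty parameter rho is chosen arbitrarily):

```python
# Toy progressive hedging (Rockafellar & Wets): minimize E[(x - xi_s)^2] over a
# single shared first-stage decision x, with scenarios xi_s and probabilities p_s.

def progressive_hedging(xi, p, rho=1.0, iters=50):
    x = list(xi)               # per-scenario decisions: start from independent solves
    w = [0.0] * len(xi)        # dual weights, maintained so that sum_s p_s w_s = 0
    for _ in range(iters):
        # Implementable (non-anticipative) average of the scenario decisions.
        xbar = sum(ps * xs for ps, xs in zip(p, x))
        # Weight update: push each scenario toward the consensus value.
        w = [ws + rho * (xs - xbar) for ws, xs in zip(w, x)]
        # Penalized subproblem, solved in closed form by setting the derivative
        # of (x - xi_s)^2 + w_s*x + (rho/2)*(x - xbar)^2 to zero.
        x = [(2 * target - ws + rho * xbar) / (2 + rho)
             for target, ws in zip(xi, w)]
    return xbar, x

xbar, x = progressive_hedging([1.0, 3.0], [0.5, 0.5])
```

    Each iteration solves one penalized subproblem per scenario, averages, and updates the weights until the scenario decisions agree; in the paper's setting the subproblems are scenario mixed-integer programs, which is what the described customizations address.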