
    A simple parameter-free and adaptive approach to optimization under a minimal local smoothness assumption

    We study the problem of optimizing a function under a \emph{budgeted number of evaluations}. We only assume that the function is \emph{locally} smooth around one of its global optima. The difficulty of optimization is measured in terms of 1) the amount of \emph{noise} $b$ in the function evaluations and 2) the local smoothness $d$ of the function. A smaller $d$ results in a smaller optimization error. We propose a new, simple, and parameter-free approach. First, for all values of $b$ and $d$, this approach recovers at least the state-of-the-art regret guarantees. Second, it obtains these results while being \textit{agnostic} to the values of both $b$ and $d$. This leads to the first algorithm that naturally adapts to an \textit{unknown} range of noise $b$ and yields significant improvements in the moderate- and low-noise regimes. Third, our approach also obtains a remarkable improvement over the state-of-the-art SOO algorithm when the noise is very low, including the case of optimization under deterministic feedback ($b=0$). There, under our minimal local smoothness assumption, the improvement is of exponential magnitude and holds for a class of functions that covers the vast majority of functions that practitioners optimize ($d=0$). We show that our algorithmic improvement is borne out in experiments, as we empirically demonstrate faster convergence on common benchmarks.
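
    A minimal sketch of the setting, not the paper's algorithm: budgeted black-box optimization by hierarchical cell partitioning on $[0,1]$, in the spirit of the SOO family the abstract compares against. The trisection rule, cell representation, and test function below are illustrative assumptions.

```python
# Hedged sketch: SOO-style budgeted maximization on [0, 1].
# A cell at each depth is expanded only if its value beats every cell
# already expanded at shallower depths in the current sweep.
import math

def hierarchical_maximize(f, budget):
    # Each cell is (depth, lo, hi, value of f at the cell's center).
    root = (0, 0.0, 1.0, f(0.5))
    cells, evals = [root], 1
    best_x, best_val = 0.5, root[3]
    while evals < budget:
        v_max = -math.inf  # best value expanded so far in this sweep
        max_depth = max(d for d, *_ in cells)
        for depth in range(max_depth + 1):
            layer = [c for c in cells if c[0] == depth]
            if not layer:
                continue
            cand = max(layer, key=lambda c: c[3])
            if cand[3] > v_max:
                v_max = cand[3]
                cells.remove(cand)
                _, lo, hi, _ = cand
                third = (hi - lo) / 3.0
                for k in range(3):  # trisect, evaluate child centers
                    a, b = lo + k * third, lo + (k + 1) * third
                    x = (a + b) / 2.0
                    val = f(x)
                    evals += 1
                    cells.append((depth + 1, a, b, val))
                    if val > best_val:
                        best_x, best_val = x, val
                    if evals >= budget:
                        return best_x, best_val
    return best_x, best_val

# Example: a multimodal function, budget of 200 evaluations.
print(hierarchical_maximize(
    lambda x: math.sin(13 * x) * math.sin(27 * x) / 2 + 0.5, 200))
```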

    A Novel Hybrid Framework for Co-Optimization of Power and Natural Gas Networks Integrated With Emerging Technologies

    In a power system with high penetration of renewable power sources, gas-fired units can be considered a back-up option to improve the balance between generation and consumption in short-term scheduling. Closer coordination between power and natural gas systems is therefore anticipated. This article presents a novel hybrid information gap decision theory (IGDT)-stochastic co-optimization problem for integrating electricity and natural gas networks to minimize total operation cost with the penetration of wind energy. The proposed model considers not only the uncertainties regarding electrical load demand and wind power output, but also the uncertainties of gas load demands for residential consumers. The uncertainties of electric load and wind power are handled through a scenario-based approach, and residential gas load uncertainty is handled via the IGDT approach, with no need for a probability density function. The introduced hybrid model enables the system operator to exploit the advantages of both approaches simultaneously. Gas load uncertainty associated with residential consumers has a more significant impact on the power dispatch of gas-fired plants and on power system operation cost, since residential gas load demands take priority over the gas load demands of gas-fired units. The proposed framework is a bilevel problem that can be reduced to a one-level problem; it can be solved through a simple reformulation without the need for Karush–Kuhn–Tucker conditions. Moreover, emerging flexible energy sources such as power-to-gas technology and a demand response program are considered in the proposed model to increase wind power dispatch, decrease the total operation cost of the integrated network, and reduce the effect of system uncertainties on the total operating cost. Numerical results indicate the applicability and effectiveness of the proposed model under different working conditions. A toy illustration of the IGDT side of the model follows below.
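
    The robust side of an IGDT formulation can be illustrated with a toy sketch: fix a tolerable cost deviation beta over the nominal operation cost and search for the largest uncertainty horizon alpha on the residential gas load that still keeps the worst-case cost within budget. The single-variable cost model and bisection below are hypothetical stand-ins for the article's network-constrained dispatch, assuming cost increases with gas load so the worst case sits at (1 + alpha) times the forecast.

```python
# Hedged sketch of an IGDT robustness function; toy numbers, not the
# article's power/gas network model.

def operation_cost(gas_load):
    # Toy dispatch cost: cheap supply up to 100 units, expensive beyond.
    base = min(gas_load, 100.0) * 2.0
    peak = max(gas_load - 100.0, 0.0) * 5.0
    return base + peak

def igdt_robust_horizon(forecast, beta, tol=1e-6):
    """Largest alpha with worst-case cost <= (1 + beta) * nominal cost."""
    budget = (1.0 + beta) * operation_cost(forecast)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:  # bisection on the uncertainty horizon
        mid = (lo + hi) / 2.0
        worst = operation_cost((1.0 + mid) * forecast)
        if worst <= budget:
            lo = mid  # this horizon is affordable; try a larger one
        else:
            hi = mid
    return lo

# With a 10% cost-deviation budget, how much load deviation is tolerable?
print(igdt_robust_horizon(forecast=120.0, beta=0.10))  # ~0.05
```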

    Data-Dependent Stability of Stochastic Gradient Descent

    We establish a data-dependent notion of algorithmic stability for Stochastic Gradient Descent (SGD) and employ it to develop novel generalization bounds. This is in contrast to previous distribution-free algorithmic stability results for SGD, which depend on worst-case constants. By virtue of the data-dependent argument, our bounds provide new insights into learning with SGD on convex and non-convex problems. In the convex case, we show that the bound on the generalization error depends on the risk at the initialization point. In the non-convex case, we prove that the expected curvature of the objective function around the initialization point has a crucial influence on the generalization error. In both cases, our results suggest a simple data-driven strategy to stabilize SGD by pre-screening its initialization. As a corollary, our results allow us to show optimistic generalization bounds that exhibit fast convergence rates for SGD, subject to a vanishing empirical risk and low noise of the stochastic gradients.
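
    The suggested initialization pre-screening admits a short illustration: draw a few candidate starting points, evaluate the empirical risk at each, and run SGD from the lowest-risk one. The least-squares model, data, and hyperparameters below are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: pre-screen SGD's initialization by empirical risk.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=500)

def risk(w):
    # Empirical squared-error risk on the full sample.
    return float(np.mean((X @ w - y) ** 2))

# Pre-screening: draw candidate initializations, keep the lowest-risk one.
candidates = [rng.normal(size=10) for _ in range(20)]
w = min(candidates, key=risk)

# Plain SGD from the screened initialization.
lr, epochs = 0.01, 5
for _ in range(epochs):
    for i in rng.permutation(len(X)):
        grad = 2.0 * (X[i] @ w - y[i]) * X[i]  # per-example gradient
        w = w - lr * grad

print("final empirical risk:", risk(w))
```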