    Double smoothing technique for infinite-dimensional optimization problems with applications to optimal control

    In this paper, we propose an efficient technique for solving some infinite-dimensional problems over sets of functions of time. In our problem, besides the convex point-wise constraints on the state variables, we have convex coupling constraints with a finite-dimensional image. Hence, we can formulate a finite-dimensional dual problem, which can be solved by efficient gradient methods. We show that it is possible to reconstruct an approximate primal solution. In order to accelerate our schemes, we apply the double smoothing technique. As a result, our method needs O((1/ε) ln(1/ε)) gradient iterations, where ε is the desired accuracy of the solution of the primal-dual problem. Our approach covers, in particular, optimal control problems with a trajectory governed by a system of ordinary differential equations. An additional requirement could be that, at certain moments in time, the trajectory must cross given convex sets.
    Keywords: convex optimization, optimal control, fast gradient methods, complexity bounds, smoothing technique
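
    As a rough illustration of the double smoothing idea described above, the following Python fragment (not the authors' code) applies it to a toy finite-dimensional instance: minimizing c^T x over the box [0,1]^n subject to a coupling constraint Ax = b. The problem data, the smoothing parameters mu1 and mu2 and the iteration budget are invented for the example; the scheme is only a sketch of the general technique.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 5, 2
    A = rng.normal(size=(m, n))
    b = A @ rng.uniform(0.2, 0.8, size=n)        # guarantees a feasible point exists
    c = rng.normal(size=n)

    mu1, mu2 = 1e-2, 1e-2                        # primal and dual smoothing parameters

    def x_of(lam):
        # Minimizer of c^T x + (mu1/2)||x||^2 + lam^T (A x - b) over the box [0,1]^n.
        return np.clip(-(c + A.T @ lam) / mu1, 0.0, 1.0)

    def dual_grad(lam):
        # Gradient of the doubly smoothed (concave) dual function at lam.
        return A @ x_of(lam) - b - mu2 * lam

    L = np.linalg.norm(A, 2) ** 2 / mu1 + mu2    # Lipschitz constant of dual_grad
    q = mu2 / L                                  # inverse condition number
    beta = (1 - np.sqrt(q)) / (1 + np.sqrt(q))   # momentum for the strongly convex FGM

    lam = y = np.zeros(m)
    for _ in range(2000):                        # fast gradient ascent on the dual
        lam_new = y + dual_grad(y) / L
        y = lam_new + beta * (lam_new - lam)
        lam = lam_new

    x = x_of(lam)                                # approximate primal reconstruction
    print("objective value       :", c @ x)
    print("infeasibility ||Ax-b||:", np.linalg.norm(A @ x - b))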

    First-order methods of smooth convex optimization with inexact oracle

    In this paper, we analyze different first-order methods of smooth convex optimization employing inexact first-order information. We introduce the notion of an approximate first-order oracle. The list of examples of such an oracle includes the smoothing technique, Moreau-Yosida regularization, Modified Lagrangians, and many others. For the different methods, we derive complexity estimates and study the dependence between the desired accuracy in the objective function and the accuracy of the oracle. It appears that, in the inexact case, the superiority of the fast gradient methods over the classical ones is no longer absolute. Contrary to the simple gradient schemes, fast gradient methods necessarily suffer from an accumulation of errors. Thus, the choice of method depends both on the desired accuracy and on the accuracy of the oracle. We present applications of our results to smooth convex-concave saddle point problems, to the analysis of Modified Lagrangians, to the prox-method, and to some others.
    Keywords: smooth convex optimization, first-order methods, inexact oracle, gradient methods, fast gradient methods, complexity bounds
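
    The speed-versus-robustness effect described above can be reproduced qualitatively in a few lines of Python. The sketch below (not taken from the paper) runs the classical gradient method and a fast gradient method on the same convex quadratic while every gradient evaluation is perturbed by fresh random noise, a crude stand-in for an inexact oracle; the problem, noise level and iteration count are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    G = rng.normal(size=(n, n))
    Q = G.T @ G / n + 0.1 * np.eye(n)            # convex quadratic f(x) = 0.5 x'Qx
    L = np.linalg.norm(Q, 2)                     # Lipschitz constant of the gradient
    f = lambda x: 0.5 * x @ Q @ x
    x0 = rng.normal(size=n)

    def grad(x):
        # Inexact oracle: the true gradient plus a fresh random perturbation.
        return Q @ x + 1e-2 * rng.normal(size=n)

    # Classical gradient method: slower, but its error settles near a noise floor.
    x = x0.copy()
    for _ in range(3000):
        x = x - grad(x) / L
    print("GM  final f:", f(x))

    # Fast gradient method: much faster at first, but the momentum term lets the
    # oracle errors accumulate, so its final accuracy is typically worse here.
    x, y = x0.copy(), x0.copy()
    for k in range(3000):
        x_prev = x
        x = y - grad(y) / L
        y = x + (k / (k + 3)) * (x - x_prev)
    print("FGM final f:", f(x))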

    E-CLOUD, the open microgrid in existing network infrastructure

    The main goal of the E-Cloud, as with every microgrid, is to maximize the consumption of energy produced locally. To reach this goal, based on the consumption profiles of customers willing to participate in the E-Cloud and given some local restrictions (e.g. wind turbines cannot be put everywhere), an optimal mix of green generation sources (in kW) and local storage (in kWh) needs to be computed. Then, according to this computation, the required generating units and storage device are installed. A repartition mechanism grants each customer a share of the generated electricity and of the storage capacity. These shares are either computed offline or dynamically adapted online. The project will test two models: either the DSO or a producer owns and operates the storage device. Two flows of information (real-time for the operation of the storage facility and ex-post for its settlement) are needed to correctly manage the E-Cloud and to ensure correct information exchange with the wholesale market. These information flows are complemented by a forecast that gives members of the E-Cloud the ability to anticipate and obtain the maximum benefit from the local generation. The expected benefit for customers is a reduction of their electricity bill by at least 10%. Societal benefits should also arise: 1) easing the technical integration of renewable generation embedded in the distribution network, and 2) avoiding extra investment in the DSO network. The E-Cloud may also provide new revenue for the DSO thanks to new services offered to the E-Cloud community.
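
    As a purely illustrative companion to the sizing step mentioned above (computing a mix of generation in kW and storage in kWh from consumption profiles), the Python sketch below brute-forces a tiny example: it simulates one day of a hypothetical load and per-kW solar profile with a greedy charge/discharge rule and picks the cheapest (capacity, storage) pair that covers at least 40% of the load locally. All profiles, prices, efficiencies and targets are placeholder assumptions, not project data.

    import numpy as np

    hours = np.arange(24)
    load = 3.0 + 2.0 * np.exp(-((hours - 19) ** 2) / 8.0)           # kW, evening peak
    solar_per_kw = np.clip(np.sin((hours - 6) * np.pi / 12), 0, 1)  # production per installed kW

    def used_locally(p_kw, e_kwh, eta=0.9):
        # One-day greedy simulation: use production directly, store the surplus
        # (with losses), discharge the storage whenever load exceeds production.
        soc, used = 0.0, 0.0
        for g, l in zip(p_kw * solar_per_kw, load):
            direct = min(g, l)
            soc = min(e_kwh, soc + eta * (g - direct))   # charge with the surplus
            discharge = min(soc, l - direct)             # cover part of the deficit
            soc -= discharge
            used += direct + discharge
        return used                                      # kWh consumed on site

    capex_kw, capex_kwh = 1000.0, 400.0                  # placeholder EUR/kW and EUR/kWh
    candidates = []
    for p in np.arange(2, 16, 1.0):                      # installed PV capacity, kW
        for e in np.arange(0, 32, 2.0):                  # storage size, kWh
            if used_locally(p, e) >= 0.4 * load.sum():   # local-coverage target
                candidates.append((capex_kw * p + capex_kwh * e, p, e))
    cost, p, e = min(candidates)
    print(f"cheapest mix meeting the target: {p:.0f} kW PV, {e:.0f} kWh storage, "
          f"capex {cost:.0f} EUR")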

    Laser control of ultracold molecule formation: The case of RbSr

    We have studied the formation of ultracold RbSr molecules with laser pulses. After discussing the advantages of the Mott insulator phase for control with pulses, we present two classes of strategies. The first class involves two electronic states. Two extensions of stimulated Raman adiabatic passage (STIRAP) for multi-level transitions are used: alternating STIRAP (A-STIRAP) and straddle STIRAP (S-STIRAP). Both transfer dynamics are modeled and compared. The second class of strategies involves only the electronic ground state and uses infrared (IR)/terahertz (THz) pulses. The chemical bond is first created by the application of a chirped THz pulse or a π-pulse. Subsequently, the molecules are transferred to their ro-vibrational ground state using IR pulses. For this last step, different pulse sequences optimized by optimal control techniques have been studied. The relative merits of these strategies in terms of efficiency and robustness are discussed within the experimental feasibility criteria of present laser technology.
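
    To make the STIRAP mechanism mentioned above more concrete, here is a toy numerical sketch (unrelated to the actual RbSr calculations): a three-level Lambda system driven on resonance by Gaussian Stokes and pump pulses applied in the counter-intuitive order, propagated with piecewise-constant matrix exponentials. The pulse amplitudes, widths and delays are arbitrary values chosen so that the transfer is adiabatic.

    import numpy as np
    from scipy.linalg import expm

    def rabi(t, t0, amp=20.0, width=1.0):
        # Gaussian Rabi frequency centred at t0 (arbitrary units).
        return amp * np.exp(-((t - t0) ** 2) / (2 * width ** 2))

    def hamiltonian(t):
        omega_p = rabi(t, t0=+0.7)          # pump couples |1>-|2>, arrives second
        omega_s = rabi(t, t0=-0.7)          # Stokes couples |2>-|3>, arrives first
        return 0.5 * np.array([[0.0,     omega_p, 0.0],
                               [omega_p, 0.0,     omega_s],
                               [0.0,     omega_s, 0.0]])

    psi = np.array([1.0, 0.0, 0.0], dtype=complex)       # all population starts in |1>
    ts = np.linspace(-5.0, 5.0, 4000)
    dt = ts[1] - ts[0]
    for t in ts:
        psi = expm(-1j * hamiltonian(t) * dt) @ psi      # short-time propagator

    print("final populations |1>,|2>,|3>:", np.round(np.abs(psi) ** 2, 3))
    # With adiabatic, counter-intuitively ordered pulses, most of the population
    # should end up in |3> while the intermediate state |2> stays almost empty.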

    A compact model for magnetic tunnel junction (MTJ) switched by thermally assisted Spin transfer torque (TAS + STT)

    Thermally assisted spin transfer torque (TAS + STT) is a new switching approach for magnetic tunnel junction (MTJ) nanopillars that represents the best trade-off between data reliability, power efficiency and density. In this paper, we present a compact model for an MTJ switched by this approach, which integrates a number of physical models, such as temperature evaluation and STT dynamic switching. Many experimental parameters are included directly to improve the simulation accuracy. The model is programmed in the Verilog-A language and is compatible with standard IC CAD tools, providing an easy parameter-configuration interface and allowing high-speed co-simulation of hybrid MTJ/CMOS circuits.
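
    For readers unfamiliar with thermally assisted writing, the deliberately simplified Python sketch below mimics the two ingredients named in the abstract, a temperature evaluation and an STT switching criterion. It is a behavioural caricature, not the authors' Verilog-A model: the first-order thermal law, the switching rule and every numerical value are invented for illustration.

    # Illustrative device parameters (invented, order-of-magnitude only).
    R_P, R_AP = 2.0e3, 4.0e3        # ohm, parallel / antiparallel resistance
    T_AMB, T_BLOCK = 300.0, 400.0   # K, ambient and blocking temperature
    R_TH, TAU_TH = 4.0e5, 3e-9      # K/W thermal resistance, s thermal time constant
    I_C0 = 250e-6                   # A, critical STT current above T_BLOCK
    DT = 1e-10                      # s, simulation time step

    def write(v_pulse, t_pulse, state):
        """Return the storage-layer state (0 = P, 1 = AP) after a voltage pulse.
        Switching requires both heating above T_BLOCK and a current above the
        critical STT current, mimicking a thermally assisted write."""
        T = T_AMB
        for _ in range(int(t_pulse / DT)):
            R = R_AP if state else R_P
            i = v_pulse / R
            # First-order thermal model: dT/dt = (T_AMB + R_TH * P_joule - T) / TAU_TH.
            T += DT * (T_AMB + R_TH * v_pulse * i - T) / TAU_TH
            if T > T_BLOCK and abs(i) > I_C0:
                state = 1 if v_pulse > 0 else 0   # current polarity selects the state
        return state

    state = write(+0.8, 20e-9, 0)                 # write AP starting from P
    print("after +0.8 V pulse:", "AP" if state else "P")
    # The AP state has a higher resistance, hence less current and heating,
    # so a larger pulse is used to switch it back in this toy model.
    state = write(-1.2, 20e-9, state)
    print("after -1.2 V pulse:", "AP" if state else "P")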

    Stochastic first order methods in smooth convex optimization

    In this paper, we are interested in the development of efficient first-order methods for convex optimization problems in the simultaneous presence of smoothness of the objective function and stochasticity in the first-order information. First, we consider the Stochastic Primal Gradient method, which is nothing else but the Mirror Descent SA method applied to a smooth function, and we develop new practical and efficient step-size policies. Based on the machinery of estimate sequences, we also develop two new methods: a Stochastic Dual Gradient Method and an accelerated Stochastic Fast Gradient Method. Convergence rates on average, probabilities of large deviations and accuracy certificates are studied. All of these methods are designed to decrease the effect of the stochastic noise at an unimprovable rate and to be easily implementable in practice (the practical efficiency of our methods is confirmed by numerical experiments). Furthermore, the biased case, when the oracle is no
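
    The central phenomenon discussed above, attenuating the stochastic noise through step-size policies and iterate averaging, can be illustrated with the short Python sketch below (not the paper's methods): plain stochastic gradient steps on a smooth convex quadratic with a decreasing step size and an averaged iterate, for invented problem data and noise level.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 20
    G = rng.normal(size=(n, n))
    Q = G.T @ G / n + 0.1 * np.eye(n)
    L = np.linalg.norm(Q, 2)                     # Lipschitz constant of the gradient
    sigma = 0.5                                  # std of the stochastic gradient noise
    f = lambda x: 0.5 * x @ Q @ x                # minimized at x* = 0, with f(x*) = 0

    def run(n_iter):
        x = rng.normal(size=n)
        x_sum = np.zeros(n)
        for k in range(1, n_iter + 1):
            g = Q @ x + sigma * rng.normal(size=n)      # stochastic first-order oracle
            x = x - g / (L + sigma * np.sqrt(k))        # decreasing step-size policy
            x_sum += x
        return f(x_sum / n_iter)                        # value at the averaged iterate

    for N in (100, 1_000, 10_000, 100_000):
        print(f"N = {N:>7d}   f(averaged iterate) = {run(N):.3e}")
    # The noise-driven error of the averaged iterate keeps shrinking as N grows,
    # instead of stalling at a fixed noise floor.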

    Exactness, inexactness and stochasticity in first-order methods for large-scale convex optimization

    The goal of this thesis is to extend the analysis and the scope of first-order methods of smooth convex optimization. We consider three challenging difficulties: inexact first-order information, lack of smoothness and the presence of linear constraints. When used with inexact information, we show that the Gradient Method (GM) is slow but robust, whereas the Fast Gradient Method (FGM) is fast but sensitive to errors. This trade-off between speed and sensitivity to errors is unavoidable: the faster a first-order method is, the worse its robustness must be. In between these existing methods, we develop a novel scheme, the Intermediate Gradient Method (IGM), which seeks an optimal compromise between speed and robustness and significantly accelerates the generation of accurate solutions. We also show to what extent strong convexity and stochastic first-order information can decrease the sensitivity of first-order methods to errors. When the objective function is not as smooth as desired, we show that first-order methods initially developed for smooth problems can still be applied. This result breaks the wall between smooth and nonsmooth optimization; in particular, FGM can be seen as a universal optimal first-order method. When linear constraints prevent the use of the usual first-order methods, we propose a new approach, the double smoothing technique: we dualize the linear constraints, transform the dual function into a smooth strongly convex function and apply FGM. This technique efficiently generates nearly optimal and feasible primal solutions with accuracy guarantees. (FSA 3) -- UCL, 201