    A sensitivity analysis of a class of semi-coercive variational inequalities using recession tools

    Using recession analysis, we study necessary and sufficient conditions for the existence and stability of a finite semi-coercive variational inequality under perturbation of the data. Applications of the abstract results in mechanics and in electronic circuits involving devices such as the ideal diode and the practical diode are discussed.
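
    For orientation, a standard semi-coercive variational inequality of the second kind and the recession-type objects used to analyze it can be sketched as follows; the bilinear form a, the convex superpotential φ and the compatibility condition below are a common textbook form assumed for illustration, not necessarily the exact formulation of the paper.

```latex
% A semi-coercive variational inequality of the second kind: a(.,.) is a
% positive semidefinite (not coercive) bilinear form on a Hilbert space V,
% \varphi is a proper convex lower semicontinuous function (e.g. the
% superpotential of an ideal or practical diode), and f is the source term.
\[
  \text{find } u \in V \ \text{ such that } \
  a(u, v - u) + \varphi(v) - \varphi(u) \;\ge\; \langle f,\, v - u \rangle
  \quad \forall\, v \in V .
\]
% Recession tools enter through the recession function of \varphi,
\[
  \varphi_\infty(w) \;=\; \lim_{t \to +\infty}
  \frac{\varphi(x_0 + t\,w) - \varphi(x_0)}{t}
  \qquad (x_0 \in \operatorname{dom}\varphi \ \text{arbitrary}),
\]
% and a typical compatibility condition guaranteeing existence is of the form
% (\ker a denoting the kernel of the semi-coercive form a)
\[
  \langle f,\, w \rangle \;<\; \varphi_\infty(w)
  \qquad \forall\, w \in \ker a \setminus \{0\}.
\]
```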

    Qualitative Stability of a Class of Non-Monotone Variational Inclusions. Application in Electronics

    The main concern of this paper is to investigate stability properties, namely the Aubin property and isolated calmness, of a special non-monotone variational inclusion. We provide a characterization of these properties in terms of the problem data and show their importance for the design of electrical circuits involving nonsmooth, non-monotone electronic devices such as the DIAC (DIode Alternating Current). Circuits with other devices such as SCRs (Silicon Controlled Rectifiers), Zener diodes, thyristors, varactors and transistors can be analyzed in the same way.
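
    For reference, the two stability notions named in the abstract can be stated for a generic parameterized inclusion; the parameterization p ∈ f(x) + F(x) below, with f single-valued and F set-valued, is an assumed generic form rather than the paper's specific circuit model.

```latex
% Solution map of a parameterized inclusion (f single-valued, F set-valued):
\[
  S(p) \;=\; \{\, x \in \mathbb{R}^n \;:\; p \in f(x) + F(x) \,\}.
\]
% Aubin property of S at (\bar p, \bar x) with \bar x \in S(\bar p): there exist
% \kappa \ge 0 and neighborhoods U of \bar x and V of \bar p such that
\[
  S(p') \cap U \;\subset\; S(p) + \kappa\, \|p' - p\|\, \mathbb{B}
  \qquad \forall\, p, p' \in V .
\]
% Isolated calmness of S at (\bar p, \bar x): there exist \kappa \ge 0 and
% neighborhoods U of \bar x and V of \bar p such that
\[
  S(p) \cap U \;\subset\; \{\bar x\} + \kappa\, \|p - \bar p\|\, \mathbb{B}
  \qquad \forall\, p \in V .
\]
```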

    Kurdyka-Lojasiewicz inequalities and convexity: algorithms and applications

    This thesis deals with first-order descent methods for minimization problems and comprises three parts. In the first part, we give an overview of local and global error bounds and lay the first bricks of a unified theory by showing the central role of the Lojasiewicz gradient inequality and relating it to error bounds. In the second part, using the Kurdyka-Lojasiewicz (KL) inequality, we provide a new tool to compute the complexity of first-order descent methods in convex minimization. Our approach is completely original and makes use of a one-dimensional worst-case proximal sequence. These results introduce a simple methodology: derive an error bound, compute the KL desingularizing function whenever possible, identify the relevant constants in the descent method, and then compute the complexity using the one-dimensional worst-case proximal sequence. Finally, we extend the extragradient method to minimize the sum of two functions, the first smooth and the second convex. Under the KL assumption, we prove that the sequence produced by the extragradient method converges to a critical point of this problem and has finite length. When both functions are convex, we obtain the O(1/k) convergence rate that is classical for the gradient method. Furthermore, we show that the complexity result of the second part can be applied to this method. Considering the extragradient method is also the occasion to describe exact line search for proximal decomposition methods. We provide details on the implementation of this scheme for the ℓ1-regularized least-squares problem and give numerical results suggesting that combining non-accelerated methods with exact line search can be a competitive choice.
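
    Since the abstract points to the ℓ1-regularized least-squares problem solved by proximal decomposition methods, here is a minimal sketch of the plain forward-backward (ISTA) iteration for that problem, written in Python with NumPy. It uses a fixed 1/L step and deliberately does not reproduce the exact line search or the extragradient scheme studied in the thesis; all names, sizes and parameter values are illustrative.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (componentwise soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Plain forward-backward splitting (ISTA) for
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
    with fixed step 1/L, where L = ||A||_2^2 is the Lipschitz constant of the
    gradient of the smooth term; no acceleration, no line search."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                    # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)   # forward step, then prox of (lam/L)*||.||_1
    return x

# Tiny synthetic usage example; sizes, seed and lam are illustrative only.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, b, lam=0.1)
```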

    On the analysis of stochastic optimization and variational inequality problems

    Uncertainty has a tremendous impact on decision making. The more connected we get, it seems, the more sources of uncertainty we uncover. For example, uncertainty in the parameters of price and cost functions in power, transportation, communication and financial systems has stemmed from the way these networked systems operate and from how they interact with one another. Uncertainty influences the design, regulation and decisions of participants in engineered systems such as financial markets, electricity markets, commodity markets, and wired and wireless networks, all of which are ubiquitous. This poses many interesting questions in understanding uncertainty (modeling) and dealing with uncertainty (decision making). This dissertation focuses on answering a set of fundamental questions that pertain to dealing with uncertainty in three major problem classes: (1) convex Nash games; (2) variational inequality and complementarity problems; (3) hierarchical risk management problems in financial networks. Accordingly, it considers the analysis of a broad class of stochastic optimization and variational inequality problems complicated by uncertainty and by nonsmoothness of the objective functions. Nash games and variational inequalities have assumed practical relevance in industry and business settings because they are natural models for many real-world applications: Nash games arise naturally in modeling equilibrium problems in power markets, communication networks and market-based resource allocation, whereas variational inequality problems allow for modeling frictional contact problems, traffic equilibrium problems and the like. Incorporating uncertainty into convex Nash games leads to stochastic Nash games. Despite the relevance of stochastic generalizations of Nash games and variational inequalities, answering fundamental questions regarding the existence of equilibria in stochastic regimes has proved to be a challenge; the main difficulty arises, among other reasons, from the nonlinearity introduced by the expectation operator. Despite the rich literature in deterministic settings, direct application of deterministic results to stochastic regimes is not straightforward.
    The first part of this dissertation explores such fundamental questions in stochastic Nash games and variational inequality problems. Instead of directly applying deterministic results, we leverage Lebesgue convergence theorems to develop a tractable framework for analyzing problems in stochastic regimes over a continuous probability space. The benefit of this approach is that the framework does not rely on evaluating the expectation operator to provide existence guarantees, making it amenable to tractable use. We extend the framework to accommodate nonsmooth payoff functions as well as stochastic constraints, both of which are important in practical settings.
    The second part extends this framework to generalizations of variational inequality and complementarity problems. In particular, we develop a set of almost-sure sufficiency conditions for stochastic variational inequality problems with single-valued and multi-valued mappings, and we extend these statements to quasi-variational regimes as well as to stochastic complementarity problems. The applicability of these results is demonstrated in the analysis of risk-averse stochastic Nash games arising in Nash-Cournot production-distribution models in power markets, recast as a stochastic quasi-variational inequality problem, and in Nash-Cournot games with piecewise smooth price functions, modeled as a stochastic complementarity problem.
    The third part pertains to hierarchical problems in financial risk management. In the financial industry, risk has traditionally been managed by imposing value-at-risk (VaR) constraints on portfolio risk exposure. Motivated by recent events in the financial industry, we examine the role that risk-seeking traders play in the accumulation of large, and possibly infinite, risk. We show that when traders employ a conditional value-at-risk (CVaR) metric, much can be said by studying the interaction between VaR (a non-coherent risk measure) and CVaR (a coherent risk measure based on VaR). Resolving this question requires characterizing the optimal value of the associated stochastic, and possibly nonconvex, optimization problem, which is often challenging. Our study makes two sets of contributions. First, under general asset distributions with compact support, traders accumulate finite risk, of magnitude on the order of the upper bound of that support. Second, when the support is unbounded, such traders can, under relatively mild assumptions, take on an unbounded amount of risk despite abiding by the VaR threshold. In short, VaR thresholds may be inadequate in guarding against financial ruin.
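
    Two of the standard objects underlying this abstract can be written in their usual textbook form; the dissertation's actual models may of course differ, so the following is only a reference sketch.

```latex
% Stochastic variational inequality in expectation form: K is a closed convex
% set and F(x, \xi) is a map depending on a random variable \xi.
\[
  \text{find } x^* \in K \ \text{ such that } \
  \langle\, \mathbb{E}[\,F(x^*, \xi)\,],\; x - x^* \,\rangle \;\ge\; 0
  \quad \forall\, x \in K .
\]
% Value-at-risk and conditional value-at-risk of a loss Z at level
% \alpha \in (0,1) (Rockafellar-Uryasev representation): VaR is a quantile,
% while CVaR averages the tail beyond it, which is why CVaR is coherent
% whereas VaR need not be.
\[
  \mathrm{VaR}_\alpha(Z) \;=\; \inf\{\, t \in \mathbb{R} \;:\; \mathbb{P}(Z \le t) \ge \alpha \,\},
  \qquad
  \mathrm{CVaR}_\alpha(Z) \;=\; \min_{t \in \mathbb{R}}
  \Big\{\, t + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(Z - t)_+\big] \Big\}.
\]
```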