How to project onto extended second order cones
The extended second order cones were introduced by S. Z. Németh and G. Zhang in [S. Z. Németh and G. Zhang. Extended Lorentz cones and variational inequalities on cylinders. J. Optim. Theory Appl., 168(3):756-768, 2016] for solving mixed complementarity problems and variational inequalities on cylinders. R. Sznajder in [R. Sznajder. The Lyapunov rank of extended second order cones. Journal of Global Optimization, 66(3):585-593, 2016] determined the automorphism groups and the Lyapunov (or bilinearity) ranks of these cones. S. Z. Németh and G. Zhang in [S. Z. Németh and G. Zhang. Positive operators of extended Lorentz cones. arXiv:1608.07455v2, 2016] found both necessary conditions and sufficient conditions for a linear operator to be a positive operator of an extended second order cone. This note gives formulas for projecting onto the extended second order cones. In the most general case the formula depends on a piecewise linear equation in one real variable, which is solved by numerical methods.
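For background, the classical (non-extended) second order cone L = {(x, t) : ||x|| <= t} admits a well-known closed-form projection, which the extended formulas generalize. A minimal sketch of that classical projection (this is the standard textbook formula, not the extended-cone formula derived in the note):

```python
import numpy as np

def project_lorentz(x, t):
    """Project the point (x, t) onto the second order (Lorentz) cone
    {(x, t) : ||x|| <= t}, using the standard closed-form formula."""
    nx = np.linalg.norm(x)
    if nx <= t:                      # already inside the cone
        return x.copy(), float(t)
    if nx <= -t:                     # inside the polar cone: project to the origin
        return np.zeros_like(x), 0.0
    # otherwise project onto the boundary of the cone
    alpha = (nx + t) / 2.0
    return alpha * x / nx, alpha
```

For example, projecting (x, t) = ((3, 4), 0) gives the boundary point ((1.5, 2.0), 2.5), which satisfies ||x|| = t exactly.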
Complementarity problems, variational inequalities and extended lorentz cones
In this thesis, we introduced the concept of extended Lorentz cones. We discussed the solvability of variational inequalities and complementarity problems associated with a general closed convex cone. This cone does not have to be an isotone projection cone. We showed that the solutions of variational inequalities and complementarity problems can be reached as limits of sequences defined in a space ordered by an extended Lorentz cone. Moreover, we applied our results to game theory and conic optimization problems. We also discussed positive operators. We showed necessary and sufficient conditions under which a linear operator is a positive operator of an extended Lorentz cone, and necessary and sufficient conditions under which a linear operator of a specific form is a positive operator.
How to project onto the monotone extended second order cone
This paper introduces the monotone extended second order cone (MESOC), which is related to the monotone cone and the Lorentz cone. Some properties of MESOC are presented and its dual cone is computed. Formulas for projecting onto MESOC are also presented. In the most general case the formula for projecting onto MESOC depends on an equation in one real variable.
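In both this paper and the earlier note on extended second order cones, the projection ultimately reduces to a scalar equation in one real variable. Such an equation, when piecewise linear and continuous, can be solved by a simple bracketing method such as bisection. A generic sketch (the function `f` below is an illustrative stand-in, not the authors' specific equation):

```python
def bisect(f, lo, hi, tol=1e-12, max_iter=200):
    """Find a root of a continuous function f on [lo, hi],
    assuming f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    if flo == 0.0:
        return lo
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if fmid == 0.0 or hi - lo < tol:
            return mid
        if (flo < 0) == (fmid < 0):
            lo, flo = mid, fmid   # root lies in the upper half
        else:
            hi = mid              # root lies in the lower half
    return 0.5 * (lo + hi)

# illustrative piecewise linear equation: max(s - 1, 2(s - 1)) = 0, root at s = 1
root = bisect(lambda s: max(s - 1.0, 2.0 * (s - 1.0)), -5.0, 5.0)
```

Bisection is robust here because a continuous piecewise linear function changes sign exactly where it crosses zero, so a sign-changing bracket always contains a root.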
An Analytic Approach to the Structure and Composition of General Learning Problems
Gowers presents, in his 2000 essay "The Two Cultures of Mathematics", two kinds of mathematicians, whom he calls the theory-builders and the problem-solvers. Of course both kinds of research are important; theory building may directly lead to solutions to problems, and by studying individual problems one uncovers the general structure of problems themselves. However, referencing a remark of Atiyah, Gowers observes that because so much research is produced, the results that can be "organised coherently and explained economically" will be the ones that last. Unlike mathematics, the field of machine learning abounds in problem-solvers. This is wonderful, as it leads to a large number of problems being solved, but it is with Gowers's point in mind that we are motivated to develop an appropriately general analytic framework to study machine learning problems themselves.
To do this we first locate and develop the appropriate analytic objects to study. Chapter 2 recalls some concepts and definitions from the theory of topological vector spaces, in particular the families of radiant and co-radiant sets, and dualities. In Chapter 4 we will need generalisations of a variety of existing results on these families, and these are presented in Chapter 3.
Classically a machine learning problem involves four quantities: an outcome space, a family of predictions (or model), a loss function, and a probability distribution. If the loss function is sufficiently general we can combine it with the set of predictions to form a set of real functions, which under very general assumptions, turns out to be closed, convex, and in particular, co-radiant. With the machinery of the previous two chapters in place, in Chapter 4 we lay out the foundations for an analytic theory of the classical machine learning problem, including a general analysis of link functions, by which we may rewrite almost any loss function as a scoring rule; a discussion of scoring rules and their properisation; and using the co-radiant results from Chapter 3 in particular, a theory of prediction aggregation.
Chapters 5 and 6 develop results inspired by and related to adversarial learning. Chapter 5 develops a theory of boosted density estimation with strong convergence guarantees, where density updates are computed by training a classifier, and Chapter 6 uses the theory of optimal transport to formulate a robust Bayes minimisation problem, in which we develop a universal theory of regularisation and deliver new strong results for the problem of adversarial learning.
Generalized Nash equilibrium problems with partial differential operators: Theory, algorithms, and risk aversion
PDE-constrained (generalized) Nash equilibrium problems (GNEPs) are considered in a deterministic setting as well as under uncertainty. This includes a study of deterministic GNEPs with nonlinear and/or multivalued operator equations as forward problems, and PDE-constrained GNEPs with uncertain data. The deterministic nonlinear problems are analyzed using the theory of generalized convexity for set-valued operators, and a variational approximation approach is proposed. The stochastic setting includes a detailed overview of the recently developed theory and algorithms for risk-averse PDE-constrained optimization problems. These new results open the way to a rigorous study of stochastic PDE-constrained GNEPs.
Quantum marginals, faces, and coatoms
Many problems of quantum information theory rely on the set of quantum
marginals. A precise knowledge of the faces of this convex set is necessary,
for example, in the reconstruction of states from their marginals or in the
evaluation of complexity measures of many-body systems. Yet, even the two-body
marginals of just three qubits were only described in part. Here, we propose an
experimental method to search for the coatoms in the lattice of exposed faces
of the convex set of quantum marginals. The method is based on sampling from
the extreme points of the dual spectrahedron. We provide an algebraic
certificate of correctness, employing ground projectors of local Hamiltonians.
Using this method, we present an explicit family of coatoms of rank five in the lattice of ground projectors of two-local three-qubit Hamiltonians (the rank is always six for bits). This yields a corresponding family of coatoms in the lattice of exposed faces of the convex set of two-body marginals of three qubits.
Besides introducing the experimental method, we show that the support sets of
probability distributions that factor are the ground projectors of
frustration-free Hamiltonians in the commutative setting. We also discuss nonexposed points of the set of marginals.