12 research outputs found

    The Power of Localization for Efficiently Learning Linear Separators with Noise

    We introduce a new approach for designing computationally efficient learning algorithms that are tolerant to noise, and demonstrate its effectiveness by designing algorithms with improved noise tolerance guarantees for learning linear separators. We consider both the malicious noise model and the adversarial label noise model. For malicious noise, where the adversary can corrupt both the label and the features, we provide a polynomial-time algorithm for learning linear separators in ℜ^d under isotropic log-concave distributions that can tolerate a nearly information-theoretically optimal noise rate of η = Ω(ϵ). For the adversarial label noise model, where the distribution over the feature vectors is unchanged and the overall probability of a noisy label is constrained to be at most η, we also give a polynomial-time algorithm for learning linear separators in ℜ^d under isotropic log-concave distributions that can handle a noise rate of η = Ω(ϵ). We show that, in the active learning model, our algorithms achieve a label complexity whose dependence on the error parameter ϵ is polylogarithmic. This provides the first polynomial-time active learning algorithm for learning linear separators in the presence of malicious noise or adversarial label noise.
    Comment: Contains improved label complexity analysis communicated to us by Steve Hannek
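The localization idea central to the abstract above — concentrating the learner's attention on points near the current hyperplane and shrinking that region over rounds — can be sketched in a toy form. This is a minimal illustration, not the paper's algorithm: the parameters (`rounds`, `band`, `shrink`) are hypothetical, and a least-squares refit stands in for the hinge-loss minimization the actual method uses.

```python
import numpy as np

def localized_learning(X, y, rounds=5, band=1.0, shrink=0.5):
    """Toy sketch of margin-based localization: refit a linear
    separator on points inside a band around the current hyperplane,
    narrowing the band each round. Hypothetical parameters; a
    least-squares fit stands in for the paper's loss minimization."""
    w = np.ones(X.shape[1]) / np.sqrt(X.shape[1])  # initial unit direction
    for _ in range(rounds):
        margins = X @ w                    # signed distances (w is unit norm)
        mask = np.abs(margins) <= band     # localize to the band
        if mask.sum() < X.shape[1]:        # too few points to refit: stop
            break
        w_new, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        norm = np.linalg.norm(w_new)
        if norm > 0:
            w = w_new / norm               # renormalize to a unit direction
        band *= shrink                     # shrink the localization band
    return w
```

On clean, isotropically distributed data this recovers the separating direction; the paper's contribution is making this robust to malicious or adversarial label noise with only polylogarithmic label complexity, which this sketch does not attempt.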

    Aggregations of quadratic inequalities and hidden hyperplane convexity

    We study properties of the convex hull of a set S described by quadratic inequalities. A simple way of generating inequalities valid on S is to take nonnegative linear combinations of the defining inequalities of S. We call such inequalities aggregations. Special aggregations naturally contain the convex hull of S, and we give sufficient conditions for such aggregations to define the convex hull. We introduce the notion of hidden hyperplane convexity (HHC), which is related to the classical notion of hidden convexity of quadratic maps. We show that if the quadratic map associated with S satisfies HHC, then the convex hull of S is defined by special aggregations. To the best of our knowledge, this result generalizes all known results regarding aggregations defining convex hulls. Using this sufficient condition, we are able to recognize previously unknown classes of sets where aggregations lead to the convex hull. We show that the condition known as positive definite linear combination, together with hidden hyperplane convexity, is a sufficient condition for finitely many aggregations to define the convex hull. All the above results are for sets defined using open quadratic inequalities. For closed quadratic inequalities, we prove a new result regarding aggregations giving the convex hull, without topological assumptions on S.
    Comment: 26 pages, 3 figures
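The basic aggregation construction — a nonnegative linear combination of the defining quadratics, which is automatically valid on S — can be sketched numerically. The helper names `quad` and `aggregate` are hypothetical, and the example only verifies validity of one aggregation on sample points of S; it does not touch the paper's convex-hull or HHC results.

```python
import numpy as np

def quad(A, b, c, x):
    """Evaluate the quadratic q(x) = x^T A x + b^T x + c."""
    return x @ A @ x + b @ x + c

def aggregate(quads, lam):
    """Form the aggregation sum_i lam_i * q_i of quadratics, each
    given as a triple (A, b, c); lam must be elementwise >= 0, so the
    aggregated inequality <= 0 is valid wherever every q_i <= 0."""
    lam = np.asarray(lam, dtype=float)
    assert (lam >= 0).all(), "aggregation weights must be nonnegative"
    A = sum(l * q[0] for l, q in zip(lam, quads))
    b = sum(l * q[1] for l, q in zip(lam, quads))
    c = sum(l * q[2] for l, q in zip(lam, quads))
    return A, b, c
```

For example, aggregating the unit ball x^T x - 1 <= 0 with a shifted ball yields another inequality that every point of the intersection satisfies; the paper's question is when such aggregations are enough to describe the full convex hull.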

    A trust region algorithm for heterogeneous multiobjective optimization

    This paper presents a new trust region method for multiobjective heterogeneous optimization problems. One of the objective functions is an expensive black-box function, for example given by a time-consuming simulation; for this function, derivative information is not available and computing function values involves high computational effort. The other objective functions are given analytically, and their derivatives can easily be computed. The method uses the basic trust region approach by restricting the computations in every iteration to a local area and replacing the objective functions by suitable models. The search direction is generated in the image space by using local ideal points. It is proved that the presented algorithm converges to a Pareto critical point. Numerical results are presented and compared to another algorithm.
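The basic trust-region mechanism the method builds on — model the objective locally, take a step within a trust radius, then accept the step and possibly expand the radius or reject it and shrink — can be sketched for a single derivative-free objective. This toy uses a finite-difference linear model and a Cauchy-type step, and omits everything specific to the paper (per-objective models, image-space ideal points, Pareto criticality):

```python
import numpy as np

def fd_grad(f, x, h=1e-5):
    """Forward-difference gradient for a black-box objective."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def trust_region_minimize(f, x0, delta=1.0, tol=1e-6, max_iter=200):
    """Sketch of a basic trust-region loop (single objective):
    minimize a linear model on the ball of radius delta, then
    accept/expand or reject/shrink based on the ratio of actual
    to predicted reduction."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = fd_grad(f, x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        step = -delta * g / gnorm        # Cauchy-type step on the ball
        pred = delta * gnorm             # reduction predicted by the model
        actual = f(x) - f(x + step)
        rho = actual / pred              # model quality ratio
        if rho > 0.1:
            x = x + step                 # accept the step
            if rho > 0.75:
                delta *= 2.0             # good model: expand the region
        else:
            delta *= 0.5                 # poor model: shrink the region
    return x
```

The ratio test is what keeps the method safe when the model of the expensive black-box objective is poor, which is exactly the situation the paper's heterogeneous setting is designed for.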

    (Global) Optimization: Historical notes and recent developments


    Aspects of quadratic optimization - nonconvexity, uncertainty, and applications

    Quadratic Optimization (QO) has been studied extensively in the literature due to its application in real-life problems. This thesis deals with two complicated aspects of QO problems, namely nonconvexity and uncertainty. A nonconvex QO problem is intractable in general, and the first part of this thesis presents methods to approximate such a problem. Another important aspect of a QO problem is taking into account uncertainty in the parameters, since they are mostly approximated or estimated from data. The second part of the thesis contains analyses of two methods that deal with uncertainty in a convex QO problem, namely Static and Adjustable Robust Optimization. To test the methods proposed in this thesis, three real-life applications have been considered: the pooling problem, the portfolio problem, and the norm approximation problem.