
    A polynomial training algorithm for calculating perceptrons of optimal stability

    Recomi (REpeated COrrelation Matrix Inversion) is a polynomially fast algorithm for finding optimally stable solutions of the perceptron learning problem. For random unbiased and biased patterns it is shown that the algorithm finds optimal solutions, if any exist, in at worst O(N^4) floating-point operations. Even beyond the critical storage capacity alpha_c the algorithm finds locally stable solutions (with negative stability) at the same speed. There are no divergent time scales in the learning process. A full proof of convergence cannot yet be given; only the major constituents of a proof are presented.
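
    The abstract does not spell out the Recomi update rule itself, so purely for orientation here is a minimal sketch of the classic Krauth-Mézard MinOver iteration, a different (and slower) scheme that targets the same optimal-stability objective; the function name, pattern sizes and step count below are illustrative choices, not taken from the paper.

```python
import numpy as np

def minover(xi, sigma, n_steps=10_000):
    """Krauth-Mezard MinOver iteration for the maximal-stability perceptron.

    NOT the Recomi algorithm of the abstract; a classic reference scheme for
    the same objective: maximize kappa = min_mu sigma^mu (w . xi^mu) / ||w||.

    xi    : (P, N) array of input patterns
    sigma : (P,)   array of +/-1 target outputs
    """
    P, N = xi.shape
    eta = sigma[:, None] * xi            # aligned patterns eta^mu = sigma^mu xi^mu
    w = eta.sum(axis=0) / N              # Hebbian starting point
    for _ in range(n_steps):
        mu = np.argmin(eta @ w)          # pattern with the worst stability
        w = w + eta[mu] / N              # reinforce the worst-learned pattern
    return w, (eta @ w).min() / np.linalg.norm(w)

# toy usage: random unbiased patterns below the critical capacity
rng = np.random.default_rng(0)
N, P = 100, 50
xi = rng.choice([-1.0, 1.0], size=(P, N))
sigma = rng.choice([-1.0, 1.0], size=P)
w, kappa = minover(xi, sigma)
print(f"achieved stability margin kappa = {kappa:.3f}")
```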

    Biases in human behavior

    The paper shows that biases in individuals’ decision-making may result from the process of mental editing by which subjects produce a “representation” of the decision problem. During this process, individuals make systematic use of default classifications in order to reduce the short-term memory load and the complexity of symbolic manipulation. The result is the construction of an imperfect mental representation of the problem that nevertheless has the advantage of being simple and of yielding “satisficing” decisions. The imperfection originates in a trade-off between the simplicity of a strategy’s representation and its efficiency. To obtain simplicity, the strategy’s rules have to be memorized and represented with some degree of abstraction, which allows their number to be drastically reduced. Raising the level of abstraction with which a strategy’s rule is represented means extending the domain of validity of the rule beyond the field in which the rule has been tested, and may therefore unintentionally include domains in which the rule is inefficient. The emergence of errors in the mental representation of a problem may thus be the "natural" effect of categorization and of the identification of the building blocks of a strategy. The biases may be persistent and give rise to lock-in effects, in which individuals remain trapped in sub-optimal strategies, as shown by experimental results on the stability of sub-optimal strategies in games like Target The Two. To understand why sub-optimal strategies that embody errors are locally stable, i.e. cannot be improved by small changes in the rules, we consider Kauffman’s NK model because, among other properties, it shows that if there are interdependencies among the rules of a system, then the system admits many sub-optimal solutions that are locally stable, i.e. cannot be improved by simple mutations. But the fitness function in the NK model is random, whereas in our context it is more reasonable to define the fitness of a strategy as the efficiency of the program. If we introduce this kind of fitness, the stability properties of the NK model no longer hold: the paper shows that even though the elementary statements of a strategy are interdependent, it is possible to reach an optimal configuration of the strategy via mutations, and consequently the sub-optimal solutions are not locally stable under mutations. The paper therefore provides a different explanation of the existence and stability of sub-optimal strategies, based on the difficulty of redefining the sub-problems that constitute the building blocks of the problem’s representation.
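
    For readers unfamiliar with the NK model invoked above, the following is a minimal sketch of Kauffman's landscape and of the single-bit-mutation test for local stability; the parameters N and K, the random contribution tables and all helper names are illustrative choices, not taken from the paper.

```python
import numpy as np

def make_nk_landscape(N=12, K=3, seed=0):
    """Tiny sketch of Kauffman's NK fitness landscape: each of the N bits
    contributes a random amount that depends on itself and K other bits,
    so raising K raises the interdependence among the 'rules'."""
    rng = np.random.default_rng(seed)
    neighbours = [np.r_[i, rng.choice([j for j in range(N) if j != i], K, replace=False)]
                  for i in range(N)]
    tables = [rng.random(2 ** (K + 1)) for _ in range(N)]   # random contributions

    def fitness(x):
        total = 0.0
        for i, nb in enumerate(neighbours):
            idx = int("".join(str(int(x[j])) for j in nb), 2)
            total += tables[i][idx]
        return total / N

    def is_local_optimum(x):
        # locally stable = no single-bit mutation improves fitness
        base = fitness(x)
        for i in range(N):
            y = x.copy(); y[i] ^= 1
            if fitness(y) > base:
                return False
        return True

    return fitness, is_local_optimum

# count locally stable configurations of a small random landscape
fitness, is_local_optimum = make_nk_landscape(N=10, K=4)
configs = [np.array([(m >> i) & 1 for i in range(10)]) for m in range(2 ** 10)]
print(sum(is_local_optimum(x) for x in configs), "local optima out of", len(configs))
```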

    Mean-field limit for collective behavior models with sharp sensitivity regions

    We rigorously show the mean-field limit for a large class of swarming individual-based models with local sharp sensitivity regions. For instance, these models include nonlocal repulsive-attractive forces locally averaged over sharp vision cones and Cucker-Smale interactions with discontinuous communication weights. We construct a globally-in-time defined notion of solutions through a differential inclusion system corresponding to the particle descriptions. We estimate the error between the solutions to the differential inclusion system and weak solutions to the expected limiting kinetic equation by employing tools from optimal transport theory. Quantitative bounds on the expansion of the 1-Wasserstein distance along flows, based on a weak-strong stability estimate, are obtained. We also provide different examples of realistic sensitivity sets satisfying the assumptions of our main results.
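
    For orientation only, a schematic (and much simplified) version of the Cucker-Smale-type particle system and of its expected kinetic limit discussed above can be written as follows; the precise assumptions on the communication weight and on the sensitivity sets are those of the paper and are not reproduced here.

```latex
\begin{align*}
  \dot x_i &= v_i, &
  \dot v_i &= \frac{1}{N}\sum_{j=1}^{N}\psi(x_j-x_i,\,v_i)\,(v_j-v_i),
  \qquad i=1,\dots,N,
\end{align*}
% with \psi possibly a sharp indicator of a vision cone; the expected
% mean-field (Vlasov-type) limit is
\begin{equation*}
  \partial_t f + v\cdot\nabla_x f + \nabla_v\cdot\bigl(F[f]\,f\bigr) = 0,
  \qquad
  F[f](x,v,t) = \int \psi(y-x,\,v)\,(w-v)\,f(y,w,t)\,\mathrm{d}y\,\mathrm{d}w,
\end{equation*}
% and the weak-strong stability estimate controls W_1(f_t, g_t) along such
% flows in terms of W_1(f_0, g_0).
```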

    Full stability of locally optimal solutions in second-order cone programs

    The paper presents complete characterizations of Lipschitzian full stability of locally optimal solutions to second-order cone programs (SOCPs) expressed entirely in terms of their initial data. These characterizations are obtained via appropriate versions of the quadratic growth and strong second-order sufficient conditions under the corresponding constraint qualifications. We also establish close relationships between full stability of local minimizers for SOCPs and strong regularity of the associated generalized equations at nondegenerate points. Our approach is mainly based on advanced tools of second-order variational analysis and generalized differentiation.
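
    As a reminder of the setting (standard notation, not taken verbatim from the paper), a second-order cone program constrains a smooth mapping to lie in a product of second-order (Lorentz) cones:

```latex
\begin{equation*}
  \min_{x\in\mathbb{R}^n}\ \varphi_0(x)
  \quad\text{subject to}\quad
  \Phi(x)\in\mathcal{Q}:=\mathcal{Q}_{m_1}\times\dots\times\mathcal{Q}_{m_r},
\end{equation*}
% where each factor is the second-order (Lorentz / "ice-cream") cone
\begin{equation*}
  \mathcal{Q}_{m}:=\bigl\{(s_0,\bar s)\in\mathbb{R}\times\mathbb{R}^{m-1}\ :\ s_0\ge\|\bar s\|\bigr\}.
\end{equation*}
% Full (Lipschitzian) stability asks, roughly, that the locally optimal
% solution depend single-valuedly and Lipschitz continuously on tilt and
% parameter perturbations around the reference point.
```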

    Robust receding horizon control for convex dynamics and bounded disturbances

    A novel robust nonlinear model predictive control strategy is proposed for systems with convex dynamics and convex constraints. Using a sequential convex approximation approach, the scheme constructs tubes that contain predicted trajectories, accounting for approximation errors and disturbances, and guaranteeing constraint satisfaction. An optimal control problem is solved as a sequence of convex programs, without the need for pre-computed error bounds. We develop the scheme initially in the absence of external disturbances and show that the proposed nominal approach is non-conservative, with the solutions of successive convex programs converging to a locally optimal solution of the original optimal control problem. We extend the approach to the case of additive disturbances using a novel strategy for selecting linearization points and seed trajectories. As a result we formulate a robust receding horizon strategy with guarantees of recursive feasibility and stability of the closed-loop system.
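
    The toy loop below illustrates only the generic successive-convexification pattern mentioned above (linearize the dynamics around a seed trajectory, solve the resulting convex program, re-seed, repeat); the scalar dynamics, horizon and cvxpy usage are made-up choices, and the sketch omits the tube construction, error bounds and robustness guarantees that are the paper's actual contribution.

```python
import numpy as np
import cvxpy as cp

# Toy successive-convexification loop: NOT the paper's tube-based scheme,
# just the bare "linearize -> solve convex program -> re-seed" pattern.

def f(x, u):
    return 0.9 * x + 0.1 * x ** 2 + u            # made-up scalar dynamics

def f_lin(x, u, xb, ub):
    # first-order expansion of f around the seed point (xb, ub)
    return f(xb, ub) + (0.9 + 0.2 * xb) * (x - xb) + (u - ub)

T, x0 = 10, 1.0
x_bar, u_bar = np.full(T + 1, x0), np.zeros(T)   # seed trajectory

for it in range(20):
    x, u = cp.Variable(T + 1), cp.Variable(T)
    cons = [x[0] == x0, cp.abs(u) <= 1.0]
    for k in range(T):
        # dynamics imposed on the model linearized around the current seed
        cons.append(x[k + 1] == f_lin(x[k], u[k], x_bar[k], u_bar[k]))
    cp.Problem(cp.Minimize(cp.sum_squares(x) + 0.1 * cp.sum_squares(u)), cons).solve()
    if np.max(np.abs(u.value - u_bar)) < 1e-6:
        break                                    # successive iterates have converged
    x_bar, u_bar = x.value.copy(), u.value.copy()

print("stopped after", it + 1, "convex programs; final state", x_bar[-1])
```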

    On the practically interesting instances of MAXCUT

    The complexity of a computational problem is traditionally quantified based on the hardness of its worst case. This approach has many advantages and has led to a deep and beautiful theory. However, from the practical perspective, this leaves much to be desired. In application areas, practically interesting instances very often occupy just a tiny part of an algorithm's space of instances, and the vast majority of instances are simply irrelevant. Addressing these issues is a major challenge for theoretical computer science which may make theory more relevant to the practice of computer science. Following Bilu and Linial, we apply this perspective to MAXCUT, viewed as a clustering problem. Using a variety of techniques, we investigate practically interesting instances of this problem. Specifically, we show how to solve in polynomial time distinguished, metric, expanding and dense instances of MAXCUT under mild stability assumptions. In particular, $(1+\epsilon)$-stability (which is optimal) suffices for metric and dense MAXCUT. We also show how to solve in polynomial time $\Omega(\sqrt{n})$-stable instances of MAXCUT, substantially improving the best previously known result.
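
    To make the stability notion concrete, here is a brute-force checker for Bilu-Linial gamma-stability on toy weighted graphs, under the usual reading that the optimal cut must stay strictly optimal when individual edge weights are inflated by factors up to gamma; the exhaustive enumeration and the example graph are illustrative only and unrelated to the paper's polynomial-time algorithms.

```python
import numpy as np

def is_gamma_stable(W, gamma):
    """Brute-force check of Bilu-Linial gamma-stability for a tiny MAXCUT
    instance: the optimal cut must stay strictly optimal whenever each edge
    weight is multiplied by a factor in [1, gamma].  Exponential in n."""
    n = len(W)

    def cut_edges(mask):
        # edges crossing the cut encoded by the bitmask
        return frozenset((i, j) for i in range(n) for j in range(i + 1, n)
                         if ((mask >> i) & 1) != ((mask >> j) & 1))

    def weight(edges):
        return sum(W[i, j] for i, j in edges)

    cuts = [cut_edges(m) for m in range(1, 2 ** (n - 1))]  # vertex n-1 fixed on one side
    best = max(cuts, key=weight)
    for c in cuts:
        if c == best:
            continue
        # worst case for the optimum: inflate by gamma exactly the edges that
        # the rival cut c crosses but the optimal cut does not
        if gamma * weight(c - best) >= weight(best - c):
            return False
    return True

# toy usage on a weighted 4-cycle with one light diagonal
W = np.zeros((4, 4))
for (i, j, w) in [(0, 1, 3.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.0), (0, 2, 0.5)]:
    W[i, j] = W[j, i] = w
print(is_gamma_stable(W, gamma=1.5))
```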