
    Improving spatial-simultaneous working memory in Down syndrome: effect of a training program led by parents instead of an expert

    Recent studies have suggested that the visuospatial component of working memory (WM) is selectively impaired in individuals with Down syndrome (DS), the deficit relating specifically to the spatial-simultaneous component, which is involved when stimuli are presented simultaneously. The present study aimed to analyze the effects of a computer-based program for training the spatial-simultaneous component of WM in terms of: specific effects (on spatial-simultaneous WM tasks); near and far transfer effects (on spatial-sequential and visuospatial abilities, and everyday memory tasks); and maintenance effects (1 month after the training). A comparison was drawn between the results obtained when the training was led by parents at home as opposed to an expert in psychology. Thirty-nine children and adolescents with DS were allocated to one of two groups: the training was administered by an expert in one, and by appropriately instructed parents in the other. The training was administered individually twice a week for a month, in eight sessions lasting approximately 30 min each. Our participants' performance improved after the training, and these gains were maintained a month later in both groups. Overall, our findings suggest that spatial-simultaneous WM performance can be improved, with specific and transfer gains; above all, it seems that, with adequate support, parents can effectively administer WM training to their child.

    Simplification Methods for Sum-of-Squares Programs

    A sum-of-squares is a polynomial that can be expressed as a sum of squares of other polynomials. Determining whether a sum-of-squares decomposition exists for a given polynomial is equivalent to a linear matrix inequality feasibility problem. The computation required to solve the feasibility problem depends on the number of monomials used in the decomposition. The Newton polytope method prunes unnecessary monomials from the decomposition, but it requires constructing a convex hull, which can be time-consuming for polynomials with many terms. This paper presents a new algorithm for removing monomials based on a simple property of positive semidefinite matrices. It returns a set of monomials that is never larger than the set returned by the Newton polytope method and, for some polynomials, is a strictly smaller set. Moreover, the algorithm takes significantly less computation than the convex hull construction. This algorithm is then extended to a more general simplification method for sum-of-squares programming.
    Comment: 6 pages, 2 figures
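    As a rough illustration of the pruning idea, the Python sketch below implements one conservative variant, assuming monomials are represented as exponent tuples: a candidate monomial is dropped when its square appears neither in the polynomial's support nor among the cross products of the other candidates, which forces the corresponding diagonal Gram entry to zero and, since a PSD matrix with a zero diagonal entry has a zero row and column, lets the monomial be removed. Function and variable names are illustrative, not the paper's implementation.

```python
from itertools import combinations

def prune_monomials(support, monomials):
    """Iteratively drop candidate monomials whose diagonal Gram entry is
    forced to zero (a zero diagonal entry in a PSD matrix zeroes its row).

    support    -- set of exponent tuples appearing in the polynomial p
    monomials  -- initial candidate exponent tuples for the basis z(x)
    """
    M = set(monomials)
    changed = True
    while changed:
        changed = False
        # exponents reachable by cross terms z_j * z_k with j != k
        cross = {tuple(a + b for a, b in zip(m1, m2))
                 for m1, m2 in combinations(M, 2)}
        for m in list(M):
            sq = tuple(2 * e for e in m)
            if sq not in support and sq not in cross:
                M.remove(m)
                changed = True
                break  # recompute cross terms after each removal
    return M

# toy example: p(x, y) = x^4 + x^2 + 1, given as exponent tuples
support = {(4, 0), (2, 0), (0, 0)}
candidates = {(2, 0), (1, 1), (1, 0), (0, 1), (0, 0)}
print(prune_monomials(support, candidates))
# drops (1, 1) and (0, 1): their squares (2, 2) and (0, 2) are unreachable,
# leaving the basis {x^2, x, 1}
```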

    Neural Lyapunov Control

    We propose new methods for learning control policies and neural network Lyapunov functions for nonlinear control problems, with provable guarantees of stability. The framework consists of a learner that attempts to find the control and Lyapunov functions, and a falsifier that finds counterexamples to quickly guide the learner towards solutions. The procedure terminates when no counterexample is found by the falsifier, in which case the controlled nonlinear system is provably stable. The approach significantly simplifies the process of Lyapunov control design, provides an end-to-end correctness guarantee, and can obtain much larger regions of attraction than existing methods such as LQR and SOS/SDP. We present experiments showing that the new methods obtain high-quality solutions for challenging control problems.
    Comment: NeurIPS 201
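    The learner/falsifier loop can be sketched compactly. The PyTorch snippet below is a simplified illustration for an autonomous system (a damped pendulum, chosen here for concreteness, not the paper's benchmark): the learner minimizes hinge penalties on the Lyapunov conditions, while a cheap random-search falsifier stands in for the paper's SMT-based one. All architecture choices and constants are assumptions for the sketch.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# illustrative dynamics: damped pendulum, x = (angle, angular velocity)
def f(x):
    theta, omega = x[:, 0], x[:, 1]
    return torch.stack([omega, -torch.sin(theta) - 0.5 * omega], dim=1)

# Lyapunov candidate: small MLP with V(0) subtracted off so V(0) = 0
net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
def V(x):
    return net(x) - net(torch.zeros(1, 2))

def lyapunov_loss(x):
    x = x[x.norm(dim=1) > 0.1]          # exclude a small ball at the origin
    x = x.requires_grad_(True)
    v = V(x)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    vdot = (grad_v * f(x)).sum(dim=1, keepdim=True)
    # hinge penalties: want V(x) > 0 and Vdot(x) < 0 away from the origin
    return (torch.relu(-v + 1e-3) + torch.relu(vdot + 1e-3)).mean()

def falsifier(n=10_000):
    """Random-search stand-in for the paper's falsifier: hunt for states
    that violate the Lyapunov conditions."""
    x = ((torch.rand(n, 2) - 0.5) * 4.0).requires_grad_(True)
    v = V(x)
    grad_v = torch.autograd.grad(v.sum(), x)[0]
    vdot = (grad_v * f(x)).sum(dim=1, keepdim=True)
    bad = ((v <= 0) | (vdot >= 0)).squeeze(1)
    bad &= x.norm(dim=1) > 0.1
    return x.detach()[bad]

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
samples = (torch.rand(500, 2) - 0.5) * 4.0
for it in range(200):
    opt.zero_grad()
    lyapunov_loss(samples.clone()).backward()
    opt.step()
    cex = falsifier()
    if len(cex) == 0:
        print(f"no counterexamples found at iteration {it}")
        break
    samples = torch.cat([samples, cex[:100]])  # feed counterexamples back
```

    In the paper the falsifier is exact, so termination yields a certificate; with the random-search stand-in above, termination only indicates that no violation was sampled.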

    Improving Efficiency and Scalability of Sum of Squares Optimization: Recent Advances and Limitations

    It is well known that any sum of squares (SOS) program can be cast as a semidefinite program (SDP) of a particular structure, and that therein lies the computational bottleneck for SOS programs: the SDPs generated by this procedure are large and costly to solve when the polynomials involved have many variables and high degree. In this paper, we review SOS optimization techniques and present two new methods for improving their computational efficiency. The first method leverages the sparsity of the underlying SDP to obtain computational speed-ups. Further improvements can be obtained if the coefficients of the polynomials that describe the problem have a particular sparsity pattern, called chordal sparsity. The second method bypasses semidefinite programming altogether and relies instead on solving a sequence of more tractable convex programs, namely linear and second-order cone programs. This opens up the question of how well one can approximate the cone of SOS polynomials by second-order representable cones. In the last part of the paper, we present some recent negative results related to this question.
    Comment: Tutorial for CDC 201
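    To make the SOS-to-SDP reduction concrete, here is a minimal CVXPY sketch (assuming cvxpy with its default SDP-capable solver is installed) that certifies a classic bivariate quartic as SOS by searching for a PSD Gram matrix; the example polynomial and monomial basis are illustrative, not from the paper.

```python
import cvxpy as cp

# p(x, y) = 2x^4 + 2x^3 y - x^2 y^2 + 5y^4.  With monomial basis
# z = [x^2, x*y, y^2], p is SOS iff p = z' Q z for some PSD Gram
# matrix Q, which is an SDP feasibility problem.
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                       # Gram matrix must be PSD
    Q[0, 0] == 2,                 # coefficient of x^4
    2 * Q[0, 1] == 2,             # coefficient of x^3 y
    2 * Q[0, 2] + Q[1, 1] == -1,  # coefficient of x^2 y^2
    2 * Q[1, 2] == 0,             # coefficient of x y^3
    Q[2, 2] == 5,                 # coefficient of y^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # 'optimal' -> a Gram matrix exists, so p is SOS
```

    The paper's second method can be pictured as swapping the PSD constraint above for a more restrictive but cheaper condition, such as diagonal dominance (yielding a linear program) or scaled diagonal dominance (yielding a second-order cone program), trading conservatism for scalability.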

    Domain Decomposition for Stochastic Optimal Control

    This work proposes a method for solving linear stochastic optimal control (SOC) problems using sum of squares and semidefinite programming. Previous work used polynomial optimization to approximate the value function, requiring a high polynomial degree to capture local phenomena. To improve the scalability of the method to problems of interest, a domain decomposition scheme is presented. By using local approximations, lower-degree polynomials become sufficient, and both local and global properties of the value function are captured. The domain of the problem is split into a non-overlapping partition, with added constraints ensuring C^1 continuity. The Alternating Direction Method of Multipliers (ADMM) is used to optimize over each domain in parallel and to ensure convergence on the boundaries of the partitions. This results in improved conditioning of the problem and allows much larger and more complex problems to be addressed with improved performance.
    Comment: 8 pages. Accepted to CDC 201
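    As a drastically simplified picture of the consensus mechanism, the sketch below runs ADMM on two scalar subproblems that must agree on a shared interface value, standing in for the continuity constraints that glue local value-function approximations together. The quadratic objectives and all constants are illustrative assumptions, not the paper's formulation.

```python
def solve_admm(a1, a2, rho=1.0, iters=100):
    """Consensus ADMM for min (x1 - a1)^2 + (x2 - a2)^2  s.t.  x1 = x2 = z,
    where z plays the role of the shared boundary value of two subdomains."""
    x1, x2, z = 0.0, 0.0, 0.0
    u1, u2 = 0.0, 0.0  # scaled dual variables
    for _ in range(iters):
        # local updates: argmin (x - a_i)^2 + (rho/2)(x - z + u_i)^2,
        # solvable in closed form and independently (i.e., in parallel)
        x1 = (2 * a1 + rho * (z - u1)) / (2 + rho)
        x2 = (2 * a2 + rho * (z - u2)) / (2 + rho)
        z = 0.5 * (x1 + u1 + x2 + u2)  # consensus (averaging) step
        u1 += x1 - z                   # dual ascent on the residuals
        u2 += x2 - z
    return x1, x2, z

print(solve_admm(1.0, 3.0))  # all three values converge to 2.0
```

    In the paper the local updates are SOS/SDP subproblems over each partition element rather than scalar quadratics, but the alternation between parallel local solves and a boundary-agreement step has the same shape.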