
    Investigation of cyclicity of kinematic resolution methods for serial and parallel planar manipulators

    Kinematic redundancy of manipulators is a well-understood topic, and various methods have been developed for redundancy resolution in order to solve the inverse kinematics problem, at least for serial manipulators. An important question, with high practical relevance, is whether the inverse kinematics solution is cyclic, i.e., whether the redundancy resolution yields a closed path in joint space as the solution of a closed path in task space. This paper investigates the cyclicity property of two widely used redundancy resolution methods, namely the projected gradient method (PGM) and the augmented Jacobian method (AJM), by means of examples. Both methods determine solutions that minimize an objective function, and from an application point of view, the sensitivity of the methods to the initial configuration is crucial. Numerical results are reported for redundant serial robotic arms and for redundant parallel kinematic manipulators. While the AJM is known to be cyclic, it turns out that the PGM also exhibits cyclicity. However, only the PGM converges to the local optimum of the objective function when starting from an initial configuration of the cyclic trajectory.
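
    To make the cyclicity question concrete, the sketch below applies projected-gradient redundancy resolution to a hypothetical planar 3R arm tracking a closed circular path and measures how far the joint trajectory is from closing after one cycle. The link lengths, initial configuration, gains, and the secondary objective H are illustrative assumptions, not the paper's test cases.

        import numpy as np

        # Hypothetical planar 3R arm with unit link lengths (assumed for illustration).
        L = np.array([1.0, 1.0, 1.0])

        def fk(q):
            """End-effector position of the planar 3R arm."""
            s = np.cumsum(q)
            return np.array([L @ np.cos(s), L @ np.sin(s)])

        def jacobian(q):
            """2x3 positional Jacobian of the end effector."""
            s = np.cumsum(q)
            J = np.zeros((2, 3))
            for i in range(3):
                J[0, i] = -np.sum(L[i:] * np.sin(s[i:]))
                J[1, i] =  np.sum(L[i:] * np.cos(s[i:]))
            return J

        def grad_H(q):
            """Gradient of the assumed secondary objective H(q) = -0.5 * ||q||^2."""
            return -q

        q0 = np.array([0.3, 0.8, 0.5])            # initial configuration (assumed)
        r, T, dt, K, alpha = 0.3, 1.0, 1e-3, 20.0, 1.0
        center = fk(q0) - np.array([r, 0.0])      # circle passes through the start pose

        def x_des(t):
            a = 2.0 * np.pi * t / T
            return center + r * np.array([np.cos(a), np.sin(a)])

        def xdot_des(t):
            a = 2.0 * np.pi * t / T
            return r * (2.0 * np.pi / T) * np.array([-np.sin(a), np.cos(a)])

        q = q0.copy()
        for k in range(int(T / dt)):
            t = k * dt
            J = jacobian(q)
            J_pinv = np.linalg.pinv(J)
            N = np.eye(3) - J_pinv @ J                     # null-space projector
            task = xdot_des(t) + K * (x_des(t) - fk(q))    # track the closed task path
            qdot = J_pinv @ task + alpha * N @ grad_H(q)   # projected gradient resolution
            q = q + dt * qdot

        # Cyclicity check: distance between the joint configurations before and
        # after one full traversal of the closed task-space path.
        print("joint-space closure error:", np.linalg.norm(q - q0))

    Setting alpha to zero reduces the scheme to the plain minimum-norm (pseudoinverse) resolution, which makes it easy to compare closure errors for different secondary objectives and initial configurations.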

    Let's Make Block Coordinate Descent Go Fast: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence

    Block coordinate descent (BCD) methods are widely used for large-scale numerical optimization because of their cheap iteration costs, low memory requirements, amenability to parallelization, and ability to exploit problem structure. Three main algorithmic choices influence the performance of BCD methods: the block partitioning strategy, the block selection rule, and the block update rule. In this paper we explore all three of these building blocks and propose variations for each that can lead to significantly faster BCD methods. We (i) propose new greedy block-selection strategies that guarantee more progress per iteration than the Gauss-Southwell rule; (ii) explore practical issues such as how to implement the new rules when using "variable" blocks; (iii) explore the use of message-passing to compute matrix or Newton updates efficiently on huge blocks for problems with a sparse dependency between variables; and (iv) consider optimal active manifold identification, which leads to bounds on the "active-set complexity" of BCD methods and to superlinear convergence for certain problems with sparse solutions (and in some cases finite termination at an optimal solution). We support all of our findings with numerical results for the classic machine learning problems of least squares, logistic regression, multi-class logistic regression, label propagation, and L1-regularization.
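
    As an illustration of the greedy ingredients, the sketch below runs block coordinate descent on a least-squares objective with a Gauss-Southwell-style block selection rule (pick the block with the largest gradient norm) and an exact Newton update on the chosen block. The problem data, block partition, and iteration count are assumptions for illustration, not the paper's benchmarks.

        import numpy as np

        # BCD on f(x) = 0.5 * ||A x - b||^2 with greedy block selection and an
        # exact (Newton) block update; for a quadratic, the block Newton step
        # minimizes f exactly over the chosen block with the others held fixed.
        rng = np.random.default_rng(0)
        n, d, block_size = 200, 60, 10
        A = rng.standard_normal((n, d))
        b = rng.standard_normal(n)

        blocks = [np.arange(i, i + block_size) for i in range(0, d, block_size)]
        x = np.zeros(d)
        AtA, Atb = A.T @ A, A.T @ b

        for it in range(100):
            grad = AtA @ x - Atb                          # full gradient of f
            # Gauss-Southwell block rule: block with the largest gradient norm.
            norms = [np.linalg.norm(grad[idx]) for idx in blocks]
            idx = blocks[int(np.argmax(norms))]
            # Exact block update using the block of the Hessian A^T A.
            H_bb = AtA[np.ix_(idx, idx)]
            x[idx] -= np.linalg.solve(H_bb, grad[idx])

        print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)

    Swapping the argmax for a random or cyclic block choice gives a quick baseline against which the extra progress per iteration of the greedy rule can be measured.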

    Event-triggered state observers for sparse sensor noise/attacks

    This paper describes two algorithms for state reconstruction from sensor measurements that are corrupted with sparse, but otherwise arbitrary, 'noise.' These results are motivated by the need to secure cyber-physical systems against a malicious adversary that can arbitrarily corrupt sensor measurements. The first algorithm reconstructs the state from a batch of sensor measurements, while the second is able to incorporate new measurements as they become available, in the spirit of a Luenberger observer. A distinguishing feature of both algorithms is the use of event-triggered techniques to improve their computational performance.
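
    For intuition about the batch reconstruction problem, the sketch below recovers the initial state of a linear system when at most s sensors are arbitrarily corrupted, by brute-forcing over trusted-sensor subsets and keeping the subset with the smallest least-squares residual. This is a generic combinatorial decoder under assumed system matrices and dimensions, not the event-triggered algorithms proposed in the paper.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(1)
        n, p, s, T = 3, 5, 1, 6        # states, sensors, max attacked sensors, window
        A = rng.standard_normal((n, n))
        A /= np.max(np.abs(np.linalg.eigvals(A)))     # keep the dynamics bounded
        C = rng.standard_normal((p, n))

        # Simulate: the adversary corrupts sensor 2 with arbitrary values.
        x0 = rng.standard_normal(n)
        x, Y = x0.copy(), np.zeros((T, p))
        for k in range(T):
            Y[k] = C @ x
            Y[k, 2] += 5.0 * rng.standard_normal()    # sparse, arbitrary "noise"
            x = A @ x

        def observability(A, C, T, sensors):
            """Stacked observability matrix for the selected sensors over T steps."""
            rows, Ak = [], np.eye(A.shape[0])
            for _ in range(T):
                rows.append(C[sensors] @ Ak)
                Ak = A @ Ak
            return np.vstack(rows)

        best = (np.inf, None)
        for trusted in combinations(range(p), p - s):
            O = observability(A, C, T, list(trusted))
            y = Y[:, list(trusted)].reshape(-1)
            xhat, *_ = np.linalg.lstsq(O, y, rcond=None)
            res = np.linalg.norm(O @ xhat - y)
            if res < best[0]:
                best = (res, xhat)

        print("reconstruction error:", np.linalg.norm(best[1] - x0))

    The subset search grows combinatorially with the number of sensors, which is exactly the computational burden that event-triggered updating is meant to mitigate in the online setting.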

    On the filtering effect of iterative regularization algorithms for linear least-squares problems

    Many real-world applications are addressed through a linear least-squares problem formulation, whose solution is computed by means of an iterative approach. A large number of studies have been carried out in the optimization field to provide the fastest methods for reconstructing the solution, involving choices of adaptive parameters and scaling matrices. However, in the presence of an ill-conditioned model and real data, the need for a regularized solution instead of the least-squares one has shifted the point of view in favour of iterative algorithms that combine fast execution with stable behaviour with respect to the restoration error. In this paper we analyze some classical and recent gradient approaches for the linear least-squares problem by looking at how they filter the singular values, showing in particular the effects of scaling matrices and non-negative constraints in recovering the correct filters of the solution.
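
    As a concrete instance of the filtering viewpoint, the sketch below compares a plain gradient (Landweber) iterate for a least-squares problem with its SVD representation, whose filter factors are 1 - (1 - tau * sigma_i^2)^k: close to one for large singular values and close to zero for the small, noise-dominated ones. The test matrix, step size, and iteration count are illustrative assumptions, not the paper's experiments.

        import numpy as np

        # Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k), started at zero,
        # equals an SVD-filtered solution with factors 1 - (1 - tau*sigma_i^2)^k.
        rng = np.random.default_rng(2)
        m, n, k = 50, 20, 30
        A = rng.standard_normal((m, n))
        b = rng.standard_normal(m)
        tau = 1.0 / np.linalg.norm(A, 2) ** 2     # step size below 2 / sigma_max^2

        # Run k Landweber iterations from x0 = 0.
        x = np.zeros(n)
        for _ in range(k):
            x = x + tau * A.T @ (b - A @ x)

        # Same iterate written as a filtered SVD solution.
        U, sig, Vt = np.linalg.svd(A, full_matrices=False)
        phi = 1.0 - (1.0 - tau * sig ** 2) ** k   # filter factors on the spectrum
        x_filt = Vt.T @ (phi * (U.T @ b) / sig)

        print("filter factors from", phi.min(), "to", phi.max())
        print("iterate vs. filtered solution:", np.linalg.norm(x - x_filt))

    Stopping the iteration early therefore acts as a regularizer: increasing k sharpens the filters toward the unregularized least-squares solution, which is the behaviour the scaled and constrained variants discussed in the abstract modify.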