
    Reduced Memory Footprint in Multiparametric Quadratic Programming by Exploiting Low Rank Structure

    In multiparametric programming, an optimization problem that depends on a parameter vector is solved parametrically. In control, multiparametric quadratic programming (mp-QP) problems have become increasingly important, since the optimization problem arising in Model Predictive Control (MPC) can be cast as an mp-QP problem; solving it parametrically is referred to as explicit MPC. One of the main limitations of mp-QP and explicit MPC is the amount of memory required to store the parametric solution and the critical regions. This paper introduces a method that exploits low-rank structure in the parametric solution of an mp-QP problem to reduce the required memory. The method is based on ideas similar to those used to exploit low-rank modifications in generic QP solvers, but is here applied to mp-QP problems to save memory. The proposed method has been evaluated experimentally, and for some relevant example problems the memory reduction is an order of magnitude compared to storing the full parametric solution and critical regions.
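
    The paper's exact compression scheme is not reproduced here, but the underlying idea can be illustrated with a minimal numpy sketch (all names and the rank-1 toy data below are ours): instead of storing each critical region's full gain matrix K_i of the affine law u(theta) = K_i theta + k_i, store a shared base gain K0 plus thin low-rank factors of the difference, and evaluate the law without ever forming K_i.

```python
import numpy as np

def compress_gain(K0, K_i, rank_tol=1e-10):
    """Return thin factors (U, V) such that K_i = K0 + U @ V.T (up to rank_tol)."""
    D = K_i - K0
    U_, s, Vt = np.linalg.svd(D, full_matrices=False)
    r = int(np.sum(s > rank_tol))          # numerical rank of the update
    U = U_[:, :r] * s[:r]                  # absorb singular values into U
    V = Vt[:r, :].T
    return U, V

def evaluate_law(K0, U, V, k_i, theta):
    """Evaluate u = (K0 + U V^T) theta + k_i without forming K_i."""
    return K0 @ theta + U @ (V.T @ theta) + k_i

# Toy numbers: 4 x 12 gains whose deviation from the base gain is rank 1.
rng = np.random.default_rng(0)
K0 = rng.standard_normal((4, 12))
K_i = K0 + np.outer(rng.standard_normal(4), rng.standard_normal(12))
U, V = compress_gain(K0, K_i)
theta = rng.standard_normal(12)
assert np.allclose(evaluate_law(K0, U, V, np.zeros(4), theta), K_i @ theta)
print("stored numbers per region:", U.size + V.size, "vs full:", K_i.size)
```

    For this toy case the factors hold 16 numbers instead of 48; the memory saving grows with the number of regions that share structure with the base gain.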

    NMPC in Active Subspaces: Dimensionality Reduction with Recursive Feasibility Guarantees

    Dimensionality reduction of decision variables is a practical and classic method to reduce the computational burden in linear and Nonlinear Model Predictive Control (NMPC). Available results range from early move-blocking ideas to singular-value decomposition. For schemes more complex than move-blocking, it is seemingly not straightforward to guarantee recursive feasibility of the receding-horizon optimization. Decomposing the space of decision variables related to the inputs into active and inactive complements, this paper proposes a general framework for effective, feasibility-preserving dimensionality reduction in NMPC. We show how, independently of the actual choice of the subspaces, recursive feasibility can be established. Moreover, we propose the use of global sensitivity analysis to construct the active subspace in a data-driven fashion based on user-defined criteria. Numerical examples illustrate the efficacy of the proposed scheme; specifically, for a chemical reactor we obtain a significant reduction by a factor of 20-40 at a closed-loop performance decay of less than 0.05%.
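
    As a rough illustration of the decomposition (notation and helper names are ours, not the paper's), the sketch below parameterizes the full input sequence as u = u_ref + W z, where the columns of W span a low-dimensional active subspace and z are the reduced decision variables. Choosing u_ref as the shifted previous solution makes z = 0 reproduce a known feasible candidate, which is the standard route to recursive feasibility.

```python
import numpy as np

def shift_sequence(u_prev, u_terminal):
    """Standard MPC warm start: drop the first input, append a terminal input."""
    return np.concatenate([u_prev[1:], [u_terminal]])

def reduced_to_full(u_ref, W, z):
    """Map reduced variables z back to the full input sequence."""
    return u_ref + W @ z

# Horizon of 20 scalar inputs reduced to 3 decision variables.
rng = np.random.default_rng(1)
N, m = 20, 3
W, _ = np.linalg.qr(rng.standard_normal((N, m)))   # orthonormal subspace basis
u_prev = rng.standard_normal(N)
u_ref = shift_sequence(u_prev, u_terminal=0.0)
# z = 0 recovers the shifted (feasible) candidate regardless of how W was chosen.
assert np.allclose(reduced_to_full(u_ref, W, np.zeros(m)), u_ref)
```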

    Gesture Recognition and Control for Semi-Autonomous Robotic Assistant Surgeons

    The next stage of robotics development is to introduce autonomy and cooperation with human agents in tasks that require high levels of precision and/or exert considerable physical strain. To guarantee the highest possible safety standards, the best approach is to devise a deterministic automaton that performs identically for each operation. Such an approach, however, inevitably fails to adapt to changing environments or different human companions. In a surgical scenario, the greatest variability lies in the timing of the different actions performed within the same phases. This thesis explores the solutions adopted in pursuing automation in robotic minimally-invasive surgeries (R-MIS) and presents a novel cognitive control architecture that uses a multi-modal neural network, trained on a cooperative task performed by human surgeons, to produce an action segmentation that provides the required timing for actions, while maintaining full phase-execution control via a deterministic Supervisory Controller and full execution safety via a velocity-constrained Model Predictive Controller.
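
    The division of labor described above can be caricatured in a few lines. The sketch below is entirely our own naming, not the thesis's code: a learned segmenter only influences timing, a deterministic supervisor retains full control over which phase may run, and the velocity constraint is the kind of rate bound a velocity-constrained MPC would enforce on the commands.

```python
PHASES = ["approach", "grasp", "retract"]   # hypothetical surgical sub-phases

class Supervisor:
    """Deterministic phase control: the learned prediction can only say *when*."""
    def __init__(self):
        self.phase_idx = 0

    def step(self, predicted_action):
        # Advance only if the segmenter's prediction matches the allowed phase.
        if predicted_action == PHASES[self.phase_idx]:
            self.phase_idx = min(self.phase_idx + 1, len(PHASES) - 1)
        return PHASES[self.phase_idx]

def clamp_velocity(u_cmd, u_prev, v_max):
    """Rate constraint of the form |u_k - u_{k-1}| <= v_max on each command."""
    return u_prev + max(-v_max, min(v_max, u_cmd - u_prev))
```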

    A mixed-precision RISC-V processor for extreme-edge DNN inference

    Low bit-width Quantized Neural Networks (QNNs) enable the deployment of complex machine learning models on constrained devices such as microcontrollers (MCUs) by reducing their memory footprint. Fine-grained asymmetric quantization (i.e., different bit-widths assigned to weights and activations on a tensor-by-tensor basis) is a particularly interesting scheme for maximizing accuracy under a tight memory constraint. However, the lack of sub-byte instruction set architecture (ISA) support in state-of-the-art microprocessors makes it hard to fully exploit this extreme quantization paradigm in embedded MCUs. Supporting sub-byte and asymmetric QNNs directly in the ISA would require many precision formats and an exorbitant amount of opcode space. In this work, we attack this problem with status-based SIMD instructions: rather than encoding the precision explicitly, each operand's precision is set dynamically in a core status register. We propose MPIC (Mixed Precision Inference Core), a novel RISC-V ISA core based on the open-source RI5CY core. Our approach enables full support for mixed-precision QNN inference with 292 different combinations of operands at 16-, 8-, 4-, and 2-bit precision, without adding any extra opcodes or increasing the complexity of the decode stage. Our results show that MPIC improves both performance and energy efficiency by a factor of 1.1-4.9x compared to software-based mixed precision on RI5CY; with respect to commercially available Cortex-M4 and M7 microcontrollers, it delivers 3.6-11.7x better performance and 41-155x higher efficiency.
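
    The status-register idea is easy to emulate in software. The following sketch is ours (MPIC itself is hardware, and the packing details here are simplified): a single "dot-product instruction" whose operand precision is read from a status register rather than encoded in the opcode, so the same instruction serves every bit-width combination.

```python
import numpy as np

# "Core status register": set once, implicitly reused by later instructions.
status = {"w_bits": 4, "a_bits": 2}

def unpack(word, bits, signed=True):
    """Unpack a 32-bit word into 32 // bits fields of the given bit-width."""
    n = 32 // bits
    fields = [(word >> (i * bits)) & ((1 << bits) - 1) for i in range(n)]
    if signed:  # sign-extend each field
        fields = [f - (1 << bits) if f >= (1 << (bits - 1)) else f for f in fields]
    return np.array(fields, dtype=np.int32)

def simd_dot(w_word, a_word):
    """One 'instruction': precision comes from the status register, not the opcode."""
    w = unpack(w_word, status["w_bits"])
    a = unpack(a_word, status["a_bits"])
    n = min(len(w), len(a))   # simplification: real hardware tracks lanes per word
    return int(w[:n] @ a[:n])

# Example: switch to 4-bit weights times 4-bit activations, one packed word each.
status.update(w_bits=4, a_bits=4)
print(simd_dot(0x12345678, 0x11111111))
```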

    Algorithms and Methods for High-Performance Model Predictive Control


    Scalable optimization algorithms for recommender systems

    Recommender systems have gained significant popularity and are widely used in many e-commerce applications. Predicting user preferences is a key step towards providing high-quality recommendations. In practice, however, suggestions made to users must not only consider user preferences in isolation; a good recommendation engine also needs to account for certain constraints. For instance, an online video rental service that suggests multimedia items (e.g., DVDs) to its customers should consider the availability of DVDs in stock to reduce customer waiting times for accepted recommendations. Moreover, every user should receive a small but sufficient number of suggestions that the user is likely to be interested in. This thesis aims to develop and implement scalable optimization algorithms that can be used (among other applications) to generate recommendations satisfying certain objectives and constraints like the ones above. State-of-the-art approaches lack efficiency and/or scalability in coping with large real-world instances, which may involve millions of users and items. First, we study large-scale matrix completion in the context of collaborative filtering in recommender systems. For such problems, we propose a set of novel shared-nothing algorithms that are designed to run on a small cluster of commodity nodes and outperform alternative approaches in terms of efficiency, scalability, and memory footprint. Next, we view our recommendation task as a generalized matching problem and propose the first distributed solution for solving such problems at scale. Our algorithm is designed to run on a small cluster of commodity nodes (or in a MapReduce environment) and has strong approximation guarantees. Our matching algorithm relies on linear programming. To this end, we present an efficient distributed approximation algorithm for mixed packing-covering linear programs, a simple but expressive subclass of linear programs. Our approximation algorithm requires a poly-logarithmic number of passes over the input, is simple, and is well suited for parallel processing on GPUs, in shared-memory architectures, and on a small cluster of commodity nodes.
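
    To make the matrix-completion task concrete, here is a minimal single-machine SGD sketch of the underlying factorization objective (the thesis's contribution is the distributed, shared-nothing version; the names and toy data below are ours): minimize the squared error over observed ratings, sum of (r_ui - p_u . q_i)^2, plus an L2 regularizer on the factors.

```python
import numpy as np

def sgd_matrix_completion(ratings, n_users, n_items, rank=8,
                          lr=0.01, reg=0.05, epochs=20, seed=0):
    """Factor a partially observed ratings matrix as P @ Q.T via plain SGD."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, rank))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, rank))   # item factors
    for _ in range(epochs):
        for u, i, r in ratings:                      # observed entries only
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# Toy usage: three users, three items, five observed ratings.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 0, 1.0), (2, 2, 4.5)]
P, Q = sgd_matrix_completion(ratings, n_users=3, n_items=3)
print("predicted r(0,2):", P[0] @ Q[2])             # unobserved entry
```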
