196 research outputs found

    Politiek

    No full text

    Exploring the effects of robotic design on learning and neural control

    Full text link
    The ongoing deep learning revolution has allowed computers to outclass humans in various games and to perceive features imperceptible to humans during classification tasks. Current machine learning techniques have clearly distinguished themselves in specialized tasks. However, we have yet to see robots capable of performing multiple tasks at an expert level. Most work in this field focuses on developing more sophisticated learning algorithms for a robot's controller, given a largely static and presupposed robotic design. By focusing on the development of robotic bodies, rather than neural controllers, I have discovered that robots can be designed such that they overcome many of the pitfalls currently encountered by neural controllers in multitask settings. Through this discovery, I also present novel metrics to explicitly measure the learning ability of a robotic design and its resistance to common problems such as catastrophic interference. Traditionally, physical robot design requires human engineers to plan every aspect of the system, which is expensive and often relies on human intuition. In contrast, within the field of evolutionary robotics, evolutionary algorithms are used to automatically create optimized designs; however, such designs are often still limited in their ability to perform in a multitask setting. The metrics created and presented here give a novel path to automated design that allows evolved robots to synergize with their controllers to improve the computational efficiency of their learning while overcoming catastrophic interference. Overall, this dissertation demonstrates the ability to automatically design robots that are more general purpose than current robots and that can perform various tasks while requiring less computation. (arXiv admin note: text overlap with arXiv:2008.0639)
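The abstract does not specify its learning-ability metrics; as context only, the standard continual-learning "forgetting" measure quantifies the catastrophic interference mentioned above: accuracy on an earlier task right after training on it, minus accuracy on that task after training on later tasks. A minimal sketch with illustrative values (not the dissertation's own metric):

```python
# Standard continual-learning "forgetting" measure (not the thesis's own
# metric): drop in task performance caused by training on later tasks.

def forgetting(acc_right_after: float, acc_at_end: float) -> float:
    """Accuracy drop on one task between learning it and the end of training."""
    return acc_right_after - acc_at_end

def mean_forgetting(acc_matrix) -> float:
    """acc_matrix[i][j] = accuracy on task j after training through task i.
    Averages the (non-negative) drop over all tasks except the last."""
    final_row = acc_matrix[-1]
    drops = [max(acc_matrix[j][j] - final_row[j], 0.0)
             for j in range(len(acc_matrix) - 1)]
    return sum(drops) / len(drops)

# Illustrative values: training on task 1 degrades task 0 from 0.9 to 0.6.
acc = [[0.9, 0.1],
       [0.6, 0.8]]
print(round(mean_forgetting(acc), 2))  # 0.3
```

A design that "resists" interference in the dissertation's sense would keep this value near zero as tasks accumulate.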

    Biomimetic vision-based collision avoidance system for MAVs.

    Get PDF
    This thesis proposes a secondary collision avoidance algorithm for micro aerial vehicles based on the luminance-difference processing exhibited by the Lobula Giant Movement Detector (LGMD), a wide-field visual neuron located in the lobula layer of a locust’s nervous system. In particular, we address the design, modulation, hardware implementation, and testing of a computationally simple yet robust collision avoidance algorithm based on the novel concept of quadfurcated luminance-difference processing (QLDP). The micro and nano classes of unmanned robots are the primary target applications of this algorithm; however, it could also be implemented on advanced robots as a fail-safe redundant system. The algorithm addresses some of the major detection challenges, such as obstacle proximity, collision threat potentiality, and contrast correction within the robot’s field of view, to establish and generate a precise yet simple collision-free motor control command in real time. Additionally, it has proven effective in detecting edges independent of background or obstacle colour, size, and contour. To achieve this, the proposed QLDP executes a series of image enhancement and edge detection algorithms to estimate the collision threat level (spike), which further determines whether the robot’s field of view must be dissected into four quarters, where each quadrant’s response is analysed and interpreted against the others to determine the most secure path. 
Ultimately, the computational load and the performance of the model are assessed against an eclectic set of off-line as well as real-time, real-world collision scenarios in order to validate the proposed model’s asserted capability to avoid obstacles from more than 670 mm before collision (real-world) while moving at 1.2 m s⁻¹, with a successful avoidance rate of 90% and processing at a frequency of 120 Hz, which, to the best of our knowledge, is superior to the results reported in the contemporary related literature. MSc by Research
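The quadrant-comparison step of QLDP can be sketched roughly as follows. This is a simplified illustration (plain frame differencing, no LGMD spike model, toy frames), not the thesis's exact pipeline:

```python
import numpy as np

# Sketch of the "quadfurcated" step: pool frame-to-frame luminance
# differences per image quadrant and steer toward the quadrant with the
# weakest motion response. Thresholding and the LGMD spike model from the
# thesis are omitted; frames here are toy data.

def quadrant_responses(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Mean absolute luminance difference per quadrant:
    [top-left, top-right, bottom-left, bottom-right]."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    h, w = diff.shape
    hy, hx = h // 2, w // 2
    return np.array([
        diff[:hy, :hx].mean(), diff[:hy, hx:].mean(),
        diff[hy:, :hx].mean(), diff[hy:, hx:].mean(),
    ])

def safest_quadrant(prev: np.ndarray, curr: np.ndarray) -> int:
    """Index of the quadrant with the least apparent motion."""
    return int(np.argmin(quadrant_responses(prev, curr)))

# Toy frames: an expanding bright patch (approaching obstacle) top-left.
prev = np.zeros((8, 8))
curr = np.zeros((8, 8))
curr[:3, :3] = 255.0
print(safest_quadrant(prev, curr))  # 1 (first quadrant with zero response)
```

In the thesis, this comparison is only triggered once the estimated threat level crosses a spike threshold, keeping the common case cheap.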

    Large space structures and systems in the space station era: A bibliography with indexes (supplement 04)

    Get PDF
    Bibliographies and abstracts are listed for 1211 reports, articles, and other documents introduced into the NASA scientific and technical information system between 1 Jul. and 30 Dec. 1991. The bibliography's purpose is to provide helpful information to researchers, managers, and designers in technology development and mission design, in the areas of systems, interactive analysis and design, structural concepts and control systems, electronics, advanced materials, assembly concepts, propulsion, and solar power satellite systems

    Development of Fault Diagnosis and Fault Tolerant Control Algorithms with Application to Unmanned Systems

    Get PDF
    Unmanned vehicles have been increasingly employed in real life. They include unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), unmanned spacecraft, and unmanned underwater vehicles (UUVs). Unmanned vehicles, like any other autonomous systems, need controllers to stabilize and control them. On the other hand, unmanned systems may be subject to various faults. Detecting a fault and determining its location and severity are crucial for unmanned vehicles. Once enough information about a fault is available, the controller must be redesigned based on the post-fault characteristics of the system; the resulting controlled system can then tolerate the fault and may achieve better performance. The main focus of this thesis is to develop Fault Detection and Diagnosis (FDD) algorithms and Fault Tolerant Controllers (FTC) to increase the performance, safety, and reliability of various missions using unmanned systems. In the field of unmanned ground vehicles, a new kinematic control method has been proposed for the trajectory tracking of nonholonomic Wheeled Mobile Robots (WMRs); it has been experimentally tested on a UGV called Qbot. A stable leader-follower formation controller for time-varying formation configurations of multiple nonholonomic wheeled mobile robots has also been presented and is examined through computer simulation. In the field of unmanned aerial vehicles, a Two-Stage Kalman Filter (TSKF), an Adaptive Two-Stage Kalman Filter (ATSKF), and an Interacting Multiple Model (IMM) filter were proposed for FDD of the quadrotor helicopter testbed in the presence of actuator faults. As for space missions, an FDD algorithm for the attitude control system of the Japan Canada Joint Collaboration Satellite - Formation Flying (JC2Sat-FF) mission has been developed. The FDD scheme was achieved using an IMM-based FDD algorithm, and its efficiency has been shown through simulation results in a nonlinear simulator of the JC2Sat-FF. 
A fault tolerant fuzzy gain-scheduled PID controller has also been designed for a quadrotor unmanned helicopter in the presence of actuator faults. The developed FDD algorithms and fuzzy controller were evaluated through experimental application to a quadrotor helicopter testbed called Qball-X4
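As context for the Kalman-filter-based FDD schemes above, a minimal residual (innovation) test on a scalar Kalman filter illustrates the basic idea; the thesis's TSKF/ATSKF/IMM designs are far more elaborate, and the noise values and threshold here are invented for illustration:

```python
# Scalar Kalman filter tracking a (nominally) constant measurement; a fault
# is declared when the innovation exceeds a fixed threshold. Noise values
# and the threshold are made-up illustrations, not the thesis's tuning.

def detect_faults(measurements, q=1e-4, r=0.04, threshold=0.5):
    x, p = 0.0, 1.0                            # state estimate and covariance
    faulty_steps = []
    for k, z in enumerate(measurements):
        p += q                                  # predict (constant-state model)
        innov = z - x                           # innovation (measurement residual)
        if k > 0 and abs(innov) > threshold:    # skip the startup transient
            faulty_steps.append(k)              # a real FDD scheme would also isolate
        s = p + r                               # innovation covariance
        gain = p / s
        x += gain * innov                       # update
        p *= (1.0 - gain)
    return faulty_steps

# Sensor reads ~1.0, then an additive bias fault of ~0.8 appears at step 6.
zs = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0, 1.8, 1.82, 1.79]
print(detect_faults(zs))  # [6, 7, 8]
```

Two-stage and IMM variants extend this by estimating the bias itself, respectively by running one filter per fault hypothesis and mixing them.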

    Efficient algorithms for risk-averse air-ground rendezvous missions

    Get PDF
    Demand for fast and inexpensive parcel deliveries in urban environments has risen considerably in recent years. We envision a framework that enables efficient last-mile delivery in urban environments by leveraging a network of ride-sharing vehicles, where Unmanned Aerial Systems (UASs) drop packages on said vehicles, which then cover the majority of the distance before final aerial delivery. By combining existing networks, we show that the range and efficiency of UAS-based delivery logistics are greatly increased. This approach presents many engineering challenges, including the safe rendezvous of both agents: the UAS and the human-operated ground vehicle. This dissertation presents mechanical and algorithmic tools that guarantee risk-optimal rendezvous between the two vehicles. Mechanically, we develop a novel aerial manipulator and controller that improve in-flight stability during the pickup and drop-off of packages. At a higher level, and at the core of this dissertation, we present planning algorithms that mitigate risks associated with human behavior at the longest time scales. First, we discuss the downfalls of traditional approaches. In aerial manipulation, we show that popular anthropomorphic designs are unsuitable for flying platforms, which we address with a combination of the lightweight design of a delta-type parallel manipulator and L1 adaptive control with feedforward. In planning algorithms, we present evidence of erratic driver behavior that can lead to catastrophic failures. Such a failure occurs when the UAS depletes its resource (battery, fuel) and has to crash-land at an unplanned location. This is particularly dangerous in urban environments, where population density is high and the probability of harming a person or property in the event of a failure is unacceptably high. Studies have shown that two types of erratic behavior are common: speed variation and route choice. 
Speed variation refers to a common disregard for speed limits combined with different levels of comfort per driver. Route choice is a conscious, unconscious, or purely random deviation from a prescribed route. Route-choice uncertainty is high dimensional and complex in both space and time. Dealing with these types of uncertainty is important to many fields, notably traffic flow modeling. The critical difference in our interpretation is that we frame them in a motion planning framework. As such, we assume each driver has an unknown stochastic model of their behavior, a model that we aim to approximate through different methods. We aim to guarantee safety by quantifying the motion planning risks associated with erratic human behavior. Only missions that plan on using all of the UAS's resources carry inherent risk; we postulate that, given a high assurance of success, any mission can be made to use more resources and be more efficient for the network by completing its objective faster. Risk management is addressed at three different scales. First, we focus on speed variation. We approach this problem with a combination of risk-averse Model Predictive Control (MPC) and Gaussian Processes, using risk as a measure of the probability of success centered around the estimated future driver position. Several risk measures are discussed, and CVaR is chosen as a robust measure for this problem. Second, we address local route choice: route uncertainty for a single driver in some region of space. The primary challenge is the loss of gradient for the MPC controller. We extend the previous approach with a cross-entropy stochastic optimization algorithm that separates gradient-based from gradient-free optimization problems within the planner, and we show that this approach is effective through a variety of numerical simulations. Lastly, we study the city-wide problem of estimating risk among several available drivers. 
We use real-world data combined with synthetic experiments and Deep Neural Networks (DNN) to produce an accurate estimator. The main challenges in this approach are threefold: DNN architecture, driver model, and data processing. We found that this learning problem suffers from vanishing gradients and numerous local minima, which we address with modern self-normalization techniques and mean-adjusted CVaR. We show the model's effectiveness in four scenarios of increasing complexity and propose ways of addressing its shortcomings
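The CVaR risk measure chosen above has a simple sample-based form: CVaR at level α is the mean of the worst (1 − α) fraction of sampled outcomes, which is why it penalizes the rare catastrophic events that a plain mean hides. A short sketch with illustrative cost samples:

```python
import numpy as np

# Sample-based Conditional Value-at-Risk: the mean of the worst (1 - alpha)
# tail of the cost distribution. Cost samples are illustrative only.

def cvar(samples, alpha=0.9):
    xs = np.sort(np.asarray(samples, dtype=float))
    var = np.quantile(xs, alpha)       # Value-at-Risk threshold
    return float(xs[xs >= var].mean()) # average of the tail beyond VaR

# Nine cheap outcomes and one rare, expensive one: the mean cost is 1.9,
# but CVaR at alpha = 0.9 is dominated by the tail event.
costs = [1.0] * 9 + [10.0]
print(cvar(costs, alpha=0.9))  # 10.0
```

A risk-averse planner minimizing CVaR would reject this plan even though its expected cost looks acceptable, which is the robustness property the dissertation relies on.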

    Dynamical systems : mechatronics and life sciences

    Get PDF
    The Proceedings of the 13th Conference „Dynamical Systems - Theory and Applications" contain 164 papers, and the Springer Proceedings contain the 60 best papers, by university teachers and students, researchers, and engineers from all over the world. The papers were chosen by the International Scientific Committee from 315 papers submitted to the conference. The reader thus obtains an overview of recent developments in dynamical systems and can study the most progressive tendencies in this field of science

    Quantum Algorithms for Solving Hard Constrained Optimization Problems

    Get PDF
    In this research, combinatorial optimization techniques for solving constraint problems have been examined, and a study of the quantum era and its market leaders, such as IBM, D-Wave, Google, Xanadu, AWS-Braket and Microsoft, has been carried out. We have learned about their communities, their platforms and the status of their research, and we have studied the postulates of quantum mechanics that underpin the most efficient quantum systems and algorithms. To find out whether Constraint Satisfaction Problem (CSP) instances can be solved more efficiently with quantum computing, a scenario was defined so that both classical and quantum computing would have a good point of reference. The proof of concept first focuses on the social workers' scheduling problem and later on batch preparation and order picking as a generalization of the Social Workers Problem (SWP). The social workers' scheduling problem is a combinatorial optimization problem that, at best, can be solved in exponential time; since the SWP is NP-hard, we propose approaches beyond classical computing for its resolution. Today, the focus on quantum computing is no longer only its enormous computing power but also the use of its imperfection in this Noisy Intermediate-Scale Quantum (NISQ) era to create a powerful machine learning device that uses the variational principle to solve optimization problems by reducing their complexity class. The thesis proposes a (quadratic) formulation to solve the social workers' scheduling problem efficiently using the Variational Quantum Eigensolver (VQE), the Quantum Approximate Optimization Algorithm (QAOA), the Minimal Eigen Optimizer and the ADMM optimizer. 
The quantum feasibility of the algorithm has been modelled in QUBO form with Docplex, simulated with Cirq and Or-Tools, and tested on IBMQ computers. After analyzing the results of this approach, a scenario was designed to solve the SWP as quantum case-based reasoning (qCBR), both quantumly and classically, and thus to contribute a quantum algorithm focused on artificial intelligence and machine learning. qCBR is a machine learning technique that solves new problems using experience, as humans do. Experience is represented as a case memory containing previously solved problems, and a synthesis technique adapts that experience to the new problem. In the definition of the SWP, if patients are replaced by batches of orders and social workers by mobile robots, the objective function and the constraints are generalized. To this end, a proof of concept and a new formulation, called qRobot, have been proposed to solve the picking and batching problems. The proof of concept was carried out on a Raspberry Pi 4, testing the ability to integrate quantum computing into mobile robotics on one of the most in-demand problems in this industrial sector: picking and batching. It was tested on different technologies, and the results were promising. Furthermore, in case of computational need, the robot parallelizes part of the operations in hybrid (quantum + classical) computing, accessing CPUs and QPUs distributed in a public or private cloud. We also developed a stable environment (ARM64) inside the robot (Raspberry) to run gradient operations and other quantum algorithms on IBMQ, Amazon Braket (D-Wave) and Pennylane, locally or remotely. To improve the execution time of variational algorithms in this NISQ era and the next, EVA has been proposed: a quantum Exponential Value Approximation algorithm. To date, the VQE has been the flagship of quantum computing. 
Today, on the market-leading quantum cloud computing platforms, the cost of experimenting with quantum circuits is proportional to the number of circuits run on those platforms: more circuits mean higher cost. One thing the VQE, the flagship of this low-qubit era, achieves is shallow depth, by dividing the Hamiltonian into a list of many small circuits (Pauli matrices). But this very fact makes simulating with the VQE very expensive in the cloud. For this reason, EVA was designed to calculate the expected value with a single circuit. Even having answered the hypothesis of this work with all the studies carried out, research can continue toward new quantum algorithms for combinatorial optimization problems
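As an illustration of the QUBO form mentioned above (minimize xᵀQx over binary x), a brute-force toy example shows the encoding; the 3-variable Q below is invented for illustration, not taken from the thesis, and real instances would be handed to VQE/QAOA or an annealer:

```python
import itertools

import numpy as np

# Brute-force QUBO solver: minimize x^T Q x over binary vectors x.
# Q encodes a made-up "pick exactly one of three options" penalty plus
# small per-option costs, purely to illustrate the form.

def solve_qubo(Q):
    n = Q.shape[0]
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = float(x @ Q @ x)           # QUBO energy of this assignment
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

# (x0 + x1 + x2 - 1)^2 expands (for binary x) to diagonal -1 and symmetric
# off-diagonal +1 entries, up to a constant; small costs 0.0/0.1/0.2 are
# added to the diagonal so option 0 is the cheapest valid choice.
Q = np.array([[-1.0,  1.0,  1.0],
              [ 1.0, -0.9,  1.0],
              [ 1.0,  1.0, -0.8]])
print(solve_qubo(Q))  # ((1, 0, 0), -1.0)
```

Variational solvers replace this exponential enumeration with a parameterized circuit whose measured energy is minimized classically, which is where the per-circuit cloud costs discussed above come in.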