2,398 research outputs found

    Reconfigurable Computing Systems for Robotics using a Component-Oriented Approach

    Robotic platforms are becoming more complex due to the wide range of modern applications, including multiple heterogeneous sensors and actuators. In order to comply with real-time and power-consumption constraints, these systems need to process a large amount of heterogeneous data from multiple sensors and take action (via actuators), which is a problem because these systems have limited memory storage, bandwidth, and computational power. Field Programmable Gate Arrays (FPGAs) are programmable logic devices that offer high-speed parallel processing. FPGAs are particularly well suited for applications that require real-time processing, high bandwidth, and low latency. One of the fundamental advantages of FPGAs is their flexibility in designing hardware tailored to specific needs, making them adaptable to a wide range of applications. They can be programmed to pre-process data close to the sensors, which reduces the amount of data that needs to be transferred to other computing resources and improves overall system efficiency. Additionally, the reprogrammability of FPGAs enables them to be repurposed for different applications, providing a cost-effective solution for systems that need to adapt quickly to changing demands. FPGAs' performance per watt is close to that of Application-Specific Integrated Circuits (ASICs), with the added advantage of being reprogrammable. Despite all the advantages of FPGAs (e.g., energy efficiency, computing capabilities), the robotics community has so far not fully adopted them as part of its systems, for several reasons. First, designing FPGA-based solutions requires hardware knowledge and longer development times, as their programmability is more challenging than that of Central Processing Units (CPUs) or Graphics Processing Units (GPUs). Second, porting a robotics application (or parts of it) from software to an accelerator requires adequate interfaces between software and FPGAs. Third, the robotics workflow is already complex on its own, combining several fields such as mechanics, electronics, and software. There have been partial contributions in the state of the art on FPGAs as part of robotics systems. However, a study of FPGAs for robotics systems as a whole is missing from the literature, and providing one is the primary goal of this dissertation. Three main objectives have been established to accomplish this: (1) define all components required for an FPGA-based system for robotics applications as a whole; (2) establish how all the defined components are related; (3) with the help of Model-Driven Engineering (MDE) techniques, generate these components, deploy them, and integrate them into existing solutions. The component-oriented approach proposed in this dissertation provides a suitable solution for designing and implementing FPGA-based designs for robotics applications. The modular architecture, the tool 'FPGA Interfaces for Robotics Middlewares' (FIRM), and the toolchain 'FPGA Architectures for Robotics' (FAR) provide a set of tools and a comprehensive design process that enable complex FPGA-based designs to be developed more straightforwardly and efficiently. The component-oriented approach contributes significantly to the state of the art in FPGA-based designs for robotics applications and helps promote their wider adoption and use by specialists with little FPGA knowledge.
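
    The abstract names the FIRM tool and FAR toolchain but gives no implementation detail, so the following Python sketch is purely illustrative of the kind of software/FPGA bridge such components provide: it polls a hypothetical memory-mapped FPGA register and republishes the value on a ROS 2 topic. The base address, register offset, and topic name are made-up placeholders, not FIRM/FAR interfaces.

        # Hypothetical sketch: bridge a memory-mapped FPGA register to a ROS 2 topic.
        # The base address, offset, and topic name are illustrative placeholders.
        import mmap
        import os
        import struct

        import rclpy
        from rclpy.node import Node
        from std_msgs.msg import Int32

        FPGA_BASE_ADDR = 0x43C00000   # placeholder AXI-Lite base address
        REG_OFFSET = 0x0              # placeholder register offset
        MAP_SIZE = 4096

        class FpgaRegisterPublisher(Node):
            def __init__(self):
                super().__init__('fpga_register_publisher')
                self.pub = self.create_publisher(Int32, 'fpga/sensor_value', 10)
                fd = os.open('/dev/mem', os.O_RDWR | os.O_SYNC)
                self.mem = mmap.mmap(fd, MAP_SIZE, offset=FPGA_BASE_ADDR)
                os.close(fd)
                self.timer = self.create_timer(0.01, self.poll_register)  # 100 Hz

            def poll_register(self):
                raw = self.mem[REG_OFFSET:REG_OFFSET + 4]
                value = struct.unpack('<i', raw)[0]   # little-endian signed 32-bit
                self.pub.publish(Int32(data=value))

        def main():
            rclpy.init()
            rclpy.spin(FpgaRegisterPublisher())

        if __name__ == '__main__':
            main()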

    Motion representation with spiking neural networks for grasping and manipulation

    Nature draws on millions of years of evolution to produce adaptive physical systems with efficient control strategies. In contrast to conventional robotics, humans do not simply plan a motion and execute it; instead, a combination of several control loops works together to move the arm and grasp an object with the hand. Research on humanoid and biologically inspired robots is producing complex kinematic structures and intricate actuator and sensor systems. These systems are difficult to control and to program, and the classical methods of robotics cannot always exploit their strengths optimally. Neuroscience research has made great progress in understanding the different brain regions and their corresponding functions. Nevertheless, most models are based on large-scale simulations that concentrate on reproducing connectivity and statistical neural activity. This leaves a gap in applying different paradigms to validate brain mechanisms and learning principles and to develop functional models for controlling robots. A promising paradigm is event-based computation with Spiking Neural Networks (SNNs). SNNs focus on the biological aspects of neurons and replicate their way of working. They are designed for spike-based communication and enable the exploration of the brain's mechanisms for learning by means of neural plasticity. Spike-based communication exploits highly parallel hardware optimizations on neuromorphic chips, which enable low energy consumption and fast local operations. This work presents several SNNs for performing motion control for manipulation and grasping tasks with a robot arm and an anthropomorphic hand. They are based on biologically inspired functional models of the human brain. A motor primitive is mapped onto the robot kinematics in a parametric way, using an activation parameter and a mapping function. The topology of the SNN mirrors the kinematic structure of the robot. The robot is controlled via the Joint Position Interface. To model complex motions and behaviors, the primitives are arranged in different layers of a hierarchy. This enables the combination and parametrization of primitives and the reuse of simple primitives for different motions. There are different activation mechanisms for the parameter that drives a motor primitive: voluntary, rhythmic, and reflex-like. In addition, there are different ways to learn new motor primitives either online or offline. A motion can either be modeled as a function or learned by imitating human execution. The SNNs can be integrated into other control systems or combined with other SNNs. Computing the inverse kinematics or validating configurations for planning is not required, since the motor primitive space contains only feasible motions and no invalid configurations.
For the evaluation, the following scenarios were considered: pointing at different targets, following a trajectory, performing rhythmic or repetitive motions, executing reflexes, and grasping simple objects. In addition, the arm and hand models are combined and extended to model multi-legged locomotion as a further use case of the motor-primitive control architecture. As applications for an arm (3 DoFs), the generation of pointing motions and perception-driven reaching of targets were modeled. To generate pointing motions, a base primitive pointing at the center of a plane was combined offline with four corrective primitives that produce a new trajectory. For perception-driven reaching of a target, three primitives are combined online using a target signal. As applications for a five-fingered hand (9 DoFs), individual finger activations and soft grasping with compliant control were modeled. The grasping motions are modeled with motor primitives in a hierarchy, in which the finger primitives represent the synergies between the joints and the hand primitives represent the different affordances for coordinating the fingers. For each finger, two reflexes are added: one to activate or stop the motion on contact and one to activate the compliant control. This approach offers great flexibility, since motor primitives can be reused, parametrized, and combined in different ways. New primitives can be defined or learned. An important aspect of this work is that, in contrast to deep learning and end-to-end learning methods, no extensive datasets are needed to learn new motions. By using motor primitives, the same modeling approach can be applied to different robots by redefining the mapping of the primitives onto the robot kinematics. The experiments show that motor primitives can simplify motor control for manipulation, grasping, and locomotion. SNNs for robotics applications are still a subject of debate. There is no state-of-the-art learning algorithm, there is no framework comparable to those for deep learning, and the parametrization of SNNs is an art. Nevertheless, robotics applications such as manipulation and grasping can provide benchmarks and realistic scenarios for validating neuroscientific models. Moreover, robotics can exploit the possibilities of event-based computation with SNNs and neuromorphic hardware. A physical replica of a biological system, implemented entirely with SNNs and evaluated on real robots, can provide new insights into how humans perform motor control and sensory processing and how these can be applied in robotics. Model-free motion controllers inspired by the mechanisms of the human brain can improve the programming of robots by making control more adaptive and flexible.
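
    A minimal numerical sketch of the parametrized motor-primitive idea described above (not the SNN implementation from the thesis): each primitive is a joint-space displacement, an activation parameter in [0, 1] scales it, and the active primitives are summed and sent to the joint position interface. All joint values and primitive names are illustrative.

        # Illustrative sketch of combining parametrized motor primitives in joint space.
        # Primitive shapes, names, and the rest pose are made-up examples.
        import numpy as np

        REST_POSE = np.zeros(3)  # 3-DoF arm, joint angles in radians

        # Each primitive is a joint-space displacement from the rest pose.
        PRIMITIVES = {
            'point_center': np.array([0.0, 0.6, -0.3]),
            'correct_up':   np.array([0.0, 0.2,  0.0]),
            'correct_left': np.array([0.3, 0.0,  0.0]),
        }

        def combine(activations: dict) -> np.ndarray:
            """Scale each primitive by its activation in [0, 1] and sum the result."""
            target = REST_POSE.copy()
            for name, alpha in activations.items():
                target += np.clip(alpha, 0.0, 1.0) * PRIMITIVES[name]
            return target

        # Voluntary activation: point at the plane center, slightly corrected upward.
        joint_targets = combine({'point_center': 1.0, 'correct_up': 0.4})
        print(joint_targets)  # would be sent to the robot's joint position interface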

    Internet of robotic things : converging sensing/actuating, hyperconnectivity, artificial intelligence and IoT Platforms

    The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), which are progressing and advancing through IoT technology. The IoT influence presents new development and deployment challenges in different areas such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing, naming in IoT), dynamic things discoverability and many others. The IoRT represents new convergence challenges that need to be addressed, on one side the programmability and the communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, their coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating the intelligent “devices”, collaborative robots (COBOTS), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance when new “cognitive devices” become active participants in IoT applications. This chapter aims to give an overview of the IoRT concept, technologies, architectures and applications and to provide comprehensive coverage of future challenges, developments and applications.

    A Fully Automated Robot for the Preparation of Fungal Samples for FTIR Spectroscopy Using Deep Learning

    Manual preparation of fungal samples for Fourier Transform Infrared (FTIR) spectroscopy involves sample washing, homogenization, concentration and spotting, which requires time-consuming and repetitive operations, making it unsuitable for screening studies. This paper presents the design and development of a fully automated robot for the preparation of fungal samples for FTIR spectroscopy. The whole system was constructed based on a previously developed ultrasonication robot module, by adding a newly designed centrifuge module and a newly developed liquid handling module. The liquid handling module consists of a high-accuracy electric pipette for spotting and a low-accuracy syringe pump for sample washing and concentration. A dual robotic arm system with a gripper connects all of the hardware components. Furthermore, a camera on the liquid handling module uses deep learning to identify the labware settings, including the number and positions of well plates and pipette tips. Machine vision on the ultrasonication robot module can detect the sample wells and return their locations to the liquid handling module, which makes the system hands-free for users. Tight integration of all the modules enables the robot to process up to two 96-well microtiter (MTP) plates of samples simultaneously. Performance evaluation shows that the deep-learning-based approach can detect four classes of labware with high average precision, from 0.93 to 1.0. In addition, tests of all procedures show that the robot is able to provide homogeneous sample spots for FTIR spectroscopy with high positional accuracy and spot coverage rate.
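
    As a rough illustration of the labware-detection step (the paper's actual network, classes, and training data are not reproduced here), the following sketch configures an off-the-shelf detector from torchvision for four labware classes plus background; the class count, image path, and choice of Faster R-CNN are assumptions.

        # Illustrative sketch: configuring an off-the-shelf detector for four labware
        # classes (plus background). Class count, paths, and the choice of Faster R-CNN
        # are assumptions, not the system described in the paper.
        import torch
        import torchvision
        from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
        from torchvision.transforms.functional import to_tensor
        from PIL import Image

        NUM_CLASSES = 5  # background + 4 labware classes (e.g. well plate, tip box, ...)

        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights='DEFAULT')
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
        # ... fine-tune on annotated labware images here before using the detector ...

        model.eval()
        image = to_tensor(Image.open('labware.jpg').convert('RGB'))
        with torch.no_grad():
            detections = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

        keep = detections['scores'] > 0.8
        print(detections['boxes'][keep], detections['labels'][keep])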

    Towards the simulation of cooperative perception applications by leveraging distributed sensing infrastructures

    With the rapid development of Automated Vehicles (AV), the boundaries of their functionalities are being pushed and new challenges are being imposed. In increasingly complex and dynamic environments, it is fundamental to rely on more powerful onboard sensors and, usually, AI. However, there are limitations to this approach. As AVs are increasingly being integrated into several industries, expectations regarding their cooperation ability are growing, and vehicle-centric approaches to sensing and reasoning become hard to integrate. The proposed approach is to extend perception to the environment, i.e. outside of the vehicle, by making it smarter via the deployment of wireless sensors and actuators. This will vastly improve the perception capabilities in dynamic and unpredictable scenarios, often at lower cost, relying mostly on low-cost sensors and embedded devices whose strength lies in large-scale deployment rather than centralized sensing abilities. Consequently, to support the development and deployment of such cooperative actions in a seamless way, we require co-simulation frameworks that can encompass multiple perspectives of control and communications for the AVs, the wireless sensors and actuators, and other actors in the environment. In this work, we rely on ROS2 and micro-ROS as the underlying technologies for integrating several simulation tools to construct a framework capable of supporting the development, testing and validation of such smart, cooperative environments. This endeavor was undertaken by building upon an existing simulation framework known as AuNa. We extended its capabilities to facilitate the simulation of cooperative scenarios by incorporating external sensors placed within the environment rather than just relying on vehicle-based sensors. Moreover, we devised a cooperative perception approach within this framework, showcasing its substantial potential and effectiveness. This will enable the demonstration of multiple cooperation scenarios and also ease the deployment phase by relying on the same software architecture.
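
    A minimal sketch of what a cooperative-perception node on top of ROS 2 could look like, assuming detections arrive as pose lists from both the vehicle and an infrastructure sensor; the topic names and the use of PoseArray as a detection container are placeholders rather than the message definitions used in the AuNa-based framework.

        # Illustrative sketch of a cooperative-perception fusion node in ROS 2 (rclpy).
        # Topic names and PoseArray as a detection container are placeholders.
        import rclpy
        from rclpy.node import Node
        from geometry_msgs.msg import PoseArray

        class CooperativeFusion(Node):
            def __init__(self):
                super().__init__('cooperative_fusion')
                self.latest = {}  # source name -> latest PoseArray
                self.create_subscription(PoseArray, 'vehicle/detections',
                                         lambda msg: self.update('vehicle', msg), 10)
                self.create_subscription(PoseArray, 'infrastructure/detections',
                                         lambda msg: self.update('infrastructure', msg), 10)
                self.pub = self.create_publisher(PoseArray, 'fused/detections', 10)

            def update(self, source, msg):
                self.latest[source] = msg
                fused = PoseArray()
                fused.header = msg.header  # assumes both sources share a common frame
                for m in self.latest.values():
                    fused.poses.extend(m.poses)  # naive union; no duplicate suppression
                self.pub.publish(fused)

        def main():
            rclpy.init()
            rclpy.spin(CooperativeFusion())

        if __name__ == '__main__':
            main()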

    The MRS UAV System: Pushing the Frontiers of Reproducible Research, Real-world Deployment, and Education with Autonomous Unmanned Aerial Vehicles

    We present a multirotor Unmanned Aerial Vehicle (UAV) control and estimation system for supporting replicable research through realistic simulations and real-world experiments. We propose a unique multi-frame localization paradigm for estimating the states of a UAV in various frames of reference using multiple sensors simultaneously. The system enables complex missions in GNSS and GNSS-denied environments, including outdoor-indoor transitions and the execution of redundant estimators for backing up unreliable localization sources. Two feedback control designs are presented: one for precise and aggressive maneuvers, and the other for stable and smooth flight with a noisy state estimate. The proposed control and estimation pipeline is constructed without using the Euler/Tait-Bryan angle representation of orientation in 3D. Instead, we rely on rotation matrices and a novel heading-based convention to represent the one free rotational degree of freedom in 3D of a standard multirotor helicopter. We provide an actively maintained and well-documented open-source implementation, including realistic simulation of the UAV, sensors, and localization systems. The proposed system is the product of years of applied research on multi-robot systems, aerial swarms, aerial manipulation, motion planning, and remote sensing. All our results have been supported by real-world system deployments that shaped the system into the form presented here. In addition, the system was utilized during the participation of our team from the CTU in Prague in the prestigious MBZIRC 2017 and 2020 robotics competitions, and also in the DARPA SubT challenge. Each time, our team was able to secure top places among the best competitors from all over the world. On each occasion, the challenges motivated the team to improve the system and to gain a great amount of high-quality experience within tight deadlines. Comment: 28 pages, 20 figures, submitted to Journal of Intelligent & Robotic Systems (JINT); for the provided open-source software see http://github.com/ctu-mr
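
    One common way to realize such a heading-based convention, which may differ in detail from the authors' exact definition, is to take the azimuth of the body x-axis projected onto the world xy-plane, computed directly from the rotation matrix without any Euler/Tait-Bryan decomposition:

        # Illustrative sketch: heading as the azimuth of the body x-axis projected onto
        # the world xy-plane, computed from a rotation matrix (no Euler angles).
        import numpy as np

        def heading_from_rotation(R: np.ndarray) -> float:
            """Return the heading angle in radians for a 3x3 world-from-body rotation."""
            body_x_in_world = R[:, 0]                      # first column = body x-axis
            return float(np.arctan2(body_x_in_world[1], body_x_in_world[0]))

        # Example: a rotation of 0.5 rad about the world z-axis yields heading 0.5.
        c, s = np.cos(0.5), np.sin(0.5)
        Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        print(heading_from_rotation(Rz))  # 0.5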

    Computing gripping points in 2D parallel surfaces via polygon clipping


    Annals of Scientific Society for Assembly, Handling and Industrial Robotics 2021

    This Open Access proceedings volume presents a good overview of the current research landscape of assembly, handling and industrial robotics. The objective of the MHI Colloquium is successful networking at both the academic and the management level. The colloquium thereby focuses on academic exchange at a high level in order to disseminate the obtained research results, identify synergy effects and trends, connect the actors in person and, in conclusion, strengthen the research field as well as the MHI community. In addition, there is the possibility to become acquainted with the organizing institute. The primary audience is formed by members of the Scientific Society for Assembly, Handling and Industrial Robotics (WGMHI).

    Autonomous robotic additive manufacturing through distributed model‐free deep reinforcement learning in computational design environments

    The objective of autonomous robotic additive manufacturing for construction at the architectural scale is currently being investigated in part within both the research communities of computational design and robotic fabrication (CDRF) and of deep reinforcement learning (DRL) in robotics. The presented study summarizes the relevant state of the art in both research areas and lays out how their respective accomplishments can be combined to achieve higher degrees of autonomy in robotic construction within the Architecture, Engineering and Construction (AEC) industry. A distributed control and communication infrastructure for agent training and task execution is presented that leverages the potential of combining tools, standards and algorithms of both fields. It is geared towards industrial CDRF applications. Using this framework, a robotic agent is trained to autonomously plan and build structures using two model-free DRL algorithms (TD3, SAC) in two case studies: robotic block stacking and sensor-adaptive 3D printing. The first case study serves to demonstrate the general applicability of computational design environments for DRL training and the comparative learning success of the utilized algorithms. The second case study highlights the benefit of our setup in terms of tool path planning, geometric state reconstruction, the incorporation of fabrication constraints and action evaluation as part of the training and execution process through parametric modeling routines. The study benefits from highly efficient geometry compression based on convolutional autoencoders (CAE) and signed distance fields (SDF), real-time physics simulation in CAD, industry-grade hardware control and distinct action complementation through geometric scripting. Most of the developed code is provided open source.
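
    To illustrate the signed-distance-field part of the geometry pipeline (a sketch only; the paper's CAE architecture and data are not reproduced), the following example samples the SDF of an axis-aligned block on a voxel grid, producing the kind of dense volume a convolutional autoencoder could compress. Grid resolution and block dimensions are placeholders.

        # Illustrative sketch: sampling the signed distance field (SDF) of an
        # axis-aligned box on a voxel grid. Grid size and box dimensions are placeholders.
        import numpy as np

        def box_sdf(points: np.ndarray, half_extents: np.ndarray) -> np.ndarray:
            """Signed distance from points (N, 3) to a box centered at the origin."""
            q = np.abs(points) - half_extents
            outside = np.linalg.norm(np.maximum(q, 0.0), axis=-1)
            inside = np.minimum(np.max(q, axis=-1), 0.0)
            return outside + inside

        # Sample a 32^3 grid over [-1, 1]^3 around a 0.6 x 0.4 x 0.2 block.
        axis = np.linspace(-1.0, 1.0, 32)
        grid = np.stack(np.meshgrid(axis, axis, axis, indexing='ij'), axis=-1).reshape(-1, 3)
        sdf = box_sdf(grid, np.array([0.3, 0.2, 0.1])).reshape(32, 32, 32)
        print(sdf.shape, sdf.min(), sdf.max())  # dense volume, e.g. input to a CAE encoder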