
    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI)-based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence. Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones but is considered out of reach for nano-drones with a size of a few cm². In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end DNN-based visual navigation. To achieve this goal, we developed a complete methodology for parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology, we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on board within a strict 6 fps real-time constraint, with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can span a wide performance range: at its peak performance corner, it achieves 18 fps while still consuming on average just 3.5% of the power envelope of the deployed nano-aircraft.
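    As a back-of-the-envelope illustration of the real-time budgeting such a deployment implies (a sketch, not the paper's methodology: the MAC count and platform throughput below are hypothetical placeholders, while the 6 fps target and 64 mW average come from the abstract), one can check whether a network's per-frame workload fits the platform's envelope:

    ```python
    # Hypothetical real-time budget check for on-board DNN inference.
    # MACS_PER_FRAME and PLATFORM_GOPS are illustrative placeholders,
    # not measurements from the paper.
    MACS_PER_FRAME = 41e6   # assumed multiply-accumulates per inference
    PLATFORM_GOPS = 0.6     # assumed sustained throughput in GOPS
    TARGET_FPS = 6          # real-time constraint quoted in the abstract
    AVG_POWER_W = 0.064     # 64 mW average processing power (abstract)

    ops_per_frame = 2 * MACS_PER_FRAME  # 1 MAC = 1 multiply + 1 add
    frame_time_s = ops_per_frame / (PLATFORM_GOPS * 1e9)
    achievable_fps = 1.0 / frame_time_s
    energy_per_frame_mj = AVG_POWER_W * frame_time_s * 1e3

    print(f"frame time: {frame_time_s * 1e3:.1f} ms -> {achievable_fps:.1f} fps")
    print(f"meets {TARGET_FPS} fps target: {achievable_fps >= TARGET_FPS}")
    print(f"energy per frame: {energy_per_frame_mj:.2f} mJ")
    ```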

    Robot control with biological cells

    At present, there exists a large gap in size, performance, adaptability, and robustness between natural and artificial information processors performing coherent perception-action tasks under real-time constraints. Even the simplest organisms have an enviable capability of coping with an unknown, dynamic environment; robots, in contrast, are still clumsy when confronted with such complexity. This paper presents a bio-hybrid architecture developed to explore an alternative approach to the control of autonomous robots. Circuits prepared from amoeboid plasmodia of the slime mold Physarum polycephalum are interfaced with an omnidirectional hexapod robot. Sensory signals from the macro-physical environment of the robot are transduced to the cellular scale and processed using the unique micro-physical features of intracellular information processing. Conversely, the response from the cellular computation is amplified to yield a macroscopic output action in the environment, mediated through the robot’s actuators.

    Semi-dense SLAM on an FPGA SoC

    Deploying advanced Simultaneous Localisation and Mapping (SLAM) algorithms in autonomous low-power robotics will enable emerging applications that require an accurate and information-rich reconstruction of the environment. This has not been achieved so far because accuracy and dense 3D reconstruction come at a high computational cost. This paper discusses custom hardware design on a novel platform for embedded SLAM, an FPGA-SoC, which combines an embedded CPU and programmable logic on the same chip. The use of programmable logic, tightly integrated with an efficient multicore embedded CPU, stands to provide an effective solution to this problem. In this work, an average framerate of more than 4 frames/second at a resolution of 320×240 has been achieved, with an estimated power of less than 1 W for the custom hardware. In comparison to the software-only version running on a dual-core ARM processor, an acceleration of 2× has been achieved for LSD-SLAM, without any compromise in the quality of the result.
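    For context, the kernel that dominates direct, semi-dense methods such as LSD-SLAM is the photometric residual evaluated over the set of high-gradient pixels; the following NumPy sketch shows that residual in its simplest form (an illustrative simplification with nearest-neighbour lookup and an identity warp, not the accelerator design described above):

    ```python
    import numpy as np

    def photometric_residuals(ref_img, cur_img, ref_pts, warped_pts):
        """Photometric error between reference pixels and their warped
        positions in the current frame. A real tracker warps points via
        depth and camera pose with sub-pixel interpolation; this sketch
        only shows the residual itself."""
        h, w = cur_img.shape
        rows, cols = warped_pts[:, 0], warped_pts[:, 1]
        # Keep only points that project inside the current image.
        valid = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
        residual = (cur_img[rows[valid], cols[valid]]
                    - ref_img[ref_pts[valid, 0], ref_pts[valid, 1]])
        return residual, valid

    # Toy usage: random 320x240 frames, high-gradient pixel selection,
    # and an identity "warp" standing in for the pose-based projection.
    ref = np.random.rand(240, 320).astype(np.float32)
    cur = np.random.rand(240, 320).astype(np.float32)
    pts = np.stack(np.nonzero(np.abs(np.gradient(ref)[0]) > 0.4), axis=1)
    res, valid = photometric_residuals(ref, cur, pts, pts)
    print("pixels tracked:", int(valid.sum()),
          "mean |residual|:", float(np.abs(res).mean()))
    ```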

    Toolflows for Mapping Convolutional Neural Networks on FPGAs: A Survey and Future Directions

    In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated into the existing deep learning ecosystem to provide a tunable balance between performance, power consumption, and programmability. In this paper, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, including the supported applications, architectural choices, design space exploration methods, and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at a comprehensive and in-depth evaluation of CNN-to-FPGA toolflows.
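    Many of the surveyed toolflows rely on analytical performance models to drive design space exploration; the toy model below captures the flavour of such estimates (the layer shapes and platform figures are hypothetical placeholders, not numbers from any surveyed tool):

    ```python
    # Toy analytical model in the spirit of CNN-to-FPGA design space
    # exploration; all platform and layer figures are hypothetical.
    def conv_latency_s(out_h, out_w, out_c, in_c, k, macs_per_cycle, clock_hz):
        """Latency of one convolution layer if compute-bound on a MAC array."""
        macs = out_h * out_w * out_c * in_c * k * k
        return macs / (macs_per_cycle * clock_hz)

    PLATFORM = {"macs_per_cycle": 512, "clock_hz": 200e6, "bytes_per_s": 4e9}

    layers = [  # (out_h, out_w, out_c, in_c, kernel, weight bytes)
        (112, 112, 64, 3, 7, 3 * 64 * 7 * 7 * 2),
        (56, 56, 128, 64, 3, 64 * 128 * 3 * 3 * 2),
    ]

    total_s = 0.0
    for out_h, out_w, out_c, in_c, k, wbytes in layers:
        t_compute = conv_latency_s(out_h, out_w, out_c, in_c, k,
                                   PLATFORM["macs_per_cycle"],
                                   PLATFORM["clock_hz"])
        t_memory = wbytes / PLATFORM["bytes_per_s"]  # weight streaming time
        bound = "compute" if t_compute >= t_memory else "memory"
        total_s += max(t_compute, t_memory)
        print(f"layer {out_h}x{out_w}x{out_c}: {bound}-bound, "
              f"{max(t_compute, t_memory) * 1e3:.2f} ms")
    print(f"estimated total: {total_s * 1e3:.2f} ms/frame")
    ```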

    A Perspective on Cephalopods Mimicry and Bioinspired Technologies toward Proprioceptive Autonomous Soft Robots

    Octopus skin is an amazing source of inspiration for bioinspired sensors, actuators, and control solutions in soft robotics. Soft organic materials, biomacromolecules, and protein ingredients in octopus skin, combined with a distributed intelligence, result in adaptive displays that control emerging optical behavior and 3D surface textures with rough geometries at remarkably high control speed (≈ms). Replicating deformable, compliant materials capable of translating mechanical perturbations into molecular and structural chromogenic outputs would be a landmark achievement in materials science and in the technological field. Soft robots are suitable platforms for soft multi-responsive materials, which can provide them with improved mechanical proprioception and correspondingly smarter behaviors. Indeed, a system endowed with “learning and recognition” functions and a constitutive “mechanical” and “material intelligence” can achieve improved morphological adaptation in varied environments, responding to external and internal stimuli. This review explores challenges and opportunities related to smart chromogenic responsive materials for adaptive displays, reconfigurable and programmable soft skin, proprioceptive sensing systems, and synthetic nervous control units for data processing, toward autonomous soft robots able to communicate and interact with users in open-world scenarios.

    A ROS-Based Open Tool for Controlling an Educational Mobile Robot

    Commercial educational robots provide an accessible entry point into the world of robotics. However, their programming is often limited to specific platforms, which can make it difficult to acquire the skills needed for industry and research. In this study, we introduce an open-access tool, developed in C++ with the Arduino IDE, that allows a commercial mobile robot to be controlled through the Robot Operating System (ROS) middleware. This lets programmers work in a powerful programming environment such as Python. The robot used is the CrowBot BOLT, an ESP32-based kit that supports wireless communication and includes various peripherals for application development. The ROS topics exposed for the mobile robot include robot velocities, RGB LEDs, a buzzer, a programmable button, and proximity, light, and line sensors. The proposal is assessed using two controllers, one for proximity and the other for tracking angular light, both developed in Visual Studio Code. The experimental results demonstrated the proper functioning of the tool. Additionally, the response time was evaluated, and optimal performance was found at a frequency of 10 Hz. In summary, this proposal offers an accessible option for students and developers seeking to gain robotics skills with ROS. The project’s repository is located at https://github.com/joseVarelaAldas/ROS-Crowbot.
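    As a hint of what driving such a robot through ROS looks like, here is a minimal rospy sketch that publishes velocity commands at the 10 Hz rate the study found optimal (the /cmd_vel topic name follows common ROS convention and is an assumption here; the linked repository defines the actual topics exposed by the CrowBot BOLT bridge):

    ```python
    #!/usr/bin/env python
    # Minimal rospy velocity publisher; /cmd_vel is an assumed topic name.
    import rospy
    from geometry_msgs.msg import Twist

    def drive():
        rospy.init_node("crowbot_demo")
        pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rate = rospy.Rate(10)  # 10 Hz, the rate reported to perform best
        cmd = Twist()
        cmd.linear.x = 0.1     # forward speed in m/s
        cmd.angular.z = 0.3    # turn rate in rad/s
        while not rospy.is_shutdown():
            pub.publish(cmd)
            rate.sleep()

    if __name__ == "__main__":
        try:
            drive()
        except rospy.ROSInterruptException:
            pass
    ```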