102 research outputs found

    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully-autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI)-based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence. Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones, but is considered out of reach for nano-drones with a form factor of a few cm². In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end DNN-based visual navigation. To achieve this goal, we developed a complete methodology for parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source Crazyflie 2.0 nano-quadrotor. As part of our general methodology, we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner it achieves 18 fps while still consuming on average just 3.5% of the power envelope of the deployed nano-aircraft. (15 pages, 13 figures, 5 tables, 2 listings; accepted for publication in the IEEE Internet of Things Journal.)
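As a back-of-the-envelope check of the figures above (an illustrative calculation, not taken from the paper): at the 6 fps real-time constraint, 64 mW of average processing power corresponds to roughly 10.7 mJ of processing energy per frame.

```python
# Back-of-the-envelope energy check for the quoted operating point.
# Assumption: 64 mW is the average processing power while sustaining 6 fps.
processing_power_mw = 64.0
fps = 6
# mW divided by frames/s gives mJ per frame.
energy_per_frame_mj = processing_power_mw / fps
print(f"{energy_per_frame_mj:.2f} mJ/frame")  # -> 10.67 mJ/frame
```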

    Integrated SiPh Flex-LIONS Module for All-to-All Optical Interconnects with Bandwidth Steering

    We experimentally demonstrate the first all-to-all optical interconnects with bandwidth steering using an integrated 8×8 SiPh Flex-LIONS module. Experimental results show a 5-dB worst-case crosstalk penalty and 25 Gb/s to 100 Gb/s bandwidth steering.

    Multi-FSR Silicon Photonic Flex-LIONS Module for Bandwidth-Reconfigurable All-to-All Optical Interconnects

    This article proposes and experimentally demonstrates the first bandwidth-reconfigurable all-to-all optical interconnects using a multi-Free-Spectral-Range (FSR) integrated 8 × 8 SiPh Flex-LIONS module. The multi-FSR operation utilizes the first FSR (FSR1) to steer the bandwidth between selected node pairs and the zeroth FSR (FSR0) to guarantee a minimum-diameter all-to-all topology among the interconnected nodes after reconfiguration. Successful Flex-LIONS design, fabrication, packaging, and system testing demonstrate error-free all-to-all interconnects for both FSR0 and FSR1 with a 5.3-dB power penalty induced by AWGR intra-band crosstalk under the worst-case polarization scenario. After reconfiguration in FSR1, the bandwidth between the selected pair of nodes is increased from 50 to 125 Gb/s while maintaining a 25 Gb/s/λ all-to-all interconnectivity in FSR0.
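The steering figures above imply a simple wavelength count, assuming each channel carries a fixed 25 Gb/s (an inference from the quoted rates, not stated explicitly in the abstract):

```python
# Wavelength-count arithmetic implied by the quoted rates, assuming a
# fixed 25 Gb/s per wavelength channel.
per_lambda_gbps = 25
before_gbps, after_gbps = 50, 125  # selected node pair, before/after steering
print(before_gbps // per_lambda_gbps, "->", after_gbps // per_lambda_gbps)
# -> "2 -> 5": wavelengths allocated to the selected pair
```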

    Fully Onboard AI-powered Human-Drone Pose Estimation on Ultra-low Power Autonomous Flying Nano-UAVs

    Many emerging applications of nano-sized unmanned aerial vehicles (UAVs), with a few cm² form-factor, revolve around safely interacting with humans in complex scenarios, for example, monitoring their activities or looking after people needing care. Such sophisticated autonomous functionality must be achieved while dealing with severe constraints in payload, battery, and power budget (~100 mW). In this work, we attack a complex task going from perception to control: to estimate and maintain the nano-UAV's relative 3D pose with respect to a person while they freely move in the environment, a task that, to the best of our knowledge, has never previously been targeted with fully onboard computation on a nano-sized UAV. Our approach is centered around a novel vision-based deep neural network (DNN), called PULP-Frontnet, designed for deployment on top of a parallel ultra-low-power (PULP) processor aboard a nano-UAV. We present a vertically integrated approach starting from the DNN model design, training, and dataset augmentation down to 8-bit quantization and deployment in-field. PULP-Frontnet can operate in real-time (up to 135 frame/s), consuming less than 87 mW for processing at peak throughput and down to 0.43 mJ/frame in the most energy-efficient operating point. Field experiments demonstrate closed-loop, top-notch autonomous navigation capability with a tiny 27-gram Crazyflie 2.1 nano-UAV. Compared against an ideal sensing setup, onboard pose inference yields excellent drone behavior in terms of median absolute errors, such as positional (onboard: 41 cm, ideal: 26 cm) and angular (onboard: 3.7°, ideal: 4.1°). We publicly release videos and the source code of our work.
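A quick sanity check on the quoted throughput and power figures (an illustrative calculation, not from the paper): peak power over peak frame rate gives roughly 0.64 mJ/frame, so the 0.43 mJ/frame figure necessarily refers to a different, more energy-efficient operating point.

```python
# Energy per frame at the peak-throughput corner, from the quoted figures.
peak_power_mw = 87.0   # upper bound on processing power at peak throughput
peak_fps = 135
print(f"{peak_power_mw / peak_fps:.2f} mJ/frame")  # -> 0.64 mJ/frame
# The 0.43 mJ/frame figure therefore comes from a lower voltage/frequency
# operating point, not the peak-throughput one.
```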

    CLICK-A: Optical Communication Experiments From a CubeSat Downlink Terminal

    The CubeSat Laser Infrared CrosslinK (CLICK) mission is a technology demonstration of low size, weight, and power (SWaP) CubeSat optical communication terminals for downlink and crosslinks. The mission is broken into two phases: CLICK-A, which consists of a downlink terminal hosted in a 3U CubeSat, and CLICK-B/C, which consists of a pair of crosslink terminals each hosted in their own 3U CubeSat. This work focuses on the CLICK-A 1.2U downlink terminal, whose goal was to establish a 10 Mbps link to a low-cost portable 28 cm optical ground station called PorTeL. The terminal communicates with M-ary pulse position modulation (PPM) at 1550 nm using a 200 mW Erbium-doped fiber amplifier (EDFA) with a 1.3 mrad FWHM beam divergence. CLICK-A ultimately serves as a risk reduction phase for the CLICK-B/C terminals, with many components first being demonstrated on CLICK-A. CLICK-A was launched to the International Space Station on July 15th, 2022 and was deployed by Nanoracks on September 6th, 2022 into a 51.6°-inclination, 414 km orbit. We present the results of experiments performed by the mission with the optical ground station located at MIT Wallace Astrophysical Observatory in Westford, MA. Successful acquisition of an Earth-to-space 5 mrad FWHM (5 W at 976 nm) pointing beacon was demonstrated by the terminal on the second experiment on November 2nd, 2022. First light on the optical ground station tracking camera was established on the third experiment on November 10th, 2022. The optical ground station showed sufficient open, coarse, and fine tracking performance to support links with the terminal, with a closed-loop RMS tracking error of 0.053 mrad. Results of three optical downlink experiments that produced beacon tracking results are discussed.
These experiments demonstrated that the internal microelectromechanical system (MEMS) fine steering mirror (FSM) corrected for an average blind spacecraft pointing error of 8.494 mrad and maintained an average RMS pointing error of 0.175 mrad after initial blind pointing error correction. With these results, the terminal demonstrated the ability to achieve sufficient fine pointing of the 1.3 mrad FWHM optical communication beam without pointing feedback from the terminal to improve the nominal spacecraft pointing. Spacecraft drag reduction maneuvers were used to extend mission life and to inform mission operations for the CLICK-B/C phase of the mission. Results from the spacecraft drag maneuvers are also presented.
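As background on the modulation named above, a minimal sketch of M-ary PPM encoding: each symbol of log2(M) bits selects one pulse slot out of M. The `ppm_encode` helper is hypothetical; slot timing, guard times, and framing used by CLICK-A are not reproduced here.

```python
# Minimal M-ary pulse position modulation (PPM) encoder sketch: a symbol of
# log2(M) bits is mapped to the index of the slot containing the pulse.
from math import log2

def ppm_encode(bits, M=16):
    """Map a bit string to a list of pulse-slot indices (one per symbol)."""
    k = int(log2(M))
    assert len(bits) % k == 0, "pad bits to a multiple of log2(M)"
    return [int(bits[i:i + k], 2) for i in range(0, len(bits), k)]

print(ppm_encode("01011100", M=16))  # two 4-bit symbols -> slots [5, 12]
```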

    KPI-related monitoring, analysis, and adaptation of business processes

    In today's companies, business processes are increasingly supported by IT systems. They can be implemented as service orchestrations, for example in WS-BPEL, running on Business Process Management (BPM) systems. A service orchestration implements a business process by orchestrating a set of services. These services can be arbitrary IT functionality, human tasks, or again service orchestrations. Often, these business processes are implemented as part of business-to-business collaborations spanning several participating organizations. Service choreographies focus on modeling how processes of different participants interact in such collaborations. An important aspect in BPM is performance management. Performance is measured in terms of Key Performance Indicators (KPIs), which reflect progress toward business goals. KPIs are based on domain-specific metrics, typically reflecting the time, cost, and quality dimensions. Dealing with KPIs involves several phases, namely monitoring, analysis, and adaptation. In a first step, KPIs have to be monitored in order to evaluate the current process performance. In case monitoring shows negative results, there is a need for analyzing and understanding the reasons why KPI targets are not reached. Finally, after identifying the influential factors of KPIs, the processes have to be adapted in order to improve the performance. The goal is to enable these phases in an automated manner. This thesis presents an approach for monitoring and analyzing KPIs and using them for process adaptation.
The concrete contributions of this thesis are: (i) an approach for monitoring of processes and their KPIs in service choreographies; (ii) a KPI dependency analysis approach based on classification learning which enables explaining how KPIs depend on a set of influential factors; (iii) a runtime adaptation approach which combines monitoring and KPI analysis in order to enable proactive adaptation of processes for improving the KPI performance; (iv) a prototypical implementation and experiment-based evaluation.
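The classification-learning idea behind contribution (ii) can be illustrated with a toy information-gain ranking, the core split criterion of decision-tree learners. The factor names and data below are invented for illustration; the thesis's actual analysis is considerably richer.

```python
# Toy sketch of KPI dependency analysis: treat "KPI target missed?" as a
# class label and rank candidate influential factors by information gain,
# as a decision-tree learner would when choosing the root split.
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, factor):
    """Information gain of splitting the instances on one boolean factor."""
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[factor], []).append(y)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# Each row: factor values for one process instance (hypothetical names).
rows = [
    {"slow_supplier": True,  "rush_order": False},
    {"slow_supplier": True,  "rush_order": True},
    {"slow_supplier": False, "rush_order": False},
    {"slow_supplier": False, "rush_order": True},
]
missed = [True, True, False, False]  # KPI target missed per instance

for f in ("slow_supplier", "rush_order"):
    print(f, round(info_gain(rows, missed, f), 3))
# slow_supplier fully explains the missed KPI (gain 1.0); rush_order
# carries no information about it (gain 0.0).
```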