
    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI)-based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence. Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones, but is considered out of reach for nano-drones with a size of a few cm². In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end, DNN-based visual navigation. To achieve this goal we developed a complete methodology for parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner it achieves 18 fps while still consuming on average just 3.5% of the power envelope of the deployed nano-aircraft. Comment: 15 pages, 13 figures, 5 tables, 2 listings; accepted for publication in the IEEE Internet of Things Journal (IEEE IOTJ).
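
    As a quick sanity check on the power figures quoted above, the following Python sketch converts the reported 64 mW average and the two frame-rate operating points into energy per DNN inference; holding average power constant across frame rates is a simplifying assumption made here for illustration only.

```python
# Energy-per-inference estimate from the figures quoted in the abstract:
# 64 mW average processing power, with 6 fps (real-time constraint) and
# 18 fps (peak) operating points. Treating average power as constant
# across frame rates is a simplifying assumption, not a claim of the paper.

AVG_POWER_W = 0.064  # 64 mW average processing power

for fps in (6, 18):
    energy_per_frame_mj = AVG_POWER_W / fps * 1e3  # mJ of compute per frame
    print(f"{fps:>2} fps -> {energy_per_frame_mj:4.1f} mJ per DNN inference")
```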

    Response time analysis of memory-bandwidth- regulated multiframe mixed-criticality systems

    The multiframe mixed-criticality task model eliminates the pessimism in many systems where the worst-case execution times (WCETs) of successive jobs vary greatly by design, in a known pattern. Existing feasibility analysis techniques for multiframe mixed-criticality tasks are shared-resource-oblivious, and hence unsafe for commercial-off-the-shelf (COTS) multicore platforms with a memory controller shared among all cores. Conversely, the feasibility analyses that account for the interference on shared resource(s) in COTS platforms do not leverage the WCET variation in multiframe tasks. This paper extends the state of the art by presenting an analysis that incorporates the memory access stall in memory-bandwidth-regulated multiframe mixed-criticality multicore systems. An exhaustive enumeration approach is proposed for this analysis to further enhance the schedulability success ratio. The running time of the exhaustive analysis is improved by proposing a pruning mechanism that eliminates the combinations of interfering job sequences that are subsumed by others. Experimental evaluation, using synthetic task sets, demonstrates up to 72% improvement in terms of schedulability success ratio, compared to frame-agnostic analysis. This work was partially supported by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDP/UIDB/04234/2020); by the Operational Competitiveness Programme and Internationalization (COMPETE 2020) under the PT2020 Partnership Agreement, through the European Regional Development Fund (ERDF), and by national funds through the FCT, within project PREFECT (POCI01-0145-FEDER-029119); and by FCT through the European Social Fund (ESF) and the Regional Operational Programme (ROP) Norte 2020, under grant 2020.08045.BD.
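
    The pruning idea lends itself to a compact illustration. The Python sketch below enumerates the rotations of one multiframe task's WCET pattern and drops any interfering sequence whose cumulative demand is dominated at every prefix by an already-kept candidate; the frame values and the prefix-wise dominance test are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative pruning of interfering job sequences: a sequence whose
# cumulative (prefix) demand never exceeds that of another candidate can
# never produce a larger interference term, so it can be discarded.

def prefix_sums(seq):
    total, out = 0, []
    for wcet in seq:
        total += wcet
        out.append(total)
    return out

def prune_dominated(candidates):
    """Keep only sequences not prefix-dominated by an already-kept one."""
    kept = []
    for cand in candidates:
        cand_ps = prefix_sums(cand)
        if not any(all(k >= c for k, c in zip(prefix_sums(other), cand_ps))
                   for other in kept):
            kept.append(cand)
    return kept

frames = (3, 7, 2)  # per-frame WCETs of one multiframe task (example values)
rotations = [frames[i:] + frames[:i] for i in range(len(frames))]
# Consider the heaviest-starting sequences first.
print(prune_dominated(sorted(rotations, reverse=True)))
# -> [(7, 2, 3), (3, 7, 2)]; (2, 3, 7) is dominated and never enumerated
```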

    An investigation of the feasibility of a spacecraft multifunctional structure using commercial electrochemical cells

    Multifunctional structures offer the potential for large savings in the mass and cost of spacecraft missions. By combining the functions of one or more subsystems with the primary structure, mass is reduced and internal volume is freed up for additional payload, or removed to reduce structural mass. Lithium batteries, increasingly preferred to other power storage solutions, can be employed to produce such structures by incorporating prismatic batteries into structural sandwich panels. Such “powerstructures” can reduce the mass and volume of the power storage subsystem. After reviewing the current work in the field of multifunctional structures, this thesis describes the objective of the research: to examine the usefulness and feasibility of a multifunctional structure based on commercial lithium cells and sandwich structures. The next section presents a study that quantifies the benefits of this technology, showing maximum savings of up to 2% of total mass, and 0.5-1% for common spacecraft designs. The following section describes experimental investigations into the mechanical suitability of commercial PLI cells for use in the multifunctional structure. Firstly, the effect of launch vibration was considered: 15 and 25 g RMS tests showed no measurable loss in electrical performance. Then, the structural attributes of the cells were measured using a dynamic shear test. The shear modulus of the cells was found to be rather lower than that of an aluminium honeycomb core material. Consideration is then given to the practical implications of a multifunctional structure. The feasibility of manufacturing is assessed through the construction of a trial panel, showing that the cells lose some capacity and suffer an increase in internal resistance in a high-temperature adhesive cure, and that a cold-bonding process may thus be preferable. The resultant panel was then vibrated on an electrodynamic shaker to both assess the resilience of the cells and test the reliability of finite element models. These finite element models are then used for a simple optimisation, showing that a well-designed powerstructure can have structural performance comparable to a conventional design. The final section weighs the benefits of using a multifunctional structure against the potential disadvantages in terms of cost, design time and flexibility, as well as assessing the validity of assumptions made in the work. The conclusion is that a multifunctional structure of this type, whilst not worthwhile for all mission types, could potentially increase the feasibility of short-term spacecraft missions using small satellites (of the order of 100 kg) with large energy storage requirements.

    3D-Stereoscopic Immersive Analytics Projects at Monash University and University of Konstanz

    Immersive Analytics investigates how novel interaction and display technologies may support analytical reasoning and decision making. The Immersive Analytics initiative of Monash University started in early 2014. Over the last few years, a number of projects have been developed or extended in this context to meet the requirements of semi- or fully-immersive stereoscopic environments. Different technologies are used for this purpose: CAVE2™ (a 330-degree large-scale visualization environment which can be used for educational and scientific group presentations, analyses and discussions), stereoscopic Powerwalls (miniCAVEs, representing a segment of the CAVE2 and used for development and communication), Fishtanks, and/or HMDs (such as Oculus, VIVE, and mobile HMD approaches). Apart from CAVE2™, all systems are or will be employed at both Monash University and the University of Konstanz, especially to investigate collaborative Immersive Analytics. In addition, sensiLab extends most of the previous approaches by involving all senses: 3D visualization is combined with multi-sensory feedback, 3D printing, and robotics in a scientific-artistic-creative environment.

    Analysis of Real-Time Capabilities of Dynamic Scheduled System

    This PhD thesis explores different real-time scheduling approaches for effectively running industrial real-time applications on multicore and manycore platforms. The proposed scheduling policy, named the Time-Triggered Constant Phase scheduler, handles periodic tasks by determining the time windows for each computation and communication in advance, using a dependent task model.
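
    As a rough illustration of the idea (not the thesis' actual algorithm), the sketch below builds a constant-phase table for a chain of dependent periodic tasks: each task receives a fixed offset inside a common period, placed after all of its predecessors have completed. All task names, WCETs, and the single shared period are assumptions made for the example.

```python
# Minimal time-triggered "constant phase" table for dependent periodic
# tasks: fixed offsets are computed offline so that every task starts only
# after its predecessors have finished, repeating every period.

from graphlib import TopologicalSorter  # Python 3.9+

PERIOD_US = 1000  # common period of all tasks (assumed)
wcet = {"sense": 100, "filter": 150, "plan": 200, "act": 50}
deps = {"filter": {"sense"}, "plan": {"filter"}, "act": {"plan"}}

phase, finish = {}, {}
for task in TopologicalSorter(deps).static_order():
    # earliest start: after the latest-finishing predecessor
    start = max((finish[p] for p in deps.get(task, ())), default=0)
    phase[task], finish[task] = start, start + wcet[task]
    assert finish[task] <= PERIOD_US, "schedule does not fit in the period"

for task in phase:
    print(f"{task:>6}: window [{phase[task]}, {finish[task]}) us, "
          f"repeated every {PERIOD_US} us")
```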

    Snapshot hyperspectral imaging : near-infrared image replicating imaging spectrometer and achromatisation of Wollaston prisms

    Conventional hyperspectral imaging (HSI) techniques are time-sequential and rely on temporal scanning to capture hyperspectral images. This temporal constraint can limit the application of HSI to static scenes and platforms, where transient and dynamic events are not expected during data capture. The Near-Infrared Image Replicating Imaging Spectrometer (N-IRIS) sensor described in this thesis enables snapshot HSI in the short-wave infrared (SWIR) without the requirement for scanning, and operates in polarised light without rejection. It operates in eight wavebands from 1.1 μm to 1.7 μm with a 2.0° diagonal field-of-view. N-IRIS produces spectral images directly, without the need for prior tomographic or image reconstruction. Additional benefits include compactness, robustness, static operation, lower processing overheads, and higher signal-to-noise ratio and optical throughput with respect to other snapshot HSI sensors generally. This thesis covers the IRIS design process from theoretical concepts to quantitative modelling, culminating in the N-IRIS prototype designed for SWIR imaging. This effort formed the logical next step in advancing from peer efforts, which focussed upon the visible wavelengths. After acceptance testing to verify optical parameters, empirical laboratory trials were carried out. This testing focussed on discriminating between common materials within a controlled environment as a proof-of-concept. Significance tests were used to provide an initial test of the N-IRIS capability to distinguish materials with respect to a conventional SWIR broadband sensor. Motivated by the design and assembly of a cost-effective visible IRIS, an innovative solution was developed for the problem of chromatic variation in the splitting angle (CVSA) of Wollaston prisms. CVSA introduces spectral blurring of images. Analytical theory is presented and illustrated with an example N-IRIS application in which a sixfold reduction in dispersion is achieved for wavelengths in the region of 400 nm to 1.7 μm, although the principle is applicable from ultraviolet to thermal-IR wavelengths. Experimental proof of concept is demonstrated, and the spectral smearing of an achromatised N-IRIS is shown to be reduced by an order of magnitude. These achromatised prisms can provide benefits to areas beyond hyperspectral imaging, such as microscopy, laser pulse control and spectrometry.
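
    For context, the splitting behaviour that CVSA refers to can be summarised with the standard small-angle Wollaston relations below; this is textbook optics stated as background, not the thesis' own derivation, and the two-material cascade is one common achromatisation arrangement.

```latex
% Small-angle split of a Wollaston prism with wedge angle \theta and
% birefringence \Delta n(\lambda) (textbook approximation, for context):
\begin{align*}
  \varepsilon(\lambda) &\approx 2\,\Delta n(\lambda)\tan\theta,
    & \Delta n(\lambda) &= n_e(\lambda) - n_o(\lambda).\\
% Cascading two prisms made from different birefringent materials:
  \varepsilon_{12}(\lambda) &\approx 2\big[\Delta n_1(\lambda)\tan\theta_1
    + \Delta n_2(\lambda)\tan\theta_2\big].\\
% Achromatisation: choose the wedge angles so the split angle is
% stationary at the design wavelength \lambda_0:
  \frac{d\Delta n_1}{d\lambda}\Big|_{\lambda_0}\tan\theta_1
    &= -\,\frac{d\Delta n_2}{d\lambda}\Big|_{\lambda_0}\tan\theta_2.
\end{align*}
```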

    Enabling Scalable and Sustainable Softwarized 5G Environments

    The fifth generation of telecommunication systems (5G) is foreseen to play a fundamental role in our socio-economic growth by supporting various and radically new vertical applications (such as Industry 4.0, eHealth, and Smart Cities/Electrical Grids, to name a few), as a one-size-fits-all technology that is enabled by emerging softwarization solutions – specifically, the Fog, Multi-access Edge Computing (MEC), Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) paradigms. Notwithstanding the notable potential of the aforementioned technologies, a number of open issues still need to be addressed to ensure their complete rollout. This thesis addresses the scalability and sustainability issues in softwarized 5G environments through contributions in three research axes: a) Infrastructure Modeling and Analytics, b) Network Slicing and Mobility Management, and c) Network/Services Management and Control. The main contributions include a model-based analytics approach for real-time workload profiling and estimation of network key performance indicators (KPIs) in NFV infrastructures (NFVIs), as well as an SDN-based multi-clustering approach to scale geo-distributed virtual tenant networks (VTNs) and to support seamless user/service mobility; building on these, solutions to the problems of resource consolidation, service migration, and load balancing are also developed in the context of 5G. All in all, this work entails the adoption of Stochastic Models, Mathematical Programming, Queueing Theory, Graph Theory and Team Theory principles, in the context of Green Networking, NFV and SDN.
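
    Since the abstract cites queueing theory among the tools used for KPI estimation, a minimal textbook example may help fix ideas: the Python sketch below models a virtual network function as an M/M/1 server and estimates mean packet latency as offered load grows. The service rate and load points are assumptions, and this is not the thesis' actual analytics model.

```python
# Textbook M/M/1 latency estimate for a single VNF instance: with Poisson
# arrivals (rate lambda) and exponential service (rate mu), the mean
# sojourn time is E[T] = 1 / (mu - lambda) for lambda < mu.

SERVICE_RATE = 10_000.0  # packets/s one VNF instance can process (assumed)

for arrival_rate in (2_000, 5_000, 8_000, 9_500):
    rho = arrival_rate / SERVICE_RATE                 # utilisation
    sojourn_ms = 1e3 / (SERVICE_RATE - arrival_rate)  # E[T] in milliseconds
    print(f"load {rho:4.0%}: mean latency {sojourn_ms:6.3f} ms")
```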

    Advanced manned space flight simulation and training: An investigation of simulation host computer system concepts

    The findings of a preliminary investigation by Southwest Research Institute (SwRI) into simulation host computer concepts are presented. The study is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized, mainframe-based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time-critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.

    The Customizable Virtual FPGA: Generation, System Integration and Configuration of Application-Specific Heterogeneous FPGA Architectures

    Over the past three decades, the development of Field Programmable Gate Arrays (FPGAs) has been strongly influenced by Moore's law, process technology (scaling) and commercial markets. State-of-the-art FPGAs are, on the one hand, moving closer to general-purpose devices; on the other hand, as FPGAs increasingly replace traditional domains of application-specific integrated circuits (ASICs), efficiency expectations are rising. With the end of Dennard scaling, efficiency gains can no longer rely on technology scaling alone. These facets, together with trends towards reconfigurable systems-on-chip (SoCs) and new low-power applications such as cyber-physical systems and the Internet of Things, demand better adaptation of the target FPGAs. Besides the trend towards mainstream use of FPGAs in everyday products and services, the recent moves to deploy FPGAs in data centers and cloud services in particular make it necessary to guarantee immediate portability of applications across current and future FPGA devices. In this context, hardware virtualization can be a seamless means of achieving platform independence and portability. Admittedly, the goals of customization and virtualization are in conflict, since customization aims at efficiency gains whereas virtualization adds area overhead. However, virtualization not only benefits from customization but also adds flexibility, since the architecture can be changed at any time. This property can be exploited for adaptive systems. Both the customization and the virtualization of FPGA architectures have so far barely been addressed in industry. Despite some existing academic work, these techniques can still be regarded as largely unexplored and are emerging research fields.

    The main goal of this work is the generation of FPGA architectures tailored for efficient adaptation to the application. In contrast to the usual approach with commercial FPGAs, where the FPGA architecture is considered as given and the application is mapped onto the available resources, this work follows a new paradigm in which the application or application class is fixed and the target architecture is tailored for efficient adaptation to it. This results in customized application-specific FPGAs. The three pillars of this work are the aspects of virtualization, customization and the framework. The central element is a largely parameterizable virtual FPGA architecture called the V-FPGA, whose primary target is that it can be mapped onto any commercial FPGA while applications execute on the virtual layer. This provides portability and migration down to the bitstream level, since the specification of the virtual layer persists while the physical platform can be exchanged. Furthermore, this technique is used to enable dynamic and partial reconfiguration on platforms that do not natively support it. Beyond virtualization, the V-FPGA architecture is also intended to be integrated as an embedded FPGA into an ASIC, offering efficient yet flexible system-on-chip solutions. Therefore, target technology mapping methods are addressed for both virtualization and physical implementation, and an example of the physical implementation in a 45 nm standard-cell approach is presented.

    The highly flexible V-FPGA architecture can be customized with more than 20 parameters, including LUT size, clustering, 3D stacking, routing structure and much more. The impact of these parameters on the area and performance of the architecture is investigated, and an extensive analysis of over 1400 benchmark runs shows a high parameter sensitivity, with deviations of up to ±95.9% in area and ±78.1% in performance, demonstrating the high importance of customization for efficiency. To adapt the parameters systematically to the needs of the application, a parametric design-space exploration method based on suitable area and timing models is proposed.

    One challenge of customized architectures is the design effort and the need for customized tools. Therefore, this work includes a framework for architecture generation, design-space exploration, application mapping and evaluation. Above all, the V-FPGA is designed in fully synthesizable, generic Very High Speed Integrated Circuit Hardware Description Language (VHDL) code, which is very flexible and eliminates the need for external code generators. System developers can benefit from various kinds of generic SoC architecture templates to reduce development time. All design steps necessary for application development and mapping onto the V-FPGA are supported by a design-automation tool flow that exploits a collection of existing commercial and academic tools, adapted through suitable models and complemented by a new tool called the V-FPGA-Explorer. This new tool acts not only as a back-end tool for application mapping onto the V-FPGA, but is also a graphical configuration and layout editor, a bitstream generator, an architecture-file generator for the place & route tools, a script generator and a testbench generator. A special feature is the support for just-in-time compilation with fast algorithms for in-system application mapping. The work concludes with several use cases from the fields of industrial process automation, medical imaging, adaptive systems and teaching in which the V-FPGA is deployed.
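
    To make the parametric design-space exploration concrete, the following Python sketch sweeps two of the architecture parameters mentioned above (LUT size K and cluster size N) and ranks configurations by an area-delay product. The cost functions are placeholder monomials standing in for the thesis' calibrated area and timing models.

```python
# Illustrative parametric design-space exploration: sweep (K, N) pairs and
# rank them by area * delay. The models below are placeholders chosen only
# to show the exploration loop, not the thesis' actual area/timing models.

from itertools import product

def area(k, n):
    # placeholder: LUT area grows ~2^K per LUT, plus intra-cluster routing
    return n * 2 ** k + 12 * n * k

def delay(k, n):
    # placeholder: larger LUTs reduce logic depth; larger clusters add
    # local interconnect delay
    return 1.0 / k + 0.02 * n

configs = sorted(
    ((k, n, area(k, n) * delay(k, n))
     for k, n in product(range(3, 8), (4, 8, 10))),
    key=lambda cfg: cfg[2],
)
for k, n, cost in configs[:3]:  # report the three cheapest configurations
    print(f"K={k} N={n:2d}: area-delay cost {cost:7.1f}")
```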