
    On-Device Deep Learning Inference for System-on-Chip (SoC) Architectures

    As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems becomes increasingly critical. This is especially true for deep learning solutions, whose large models pose significant challenges for resource-constrained target architectures at the “edge.” The realization of machine learning and deep learning is being driven by the availability of specialized hardware, such as system-on-chip solutions, which alleviates some of these constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general-purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove critical as these technologies move to the edge. To address some of these challenges, we present a resource management framework that provides a dynamic, on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. Such mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and then verified that the most time-critical applications, such as the control tasks, maintained low-latency deterministic behavior even during off-nominal conditions. The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image-processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate the integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely used, industry-standard real-time operating systems.
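    The abstract does not specify which schedulability test the framework applies, but the classic workflow it describes (generate random periodic task sets, then check them against an analytical bound) can be sketched as follows. This is an illustrative example, not the authors' implementation: it uses the Liu–Layland rate-monotonic utilization bound as the sufficient test and the UUniFast method to draw random task utilizations; the function names are hypothetical.

```python
import random

def rm_utilization_bound(n):
    """Liu & Layland least upper bound for n tasks under rate-monotonic scheduling."""
    return n * (2 ** (1.0 / n) - 1)

def utilization(task_set):
    """Total CPU utilization of a set of (execution_time, period) tasks."""
    return sum(c / t for c, t in task_set)

def is_rm_schedulable(task_set):
    """Sufficient (not necessary) test: U <= n * (2^(1/n) - 1)."""
    return utilization(task_set) <= rm_utilization_bound(len(task_set))

def random_task_set(n, target_u, rng):
    """Generate n periodic tasks whose utilizations sum to target_u (UUniFast)."""
    sum_u = target_u
    utils = []
    for i in range(1, n):
        next_sum = sum_u * rng.random() ** (1.0 / (n - i))
        utils.append(sum_u - next_sum)
        sum_u = next_sum
    utils.append(sum_u)
    tasks = []
    for u in utils:
        period = rng.uniform(10.0, 1000.0)   # period in ms, chosen arbitrarily
        tasks.append((u * period, period))   # execution time derived from utilization
    return tasks

# Evaluate a large batch of randomly generated task sets, as the abstract describes.
rng = random.Random(42)
sets = [random_task_set(5, 0.6, rng) for _ in range(1000)]
accepted = sum(is_rm_schedulable(s) for s in sets)
```

Because the test is only sufficient, task sets it rejects may still be schedulable; an exact response-time analysis would be the next step for the control tasks whose deadlines must hold even in off-nominal conditions.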

    CHARM-Card: Hardware Based Cluster Control And Management System

    The selection and analysis of event data from the heavy-ion experiment ALICE at CERN are performed by so-called trigger levels. The High Level Trigger (HLT) is the experiment's final trigger level. It consists of a computing farm of currently more than 120 computers, which is to be expanded to 300 machines. Manually installing, configuring, and maintaining a computing farm of this size is, however, very laborious and time-consuming. This thesis describes the implementation and operation of an autonomous control unit that was installed in every computer of the HLT cluster. The control unit's main tasks are remote control of the nodes and their automatic installation, monitoring, and maintenance. A further goal that was achieved is the universal applicability of the control unit: because of the heterogeneous cluster structure, its operation could not impose any restrictions on the computer model or operating system of the cluster nodes. This makes it possible to use inexpensive COTS (commercial-off-the-shelf) computers as nodes without giving up the remote-management functions found in expensive server machines. The control unit is already in use and enables remote maintenance of all computers in the HLT cluster. Furthermore, the entire HLT computing farm was automatically installed, tested, and configured with the help of the control unit.

    An open framework for highly concurrent hardware-in-the-loop simulation

    Hardware-in-the-loop (HIL) simulation is becoming a significant tool in prototyping complex, highly available systems. The HIL approach allows an engineer to build a physical system incrementally by enabling real components of the system to seamlessly interface with simulated components. It also permits testing of hardware prototypes of components that would be extremely costly to test in the deployed environment. Key issues are the ability to wrap the systems of equations (such as Partial Differential Equations) describing the deployed environment into real-time software models, provide low synchronization overhead between the hardware and software, and reduce reliance on proprietary platforms. This thesis introduces an open source HIL simulation framework that can be ported to any standard Unix-like system on any shared-memory multiprocessor computer, requires minimal operating system scheduler controls, provides a soft real-time guarantee for any constituent simulation that does likewise, enables an asynchronous user interface, and allows for an arbitrary number of secondary control components --Abstract, page iii
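    The soft real-time guarantee described above rests on a standard pattern: each constituent simulation steps at a fixed period against absolute release times, and overruns are recorded rather than treated as fatal. The sketch below is a minimal illustration of that pattern, not code from the thesis; the `PeriodicExecutor` class and the toy Euler-integration model are hypothetical.

```python
import time

class PeriodicExecutor:
    """Run a simulation step at a fixed period, tracking deadline misses.

    A minimal sketch of the soft real-time loop a HIL framework needs:
    each constituent simulation steps once per period, and overruns are
    counted rather than aborted on, which is the 'soft' part of the
    guarantee.
    """

    def __init__(self, period_s):
        self.period_s = period_s
        self.misses = 0

    def run(self, step, n_steps):
        next_release = time.monotonic()
        for _ in range(n_steps):
            step()
            next_release += self.period_s
            slack = next_release - time.monotonic()
            if slack > 0:
                time.sleep(slack)                    # wait for the next release point
            else:
                self.misses += 1                     # deadline overrun: record, don't abort
                next_release = time.monotonic()      # resynchronize to now

# Toy "simulated component": integrate dx/dt = -x with explicit Euler steps.
state = {"x": 1.0}
def step_model():
    dt = 0.001
    state["x"] += -state["x"] * dt

executor = PeriodicExecutor(period_s=0.001)   # 1 kHz simulation rate
executor.run(step_model, 50)
```

In a real HIL deployment the release points would be driven by the hardware interface clock, and the miss counter would feed whatever monitoring decides when a constituent simulation has fallen behind.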

    VizLab: The Design and Implementation of An Immersive Virtual Environment System Using Game Engine Technology and Open Source Software

    Virtual Reality (VR) is a term used to describe computer-simulated environments that can immerse users in a real or imagined world. Immersive systems are an essential component when experiencing virtual environments. Developing VR applications is time-consuming and resource-intensive: the separate components require integration, and the challenges of working with public-domain open source software make development complex. The VizLab Virtual Reality System was created to meet these challenges and provide an integrated suite of tools for VR system development. VizLab supports the development of VR applications by using game engine and CAVE system technology. The system consists of software modules that provide rendering, texturing, collision, physics, window/viewport management, cluster synchronization, input management, multi-processing, stereoscopic 3D, and networking. VizLab combines the main functional aspects of a game engine and a CAVE system for an improved approach to developing VR applications, virtual environments, and immersive environments.