
    Building real-time embedded applications on QduinoMC: a web-connected 3D printer case study

    Single Board Computers (SBCs) are now emerging with multiple cores, ADCs, GPIOs, PWM channels, integrated graphics, and several serial bus interfaces. The low power consumption, small form factor, and I/O interface capabilities of SBCs make them ideal for embedded and real-time applications involving sensors and actuators. However, most SBCs run non-real-time operating systems based on Linux and Windows, and do not provide a user-friendly API for application development. This paper presents QduinoMC, a multicore extension to the popular Arduino programming environment, which runs on the Quest real-time operating system. QduinoMC is an extension of our earlier single-core, real-time, multithreaded Qduino API. We show the utility of QduinoMC by applying it to a specific application: a web-connected 3D printer. This differs from existing 3D printers, which run relatively simple firmware and lack operating system support to spool multiple jobs or interoperate with other devices (e.g., in a print farm). We show how QduinoMC empowers devices with the capabilities to run new services without impacting their timing guarantees. While it is possible to modify existing operating systems to provide suitable timing guarantees, the effort to do so is cumbersome and does not provide the ease of programming afforded by QduinoMC. http://www.cs.bu.edu/fac/richwest/papers/rtas_2017.pdf (Accepted manuscript)
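
    As a rough illustration of the programming model this abstract describes (periodic device-control loops coexisting with lower-rate services, each with its own timing), here is a minimal sketch in portable C++. It is not the QduinoMC API; the task names, periods, and structure are assumptions for illustration only.

```cpp
// Minimal sketch (NOT the QduinoMC API): two periodic loops, e.g. a motion
// control task and a job-spooler service, each with its own period.
// Task names and periods are illustrative assumptions only.
#include <chrono>
#include <cstdio>
#include <thread>

using clk = std::chrono::steady_clock;

// Run `body(iteration)` every `period`, using absolute deadlines to avoid drift.
template <typename F>
void periodic_loop(const char* name, std::chrono::milliseconds period,
                   int iterations, F body) {
    auto next = clk::now();
    for (int i = 0; i < iterations; ++i) {
        body(i);                              // the task's per-period work
        next += period;                       // absolute release time of next job
        std::this_thread::sleep_until(next);  // wait for the next period
    }
    std::printf("%s finished\n", name);
}

int main() {
    // Hypothetical high-rate control loop (e.g. issuing stepper pulses).
    std::thread control([] {
        periodic_loop("control", std::chrono::milliseconds(2), 500,
                      [](int) { /* issue one motion-control step here */ });
    });
    // Hypothetical lower-rate service loop (e.g. polling a print-job spooler).
    std::thread service([] {
        periodic_loop("spooler", std::chrono::milliseconds(100), 10,
                      [](int i) { std::printf("spooler tick %d\n", i); });
    });
    control.join();
    service.join();
    return 0;
}
```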

    The problems you're having may not be the problems you think you're having: results from a latency study of windows NT

    This paper is intended to catalyze discussions on two intertwined systems topics. First, it presents early results from a latency study of Windows NT that identifies some specific causes of long thread scheduling latencies, many of which delay the dispatching of runnable threads for tens of milliseconds. Reasons for these delays, including technical, methodological, and economic ones, are presented, and possible solutions are discussed. Second, and equally importantly, it is intended to serve as a cautionary tale against believing one's own intuition about the causes of poor system performance. We went into this study believing we understood a number of the causes of these delays, with our beliefs informed more by conventional wisdom and hunches than by data. In nearly all cases the reasons we discovered via instrumentation and measurement surprised us. In fact, some directly contradicted "facts" we thought we "knew".
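
    To make concrete what a scheduling-latency measurement of this kind looks like, the following is a small, portable C++ sketch that records how late a thread wakes up relative to its requested wake time. It is a generic illustration of the class of measurement, not the instrumentation used in the paper, and the period and sample count are arbitrary choices.

```cpp
// Generic wakeup-latency probe (not the paper's NT instrumentation):
// request a sleep until a target time, then record how much later than
// requested the thread actually resumed. Long tails hint at scheduling delays.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    using namespace std::chrono;
    const auto period = milliseconds(10);  // arbitrary request interval
    const int samples = 200;
    std::vector<double> late_ms;
    late_ms.reserve(samples);

    for (int i = 0; i < samples; ++i) {
        const auto requested = steady_clock::now() + period;
        std::this_thread::sleep_until(requested);
        const auto actual = steady_clock::now();
        late_ms.push_back(duration<double, std::milli>(actual - requested).count());
    }

    const double worst = *std::max_element(late_ms.begin(), late_ms.end());
    double sum = 0;
    for (double v : late_ms) sum += v;
    std::printf("avg lateness: %.3f ms, worst: %.3f ms\n", sum / samples, worst);
    return 0;
}
```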

    A fine-grain time-sharing Time Warp system

    Although Parallel Discrete Event Simulation (PDES) platforms relying on the Time Warp (optimistic) synchronization protocol already allow for exploiting parallelism, several techniques have been proposed to further improve performance. Among them we can mention optimized approaches for state restore, as well as techniques for load balancing or (dynamically) controlling the degree of speculation, the latter being specifically targeted at reducing the incidence of causality errors that lead to wasted computation. However, in state-of-the-art Time Warp systems, event processing is not preemptable, which may prevent the system from promptly reacting to the injection of higher-priority (i.e., lower-timestamp) events. Delaying the processing of these events may, in turn, give rise to a higher incidence of incorrect speculation. In this article we present the design and realization of a fine-grain time-sharing Time Warp system, to be run on multi-core Linux machines, which makes systematic use of event preemption in order to dynamically reassign the CPU to higher-priority events/tasks. Our proposal is based on a truly dual-mode execution, application versus platform, which includes timer-interrupt based support for bringing control back to platform mode for possible CPU reassignment at very fine-grained periods. The latter facility is offered by an ad-hoc timer-interrupt management module for Linux, which we release, together with the overall time-sharing support, within the open source ROOT-Sim platform. An experimental assessment based on the classical PHOLD benchmark and two real-world models is presented, which shows how our proposal effectively reduces the incidence of causality errors, as compared to traditional Time Warp, especially when running with higher degrees of parallelism.
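
    The C++ sketch below is a simplified user-space analog (using POSIX setitimer/SIGALRM on Linux) of the timer-driven control transfer described above: a periodic timer sets a flag, and the event loop uses it to decide whether to switch to a pending lower-timestamp event. It illustrates the idea only and is not ROOT-Sim's platform-mode code; the event contents and timer period are invented for the example.

```cpp
// User-space analog of timer-driven preemption checks (not ROOT-Sim code):
// a periodic SIGALRM sets a flag; event processing is split into small chunks
// so control can return to the "platform" to pick a lower-timestamp event
// if one has arrived in the meantime.
#include <cstdio>
#include <functional>
#include <queue>
#include <signal.h>
#include <sys/time.h>
#include <vector>

static volatile sig_atomic_t preempt_requested = 0;

static void on_timer(int) { preempt_requested = 1; }

struct Event {
    double timestamp;
    int work_chunks;  // simulated processing split into preemptable chunks
    bool operator>(const Event& o) const { return timestamp > o.timestamp; }
};

int main() {
    // Install a 1 ms periodic timer that raises SIGALRM (the "platform tick").
    struct sigaction sa = {};
    sa.sa_handler = on_timer;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, nullptr);
    struct itimerval tv = {};
    tv.it_interval.tv_usec = 1000;
    tv.it_value.tv_usec = 1000;
    setitimer(ITIMER_REAL, &tv, nullptr);

    // Min-heap of pending events ordered by timestamp.
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> pending;
    pending.push({10.0, 5});
    pending.push({3.0, 3});

    while (!pending.empty()) {
        Event current = pending.top();
        pending.pop();
        for (int chunk = 0; chunk < current.work_chunks; ++chunk) {
            for (volatile int spin = 0; spin < 200000; ++spin) {}  // simulated work
            if (preempt_requested) {
                preempt_requested = 0;
                // If a lower-timestamp event is now pending, defer the current
                // event (re-queue its remaining work) and switch to it.
                if (!pending.empty() && pending.top().timestamp < current.timestamp) {
                    current.work_chunks -= chunk;
                    pending.push(current);
                    break;
                }
            }
        }
        std::printf("finished or deferred event at t=%.1f\n", current.timestamp);
    }
    return 0;
}
```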

    Quest-V: A Virtualized Multikernel for High-Confidence Systems

    This paper outlines the design of Quest-V, which is implemented as a collection of separate kernels operating together as a distributed system on a chip. Quest-V uses virtualization techniques to isolate kernels and prevent local faults from affecting remote kernels. This leads to a high-confidence multikernel approach, where failures of system subcomponents do not render the entire system inoperable. A virtual machine monitor for each kernel keeps track of shadow page table mappings that control immutable memory access capabilities. This ensures a level of security and fault tolerance in situations where a service in one kernel fails, or is corrupted by a malicious attack. Communication is supported between kernels using shared memory regions for message passing. Similarly, device driver data structures are shareable between kernels to avoid the need for complex I/O virtualization, or communication with a dedicated kernel responsible for I/O. In Quest-V, device interrupts are delivered directly to a kernel, rather than via a monitor that determines the destination. Apart from bootstrapping each kernel, handling faults, and managing shadow page tables, the monitors are not needed. This differs from conventional virtual machine systems in which a central monitor, or hypervisor, is responsible for scheduling and management of host resources amongst a set of guest kernels. In this paper we show how Quest-V can implement novel fault isolation and recovery techniques that are not possible with conventional systems. We also show how the costs of using virtualization for isolation of system services do not add undue overhead to the overall system performance.
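
    As a rough user-space analog of message passing over a shared memory region, the sketch below implements a tiny single-producer/single-consumer ring buffer in C++. In a Quest-V-like system the buffer would live in a physical memory region mapped into both sandbox kernels; here two threads in one process share it purely for illustration, and the channel layout is an assumption, not Quest-V's actual communication format.

```cpp
// Illustrative single-producer/single-consumer ring buffer, standing in for a
// shared-memory message channel between kernels. Not Quest-V code: here the
// "shared region" is just memory shared by two threads in one process.
#include <array>
#include <atomic>
#include <cstdio>
#include <optional>
#include <thread>

template <typename T, size_t N>
class SpscChannel {
public:
    bool send(const T& msg) {
        const size_t head = head_.load(std::memory_order_relaxed);
        const size_t next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire)) return false;  // full
        buf_[head] = msg;
        head_.store(next, std::memory_order_release);  // publish the message
        return true;
    }
    std::optional<T> recv() {
        const size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire)) return std::nullopt;  // empty
        T msg = buf_[tail];
        tail_.store((tail + 1) % N, std::memory_order_release);  // free the slot
        return msg;
    }
private:
    std::array<T, N> buf_{};
    std::atomic<size_t> head_{0}, tail_{0};
};

int main() {
    SpscChannel<int, 16> channel;
    std::thread producer([&] {
        for (int i = 0; i < 8; ++i)
            while (!channel.send(i)) std::this_thread::yield();
    });
    std::thread consumer([&] {
        for (int received = 0; received < 8;) {
            if (auto msg = channel.recv()) {
                std::printf("got message %d\n", *msg);
                ++received;
            } else {
                std::this_thread::yield();
            }
        }
    });
    producer.join();
    consumer.join();
    return 0;
}
```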

    Advanced information processing system: Local system services

    The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers, fault- and damage-tolerant networks (both computer and input/output), and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design, and detailed specifications for all the local system services are documented.
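
    To give a flavor of the task-scheduling service in an operating-system layer such as the one listed above, here is a deliberately generic C++ sketch of a priority-based ready list and dispatch step. The task structure, field names, and example tasks are illustrative assumptions only, not the AIPS design.

```cpp
// Generic illustration of a priority-based task dispatcher (not AIPS code):
// a list of task control blocks, with the highest-priority ready task
// selected at each dispatch point.
#include <cstdio>
#include <string>
#include <vector>

struct TaskControlBlock {
    std::string name;
    int priority;   // higher value = more urgent (illustrative convention)
    bool ready;
};

// Pick the highest-priority ready task, or nullptr if none is ready.
TaskControlBlock* dispatch(std::vector<TaskControlBlock>& tasks) {
    TaskControlBlock* chosen = nullptr;
    for (auto& t : tasks) {
        if (t.ready && (chosen == nullptr || t.priority > chosen->priority))
            chosen = &t;
    }
    return chosen;
}

int main() {
    // Hypothetical tasks loosely named after services mentioned in the abstract.
    std::vector<TaskControlBlock> tasks = {
        {"io_poll", 3, true},
        {"redundancy_mgmt", 7, true},
        {"status_report", 1, false},
    };
    if (TaskControlBlock* next = dispatch(tasks))
        std::printf("dispatching %s (priority %d)\n",
                    next->name.c_str(), next->priority);
    return 0;
}
```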

    Expert system decision support for low-cost launch vehicle operations

    Progress in assessing the feasibility, benefits, and risks associated with AI expert systems applied to low-cost expendable launch vehicle systems is described. Part one identified potential application areas in vehicle operations and on-board functions, assessed measures of cost benefit, and identified key technologies to aid in the implementation of decision support systems in this environment. Part two of the program began the development of prototypes to demonstrate real-time vehicle checkout with controller and diagnostic/analysis intelligent systems, and to gather true measures of cost savings versus conventional software, verification and validation requirements, and maintainability improvement. The main objective of the expert advanced development projects was to provide a robust intelligent system for control/analysis that must be performed within a specified real-time window in order to meet the demands of the given application. The efforts to develop the two prototypes are described. Prime emphasis was on a controller expert system to show real-time performance in a cryogenic propellant loading application and safety validation, implementing this system experimentally using commercial off-the-shelf software tools and object-oriented programming techniques. This smart ground support equipment prototype is written in C with embedded expert system rules written in CLIPS. The ORACLE relational database provides non-real-time data support. The second demonstration develops the vehicle/ground intelligent automation concept from phase one to show cooperation between multiple expert systems. This automated test conductor (ATC) prototype utilizes a knowledge-bus approach for intelligent information processing, using virtual sensors and blackboards to solve complex problems. It incorporates distributed processing of real-time data and object-oriented techniques for command, configuration control, and auto-code generation.
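
    As a simplified illustration of embedding rule-based decision logic in a host program (the prototypes embedded CLIPS rules in a C host), the sketch below implements a tiny forward-chaining loop in C++. The facts and rules are hypothetical examples loosely themed on propellant loading and are not taken from the paper.

```cpp
// Tiny forward-chaining rule engine (illustration only; the actual prototypes
// used CLIPS rules embedded in a C host). Facts and rules are hypothetical.
#include <cstdio>
#include <set>
#include <string>
#include <vector>

struct Rule {
    std::vector<std::string> conditions;  // all must be present as facts
    std::string conclusion;               // fact asserted when the rule fires
};

int main() {
    std::set<std::string> facts = {"tank_pressure_high", "valve_open"};
    std::vector<Rule> rules = {
        {{"tank_pressure_high", "valve_open"}, "risk_of_overpressure"},
        {{"risk_of_overpressure"}, "command_abort_loading"},
    };

    // Repeatedly fire any rule whose conditions are satisfied until no new
    // facts can be derived (naive forward chaining).
    bool changed = true;
    while (changed) {
        changed = false;
        for (const Rule& r : rules) {
            bool satisfied = true;
            for (const std::string& c : r.conditions)
                if (!facts.count(c)) { satisfied = false; break; }
            if (satisfied && facts.insert(r.conclusion).second) {
                std::printf("derived: %s\n", r.conclusion.c_str());
                changed = true;
            }
        }
    }
    return 0;
}
```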

    MGSim - Simulation tools for multi-core processor architectures

    MGSim is an open source discrete event simulator for on-chip hardware components, developed at the University of Amsterdam. It is intended to be a research and teaching vehicle to study fine-grained hardware/software interactions on many-core and hardware-multithreaded processors. It includes support for core models with different instruction sets, a configurable multi-core interconnect, multiple configurable cache and memory models, a dedicated I/O subsystem, and comprehensive monitoring and interaction facilities. The default model configuration shipped with MGSim implements Microgrids, a many-core architecture with hardware concurrency management. MGSim is written mostly in C++ and uses object classes to represent chip components. It is optimized for architecture models that can be described as process networks. (Comment: 33 pages, 22 figures, 4 listings, 2 tables)
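
    To illustrate the general pattern of modeling hardware components as objects in a discrete-event simulator, here is a small generic C++ sketch with a timestamp-ordered event queue and a self-rescheduling component. It does not reproduce MGSim's actual class hierarchy or its process-network machinery; the class and method names are invented for the example.

```cpp
// Generic discrete-event skeleton (not MGSim's classes): components schedule
// callbacks at future simulated cycles, and the kernel runs them in
// timestamp order.
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

struct ScheduledEvent {
    unsigned long when;              // simulated cycle at which to fire
    std::function<void()> action;
    bool operator>(const ScheduledEvent& o) const { return when > o.when; }
};

class Simulator {
public:
    void schedule(unsigned long when, std::function<void()> action) {
        queue_.push({when, std::move(action)});
    }
    void run() {
        while (!queue_.empty()) {
            ScheduledEvent ev = queue_.top();
            queue_.pop();
            now_ = ev.when;   // advance simulated time to the event
            ev.action();
        }
    }
    unsigned long now() const { return now_; }
private:
    std::priority_queue<ScheduledEvent, std::vector<ScheduledEvent>,
                        std::greater<ScheduledEvent>> queue_;
    unsigned long now_ = 0;
};

// A toy "component": a counter that re-schedules itself every `period` cycles.
struct Counter {
    Simulator& sim;
    unsigned long period;
    int remaining;
    void tick() {
        std::printf("cycle %lu: counter ticks\n", sim.now());
        if (--remaining > 0)
            sim.schedule(sim.now() + period, [this] { tick(); });
    }
};

int main() {
    Simulator sim;
    Counter c{sim, 5, 3};
    sim.schedule(0, [&c] { c.tick(); });
    sim.run();
    return 0;
}
```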