    Building fault detection data to aid diagnostic algorithm creation and performance testing.

    It is estimated that approximately 4-5% of national energy consumption could be saved through corrections to existing commercial building controls infrastructure and the resulting efficiency improvements. Accordingly, automated fault detection and diagnostics (FDD) algorithms are designed to identify the presence of operational faults and their root causes. A diversity of techniques is used for FDD, spanning physical-model, black-box, and rule-based approaches. A persistent challenge has been the lack of common datasets and test methods for benchmarking their accuracy. This article presents a first-of-its-kind public dataset with ground-truth data on the presence and absence of building faults. The dataset spans a range of seasons and operational conditions and encompasses multiple building system types. It contains information on fault severity, as well as data points reflecting the measurements that FDD algorithms typically have access to in building control systems. The data were created using simulation models as well as experimental test facilities, and will be expanded over time.
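
    As a minimal illustration of the rule-based end of the FDD spectrum mentioned above, the sketch below flags a cooling fault when the supply-air temperature persistently misses its setpoint even though the cooling valve is saturated. The struct fields, thresholds, and units are hypothetical and are not taken from the published dataset's schema.

```c
/* Minimal sketch of a rule-based fault detection check on a hypothetical
 * air-handling-unit sample. Field names and thresholds are illustrative. */
#include <stdio.h>

struct ahu_sample {
    double supply_air_temp_c;      /* measured supply air temperature */
    double supply_air_setpoint_c;  /* commanded setpoint */
    double cooling_valve_pct;      /* cooling coil valve position, 0-100 */
};

/* Flag a fault when the supply air temperature misses its setpoint
 * even though the cooling valve is saturated. */
int detect_cooling_fault(const struct ahu_sample *s)
{
    double error = s->supply_air_temp_c - s->supply_air_setpoint_c;
    return (error > 2.0 && s->cooling_valve_pct > 95.0);
}

int main(void)
{
    struct ahu_sample s = { 18.7, 13.0, 100.0 };
    printf("fault=%d\n", detect_cooling_fault(&s));
    return 0;
}
```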

    A fine-grain time-sharing Time Warp system

    Although Parallel Discrete Event Simulation (PDES) platforms relying on the Time Warp (optimistic) synchronization protocol already exploit parallelism, several techniques have been proposed to further improve performance. These include optimized approaches for state restore, as well as techniques for load balancing or for (dynamically) controlling the degree of speculation, the latter specifically targeting the causality errors that lead to wasted computation. However, in state-of-the-art Time Warp systems, event processing is not preemptable, which can prevent a prompt reaction to the injection of higher-priority (i.e., lower-timestamp) events. Delaying the processing of these events may, in turn, increase the incidence of incorrect speculation. In this article we present the design and realization of a fine-grain time-sharing Time Warp system, to be run on multi-core Linux machines, which makes systematic use of event preemption in order to dynamically reassign the CPU to higher-priority events/tasks. Our proposal is based on a truly dual-mode execution, application vs. platform, which includes timer-interrupt-based support for bringing control back to platform mode at very fine-grain periods so that the CPU can be reassigned. The latter facility is offered by an ad-hoc timer-interrupt management module for Linux, which we release, together with the overall time-sharing support, within the open-source ROOT-Sim platform. An experimental assessment based on the classical PHOLD benchmark and two real-world models is presented, showing that our proposal effectively reduces the incidence of causality errors, as compared to traditional Time Warp, especially when running with higher degrees of parallelism.
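
    The control flow described above, timer-driven returns to platform mode so that a lower-timestamp event can preempt the one being processed, can be mimicked in user space as in the sketch below. This is only an illustration of the idea under POSIX assumptions; the actual ROOT-Sim support relies on a dedicated kernel timer-interrupt module and a true dual-mode (application vs. platform) execution.

```c
/* Illustrative sketch of timer-driven preemption checks during event
 * processing. Timestamps and the 1 ms period are placeholders. */
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile sig_atomic_t preempt_requested = 0;

static void on_tick(int sig)
{
    (void)sig;
    preempt_requested = 1;   /* ask the worker to return to platform mode */
}

/* Called at fine-grain points inside event handlers: if a lower-timestamp
 * event is pending, control would go back to the platform scheduler. */
static void preemption_point(double current_ts, double pending_min_ts)
{
    if (preempt_requested && pending_min_ts < current_ts) {
        preempt_requested = 0;
        printf("yield: pending event at %.3f preempts event at %.3f\n",
               pending_min_ts, current_ts);
        /* platform-mode CPU reassignment would happen here */
    }
}

int main(void)
{
    struct itimerval tv = { {0, 1000}, {0, 1000} };  /* 1 ms period */
    signal(SIGALRM, on_tick);
    setitimer(ITIMER_REAL, &tv, NULL);

    for (long i = 0; i < 5000000; i++)   /* stand-in for event processing */
        preemption_point(10.0, 7.5);
    return 0;
}
```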

    A Comprehensive Experimental Comparison of Event Driven and Multi-Threaded Sensor Node Operating Systems

    The capabilities of a sensor network are strongly influenced by the operating system used on the sensor nodes. In general, two different sensor network operating system types are currently considered: event-driven and multi-threaded. It is commonly assumed that event-driven operating systems are better suited to sensor networks, as they use less memory and processing resources. However, if factors other than resource usage are considered important, a multi-threaded system might be preferred. This paper compares the resource needs of multi-threaded and event-driven sensor network operating systems. The resources considered are memory usage and power consumption. Additionally, the event-handling capabilities of event-driven and multi-threaded operating systems are analyzed and compared. The results presented in this paper show that for a number of application areas a thread-based sensor network operating system is feasible and preferable.
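
    To make the event-driven model concrete, the sketch below shows a run-to-completion dispatcher of the kind such operating systems are built around: all handlers share a single stack, whereas a multi-threaded design would pay for a separate stack per task. Handler names and the queue size are illustrative and do not correspond to any particular sensor OS.

```c
/* Minimal event-driven dispatcher: events are posted to a queue and
 * their handlers run to completion on one shared stack. */
#include <stdio.h>

typedef void (*handler_t)(void);

static void on_radio_rx(void)   { printf("radio packet handled\n"); }
static void on_adc_sample(void) { printf("sensor sample handled\n"); }

#define QUEUE_LEN 8
static handler_t queue[QUEUE_LEN];
static int head = 0, tail = 0;

static int post(handler_t h)               /* enqueue an event */
{
    int next = (tail + 1) % QUEUE_LEN;
    if (next == head) return -1;            /* queue full */
    queue[tail] = h;
    tail = next;
    return 0;
}

int main(void)
{
    post(on_radio_rx);
    post(on_adc_sample);
    while (head != tail) {                  /* run-to-completion loop */
        handler_t h = queue[head];
        head = (head + 1) % QUEUE_LEN;
        h();
    }
    return 0;
}
```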

    Transparent multi-core speculative parallelization of DES models with event and cross-state dependencies

    In this article we tackle the transparent parallelization of Discrete Event Simulation (DES) models to be run on top of multi-core machines according to speculative schemes. The innovation in our proposal lies in considering a more general programming and execution model than the one targeted by state-of-the-art PDES platforms: the state portion accessible while processing an event at a specific simulation object is not limited to that object's own state or to shared global variables. Rather, the simulation object is allowed to access (and alter) the state of any other object, causing what we term a cross-state dependency. We note that this model exactly matches typical (easy-to-manage) sequential-style DES programming, where a (dynamically allocated) state portion of object A can be accessed by object B in read or write mode (or both), e.g., by passing a pointer to B as the payload of a scheduled simulation event. However, while read/write memory accesses performed in a sequential run are always guaranteed to observe (and give rise to) a consistent snapshot of the simulation model's state, consistency is not automatically guaranteed in the case of parallel, concurrent execution of simulation objects with cross-state dependencies. We address this consistency issue, and its application-transparent support, in the context of parallel and optimistic executions. This is achieved by introducing an advanced memory-management architecture, able to efficiently detect read/write accesses by concurrent objects to any object's state in an application-transparent manner, together with synchronization mechanisms that exploit the parallelism of the underlying multi-core architecture while transparently handling both cross-state and traditional event-based dependencies. Our proposal targets Linux and has been integrated with the ROOT-Sim open-source optimistic simulation platform, although its design principles, and most parts of the developed software, are of general relevance. Copyright 2014 ACM.
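
    One common way to detect accesses to another object's state transparently is page-level memory protection, sketched below under POSIX assumptions (mprotect plus a SIGSEGV handler). This only illustrates the detection idea; the memory-management architecture described in the article is considerably more elaborate and includes kernel-level support.

```c
/* Sketch of detecting a cross-state access by trapping the page fault
 * on a protected state buffer, recording the dependency, and letting
 * the access proceed. Illustrative only. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void *tracked_state;      /* state of a "remote" simulation object */
static size_t page_size;

static void on_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    char *addr = (char *)si->si_addr;
    if (addr >= (char *)tracked_state &&
        addr < (char *)tracked_state + page_size) {
        /* record the cross-state dependency, then allow the access */
        static const char msg[] = "cross-state access detected\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        mprotect(tracked_state, page_size, PROT_READ | PROT_WRITE);
        return;                  /* faulting instruction is retried */
    }
    _exit(1);                    /* unrelated fault: abort */
}

int main(void)
{
    page_size = (size_t)sysconf(_SC_PAGESIZE);
    tracked_state = mmap(NULL, page_size, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (tracked_state == MAP_FAILED) return 1;

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_fault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    /* object B writes into object A's (protected) state */
    memset(tracked_state, 0, 64);
    printf("access completed after dependency was recorded\n");
    return 0;
}
```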

    Making intelligent systems team players: Case studies and design issues. Volume 1: Human-computer interaction design

    Initial results are reported from a multi-year, interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. The objective is to achieve more effective human-computer interaction (HCI) for systems with real-time fault management capabilities. Intelligent fault management systems within NASA were evaluated for insight into the design of systems with complex HCI. Preliminary results include: (1) a description of real-time fault management in aerospace domains; (2) recommendations and examples for improving intelligent system design and user interface design; (3) identification of issues requiring further research; and (4) recommendations for a development methodology that integrates HCI design into intelligent system design.

    Development of an Autonomous Distributed Fault Management Architecture for Spacecraft Formations Involving Proximity Operations

    CubeSat formations have been identified as a new paradigm for addressing important science questions, but they are often early adopters of new technologies that carry additional risks. When these missions involve proximity operations, novel fault management architectures are needed to handle these risks. Building on established methods, this paper presents one such architecture that involves a passively safe relative orbit design, interchangeable chief-deputy roles, a formation-level fault diagnosis scheme, and an autonomous multi-agent fault handling strategy. The primary focus is to enable the reliable detection of faults occurring on any formation member in real time and the autonomous decision making needed to resolve them while keeping the formation safe from an inter-satellite collision. The NSF-sponsored Virtual Super-resolution Optics with Reconfigurable Swarms (VISORS) mission, which consists of two 6U CubeSats flying in formation 40 meters apart as a distributed solar telescope, is used as a case study for the application of this architecture. The underlying fault analysis, the formulation of key elements of the fault detection and response strategies, and the flight software implementation for VISORS are discussed in the paper.
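
    As a purely hypothetical illustration of formation-level fault handling of the kind described above, the sketch below maps a diagnosed fault class and the role of the affected spacecraft to a coarse response. The fault classes, roles, and responses are invented for illustration and are not the VISORS fault table.

```c
/* Hypothetical decision sketch for formation-level fault handling,
 * prioritizing collision safety. Categories are illustrative only. */
#include <stdio.h>

enum role     { ROLE_CHIEF, ROLE_DEPUTY };
enum fault    { FAULT_NONE, FAULT_GNC, FAULT_COMM, FAULT_POWER };
enum response { RESP_CONTINUE, RESP_ROLE_SWAP, RESP_ABORT_TO_SAFE_ORBIT };

/* Decide a formation-level response from a diagnosed fault and the role
 * of the affected spacecraft. */
static enum response decide(enum fault f, enum role affected)
{
    if (f == FAULT_NONE)
        return RESP_CONTINUE;
    if (f == FAULT_GNC)                  /* relative navigation at risk */
        return RESP_ABORT_TO_SAFE_ORBIT;
    if (affected == ROLE_CHIEF)          /* degraded chief hands over */
        return RESP_ROLE_SWAP;
    return RESP_CONTINUE;                /* deputy comm/power: monitor */
}

int main(void)
{
    static const char *names[] = { "continue", "role swap", "abort to safe orbit" };
    printf("GNC fault on deputy -> %s\n", names[decide(FAULT_GNC, ROLE_DEPUTY)]);
    printf("comm fault on chief -> %s\n", names[decide(FAULT_COMM, ROLE_CHIEF)]);
    return 0;
}
```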

    Electrodynamic tether system study

    The purpose of this program is to define an Electrodynamic Tether System (ETS) that could be erected from the space station and/or platforms to function as an energy storage device. A schematic representation of the ETS concept mounted on the space station is presented. In addition to the hardware design and configuration efforts, studies are also documented involving simulations of the Earth's magnetic field and its effect on overall system efficiency calculations. Also discussed are some preliminary computer simulations of orbit perturbations caused by the cyclic day/night operations of the ETS. System cost estimates, an outline of future development testing for the ETS, and conclusions and recommendations are also provided.

    Implementation and Deployment Evaluation of the DMAMAC Protocol for Wireless Sensor Actuator Networks

    The increased application of wireless technologies, including Wireless Sensor Actuator Networks (WSAN), in industry has given rise to a plethora of protocol designs. These designs target metrics ranging from energy efficiency to real-time constraints. Protocol design typically starts with a requirements specification and continues with analytic and model-based simulation analysis. State-of-the-art network simulators provide extensive physical environment emulation, but still have limitations due to model abstractions. Deployment testing on actual hardware is therefore vital in order to validate implementability and usability in the real environment. The contribution of this article is the deployment testing of the Dual-Mode Adaptive MAC (DMAMAC) protocol. DMAMAC is an energy-efficient protocol recently proposed for real-time process control applications, based on Time Division Multiple Access (TDMA) in conjunction with dual-mode operation. A main challenge in implementing DMAMAC is the use of a dynamic superframe structure. We have successfully implemented the protocol on the Zolertia Z1 platform using TinyOS (2.x). Our scenario-based evaluation shows minimal packet loss and smooth mode-switch operation, thus indicating a reliable implementation of the DMAMAC protocol.
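
    To illustrate the dual-mode superframe idea at the core of DMAMAC, the sketch below builds a TDMA superframe whose slot allocation changes when the network switches from a steady to a transient mode. The slot counts and the switch trigger are invented for illustration and are not the parameters used in the actual DMAMAC implementation or deployment.

```c
/* Rough sketch of a dual-mode TDMA superframe: steady mode uses sparse
 * data slots and long sleep, transient mode uses denser data slots. */
#include <stdio.h>

enum mode { MODE_STEADY, MODE_TRANSIENT };

struct superframe {
    enum mode mode;
    int notification_slots;   /* sink broadcasts mode-switch commands */
    int data_slots;           /* sensor -> sink transmissions */
    int sleep_slots;          /* radio off to save energy */
};

static struct superframe build_superframe(enum mode m)
{
    struct superframe sf = { m, 1, 0, 0 };
    if (m == MODE_STEADY) { sf.data_slots = 4;  sf.sleep_slots = 27; }
    else                  { sf.data_slots = 16; sf.sleep_slots = 15; }
    return sf;
}

int main(void)
{
    enum mode m = MODE_STEADY;
    for (int frame = 0; frame < 4; frame++) {
        struct superframe sf = build_superframe(m);
        printf("frame %d: mode=%s data=%d sleep=%d\n", frame,
               m == MODE_STEADY ? "steady" : "transient",
               sf.data_slots, sf.sleep_slots);
        if (frame == 1)       /* e.g., an alarm reported in a data slot */
            m = MODE_TRANSIENT;
    }
    return 0;
}
```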