    Analysis of information systems for hydropower operations

    The operations of hydropower systems were analyzed, with emphasis on water resource management, to determine how aerospace-derived information system technologies can increase energy output. Better utilization of water resources was sought through improved reservoir inflow forecasting based on hydrometeorologic information systems with new or improved sensors, satellite data relay systems, and advanced scheduling techniques for water release. Specific mechanisms for increased energy output were identified, principally the use of more timely and accurate short-term (0-7 day) inflow information to reduce spillage caused by unanticipated dynamic high-inflow events. The hydrometeorologic models used in predicting inflows were examined to determine the sensitivity of inflow prediction accuracy to the many variables employed in the models, and the results were used to establish information system requirements. Sensor and data handling system capabilities were reviewed and compared to these requirements, and an improved information system concept was outlined.
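
    A minimal sketch of the spillage-reduction mechanism the abstract describes, with purely hypothetical reservoir numbers: given a short-term inflow forecast, the operator pre-releases water through the turbines ahead of a high-inflow event instead of spilling it later.

```python
# Hedged sketch (all numbers hypothetical): how a short-term inflow
# forecast can cut spillage. The reservoir pre-releases water through
# the turbines ahead of a forecast high-inflow event.

CAPACITY = 100.0       # reservoir storage limit (arbitrary units)
TURBINE_MAX = 10.0     # max release through turbines per day

def simulate(inflows, forecast_horizon=0):
    """Return (turbine_release_total, spillage) over the inflow sequence."""
    storage, energy, spill = 80.0, 0.0, 0.0
    for day, inflow in enumerate(inflows):
        # With a forecast, release at full turbine capacity whenever a
        # high-inflow event is visible within the forecast horizon.
        upcoming = inflows[day:day + forecast_horizon + 1]
        release = TURBINE_MAX if max(upcoming) > 15.0 else 5.0
        release = min(release, storage + inflow)
        storage += inflow - release
        energy += release
        if storage > CAPACITY:             # unanticipated excess is spilled
            spill += storage - CAPACITY
            storage = CAPACITY
    return energy, spill

inflows = [5, 5, 30, 30, 5, 5, 5]             # high-inflow event on days 2-3
print(simulate(inflows, forecast_horizon=0))  # no forecast: more spillage
print(simulate(inflows, forecast_horizon=3))  # 0-3 day forecast: pre-release
```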

    Requirements for the workflow-based support of release management processes in the automotive sector

    One of the challenges the automotive industry currently has to master is the complexity of a car's electrical/electronic system. One key factor for reaching short product development cycles and high quality in this area is well-defined, properly executed test and release processes. In this paper we show why workflow management technology is needed to support these processes and what this support should look like. We further confront these requirements with the features of contemporary workflow technology and discuss which extensions become necessary.
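
    For illustration only, a test-and-release process modeled as a tiny state machine of the kind a workflow management system would execute; the states and transitions here are hypothetical, not taken from the paper.

```python
# Hypothetical release workflow as a state machine: each event moves a
# release candidate between well-defined process states.

TRANSITIONS = {
    "submitted":  {"start_test": "under_test"},
    "under_test": {"tests_passed": "approved", "tests_failed": "rejected"},
    "rejected":   {"resubmit": "submitted"},
    "approved":   {"release": "released"},
    "released":   {},
}

class ReleaseWorkflow:
    def __init__(self):
        self.state = "submitted"

    def fire(self, event: str) -> None:
        try:
            self.state = TRANSITIONS[self.state][event]
        except KeyError:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")

wf = ReleaseWorkflow()
for event in ["start_test", "tests_failed", "resubmit",
              "start_test", "tests_passed", "release"]:
    wf.fire(event)
print(wf.state)  # -> released
```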

    Algorithm Diversity for Resilient Systems

    Diversity can significantly increase the resilience of systems by reducing the prevalence of shared vulnerabilities and making vulnerabilities harder to exploit. Work on software diversity for security typically creates variants of a program using low-level code transformations. This paper is the first to study algorithm diversity for resilience. We first describe how a method based on high-level invariants and systematic incrementalization can be used to create algorithm variants. Executing multiple variants in parallel and comparing their outputs provides greater resilience than executing one variant. To prevent different parallel schedules from causing variants' behaviors to diverge, we present a synchronized execution algorithm for DistAlgo, an extension of Python for high-level, precise, executable specifications of distributed algorithms. We propose static and dynamic metrics for measuring diversity. An experimental evaluation of algorithm diversity combined with implementation-level diversity for several sequential and distributed algorithms shows the benefits of algorithm diversity.
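
    A minimal sketch of the multi-variant execution idea: run several algorithm variants on the same input in parallel and accept the majority output. The variants below are simple stand-ins (three sorting algorithms); the paper derives its variants systematically from high-level invariants.

```python
# N-variant execution sketch: diverse variants computing the same result,
# with a majority vote over their outputs for resilience.

from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

variants = [insertion_sort, merge_sort, sorted]

def resilient_run(xs):
    # Execute all variants in parallel and compare their outputs.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda f: tuple(f(xs)), variants))
    winner, votes = Counter(results).most_common(1)[0]
    if votes <= len(variants) // 2:
        raise RuntimeError("variants diverged; no majority output")
    return list(winner)

print(resilient_run([3, 1, 2]))  # -> [1, 2, 3]
```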

    Implementing atomic actions in Ada 95

    Atomic actions are an important dynamic structuring technique that aids the construction of fault-tolerant concurrent systems. Although they were developed some years ago, none of the well-known commercially available programming languages directly supports their use. This paper summarizes software fault tolerance techniques for concurrent systems, evaluates the Ada 95 programming language from the perspective of its support for software fault tolerance, and shows how Ada 95 can be used to implement software fault tolerance techniques. In particular, it shows how packages, protected objects, requeue, exceptions, asynchronous transfer of control, tagged types, and controlled types can be used as building blocks from which to construct atomic actions with forward and backward error recovery that are resilient to deserter tasks and task abortion.
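
    The paper's building blocks are Ada 95 specific; the Python sketch below only illustrates the backward-error-recovery pattern (a recovery block) that such constructs implement: checkpoint the state, try a primary algorithm, and on failure restore the state and try an alternate.

```python
# Recovery-block sketch (illustrative, not the paper's Ada mechanism):
# backward error recovery via checkpoint/restore plus alternates.

import copy

def recovery_block(state, alternates, acceptance_test):
    checkpoint = copy.deepcopy(state)            # recovery point
    for attempt in alternates:
        try:
            result = attempt(state)
            if acceptance_test(result):
                return result                    # forward progress
        except Exception:
            pass                                 # treat as a failed attempt
        state.clear()
        state.update(copy.deepcopy(checkpoint))  # backward recovery
    raise RuntimeError("all alternates failed the acceptance test")

def primary(state):
    state["x"] = 1 / state["divisor"]            # fails when divisor == 0
    return state["x"]

def alternate(state):
    state["x"] = 0.0                             # degraded but acceptable
    return state["x"]

state = {"divisor": 0}
print(recovery_block(state, [primary, alternate],
                     acceptance_test=lambda r: r is not None))  # -> 0.0
```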

    Flexible Scheduling in Multimedia Kernels: an Overview

    Current Hard Real-Time (HRT) kernels guarantee their timely behaviour at the cost of a rather restrictive use of the available resources. This makes current HRT scheduling techniques inadequate for a multimedia environment, where a better and more flexible use of resources can yield considerable gains. We show that the flexibility and efficiency of multimedia kernels can be improved. To this end, we introduce Real-Time Transactions (RTT) with Deadline Inheritance policies for a small class of scheduling algorithms, and we evaluate these algorithms for use in a multimedia environment.
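
    A hedged sketch of the deadline-inheritance idea (task names and numbers are illustrative, not from the paper): when an urgent transaction blocks on a resource held by a less urgent one, the holder inherits the earlier deadline so an earliest-deadline-first scheduler runs it sooner and the resource is released quickly.

```python
# Deadline inheritance for EDF scheduling, minimal illustration.

class Task:
    def __init__(self, name, deadline):
        self.name, self.deadline = name, deadline
        self.inherited = None          # deadline inherited from a blocked waiter

    def effective_deadline(self):
        if self.inherited is None:
            return self.deadline
        return min(self.deadline, self.inherited)

def block_on(holder, waiter):
    """waiter blocks on a resource held by holder: holder inherits the
    waiter's (earlier) effective deadline until it releases the resource."""
    d = waiter.effective_deadline()
    holder.inherited = d if holder.inherited is None else min(holder.inherited, d)

video  = Task("video_frame", deadline=10)   # urgent multimedia transaction
audio  = Task("audio_mix",   deadline=40)
logger = Task("logger",      deadline=90)   # holds a lock video_frame needs

block_on(holder=logger, waiter=video)       # video_frame is now blocked

# EDF ready queue (the blocked video_frame is not in it): with inheritance,
# logger precedes audio_mix and so releases the lock sooner.
ready = sorted([logger, audio], key=Task.effective_deadline)
print([t.name for t in ready])              # -> ['logger', 'audio_mix']
```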

    Data-driven Modeling and Coordination of Large Process Structures

    In the engineering domain, the development of complex products (e.g., cars) necessitates the coordination of thousands of (sub-)processes. One of the biggest challenges for process management systems is to support the modeling, monitoring and maintenance of the many interdependencies between these sub-processes. The resulting process structures are large and are characterized by a strong relationship with the assembly of the product; i.e., the sub-processes to be coordinated can be related to the different product components. So far, sub-process coordination has mainly been accomplished manually, resulting in high effort and inconsistencies. IT support is required to utilize the information about the product and its structure for deriving, coordinating and maintaining such data-driven process structures. In this paper, we introduce the COREPRO framework for the data-driven modeling of large process structures. The approach reduces modeling effort significantly and provides mechanisms for maintaining data-driven process structures.
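
    A toy sketch of the data-driven idea (the data model here is hypothetical, not COREPRO's actual one): each product component gets its own sub-process, and process dependencies are derived from the product's assembly structure rather than modeled by hand.

```python
# Derive a process structure from a product structure.

product = {                          # component -> sub-components
    "car":          ["engine", "chassis"],
    "engine":       ["control_unit"],
    "chassis":      [],
    "control_unit": [],
}

def derive_process_structure(product):
    """Each component is handled by its own sub-process; a component's
    process depends on the processes of its parts."""
    depends_on = {}
    for component, parts in product.items():
        depends_on[f"test_{component}"] = [f"test_{p}" for p in parts]
    return depends_on

for process, deps in derive_process_structure(product).items():
    print(process, "<-", deps)
# test_car <- ['test_engine', 'test_chassis'], and so on: changing the
# product structure automatically changes the derived process structure.
```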

    Using graphical models and multi-attribute utility theory for probabilistic uncertainty handling in large systems, with application to nuclear emergency management

    Although many decision-making problems involve uncertainty, uncertainty handling within large decision support systems (DSSs) is challenging. One domain where uncertainty handling is critical is emergency response management, in particular nuclear emergency response, where decision making takes place in an uncertain, dynamically changing environment. Assimilation and analysis of data can help to reduce these uncertainties, but it is critical to do this in an efficient and defensible way. After briefly introducing the structure of a typical DSS for nuclear emergencies, the paper sets up a theoretical structure that enables a formal Bayesian decision analysis to be performed for environments like this within a DSS architecture. In such probabilistic DSSs, many input conditional probability distributions are provided by different sets of experts overseeing different aspects of the emergency. These probabilities are then used by the decision maker (DM) to find her optimal decision. We demonstrate in this paper that unless due care is taken in such a composite framework, coherence and rationality may be compromised in a sense made explicit below. The technology we describe here builds a framework around which Bayesian data updating can be performed in a modular way, ensuring both coherence and efficiency, and provides sufficient unambiguous information to enable the DM to discover her expected-utility-maximizing policy.
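
    A toy worked example of the decision-analytic core: expert-supplied probabilities are combined with (already aggregated) multi-attribute utilities, and the DM picks the policy maximizing expected utility. All numbers and event names below are purely illustrative.

```python
# Expected-utility maximization over expert-supplied probabilities.

# Expert panel 1: probability of each contamination level after a release.
p_level = {"low": 0.7, "high": 0.3}

# Expert panel 2: utility of each (decision, level) pair, trading off
# health consequences against disruption on a single aggregated scale.
utility = {
    ("evacuate", "low"):  0.60,   # needless disruption
    ("evacuate", "high"): 0.90,   # harm largely avoided
    ("shelter",  "low"):  0.95,   # little disruption, little harm
    ("shelter",  "high"): 0.40,   # exposure not prevented
}

def expected_utility(decision):
    return sum(p * utility[(decision, level)] for level, p in p_level.items())

decisions = ["evacuate", "shelter"]
print({d: round(expected_utility(d), 3) for d in decisions})
print("optimal decision:", max(decisions, key=expected_utility))
# -> shelter (0.785) beats evacuate (0.69) under these illustrative numbers
```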