
    Scheduler-specific Confidentiality for Multi-Threaded Programs and Its Logic-Based Verification

    Observational determinism has been proposed in the literature as a way to ensure confidentiality for multi-threaded programs. Intuitively, a program is observationally deterministic if the behavior of its public variables is deterministic, i.e., independent of the private variables and of the scheduling policy. Several formal definitions of observational determinism exist, but all of them have shortcomings; for example, they accept insecure programs or reject too many innocuous ones. Moreover, the role of the scheduler is ignored in all of these definitions. A program that is secure under one kind of scheduler might not be secure when executed with a different scheduler, and the existing definitions do not ensure that an accepted program behaves securely under the scheduler used to deploy it. This paper therefore proposes a new formalization of scheduler-specific observational determinism. It accepts programs that are secure when executed under a specific scheduler, and it is less restrictive on programs that are harmless under a particular scheduling policy. In addition, we discuss how compliance with our definition can be verified using model checking. We use the idea of self-composition and rephrase the observational determinism property for a single program C as a temporal logic formula over the program C executed in parallel with an independent copy of itself. Thus two states reachable during the execution of C are combined into one reachable state of the self-composed program, which makes it possible to compare two program executions within a single temporal logic formula. The actual characterization is done in two steps. First, we discuss how stuttering equivalence can be characterized as a temporal logic formula; observational determinism is then expressed in terms of this stuttering equivalence characterization. This results in a conjunction of an LTL formula and a CTL formula, both of which are amenable to model checking.
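
    The following sketch is not from the paper; it is a minimal Python illustration of the self-composition idea mentioned above: a program's transition system is paired with an independent copy of itself, so that two executions become one execution of the composed system, over which a temporal-logic formula can compare the public variables of the two copies. The state names and the toy system are invented for the example.

```python
from itertools import product

def self_compose(transitions):
    """Asynchronous product of a transition system with an independent copy of
    itself: each composed step advances exactly one of the two copies."""
    states = set(transitions)
    composed = {}
    for s, t in product(states, states):
        succs = [(s2, t) for s2 in transitions[s]]   # left copy moves
        succs += [(s, t2) for t2 in transitions[t]]  # right copy moves
        composed[(s, t)] = succs
    return composed

# Toy transition system: states are (program point, value of a low variable).
ts = {
    ("p0", 0): [("p1", 0)],
    ("p1", 0): [("p2", 1)],
    ("p2", 1): [("p2", 1)],  # terminal self-loop
}

composed = self_compose(ts)
# A model checker would now verify, over `composed`, that the low-variable
# projections of the two copies are stuttering equivalent on all executions.
for pair, succs in sorted(composed.items()):
    print(pair, "->", succs)
```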

    Predicting Scheduling Failures in the Cloud

    Cloud computing has emerged as a key technology to deliver and manage computing, platform, and software services over the Internet. Task scheduling algorithms play an important role in the efficiency of cloud computing services, as they aim to reduce the turnaround time of tasks and improve resource utilization. Several task scheduling algorithms have been proposed in the literature for cloud computing systems, the majority relying on the computational complexity of tasks and the distribution of resources. However, many tasks scheduled by these algorithms still fail because of unforeseen changes in the cloud environment. In this paper, using task execution and resource utilization data extracted from the execution traces of real-world applications at Google, we explore the possibility of predicting the scheduling outcome of a task using statistical models. If we can successfully predict task failures, we may be able to reduce the execution time of jobs by rescheduling failing tasks earlier (i.e., before their actual failure time). Our results show that statistical models can predict task failures with a precision of up to 97.4% and a recall of up to 96.2%. We simulate the potential benefits of such predictions using the GloudSim toolkit and find that they can increase the number of finished tasks by up to 40%. We also perform a case study using the Hadoop framework of Amazon Elastic MapReduce (EMR) and the jobs of a gene expression correlation analysis study from breast cancer research. We find that when the Hadoop scheduler is extended with our predictive models, the percentage of failed jobs can be reduced by up to 45%, with an overhead of less than 5 minutes.
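
    As a rough illustration of the kind of pipeline described above, the hedged sketch below trains a classifier on synthetic stand-ins for trace-derived task features and reports precision and recall. The feature names, the random-forest model, and the synthetic labels are assumptions for the example, not the paper's actual features or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
n = 5000
# Synthetic stand-ins for trace-derived features (e.g. CPU request, memory
# request, priority, number of prior evictions, mean machine load).
X = rng.random((n, 5))
# Synthetic label: tasks on heavily loaded machines with many prior evictions
# fail more often (purely illustrative ground truth).
y = ((0.6 * X[:, 4] + 0.4 * X[:, 3] + 0.1 * rng.standard_normal(n)) > 0.6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))
# A scheduler extension could reschedule tasks predicted to fail before their
# actual failure time, as the paper simulates with GloudSim.
```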

    A Time-Triggered Constraint-Based Calculus for Avionic Systems

    The Integrated Modular Avionics (IMA) architecture and the Time-Triggered Ethernet (TTEthernet) network have emerged as the key components of a typical architecture model for recent civil aircraft. We propose a real-time constraint-based calculus targeted at the analysis of such avionic embedded systems. We show our framework at work on the modeling of both the IMA architecture and the TTEthernet network, illustrating their behavior with the well-known Flight Management System (FMS).
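
    The calculus itself is not reproduced here, but the hedged Python sketch below illustrates one representative kind of constraint that a time-triggered network schedule must satisfy: periodic frames sharing a link must never overlap within the hyperperiod. The frame parameters are invented for the example.

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def has_conflict(frames):
    """frames: list of (offset_us, duration_us, period_us) sharing one link.
    Returns True if any two frame instances overlap within the hyperperiod."""
    hp = reduce(lcm, (period for _, _, period in frames))
    intervals = []
    for offset, duration, period in frames:
        for k in range(hp // period):
            start = offset + k * period
            intervals.append((start, start + duration))
    intervals.sort()
    # A conflict exists if some instance starts before the previous one ends.
    return any(b_start < a_end
               for (_, a_end), (b_start, _) in zip(intervals, intervals[1:]))

frames = [(0, 100, 1000), (200, 100, 2000), (500, 100, 1000)]  # made-up frames
print("schedule conflict:", has_conflict(frames))
```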

    PaPaS: A Portable, Lightweight, and Generic Framework for Parallel Parameter Studies

    The current landscape of scientific research relies heavily on modeling and simulation, typically with complex execution flows and parameterization properties. Execution flows are not necessarily straightforward, since they may require multiple processing tasks and iterations. Furthermore, parameter and performance studies are common approaches used to characterize a simulation, often requiring traversal of a large parameter space. High-performance computers offer practical resources at the expense of users handling the setup, submission, and management of jobs. This work presents the design of PaPaS, a portable, lightweight, and generic workflow framework for conducting parallel parameter and performance studies. Workflows are defined using parameter files based on a keyword-value pair syntax, relieving the user of the overhead of creating complex scripts to manage the workflow. A parameter set consists of any combination of environment variables, files, partial file contents, and command-line arguments. PaPaS is being developed in Python 3 with support for distributed parallelization using SSH, batch systems, and C++ MPI. The PaPaS framework runs as user processes and can be used in single-node, multi-node, and multi-tenant computing systems. An example simulation using the BehaviorSpace tool from NetLogo and a matrix multiply using OpenMP are presented as parameter and performance studies, respectively. The results demonstrate that the PaPaS framework offers a simple method for defining and managing parameter studies while increasing resource utilization. (Comment: 8 pages, 6 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22-26, 2018, Pittsburgh, PA, US)
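
    As a hedged illustration of the workflow style described above (not PaPaS's actual file format or API), the sketch below expands a keyword-value parameter specification into the Cartesian product of assignments and prints the command line each independent run would use; the parameter names and the ./matmul_openmp binary are made up.

```python
import itertools

param_spec = {                       # hypothetical keyword-value parameter set
    "threads": [1, 2, 4, 8],
    "matrix_size": [512, 1024],
    "schedule": ["static", "dynamic"],
}

keys = list(param_spec)
for values in itertools.product(*(param_spec[k] for k in keys)):
    assignment = dict(zip(keys, values))
    cmd = ["./matmul_openmp"] + [f"--{k}={v}" for k, v in assignment.items()]
    print("would run:", " ".join(cmd))
    # A real framework would submit each run locally, over SSH, or to a batch
    # system, e.g. with subprocess.run(cmd, check=True).
```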

    Army-NASA aircrew/aircraft integration program. Phase 5: A3I Man-Machine Integration Design and Analysis System (MIDAS) software concept document

    This is the Software Concept Document for the Man-machine Integration Design and Analysis System (MIDAS) being developed as part of Phase 5 of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The approach taken in this program since its inception in 1984 is one of incremental development with clearly defined phases. Phase 1 began in 1984, and subsequent phases have progressed at approximately 10-16 month intervals. Each phase of development consists of planning, setting requirements, preliminary design, detailed design, implementation, testing, demonstration, and documentation. Phase 5 began with an off-site planning meeting in November 1990, and Phase 5 development is expected to be complete and ready for demonstration to invited visitors from industry, government, and academia in May 1992. This document, produced during the preliminary design period of Phase 5, is intended to record the top-level design concept for MIDAS as it is currently conceived. It has two main objectives: (1) to inform interested readers of the goals of the MIDAS Phase 5 development period, and (2) to serve as the initial version of the MIDAS design document, which will be continuously updated as the design evolves. Since this document is written fairly early in the design period, many design issues remain unresolved; some of them are mentioned later in this document in the sections on specific components. Readers are cautioned that this is not a final design document and that, as the design of MIDAS matures, some of the design ideas recorded here will change. The final design will be documented in a detailed design document published after the demonstrations.

    Improving Third-Party Relaying for LTE-A: A Realistic Simulation Approach

    In this article we propose solutions to diverse conflicts that result from the deployment of the (still immature) relay node (RN) technology in LTE-A networks. These conflicts and their possible solutions have been observed by implementing standard-compliant relay functionalities in the Vienna simulator. As an original experimental approach, we model realistic RN operation, taking into account that transmitters are not active all the time due to half-duplex RN operation. We have rearranged existing elements of the simulator in a way that emulates RN behavior, rather than implementing a brand-new standalone component. We also study analytically some of the issues observed in the interaction between the network and the RNs, to draw conclusions beyond simulation observation. The main observations of this paper are that: i) additional time-varying interference management steps are needed, because the LTE-A standard employs a fixed time division between eNB-RN and RN-UE transmissions (typical relay capacity or throughput research models balance them optimally, which is unrealistic nowadays); ii) there is a trade-off between the time-division constraints of relaying and multi-user diversity; the stricter the constraints on relay scheduling are, the less flexibility schedulers have to exploit channel variation; and iii) the standard contains a variety of parameters for relaying configuration, but not all cases of interest are covered. (Comment: 17 one-column pages, 9 figures, accepted for publication in IEEE ICC 2014 MW)
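
    The first observation can be illustrated with a back-of-the-envelope model (not taken from the article): for a half-duplex decode-and-forward relay, a fixed 50/50 time division between the eNB-RN and RN-UE hops wastes capacity whenever the two hop rates differ, whereas an optimal split equalizes the time-scaled rates of both hops. The rates below are arbitrary example numbers.

```python
# End-to-end rate of a half-duplex relay when a fraction `alpha` of time is
# given to the eNB->RN hop (rate r1) and the rest to the RN->UE hop (rate r2).
def relay_rate(r1, r2, alpha):
    return min(alpha * r1, (1 - alpha) * r2)

r1, r2 = 60.0, 20.0                          # Mbps per hop, illustrative only
fixed = relay_rate(r1, r2, 0.5)              # fixed half/half time division
alpha_opt = r2 / (r1 + r2)                   # split that equalizes both hops
optimal = relay_rate(r1, r2, alpha_opt)      # equals r1*r2/(r1+r2)
print(f"fixed 50/50 split: {fixed:.1f} Mbps, optimal split: {optimal:.1f} Mbps")
```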

    Computing Probabilistic Bisimilarity Distances for Probabilistic Automata

    The probabilistic bisimilarity distance of Deng et al. has been proposed as a robust quantitative generalization of Segala and Lynch's probabilistic bisimilarity for probabilistic automata. In this paper, we present a characterization of the bisimilarity distance as the solution of a simple stochastic game. The characterization gives us an algorithm to compute the distances by applying Condon's simple policy iteration on these games. The correctness of Condon's approach, however, relies on the assumption that the games are stopping. Our games may be non-stopping in general, yet we are able to prove termination for this extended class of games. Other algorithms have already been proposed in the literature to compute these distances, with complexity in UP ∩ coUP and PPAD. Despite their theoretical relevance, these algorithms are inefficient in practice; to the best of our knowledge, ours is the first practical solution. The characterization of the probabilistic bisimilarity distance mentioned above crucially uses a dual presentation of the Hausdorff distance due to Mémoli. As an additional contribution, we show that Mémoli's result can also be used to prove that the bisimilarity distance bounds the difference in the maximal (or minimal) probability of two states satisfying arbitrary ω-regular properties, expressed, e.g., as LTL formulas.
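
    The paper's algorithm is Condon's simple policy iteration on games derived from the probabilistic automaton; as a hedged stand-in, the toy sketch below runs plain value iteration on a tiny simple stochastic game with MAX, MIN, and AVERAGE vertices and 0/1 sinks, just to show the kind of game value being computed. The game itself is made up and is not the game construction from the paper.

```python
game = {                       # vertex -> (kind, successors) or ("sink", value)
    "s0": ("max", ["s1", "s2"]),
    "s1": ("avg", ["one", "zero"]),
    "s2": ("min", ["s1", "one"]),
    "one": ("sink", 1.0),
    "zero": ("sink", 0.0),
}

def value_iteration(game, iters=100):
    # Start sinks at their payoff and all other vertices at 0, then repeatedly
    # apply the max/min/average local update until the values stabilize.
    val = {v: (data if kind == "sink" else 0.0) for v, (kind, data) in game.items()}
    for _ in range(iters):
        for v, (kind, succ) in game.items():
            if kind == "max":
                val[v] = max(val[s] for s in succ)
            elif kind == "min":
                val[v] = min(val[s] for s in succ)
            elif kind == "avg":
                val[v] = sum(val[s] for s in succ) / len(succ)
    return val

print(value_iteration(game))   # e.g. s0, s1, s2 all converge to 0.5 here
```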