
    Models for Deterministic Execution of Real-Time Multiprocessor Applications

    With the proliferation of multi-cores in embedded real-time systems, many industrial applications are being (re-)targeted to multiprocessor platforms. However, producing output data values that are exactly reproducible as a function of the data and timing of the inputs is harder to achieve on multiprocessors, even though it can be imperative for various practical reasons. On parallel platforms it is also harder to evaluate task utilization and ensure schedulability, especially under end-to-end communication timing constraints and aperiodic events. Based upon reactive-system extensions of Kahn process networks, we propose a model of computation that employs synchronous events and event priority relations to ensure deterministic execution. For this model, we propose an online scheduling policy and establish a link to a well-developed scheduling theory. We also implement this model in publicly available prototype tools and evaluate them on state-of-the-art multi-core hardware, with a streaming benchmark and an avionics case study.
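    The determinism idea can be illustrated with a minimal Kahn-style sketch (illustrative only; the process and channel names are our assumptions, not the paper's prototype tools). Processes communicate over FIFO channels and always perform blocking reads from a statically chosen channel, so the output stream depends only on the input streams, never on thread scheduling; a fixed priority relation replaces nondeterministic polling when merging streams.
    ```python
    import queue
    import threading

    def producer(chan, items):
        # Write a finite stream followed by an end-of-stream token.
        for x in items:
            chan.put(x)
        chan.put(None)

    def merge(hi, lo, out):
        # Deterministic merge under a fixed priority relation: fully consume
        # the high-priority channel first, using only *blocking* reads.
        # Polling "whichever channel is ready" would reintroduce
        # scheduling-dependent (nondeterministic) output orders.
        for chan in (hi, lo):
            while (x := chan.get()) is not None:   # blocking read
                out.put(x)
        out.put(None)

    hi, lo, out = queue.Queue(), queue.Queue(), queue.Queue()
    threading.Thread(target=producer, args=(hi, [1, 2])).start()
    threading.Thread(target=producer, args=(lo, [3, 4])).start()
    threading.Thread(target=merge, args=(hi, lo, out)).start()
    result = []
    while (x := out.get()) is not None:
        result.append(x)
    print(result)  # always [1, 2, 3, 4], regardless of thread scheduling
    ```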

    Taming computational complexity: efficient and parallel SimRank optimizations on undirected graphs

    SimRank is considered one of the most promising link-based ranking algorithms for evaluating the similarity of web documents in many modern search engines. In this paper, we investigate the optimization problem of SimRank similarity computation on undirected web graphs. We first present a novel algorithm to estimate the SimRank between vertices in O(n^3 + Kn^2) time, where n is the number of vertices and K is the number of iterations. In comparison, the most efficient implementation of the SimRank algorithm in [1] takes O(Kn^3) time in the worst case. To efficiently handle large-scale computations, we also propose a parallel implementation of the SimRank algorithm on multiple processors. The experimental evaluations on both synthetic and real-life data sets demonstrate the improved computation time and parallel efficiency of our proposed techniques.
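    For reference, the baseline the paper improves on is the standard O(Kn^3) iterative SimRank, which on an undirected graph can be written in matrix form as S_{k+1} = C * W^T S_k W with the diagonal pinned to 1, where W is the column-normalised adjacency matrix and C is the decay factor. A minimal sketch of that baseline (ours, not the paper's O(n^3 + Kn^2) estimation algorithm):
    ```python
    import numpy as np

    def simrank(adj, C=0.8, K=10):
        """adj: symmetric 0/1 adjacency matrix (numpy array), no self-loops."""
        n = adj.shape[0]
        deg = adj.sum(axis=0)
        W = adj / np.maximum(deg, 1)       # column-normalised transition matrix
        S = np.eye(n)
        for _ in range(K):
            S = C * (W.T @ S @ W)          # one O(n^3) iteration
            np.fill_diagonal(S, 1.0)       # s(v, v) = 1 by definition
        return S

    # Example: path graph 0-1-2
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)
    print(simrank(A).round(3))
    ```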

    Transductions Computed by One-Dimensional Cellular Automata

    Cellular automata are investigated with respect to their ability to compute transductions, that is, to transform inputs into outputs. The families of transductions computed are classified with regard to the time allowed to process the input and to compute the output. Since there is a particular interest in fast transductions, we mainly focus on the time complexities real time and linear time. We first investigate the computational capabilities of cellular automaton transducers by comparing them to iterative array transducers, that is, we compare the parallel input/output mode to the sequential input/output mode of massively parallel machines. By direct simulations, it turns out that the parallel mode is not weaker than the sequential one. Moreover, with regard to certain time complexities, cellular automaton transducers are even more powerful than iterative arrays. In the second part of the paper, the model in question is compared with two sequential devices: single-valued finite-state transducers and deterministic pushdown transducers. It turns out that both models can be simulated by cellular automaton transducers faster than by iterative array transducers. (In Proceedings AUTOMATA&JAC 2012, arXiv:1208.249)
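    The parallel input/output mode can be illustrated with a small sketch (the prefix-OR transduction below is our toy example, not one from the paper): the whole input is loaded into the cells at once, all cells step synchronously under a local rule, and the whole output is read off after linearly many steps.
    ```python
    def ca_transduce(inp, steps=None):
        # Parallel input mode: one input symbol per cell, loaded at once.
        cells = list(inp)
        n = len(cells)
        for _ in range(steps if steps is not None else n - 1):
            # Local rule: a cell becomes 1 if it or its left neighbour is 1,
            # so a 1 propagates rightward one cell per synchronous step.
            cells = [cells[i] | (cells[i - 1] if i > 0 else 0)
                     for i in range(n)]
        return cells  # parallel output mode: read all cells at once

    print(ca_transduce([0, 1, 0, 0, 1, 0]))  # -> [0, 1, 1, 1, 1, 1]
    ```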

    Anytime system level verification via parallel random exhaustive hardware in the loop simulation

    System level verification of cyber-physical systems has the goal of verifying that the whole (i.e., software + hardware) system meets the given specifications. Model checkers for hybrid systems cannot handle system level verification of actual systems; thus, Hardware In the Loop Simulation (HILS) is currently the main workhorse for system level verification. By using model-checking-driven exhaustive HILS, System Level Formal Verification (SLFV) can be effectively carried out for actual systems. We present a parallel random exhaustive HILS-based model checker for hybrid systems that, by simulating all operational scenarios exactly once in a uniform random order, is able to provide, at any time during the verification process, an upper bound on the probability that the System Under Verification exhibits an error in a yet-to-be-simulated scenario (the Omission Probability). We show the effectiveness of the proposed approach by presenting experimental results on SLFV of the Inverted Pendulum on a Cart and the Fuel Control System examples in the Simulink distribution. To the best of our knowledge, no previously published model checker can exhaustively verify hybrid systems of such a size and provide at any time an upper bound on the Omission Probability.
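    The anytime idea can be sketched as follows (a toy stand-in, not the paper's checker; `simulate` and the single-fault bound are our illustrative assumptions). All N scenarios are shuffled and simulated exactly once; after k error-free runs, if exactly one faulty scenario existed, the chance it is still unsimulated under a uniform random order would be 1 - k/N, which shrinks to 0 as the campaign completes.
    ```python
    import random

    def anytime_verify(scenarios, simulate):
        random.shuffle(scenarios)            # uniform random order, no repeats
        N = len(scenarios)
        for k, s in enumerate(scenarios, 1):
            if not simulate(s):              # simulate() returns False on error
                return f"error found in scenario {s} after {k} runs"
            if k % 1000 == 0 or k == N:
                # anytime report: bound for the single-fault toy model
                print(f"{k}/{N} simulated; single-fault omission bound: {1 - k/N:.4f}")
        return "all scenarios passed (omission probability 0)"

    # usage with a trivial stand-in model that never fails
    print(anytime_verify(list(range(10_000)), lambda s: s != -1))
    ```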

    MPC for MPC: Secure Computation on a Massively Parallel Computing Architecture

    Massively Parallel Computation (MPC) is a model of computation widely believed to best capture realistic parallel computing architectures such as large-scale MapReduce and Hadoop clusters. Motivated by the fact that many data analytics tasks performed on these platforms involve sensitive user data, we initiate the theoretical exploration of how to leverage MPC architectures to enable efficient, privacy-preserving computation over massive data. Clearly, if a computation task does not lend itself to an efficient implementation on MPC even without security, then we cannot hope to compute it efficiently on MPC with security. We show, on the other hand, that any task that can be efficiently computed on MPC can also be securely computed with comparable efficiency. Specifically, we show the following results:
    - any MPC algorithm can be compiled to a communication-oblivious counterpart while asymptotically preserving its round and space complexity, where communication-obliviousness ensures that any network intermediary observing the communication patterns learns no information about the secret inputs;
    - assuming the existence of Fully Homomorphic Encryption with a suitable notion of compactness and other standard cryptographic assumptions, any MPC algorithm can be compiled to a secure counterpart that defends against an adversary who controls not only intermediate network routers but additionally up to a 1/3 - ε fraction of machines (for an arbitrarily small constant ε); moreover, this compilation preserves the round complexity tightly, and preserves the space complexity up to a multiplicative blowup related to the security parameter.
    As an initial exploration of this important direction, our work suggests new definitions and proposes novel protocols that blend algorithmic and cryptographic techniques.
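    A minimal sketch of the communication-obliviousness notion (an illustrative toy, not the paper's compiler; MSG_LEN and the per-round cap are assumed constants): every machine sends a fixed number of fixed-length messages along a fixed schedule each round, padding with dummies, so the observable pattern is independent of the secret inputs.
    ```python
    MSG_LEN = 16  # fixed message size in bytes (illustrative constant)

    def oblivious_round(real_msgs, n_machines, cap=4):
        """real_msgs: dict dst -> list of byte strings this machine wants to send.
        Returns the wire schedule: exactly `cap` padded messages per destination,
        whatever the inputs are. (A real compiler would route overflow across
        rounds; this toy truncates anything beyond `cap` for brevity.)"""
        wire = {}
        for dst in range(n_machines):
            msgs = [m.ljust(MSG_LEN, b'\x00')[:MSG_LEN]
                    for m in real_msgs.get(dst, [])][:cap]
            msgs += [b'\x00' * MSG_LEN] * (cap - len(msgs))  # dummy padding
            wire[dst] = msgs            # identical shape every round
        return wire

    # Observer of the pattern sees 3 destinations x 4 messages x 16 bytes,
    # regardless of which (or how many) messages are real.
    pattern = oblivious_round({0: [b'secret'], 2: [b'hi']}, n_machines=3)
    print({dst: len(msgs) for dst, msgs in pattern.items()})
    ```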

    Finite automata with advice tapes

    We define a model of advised computation by finite automata where the advice is provided on a separate tape. We consider several variants of the model where the advice is deterministic or randomized, the input tape head is allowed real-time, one-way, or two-way access, and the automaton is classical or quantum. We prove several separation results among these variants, demonstrate an infinite hierarchy of language classes recognized by automata with increasing advice lengths, and establish the relationships between this and previously studied ways of providing advice to finite automata.
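    A minimal sketch of the advice-tape idea (the encoding below is our illustration, not a construction from the paper): the advice may depend only on the input length, yet reading the input and advice tapes in lockstep lets a constant-memory device recognise the nonregular language {a^k b^k}.
    ```python
    def advice(n):
        # One advice string per input length n: a^(n/2) b^(n/2) for even n,
        # and an all-'#' reject marker for odd n.
        return 'a' * (n // 2) + 'b' * (n // 2) if n % 2 == 0 else '#' * n

    def accepts(w):
        # Real-time scan with constant memory: compare each input symbol
        # with the advice symbol under the second head; accept iff all match.
        return all(x == y != '#' for x, y in zip(w, advice(len(w))))

    print(accepts('aabb'), accepts('abab'), accepts('aab'))  # True False False
    ```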

    Computation in generalised probabilistic theories

    From the existence of an efficient quantum algorithm for factoring, it is likely that quantum computation is intrinsically more powerful than classical computation. At present, the best known upper bound on the power of quantum computation is that BQP is contained in AWPP. This work investigates limits on computational power that are imposed by physical principles. To this end, we define a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and ask: what is the minimal set of physical assumptions under which the above inclusion still holds? We show that given only an assumption of tomographic locality (roughly, that multipartite states can be characterised by local measurements), efficient computations are contained in AWPP. This inclusion still holds even without assuming a basic notion of causality (where the notion is, roughly, that probabilities for outcomes cannot depend on future measurement choices). Following Aaronson, we extend the computational model by allowing post-selection on measurement outcomes. Aaronson showed that the corresponding quantum complexity class is equal to PP. Given only the assumption of tomographic locality, the inclusion in PP still holds for post-selected computation in general theories. Thus, in a world with post-selection, quantum theory is optimal for computation in the space of all general theories. We then consider whether relativised complexity results can be obtained for general theories. It is not clear how to define a sensible notion of an oracle in the general framework that reduces to the standard notion in the quantum case. Nevertheless, it is possible to define computation relative to a 'classical oracle', and we show there exists a classical oracle relative to which efficient computation in any theory satisfying the causality assumption and tomographic locality does not include NP.
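    The role of post-selection can be illustrated with a classical Monte Carlo toy (ours, not the paper's circuit model for general theories): keeping only the runs in which a designated measurement outcome occurred makes an unconditionally unlikely event certain in the conditional distribution, which is the mechanism behind the power of post-selected computation.
    ```python
    import random

    def run_once():
        x = [random.randint(0, 1) for _ in range(10)]  # random 10-bit "guess"
        success = all(x)          # designated outcome: the all-ones string
        return x, success

    kept = []
    while len(kept) < 5:          # post-select: discard runs lacking the outcome
        x, ok = run_once()
        if ok:
            kept.append(x)
    # Every surviving run is the all-ones string, even though each
    # unconditioned run produces it with probability only 1/1024.
    print(kept)
    ```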