
    Parallel Recursive State Compression for Free

    This paper focuses on reducing memory usage in enumerative model checking, while maintaining the multi-core scalability obtained in earlier work. We present a tree-based multi-core compression method, which works by leveraging sharing among sub-vectors of state vectors. An algorithmic analysis of both worst-case and optimal compression ratios shows the potential to compress even large states to a small constant on average (8 bytes). Our experiments demonstrate that this holds up in practice: the median compression ratio of 279 measured experiments is within 17% of the optimum for tree compression, and five times better than the median compression ratio of SPIN's COLLAPSE compression. Our algorithms are implemented in the LTSmin tool, and our experiments show that for model checking, multi-core tree compression pays its own way: it comes virtually without overhead compared to the fastest hash table-based methods.
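
    The sharing among sub-vectors can be pictured with a minimal sketch (an illustration, not LTSmin's actual implementation): state vectors are folded into a binary tree whose nodes are interned in a table, so identical sub-vectors are stored only once and a whole state is referenced by a single small index.

```python
# Minimal sketch of tree-based state compression (illustrative, not LTSmin's code):
# a state vector is split recursively into halves, every (left, right) pair is
# interned in a table, so shared sub-vectors are stored only once and a whole
# state collapses to one small integer index.

class TreeTable:
    def __init__(self):
        self._index = {}   # (left, right) pair or leaf value -> node id
        self._nodes = []   # node id -> (left, right) pair or ('leaf', value)

    def _intern(self, key):
        nid = self._index.get(key)
        if nid is None:
            nid = len(self._nodes)
            self._index[key] = nid
            self._nodes.append(key)
        return nid

    def compress(self, vector):
        """Fold a state vector (tuple of ints) into a single node id."""
        if len(vector) == 1:
            return self._intern(('leaf', vector[0]))
        mid = len(vector) // 2
        return self._intern((self.compress(vector[:mid]), self.compress(vector[mid:])))

    def decompress(self, nid):
        """Recover the original state vector from a node id."""
        node = self._nodes[nid]
        if node[0] == 'leaf':
            return (node[1],)
        return self.decompress(node[0]) + self.decompress(node[1])


if __name__ == "__main__":
    table = TreeTable()
    a = table.compress((1, 2, 3, 4, 5, 6, 7, 8))
    b = table.compress((1, 2, 3, 4, 5, 6, 7, 9))   # shares most sub-vectors with a
    assert table.decompress(a) == (1, 2, 3, 4, 5, 6, 7, 8)
    print("node ids:", a, b, "table entries:", len(table._nodes))
```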

    Hull Consistency Under Monotonicity

    We prove that hull consistency for a system of equations or inequalities can be achieved in polynomial time, provided that the underlying functions are monotone with respect to each variable. This result holds even when variables have multiple occurrences in the expressions of the functions, which is usually a pitfall for interval-based contractors. For a given constraint, an optimal contractor can thus be enforced quickly under monotonicity, and the practical significance of this theoretical result is illustrated with a simple example.
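
    The practical effect of monotonicity can be pictured with a toy contractor (only an illustration of the principle, not the algorithm from the paper): for a constraint f(x, y) <= 0 with f increasing in both variables, the tight upper bound of x's domain is obtained by fixing y at its most favourable bound and running a dichotomic search on the endpoint.

```python
# Illustrative sketch (not the paper's contractor): tightening the domain of one
# variable for a constraint f(x, y) <= 0 where f is increasing in both x and y.
# Monotonicity lets us fix y at its lower bound (its most favourable value) and
# locate the largest feasible x with a simple dichotomic search on the endpoint.

def contract_upper_bound(f, x_lo, x_hi, y_lo, tol=1e-9):
    """Return the largest x in [x_lo, x_hi] with f(x, y_lo) <= 0, assuming f is
    increasing in x; this is the hull-tight upper bound of x's domain."""
    if f(x_hi, y_lo) <= 0:
        return x_hi                      # the whole domain is feasible
    if f(x_lo, y_lo) > 0:
        raise ValueError("constraint infeasible over the box")
    while x_hi - x_lo > tol:
        mid = 0.5 * (x_lo + x_hi)
        if f(mid, y_lo) <= 0:
            x_lo = mid                   # mid is feasible, push the bound up
        else:
            x_hi = mid
    return x_lo


if __name__ == "__main__":
    f = lambda x, y: x + y - 2.0         # increasing in x and in y
    # Box: x in [0, 5], y in [0.5, 3]; the tight upper bound for x is 1.5.
    print(contract_upper_bound(f, 0.0, 5.0, 0.5))
```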

    THE VLSI IMPLEMENTATION OF THE SIGMA ARCHITECTURE

    A description is given of a parallel computer architecture called SIGMA and its implementation as a full custom design in VLSI technology. The architecture is highly parallel, consisting of many simple processing elements heavily interconnected. The processing elements perform threshold computations on thousands of inputs. This architecture was inspired by research under the "neural network" banner and retains the highly interconnected nature of such systems. However, it differs from them in some key areas: the SIGMA architecture is digital, it provides greater functionality with respect to the type of threshold comparison done, the connection weights remain static for the duration of a problem, and its processing is deterministic. Communication between units uses single-bit values, which are heavily multiplexed to reduce the amount of physical interconnect and pinout.
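
    The threshold computation performed by each processing element can be modelled in software as follows (a purely illustrative model, not the VLSI design): single-bit inputs are scaled by static connection weights and the accumulated sum is compared against a threshold.

```python
# Illustrative software model of a SIGMA-style threshold computation (not the
# VLSI design itself): single-bit inputs are scaled by static connection
# weights and the accumulated sum is compared against a threshold.

def threshold_element(bits, weights, threshold):
    """Return 1 if the weighted sum of the single-bit inputs reaches the threshold."""
    assert len(bits) == len(weights)
    total = sum(b * w for b, w in zip(bits, weights))
    return 1 if total >= threshold else 0


if __name__ == "__main__":
    bits = [1, 0, 1, 1]                  # multiplexed single-bit inputs
    weights = [2, -1, 3, 1]              # static for the duration of a problem
    print(threshold_element(bits, weights, threshold=5))   # 2 + 3 + 1 = 6 -> 1
```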

    AP and MN-centric Mobility Prediction: A Comparative Study Based On Wireless Traces

    The mobility prediction problem is defined as predicting a mobile node's next access point as it moves through a wireless network. Those predictions help take proactive measures in order to guarantee a given quality of service. Prediction agents can be divided into two main categories: agents attached to a specific terminal (responsible for anticipating its own movements) and those attached to an access point (which predict the next access point of all the mobiles connected through it). This paper aims at comparing these two schemes using real traces of a large WiFi network. Several observations are made, such as the difficulty of obtaining a reliable trace of the mobiles' motion, the unexpectedly small difference between the two methods in terms of accuracy, and the inadequacy of commonly accepted hypotheses (such as different motion behaviour at the weekend compared with the rest of the week).
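
    Both kinds of agent can be prototyped around the same simple predictor; the sketch below is a hedged baseline (an order-1 Markov predictor over observed handovers, not the paper's exact models).

```python
# Minimal order-1 Markov next-AP predictor (an illustrative baseline, not the
# paper's exact models). An MN-centric agent keeps one such predictor per
# mobile node; an AP-centric agent keeps one per access point, fed with the
# handovers of every mobile it serves.

from collections import Counter, defaultdict

class NextApPredictor:
    def __init__(self):
        self._transitions = defaultdict(Counter)   # current AP -> counts of next APs

    def observe(self, current_ap, next_ap):
        """Record one observed handover."""
        self._transitions[current_ap][next_ap] += 1

    def predict(self, current_ap):
        """Return the most frequently observed next AP, or None if unseen."""
        counts = self._transitions.get(current_ap)
        if not counts:
            return None
        return counts.most_common(1)[0][0]


if __name__ == "__main__":
    p = NextApPredictor()
    for cur, nxt in [("ap1", "ap2"), ("ap1", "ap2"), ("ap1", "ap3"), ("ap2", "ap1")]:
        p.observe(cur, nxt)
    print(p.predict("ap1"))   # -> ap2
```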

    Differentiating code from data in x86 binaries

    Robust, static disassembly is an important part of achieving high coverage for many binary code analyses, such as reverse engineering, malware analysis, reference monitor in-lining, and software fault isolation. However, one of the major difficulties current disassemblers face is differentiating code from data when they are interleaved. This paper presents a machine learning-based disassembly algorithm that segments an x86 binary into subsequences of bytes and then classifies each subsequence as code or data. The algorithm builds a language model from a set of pre-tagged binaries using a statistical data compression technique. It sequentially scans a new binary executable and sets a breaking point at each potential code-to-code and code-to-data/data-to-code transition. The classification of each segment as code or data is based on the minimum cross-entropy. Experimental results are presented to demonstrate the effectiveness of the algorithm.
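
    The minimum cross-entropy decision can be sketched as follows, with byte-bigram models standing in for the statistical compression models used in the paper: a segment is labelled with whichever pre-trained model assigns it the lower cross-entropy.

```python
# Hedged sketch of the minimum cross-entropy decision: byte-bigram models stand
# in for the statistical compression models used in the paper. A segment is
# labelled with whichever model, trained on pre-tagged bytes, assigns it the
# lower cross-entropy.

import math
from collections import Counter

class ByteBigramModel:
    def __init__(self, alphabet_size=256):
        self._pairs = Counter()      # (previous byte, current byte) counts
        self._context = Counter()    # previous-byte counts
        self._alphabet = alphabet_size

    def train(self, data: bytes):
        for prev, cur in zip(data, data[1:]):
            self._pairs[(prev, cur)] += 1
            self._context[prev] += 1

    def cross_entropy(self, data: bytes) -> float:
        """Average -log2 probability per byte, with Laplace smoothing."""
        if len(data) < 2:
            return float("inf")
        total = 0.0
        for prev, cur in zip(data, data[1:]):
            p = (self._pairs[(prev, cur)] + 1) / (self._context[prev] + self._alphabet)
            total -= math.log2(p)
        return total / (len(data) - 1)


def classify(segment: bytes, code_model, data_model) -> str:
    """Label a byte segment by the model giving the lower cross-entropy."""
    return "code" if code_model.cross_entropy(segment) <= data_model.cross_entropy(segment) else "data"


if __name__ == "__main__":
    code_model, data_model = ByteBigramModel(), ByteBigramModel()
    code_model.train(bytes([0x55, 0x48, 0x89, 0xE5] * 64))     # toy "code" bytes
    data_model.train(b"hello world, plain ascii data " * 16)   # toy "data" bytes
    print(classify(bytes([0x55, 0x48, 0x89, 0xE5] * 8), code_model, data_model))
```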

    Global Hull Consistency with Local Search for Continuous Constraint Solving

    This paper addresses constraint solving over continuous domains in the context of decision making, and discusses the trade-off between the precision with which the solution space is delimited and the computational effort required. As an alternative to local consistency, which is usually maintained when handling continuous constraints, we discuss maintaining global hull consistency. Experimental results show that this may be an appropriate choice, achieving acceptable precision at relatively low computational cost. The approach relies on efficient algorithms, and the best results are obtained by integrating a local search procedure within interval constraint propagation.
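
    What global hull consistency computes can be illustrated with a minimal branch-and-prune sketch (an illustration only, not the paper's algorithms or their local-search integration): boxes are bisected, boxes whose interval evaluation rules out any solution are discarded, and the componentwise hull of the surviving boxes approximates the globally hull-consistent domains.

```python
# Minimal branch-and-prune sketch of what global hull consistency computes (an
# illustration, not the paper's algorithms or its local-search integration):
# bisect boxes, discard those whose interval evaluation proves x^2 + y^2 = 1
# cannot hold, and take the componentwise hull of the surviving boxes.

def sq_interval(lo, hi):
    """Interval extension of x**2 over [lo, hi]."""
    hi_sq = max(lo * lo, hi * hi)
    lo_sq = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return lo_sq, hi_sq

def may_satisfy(box):
    """True if the interval evaluation of x^2 + y^2 - 1 contains zero."""
    (xl, xh), (yl, yh) = box
    sx_lo, sx_hi = sq_interval(xl, xh)
    sy_lo, sy_hi = sq_interval(yl, yh)
    return sx_lo + sy_lo - 1.0 <= 0.0 <= sx_hi + sy_hi - 1.0

def hull_of_solutions(box, depth=12):
    """Hull of all sub-boxes (down to `depth` bisections) that may contain
    solutions; it tightens towards the globally hull-consistent domains."""
    surviving = []

    def explore(b, d):
        if not may_satisfy(b):
            return
        if d == 0:
            surviving.append(b)
            return
        (xl, xh), (yl, yh) = b
        if xh - xl >= yh - yl:                     # bisect the widest side
            xm = 0.5 * (xl + xh)
            explore(((xl, xm), (yl, yh)), d - 1)
            explore(((xm, xh), (yl, yh)), d - 1)
        else:
            ym = 0.5 * (yl + yh)
            explore(((xl, xh), (yl, ym)), d - 1)
            explore(((xl, xh), (ym, yh)), d - 1)

    explore(box, depth)
    xs = [x for (xi, _) in surviving for x in xi]
    ys = [y for (_, yi) in surviving for y in yi]
    return ((min(xs), max(xs)), (min(ys), max(ys)))


if __name__ == "__main__":
    # Starting box [-2, 2] x [-2, 2]; the result tightens towards [-1, 1] x [-1, 1].
    print(hull_of_solutions(((-2.0, 2.0), (-2.0, 2.0))))
```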

    JADES's IPC kernel for distributed simulation

    An implementation of virtual time using Jefferson's Time Warp mechanism is discussed. The context for the implementation is a multi-lingual distributed programming environment with an underlying message-passing system, called Jipc, which is accessible to Ada, C, Lisp, Prolog, and Simula programs. The intention is that distributed programs written using Jipc can be simulated using virtual time with only small changes to the original source code. The system is layered so that the different languages and different rollback mechanisms can be supported.
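
    Jefferson's rollback mechanism can be sketched for a single logical process as follows (illustrative only; not the Jipc-based kernel described above, and omitting anti-messages): events are executed optimistically, the state is checkpointed after each one, and a straggler arriving in the logical past rolls the process back before being re-executed in order.

```python
# Minimal single-process sketch of Time Warp rollback (illustrative only; not
# the Jipc-based kernel described above, and without anti-messages): events run
# optimistically, state is checkpointed after each one, and a straggler event
# arriving in the logical past rolls the process back before it is re-executed.

import copy

class TimeWarpProcess:
    def __init__(self, initial_state):
        self.state = initial_state
        self.lvt = 0                                    # local virtual time
        self.checkpoints = [(0, copy.deepcopy(initial_state))]
        self.processed = []                             # (timestamp, event) already run

    def _execute(self, timestamp, event):
        event(self.state)                               # apply the event to the state
        self.lvt = timestamp
        self.processed.append((timestamp, event))
        self.checkpoints.append((timestamp, copy.deepcopy(self.state)))

    def receive(self, timestamp, event):
        if timestamp >= self.lvt:
            self._execute(timestamp, event)
            return
        # Straggler: restore the last checkpoint strictly before its timestamp...
        while self.checkpoints[-1][0] >= timestamp:
            self.checkpoints.pop()
        self.lvt = self.checkpoints[-1][0]
        self.state = copy.deepcopy(self.checkpoints[-1][1])
        # ...then re-execute the straggler and the undone events in timestamp order.
        redo = sorted([(timestamp, event)] +
                      [e for e in self.processed if e[0] >= timestamp],
                      key=lambda e: e[0])
        self.processed = [e for e in self.processed if e[0] < timestamp]
        for ts, ev in redo:
            self._execute(ts, ev)


if __name__ == "__main__":
    p = TimeWarpProcess({"count": 0})
    bump = lambda s: s.update(count=s["count"] + 1)
    p.receive(10, bump)
    p.receive(20, bump)
    p.receive(5, bump)                                  # straggler forces a rollback
    print(p.lvt, p.state)                               # -> 20 {'count': 3}
```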