40 research outputs found

    Solving Problems in a Distributed Way in Membrane Computing: dP Systems

    Full text link

    Session Coalgebras: A Coalgebraic View on Regular and Context-Free Session Types

    Get PDF
    Compositional methods are central to the verification of software systems. For concurrent and communicating systems, compositional techniques based on behavioural type systems have received much attention. By abstracting communication protocols as types, these type systems can statically check that channels in a program interact following a certain protocol, for instance that messages are exchanged in the intended order. In this article, we put on our coalgebraic spectacles to investigate session types, a widely studied class of behavioural type systems. We provide a syntax-free description of session-based concurrency as states of coalgebras. As a result, we rediscover type equivalence, duality, and subtyping relations in terms of canonical coinductive presentations. In turn, this coinductive presentation enables us to derive a decidable type system with subtyping for the π-calculus, in which the states of a coalgebra serve as channel protocols. Going full circle, we exhibit a coalgebra structure on an existing session type system, and show that the relations and type system resulting from our coalgebraic perspective coincide with existing ones. We further apply to session coalgebras the coalgebraic approach to regular languages via the so-called rational fixed point, inspired by the trinity of automata, regular languages, and regular expressions, mirrored here by session coalgebras, the rational fixed point, and session types, respectively. We establish a suitable restriction on session coalgebras that yields a similar trinity and reveals the mismatch between the usual session types and our syntax-free coalgebraic approach. Furthermore, we extend our coalgebraic approach to account for context-free session types, by equipping session coalgebras with a stack
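    Duality, one of the relations the abstract says is recovered coinductively, is easy to state concretely: the dual of a protocol swaps sends and receives so that the two endpoints of a channel fit together. Below is a minimal sketch over the finite fragment of session types; the names Send, Recv, End and dual are illustrative, not the paper's notation.

```python
# Minimal sketch of session type duality (illustrative names, not the
# paper's formalism); only the finite, branch-free fragment is covered.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class End:
    """Terminated protocol."""

@dataclass(frozen=True)
class Send:
    payload: str          # type of the message being sent
    cont: "SessionType"   # protocol after sending

@dataclass(frozen=True)
class Recv:
    payload: str          # type of the message being received
    cont: "SessionType"   # protocol after receiving

SessionType = Union[End, Send, Recv]

def dual(t: SessionType) -> SessionType:
    """The dual protocol: each send becomes a receive and vice versa.
    On recursive (infinite) types this definition is coinductive; on the
    finite fragment plain recursion suffices."""
    if isinstance(t, End):
        return End()
    if isinstance(t, Send):
        return Recv(t.payload, dual(t.cont))
    return Send(t.payload, dual(t.cont))

# A client that sends an int, then receives a bool, then stops:
client = Send("int", Recv("bool", End()))
server = dual(client)            # Recv("int", Send("bool", End()))
assert dual(server) == client    # duality is an involution
```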

    Session Coalgebras: A Coalgebraic View on Session Types and Communication Protocols

    Get PDF
    Compositional methods are central to the development and verification of software systems. They allow breaking down large systems into smaller components, while enabling reasoning about the behaviour of the composed system. For concurrent and communicating systems, compositional techniques based on behavioural type systems have received much attention. By abstracting communication protocols as types, these type systems can statically check that programs interact with channels according to a certain protocol, for instance that the intended messages are exchanged in the intended order. In this paper, we put on our coalgebraic spectacles to investigate session types, a widely studied class of behavioural type systems. We provide a syntax-free description of session-based concurrency as states of coalgebras. As a result, we rediscover type equivalence, duality, and subtyping relations in terms of canonical coinductive presentations. In turn, this coinductive presentation makes it possible to elegantly derive a decidable type system with subtyping for π-calculus processes, in which the states of a coalgebra serve as channel protocols. Going full circle, we exhibit a coalgebra structure on an existing session type system, and show that the relations and type system resulting from our coalgebraic perspective agree with the existing ones.
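    The "canonical coinductive presentation" of type equivalence can be made concrete for recursive types: viewing each type as a state of an automaton, two types are equivalent iff some bisimulation relates their states, and the check assumes a pair equal until refuted. A hedged sketch for linear (branch-free) session types, with an invented state-table encoding:

```python
# Sketch: type equivalence as bisimulation on states of an automaton.
# Each state maps to ("end",) or (op, payload, next_state); the encoding
# is invented for illustration, not the paper's coalgebras.

def equivalent(t1, s1, t2, s2, seen=None) -> bool:
    """Coinductive check: assume (s1, s2) equivalent until refuted.
    t1/t2 map state names to their single observable step."""
    if seen is None:
        seen = set()
    if (s1, s2) in seen:        # already assumed equal: coinduction step
        return True
    seen.add((s1, s2))
    a, b = t1[s1], t2[s2]
    if a[0] != b[0]:            # different operations: not equivalent
        return False
    if a[0] == "end":
        return True
    # same operation: payloads must match and continuations be equivalent
    return a[1] == b[1] and equivalent(t1, a[2], t2, b[2], seen)

# mu X. !int. X  (send an int forever), written two different ways:
t_loop  = {"X": ("send", "int", "X")}
t_unrol = {"Y": ("send", "int", "Z"), "Z": ("send", "int", "Y")}
assert equivalent(t_loop, "X", t_unrol, "Y")   # equal up to unrolling
```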

    WTTM 2012, The Fourth Workshop on the Theory of Transactional Memory

    Get PDF
    Abstract: In conjunction with PODC 2012, the TransForm project (Marie Curie Initial Training Network) and EuroTM (COST Action IC1001) supported the 4th edition of the Workshop on the Theory of Transactional Memory (WTTM 2012). The objective of WTTM was to discuss new theoretical challenges and recent achievements in the area of transactional computing. The workshop took place on July 19, 2012, in Madeira, Portugal. This year's WTTM was a milestone event, not least because the same year the two seminal articles on hardware and software transactional memory were awarded the Edsger W. Dijkstra Prize in Distributed Computing.

    Transactional memory is a concurrency control mechanism for synchronizing concurrent accesses to shared memory by different threads. It has been proposed as an alternative to lock-based synchronization to simplify concurrent programming while exhibiting good performance. The sequential code is encapsulated in transactions: sequences of accesses to shared or local variables that should be executed atomically by a single thread. A transaction ends either by committing, in which case all of its updates take effect, or by aborting, in which case all of its updates are discarded and never become visible to other transactions.

    Consistency criteria

    Since the introduction of the transactional memory paradigm, several consistency criteria have been proposed to capture its correct behavior. Some have been inherited from the database field (e.g., serializability, strict serializability); others extend these to take aborted transactions into account (e.g., opacity, virtual world consistency); still others define the correct behavior when transactions must be synchronized with non-transactional code (e.g., strong atomicity). Among all these criteria, opacity, originally proposed by Guerraoui and Kapalka, has received particular attention (the sketch below illustrates why even transactions that will abort must be constrained).

    Victor Luchangco presented his joint work with Mohsen Lesani and Mark Moir, provocatively titled "Putting opacity in its place", in which he presented the TMS1 and TMS2 consistency conditions and clarified their relationship with the prefix-closed definition of opacity. Broadly, these conditions ensure that no transaction observes the partial effects of any other transaction in any execution of the STM, without forcing a total order on the transactions that participate in the execution. In particular, TMS1 is defined for any object with a well-defined sequential specification, while TMS2 is specific to read/write registers. They further formalized these conditions using IO Automata.

    While opacity defines the correctness of transactional memories when shared variables are accessed only inside transactions, understanding the interaction between transactions and locks, or between accesses to shared variables inside and outside transactions, has been a major question. This is motivated by the fact that code written for a transactional system may need to interact with legacy code where locks are used for synchronization, and it has been shown that replacing a lock with a transaction does not always preserve behavior. Srivatsan Ravi presented his joint work with Vincent Gramoli and Petr Kuznetsov on locally serializable linearizability (ls-linearizability), a consistency criterion that applies to both transaction-based and lock-based programs, thus allowing one to compare the amount of concurrency of concurrent programs.
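    To make the need for opacity concrete: criteria like serializability only constrain committed transactions, so a transaction that is doomed to abort may still observe an inconsistent snapshot and misbehave before the abort happens. A minimal invented illustration (not code from any of the talks):

```python
# Why aborted transactions matter: a "doomed" transaction that observes an
# inconsistent snapshot may crash before the TM gets a chance to abort it.
# Invented example: every committed transaction preserves the invariant x != y.

def doomed_reader(snap_x, snap_y):
    # Safe under opacity, which guarantees that even transactions that will
    # abort only ever see consistent snapshots (where x != y holds).
    return 1 // (snap_x - snap_y)

# Committed state moves atomically from (x, y) = (2, 4) to (4, 6). A
# non-opaque TM may let a reader observe the new x but the old y, a mixed
# snapshot that no serialization of committed transactions ever produces:
try:
    doomed_reader(snap_x=4, snap_y=4)
except ZeroDivisionError:
    print("doomed transaction crashed on an inconsistent snapshot")
```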
    Stephan Diestelhorst presented his preliminary work with Martin Pohlack, "Safely Accessing Timestamps in Transactions". He presented scenarios in which access to the CPU timestamp counter inside transactions can lead to unexpected behaviors. In particular, this counter is not a transactional variable, so reading it inside a transaction can violate single-lock-atomicity semantics: multiple accesses to the counter within transactions may yield a different result than the same accesses within a critical section. The talk also sketched a solution that prevents several transactions from accessing the timestamp concurrently.

    Annette Bienusa presented the definition of snapshot traces, which simplify reasoning about the correctness of TMs that ensure snapshot isolation. This is joint work with Peter Thiemann.

    Finally, Faith Ellen noted that, despite the rich set of consistency criteria that has been proposed, there is still no agreement on how the semantics of a transactional memory should be defined, operationally or otherwise.

    Data structures for transactional computing

    Eliot Moss' talk explored how the availability of transactions as a programming construct might impact the design of data types. He gave multiple examples of how to design data types with transactional use in mind. In a similar direction, Maurice Herlihy considered data types that support high-level methods and their inverses: for example, a set supports a method add(x) to insert an element and a method remove(x) to delete one. For such data types, he discussed the applicability of a technique called transactional boosting, which provides a modular way to make highly concurrent thread-safe data structures transactional. He suggested distinguishing transaction-level synchronization from thread-level synchronization. In particular, to synchronize accesses to a linearizable object, non-commutative method calls have to be executed serially. Two method calls commute if they can be applied in either order without changing the final state of the object: add(x) and remove(x) do not commute, while add(x) and add(y) do. Since methods have inverses, recovery can be done at the granularity of methods. This technique exploits the object's semantics to synchronize concurrent accesses, and is expected to be more efficient than STM implementations that guarantee consistency by detecting read/write conflicts.
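    The boosting recipe just described can be sketched in a few lines: acquire an abstract lock per element so that non-commuting calls on the same element serialize, and log each method's inverse so an abort can roll back at method granularity. A hedged sketch with invented names, eliding deadlock handling and timeouts:

```python
import threading

class Txn:
    """Transaction context: abstract locks held plus inverse-operation log.
    Invented API for illustration, not code from the talk."""
    def __init__(self):
        self.undo_log = []   # inverses of applied methods, newest last
        self.held = []       # abstract locks, released at commit/abort
    def commit(self):
        for l in self.held: l.release()
    def abort(self):
        for inv in reversed(self.undo_log):   # recovery per method
            inv()
        for l in self.held: l.release()

class BoostedSet:
    def __init__(self):
        self.elems = set()
        self.locks = {}                 # element -> abstract lock
        self.meta = threading.Lock()    # protects the lock registry
    def _lock_for(self, x, txn):
        with self.meta:
            l = self.locks.setdefault(x, threading.Lock())
        if l not in txn.held:           # simple, non-reentrant scheme
            l.acquire()
            txn.held.append(l)
    def add(self, x, txn):
        self._lock_for(x, txn)          # serializes add(x) vs remove(x)
        if x not in self.elems:
            self.elems.add(x)
            txn.undo_log.append(lambda: self.elems.discard(x))
    def remove(self, x, txn):
        self._lock_for(x, txn)
        if x in self.elems:
            self.elems.discard(x)
            txn.undo_log.append(lambda: self.elems.add(x))

t, s = Txn(), BoostedSet()
s.add(1, t); s.remove(1, t)
t.abort()                               # inverses replayed in reverse order
assert 1 not in s.elems
```

    Note that add(x) and add(y) for distinct x and y take different abstract locks and therefore run concurrently, exactly the commutativity-based synchronization described above.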
    Performance

    Improving the efficiency of TMs has been a key problem in recent years: for transactional memory to be accepted as a candidate to replace locks, it needs to show comparable performance. In her talk, Faith Ellen summarized the theoretical results on the efficiency of TMs, where efficiency has been considered along several axes, one being properties that state under which circumstances aborts have to be avoided (permissiveness).

    Mykhailo Laremko discussed how to apply known techniques (e.g., combining) to boost the performance of existing STM systems that have a central point of synchronization. In particular, in joint work with Panagiota Fatourou, Eleftherios Kosmas and Giorgos E. Papadakis, they augment the NOrec transactional memory by combining, and by replacing the single global lock with a set of locks. They provide preliminary simulation results comparing NOrec with its augmented versions, showing that the latter perform better.

    Nuno Diegues and João Cachopo [6] study how to extend a transactional memory to support nested transactions efficiently. The difficulty is to take into account the constraints imposed by the baseline algorithm. To investigate different directions in the design space (lazy versus eager conflict detection, multi-version versus single-version, etc.), they consider the following transactional memories: JVSTM [9], NesTM [3] and PNSTM. Diegues showed that PNSTM's throughput is not affected by parallel nesting, whereas the throughput of JVSTM and NesTM is; NesTM in particular shows the greatest performance degradation with respect to nesting depth.

    Legacy code and hardware transactional memory

    The keynote by Nir Shavit covered his joint work with Yehuda Afek and Alex Matveev on "Pessimistic transactional lock-elision (PLE)".

    Distributed transactional memory

    Distributed transactional memory is the implementation of the transactional memory paradigm in a networked environment where processes communicate by exchanging messages. Unlike transactional memory for multicore machines, the networked setting has to take non-negligible communication delays into account. To support local accesses to shared objects, distributed transactional memories usually rely on replication.

    Pawel T. Wojciechowski presented his joint work with Jan Konczak. They consider the problem of recovering the state of shared data after node crashes, which requires writing data to stable storage. Their goal was to minimize writes to stable storage, which are slow, or to do them in parallel with the execution of transactions. He presented a crash-recovery model for distributed transactional memory based on deferred update replication relying on atomic broadcast. Their model takes into account the trade-off between performance and fault tolerance.

    Sebastiano Peluso claimed that efficient replication schemes for distributed transactional memory have to follow three design principles: partial replication, genuineness to ensure scalability, and support for wait-free read-only transactions. According to genuineness, the only nodes involved in the execution of a transaction are those that maintain a replica of an object accessed by the transaction. He argued that genuineness is a fundamental property for the scalability of distributed transactional memory. This is joint work with Paolo Romano and Francesco Quaglia.

    Conclusion

    While transactional memory has become a practical technology, integrated in the hardware of the IBM BlueGene/Q supercomputer and the upcoming Intel Haswell processors, the theory of transactional memory still lacks good models of computation, good complexity measures, agreement on the right definitions, identification of fundamental and useful algorithmic questions, innovative algorithm designs, and lower bounds on problems. Upcoming challenges will likely include the design of new transactional algorithms that exploit the low overhead of low-level instructions on the one hand, and the concurrency of high-level data types on the other. For further information, the abstracts and slides of the talks can be found at http://sydney.edu.au/engineering/it/~gramoli/events/wttm4.

    Acknowledgements

    We are grateful to the speakers and to the program committee members of WTTM 2012 for their help in reviewing this year's submissions, and to Panagiota Fatourou for her help in the organization of the event.
    We would like to thank Srivatsan Ravi and Mykhailo Laremko for sharing their notes on the talks of the workshop.

    Session coalgebras: A coalgebraic view on session types and communication protocols

    Get PDF
    Compositional methods are central to the development and verification of software systems. They allow breaking down large systems into smaller components, while enabling reasoning about the behaviour of the composed system. For concurrent and communicating systems, compositional techniques based on behavioural type systems have received much attention. By abstracting communication protocols as types, these type systems can statically check that programs interact with channels according to a certain protocol, for instance that the intended messages are exchanged in the intended order. In this paper, we put on our coalgebraic spectacles to investigate session types, a widely studied class of behavioural type systems. We provide a syntax-free description of session-based concurrency as states of coalgebras. As a result, we rediscover type equivalence, duality, and subtyping rela

    On Distributed Solution to SAT by Membrane Computing

    Get PDF
    Tissue P systems with evolutional communication rules and cell division ($TP_{ec}$, for short) are a class of bio-inspired parallel computational models, which can solve NP-complete problems in a feasible time. In this work, a variant of $TP_{ec}$, called $k$-distributed tissue P systems with evolutional communication and cell division ($k\text{-}\Delta_{TP_{ec}}$, for short), is proposed. A uniform solution to the SAT problem by $k\text{-}\Delta_{TP_{ec}}$ under balanced fixed-partition is presented. The solution provides not only the precise satisfying truth assignments for all Boolean formulas, but also the precise number of such satisfying truth assignments. It is shown that the communication resources for one-way and two-way uniform $k$-P protocols increase with respect to $k$, while a single communication is shown to be possible for bi-directional uniform $k$-P protocols for any $k$. We further show that if the number of clauses is at least the square of the number of variables of the given Boolean formula, then $k\text{-}\Delta_{TP_{ec}}$ systems solve the SAT problem more efficiently than $TP_{ec}$, as shown in \cite{bosheng2017}; if the number of clauses equals the number of variables, then $k\text{-}\Delta_{TP_{ec}}$ systems work not much faster than $TP_{ec}$
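    Stripped of the P-system machinery, the balanced fixed-partition idea is that the $2^n$ truth assignments are split evenly across $k$ distributed components, each of which reports the satisfying assignments it finds, so the union yields both the assignments and their exact count. A brute-force Python sketch under that reading (illustrative only; it is not a simulation of $k\text{-}\Delta_{TP_{ec}}$, and the formula, variable count, and partition rule are invented):

```python
from multiprocessing import Pool

# CNF formula as a list of clauses; literal +i / -i means x_i / not x_i.
# Brute-force stand-in for illustration, not a tissue P system.
FORMULA = [[1, 2], [-1, 3], [-2, -3]]
N_VARS = 3
K = 2   # number of distributed components sharing the assignment space

def satisfies(assignment, formula):
    """assignment maps variable index (1-based) to a bool."""
    return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in formula)

def worker(part):
    """Scan a fixed, balanced share of the 2^n assignments (every K-th)."""
    hits = []
    for m in range(2 ** N_VARS):
        if m % K != part:               # balanced fixed-partition (sketch)
            continue
        a = {i + 1: bool((m >> i) & 1) for i in range(N_VARS)}
        if satisfies(a, FORMULA):
            hits.append(a)
    return hits

if __name__ == "__main__":
    with Pool(K) as p:
        results = [a for hits in p.map(worker, range(K)) for a in hits]
    # Union over components gives the assignments *and* their exact count.
    print(len(results), "satisfying assignments:", results)
```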

    Near-linear convergence of the Random Osborne algorithm for Matrix Balancing

    Full text link
    We revisit Matrix Balancing, a pre-conditioning task used ubiquitously for computing eigenvalues and matrix exponentials. Since 1960, Osborne's algorithm has been the practitioners' algorithm of choice and is now implemented in most numerical software packages. However, its theoretical properties are not well understood. Here, we show that a simple random variant of Osborne's algorithm converges in near-linear time in the input sparsity. Specifically, it balances $K \in \mathbb{R}_{\geq 0}^{n \times n}$ after $O(m\epsilon^{-2}\log\kappa)$ arithmetic operations, where $m$ is the number of nonzeros in $K$, $\epsilon$ is the $\ell_1$ accuracy, and $\kappa = \sum_{ij} K_{ij} / \min_{ij: K_{ij} \neq 0} K_{ij}$ measures the conditioning of $K$. Previous work had established near-linear runtimes either only for $\ell_2$ accuracy (a weaker criterion which is less relevant for applications), or through an entirely different algorithm based on (currently) impractical Laplacian solvers. We further show that if the graph with adjacency matrix $K$ is moderately connected (e.g., if $K$ has at least one positive row/column pair), then Osborne's algorithm initially converges exponentially fast, yielding an improved runtime $O(m\epsilon^{-1}\log\kappa)$. We also address numerical precision by showing that these runtime bounds still hold when using $O(\log(n\kappa/\epsilon))$-bit numbers. Our results are established through an intuitive potential argument that leverages a convex optimization perspective on Osborne's algorithm, and relates the per-iteration progress to the current imbalance as measured in Hellinger distance. Unlike previous analyses, we critically exploit log-convexity of the potential. Our analysis extends to other variants of Osborne's algorithm: along the way, we establish significantly improved runtime bounds for cyclic, greedy, and parallelized variants.
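    The core update of Osborne's algorithm is a single line: choose a coordinate $i$ and rescale $d_i$ by $\sqrt{c_i/r_i}$, which equalizes the $i$-th row and column sums of $D K D^{-1}$. A hedged numpy sketch of the random variant (dense, recomputing all sums each step for clarity, so it does not achieve the near-linear sparsity-dependent runtime of the paper; the diagonal is excluded since it is invariant under balancing):

```python
import numpy as np

def random_osborne(K, eps=1e-3, seed=0):
    """Sketch of a random-coordinate Osborne iteration: returns d such that
    D K D^{-1} (D = diag(d)) is approximately balanced in the l1 sense.
    Dense O(n^2)-per-step version for clarity; the near-linear variant
    maintains row/column sums incrementally."""
    rng = np.random.default_rng(seed)
    A = np.array(K, dtype=float)
    np.fill_diagonal(A, 0.0)            # diagonal is invariant under balancing
    n = A.shape[0]
    d = np.ones(n)
    while True:
        B = A * d[:, None] / d[None, :]             # current D A D^{-1}
        r, c = B.sum(axis=1), B.sum(axis=0)         # row and column sums
        if np.abs(r - c).sum() <= eps * B.sum():    # relative l1 criterion
            return d
        i = rng.integers(n)                         # random coordinate
        if r[i] > 0 and c[i] > 0:
            d[i] *= np.sqrt(c[i] / r[i])            # equalize row/col i

K = np.array([[0., 8., 0.],
              [1., 0., 2.],
              [4., 1., 0.]])
d = random_osborne(K)
B = K * d[:, None] / d[None, :]
print(np.abs(B.sum(1) - B.sum(0)).sum() / B.sum())  # small l1 imbalance
```

    The update is correct because scaling $d_i$ by $s$ multiplies row $i$ of $B$ by $s$ and column $i$ by $1/s$, so $s = \sqrt{c_i/r_i}$ sets both to $\sqrt{r_i c_i}$.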

    Time-space complexity of quantum search algorithms in symmetric cryptanalysis: applying to AES and SHA-2

    Get PDF
    The performance of cryptanalytic quantum search algorithms is usually inferred from query complexity, which hides the overhead induced by an implementation. To enable quantitative complexity analysis free of such hidden factors, we provide a framework for estimating time-space complexity that carefully accounts for the characteristics of the target cryptographic functions. Processor and circuit parallelization methods are taken into account, resulting in time-space trade-off curves in terms of circuit depth and qubit count. The method shows how to rank different circuit designs in order of their efficiency. The framework is applied to the representative cryptosystems that NIST refers to as a guideline for security parameters, reassessing the security strengths of AES and SHA-2.