Productive Development of Scalable Network Functions with NFork
Despite decades of research, developing correct and scalable concurrent
programs is still challenging. Network functions (NFs) are not an exception.
This paper presents NFork, a system that helps NF domain experts to
productively develop concurrent NFs by abstracting away concurrency from
developers. The key idea behind NFork's design is to exploit NF
characteristics to overcome the limitations of prior work on concurrent
programming. Developers write NFs as sequential programs, and at runtime
NFork performs transparent parallelization by processing packets on different
cores. Exploiting NF characteristics, NFork leverages transactional memory and
develops efficient concurrent data structures to achieve scalability and
guarantee the absence of concurrency bugs.
Since NFork manages concurrency, it further provides (i) a profiler that
reveals the root causes of scalability bottlenecks inherent to the NF's
semantics and (ii) actionable recipes for developers to mitigate these root
causes by relaxing the NF's semantics. We show that NFs developed with NFork
achieve competitive scalability with those in Cisco VPP [16], and NFork's
profiler and recipes can effectively aid developers in optimizing NF
scalability.
Comment: 16 pages, 8 figures
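The run-to-completion style the abstract describes can be sketched as flow-affinity dispatch: a dispatcher hashes each packet's flow ID to a fixed worker, so the developer's per-flow state is only ever touched by one goroutine and the NF body stays sequential. This is a minimal illustration of the general technique, not NFork's actual API; `Packet`, `process`, and the counter NF are hypothetical.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// Packet is a stand-in for a parsed network packet; FlowID identifies its flow.
type Packet struct {
	FlowID string
}

// process dispatches packets to per-core workers by hashing the flow ID, so
// every packet of a flow reaches the same worker and the per-flow counter
// update below needs no locking.
func process(packets []Packet, nCores int) map[string]int {
	queues := make([]chan Packet, nCores)
	perCore := make([]map[string]int, nCores)
	var wg sync.WaitGroup
	for i := range queues {
		queues[i] = make(chan Packet, len(packets))
		perCore[i] = map[string]int{}
		wg.Add(1)
		go func(in <-chan Packet, counts map[string]int) {
			defer wg.Done()
			for p := range in { // the "sequential NF": plain single-threaded code
				counts[p.FlowID]++
			}
		}(queues[i], perCore[i])
	}
	for _, p := range packets {
		h := fnv.New32a()
		h.Write([]byte(p.FlowID))
		queues[int(h.Sum32())%nCores] <- p
	}
	for _, q := range queues {
		close(q)
	}
	wg.Wait()
	// Merge per-core state for inspection.
	total := map[string]int{}
	for _, m := range perCore {
		for k, v := range m {
			total[k] += v
		}
	}
	return total
}

func main() {
	pkts := []Packet{{"a"}, {"b"}, {"a"}, {"c"}, {"b"}}
	fmt.Println(process(pkts, 4)) // map[a:2 b:2 c:1]
}
```

Sharding by flow sidesteps shared state entirely; NFork's transactional-memory approach additionally handles NFs whose state genuinely spans flows.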
A Peer-to-Peer Middleware Framework for Resilient Persistent Programming
The persistent programming systems of the 1980s offered a programming model
that integrated computation and long-term storage. In these systems, reliable
applications could be engineered without requiring the programmer to write
translation code to manage the transfer of data to and from non-volatile
storage. More importantly, it simplified the programmer's conceptual model of
an application, and avoided the many coherency problems that result from
multiple cached copies of the same information. Although technically
innovative, persistent languages were not widely adopted, perhaps due in part
to their closed-world model. Each persistent store was located on a single
host, and there were no flexible mechanisms for communication or transfer of
data between separate stores. Here we re-open the work on persistence and
combine it with modern peer-to-peer techniques in order to provide support for
orthogonal persistence in resilient and potentially long-running distributed
applications. Our vision is of an infrastructure within which an application
can be developed and distributed with minimal modification, whereupon the
application becomes resilient to certain failure modes. If a node, or the
connection to it, fails during execution of the application, the objects are
re-instantiated from distributed replicas, without their reference holders
being aware of the failure. Furthermore, we believe that this can be achieved
within a spectrum of application programmer intervention, ranging from minimal
to totally prescriptive, as desired. The same mechanisms encompass an
orthogonally persistent programming model. We outline our approach to
implementing this vision, and describe current progress.
Comment: Submitted to EuroSys 200
Exploiting method semantics in client cache consistency protocols for object-oriented databases
PhD Thesis
Data-shipping systems are commonly used in client-server object-oriented
databases. This is intended to utilise clients' resources and improve
scalability by allowing clients to run transactions locally after fetching the
required database items from the database server. A consequence of this is
that a database item can be cached at more than one client, which raises
issues regarding client cache consistency and concurrency control. A number of
client cache consistency protocols have been studied, and some approaches to
concurrency control for object-oriented databases have been proposed. Existing
client consistency protocols, however, do not consider method semantics in
concurrency control. This study proposes a client cache consistency protocol
in which method semantics can be exploited in concurrency control. It
identifies the issues that arise in using method semantics in the protocol and
investigates its performance using simulation. The performance results show
that exploiting method semantics can yield gains over existing protocols. The
study also shows the potential benefits of an asynchronous version of the
protocol.
A true concurrent model of smart contracts executions
The development of blockchain technologies has enabled the trustless
execution of so-called smart contracts, i.e. programs that regulate the
exchange of assets (e.g., cryptocurrency) between users. In a decentralized
blockchain, the state of smart contracts is collaboratively maintained by a
peer-to-peer network of mutually untrusted nodes, which collect from users a
set of transactions (representing the required actions on contracts), and
execute them in some order. Once this sequence of transactions is appended to
the blockchain, the other nodes validate it, re-executing the transactions in
the same order. The serial execution of transactions does not take advantage of
the multi-core architecture of modern processors, thus limiting
throughput. In this paper we propose a true concurrent model of smart contract
execution. Based on this, we show how static analysis of smart contracts can be
exploited to parallelize the execution of transactions.
Comment: Full version of the paper presented at COORDINATION 202
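One simple way static analysis enables parallelization, sketched below under assumptions of our own: if analysis yields each transaction's write set, conflicting transactions can be kept in serial order while non-conflicting ones are packed into the same round and run concurrently. The `Tx` type, `schedule` function, and key names are illustrative, not the paper's model.

```go
package main

import "fmt"

// Tx models a smart-contract transaction whose write set was obtained by
// static analysis of the contract code.
type Tx struct {
	ID     int
	Writes []string
}

// conflict reports whether two transactions touch a common storage key.
func conflict(a, b Tx) bool {
	for _, x := range a.Writes {
		for _, y := range b.Writes {
			if x == y {
				return true
			}
		}
	}
	return false
}

// schedule assigns each transaction the earliest round after all earlier
// conflicting transactions, preserving the serial order among conflicts;
// transactions in the same round may execute truly concurrently.
func schedule(txs []Tx) [][]Tx {
	round := make([]int, len(txs))
	maxR := 0
	for i := range txs {
		r := 0
		for j := 0; j < i; j++ {
			if conflict(txs[i], txs[j]) && round[j]+1 > r {
				r = round[j] + 1
			}
		}
		round[i] = r
		if r > maxR {
			maxR = r
		}
	}
	rounds := make([][]Tx, maxR+1)
	for i, t := range txs {
		rounds[round[i]] = append(rounds[round[i]], t)
	}
	return rounds
}

func main() {
	txs := []Tx{{1, []string{"alice"}}, {2, []string{"bob"}}, {3, []string{"alice"}}}
	rounds := schedule(txs)
	fmt.Println(len(rounds)) // 2: tx1 and tx2 run concurrently, tx3 after tx1
}
```

A true-concurrency semantics generalizes this round structure: any interleaving consistent with the conflict relation is equivalent.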
Lock-free Concurrent Data Structures
Concurrent data structures are the data sharing side of parallel programming.
Data structures give the means to the program to store data, but also provide
operations to the program to access and manipulate these data. These operations
are implemented through algorithms that have to be efficient. In the sequential
setting, data structures are crucially important for the performance of the
respective computation. In the parallel programming setting, their importance
becomes more crucial because of the increased use of data and resource sharing
for utilizing parallelism.
The first and main goal of this chapter is to provide a sufficient background
and intuition to help the interested reader to navigate in the complex research
area of lock-free data structures. The second goal is to offer the programmer
familiarity with the subject that will allow her to use truly concurrent methods.
Comment: To appear in "Programming Multi-core and Many-core Computing
Systems", eds. S. Pllana and F. Xhafa, Wiley Series on Parallel and
Distributed Computing
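A canonical lock-free structure of the kind this chapter surveys is the Treiber stack: instead of taking a lock, push and pop retry a compare-and-swap on the head until it succeeds. This is a standard textbook sketch (safe in a garbage-collected language like Go, where the ABA problem does not arise for this structure), not code from the chapter.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type node struct {
	val  int
	next *node
}

// Stack is a Treiber stack: lock-free, because some thread always makes
// progress even if others are delayed mid-operation.
type Stack struct {
	head atomic.Pointer[node]
}

func (s *Stack) Push(v int) {
	n := &node{val: v}
	for {
		old := s.head.Load()
		n.next = old
		if s.head.CompareAndSwap(old, n) {
			return
		}
		// CAS failed: another thread changed head concurrently; retry.
	}
}

func (s *Stack) Pop() (int, bool) {
	for {
		old := s.head.Load()
		if old == nil {
			return 0, false // stack empty
		}
		if s.head.CompareAndSwap(old, old.next) {
			return old.val, true
		}
	}
}

func main() {
	var s Stack
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(base int) {
			defer wg.Done()
			for j := 0; j < 100; j++ {
				s.Push(base*100 + j)
			}
		}(i)
	}
	wg.Wait()
	count := 0
	for {
		if _, ok := s.Pop(); !ok {
			break
		}
		count++
	}
	fmt.Println(count) // 400: no pushes lost despite concurrent access
}
```

The retry loop is exactly the pattern the chapter's background builds toward: correctness rests on the CAS validating that the head is unchanged since it was read.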
Efficient Concurrent Execution of Smart Contracts in Blockchains using Object-based Transactional Memory
This paper proposes an efficient framework to execute Smart Contract
Transactions (SCTs) concurrently based on object semantics, using optimistic
Single-Version Object-based Software Transactional Memory Systems (SVOSTMs) and
Multi-Version OSTMs (MVOSTMs). In our framework, a multi-threaded miner
constructs a Block Graph (BG), capturing the object-conflicts relations between
SCTs, and stores it in the block. Later, validators re-execute the same SCTs
concurrently and deterministically relying on this BG.
A malicious miner can modify the BG to harm the blockchain, e.g., to cause
double-spending. To identify malicious miners, we propose Smart Multi-threaded
Validator (SMV). Experimental analysis shows that the proposed multi-threaded
miner and validator achieve significant performance gains over state-of-the-art
SCT execution framework.
Comment: 49 pages, 26 figures, 11 tables
Maintaining consistency in distributed systems
In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems, often within the same application. This leads us to propose an integrated approach that permits applications to combine virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
Design and Implementation of a Distributed Middleware for Parallel Execution of Legacy Enterprise Applications
A typical enterprise uses a local area network of computers to perform its
business. During the off-working hours, the computational capacities of these
networked computers are underused or unused. In order to utilize this
computational capacity an application has to be recoded to exploit concurrency
inherent in a computation which is clearly not possible for legacy applications
without any source code. This thesis presents the design and implementation of a
distributed middleware which can automatically execute a legacy application on
multiple networked computers by parallelizing it. This middleware runs multiple
copies of the binary executable code in parallel on different hosts in the
network. It wraps up the binary executable code of the legacy application in
order to capture the kernel level data access system calls and perform them
distributively over multiple computers in a safe and conflict free manner. The
middleware also incorporates a dynamic scheduling technique to execute the
target application in minimum time by scavenging the available CPU cycles of
the hosts in the network. This dynamic scheduling also supports the CPU
availability of the hosts to change over time and properly reschedule the
replicas performing the computation to minimize the execution time. A prototype
implementation of this middleware has been developed as a proof of concept of
the design. This implementation has been evaluated with a few typical case
studies, and the test results confirm that the middleware works as expected.
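The dynamic scheduling policy described, re-placing replicas as host availability changes, reduces at its core to repeatedly choosing the host with the most spare cycles. The sketch below illustrates only that selection step, with invented names; the thesis's scheduler additionally migrates running replicas.

```go
package main

import "fmt"

// pickHost returns the host reporting the most spare CPU capacity (0..1).
// A scheduler would re-run this whenever reported availability changes and
// reschedule replicas accordingly to minimise completion time.
func pickHost(avail map[string]float64) string {
	best, bestCPU := "", -1.0
	for h, c := range avail {
		if c > bestCPU || (c == bestCPU && h < best) { // deterministic tie-break
			best, bestCPU = h, c
		}
	}
	return best
}

func main() {
	avail := map[string]float64{"hostA": 0.2, "hostB": 0.7, "hostC": 0.5}
	fmt.Println(pickHost(avail)) // hostB has the most spare capacity
	avail["hostB"] = 0.1         // hostB's owner starts using it
	fmt.Println(pickHost(avail)) // reschedule onto hostC
}
```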