Consistent and efficient output-streams management in optimistic simulation platforms
Optimistic synchronization is considered an effective means for supporting Parallel Discrete Event Simulations. It relies on a speculative approach, where concurrent processes execute simulation events regardless of their safety, and consistency is ensured via proper rollback mechanisms, upon the a posteriori detection of causal inconsistencies along the events' execution path. Interactions with the outside world (e.g. generation of output streams) are a well-known problem for rollback-based systems, since the outside world may have no notion of rollback. In this context, approaches for allowing the simulation modeler to generate consistent output rely either on ad-hoc APIs (which must be provided by the underlying simulation kernel) or on temporary suspension of processing activities in order to wait for the final outcome (commit/rollback) associated with a speculatively-produced output. In this paper we present design indications and a reference implementation for an output-stream management subsystem which allows the simulation-model writer to rely on standard output-generation libraries (e.g. stdio) within code blocks associated with event processing. Further, the subsystem ensures that the produced output is consistent, namely associated with events that are eventually committed, and system-wide ordered along the simulation time axis. These features jointly provide the illusion of a classical (simple to deal with) sequential programming model, which spares the developer from being aware that the simulation program is run concurrently and speculatively. We also show, via an experimental study, that the design/development optimizations we present incur limited overhead, so that the simulation run is carried out with near-zero or otherwise reduced output-management cost.
At the same time, the delay for materializing the output stream (making it available for any type of audit activity) is shown to be fairly limited and constant, especially for good mixtures of I/O-bound vs CPU-bound behaviors at the application level. Further, the whole output-stream management subsystem has been designed to provide scalability for I/O management on clusters. © 2013 ACM
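The commit-time output release described above can be sketched as a buffer that holds each event's output until the event is known to survive speculation. This is a minimal illustrative sketch, not the paper's actual subsystem; all names are hypothetical.

```python
# Hypothetical sketch: buffer speculative output per event and release it
# only when the event commits; a rollback simply discards the buffer.

class OutputManager:
    def __init__(self):
        self._pending = {}   # event_id -> list of buffered output chunks
        self.committed = []  # output released to the real stream, in order

    def write(self, event_id, text):
        # Called from event-handler code in place of a direct stdio write.
        self._pending.setdefault(event_id, []).append(text)

    def commit(self, event_id):
        # The event survived speculation: materialize its output.
        self.committed.extend(self._pending.pop(event_id, []))

    def rollback(self, event_id):
        # The event was undone: silently drop its speculative output.
        self._pending.pop(event_id, None)


mgr = OutputManager()
mgr.write(1, "event 1 output\n")
mgr.write(2, "event 2 output\n")
mgr.rollback(2)   # event 2 is undone; its output never appears
mgr.commit(1)
print("".join(mgr.committed))  # only event 1's output survives
```

A real implementation must additionally order released output along the simulation time axis across all processes, which is where most of the engineering effort in the paper lies.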
The End of a Myth: Distributed Transactions Can Scale
The common wisdom is that distributed transactions do not scale. But what if
distributed transactions could be made scalable using the next generation of
networks and a redesign of distributed databases? Developers would no longer
have to worry about co-partitioning schemes to achieve decent performance.
Application development would become easier, as data placement
would no longer determine how scalable an application is. Hardware provisioning
would be simplified as the system administrator can expect a linear scale-out
when adding more machines rather than some complex sub-linear function that
is highly application-specific.
In this paper, we present the design of our novel scalable database system
NAM-DB and show that distributed transactions with the very common Snapshot
Isolation guarantee can indeed scale using the next generation of RDMA-enabled
network technology without any inherent bottlenecks. Our experiments with the
TPC-C benchmark show that our system scales linearly to over 6.5 million
new-order (14.5 million total) distributed transactions per second on 56
machines.
Comment: 12 pages
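The Snapshot Isolation guarantee the abstract refers to can be illustrated with a toy multi-version store: each transaction reads as of its start timestamp, and at commit it aborts on a write-write conflict with a concurrently committed writer (first committer wins). This is only a centralized sketch with invented names; NAM-DB's RDMA-based timestamp and version management is far more involved.

```python
# Toy multi-version store illustrating Snapshot Isolation semantics.

class SIStore:
    def __init__(self):
        self.versions = {}  # key -> list of (commit_ts, value), ts ascending
        self.ts = 0         # global commit timestamp counter

    def begin(self):
        return {"start": self.ts, "writes": {}}

    def read(self, txn, key):
        # Newest version visible at the transaction's snapshot timestamp.
        for commit_ts, value in reversed(self.versions.get(key, [])):
            if commit_ts <= txn["start"]:
                return value
        return None

    def write(self, txn, key, value):
        txn["writes"][key] = value  # buffered until commit

    def commit(self, txn):
        # First-committer-wins: abort if any written key was committed
        # by another transaction after our snapshot was taken.
        for key in txn["writes"]:
            versions = self.versions.get(key, [])
            if versions and versions[-1][0] > txn["start"]:
                return False  # write-write conflict -> abort
        self.ts += 1
        for key, value in txn["writes"].items():
            self.versions.setdefault(key, []).append((self.ts, value))
        return True


store = SIStore()
t1, t2 = store.begin(), store.begin()
store.write(t1, "x", 1)
store.write(t2, "x", 2)
print(store.commit(t1), store.commit(t2))  # first committer wins
```

The paper's contribution is showing that this kind of versioned, timestamp-ordered access can be implemented over one-sided RDMA operations without the timestamp counter or version lookups becoming a bottleneck.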
Sealed containers in Z
Physical means of securing information, such as sealed envelopes and scratch cards, can be used to achieve cryptographic objectives. Reasoning about this has so far been informal.
We give a model of distinguishable sealed envelopes in Z, exploring design decisions, and discuss further analysis and development of such models.
Improving the Performance and Endurance of Persistent Memory with Loose-Ordering Consistency
Persistent memory provides high-performance data persistence at main memory.
Memory writes need to be performed in strict order to satisfy storage
consistency requirements and enable correct recovery from system crashes.
Unfortunately, adhering to such a strict order significantly degrades system
performance and persistent memory endurance. This paper introduces a new
mechanism, Loose-Ordering Consistency (LOC), that satisfies the ordering
requirements at significantly lower performance and endurance loss. LOC
consists of two key techniques. First, Eager Commit eliminates the need to
perform a persistent commit record write within a transaction. We do so by
ensuring that we can determine the status of all committed transactions during
recovery by storing necessary metadata information statically with blocks of
data written to memory. Second, Speculative Persistence relaxes the write
ordering between transactions by allowing writes to be speculatively written to
persistent memory. A speculative write is made visible to software only after
its associated transaction commits. To enable this, our mechanism supports the
tracking of committed transaction ID and multi-versioning in the CPU cache. Our
evaluations show that LOC reduces the average performance overhead of memory
persistence from 66.9% to 34.9% and the memory write traffic overhead from
17.1% to 3.4% on a variety of workloads.
Comment: This paper has been accepted by IEEE Transactions on Parallel and Distributed Systems
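The Eager Commit idea, eliminating the per-transaction commit record by storing metadata with the data blocks themselves, can be sketched at recovery time: if each persisted block carries its transaction ID, its index, and the transaction's total block count, a transaction is deemed committed iff all of its blocks reached persistent memory. The layout and names below are illustrative assumptions, not LOC's actual metadata format.

```python
# Toy recovery pass for a commit-record-free log: a transaction counts
# as committed only if every one of its blocks was persisted.

def recover(blocks):
    """blocks: list of (txn_id, block_index, total_blocks, data)."""
    seen = {}
    for txn_id, index, total, _ in blocks:
        seen.setdefault(txn_id, (set(), total))[0].add(index)
    committed = {
        txn_id for txn_id, (indices, total) in seen.items()
        if len(indices) == total
    }
    # Only data from fully persisted (committed) transactions survives.
    return [data for txn_id, _, _, data in blocks if txn_id in committed]


blocks = [(1, 0, 2, "a"), (1, 1, 2, "b"), (2, 0, 2, "c")]  # txn 2 lost a block
print(recover(blocks))  # txn 1's data survives; txn 2's is discarded
```

Speculative Persistence then relaxes ordering further by letting later transactions' writes reach persistent memory early, with CPU-cache multi-versioning hiding them from software until their transaction commits.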
MGSim - Simulation tools for multi-core processor architectures
MGSim is an open source discrete event simulator for on-chip hardware
components, developed at the University of Amsterdam. It is intended to be a
research and teaching vehicle to study the fine-grained hardware/software
interactions on many-core and hardware multithreaded processors. It includes
support for core models with different instruction sets, a configurable
multi-core interconnect, multiple configurable cache and memory models, a
dedicated I/O subsystem, and comprehensive monitoring and interaction
facilities. The default model configuration shipped with MGSim implements
Microgrids, a many-core architecture with hardware concurrency management.
MGSim is furthermore written mostly in C++ and uses object classes to represent
chip components. It is optimized for architecture models that can be described
as process networks.
Comment: 33 pages, 22 figures, 4 listings, 2 tables
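At the core of any discrete event simulator of this kind is a time-ordered event queue and a loop that advances simulated time to the next pending event. The sketch below illustrates that loop only; MGSim itself is a C++ component-based simulator, and none of these names come from its codebase.

```python
# Minimal discrete event simulation loop: events are dispatched strictly
# in timestamp order, regardless of the order they were scheduled in.

import heapq

class Simulator:
    def __init__(self):
        self._queue = []   # (time, seq, callback) ordered by time
        self._seq = 0      # tie-breaker for events at the same time
        self.now = 0

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, callback = heapq.heappop(self._queue)
            callback(self)


trace = []
sim = Simulator()
sim.schedule(5, lambda s: trace.append(("late", s.now)))
sim.schedule(2, lambda s: trace.append(("early", s.now)))
sim.run()
print(trace)  # events fire in timestamp order: early at 2, late at 5
```

A cycle-accurate simulator like MGSim layers component state, signal propagation, and per-cycle process scheduling on top of this basic mechanism.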
Implementing Distributed Controllers for Systems with Priorities
Implementing a component-based system in a distributed way so that it ensures
some global constraints is a challenging problem. We consider here abstract
specifications consisting of a composition of components and a controller given
in the form of a set of interactions and a priority order amongst them. In the
context of distributed systems, such a controller must be executed in a
distributed fashion while still respecting the global constraints imposed by
interactions and priorities.
We present in this paper an implementation of an algorithm that allows a
distributed execution of systems with (binary) interactions and priorities. We
also present a comprehensive simulation analysis that shows how sensitive our
algorithm is to changes, in particular changes in the degree of conflict in
the system.
Comment: In Proceedings FOCLASA 2010, arXiv:1007.499
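The controller's priority constraint can be stated compactly: an interaction may execute only if it is enabled and no higher-priority interaction is simultaneously enabled. The function below is a centralized sketch of that selection rule with invented names; the paper's contribution is enforcing it in a distributed fashion, without a central scheduler.

```python
# Sketch of priority-respecting interaction selection.

def executable(enabled, priority_pairs):
    """enabled: set of interaction names currently enabled.
    priority_pairs: set of (low, high) meaning low has lower priority."""
    return {
        a for a in enabled
        if not any(low == a and high in enabled
                   for low, high in priority_pairs)
    }


# With both "a" and "b" enabled and "a" < "b" in priority,
# only the higher-priority "b" may execute.
print(executable({"a", "b"}, {("a", "b")}))
```

Distributing this check is what makes the problem hard: deciding that no higher-priority interaction is enabled requires knowledge of other components' states, which is exactly the sensitivity to conflict degree that the simulation analysis measures.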
Is It Fair to Treat China as a Christmas Tree to Hang Everybody’s Complaints? Putting its Own Energy Saving into Perspective
China had been the world's second largest carbon emitter for years. However, recent studies show that China overtook the U.S. as the world's largest emitter in 2007. This has put China in the spotlight, just at a time when the world community is starting to negotiate a post-Kyoto climate regime under the Bali roadmap. China seems to have become a Christmas tree on which everybody can hang his/her complaints. This paper first discusses whether such criticism is fair by examining China's own efforts towards energy saving, the widespread use of renewable energy and participation in the clean development mechanism. Next, the paper puts the carbon reductions of China's unilateral actions into perspective by examining whether the estimated greenhouse gas emission reduction from meeting the country's national energy saving goal is achieved through China's unilateral actions alone or mainly with support from clean development mechanism projects. Then the paper discusses how far developing country commitments can go in an immediate post-2012 climate regime, thus pointing out the direction and focus of future international climate negotiations. Finally, emphasizing that China needs to act as a large and responsible developing country, take due responsibilities and set a good example to the majority of developing countries, the paper articulates what can be expected from China to illustrate that China can be a good partner in combating global climate change.
Keywords: Energy Saving, Renewable Energy, Post-Kyoto Climate Negotiations, Clean Development Mechanism, China, USA