Progressive Transactional Memory in Time and Space
Transactional memory (TM) allows concurrent processes to organize sequences
of operations on shared \emph{data items} into atomic transactions. A
transaction may \emph{commit}, in which case it appears to have executed
sequentially, or it may \emph{abort}, in which case no data item is updated.
The TM programming paradigm emerged as an alternative to conventional
fine-grained locking techniques, offering ease of programming and
compositionality. Though typically themselves implemented using locks, TMs hide
the inherent issues of lock-based synchronization behind a nice transactional
programming interface.
In this paper, we explore the inherent time and space complexity of lock-based
TMs, with a focus on the most popular class of \emph{progressive} lock-based
TMs. We show that a progressive TM may force a read-only transaction to
perform a quadratic (in the number of data items it reads) number of steps
and to access a linear number of distinct memory locations, closing the question
of the inherent cost of \emph{read validation} in TMs. We then show that the total
number of \emph{remote memory references} (RMRs) that take place in an
execution of a progressive TM in which concurrent processes perform
transactions on a single data item might reach , which
appears to be the first RMR complexity lower bound for transactional memory.
Comment: Model of Transactional Memory identical with arXiv:1407.6876, arXiv:1502.0272
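The quadratic step complexity of read validation arises because, to keep its snapshot consistent, a progressive TM typically re-validates its entire read set each time a new data item is read: the k-th read costs O(k) steps, so n reads cost O(n^2) in total. A minimal sketch of this pattern (the `TVar`, `Transaction`, and `AbortError` names are illustrative, not from the paper):

```python
class AbortError(Exception):
    """Raised when a transaction observes an inconsistent snapshot."""

class TVar:
    """A shared data item with a version number bumped on every update."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Transaction:
    """A read-only transaction using incremental read-set validation."""
    def __init__(self):
        self.read_set = {}  # TVar -> version observed at first read

    def read(self, tvar):
        # Re-validate every previously read item before admitting a new
        # one: this per-read linear scan is the source of the quadratic
        # worst-case step count discussed above.
        for seen, version in self.read_set.items():
            if seen.version != version:
                raise AbortError("read set invalidated by a concurrent update")
        self.read_set.setdefault(tvar, tvar.version)
        return tvar.value
```

A concurrent writer that bumps a `TVar`'s version after it has been read forces the transaction to abort on its next read.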
A Change Support Model for Distributed Collaborative Work
Distributed collaborative software development tends to make artifacts and
decisions inconsistent and uncertain. We try to solve this problem by providing
an information repository to reflect the state of works precisely, by managing
the states of artifacts/products made through collaborative work, and the
states of decisions made through communications. In this paper, we propose
models and a tool to construct the artifact-related part of the information
repository, and explain the way to use the repository to resolve
inconsistencies caused by concurrent changes of artifacts. We first show the
model and the tool to generate the dependency relationships among UML model
elements as content of the information repository. Next, we present the model
and the method to generate change support workflows from the information
repository. These workflows give us a way to modify the change-related
artifacts efficiently for each change request. Finally, we define
inconsistency patterns that alert us to possible inconsistencies.
By combining this mechanism with version control
systems, we can make changes safely. Our models and tool are useful in the
maintenance phase for performing changes safely and efficiently.
Comment: 10 pages, 13 figures, 4 tables
Scheme 2003: Proceedings of the Fourth Workshop on Scheme and Functional Programming
Technical report. This report contains the papers presented at the Fourth Workshop on Scheme and Functional Programming. The purpose of the Scheme Workshop is to discuss experience with and future developments of the Scheme programming language, including the future of Scheme standardization, as well as general aspects of computer science loosely centered on the general theme of Scheme.
Acceleration-as-a-Service: Exploiting Virtualised GPUs for a Financial Application
'How can GPU acceleration be obtained as a service in a cluster?' This
question has become increasingly significant due to the inefficiency of
installing GPUs on all nodes of a cluster. The research reported in this paper
is motivated to address the above question by employing rCUDA (remote CUDA), a
framework that facilitates Acceleration-as-a-Service (AaaS), such that the
nodes of a cluster can request the acceleration of a set of remote GPUs on
demand. The rCUDA framework exploits virtualisation and ensures that multiple
nodes can share the same GPU. In this paper we test the feasibility of the
rCUDA framework on a real-world application employed in the financial risk
industry that can benefit from AaaS in the production setting. The results
confirm the feasibility of rCUDA and highlight that rCUDA achieves similar
performance compared to CUDA, provides consistent results, and more
importantly, allows a single application to benefit from all the GPUs
available in the cluster without losing efficiency.
Comment: 11th IEEE International Conference on eScience (IEEE eScience) - Munich, Germany, 201
Preliminary space mission design under uncertainty
This paper proposes a way to model uncertainties and to introduce them explicitly in the design process of a preliminary space mission. Traditionally, a system margin approach is used in order to take them into account. In this paper, Evidence Theory is proposed to crystallise the inherent uncertainties. The design process is then formulated as an Optimisation Under Uncertainties (OUU). Three techniques are proposed to solve the OUU problem: (a) an evolutionary multi-objective approach, (b) a step technique consisting of maximising the belief for different levels of performance, and (c) a clustering method that
first identifies feasible regions. The three methods are applied to the BepiColombo mission, and their
effectiveness at solving the OUU problem is compared.
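The belief measure maximised in technique (b) can be illustrated with a generic Evidence Theory computation (a textbook Dempster-Shafer calculation, not the paper's specific implementation): the belief in a performance requirement is the total mass of the focal elements entirely contained in the acceptable set, while plausibility also counts partially overlapping ones.

```python
def belief(focal_elements, acceptable):
    """Dempster-Shafer belief Bel(A): total mass of focal elements
    that are subsets of the acceptable set A.
    focal_elements: list of (frozenset, mass) pairs, masses summing to 1."""
    return sum(m for b, m in focal_elements if b <= acceptable)

def plausibility(focal_elements, acceptable):
    """Plausibility Pl(A): total mass of focal elements intersecting A.
    Bel(A) <= Pl(A) always holds; the gap quantifies the uncertainty."""
    return sum(m for b, m in focal_elements if b & acceptable)
```

For a design option whose performance evidence assigns mass 0.5 to "meets target", 0.2 to "misses target", and 0.3 to the ambiguous set containing both, the belief in meeting the target is 0.5 and its plausibility is 0.8.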