Enhanced Operational Semantics in Systems Biology
We are faced with a great challenge: the cross-fertilization between the field of formal methods for concurrency, in the computer science domain, and that of systems biology, in the biological realm
Achieving Starvation-Freedom with Greater Concurrency in Multi-Version Object-based Transactional Memory Systems
To utilize multi-core processors properly, concurrent programming is needed.
Concurrency control is the main challenge in designing a correct and efficient
concurrent program. Software Transactional Memory Systems (STMs) provide ease
of multithreading to the programmer without worrying about concurrency issues
such as deadlock, livelock, priority inversion, etc. Most STMs work at the
level of read-write operations and are known as RWSTMs. Some STMs work at the
level of higher-level operations and ensure greater concurrency than RWSTMs;
such STMs are known as Object-Based STMs (OSTMs). An OSTM transaction can
return commit or abort, and aborted transactions retry. But in the current
setting of OSTMs, transactions may starve. So we propose a Starvation-Free
OSTM (SF-OSTM), which ensures starvation-freedom in object-based STM systems
while satisfying the correctness criterion of co-opacity. Experience with
databases, RWSTMs, and OSTMs shows that maintaining multiple versions
corresponding to each key reduces the number of aborts and improves
throughput. So, to achieve greater concurrency, we propose a Starvation-Free
Multi-Version OSTM (SF-MVOSTM), which ensures starvation-freedom while storing
multiple versions corresponding to each key and satisfies the correctness
criterion of local opacity. To show the performance benefits, we implemented
three variants of SF-MVOSTM (SF-MVOSTM, SF-MVOSTM-GC and SF-KOSTM) and
compared them with state-of-the-art STMs.

Comment: 68 pages, 24 figures. arXiv admin note: text overlap with
arXiv:1709.0103
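The intuition behind the claim that multiple versions reduce aborts can be illustrated with a toy sketch (hypothetical code, not the SF-MVOSTM algorithm itself): a reader that fetches the newest version no newer than its start timestamp can still commit even after a concurrent writer has overwritten the key.

```python
# Toy multi-version key-value store: each write appends a new version
# tagged with a commit timestamp, and a reader sees the latest version
# visible at its start timestamp. Illustrative only; the paper's
# SF-MVOSTM adds starvation-freedom and local-opacity machinery on top.
class MultiVersionStore:
    def __init__(self):
        self.versions = {}  # key -> list of (commit_ts, value), ts ascending
        self.clock = 0

    def begin(self):
        return self.clock  # transaction start timestamp

    def write(self, key, value):
        self.clock += 1    # writer commits at a fresh timestamp
        self.versions.setdefault(key, []).append((self.clock, value))

    def read(self, key, start_ts):
        # Latest version whose commit timestamp is <= the reader's start.
        visible = [v for ts, v in self.versions.get(key, []) if ts <= start_ts]
        return visible[-1] if visible else None

store = MultiVersionStore()
store.write("k", 1)            # commits at ts = 1
t = store.begin()              # reader starts at ts = 1
store.write("k", 2)            # concurrent writer commits at ts = 2
assert store.read("k", t) == 1  # reader keeps its snapshot; no abort needed
```

In a single-version store, the reader at timestamp `t` would find its value overwritten and have to abort; keeping the old version lets it complete.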
The First Provenance Challenge
The first Provenance Challenge was set up in order to provide a forum for the community to help understand the capabilities of different provenance systems and the expressiveness of their provenance representations. To this end, a Functional Magnetic Resonance Imaging workflow was defined, which participants had to either simulate or run in order to produce some provenance representation, from which a set of identified queries had to be implemented and executed. Sixteen teams responded to the challenge and submitted their inputs. In this paper, we present the challenge workflow and queries, and summarise the participants' contributions
Building scalable software systems in the multicore era
Software systems face two challenges today: growing complexity and increasing parallelism in the underlying computational models. The problem of increased complexity is often solved by dividing systems into modules in a way that permits analysis of these modules in isolation. The problem of insufficient concurrency is often tackled by dividing system execution into tasks in a way that permits execution of these tasks in isolation. The key challenge in software design is to manage the explicit and implicit dependences between modules that decrease modularity. The key challenge for concurrency is to manage the explicit and implicit dependences between tasks that decrease parallelism. Even though these challenges appear strikingly similar, current software design practices and languages do not take advantage of this similarity. The net effect is that the modularity and concurrency goals are often tackled mutually exclusively; making progress towards one goal does not naturally contribute towards the other. My position is that, for programmers who are not formally and rigorously trained in the concurrency discipline, the safest and most productive way to get scalability in their software is to improve the modularity of their software, using programming language features and design practices that reconcile the modularity and concurrency goals. I briefly discuss the preliminary efforts of my group, but we have only touched the tip of the iceberg
Array languages and the N-body problem
This paper is a description of the contributions to the SICSA multicore challenge on many body
planetary simulation made by a compiler group at the University of Glasgow. Our group is part of
the Computer Vision and Graphics research group and we have for some years been developing array
compilers because we think these are a good tool both for expressing graphics algorithms and for
exploiting the parallelism that computer vision applications require.
We shall describe experiments using two languages on two different platforms and we shall compare
the performance of these with reference C implementations running on the same platforms. Finally
we shall draw conclusions both about the viability of the array language approach as compared to
other approaches used in the challenge, and also about the strengths and weaknesses of the two
very different processor architectures we used
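The appeal of the array-language approach to the N-body problem is that the all-pairs force computation collapses into a few whole-array operations. A NumPy sketch of the gravitational acceleration kernel (an illustration of the style, not the Glasgow group's actual compiler or code):

```python
# All-pairs gravitational accelerations expressed as whole-array
# operations, in the spirit of an array-language formulation.
# pos: (n, 3) body positions; mass: (n,) body masses.
import numpy as np

def accelerations(pos, mass, G=1.0, eps=1e-3):
    # Pairwise displacements r_ij = pos_j - pos_i, shape (n, n, 3).
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    # Softened squared distances avoid the i == j singularity.
    dist2 = (diff ** 2).sum(axis=-1) + eps ** 2
    inv_d3 = dist2 ** -1.5
    # a_i = G * sum_j m_j * r_ij / |r_ij|^3, as a single einsum over j.
    return G * np.einsum('j,ijk,ij->ik', mass, diff, inv_d3)

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
acc = accelerations(pos, mass)
# The two bodies attract along x with equal and opposite accelerations.
assert acc[0, 0] > 0 and acc[1, 0] < 0
```

Because the kernel is a handful of data-parallel array operations with no explicit loops, an array compiler is free to vectorize or parallelize it for whatever target platform it is given.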
Preparing sparse solvers for exascale computing
Sparse solvers provide essential functionality for a wide variety of scientific applications. Highly parallel sparse solvers are essential for continuing advances in high-fidelity, multi-physics and multi-scale simulations, especially as we target exascale platforms. This paper describes the challenges, strategies and progress of the US Department of Energy Exascale Computing project towards providing sparse solvers for exascale computing platforms. We address the demands of systems with thousands of high-performance node devices where exposing concurrency, hiding latency and creating alternative algorithms become essential. The efforts described here are works in progress, highlighting current success and upcoming challenges. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'
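A representative kernel inside such solvers is the sparse matrix-vector product; a minimal CSR (compressed sparse row) version (an illustrative sketch, not code from the Exascale Computing Project) makes the available row-level concurrency visible:

```python
# Minimal CSR sparse matrix-vector product, a memory-bound kernel at the
# heart of many iterative sparse solvers. Illustrative only: exascale
# implementations layer latency hiding and node-level parallelism on top.
def csr_matvec(indptr, indices, data, x):
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        # Nonzeros of this row live in data[indptr[row]:indptr[row + 1]].
        # Each row's dot product is independent of the others, which is
        # the concurrency that parallel sparse solvers expose.
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# 2x2 example: [[2, 0], [1, 3]] @ [1, 1] = [2, 4]
assert csr_matvec([0, 1, 3], [0, 0, 1], [2.0, 1.0, 3.0], [1.0, 1.0]) == [2.0, 4.0]
```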