
    Loo.py: From Fortran to performance via transformation and substitution rules

    A large amount of numerically oriented code has been, and continues to be, written in legacy languages. Much of this code could, in principle, make good use of data-parallel, throughput-oriented computer architectures. Loo.py, a transformation-based programming system targeted at GPUs and general data-parallel architectures, provides a mechanism for user-controlled transformation of array programs. This transformation capability is designed to apply not just to programs written specifically for Loo.py, but also to those imported from other languages such as Fortran. It eases the trade-off between achieving high performance, portability, and programmability by allowing the user to apply a large and growing family of transformations to an input program. These transformations are expressed in and used from Python and may be applied in a variety of settings, including in a pragma-like manner from other languages.

    Comment: ARRAY 2015 - 2nd ACM SIGPLAN International Workshop on Libraries, Languages and Compilers for Array Programming (ARRAY 2015)
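
    The following is a minimal sketch of the kind of user-controlled transformation workflow the abstract describes, assuming the loopy Python package and its make_kernel and split_iname entry points; exact argument names and behaviour may differ between versions, and this is not code from the paper.

```python
# A minimal sketch (not from the paper) of Loo.py-style transformation,
# assuming the "loopy" package with make_kernel and split_iname available.
import loopy as lp

# Declare a simple array computation over an index domain.
knl = lp.make_kernel(
    "{ [i]: 0 <= i < n }",          # iteration domain
    "out[i] = 2 * a[i]")            # array-style instruction

# Apply a user-controlled transformation: split the loop into 16-wide
# chunks and map them onto GPU work-groups/work-items.
knl = lp.split_iname(knl, "i", 16, outer_tag="g.0", inner_tag="l.0")

print(knl)  # inspect the transformed kernel
```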

    TuBound - A Conceptually New Tool for Worst-Case Execution Time Analysis

    TuBound is a conceptually new tool for the worst-case execution time (WCET) analysis of programs. A distinctive feature of TuBound is the seamless integration of a WCET analysis component and a compiler in a uniform tool. TuBound enables the programmer to provide hints that improve the precision of the WCET computation on the high-level program source code, while preserving the advantages of using an optimizing compiler and the accuracy of a WCET analysis performed on the low-level machine code. In this way, TuBound serves the needs of both the programmer and the WCET analysis by providing each with an interface at the abstraction level that is most appropriate and convenient for them. In this paper we present the system architecture of TuBound, discuss the internal work-flow of the tool, and report on first measurements using benchmarks from Mälardalen University. TuBound also took part in the WCET Tool Challenge 2008.
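
    As a toy illustration only (this is not TuBound's actual annotation syntax or implementation), the sketch below shows the basic idea of how a programmer-supplied, source-level loop bound can tighten a low-level WCET computation; all names and the cycle model are hypothetical.

```python
# Hypothetical toy model: a source-level loop-bound hint is carried through
# to the WCET computation, which multiplies per-iteration cost by the bound.
from dataclasses import dataclass

@dataclass
class Loop:
    body_cost: int        # assumed worst-case cycles for one iteration
    source_bound: int     # programmer hint: maximum number of iterations

def wcet(loops):
    """Sum worst-case cycles, capping each loop with its source-level bound."""
    return sum(l.body_cost * l.source_bound for l in loops)

# With the hints, the estimate is tight instead of pessimistic:
# three loops of at most 10 iterations at 20 cycles each.
print(wcet([Loop(20, 10)] * 3))   # -> 600
```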

    08161 Abstracts Collection -- Scalable Program Analysis

    From April 13 to April 18, 2008, the Dagstuhl Seminar 08161 "Scalable Program Analysis" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    WCET Analysis: The Annotation Language Challenge

    Worst-case execution time (WCET) analysis is indispensable for the successful design and development of systems that, in addition to their functional constraints, have to satisfy hard real-time constraints. The expressiveness and usability of annotation languages, which are used by algorithms and tools for WCET analysis to separate feasible from infeasible program paths, have a crucial impact on the precision and performance of these algorithms and tools. In this paper, we therefore propose to complement the WCET tool challenge, which has recently been launched successfully, with a second, closely related challenge: the WCET annotation language challenge. We believe that contributions towards mastering this challenge will be essential for the next major step in advancing the field of WCET analysis.

    ExprEssence - Revealing the essence of differential experimental data in the context of an interaction/regulation net-work

    Background: Experimentalists are overwhelmed by high-throughput data and there is an urgent need to condense information into simple hypotheses. For example, large amounts of microarray and deep sequencing data are becoming available, describing a variety of experimental conditions such as gene knockout and knockdown, the effect of interventions, and the differences between tissues and cell lines.

    Results: To address this challenge, we developed a method, implemented as a Cytoscape plugin called ExprEssence. As input we take a network of interaction, stimulation and/or inhibition links between genes/proteins, and differential data, such as gene expression data, tracking an intervention or development in time. We condense the network, highlighting those links across which the largest changes can be observed. Highlighting is based on a simple formula inspired by the law of mass action. We can interactively modify the threshold for highlighting and instantaneously visualize results. We applied ExprEssence to three scenarios describing kidney podocyte biology, pluripotency and ageing: 1) We identify putative processes involved in podocyte (de-)differentiation and validate one prediction experimentally. 2) We predict and validate the expression level of a transcription factor involved in pluripotency. 3) Finally, we generate plausible hypotheses on the role of apoptosis, cell cycle deregulation and DNA repair in ageing data obtained from the hippocampus.

    Conclusion: Reducing the size of gene/protein networks to the few links affected by large changes allows one to screen for putative mechanistic relationships among the genes/proteins that are involved in adaptation to different experimental conditions, yielding important hypotheses, insights and suggestions for new experiments. We note that we do not focus on the identification of 'active subnetworks'. Instead we focus on the identification of single links (which may or may not form subnetworks), and these single links are much easier to validate experimentally than submodules. ExprEssence is available at http://sourceforge.net/projects/expressence/.
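
    The sketch below illustrates the general idea of a mass-action-inspired link score and threshold-based condensation. It is only an illustration with made-up expression values; it is not guaranteed to be the exact formula or workflow used by the ExprEssence plugin.

```python
# Illustration only: score each link by how much the product of its
# endpoints' expression levels changes between two conditions (an idea
# inspired by the law of mass action), then keep links above a threshold.
import math

# gene -> (expression in condition 1, expression in condition 2); assumed data
expression = {"A": (2.0, 8.0), "B": (3.0, 6.0), "C": (5.0, 5.0)}
links = [("A", "B"), ("B", "C")]

def link_score(u, v):
    u1, u2 = expression[u]
    v1, v2 = expression[v]
    # Mass action: reaction rate ~ product of concentrations, so compare
    # the product of expression levels across the two conditions.
    return math.log2((u2 * v2) / (u1 * v1))

threshold = 2.0
condensed = []
for u, v in links:
    s = link_score(u, v)
    if abs(s) >= threshold:
        condensed.append((u, v, round(s, 2)))

print(condensed)   # only links whose endpoints change substantially survive
```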

    A Case Study for Reversible Computing: Reversible Debugging of Concurrent Programs

    Reversible computing allows one to run programs not only in the usual forward direction, but also backward. A main application area for reversible computing is debugging, where one can use reversibility to go backward from a visible misbehaviour towards the bug causing it. While reversible debugging of sequential systems is well understood, reversible debugging of concurrent and distributed systems is less settled. We present here two approaches for debugging concurrent programs, one based on backtracking, which undoes actions in reverse order of execution, and one based on causal consistency, which allows one to undo any action provided that its consequences, if any, are undone beforehand. The first approach tackles an imperative language with shared memory, while the second one considers a core of the functional message-passing language Erlang. Both approaches are based on solid formal foundations.
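
    To make the backtracking flavour concrete, here is a toy sketch of a shared-memory store that records enough information to undo assignments strictly in reverse order of execution. It is a didactic illustration under assumed simplifications, not the calculus or tooling developed in the paper.

```python
# Toy backtracking-style reversible store: every assignment pushes the
# previous value onto a history stack; stepping back pops the most recent
# action and restores it, i.e. actions are undone in reverse execution order.
class ReversibleStore:
    def __init__(self):
        self.mem = {}
        self.history = []          # stack of (variable, previous value)

    def assign(self, var, value):
        self.history.append((var, self.mem.get(var)))   # remember old value
        self.mem[var] = value

    def step_back(self):
        var, old = self.history.pop()                    # latest action first
        if old is None:
            del self.mem[var]      # variable did not exist before
        else:
            self.mem[var] = old

store = ReversibleStore()
store.assign("x", 1)
store.assign("x", 2)
store.step_back()        # undo the latest assignment
print(store.mem)         # {'x': 1}
```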

    Toward quantitative proteomics of organ substructures: implications for renal physiology

    Organs are complex structures that consist of multiple tissues with different levels of gene expression. To achieve comprehensive coverage and accurate quantitation data, organs ideally should be separated into morphologic and/or functional substructures before gene or protein expression analysis. However, because of complex morphology and elaborate isolation protocols, to date this often has been difficult to achieve. Kidneys are organs in which functional and morphologic subdivision is especially important. Each subunit of the kidney, the nephron, consists of more than 10 subsegments with distinct morphologic and functional characteristics. For a full understanding of kidney physiology, global gene and protein expression analyses have to be performed at the level of the nephron subsegments; however, such studies have been extremely rare to date. Here we describe the latest approaches in quantitative high-accuracy mass spectrometry-based proteomics and their application to quantitative proteomics studies of the whole kidney and nephron subsegments, both in human beings and in animal models. We compare these studies with similar studies performed on other organ substructures. We argue that the newest technologies used for preparation, processing, and measurement of small amounts of starting material are finally enabling global and subsegment-specific quantitative measurement of protein levels in the kidney and other organs. These new technologies and approaches are making a decisive impact on our understanding of the (patho)physiological processes at the molecular level

    TOOLympics 2019: An Overview of Competitions in Formal Methods

    Evaluation of scientific contributions can be done in many different ways. For the various research communities working on the verification of systems (software, hardware, or the underlying mechanisms involved), it is important to bring the community together and to compare the state of the art, in order to identify progress and new challenges in the research area. Competitions are a suitable way to do that. The first verification competition was created in 1992 (the SAT competition), shortly followed by the CASC competition in 1996. Since the year 2000, the number of dedicated verification competitions has been steadily increasing. Many of these events now happen regularly, gathering researchers who would like to understand how well their research prototypes work in practice. Since scientific results have to be reproducible and powerful computers are becoming ever cheaper, these competitions are becoming an important means of advancing research in verification technology. TOOLympics 2019 is an event to celebrate the achievements of the various competitions and to understand their commonalities and differences. This volume is dedicated to the presentation of the 16 competitions that joined TOOLympics as part of the celebration of the 25th anniversary of the TACAS conference.