
    Valgrind A Program Supervision Framework

    Valgrind is a programmable framework for creating program supervision tools such as bug detectors and profilers. It executes supervised programs using dynamic binary translation, giving it total control over every part of them without requiring source code, and without the need for recompilation or relinking prior to execution. New supervision tools can be easily created by writing skins that plug into Valgrind's core. As an example, we describe one skin that performs Purify-style memory checks for C and C++ programs.
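The skin mechanism the abstract describes can be illustrated with a toy sketch: a supervising core replays a program's memory events and forwards each one to a pluggable tool. All names here (Skin, AddressCheckSkin, run) are illustrative only; Valgrind's real skin interface is a C API, not this.

```python
# Toy sketch of the core-plus-skins idea: the core dispatches events,
# the skin decides what to check. Not Valgrind's actual API.

class Skin:
    """Base class for supervision tools; subclasses override the hooks."""
    def on_read(self, addr): pass
    def on_write(self, addr): pass

class AddressCheckSkin(Skin):
    """A Purify-style check: flag reads of addresses never written."""
    def __init__(self):
        self.written = set()
        self.errors = []
    def on_write(self, addr):
        self.written.add(addr)
    def on_read(self, addr):
        if addr not in self.written:
            self.errors.append(f"read of uninitialised address {addr:#x}")

def run(events, skin):
    """The 'core': dispatch each (kind, addr) event to the plugged-in skin."""
    for kind, addr in events:
        (skin.on_write if kind == "w" else skin.on_read)(addr)

skin = AddressCheckSkin()
run([("w", 0x1000), ("r", 0x1000), ("r", 0x2000)], skin)
print(skin.errors)  # only the read of 0x2000 is flagged
```

The point of the split is that the core owns execution control while each skin owns only its analysis state, so new tools need no changes to the core.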

    An Introduction to the Chandra Carina Complex Project

    The Great Nebula in Carina provides an exceptional view into the violent massive star formation and feedback that typifies giant HII regions and starburst galaxies. We have mapped the Carina star-forming complex in X-rays, using archival Chandra data and a mosaic of 20 new 60 ks pointings using the Chandra X-ray Observatory's Advanced CCD Imaging Spectrometer, as a testbed for understanding recent and ongoing star formation and to probe Carina's regions of bright diffuse X-ray emission. This study has yielded a catalog of properties of >14,000 X-ray point sources; >9800 of them have multiwavelength counterparts. Using Chandra's unsurpassed X-ray spatial resolution, we have separated these point sources from the extensive, spatially complex diffuse emission that pervades the region; X-ray properties of this diffuse emission suggest that it traces feedback from Carina's massive stars. In this introductory paper, we motivate the survey design, describe the Chandra observations, and present some simple results, providing a foundation for the 15 papers that follow in this Special Issue and that present detailed catalogs, methods, and science results.
    Comment: Accepted for the ApJS Special Issue on the Chandra Carina Complex Project (CCCP), scheduled for publication in May 2011. All 16 CCCP Special Issue papers are available at http://cochise.astro.psu.edu/Carina_public/special_issue.html through 2011 at least. 43 pages; 18 figures.

    Tephrochronology and its application: A review

    Tephrochronology (from tephra, Gk ‘ashes’) is a unique stratigraphic method for linking, dating, and synchronizing geological, palaeoenvironmental, or archaeological sequences or events. As well as utilising the Law of Superposition, tephrochronology in practice requires tephra deposits to be characterized (or ‘fingerprinted’) using physical properties evident in the field together with those obtained from laboratory analyses. Such analyses include mineralogical examination (petrography) or geochemical analysis of glass shards or crystals using an electron microprobe or other analytical tools, including laser-ablation-based mass spectrometry or the ion microprobe. The palaeoenvironmental or archaeological context in which a tephra occurs may also be useful for correlational purposes. Tephrochronology provides greatest utility when a numerical age obtained for a tephra or cryptotephra is transferable from one site to another using stratigraphy and by comparing and matching inherent compositional features of the deposits with a high degree of likelihood. Used this way, tephrochronology is an age-equivalent dating method that provides an exceptionally precise volcanic-event stratigraphy. Such age transfers are valid because the primary tephra deposits from an eruption essentially have the same short-lived age everywhere they occur, forming isochrons very soon after the eruption (normally within a year). As well as providing isochrons for palaeoenvironmental and archaeological reconstructions, tephras, through their geochemical analysis, allow insight into volcanic and magmatic processes, and provide a comprehensive record of explosive volcanism and recurrence rates in the Quaternary (or earlier) that can be used to establish time-space relationships of relevance to volcanic hazard analysis. The basis and application of tephrochronology as a central stratigraphic and geochronological tool for Quaternary studies are presented and discussed in this review.
Topics covered include principles of tephrochronology, defining isochrons, tephra nomenclature, mapping and correlating tephras from proximal to distal locations at metre- through to sub-millimetre scale, cryptotephras, mineralogical and geochemical fingerprinting methods, numerical and statistical correlation techniques, and developments and applications in dating, including the use of flexible depositional age-modelling techniques based on Bayesian statistics. Along with reference to wide-ranging examples and the identification of important recent advances in tephrochronology, such as the development of new geoanalytical approaches that enable individual small glass shards to be analysed near-routinely for major, trace, and rare-earth elements, potential problems such as miscorrelation, erroneous age transfer, and tephra reworking and taphonomy (especially relating to cryptotephras) are also examined. Some of the challenges for future tephrochronological studies include refining geochemical analytical methods further, improving understanding of cryptotephra distribution and preservation patterns, improving age modelling, including via new or enhanced radiometric or incremental techniques and Bayesian-derived models, evaluating and quantifying uncertainty in tephrochronology to a greater degree than at present, constructing comprehensive regional databases, and integrating tephrochronology with spatially referenced environmental and archaeometric data into 3-D reconstructions using GIS and geostatistics.
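One of the numerical correlation techniques the abstract alludes to is the similarity coefficient (after Borchardt and colleagues): the mean of the min/max ratios of paired major-element oxide abundances from glass-shard analyses. The sketch below uses invented example compositions purely for illustration.

```python
# Minimal sketch of a similarity-coefficient comparison between two
# glass-shard major-element analyses (wt% oxides). The two example
# compositions are invented for illustration, not real tephra data.

def similarity_coefficient(a, b):
    """Mean of min/max ratios over the oxides shared by both analyses."""
    oxides = a.keys() & b.keys()
    ratios = [min(a[o], b[o]) / max(a[o], b[o]) for o in oxides]
    return sum(ratios) / len(ratios)

shard_site1 = {"SiO2": 77.2, "Al2O3": 12.1, "FeO": 1.1, "CaO": 0.9, "K2O": 3.4}
shard_site2 = {"SiO2": 76.8, "Al2O3": 12.4, "FeO": 1.2, "CaO": 1.0, "K2O": 3.3}

sc = similarity_coefficient(shard_site1, shard_site2)
print(f"SC = {sc:.3f}")  # values near 1.0 support a same-eruption correlation
```

In practice such coefficients are only one line of evidence; stratigraphic position and mineralogy must agree before an age is transferred between sites.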

    Polymorphic Strictness Analysis Using Frontiers

    This paper shows how to implement sensible polymorphic strictness analysis using the Frontiers algorithm. A central notion is to only ever analyse each function once, at its simplest polymorphic instance. Subsequent non-base uses of functions are dealt with by generalising their simplest-instance analyses. This generalisation is done using an algorithm developed by Baraki, based on embedding-closure pairs. Compared with an alternative approach of expanding the program out into a collection of monomorphic instances, this technique is hundreds of times faster for realistic programs. There are some approximations involved, but these do not seem to have a detrimental effect on the overall result. The overall effect of this technology is to considerably expand the range of programs for which the Frontiers algorithm gives useful results reasonably quickly.
    1 Introduction
    The Frontiers algorithm was introduced in [CP85] as an allegedly efficient way of doing forwards strictness analysis, al..
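The property the Frontiers algorithm computes can be shown in miniature on the two-point abstract domain: values abstract to 0 (definitely undefined, ⊥) or 1 (possibly defined), and a function is strict in an argument if feeding ⊥ there forces ⊥ out. This sketch illustrates only the abstract domain, not the Frontiers algorithm itself.

```python
# Sketch of strictness on the two-point domain: 0 = definitely undefined
# (bottom), 1 = possibly defined. This shows the property being computed,
# not the Frontiers algorithm or Baraki's generalisation.

def is_strict_in(abstract_f, arity, i):
    """Strict in argument i iff bottom in position i (others defined) gives bottom out."""
    args = [1] * arity
    args[i] = 0
    return abstract_f(*args) == 0

# Abstract version of  f x y = x + x : uses x, ignores y.
# '&' models a strict primitive: result is defined only if both inputs are.
abs_plus = lambda a, b: a & b
abs_f = lambda x, y: abs_plus(x, x)

print(is_strict_in(abs_f, 2, 0))  # True: f always needs x
print(is_strict_in(abs_f, 2, 1))  # False: f never uses y
```

The Frontiers algorithm earns its name by representing such abstract functions compactly via the boundary ("frontier") between inputs mapped to 0 and inputs mapped to 1, rather than tabulating every point.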

    How to shadow every byte of memory used by a program

    Several existing dynamic binary analysis tools use shadow memory—they shadow, in software, every byte of memory used by a program with another value that says something about it. Shadow memory is difficult to implement both efficiently and robustly. Nonetheless, existing shadow memory implementations have not been studied in detail. This is unfortunate, because shadow memory is powerful—for example, some of the existing tools that use it detect critical errors such as bad memory accesses, data races, and uses of uninitialised or untrusted data. In this paper we describe the implementation of shadow memory in Memcheck, a popular memory checker built with Valgrind, a dynamic binary instrumentation framework. This implementation has several novel features that make it efficient: carefully chosen data structures and operations result in a mean slow-down factor of only 22.2 and moderate memory usage. This may sound slow, but we show it is 8.9 times faster and 8.5 times smaller on average than a naive implementation, and shadow memory operations account for only about half of Memcheck's execution time. Equally importantly, unlike some tools, Memcheck's shadow memory implementation is robust: it is used on Linux by thousands of programmers on sizeable programs such as Mozilla and OpenOffice, and is suited to almost any memory configuration. This is the first detailed description of a robust shadow memory implementation, and the first detailed experimental evaluation of any shadow memory implementation. The ideas within are applicable to any shadow memory tool built with any instrumentation framework.
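The core data-structure idea can be sketched as a two-level table: the address space is split into fixed-size chunks, and a shadow chunk is allocated lazily the first time any byte in it is touched. Constants and names below are illustrative; Memcheck's real implementation is in C and far more heavily optimised.

```python
# Sketch of lazily-allocated two-level shadow memory: one shadow byte
# per program byte, allocated chunk by chunk on first use. Illustrative
# only; not Memcheck's actual layout or encoding.

CHUNK_BITS = 16                    # 64 KiB chunks
CHUNK_SIZE = 1 << CHUNK_BITS
UNDEFINED = 0xFF                   # shadow value meaning "not yet initialised"

class ShadowMemory:
    def __init__(self):
        self.chunks = {}           # chunk index -> bytearray of shadow bytes

    def _chunk(self, addr):
        idx = addr >> CHUNK_BITS
        if idx not in self.chunks: # lazy allocation keeps memory use moderate
            self.chunks[idx] = bytearray([UNDEFINED]) * CHUNK_SIZE
        return self.chunks[idx]

    def set(self, addr, shadow):
        self._chunk(addr)[addr & (CHUNK_SIZE - 1)] = shadow

    def get(self, addr):
        return self._chunk(addr)[addr & (CHUNK_SIZE - 1)]

sm = ShadowMemory()
sm.set(0x7fff_0000, 0x00)          # mark one byte as initialised
print(sm.get(0x7fff_0000))         # 0: defined
print(sm.get(0x7fff_0001))         # 255: still undefined
```

Lazy allocation is what makes shadowing the entire address space feasible: only chunks the program actually uses cost shadow storage, which is why the scheme suits "almost any memory configuration".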

    The NewFLOW Computational Model and Intermediate Format - Version 1.04

    This report motivates and defines a general-purpose, architecture-independent, parallel computational model, which captures the intuitions that underlie the design of the United Functions and Objects (UFO) programming language. The model has two aspects, which turn out to be a traditional dataflow model and an actor-like model, with a very simple interface between the two. Certain aspects of the model, particularly strictness, maximum parallelism, and lack of suspension, are stressed. The implications of introducing stateful objects are carefully spelled out. The model has several purposes, although we primarily describe it as a vehicle for the compilation and optimisation of UFO, and for visualising the execution of programs. Having motivated the model, this report specifies, in detail, both the syntax and semantics of the model, and provides some examples of its use.
    1 Motivation
    The primary purpose of this report is to define the semantics and syntax of NewFLOW, an intermediate rep..
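The dataflow half of such a model can be sketched with nodes that fire as soon as a token has arrived on every input, reflecting the strict, no-suspension behaviour the abstract stresses. Names and structure here are illustrative, not NewFLOW syntax.

```python
# Toy sketch of strict dataflow firing: a node fires exactly when all
# its input ports hold tokens, then pushes its result downstream.
# Illustrative only; not the NewFLOW intermediate format.

class Node:
    def __init__(self, op, n_inputs, consumer=None):
        self.op, self.consumer = op, consumer
        self.n_inputs = n_inputs
        self.tokens = {}            # input port -> value
        self.result = None

    def receive(self, port, value):
        self.tokens[port] = value
        if len(self.tokens) == self.n_inputs:        # strict: fire only when full
            args = (self.tokens[p] for p in range(self.n_inputs))
            self.result = self.op(*args)
            if self.consumer:                        # forward the result token
                self.consumer.receive(0, self.result)

double = Node(lambda x: 2 * x, 1)
add = Node(lambda a, b: a + b, 2, consumer=double)
add.receive(0, 3)                   # add waits: one input still empty
add.receive(1, 4)                   # both present -> fires, feeds 'double'
print(double.result)                # 14
```

Because every node is strict and never suspends mid-firing, all nodes whose inputs are full can fire concurrently, which is the "maximum parallelism" the model stresses.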