    A Concurrent Language with a Uniform Treatment of Regions and Locks

    A challenge for programming language research is to design and implement multi-threaded low-level languages providing static guarantees for memory safety and freedom from data races. Towards this goal, we present a concurrent language employing safe region-based memory management and hierarchical locking of regions. Both regions and locks are treated uniformly, and the language supports ownership transfer, early deallocation of regions, and early release of locks in a safe manner.
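
    As a rough illustration of the ingredients named above, the sketch below models regions as lockable containers arranged in a hierarchy, with ownership transfer, early lock release and early deallocation performed explicitly. It is a minimal dynamic sketch in ordinary Python, with the safety checks reduced to assertions; the language in the paper enforces these disciplines statically, and every class and method name here is invented for the example.

        import threading

        class Region:
            """A toy region: a lock plus a bag of allocations, with an optional parent."""
            def __init__(self, name, parent=None):
                self.name, self.parent = name, parent
                self.lock = threading.RLock()
                self.data = []      # objects "allocated" in this region
                self.live = True

            def acquire(self):
                # Hierarchical locking: take the ancestors' locks from the root downwards.
                if self.parent is not None:
                    self.parent.acquire()
                self.lock.acquire()

            def release(self):
                # Early release: a region's lock can be dropped before the region dies.
                self.lock.release()
                if self.parent is not None:
                    self.parent.release()

            def alloc(self, value):
                assert self.live, "use after region deallocation"
                self.data.append(value)
                return value

            def free(self):
                # Early deallocation: every allocation in the region is dropped at once.
                self.data.clear()
                self.live = False

        # Usage: a child region is handed to a worker thread (ownership transfer),
        # which frees it before the parent region goes away.
        root = Region("root")
        child = Region("child", parent=root)

        def worker(region):
            region.acquire()
            try:
                region.alloc("result")
                region.free()
            finally:
                region.release()

        t = threading.Thread(target=worker, args=(child,))
        t.start(); t.join()
        root.free()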

    The formal verification of generic interpreters

    Task assignment 3 of the design and validation of digital flight control systems suitable for fly-by-wire applications is studied; this task is concerned with the formal verification of embedded systems. In particular, results are presented that provide a methodological approach to microprocessor verification, together with a hierarchical decomposition strategy for specifying microprocessors. A theory of generic interpreters that can be used to model microprocessor behavior is presented. This theory abstracts away the details of instruction functionality, leaving a general model of what an interpreter does.
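
    To make the idea concrete, a generic interpreter can be pictured as a fixed fetch/decode/execute loop that is parameterized by a table giving each instruction its meaning; instantiating the table yields a particular machine. The toy accumulator machine and all names in this sketch are illustrative assumptions, not material from the paper, which develops the theory for formal verification rather than as executable code.

        from dataclasses import dataclass, field

        @dataclass
        class State:
            pc: int = 0
            acc: int = 0
            mem: dict = field(default_factory=dict)

        def step(state, program, semantics):
            """One interpreter step: fetch the current instruction and apply its meaning."""
            op, arg = program[state.pc]
            state.pc += 1
            semantics[op](state, arg)

        def run(program, semantics, fuel=100):
            """The generic part: iterate `step` until the program ends or fuel runs out."""
            state = State()
            while state.pc < len(program) and fuel > 0:
                step(state, program, semantics)
                fuel -= 1
            return state

        # One concrete instantiation: a three-instruction accumulator machine.
        semantics = {
            "LOADI": lambda s, n: setattr(s, "acc", n),
            "ADDI":  lambda s, n: setattr(s, "acc", s.acc + n),
            "STORE": lambda s, a: s.mem.__setitem__(a, s.acc),
        }
        final = run([("LOADI", 2), ("ADDI", 3), ("STORE", "x")], semantics)
        assert final.mem["x"] == 5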

    Transients from Initial Conditions: A Perturbative Analysis

    The standard procedure to generate initial conditions (IC) in numerical simulations is to use the Zel'dovich approximation (ZA). Although the ZA correctly reproduces the linear growing modes of density and velocity perturbations, non-linear growth is inaccurately represented because of the ZA's failure to conserve momentum. This implies that it takes time for the actual dynamics to establish the correct statistical properties of density and velocity fields. We extend perturbation theory (PT) to include transients as non-linear excitations of decaying modes caused by the IC. We focus on higher-order statistics of the density contrast and velocity divergence, characterized by the S_p and T_p parameters. We find that the time-scale of transients is determined, at a given order p, by the spectral index n. The skewness factor S_3 (T_3) attains 10% accuracy only after a=6 (a=15) for n=0, whereas higher (lower) n demands more (less) expansion away from the IC. These requirements become much more stringent as p increases. An Omega=0.3 model requires a factor of two larger expansion than an Omega=1 model to reduce transients by the same amount. The predicted transients in S_p are in good agreement with numerical simulations. More accurate IC can be achieved by using 2nd order Lagrangian PT (2LPT), which reproduces growing modes up to 2nd order and thus eliminates transients in the skewness. We show that for p>3 this reduces the required expansion by more than an order of magnitude compared to the ZA. Setting up 2LPT IC only requires minimal, inexpensive changes to ZA codes. We suggest simple steps for its implementation. Comment: 37 pages, 10 figures
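
    For orientation, the quantities the abstract refers to have standard definitions in cosmological perturbation theory; the lines below are a reminder of those conventions (up to sign and normalization choices for the velocity divergence), not material taken from the paper itself.

        % Zel'dovich approximation: particles are displaced from their Lagrangian
        % positions q along the initial displacement field Psi, scaled by the
        % linear growth factor D(t).
        \mathbf{x}(\mathbf{q},t) = \mathbf{q} + D(t)\,\boldsymbol{\Psi}(\mathbf{q})

        % Hierarchical amplitudes of the density contrast delta and the velocity
        % divergence theta, built from connected p-th order moments.
        S_p \equiv \frac{\langle \delta^p \rangle_c}{\langle \delta^2 \rangle^{\,p-1}},
        \qquad
        T_p \equiv \frac{\langle \theta^p \rangle_c}{\langle \theta^2 \rangle^{\,p-1}}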

    Development of grid frameworks for clinical trials and epidemiological studies

    E-Health initiatives such as electronic clinical trials and epidemiological studies require access to and usage of a range of both clinical and other data sets. Such data sets are typically only available over many heterogeneous domains where a plethora of often legacy-based or in-house/bespoke IT solutions exist. Considerable efforts and investments are being made across the UK to upgrade the IT infrastructures across the National Health Service (NHS), such as the National Program for IT in the NHS (NPFIT) [1]. However, currently independent and largely non-interoperable IT solutions exist across hospitals, trusts, disease registries and GP practices; this includes security as well as more general compute and data infrastructures. Grid technology allows issues of distribution and heterogeneity to be overcome; however, the clinical trials domain places special demands on security and data which the Grid community has hitherto not satisfactorily addressed. These challenges are often common across many studies and trials, hence the development of a re-usable framework for the creation and subsequent management of such infrastructures is highly desirable. In this paper we present the challenges in developing such a framework and outline initial scenarios and prototypes developed within the MRC funded Virtual Organisations for Trials and Epidemiological Studies (VOTES) project [2].

    Chaste: a test-driven approach to software development for biological modelling

    Chaste (‘Cancer, heart and soft-tissue environment’) is a software library and a set of test suites for computational simulations in the domain of biology. Current functionality has arisen from modelling in the fields of cancer, cardiac physiology and soft-tissue mechanics. It is released under the LGPL 2.1 licence. Chaste has been developed using agile programming methods. The project began in 2005 when it was reasoned that the modelling of a variety of physiological phenomena required both a generic mathematical modelling framework and a generic computational/simulation framework. The Chaste project evolved from the Integrative Biology (IB) e-Science Project, an inter-institutional project aimed at developing a suitable IT infrastructure to support physiome-level computational modelling, with a primary focus on cardiac and cancer modelling.

    Anytime Hierarchical Clustering

    We propose a new anytime hierarchical clustering method that iteratively transforms an arbitrary initial hierarchy on the configuration of measurements along a sequence of trees which, we prove, must terminate for a fixed data set in a chain of nested partitions satisfying a natural homogeneity requirement. Each recursive step re-edits the tree so as to improve a local measure of cluster homogeneity that is compatible with a number of commonly used (e.g., single, average, complete) linkage functions. As an alternative to the standard batch algorithms, we present numerical evidence to suggest that appropriate adaptations of this method can yield decentralized, scalable algorithms suitable for distributed/parallel computation of clustering hierarchies and online tracking of clustering trees applicable to large, dynamically changing databases and anomaly detection. Comment: 13 pages, 6 figures, 5 tables; in preparation for submission to a conference
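
    The local-improvement flavour of such a method can be pictured with a small, generic sketch: start from an arbitrary hierarchy, repeatedly apply a local edit, and keep the edit only when it improves a homogeneity score, so that a valid tree is available whenever the loop is stopped. This is not the paper's algorithm (whose edits and objective are tied to linkage functions and come with a termination proof); the leaf-swap move, the cost function and all names below are illustrative assumptions.

        import copy, random
        from itertools import combinations

        def leaves(node):
            # A leaf is an int (a point index); an internal node is a list of children.
            return [node] if isinstance(node, int) else [p for c in node for p in leaves(c)]

        def internal_nodes(node):
            if isinstance(node, int):
                return []
            return [node] + [n for c in node for n in internal_nodes(c)]

        def cost(tree, dist):
            """Homogeneity objective: sum over internal nodes of the average pairwise
            distance between the points in that node's subtree (smaller is better)."""
            total = 0.0
            for node in internal_nodes(tree):
                pairs = list(combinations(leaves(node), 2))
                if pairs:
                    total += sum(dist[i][j] for i, j in pairs) / len(pairs)
            return total

        def random_leaf_swap(tree, rng):
            """Local edit: exchange the points stored at two leaf positions."""
            nodes = internal_nodes(tree)
            a, b = rng.choice(nodes), rng.choice(nodes)
            i, j = rng.randrange(len(a)), rng.randrange(len(b))
            if isinstance(a[i], int) and isinstance(b[j], int):
                a[i], b[j] = b[j], a[i]

        def anytime_refine(tree, dist, steps=500, seed=0):
            """Greedy anytime loop: after every step the current tree is a valid
            hierarchy, so the procedure can be interrupted at any time."""
            rng = random.Random(seed)
            best, best_cost = copy.deepcopy(tree), cost(tree, dist)
            for _ in range(steps):
                candidate = copy.deepcopy(best)
                random_leaf_swap(candidate, rng)
                c = cost(candidate, dist)
                if c < best_cost:
                    best, best_cost = candidate, c
            return best

        # Toy usage: four 1-D points; dist[i][j] = |x_i - x_j|.
        xs = [0.0, 0.1, 5.0, 5.2]
        dist = [[abs(a - b) for b in xs] for a in xs]
        start = [[0, 2], [1, 3]]            # a deliberately bad initial hierarchy
        print(anytime_refine(start, dist))  # points 0,1 end up under one child, 2,3 under the other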