
    Haskell for OCaml programmers

    This introduction to Haskell is written to optimize learning by programmers who already know OCaml.
    Comment: 16 pages

    Rust for functional programmers

    This article provides an introduction to Rust, a systems language by Mozilla, to programmers already familiar with Haskell, OCaml or other functional languages.
    Comment: 17 pages

    SL: a "quick and dirty" but working intermediate language for SVP systems

    The CSA group at the University of Amsterdam has developed SVP, a framework to manage and program many-core and hardware-multithreaded processors. In this article, we introduce the intermediate language SL, a common vehicle to program SVP platforms. SL is designed as an extension to the standard C language (ISO C99/C11). It includes primitive constructs to bulk-create threads, bulk-synchronize on the termination of threads, and communicate between threads over word-sized dataflow channels. It is intended for use as a target language for higher-level parallelizing compilers. SL is a research vehicle; as of this writing, it is the only interface language for programming a main SVP platform, the new Microgrid chip architecture. This article provides an overview of the language, to complement a detailed specification available separately.
    Comment: 22 pages, 3 figures, 18 listings, 1 table
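The constructs the abstract names — bulk thread creation, bulk synchronization on termination, and word-sized dataflow channels — have rough analogues in ordinary threading libraries. The following sketch is not SL (whose C-based syntax is defined in the separate specification); it only mirrors the same pattern using Python's standard-library thread pool, with a queue standing in for a dataflow channel.

```python
# Hedged analogue, not SL itself: this only mirrors the pattern the
# abstract describes using standard-library primitives.
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

def family_sum_of_squares(values):
    channel = Queue()  # stands in for a word-sized dataflow channel

    def worker(v):
        channel.put(v * v)  # each thread writes one result word

    with ThreadPoolExecutor(max_workers=len(values)) as pool:
        futures = [pool.submit(worker, v) for v in values]  # bulk create
        for f in futures:
            f.result()  # bulk synchronize on termination
    return sum(channel.get() for _ in values)
```

For example, `family_sum_of_squares([1, 2, 3])` creates one thread per value and sums 1 + 4 + 9 = 14 from the channel after all threads have terminated.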

    Optimizing for confidence - Costs and opportunities at the frontier between abstraction and reality

    Is there a relationship between computing costs and the confidence people place in the behavior of computing systems? What are the tuning knobs one can use to optimize systems for human confidence instead of correctness in purely abstract models? This report explores these questions by reviewing the mechanisms by which people build confidence in the match between the physical-world behavior of machines and their abstract intuition of this behavior according to models or programming language semantics. We highlight in particular that a bottom-up approach relies on arbitrary trust in the accuracy of I/O devices, and that there exist clear cost trade-offs in the use of I/O devices in computing systems. We also show various methods which alleviate the need to trust I/O devices arbitrarily and instead build confidence incrementally "from the outside", by considering systems as black-box entities. We highlight cases where these approaches can reach a given confidence level at a lower cost than bottom-up approaches.
    Comment: 11 pages, 1 figure

    Characterizing traits of coordination

    How can one recognize coordination languages and technologies? As this report shows, the common approach that contrasts coordination with computation is intellectually unsound: depending on the selected understanding of the word "computation", it either captures too many or too few programming languages. Instead, we argue for objective criteria that can be used to evaluate how well programming technologies offer coordination services. Of the various criteria commonly used in this community, we are able to isolate three that are strongly characterizing: black-box componentization, which we had identified previously, but also interface extensibility and customizability of run-time optimization goals. These criteria are well matched by Intel's Concurrent Collections and AstraKahn, and also by OpenCL, POSIX and VMware ESX.
    Comment: 11 pages, 3 tables

    The essence of component-based design and coordination

    Is there a characteristic of coordination languages that makes them qualitatively different from general programming languages and deserves special academic attention? This report proposes a nuanced answer in three parts. The first part highlights that coordination languages are the means by which composite software applications can be specified using components that are only available separately, or later in time, via standard interfacing mechanisms. The second part highlights that most currently used languages provide mechanisms to use externally provided components, and thus exhibit some elements of coordination. However, not all do, and the availability of an external interface thus forms an objective and qualitative criterion that distinguishes coordination. The third part argues that despite the qualitative difference, the segregation of academic attention away from general language design and implementation has non-obvious cost trade-offs.
    Comment: 8 pages, 2 figures, 3 tables

    Luminosity Spectrum Reconstruction at Linear Colliders

    A good knowledge of the luminosity spectrum is mandatory for many measurements at future e+e- colliders. As the beam parameters determining the luminosity spectrum cannot be measured precisely, the luminosity spectrum has to be measured through a gauge process with the detector. The measured distributions, used to reconstruct the spectrum, depend on Initial State Radiation, the cross-section, and Final State Radiation. To extract the basic luminosity spectrum, a parametric model of the luminosity spectrum is created, in this case for the spectrum at the 3 TeV Compact Linear Collider (CLIC). The model is used within a reweighting technique to extract the luminosity spectrum from measured Bhabha event observables, taking all relevant effects into account. The centre-of-mass energy spectrum is reconstructed within 5% over the full validity range of the model. The reconstructed spectrum does not result in a significant bias or systematic uncertainty in the exemplary physics benchmark process of smuon pair production.
    Comment: Version accepted by EPJC. Minor changes
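The reweighting technique can be illustrated with a deliberately simplified one-parameter toy: fix one Monte Carlo sample, reweight it under each candidate parameter value of the spectrum model, and keep the value whose reweighted histogram best matches the measured distribution. The weight function, the "tail" parameter, and all numbers below are invented for illustration; they bear no relation to the actual multi-parameter CLIC beam-spectrum parametrization.

```python
# Hedged sketch: a one-parameter toy of the reweighting technique,
# using only the Python standard library. Everything here is invented
# for illustration, not the paper's actual model.
import random

random.seed(1)

def spectrum_weight(x, tail):
    # Toy spectrum model: relative weight of an event at scaled energy
    # x in [0, 1), with `tail` controlling a low-energy tail.
    return 1.0 + tail * (1.0 - x)

def binned(sample, weights, nbins=10):
    # Weighted, normalized histogram of the observable.
    hist = [0.0] * nbins
    for x, w in zip(sample, weights):
        hist[min(int(x * nbins), nbins - 1)] += w
    total = sum(hist)
    return [h / total for h in hist]

def fit_tail(measured_hist, mc_sample, grid):
    # Reweight the *same* fixed MC sample under each candidate value
    # and keep the one whose histogram best matches the measurement.
    best, best_score = None, float("inf")
    for tail in grid:
        mc_hist = binned(mc_sample,
                         [spectrum_weight(x, tail) for x in mc_sample])
        score = sum((m - p) ** 2 for m, p in zip(measured_hist, mc_hist))
        if score < best_score:
            best, best_score = tail, score
    return best

# Pseudo-data drawn with true tail = 2.0; MC drawn flat and reweighted.
data_sample = [random.random() for _ in range(20000)]
measured = binned(data_sample,
                  [spectrum_weight(x, 2.0) for x in data_sample])
mc_sample = [random.random() for _ in range(20000)]
best_tail = fit_tail(measured, mc_sample, [i * 0.25 for i in range(17)])
```

The grid search recovers a value close to the true tail parameter of 2.0; the real analysis replaces the least-squares grid scan with a proper multi-dimensional fit over Bhabha observables.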

    Development of a Case-Mix Funding System for Adults with Combined Vision and Hearing Loss

    Background: Adults with vision and hearing loss, or dual sensory loss (DSL), present with a wide range of needs and abilities. This creates many challenges when attempting to set the most appropriate and equitable funding levels. Case-mix (CM) funding models represent one method for understanding client characteristics that correlate with resource intensity. Methods: A CM model was developed based on a derivation sample (n = 182) and tested with a replication sample (n = 135) of adults aged 18+ with known DSL who were living in the community. All items within the CM model came from a standardized, multidimensional assessment, the interRAI Community Health Assessment and the Deafblind Supplement. The main outcome was a summary of formal and informal service costs, which included intervenor and interpreter support, in-home nursing, personal support and rehabilitation services. Informal costs were estimated based on a wage rate of half that for a professional service provider ($10/hour). Decision-tree analysis was used to create groups with homogeneous resource utilization. Results: The resulting CM model had 9 terminal nodes. The CM index (CMI) showed a 35-fold range for total costs. In both the derivation and replication samples, 4 groups (of 18, or 22.2%) had a coefficient of variation that exceeded the overall level of variation. Explained variance in the derivation sample was 67.7% for total costs versus 28.2% in the replication sample. A strong correlation was observed between the CMI values in the two samples (r = 0.82; p = 0.006). Conclusions: The derived CM funding model for adults with DSL differentiates resource intensity across 9 main groups, and in both datasets there is evidence that these CM groups appropriately identify clients based on need for formal and informal support.
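The case-mix construction described above — partition clients into terminal groups on assessment items, then index each group's mean cost to the overall mean — can be sketched as follows. The two assessment items and the dollar figures are invented purely to illustrate the CMI arithmetic; they are not taken from the interRAI instrument, and the hand-written split stands in for the paper's decision-tree analysis.

```python
# Hedged sketch of case-mix index (CMI) arithmetic: group mean cost
# divided by overall mean cost. Items and costs are invented.
def case_mix_index(clients):
    overall = sum(c["cost"] for c in clients) / len(clients)
    groups = {}
    for c in clients:
        # A hand-written two-level "tree"; a real derivation chooses
        # splits that minimize within-group cost variance.
        key = (c["needs_intervenor"], c["adl_impaired"])
        groups.setdefault(key, []).append(c["cost"])
    return {k: (sum(v) / len(v)) / overall for k, v in groups.items()}

clients = [
    {"needs_intervenor": True,  "adl_impaired": True,  "cost": 900},
    {"needs_intervenor": True,  "adl_impaired": False, "cost": 500},
    {"needs_intervenor": False, "adl_impaired": True,  "cost": 300},
    {"needs_intervenor": False, "adl_impaired": False, "cost": 100},
]
cmi = case_mix_index(clients)
```

Here the overall mean cost is $450, so the highest-need group carries a CMI of 2.0 and the lowest about 0.22 — a 9-fold range in this toy, where the paper reports a 35-fold range across its 9 terminal nodes.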