
    Comprehensive comprehensions


    Few Versatile vs. Many Specialized Collections: How to design a collection library for exploratory programming?

    While an integral part of all programming languages, the design of collection libraries is rarely studied. This work briefly reviews the collection libraries of 14 languages to identify possible design dimensions. Some languages have surprisingly few but versatile collections, while others have large libraries with many specialized collections. Based on the identified design dimensions, we argue that a small collection library with only a sequence, a map, and a set type is a suitable choice to facilitate exploratory programming. Such a design minimizes the number of decisions programmers have to make when dealing with collections, and it improves the discoverability of collection operations. We further discuss techniques that make their implementation practical from a performance perspective. Based on these arguments, we conclude that languages which aim to support exploratory programming should strive for small and versatile collection libraries.
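The argument can be illustrated with a minimal sketch: a small exploratory task needs nothing beyond the three proposed collection types. The example below uses Python's list, dict, and set as stand-ins for the sequence, map, and set types the abstract argues for; the word-count task itself is a hypothetical illustration, not taken from the paper.

```python
# Exploratory analysis with only the three versatile collections the
# abstract argues for: a sequence (list), a map (dict), and a set.
words = "the quick brown fox jumps over the lazy dog the fox".split()

counts = {}                          # the map doubles as a frequency counter
for w in words:
    counts[w] = counts.get(w, 0) + 1

unique = set(words)                  # the set gives distinct elements
by_freq = sorted(unique, key=lambda w: -counts[w])  # the sequence orders them

print(by_freq[0], counts[by_freq[0]])
```

No specialized counter, default-valued map, or ordered variant is needed; every step above is discoverable from the three core types alone, which is exactly the discoverability argument the abstract makes.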

    Specific "scientific" data structures, and their processing

    Programming physicists use, as all programmers, arrays, lists, tuples, records, etc., and this requires some change in their thought patterns while converting their formulae into code, since the "data structures" operated upon while elaborating a theory and its consequences are rather: power series and Padé approximants, differential forms and other instances of differential algebras, functionals (for the variational calculus), trajectories (solutions of differential equations), Young diagrams and Feynman graphs, etc. Such data is often used in a [semi-]numerical setting, not necessarily a "symbolic" one appropriate for computer algebra packages. Modules adapted to such data may be "just libraries", but often they become specific, embedded sub-languages, typically mapped into object-oriented frameworks, with overloaded mathematical operations. Here we present a functional approach to this philosophy. We show how the usage of Haskell datatypes and - fundamental for our tutorial - the application of lazy evaluation makes it possible to operate upon such data (in particular, the "infinite" sequences) in a natural and comfortable manner. Comment: In Proceedings DSL 2011, arXiv:1109.032
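The lazy-evaluation idea at the heart of this tutorial can be approximated in any language with lazy sequences. As a hedged sketch (Python generators standing in for Haskell's lazy lists, and Fraction for exact coefficients; the choice of exp(x) as the series is mine, not an example from the paper), an "infinite" power series is just a generator of coefficients, and only the demanded coefficients are ever computed:

```python
from fractions import Fraction
from itertools import count, islice

def exp_series():
    """Lazily yield the coefficients 1/n! of the power series of exp(x)."""
    c = Fraction(1)
    for n in count(1):
        yield c
        c /= n

def series_add(s, t):
    """Term-wise sum of two lazy power series."""
    for a, b in zip(s, t):
        yield a + b

def take(n, s):
    """Force the first n coefficients of a lazy series."""
    return list(islice(s, n))

print(take(5, exp_series()))
```

As in the Haskell setting, `series_add` is defined on whole infinite series, yet evaluating `take(5, ...)` performs only five additions.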

    Repetitive Reduction Patterns in Lambda Calculus with letrec (Work in Progress)

    For the lambda-calculus with letrec we develop an optimisation based on the contraction of a certain class of 'future' (also: virtual) redexes. In the implementation of functional programming languages it is common practice to perform beta-reductions at compile time whenever possible, in order to produce code that requires fewer reductions at run time. This is, however, in principle limited to redexes and created redexes that are 'visible' (in the sense that they can be contracted without the need for unsharing), and cannot generally be extended to redexes that are concealed by sharing constructs such as letrec. In the case of recursion, concealed redexes become visible only after unwindings during evaluation, and then have to be contracted time and again. We observe that in some cases such redexes exhibit a certain form of repetitive behaviour at run time. We describe an analysis for identifying binders that give rise to such repetitive reduction patterns, and eliminate them by a sort of predictive contraction. Thereby these binders are lifted out of recursive positions or eliminated altogether, reducing the number of beta-reductions required for each recursive iteration. Both our analysis and our simplification are suitable for integration into existing compilers for functional programming languages as an additional optimisation phase. With this work we hope to contribute to increasing the efficiency of executing programs written in such languages. Comment: In Proceedings TERMGRAPH 2011, arXiv:1102.226
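The shape of the optimisation can be illustrated, very loosely, by analogy with hoisting recursion-invariant work out of a recursive binder. The toy Python sketch below is only an analogy of my own (the paper works on lambda-calculus terms with letrec, not on Python code): the repeated "redex" `k * k` is rebuilt on every unwinding in the naive version, and contracted once, outside the recursion, in the optimised one.

```python
def sum_scaled(xs, k):
    """Naive form: the repeated work k * k is redone on every unwinding."""
    if not xs:
        return 0
    return (k * k) * xs[0] + sum_scaled(xs[1:], k)

def sum_scaled_opt(xs, k):
    """After a 'predictive contraction' in spirit: the repeated work is
    contracted once and the binder lifted out of the recursive position."""
    kk = k * k                      # contracted once, before recursion
    def go(xs):
        if not xs:
            return 0
        return kk * xs[0] + go(xs[1:])
    return go(xs)

print(sum_scaled([1, 2, 3], 4), sum_scaled_opt([1, 2, 3], 4))
```

Both versions compute the same result; the second performs the invariant multiplication once per call rather than once per recursive iteration.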

    Achieving High-Performance the Functional Way: A Functional Pearl on Expressing High-Performance Optimizations as Rewrite Strategies

    Optimizing programs to run efficiently on modern parallel hardware is hard but crucial for many applications. The predominantly used imperative languages - like C or OpenCL - force the programmer to intertwine the code describing functionality and optimizations. This results in a portability nightmare that is particularly problematic given the accelerating trend towards specialized hardware devices to further increase efficiency. Many emerging DSLs used in performance-demanding domains such as deep learning or high-performance image processing attempt to simplify or even fully automate the optimization process. Using a high-level - often functional - language, programmers focus on describing functionality in a declarative way. In some systems such as Halide or TVM, a separate schedule specifies how the program should be optimized. Unfortunately, these schedules are not written in well-defined programming languages. Instead, they are implemented as a set of ad-hoc predefined APIs that the compiler writers have exposed. In this functional pearl, we show how to employ functional programming techniques to solve this challenge with elegance. We present two functional languages that work together - each addressing a separate concern. RISE is a functional language for expressing computations using well-known functional data-parallel patterns. ELEVATE is a functional language for describing optimization strategies. A high-level RISE program is transformed into a low-level form using optimization strategies written in ELEVATE. From the rewritten low-level program, high-performance parallel code is automatically generated. In contrast to existing high-performance domain-specific systems with scheduling APIs, in our approach programmers are not restricted to a set of built-in operations and optimizations but freely define their own computational patterns in RISE and optimization strategies in ELEVATE in a composable and reusable way. We show how our holistic functional approach achieves competitive performance with the state-of-the-art imperative systems Halide and TVM.
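The core idea of strategies-as-programs can be sketched in a few lines: a strategy is a function from a term to either a rewritten term or failure, and combinators compose strategies. The miniature below is a hypothetical Python sketch of that idea (ELEVATE itself is a standalone functional language; the tuple term encoding and the `add_zero` rule are invented for illustration):

```python
# A strategy maps a term to a rewritten term, or to None on failure.
def ident(term):
    return term

def seq(s, t):
    """Apply s, then t; fail if either fails."""
    def go(term):
        r = s(term)
        return t(r) if r is not None else None
    return go

def lchoice(s, t):
    """Left-biased choice: try s, fall back to t."""
    def go(term):
        r = s(term)
        return r if r is not None else t(term)
    return go

def try_(s):
    """Apply s if it succeeds, otherwise leave the term unchanged."""
    return lchoice(s, ident)

# A single rewrite rule on tuple-encoded terms: x + 0 -> x.
def add_zero(term):
    if isinstance(term, tuple) and term[0] == "add" and term[2] == 0:
        return term[1]
    return None

print(try_(add_zero)(("add", "x", 0)))   # rewritten to "x"
print(try_(add_zero)(("mul", "x", 0)))   # left unchanged
```

Because strategies are ordinary values, users can build their own optimizations from rules and combinators instead of being limited to a fixed scheduling API, which is the contrast with Halide/TVM that the abstract draws.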

    To-many or to-one? All-in-one! Efficient purely functional multi-maps with type-heterogeneous hash-tries

    An immutable multi-map is a many-to-many map data structure with expected fast insert and lookup operations. This data structure is used in applications that process graphs or many-to-many relations, such as compilers, runtimes of programming languages, or static analysis of object-oriented systems. Collection data structures are expected to carefully balance execution time of operations with memory consumption characteristics, and need to scale gracefully from a few elements to multiple gigabytes at least. When processing larger in-memory data sets, the overhead of the data structure encoding itself becomes a memory usage bottleneck, dominating the overall performance. In this paper we propose AXIOM, a novel hash-trie data structure that allows for a highly efficient and type-safe multi-map encoding by distinguishing inlined values of singleton sets from nested sets of multi-mappings.
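The key encoding trick, distinguishing an inlined singleton value from a nested set, can be sketched with a small mutable Python class. This is a hypothetical illustration of the idea only; the paper's data structure is an immutable hash-trie, and the class name and API below are invented:

```python
class ToyMultiMap:
    """Multi-map that stores a singleton value inline and only allocates a
    nested set once a key maps to more than one value."""
    _ONE, _MANY = 0, 1

    def __init__(self):
        self._data = {}

    def insert(self, key, value):
        if key not in self._data:
            self._data[key] = (self._ONE, value)          # inlined, no set
            return
        tag, payload = self._data[key]
        if tag == self._ONE:
            if payload != value:
                self._data[key] = (self._MANY, {payload, value})  # promote
        else:
            payload.add(value)

    def get(self, key):
        tag, payload = self._data.get(key, (self._MANY, set()))
        return {payload} if tag == self._ONE else set(payload)

m = ToyMultiMap()
m.insert("a", 1)          # singleton: stored inline
m.insert("a", 2)          # now promoted to a nested set
m.insert("b", 7)          # stays inline
print(m.get("a"), m.get("b"))
```

The memory argument from the abstract shows up even in this toy: keys that map to a single value (often the common case for to-one relations) pay no per-key set allocation at all.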

    A computationally efficient Kalman filter based estimator for updating look-up tables applied to NOx estimation in diesel engines

    NOx estimation in diesel engines is an up-to-date problem, but some issues still need to be solved. Raw sensor signals are not fast enough for real-time use, while control-oriented models suffer from drift and aging. A control-oriented gray-box model based on engine maps and calibrated off-line is used as the benchmark model for NOx estimation. The calibration effort is important and engine-data-dependent, which motivates the use of adaptive look-up tables. In addition, look-up tables are often used in automotive control systems, and there is a need for systematic methods that can estimate or update them on-line. For that purpose, Kalman filter (KF) based methods are explored, as they have the interesting property of tracking the estimation error in a covariance matrix. Nevertheless, when coping with large systems the computational burden is high, in terms of both time and memory, compromising implementation in commercial electronic control units. However, look-up table estimation has a structure that is exploited here to develop a memory- and computationally efficient approximation to the KF, named the Simplified Kalman filter (SKF). Convergence and robustness are evaluated in simulation and compared to both a full KF and a minimal steady-state version that neglects the variance information. The SKF is used for the online calibration of an adaptive model for NOx estimation in dynamic engine cycles. Prediction results are compared with those of the benchmark model and of the other methods. Furthermore, actual online estimation of NOx is solved by means of the proposed adaptive structure. Results on dynamic tests with a diesel engine and the computational study demonstrate the feasibility and capabilities of the method for implementation in engine control units. (C) 2013 Elsevier Ltd. All rights reserved.
    Guardiola, C.; Pla Moreno, B.; Blanco-Rodriguez, D.; Eriksson, L. (2013). A computationally efficient Kalman filter based estimator for updating look-up tables applied to NOx estimation in diesel engines. Control Engineering Practice, 21(11):1455-1468. doi:10.1016/j.conengprac.2013.06.015
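The per-cell update behind such an adaptive table can be sketched as a scalar Kalman step. The code below is a hypothetical one-cell illustration: the tuning values Q and R, and the treatment of each look-up-table cell independently, are simplifying assumptions of mine; the paper's SKF additionally exploits the table structure to avoid a full covariance matrix.

```python
def cell_update(theta, P, y, Q=1e-4, R=1.0):
    """One scalar Kalman step for a single look-up-table cell.
    theta: stored cell value, P: its error variance, y: new measurement
    at that operating point. Q models drift/aging, R measurement noise."""
    P = P + Q                        # predict: uncertainty grows over time
    K = P / (P + R)                  # Kalman gain
    theta = theta + K * (y - theta)  # correct toward the measurement
    P = (1.0 - K) * P                # updated error variance
    return theta, P

theta, P = 0.0, 1.0                  # poorly known cell, large initial variance
for _ in range(100):
    theta, P = cell_update(theta, P, y=1.0)
print(round(theta, 3), round(P, 5))
```

Tracking P per cell is exactly the "estimation error in a covariance matrix" property the abstract highlights; the minimal steady-state variant it compares against would instead freeze K at a constant value.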

    A Formal Proof of PAC Learnability for Decision Stumps

    We present a formal proof in Lean of probably approximately correct (PAC) learnability of the concept class of decision stumps. This classic result in machine learning theory derives a bound on error probabilities for a simple type of classifier. Though such a proof appears simple on paper, analytic and measure-theoretic subtleties arise when carrying it out fully formally. Our proof is structured so as to separate reasoning about deterministic properties of a learning function from proofs of measurability and analysis of probabilities. Comment: 13 pages, appeared in Certified Programs and Proofs (CPP) 202
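The learning function whose deterministic properties are being reasoned about is very simple. As a hedged Python sketch (assuming stumps of the form "label 1 iff x <= t" and a learner returning the tightest consistent threshold; these details are my assumptions, not taken from the paper's Lean development):

```python
import random

def stump_predict(t, x):
    """Decision stump: label 1 iff x <= t."""
    return 1 if x <= t else 0

def learn_stump(sample):
    """ERM learner: the tightest threshold consistent with the sample is
    the largest positively-labelled point (0.0 if there is none)."""
    positives = [x for x, y in sample if y == 1]
    return max(positives, default=0.0)

random.seed(0)
t_star = 0.5                                   # unknown target concept
xs = [random.random() for _ in range(200)]
sample = [(x, stump_predict(t_star, x)) for x in xs]
t_hat = learn_stump(sample)
print(t_hat <= t_star)                         # the learner never overshoots
```

Since the learned threshold never exceeds the target, the learner errs only on the interval (t_hat, t_star], whose probability mass shrinks as the sample grows; bounding the probability that this mass stays large is the PAC argument, and it is the probabilistic/measure-theoretic half that the paper separates from the deterministic reasoning above.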

    A Phase 1 Trial of pharmacologic interactions between transdermal selegiline and a 4-hour cocaine infusion

    Background: The selective MAO-B inhibitor selegiline has been evaluated in clinical trials as a potential medication for the treatment of cocaine dependence. This study evaluated the safety of, and pharmacologic interactions between, 7 days of transdermal selegiline dosed with patches (Selegiline Transdermal System, STS) that deliver 6 mg/24 hours and 2.5 mg/kg of cocaine administered over 4 hours.
    Methods: Twelve nondependent, cocaine-experienced subjects received deuterium-labeled cocaine-d5 intravenously (IV), 0.5 mg/kg over 10 minutes followed by 2 mg/kg over 4 hours, before and after one week of transdermal selegiline 6 mg/24 hours. Plasma and urine were collected for analysis of selegiline, cocaine, catecholamine, and metabolite concentrations. Pharmacodynamic measures were obtained.
    Results: Selegiline did not change cocaine pharmacokinetic parameters. Selegiline administration increased phenylethylamine (PEA) urinary excretion and decreased urinary MHPG-sulfate concentration after cocaine when compared to cocaine alone. No serious adverse effects occurred with the combination of selegiline and cocaine, and cocaine-induced physiological effects were unchanged after selegiline. Only one peak subjective cocaine effects rating changed, and only a few subjective ratings decreased across time after selegiline.
    Conclusion: No pharmacological interaction occurred between selegiline and a substantial dose of intravenous cocaine, suggesting the combination will be safe in pharmacotherapy trials. Selegiline produced few changes in subjective response to the cocaine challenge, perhaps because some psychoactive neurotransmitters changed in opposite directions.

    Search for single production of vector-like quarks decaying into Wb in pp collisions at √s = 8 TeV with the ATLAS detector
