
    Technical Dimensions of Programming Systems

    Programming requires much more than just writing code in a programming language. It is usually done in the context of a stateful environment, by interacting with a system through a graphical user interface. Yet this wide space of possibilities lacks a common structure for navigation. Work on programming systems fails to form a coherent body of research, making it hard to build on past work and advance the state of the art. In computer science, much has been said and done to enable the comparison of programming languages, yet no similar theory exists for programming systems; we believe that programming systems deserve a theory too. We present a framework of technical dimensions that captures the underlying characteristics of programming systems and provides a means for conceptualizing and comparing them. We identify technical dimensions by examining past influential programming systems and reviewing their design principles, technical capabilities, and styles of user interaction. Technical dimensions capture characteristics that may be studied, compared, and advanced independently, making it possible to talk about programming systems in a way that can be shared and constructively debated rather than relying solely on personal impressions. Our framework is derived through a qualitative analysis of past programming systems. We outline two concrete ways of using it: first, we show how it can be used to analyze a recently developed novel programming system; then, we use it to identify an interesting unexplored point in the design space of programming systems. Much research effort focuses on building programming systems that are easier to use, accessible to non-experts, moldable, and/or powerful, but such efforts are disconnected: they are informal, guided by the personal vision of their authors, and thus evaluable and comparable only on the basis of individual experience with them. By providing foundations for more systematic research, we can help programming systems researchers to stand, at last, on the shoulders of giants.
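The comparison the framework enables can be pictured as systems occupying points in a space of dimensions. The sketch below is purely illustrative: the dimension names and scores are invented placeholders, not the paper's actual coding of any system.

```python
# Toy sketch: representing programming systems as profiles over technical
# dimensions so they can be compared field by field. Dimension names and
# values are illustrative placeholders, not the framework's real dimensions.

def compare(a, b):
    """Return the dimensions on which two system profiles differ."""
    return {dim: (a[dim], b[dim])
            for dim in a.keys() & b.keys()
            if a[dim] != b[dim]}

smalltalk_like = {"feedback_loops": "live", "notation": "textual", "errors": "runtime"}
spreadsheet_like = {"feedback_loops": "live", "notation": "grid", "errors": "inline"}

diff = compare(smalltalk_like, spreadsheet_like)
# The two profiles agree on 'feedback_loops' and differ on the other two,
# giving a shared vocabulary for a design comparison.
```

Structuring the comparison this way is what lets individual dimensions be "studied, compared and advanced independently", as the abstract puts it.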

    Database for validation of thermo-hydro-chemo-mechanical behaviour in bentonites

    This paper presents a database of thermo-hydro-chemo-mechanical tests on bentonites, named “Bento_DB4THCM”. After a comprehensive literature review, a set of experimental tests has been compiled. The experimental data are used to perform validation exercises for numerical codes that simulate the coupled thermo-hydro-mechanical and geochemical behaviour of bentonites. The database contains the information required to simulate each experimental test as a boundary value problem. The validation exercises cover a wide range of clays, including the best-known bentonites (MX-80, FEBEX, GMZ) as well as others. The results collected in this database come from free swelling, swelling under load, swelling pressure, and squeezing tests. The database is attached as Supplementary material.
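A database of this kind is typically queried by clay and test type before a validation run. The snippet below is a minimal in-memory sketch of such filtering; the record schema (fields `clay`, `test_type`, `dry_density`) is a guess for illustration, not Bento_DB4THCM's actual format.

```python
# Hypothetical records mimicking entries of a bentonite test database.
records = [
    {"clay": "MX-80", "test_type": "free_swelling", "dry_density": 1.60},
    {"clay": "FEBEX", "test_type": "swelling_pressure", "dry_density": 1.65},
    {"clay": "GMZ", "test_type": "swelling_under_load", "dry_density": 1.70},
]

def select(records, **criteria):
    """Return the records matching all given field=value criteria."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

mx80_swelling = select(records, clay="MX-80", test_type="free_swelling")
```

Each selected record would then supply the initial and boundary conditions for one boundary value problem in the validation exercise.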

    Construction of radon chamber to expose active and passive detectors

    In this research and development work, we present the design and manufacture of a radon chamber (the PUCP radon chamber), a necessary tool for calibrating passive detectors, verifying the operation of active radon monitors, and calibrating the diffusion chambers used in radon measurements in air and soils. The first chapter introduces radon gas and the reference levels of radon concentration given by various organizations. Parameters that influence the calibration factor of the LR 115 type 2 film detector, such as the energy window, critical angle, and effective volume, are studied; these are strongly related to the etching and track-counting processes, all treated from a semi-empirical approach in the second chapter. The third chapter reviews radon chambers reported in the literature, classified by size, mode of operation, and the radon source they use. The design and construction of the chamber are then presented, and the use of uranium ore (autunite) as the chamber's source is discussed. In the fourth chapter, the chamber is characterized through its leakage constant, the homogeneity of the radon concentration, its operating regimes, and the saturation concentrations that can be reached. The fifth chapter contains the procedures and methodology used in this work, together with uses and applications of the PUCP radon chamber: the calibration of cylindrical metallic diffusion chambers based on CR-39 chip detectors, taking the track-overlapping effect into account; the determination of transmission factors of gaps and pinholes for the same diffusion chambers; and the permeability of glass-fiber filters to 222Rn, obtained after reaching equilibrium via the Ramachandran model, taking into account a partition function given as a ratio of track densities. The results of this research have been published in indexed journals. Finally, conclusions and recommendations that reflect the fulfillment of the aims of this thesis are presented.
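The saturation and leakage behaviour mentioned above follows the standard ingrowth law for a closed chamber with a constant source: C(t) = C_sat (1 − exp(−λ_eff t)), where the effective constant λ_eff adds the chamber leakage rate to the 222Rn radioactive decay constant. The sketch below uses this textbook relation; the source strength and leakage value are illustrative, not the PUCP chamber's measured parameters.

```python
import math

# 222Rn decay constant (half-life ~3.82 days), expressed per hour.
LAM_DECAY = math.log(2) / (3.82 * 24)
# Assumed chamber leakage rate, per hour (illustrative placeholder).
LAM_LEAK = 0.005

def concentration(t_hours, c_sat=10_000.0):
    """Radon concentration (Bq/m^3) after t hours, approaching c_sat."""
    lam_eff = LAM_DECAY + LAM_LEAK
    return c_sat * (1.0 - math.exp(-lam_eff * t_hours))

# Concentration rises monotonically and, after several mean lives,
# the chamber is effectively at its saturation concentration.
```

A fitted λ_eff from such a curve, compared against the known decay constant, is one way a leakage rate can be extracted during chamber characterization.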

    A review of process intensified CO2 capture in RPB for sustainability and contribution to industrial net zero

    Carbon dioxide (CO2), a significant greenhouse gas released from power plants and industries, substantially impacts climate change; minimizing these emissions and achieving carbon net zero is essential globally. To reduce CO2 emissions into the atmosphere, post-combustion carbon capture from large point emitters by chemical absorption, in which the gas is absorbed into a capturing fluid, is a widely used and effective mechanism. Researchers have studied the process using conventional columns; however, process intensification technology is needed because of the high capital cost, the height of absorption columns, and the low energy efficiency of traditional columns. Rotating packed bed (RPB) process intensification equipment has been identified as a suitable technology for enhanced carbon capture using an absorbing fluid. This article reviews and discusses recent model developments in process-intensified post-combustion CO2 capture using rotating packed beds. In the literature, various researchers have developed steady-state mathematical models based on mass and energy balance equations in the gas and liquid phases, expressed as ordinary or partial differential equations. Owing to the annular geometry of the bed, the equations are formulated in the radial direction; they have been solved numerically and simulated on different software platforms, viz. MATLAB, FORTRAN, and gPROMS. A comparison of various correlations is presented. The models predict the mole fraction of absorbed CO2 and correspond well with experimental results. Along with these models, a review of experimental data on rotating packed beds is also included in this work.
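The radial formulation the review describes can be illustrated with the simplest possible member of that model family: a first-order decay of the gas-phase CO2 mole fraction as gas travels from the inner to the outer radius, dy/dr = −k_ov·y, marched with explicit Euler steps. The rate constant, radii, and inlet fraction below are illustrative placeholders, not values from any of the reviewed models.

```python
def co2_profile(y_in=0.12, r_in=0.05, r_out=0.25, k_ov=8.0, n=1000):
    """Explicit Euler integration of dy/dr = -k_ov * y from r_in to r_out.

    y_in  : inlet CO2 mole fraction (illustrative)
    r_in, r_out : inner/outer packing radii in metres (illustrative)
    k_ov  : lumped overall absorption rate constant, per metre (illustrative)
    """
    dr = (r_out - r_in) / n
    y = y_in
    for _ in range(n):
        y += dr * (-k_ov * y)  # gas loses CO2 to the liquid as it moves out
    return y

y_out = co2_profile()
capture_fraction = 1.0 - y_out / 0.12
```

Real RPB models couple such radial balances for both phases with energy equations and mass-transfer correlations, which is where the software platforms named above come in.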

    An explicit stabilised finite element method for Navier-Stokes-Brinkman equations

    We present an explicit stabilised finite element method for solving the Navier-Stokes-Brinkman equations. The proposed algorithm has several advantages. First, the low-order, equal-order finite element space for velocity and pressure is well suited to representing pixel images. The stabilised finite elements allow continuity of both the tangential and normal velocities at interfaces between regions of different micro-permeability, or at the interface between free and porous domains. Second, the algorithm is fully explicit and versatile in describing complex boundary conditions. Third, the fully explicit, matrix-free finite element implementation is ideal for parallelism on high-performance computers. Finally, the implicit treatment of the Darcy term allows larger time steps and a stable computation, even when the velocity varies by several orders of magnitude in the micro-porous regions (Darcy regime). The stabilisation parameter, which may affect the velocity field, is discussed and an optimal value chosen based on numerical examples. Velocity stability at interfaces between regions of different micro-permeability is also studied under mesh refinement. We analysed the influence of the micro-permeability field on the flow regime (Stokes flow, Darcy flow, or a transitional regime). These benchmark tests provide guidelines for choosing the resolution of the grayscale image and its segmentation. We applied the method to real Berea sandstone micro-CT images and performed a three-phase segmentation. Using the well-known Kozeny-Carman relation to derive the micro-permeability field from the micro-porosity field, we studied the influence of micro-porosity on the computed effective permeability. Our analysis shows that a small fraction of micro-porosity in the rock has a significant influence on the computed effective permeability.
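The Kozeny-Carman relation invoked above is commonly written as k = k0·φ³/(1 − φ)², where φ is the micro-porosity and k0 lumps grain-size and shape factors. The sketch below uses that standard form; the k0 value is an illustrative placeholder, not the one calibrated for Berea sandstone.

```python
def kozeny_carman(phi, k0=1.0e-12):
    """Micro-permeability (m^2) from micro-porosity phi via k = k0*phi^3/(1-phi)^2."""
    if not 0.0 < phi < 1.0:
        raise ValueError("porosity must lie strictly between 0 and 1")
    return k0 * phi**3 / (1.0 - phi)**2

# Permeability grows steeply with porosity: raising phi from 0.05 to 0.25
# increases k by roughly two orders of magnitude, which is consistent with
# a small micro-porosity fraction noticeably shifting effective permeability.
k_low, k_high = kozeny_carman(0.05), kozeny_carman(0.25)
```

In the paper's workflow, applying such a map voxel-by-voxel to the segmented micro-porosity field yields the micro-permeability field fed into the Brinkman term.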

    Carbon dioxide removal potential from decentralised bioenergy with carbon capture and storage (BECCS) and the relevance of operational choices

    Bioenergy with carbon capture and storage (BECCS) technology is expected to support net-zero targets by supplying low-carbon energy while providing carbon dioxide removal (CDR). BECCS is estimated to deliver 20 to 70 MtCO2 of annual negative emissions by 2050 in the UK, although there is currently no operating BECCS facility. This research models and demonstrates the flexibility, scalability, and immediate applicability of BECCS. The CDR potential of two of the three BECCS pathways considered in the Intergovernmental Panel on Climate Change (IPCC) scenarios was quantified: (i) a modular-scale CHP process with post-combustion CCS utilising wheat straw, and (ii) hydrogen production in a small-scale gasifier with pre-combustion CCS utilising locally sourced waste wood. Process modelling and lifecycle assessment were used, including a whole-supply-chain analysis. The investigated BECCS pathways could annually remove between −0.8 and −1.4 tCO2e tbiomass−1 depending on operational decisions. Using all the available wheat straw and waste wood in the UK, the joint CDR capacity of both systems could reach about 23% of the UK's minimum CDR target set for BECCS. Policy frameworks prioritising carbon efficiency can shape those operational decisions and strongly affect the overall energy and CDR performance of a BECCS system, though they do not necessarily optimise the trade-offs between biomass use, energy performance, and CDR. A combination of different BECCS pathways will be necessary to reach net-zero targets. Decentralised BECCS deployment could support flexible approaches that maximise positive system trade-offs, enable regional biomass utilisation, and provide local energy supply to remote areas.
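The headline numbers scale straightforwardly: tonnes of biomass times a per-tonne removal factor, set against the lower bound of the 20-70 MtCO2/yr range quoted above. The biomass availability figure below is an illustrative assumption, not the study's inventory data, so the resulting shares are only an order-of-magnitude check.

```python
# Per-tonne removal factors from the abstract (tCO2e per tonne biomass).
CDR_PER_TONNE = (0.8, 1.4)
# Lower bound of the UK BECCS range quoted in the abstract (tCO2/yr).
UK_TARGET_MIN = 20e6

def removal(tonnes_biomass, factor):
    """Annual CO2 removed (tCO2e/yr) for a given biomass throughput."""
    return tonnes_biomass * factor

biomass_available = 5e6  # assumed tonnes/yr, placeholder value
low = removal(biomass_available, CDR_PER_TONNE[0]) / UK_TARGET_MIN
high = removal(biomass_available, CDR_PER_TONNE[1]) / UK_TARGET_MIN
# With this placeholder throughput the share of the minimum target spans
# roughly 20-35%, the same order as the ~23% reported in the study.
```

The spread between `low` and `high` also illustrates the abstract's point that operational decisions (which fix the per-tonne factor) materially change the achievable CDR.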

    Foundations for programming and implementing effect handlers

    First-class control operators provide programmers with an expressive and efficient means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and control idioms as shareable libraries. Effect handlers provide a particularly structured approach to programming with first-class control by naming control-reifying operations and separating them from their handling. This thesis comprises three strands of work in which I develop operational foundations for programming and implementing effect handlers, and explore their expressive power. The first strand develops a fine-grain call-by-value core calculus for a statically typed programming language with a structural notion of effect types, as opposed to the nominal notion of effect types that dominates the literature. With the structural approach, effects need not be declared before use. The usual safety properties of static typing are retained by making crucial use of row polymorphism to build and track effect signatures. The calculus features three forms of handlers: deep, shallow, and parameterised, each offering a different way to manipulate the control state of programs. Traditional deep handlers are defined by folds over computation trees and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are defined by case splits (rather than folds) over computation trees. Parameterised handlers are deep handlers extended with a state value that is threaded through the folds over computation trees. To demonstrate the usefulness of effects and handlers as a practical programming abstraction, I implement the essence of a small UNIX-style operating system complete with a multi-user environment, time-sharing, and file I/O.
The second strand studies continuation-passing style (CPS) and abstract machine semantics, foundational techniques that admit a unified basis for implementing deep, shallow, and parameterised effect handlers in the same environment. The CPS translation is obtained through a series of refinements of a basic first-order CPS translation for a fine-grain call-by-value language into an untyped language. Each refinement moves toward a more intensional representation of continuations, eventually arriving at the notion of generalised continuation, which admits simultaneous support for deep, shallow, and parameterised handlers. The initial refinement adds support for deep handlers by representing stacks of continuations and handlers as a curried sequence of arguments. The image of the resulting translation is not properly tail-recursive, meaning some function application terms do not appear in tail position. To rectify this, the CPS translation is refined once more to obtain an uncurried representation of stacks of continuations and handlers. Finally, the translation is made higher-order in order to contract administrative redexes at translation time. The generalised continuation representation is used to construct an abstract machine that provides simultaneous support for deep, shallow, and parameterised effect handlers.
The third strand explores the expressiveness of effect handlers. First, I show that the deep, shallow, and parameterised notions of handlers are interdefinable by way of typed macro-expressiveness, a syntactic notion of expressiveness that affirms the existence of encodings between handlers but provides no information about their computational content. Second, using a semantic notion of expressiveness, I show that for a class of programs a programming language with first-class control (e.g. effect handlers) admits asymptotically faster implementations than are possible in a language without first-class control.
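The deep-handler idea, a fold over the computation tree that interprets each named operation and resumes with its result, can be mimicked with Python generators: the computation yields operations, and the handler loop repeatedly resumes it. This is only a one-shot approximation for illustration, not the thesis's typed calculus or its CPS translation.

```python
def handle(computation, handlers):
    """Deep-handler-style fold: interpret yielded ops until a value returns."""
    gen = computation()
    try:
        op, arg = next(gen)              # run to the first operation
        while True:
            result = handlers[op](arg)   # interpret the operation
            op, arg = gen.send(result)   # resume the continuation
    except StopIteration as done:
        return done.value                # the computation's return value

def program():
    x = yield ("get", None)   # perform a hypothetical 'get' effect
    yield ("put", x + 1)      # perform a hypothetical 'put' effect
    return x

state = {"value": 41}
result = handle(program, {
    "get": lambda _: state["value"],
    "put": lambda v: state.update(value=v),
})
# result is the program's return value; state reflects the 'put'.
```

A shallow handler would instead interpret only the first operation and hand the rest of the computation back to the caller, i.e. a case split rather than the loop above; a parameterised handler would thread an extra state argument through each iteration.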

    Improving the Logarithmic Accuracy of the Angular-Ordered Parton Shower

    Monte Carlo event generators are a key tool for making theoretical predictions that can be compared with the results of collider experiments, our most accurate probes of fundamental particle physics. New developments in the way parton shower accuracy is assessed have led us to re-examine the accuracy of the angular-ordered parton shower in the Herwig 7 event generator, focussing on the way recoil is handled after successive emissions. We first discuss how the evolution variable is defined in the Herwig angular-ordered shower and how the choice of this definition determines the recoil scheme. We then show how the recoil scheme can affect the logarithmic accuracy of the final-state radiation produced by the algorithm. As part of this investigation we consider a new interpretation of the evolution variable intended to mitigate problems with previous iterations of the shower. To test this, simulated events for each scheme are compared with experimental data from both LEP and the LHC. Next, we extend our analysis to initial-state radiation and perform the same assessment of the logarithmic accuracy of different interpretations of the evolution variable, this time comparing simulated events for each scheme with LHC data for vector boson production. Additionally, we consider the impact that the choice of NLO matching scheme has on the accuracy of these simulations, with reference to the same LHC data.
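The ordered-emission structure underlying any such shower can be caricatured in a few lines: with a constant emission density c per unit log of the evolution variable, the no-emission (Sudakov) probability between scales q and q' is (q'/q)^c, so the next scale follows from a uniform random number R as q_next = q·R^(1/c). The rate c and cutoff below are illustrative; a real shower such as Herwig's uses running splitting functions, the veto algorithm, and the recoil schemes discussed above.

```python
import random

def emission_scales(q_start=100.0, q_cut=1.0, c=0.5, rng=None):
    """Generate a strictly ordered chain of toy emission scales.

    Solves (q_next/q)**c = R for q_next at each step and stops at the
    infrared cutoff q_cut. All parameter values are illustrative.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    q = q_start
    scales = []
    while True:
        q = q * rng.random() ** (1.0 / c)
        if q < q_cut:
            break
        scales.append(q)
    return scales

chain = emission_scales()
```

The ordering of the chain is the invariant every scheme preserves; what the paper varies is how momentum recoil is distributed after each emission, which this toy model does not represent.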

    Graphical scaffolding for the learning of data wrangling APIs

    In order for students across the sciences to avail themselves of modern data streams, they must first know how to wrangle data: how to reshape ill-organised tabular data into another format, and how to do this programmatically, in languages such as Python and R. Despite the cross-departmental demand and the ubiquity of data wrangling in analytical workflows, research on how to optimise its instruction has been minimal. Although data wrangling as a programming domain presents distinctive challenges, characterised by on-the-fly syntax lookup and code example integration, it also presents opportunities. One such opportunity is that tabular data structures are easily visualised. To leverage this inherent visualisability, this dissertation evaluates three types of graphics that could be employed as scaffolding for novices: subgoal graphics, thumbnail graphics, and parameter graphics. Using a specially built e-learning platform, this dissertation documents a multi-institutional, randomised, controlled experiment that investigates the pedagogical effects of these graphics. Our results indicate that the graphics are well received, that subgoal graphics boost the completion rate, and that thumbnail graphics improve navigability within a command menu. We also obtained several non-significant results, and indications that parameter graphics are counter-productive. We discuss these findings in the context of general scaffolding dilemmas, and how they fit into a wider research programme on data wrangling instruction.
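A typical wrangling subgoal of the kind studied here is reshaping a wide table into long (tidy) form. The pure-Python sketch below stands in for what students would do with `pandas.melt` or `tidyr::pivot_longer`; the table and column names are invented for the example.

```python
# Hypothetical wide table: one row per student, one column per quiz.
wide = [
    {"student": "ana", "quiz1": 7, "quiz2": 9},
    {"student": "ben", "quiz1": 5, "quiz2": 8},
]

def melt(rows, id_col, value_cols):
    """Reshape wide rows into one (id, variable, value) row per measurement."""
    return [{"id": r[id_col], "variable": c, "value": r[c]}
            for r in rows for c in value_cols]

tidy = melt(wide, "student", ["quiz1", "quiz2"])
# Four rows result: one per (student, quiz) pair.
```

Each step of such a reshape (pick the id column, enumerate the value columns, emit one row per measurement) is exactly the kind of subgoal that the dissertation's subgoal graphics aim to make visible.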