
    Parallel processes with implicit computational capital

    We propose a process algebra concerned with processes that have an implicit computational capital. This process algebra responds to the development that the behaviour of computer-based systems, persons, and organizations is increasingly related to money handling. It is intended to help in the design of systems whose behaviour involves money handling.

    Stable and fast semi-implicit integration of the stochastic Landau-Lifshitz equation

    We propose new semi-implicit numerical methods for the integration of the stochastic Landau-Lifshitz equation with built-in angular momentum conservation. The performance of the proposed integrators is tested on the 1D Heisenberg chain. For this system, our schemes show better stability properties and allow us to use considerably larger time steps than standard explicit methods. At the same time, these semi-implicit schemes are of comparable accuracy to, and computationally much cheaper than, the standard implicit midpoint method. The results are of key importance for atomistic spin dynamics simulations and the study of spin dynamics beyond the macro spin approximation. Comment: 24 pages, 5 figures
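
    The semi-implicit construction is easier to picture with a concrete step. The sketch below is a rough illustration rather than one of the paper's schemes: it applies the standard implicit midpoint rule to the deterministic Landau-Lifshitz equation for a single spin and solves the implicit equation by fixed-point iteration. The field, damping constant, time step, and iteration count are illustrative assumptions.

        import numpy as np

        def ll_rhs(s, h_eff, gamma=1.0, damping=0.1):
            """Deterministic Landau-Lifshitz right-hand side: precession plus damping."""
            return (-gamma * np.cross(s, h_eff)
                    - gamma * damping * np.cross(s, np.cross(s, h_eff)))

        def implicit_midpoint_step(s, h_eff, dt, n_iter=5):
            """One implicit midpoint step, s_next = s + dt * f((s + s_next) / 2),
            solved here by simple fixed-point iteration. Solved exactly, the
            midpoint rule preserves the spin norm |s| for this dynamics."""
            s_next = s.copy()
            for _ in range(n_iter):
                s_next = s + dt * ll_rhs(0.5 * (s + s_next), h_eff)
            return s_next

        # Illustrative use: a single spin precessing in a constant field.
        spin = np.array([1.0, 0.0, 0.0])
        field = np.array([0.0, 0.0, 1.0])
        for _ in range(100):
            spin = implicit_midpoint_step(spin, field, dt=0.1)
        print(np.linalg.norm(spin))  # remains close to 1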

    The ESCAPE project: Energy-efficient Scalable Algorithms for Weather Prediction at Exascale

    In the simulation of complex multi-scale flows arising in weather and climate modelling, one of the biggest challenges is to satisfy strict service requirements in terms of time to solution and to satisfy budgetary constraints in terms of energy to solution, without compromising the accuracy and stability of the application. These simulations require algorithms that minimise the energy footprint along with the time required to produce a solution, maintain the physically required level of accuracy, are numerically stable, and are resilient in case of hardware failure. The European Centre for Medium-Range Weather Forecasts (ECMWF) led the ESCAPE (Energy-efficient Scalable Algorithms for Weather Prediction at Exascale) project, funded by Horizon 2020 (H2020) under the FET-HPC (Future and Emerging Technologies in High Performance Computing) initiative. The goal of ESCAPE was to develop a sustainable strategy to evolve weather and climate prediction models to next-generation computing technologies. The project partners incorporate the expertise of leading European regional forecasting consortia, university research, experienced high-performance computing centres, and hardware vendors. This paper presents an overview of the ESCAPE strategy: (i) identify domain-specific key algorithmic motifs in weather prediction and climate models (which we term Weather & Climate Dwarfs), (ii) categorise them in terms of computational and communication patterns, (iii) adapt them to different hardware architectures with alternative programming models, (iv) analyse the optimisation challenges, and (v) find alternative algorithms for the same scheme. The participating weather prediction models are the following: IFS (Integrated Forecasting System); ALARO, a combination of AROME (Application de la Recherche à l'Opérationnel à Méso-Échelle) and ALADIN (Aire Limitée Adaptation Dynamique Développement International); and COSMO-EULAG, a combination of COSMO (Consortium for Small-scale Modeling) and EULAG (Eulerian and semi-Lagrangian fluid solver). For many of the weather and climate dwarfs, ESCAPE provides prototype implementations on different hardware architectures (mainly Intel Skylake CPUs, NVIDIA GPUs, Intel Xeon Phi, and the Optalysys optical processor) with different programming models. The spectral transform dwarf represents a detailed example of the co-design cycle of an ESCAPE dwarf. The dwarf concept has proven to be extremely useful for the rapid prototyping of alternative algorithms and their interaction with hardware, e.g. through the use of a domain-specific language (DSL). Manual adaptations have led to substantial accelerations of key algorithms in numerical weather prediction (NWP) but are not a general recipe for the performance portability of complex NWP models. Existing DSLs are found to require further evolution but are promising tools for achieving that portability. Measurements of energy and time to solution suggest that a future focus needs to be on exploiting the simultaneous use of all available resources in hybrid CPU-GPU arrangements.
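
    As a toy analogue of what such a dwarf isolates (and not ESCAPE's actual spectral transform, which operates on spherical harmonics), the sketch below round-trips a one-dimensional field through a Fourier transform and applies a derivative in spectral space: a small, self-contained motif of the kind that can be benchmarked in isolation on different hardware. The grid size and test field are illustrative assumptions.

        import numpy as np

        # Toy "spectral transform" motif: grid -> spectral space -> grid,
        # with a derivative applied while in spectral space.
        n = 256
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        field = np.sin(3 * x) + 0.5 * np.cos(7 * x)

        spectral = np.fft.rfft(field)                        # forward transform
        k = 2.0 * np.pi * np.fft.rfftfreq(n, d=x[1] - x[0])  # angular wavenumbers
        derivative = np.fft.irfft(1j * k * spectral, n)      # inverse transform

        # Compare with the analytic derivative of the test field.
        exact = 3 * np.cos(3 * x) - 3.5 * np.sin(7 * x)
        print(np.max(np.abs(derivative - exact)))            # close to machine precision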

    Accelerating moderately stiff chemical kinetics in reactive-flow simulations using GPUs

    The chemical kinetics ODEs arising from operator-split reactive-flow simulations were solved on GPUs using explicit integration algorithms. Nonstiff chemical kinetics of a hydrogen oxidation mechanism (9 species and 38 irreversible reactions) were computed using the explicit fifth-order Runge-Kutta-Cash-Karp method, and the GPU-accelerated version performed faster than single- and six-core CPU versions by factors of 126 and 25, respectively, for 524,288 ODEs. Moderately stiff kinetics, represented with mechanisms for hydrogen/carbon-monoxide (13 species and 54 irreversible reactions) and methane (53 species and 634 irreversible reactions) oxidation, were computed using the stabilized explicit second-order Runge-Kutta-Chebyshev (RKC) algorithm. The GPU-based RKC implementation demonstrated an increase in performance of nearly 59 and 10 times, for problem sizes consisting of 262,144 ODEs and larger, than the single- and six-core CPU-based RKC algorithms using the hydrogen/carbon-monoxide mechanism. With the methane mechanism, RKC-GPU performed more than 65 and 11 times faster, for problem sizes consisting of 131,072 ODEs and larger, than the single- and six-core RKC-CPU versions, and up to 57 times faster than the six-core CPU-based implicit VODE algorithm on 65,536 ODEs. In the presence of more severe stiffness, such as ethylene oxidation (111 species and 1566 irreversible reactions), RKC-GPU performed more than 17 times faster than RKC-CPU on six cores for 32,768 ODEs and larger, and at best 4.5 times faster than VODE on six CPU cores for 65,536 ODEs. With a larger time step size, RKC-GPU performed at best 2.5 times slower than six-core VODE for 8192 ODEs and larger. Therefore, the need for developing new strategies for integrating stiff chemistry on GPUs was discussed. Comment: 27 pages, LaTeX; corrected typos in Appendix equations A.10 and A.1
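
    The core parallelisation pattern, one independent ODE system per GPU thread, can be mimicked on the CPU with array broadcasting. The sketch below is a rough stand-in rather than the paper's RKCK or RKC implementations: it advances a batch of independent systems with one classical RK4 step, and the toy right-hand side and batch size are illustrative assumptions.

        import numpy as np

        def rk4_step_batched(f, t, y, dt):
            """One classical RK4 step for a batch of independent ODE systems.
            y has shape (n_systems, n_species); every row advances independently,
            mirroring the one-thread-per-system layout used on the GPU."""
            k1 = f(t, y)
            k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
            k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
            k4 = f(t + dt, y + dt * k3)
            return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

        # Toy stand-in for a chemical source term: linear decay at species-specific rates.
        rates = np.array([1.0, 0.5, 0.1])
        def toy_rhs(t, y):
            return -rates * y

        y = np.ones((262144, 3))  # many independent "cells", three "species"
        for _ in range(10):
            y = rk4_step_batched(toy_rhs, 0.0, y, dt=0.01)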

    Timed tuplix calculus and the Wesseling and van den Bergh equation

    We develop an algebraic framework for the description and analysis of financial behaviours, that is, behaviours that consist of transferring certain amounts of money at planned times. To a large extent, analysis of financial products amounts to analysis of such behaviours. We formalize the cumulative interest compliant conservation requirement for financial products proposed by Wesseling and van den Bergh by an equation in the developed framework, and we use this formalization to define a notion of financial product behaviour. We also present some properties of financial product behaviours. The development of the framework has been influenced by previous work on the process algebra ACP. Comment: 17 pages; phrasing improved, references updated; substantially improved; remarks added
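
    One informal reading of a conservation requirement of this kind (a simplification, not the formal tuplix-calculus equation of the paper) is that the dated transfers of a financial product net to zero once interest is taken into account. The sketch below checks that property for a toy loan schedule; the interest rate and cash flows are illustrative assumptions.

        # Toy check: do the dated transfers of a product net to zero at a given rate?
        # Positive amounts are received, negative amounts are paid out.
        RATE = 0.05  # assumed annual interest rate

        def net_present_value(transfers, rate):
            """Discount each (time_in_years, amount) transfer back to time zero."""
            return sum(amount / (1.0 + rate) ** t for t, amount in transfers)

        # A 100-unit loan repaid in two equal annuity instalments of about 53.78.
        loan = [(0.0, 100.0), (1.0, -53.78), (2.0, -53.78)]
        print(abs(net_present_value(loan, RATE)) < 0.01)  # True: value is conserved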

    Together we stand, Together we fall, Together we win: Dynamic Team Formation in Massive Open Online Courses

    Massive Open Online Courses (MOOCs) offer a new scalable paradigm for e-learning by providing students with global exposure and opportunities for connecting and interacting with millions of people all around the world. Very often, students work as teams to effectively accomplish course-related tasks. However, due to the lack of face-to-face interaction, it becomes difficult for MOOC students to collaborate. Additionally, the instructor faces challenges in manually organizing students into teams because students flock to these MOOCs in huge numbers. Thus, the proposed research aims to develop a robust methodology for dynamic team formation in MOOCs, with a theoretical framework grounded at the confluence of organizational team theory, social network analysis, and machine learning. A prerequisite for such an undertaking is the recognition that each informal tie established among students offers the opportunity to influence and be influenced. Therefore, we aim to extract value from the inherent connectedness of students in the MOOC. These connections carry radical implications for the way students understand each other in the networked learning community. Our approach will enable course instructors to automatically group students into teams that have fairly balanced social connections with their peers, well defined in terms of appropriately selected qualitative and quantitative network metrics. Comment: In Proceedings of the 5th IEEE International Conference on Application of Digital Information & Web Technologies (ICADIWT), India, February 2014 (6 pages, 3 figures)
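
    As a minimal sketch of network-metric-balanced grouping (not the authors' methodology), the code below builds a toy student interaction graph and assigns students to teams greedily so that total degree centrality is roughly even across teams. The edge list, team count, and choice of centrality measure are illustrative assumptions.

        import networkx as nx

        # Toy interaction graph: an edge means two students interacted in the course forum.
        edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"),
                 ("d", "e"), ("e", "f"), ("f", "a"), ("b", "e")]
        G = nx.Graph(edges)
        centrality = nx.degree_centrality(G)

        n_teams = 2
        teams = [[] for _ in range(n_teams)]
        load = [0.0] * n_teams

        # Greedy balancing: place the most central remaining student on the
        # team whose accumulated centrality is currently lowest.
        for student in sorted(centrality, key=centrality.get, reverse=True):
            i = load.index(min(load))
            teams[i].append(student)
            load[i] += centrality[student]

        print(teams, load)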

    Horizontal and Vertical Multiple Implementations in a Model of Industrial Districts

    In this paper we discuss strategies concerning the implementation of an agent-based simulation of complex phenomena. The model we consider accounts for population decomposition and interaction in industrial districts. The approach we follow is twofold: on the one hand, we implement progressively more complex models using different approaches (vertical multiple implementations); on the other hand, we replicate the agent-based simulation with different implementations using jESOF, JAS and plain C++ (horizontal multiple implementations). By combining these two strategies, we highlight the benefits that arise when the same model is implemented in radically different simulation environments, and we compare the advantages of multiple modeling implementations. Our findings offer important suggestions for model validation, showing that models of complex systems tend to be extremely sensitive to implementation details. Finally, we point out that statistical techniques may be necessary when comparing different platform implementations of a single model.
    Keywords: Replication of Models; Model Validation; Agent-Based Simulation
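
    The closing point, that statistical techniques may be needed when comparing platform implementations, can be illustrated with a simple distributional test. The sketch below is an assumed workflow rather than the paper's procedure: it compares a summary output from two hypothetical implementations of the same model using a two-sample Kolmogorov-Smirnov test.

        import numpy as np
        from scipy import stats

        # Hypothetical per-run outputs (e.g. final number of firms) from two
        # implementations of the same district model, 200 replications each.
        rng = np.random.default_rng(0)
        runs_jas = rng.normal(loc=50.0, scale=5.0, size=200)  # stand-in for JAS output
        runs_cpp = rng.normal(loc=50.5, scale=5.0, size=200)  # stand-in for C++ output

        # If the implementations agree, their output distributions should be
        # statistically indistinguishable.
        statistic, p_value = stats.ks_2samp(runs_jas, runs_cpp)
        print(f"KS statistic = {statistic:.3f}, p = {p_value:.3f}")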

    Modeling Option and Strategy Choices with Connectionist Networks: Towards an Integrative Model of Automatic and Deliberate Decision Making

    We claim that understanding human decisions requires that both automatic and deliberate processes be considered. First, we sketch the qualitative differences between two hypothetical processing systems, an automatic and a deliberate system. Second, we show the potential that connectionism offers for modeling processes of decision making and discuss some empirical evidence. Specifically, we posit that the integration of information and the application of a selection rule are governed by the automatic system. The deliberate system is assumed to be responsible for information search, inferences, and the modification of the network that the automatic processes act on. Third, we critically evaluate the multiple-strategy approach to decision making. We introduce the basic assumption of an integrative approach: individuals apply an all-purpose decision rule but use different strategies for information search. Fourth, we develop a connectionist framework that explains the interaction between automatic and deliberate processes and is able to account for choices both at the option and at the strategy level.
    Keywords: System 1, Intuition, Reasoning, Control, Routines, Connectionist Model, Parallel Constraint Satisfaction
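
    A minimal parallel constraint satisfaction sketch is given below, in the spirit of the framework described but not the authors' exact network: cues and options are nodes connected by symmetric excitatory and inhibitory links, activations are updated iteratively until the network settles, and the more activated option is read off as the choice. The weights, decay, and update-rule variant are illustrative assumptions.

        import numpy as np

        # Nodes: [cue1, cue2, optionA, optionB]. Symmetric weights: positive links
        # where a cue supports an option, a negative link for competing options.
        W = np.array([
            [ 0.0,  0.0,  0.3, -0.1],   # cue1 supports A, speaks against B
            [ 0.0,  0.0, -0.1,  0.3],   # cue2 supports B, speaks against A
            [ 0.3, -0.1,  0.0, -0.2],   # options inhibit each other
            [-0.1,  0.3, -0.2,  0.0],
        ])
        external = np.array([0.10, 0.05, 0.0, 0.0])  # cue validities as external input

        a = np.zeros(4)
        decay, floor, ceiling = 0.05, -1.0, 1.0
        for _ in range(200):
            net = W @ a + external
            growth = np.where(net > 0, (ceiling - a) * net, (a - floor) * net)
            a = np.clip(a * (1.0 - decay) + growth, floor, ceiling)

        choice = "optionA" if a[2] > a[3] else "optionB"
        print(dict(zip(["cue1", "cue2", "optionA", "optionB"], a.round(3))), choice)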