
    High-performance explicit transient structural analysis

    The continuous development of finite element programs creates code that is increasingly difficult to maintain, due to growing complexity as well as the chosen programming language. At the same time, the desire to analyse ever larger and more complex models requires powerful tools capable of solving these problems in a reasonable amount of time. The research described in this thesis addresses these issues of maintainability and performance. For that purpose a new code structure has been designed, and the new finite element program has been adapted to make use of modern high-performance computers. The thesis has two principal parts. (1) Design of an object-oriented code base: to improve maintainability and ease of development, an object-oriented framework for finite element programs, written in C++, has been designed. The framework is structured so that the existing computational routines from the old Fortran program can be integrated into the new object-oriented code. (2) Adaptation for high-performance computing: although object-oriented code incurs more overhead than traditional procedural programming, the restructuring of the program, together with certain advantages of C++, produced a new executable that is faster than the old program. The object-oriented design made adapting the code for parallel computing relatively easy and allowed the communication to be implemented transparently. New element types and new material models have been implemented to demonstrate the flexibility and ease of development of the new code, and three applications are presented to demonstrate the possibilities of the new finite element program.
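
    The Fortran-integration pattern described above can be illustrated with a small sketch: an abstract C++ element interface whose concrete subclasses delegate their numerics to a legacy Fortran kernel exposed through an extern "C" symbol. This is a hypothetical illustration of the general technique, not the thesis's actual class hierarchy; the routine name, its argument list and the stand-in truss formula are invented for the example.

        #include <iostream>
        #include <utility>
        #include <vector>

        // Hypothetical legacy kernel. Fortran compilers usually emit lower-case
        // symbols with a trailing underscore and pass every argument by reference;
        // in the real program this symbol would come from the old Fortran library.
        extern "C" void truss_internal_force_(const double* coords, const double* u,
                                              const double* area, const double* e,
                                              double* f);

        // Stand-in definition so the sketch links on its own: axial internal force
        // of a two-node 1D truss element.
        extern "C" void truss_internal_force_(const double* coords, const double* u,
                                              const double* area, const double* e,
                                              double* f) {
            const double length = coords[1] - coords[0];
            const double axial  = (*e) * (*area) * (u[1] - u[0]) / length;
            f[0] = -axial;
            f[1] =  axial;
        }

        // Abstract base class: the rest of the framework only sees this interface.
        class Element {
        public:
            virtual ~Element() = default;
            virtual void internalForce(const std::vector<double>& u,
                                       std::vector<double>& f) const = 0;
        };

        // Concrete element type that delegates the numerics to the Fortran kernel.
        class TrussElement : public Element {
        public:
            TrussElement(std::vector<double> coords, double area, double youngsModulus)
                : coords_(std::move(coords)), area_(area), e_(youngsModulus) {}

            void internalForce(const std::vector<double>& u,
                               std::vector<double>& f) const override {
                f.assign(u.size(), 0.0);
                truss_internal_force_(coords_.data(), u.data(), &area_, &e_, f.data());
            }

        private:
            std::vector<double> coords_;
            double area_;
            double e_;
        };

        int main() {
            TrussElement truss({0.0, 1.0}, 1.0e-4, 210.0e9);  // 1 m steel bar
            std::vector<double> u = {0.0, 1.0e-3};            // nodal displacements
            std::vector<double> f;
            truss.internalForce(u, f);
            std::cout << "internal forces: " << f[0] << ' ' << f[1] << '\n';
        }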

    Pervasive Parallel And Distributed Computing In A Liberal Arts College Curriculum

    We present a model for incorporating parallel and distributed computing (PDC) throughout an undergraduate CS curriculum. Our curriculum is designed to introduce students early to parallel and distributed computing topics and to expose students to these topics repeatedly in the context of a wide variety of CS courses. The key to our approach is the development of a required intermediate-level course that serves as an introduction to computer systems and parallel computing. It is required of every CS major and minor and is a prerequisite to upper-level courses that expand on parallel and distributed computing topics in different contexts. With the addition of this new course, we are able to easily make room in upper-level courses to add and expand parallel and distributed computing topics. The goal of our curricular design is to ensure that every graduating CS major has exposure to parallel and distributed computing, with both breadth and depth of coverage. Our curriculum is designed in particular for the constraints of a small liberal arts college; however, many of its ideas and much of its design are applicable to any undergraduate CS curriculum.

    Modeling Adaptation with Klaim

    In recent years, it has been argued that systems and applications, in order to deal with their increasing complexity, should be able to adapt their behavior to new requirements or environmental conditions. In this paper, we present an investigation into how coordination languages and formal methods can contribute to a better understanding, implementation and use of the mechanisms and techniques for adaptation currently proposed in the literature. Our study relies on the formal coordination language Klaim as a common framework for modeling some well-known adaptation techniques: the IBM MAPE-K loop, the Accord component-based framework for architectural adaptation, and the aspect- and context-oriented programming paradigms. We illustrate our approach through a simple example concerning a data repository equipped with an automated cache mechanism.
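
    As a point of reference for the first of the modeled techniques, the following is a minimal, self-contained sketch of the IBM MAPE-K loop (Monitor, Analyze, Plan, Execute over shared Knowledge) applied to a cache whose size is adapted at run time, loosely mirroring the data-repository example. It is written in plain C++ purely to illustrate the loop's structure; it is not the Klaim model from the paper, and the hit-rate formula and adaptation policy are invented for the illustration.

        #include <algorithm>
        #include <cstddef>
        #include <iostream>

        // Shared knowledge base of the MAPE-K loop.
        struct Knowledge {
            double hitRate = 0.0;
            std::size_t cacheSize = 64;
        };

        // Monitor: observe the managed element. The hit-rate model used here
        // (hit rate grows with cache size) is invented for the illustration.
        double monitor(const Knowledge& k) {
            return std::min(1.0, static_cast<double>(k.cacheSize) / 512.0);
        }

        // Analyze: decide whether adaptation is needed.
        bool analyze(const Knowledge& k) { return k.hitRate < 0.5; }

        // Plan: propose an adaptation (here, double the cache).
        std::size_t plan(const Knowledge& k) { return k.cacheSize * 2; }

        // Execute: apply the adaptation to the managed element.
        void execute(Knowledge& k, std::size_t newSize) { k.cacheSize = newSize; }

        int main() {
            Knowledge k;
            for (int step = 0; step < 5; ++step) {   // one MAPE-K iteration per step
                k.hitRate = monitor(k);
                if (analyze(k))
                    execute(k, plan(k));
                std::cout << "step " << step << ": cache size " << k.cacheSize
                          << ", hit rate " << k.hitRate << '\n';
            }
        }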

    Digital signal processing: the impact of convergence on education, society and design flow

    The design and development of real-time, memory- and processor-hungry digital signal processing systems has for decades been accomplished on general-purpose microprocessors. Increasing demand for high-performance DSP systems made these microprocessors unattractive for such implementations. Various attempts to improve the performance of these systems resulted in the use of dedicated digital signal processing devices such as DSP processors and the former heavyweight champion of electronics design, the Application-Specific Integrated Circuit. The advent of RAM-based Field Programmable Gate Arrays has changed the DSP design flow. Software algorithmic designers can now take their DSP algorithms from inception right through to hardware implementation, thanks to the increasing availability of software/hardware design flows and hardware/software co-design. This has led to an industry demand for graduates with good skills in both Electrical Engineering and Computer Science. This paper evaluates the impact of technology on DSP-based designs and hardware design languages, and how graduate/undergraduate courses have changed to suit this transition.

    BioSimulator.jl: Stochastic simulation in Julia

    Biological systems with intertwined feedback loops pose a challenge to mathematical modeling efforts. Moreover, rare events, such as mutation and extinction, complicate system dynamics. Stochastic simulation algorithms are useful in generating time-evolution trajectories for these systems because they can adequately capture the influence of random fluctuations and quantify rare events. We present a simple and flexible package, BioSimulator.jl, for implementing the Gillespie algorithm, τ-leaping, and related stochastic simulation algorithms. The objective of this work is to provide scientists across domains with fast, user-friendly simulation tools. We used the high-performance programming language Julia because of its emphasis on scientific computing. Our software package implements a suite of stochastic simulation algorithms based on Markov chain theory. We provide the ability to (a) diagram the Petri nets describing interactions, (b) plot average trajectories and attached standard deviations of each participating species over time, and (c) generate frequency distributions of each species at a specified time. BioSimulator.jl's interface allows users to build models programmatically within Julia. A model is then passed to the simulate routine to generate simulation data. The built-in tools allow one to visualize results and compute summary statistics. Our examples highlight the broad applicability of our software to systems of varying complexity from ecology, systems biology, chemistry, and genetics. The user-friendly nature of BioSimulator.jl encourages the use of stochastic simulation, minimizes tedious programming efforts, and reduces errors during model specification. Comment: 27 pages, 5 figures, 3 tables.
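
    For readers unfamiliar with the underlying method, the following is a minimal sketch of the Gillespie direct method for a single birth-death process (X → X+1 at rate b, X → X−1 at rate d·X). It illustrates the stochastic simulation algorithm itself rather than BioSimulator.jl's Julia interface, and the rates and initial condition are arbitrary.

        #include <cmath>
        #include <iostream>
        #include <random>

        int main() {
            std::mt19937 rng(0);
            std::uniform_real_distribution<double> unif(0.0, 1.0);

            const double b = 2.0, d = 0.1;   // birth rate and per-capita death rate
            long   x = 10;                   // initial copy number
            double t = 0.0, tEnd = 50.0;     // simulated time horizon

            while (t < tEnd) {
                const double a1 = b, a2 = d * x;        // reaction propensities
                const double a0 = a1 + a2;              // total propensity
                if (a0 <= 0.0) break;                   // absorbing state reached

                t += -std::log(1.0 - unif(rng)) / a0;   // exponential waiting time
                if (unif(rng) * a0 < a1) ++x;           // pick a reaction in proportion
                else                     --x;           // to its propensity

                std::cout << t << ' ' << x << '\n';     // one trajectory sample
            }
        }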

    Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies

    Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries. This makes Grid application management and deployment a complex undertaking. Grid middleware provides users with seamless computing ability and uniform access to resources in the heterogeneous Grid environment. Several software toolkits and systems have been developed all over the world, most of them the results of academic research projects. This chapter focuses on four of these middleware systems: UNICORE, Globus, Legion and Gridbus. It also presents our implementation of a resource broker for UNICORE, as this functionality was not originally supported. A comparison of these systems on the basis of their architecture, implementation model and several other features is included. Comment: 19 pages, 10 figures.

    CERN openlab Whitepaper on Future IT Challenges in Scientific Research

    This whitepaper describes the major IT challenges in scientific research at CERN and several other European and international research laboratories and projects. Each challenge is illustrated through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.