
    Main specifications of CFD codes for WUIVIEW activities

    CFD simulations will be the core activity of the WUIVIEW performance-based fire safety analysis. The purpose of this document is to provide WUIVIEW partners with a general overview of the CFD codes to be used in the Action. The general simulation framework is described, particularly highlighting the data inputs and scenario description requirements to be developed in subsequent WUIVIEW WPs. This technical note (TN) provides the technical foundations and main specifications of the databases to be designed within the WUIVIEW working program (ongoing action by UPC). Postprint (updated version).

    On Realization of Cinema Hall Fire Simulation Using Fire Dynamics Simulator

    Currently known fire models are capable of describing fire dynamics in complex environments, incorporating a wide variety of fire-related physical and chemical phenomena and utilizing the large computational power of contemporary computers. In this paper, some issues related to the realization of a fire simulation in a cinema hall with a sloping floor and a curved ceiling, furnished with upholstered seats and modelled in FDS (Fire Dynamics Simulator), are discussed. The paper concentrates particularly on the impact of the choice of computational meshes on resolving the flow field and turbulence in the simulation, and indicates problems related to parallelization of the calculation, illustrated by comparing a sequential calculation with a parallel MPI calculation using 6 CPU cores. The simulation results and their discussion demonstrate the ability of the FDS simulation to capture the main tendencies of smoke spread and to forecast the related safety risks realistically.
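    The mesh-sensitivity question the paper raises is commonly approached in FDS practice by sizing cells against the characteristic fire diameter D* = (Q̇ / (ρ∞ cp T∞ √g))^(2/5), with D*/δx ratios of roughly 4 (coarse) to 16 (fine). The sketch below computes this standard guideline in Python; the 1 MW design fire is a hypothetical stand-in, not a value taken from the paper.

```python
import math

def characteristic_fire_diameter(q_kw, rho=1.204, cp=1.005, t_inf=293.0, g=9.81):
    """Characteristic fire diameter D* (m) for heat release rate q_kw (kW).

    D* = (Q / (rho * cp * T_inf * sqrt(g)))**(2/5), with ambient air density
    rho (kg/m^3), specific heat cp (kJ/(kg*K)), and temperature T_inf (K).
    """
    return (q_kw / (rho * cp * t_inf * math.sqrt(g))) ** 0.4

def suggested_cell_sizes(q_kw, ratios=(4, 8, 16)):
    """Coarse/moderate/fine cell sizes from common D*/dx ratios."""
    d_star = characteristic_fire_diameter(q_kw)
    return d_star, {r: d_star / r for r in ratios}

if __name__ == "__main__":
    # Hypothetical 1 MW design fire for a seat-ignition scenario.
    d_star, sizes = suggested_cell_sizes(1000.0)
    print(f"D* = {d_star:.2f} m")
    for ratio, dx in sizes.items():
        print(f"D*/dx = {ratio:2d}  ->  dx = {dx:.3f} m")
```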

    BioSimulator.jl: Stochastic simulation in Julia

    Biological systems with intertwined feedback loops pose a challenge to mathematical modeling efforts. Moreover, rare events, such as mutation and extinction, complicate system dynamics. Stochastic simulation algorithms are useful in generating time-evolution trajectories for these systems because they can adequately capture the influence of random fluctuations and quantify rare events. We present a simple and flexible package, BioSimulator.jl, for implementing the Gillespie algorithm, τ-leaping, and related stochastic simulation algorithms. The objective of this work is to provide scientists across domains with fast, user-friendly simulation tools. We used the high-performance programming language Julia because of its emphasis on scientific computing. Our software package implements a suite of stochastic simulation algorithms based on Markov chain theory. We provide the ability to (a) diagram Petri nets describing interactions, (b) plot average trajectories and attached standard deviations of each participating species over time, and (c) generate frequency distributions of each species at a specified time. BioSimulator.jl's interface allows users to build models programmatically within Julia. A model is then passed to the simulate routine to generate simulation data. The built-in tools allow one to visualize results and compute summary statistics. Our examples highlight the broad applicability of our software to systems of varying complexity from ecology, systems biology, chemistry, and genetics. The user-friendly nature of BioSimulator.jl encourages the use of stochastic simulation, minimizes tedious programming efforts, and reduces errors during model specification. Comment: 27 pages, 5 figures, 3 tables.
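    As a rough illustration of the algorithm family the package implements (not BioSimulator.jl's actual Julia API), here is a minimal Gillespie direct-method SSA in Python; the birth-death model at the end is a hypothetical toy example.

```python
import numpy as np

def gillespie_direct(x0, stoich, rates, propensity, t_end, rng=None):
    """Gillespie direct method (SSA).

    x0         : initial copy numbers, shape (n_species,)
    stoich     : state-change vectors, shape (n_reactions, n_species)
    rates      : rate constants, shape (n_reactions,)
    propensity : function (x, rates) -> per-reaction propensities
    """
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.asarray(x0, dtype=float).copy()
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x, rates)
        a0 = a.sum()
        if a0 <= 0.0:                    # no reaction can fire; absorbed state
            break
        t += rng.exponential(1.0 / a0)   # waiting time to the next event
        j = rng.choice(len(a), p=a / a0) # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Toy birth-death process: 0 -> S at rate k1; S -> 0 at rate k2 * S.
stoich = np.array([[+1], [-1]])
prop = lambda x, k: np.array([k[0], k[1] * x[0]])
t, s = gillespie_direct([10], stoich, [5.0, 0.5], prop, t_end=20.0)
print(f"{len(t)} events; final count = {int(s[-1, 0])}")
```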

    Conedy: a scientific tool to investigate Complex Network Dynamics

    We present Conedy, a performant scientific tool to numerically investigate dynamics on complex networks. Conedy allows the user to create networks and provides automatic code generation and compilation to ensure performant treatment of arbitrary node dynamics. Conedy can be interfaced via an internal script interpreter or via a Python module.
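    The abstract does not show Conedy's interface, so the following is a generic, hedged illustration of the kind of node dynamics such a tool integrates: Kuramoto phase oscillators coupled over a random network, stepped with explicit Euler in plain Python.

```python
import numpy as np

def kuramoto_step(theta, omega, adj, k, dt):
    """One explicit-Euler step of Kuramoto phase oscillators on a network."""
    # Pairwise coupling: sum_j A_ij * sin(theta_j - theta_i) for each node i.
    diff = theta[None, :] - theta[:, None]
    coupling = (adj * np.sin(diff)).sum(axis=1)
    return theta + dt * (omega + k * coupling)

rng = np.random.default_rng(0)
n = 50
adj = (rng.random((n, n)) < 0.1).astype(float)   # Erdos-Renyi adjacency
adj = np.triu(adj, 1); adj += adj.T              # symmetric, no self-loops
theta = rng.uniform(0, 2 * np.pi, n)             # random initial phases
omega = rng.normal(0.0, 0.2, n)                  # natural frequencies

for _ in range(2000):
    theta = kuramoto_step(theta, omega, adj, k=0.5, dt=0.01)

# Order parameter r in [0, 1]: r close to 1 means synchronized phases.
r = abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.3f}")
```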

    Experimental Design of a Prescribed Burn Instrumentation

    Observational data collected during experiments, such as the planned Fire and Smoke Model Evaluation Experiment (FASMEE), are critical for progressing and transitioning coupled fire-atmosphere models like WRF-SFIRE and WRF-SFIRE-CHEM into operational use. Historical meteorological data, representing typical weather conditions for the anticipated burn locations and times, have been processed to initialize and run a set of simulations representing the planned experimental burns. Based on an analysis of these numerical simulations, this paper provides recommendations on the experimental setup, including the ignition procedures, the size and duration of the burns, and optimal sensor placement. New techniques are developed to initialize coupled fire-atmosphere simulations with weather conditions typical of the planned burn locations and time of the year. Analysis of variation, and sensitivity analysis of the simulation design to model parameters by repeated Latin Hypercube Sampling, are used to assess the locations of the sensors. The simulations provide the measurement locations that maximize the expected variation of the sensor outputs with the model parameters. Comment: 35 pages, 4 tables, 28 figures.
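    A hedged sketch of the sampling idea: draw model parameters with Latin Hypercube Sampling, run a toy forward model for each draw, and rank candidate sensor locations by the output variance the parameters induce. The real study uses WRF-SFIRE simulations; the toy_plume function and all parameter ranges below are hypothetical stand-ins.

```python
import numpy as np
from scipy.stats import qmc

def toy_plume(params, sensor_x):
    """Hypothetical stand-in for a coupled fire-atmosphere run: returns a
    'smoke concentration' at each candidate sensor location."""
    strength, width, drift = params
    return strength * np.exp(-((sensor_x - drift) ** 2) / (2 * width ** 2))

sensor_x = np.linspace(0.0, 10.0, 25)        # candidate sensor positions
sampler = qmc.LatinHypercube(d=3, seed=1)    # 3 uncertain model parameters
unit = sampler.random(n=200)
# Scale to (assumed) ranges: strength [1, 5], width [0.5, 2], drift [2, 8].
params = qmc.scale(unit, l_bounds=[1.0, 0.5, 2.0], u_bounds=[5.0, 2.0, 8.0])

outputs = np.array([toy_plume(p, sensor_x) for p in params])  # (200, 25)
variance = outputs.var(axis=0)               # variation induced by parameters
best = np.argsort(variance)[::-1][:5]        # most parameter-sensitive spots
print("place sensors near x =", np.round(sensor_x[best], 2))
```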

    Engineering Resilient Collective Adaptive Systems by Self-Stabilisation

    Collective adaptive systems are an emerging class of networked computational systems, particularly suited to application domains such as smart cities, complex sensor networks, and the Internet of Things. These systems tend to feature large scale, heterogeneity of communication model (including opportunistic peer-to-peer wireless interaction), and require inherent self-adaptiveness properties to address unforeseen changes in operating conditions. In this context, it is extremely difficult (if not seemingly intractable) to engineer reusable pieces of distributed behaviour so as to make them provably correct and smoothly composable. Building on the field calculus, a computational model (and associated toolchain) capturing the notion of aggregate network-level computation, we address this problem with an engineering methodology coupling formal theory and computer simulation. On the one hand, functional properties are addressed by identifying the largest-to-date field calculus fragment generating self-stabilising behaviour, guaranteed to eventually attain a correct and stable final state despite any transient perturbation in state or topology, and including highly reusable building blocks for information spreading, aggregation, and time evolution. On the other hand, dynamical properties are addressed by simulation, empirically evaluating the different performances that can be obtained by switching between implementations of building blocks with provably equivalent functional properties. Overall, our methodology sheds light on how to identify core building blocks of collective behaviour, and how to select implementations that improve system performance while leaving overall system function and resiliency properties unchanged. Comment: To appear in ACM Transactions on Modeling and Computer Simulation.
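    A concrete (if much simplified) instance of a self-stabilising building block for information spreading is the classic gradient: sources hold distance zero, and every other node repeatedly takes the minimum neighbor estimate plus the edge weight. The Python sketch below illustrates the concept rather than actual field calculus code: it converges from any starting state, including after an arbitrary perturbation.

```python
import math

def gradient_round(dist, neighbors, sources):
    """One synchronous round of the self-stabilising gradient: a source
    keeps distance 0; any other node takes the minimum neighbor estimate
    plus the edge weight (1 here)."""
    new = {}
    for v in dist:
        if v in sources:
            new[v] = 0.0
        else:
            best = min((dist[u] for u in neighbors[v]), default=math.inf)
            new[v] = best + 1.0
    return new

# Line graph 0-1-2-3-4 with node 0 as the source.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
dist = {v: math.inf for v in neighbors}

for _ in range(10):
    dist = gradient_round(dist, neighbors, {0})
print(dist)   # {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0}

# Self-stabilisation: corrupt the state arbitrarily; repeated rounds
# converge back to the correct distances regardless of the perturbation.
dist[2] = -5.0
for _ in range(10):
    dist = gradient_round(dist, neighbors, {0})
print(dist)   # correct values restored
```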

    Resilience in Numerical Methods: A Position on Fault Models and Methodologies

    Future extreme-scale computer systems may expose silent data corruption (SDC) to applications in order to save energy or increase performance. However, resilience research struggles to come up with useful abstract programming models for reasoning about SDC. Existing work randomly flips bits in running applications, but this only shows average-case behavior for a low-level, artificial hardware model. Algorithm developers need to understand worst-case behavior with the higher-level data types they actually use, in order to make their algorithms more resilient. Also, we know so little about how SDC may manifest in future hardware that it seems premature to draw conclusions about the average case. We argue instead that numerical algorithms can benefit from a numerical unreliability fault model, where faults manifest as unbounded perturbations to floating-point data. Algorithms can use inexpensive "sanity" checks that bound or exclude error in the results of computations. Given a selective reliability programming model that requires reliability only when and where needed, such checks can make algorithms reliable despite unbounded faults. Sanity checks, and in general a healthy skepticism about the correctness of subroutines, are wise even if hardware is perfectly reliable. Comment: Position paper.
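    To make the sanity-check idea concrete, here is a hedged Python sketch (not taken from the paper): a squared-norm reduction whose result may be perturbed without bound, guarded by the cheap invariant 0 ≤ ‖x‖² ≤ n·max xᵢ², with retry on violation. The fault injector is a toy model of SDC.

```python
import numpy as np

def checked_norm_sq(x, unreliable_sum, max_retries=10):
    """Sanity-checked squared norm under a 'numerical unreliability' model:
    the reduction may return an arbitrarily perturbed value, but the cheap
    bound 0 <= ||x||^2 <= n * max(x_i^2) excludes wild results."""
    n = len(x)
    upper = n * float(np.max(x * x))        # cheap bound, computed reliably
    for _ in range(max_retries):
        s = unreliable_sum(x * x)           # possibly silently corrupted
        if 0.0 <= s <= upper:               # sanity check: bound the result
            return s
    raise RuntimeError("persistent fault: bound violated on every retry")

rng = np.random.default_rng(42)
x = rng.standard_normal(1000)

def faulty_sum(v, p_fault=0.3):
    """Toy fault injector: with probability p_fault, return an unbounded
    perturbation instead of the true sum."""
    s = float(np.sum(v))
    return s * 1e12 if rng.random() < p_fault else s

print("checked ||x||^2 =", checked_norm_sq(x, faulty_sum))
```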

    A survey of high level frameworks in block-structured adaptive mesh refinement packages

    Pre-print. Over the last decade, block-structured adaptive mesh refinement (SAMR) has found increasing use in large, publicly available codes and frameworks. SAMR frameworks have evolved along different paths. Some have stayed focused on specific domain areas; others have pursued more general functionality, providing the building blocks for a larger variety of applications. In this survey paper we examine a representative set of SAMR packages and SAMR-based codes that have been in existence for half a decade or more, have a reasonably sized and active user base outside of their home institutions, and are publicly available. The set consists of a mix of SAMR packages and application codes that cover a broad range of scientific domains. We look at their high-level frameworks, their design trade-offs, and their approach to dealing with the advent of radical changes in hardware architecture. The codes included in this survey are BoxLib, Cactus, Chombo, Enzo, FLASH, and Uintah.
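    For readers new to SAMR, the sketch below illustrates the basic mechanism the surveyed packages build on: flagging fixed-size blocks of a structured grid for refinement where a local error indicator exceeds a threshold. The undivided-gradient indicator is a deliberate simplification; production packages use richer estimators.

```python
import numpy as np

def flag_blocks(u, block=8, threshold=0.1):
    """Toy block-structured refinement criterion: partition a 2D field into
    fixed-size blocks and flag any block whose maximum undivided gradient
    exceeds a threshold."""
    gx = np.abs(np.diff(u, axis=0, prepend=u[:1, :]))
    gy = np.abs(np.diff(u, axis=1, prepend=u[:, :1]))
    err = np.maximum(gx, gy)                 # per-cell error indicator
    nbx, nby = u.shape[0] // block, u.shape[1] // block
    flags = np.zeros((nbx, nby), dtype=bool)
    for i in range(nbx):
        for j in range(nby):
            tile = err[i*block:(i+1)*block, j*block:(j+1)*block]
            flags[i, j] = tile.max() > threshold
    return flags

# Field with a sharp front: only blocks straddling the front get flagged.
x = np.linspace(-1, 1, 64)
u = np.tanh(20 * x)[:, None] * np.ones((64, 64))
print(flag_blocks(u).astype(int))
```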