    Distributed simulation of city inundation by coupled surface and subsurface porous flow for urban flood decision support system

    We present a decision support system for flood early warning and disaster management. It includes models for data-driven meteorological prediction; models for simulating atmospheric pressure, wind, long sea waves, and seiches; a module for optimizing the operation of flood-barrier gates; models for assessing the stability of levees and embankments; and models for simulating city inundation dynamics and citizen evacuation scenarios. The novelty of this paper is a coupled distributed simulation of surface and subsurface flows that can predict inundation of low-lying inland zones far from the submerged waterfront areas, as observed in St. Petersburg during floods. All the models are wrapped as software services in the CLAVIRE platform for urgent computing, which provides workflow management and resource orchestration.
    Comment: Pre-print submitted to the 2013 International Conference on Computational Science
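    The surface-subsurface coupling can be pictured with a minimal sketch: a surface water-depth field and a subsurface head field exchange water through a leakage flux, and groundwater flow then carries water inland, where it can resurface far from the waterfront. Everything below (the function name, the leakage coefficient k_leak, the diffusion constant d_sub) is an illustrative assumption, not the CLAVIRE implementation.

        # Minimal sketch of one explicit coupling step between a surface
        # water-depth field and a subsurface head field; all parameters
        # are illustrative placeholders, not values from the paper.
        import numpy as np

        def coupled_step(h_surf, h_sub, k_leak=1e-4, d_sub=0.01, dt=1.0):
            # Darcy-style leakage: positive flux infiltrates surface water
            # into the ground, negative flux lets groundwater resurface.
            flux = k_leak * (h_surf - h_sub)
            # Clamping keeps depths physical at the cost of exact mass balance.
            h_surf = np.maximum(h_surf - dt * flux, 0.0)
            h_sub = h_sub + dt * flux
            # Diffusive groundwater flow spreads the subsurface head, which
            # is how water reaches low-lying zones far from the waterfront.
            lap = (np.roll(h_sub, 1, 0) + np.roll(h_sub, -1, 0)
                   + np.roll(h_sub, 1, 1) + np.roll(h_sub, -1, 1) - 4.0 * h_sub)
            h_sub = h_sub + dt * d_sub * lap
            return h_surf, h_sub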

    Analyzing large-scale DNA Sequences on Multi-core Architectures

    Rapid analysis of DNA sequences is important for preventing the evolution of viruses and bacteria during an early phase, for early diagnosis of genetic predispositions to certain diseases (such as cancer and cardiovascular disease), and for DNA forensics. However, real-world DNA sequences may comprise several gigabytes, and DNA analysis demands adequate computational resources to complete within a reasonable time. In this paper we present a scalable approach for parallel DNA analysis that is based on finite automata and is suitable for analyzing very large DNA segments. We evaluate our approach on real-world DNA segments of mouse (2.7 GB), cat (2.4 GB), dog (2.4 GB), chicken (1 GB), human (3.2 GB), and turkey (0.2 GB). Experimental results on a dual-socket shared-memory system with 24 physical cores show speed-ups of up to 17.6x. Our approach is up to 3x faster than a pattern-based parallel approach that uses the RE2 library.
    Comment: The 18th IEEE International Conference on Computational Science and Engineering (CSE 2015), Porto, Portugal, 20 - 23 October 2015
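    As an illustration of the automaton-based approach, the sketch below builds a KMP-style deterministic finite automaton for a query pattern and scans overlapping chunks of the sequence in parallel; chunks overlap by the pattern length minus one, so every match is fully contained in exactly one chunk and the per-chunk counts simply add up. This is an assumed reconstruction of the general technique, not the paper's implementation.

        # Sketch: DFA-based pattern counting over a DNA sequence, parallelized
        # across chunks (an assumed reconstruction, not the paper's code).
        from concurrent.futures import ProcessPoolExecutor

        ALPHABET = "ACGT"

        def build_dfa(pattern):
            # KMP-style automaton: dfa[state][symbol] -> next state.
            m = len(pattern)
            dfa = [{c: 0 for c in ALPHABET} for _ in range(m)]
            dfa[0][pattern[0]] = 1
            x = 0  # restart state (longest proper border seen so far)
            for j in range(1, m):
                for c in ALPHABET:
                    dfa[j][c] = dfa[x][c]      # mismatch: fall back
                dfa[j][pattern[j]] = j + 1     # match: advance
                x = dfa[x][pattern[j]]
            return dfa, x

        def scan(args):
            dfa, restart, m, chunk = args
            state = hits = 0
            for ch in chunk:
                state = dfa[state].get(ch, 0)  # unknown bases reset to state 0
                if state == m:                 # full pattern recognized
                    hits += 1
                    state = restart            # allow overlapping matches
            return hits

        def parallel_count(seq, pattern, chunk_size=1_000_000):
            dfa, restart = build_dfa(pattern)
            m = len(pattern)
            # Overlap chunks by m-1 symbols so boundary matches are not missed
            # and no match is counted twice.
            chunks = [seq[i:i + chunk_size + m - 1]
                      for i in range(0, len(seq), chunk_size)]
            with ProcessPoolExecutor() as pool:
                return sum(pool.map(scan, [(dfa, restart, m, c) for c in chunks]))

        if __name__ == "__main__":
            print(parallel_count("ACGTACGTAC" * 1000, "GTAC"))  # -> 2000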

    EPiK: A Workflow for Electron Tomography in Kepler

    Scientific workflows integrate data and computing interfaces as configurable, semi-automatic graphs to solve a scientific problem. Kepler is one such software system for designing, executing, reusing, evolving, archiving, and sharing scientific workflows. Electron tomography (ET) enables high-resolution views of complex cellular structures such as cytoskeletons, organelles, viruses, and chromosomes, and these imaging investigations produce large datasets. For instance, a 16-fold image tilt series is about 65 gigabytes, with each projection image comprising 4096 by 4096 pixels. When serial sections or montage techniques are used for large-field ET, the datasets are even larger, and for higher-resolution images with multiple tilt series the data size may reach the terabyte range. The demands of mass data processing and complex algorithms require integrating diverse codes into flexible software structures. This paper describes a workflow for Electron Tomography Programs in Kepler (EPiK). The EPiK workflow embeds the tracking process of IMOD and realizes the main reconstruction algorithms, including filtered backprojection (FBP) from TxBR and iterative reconstruction methods. We have tested the three-dimensional (3D) reconstruction process using EPiK on ET data. EPiK is a potential toolkit for biology researchers, with the advantages of logical viewing, easy handling, convenient sharing, and future extensibility.
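    To make the reconstruction step concrete, the sketch below implements a bare-bones 2D filtered backprojection for a parallel-beam tilt series: each projection is ramp-filtered in the Fourier domain and then smeared back across the image along its tilt angle. This is a generic textbook FBP assumed here for illustration; it is not the TxBR code that EPiK actually wraps.

        # Sketch: 2D parallel-beam filtered backprojection (textbook form,
        # not the TxBR implementation used by EPiK).
        import numpy as np

        def fbp(sinogram, angles_deg):
            # sinogram: (n_angles, n_det) array of projections;
            # returns an (n_det, n_det) reconstructed slice.
            n_angles, n_det = sinogram.shape
            # Ramp filter applied per projection in the Fourier domain.
            ramp = np.abs(np.fft.fftfreq(n_det))
            filtered = np.real(
                np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
            # Backprojection: smear each filtered projection across the image.
            mid = n_det // 2
            xs = np.arange(n_det) - mid
            X, Y = np.meshgrid(xs, xs)
            recon = np.zeros((n_det, n_det))
            for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
                # Detector bin hit by each pixel at this tilt angle.
                t = np.round(X * np.cos(theta) + Y * np.sin(theta)).astype(int) + mid
                valid = (t >= 0) & (t < n_det)
                recon[valid] += proj[t[valid]]
            return recon * np.pi / n_angles  # scale is approximate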

    Memetic Multilevel Hypergraph Partitioning

    Full text link
    Hypergraph partitioning has a wide range of important applications, such as VLSI design and scientific computing. With a focus on solution quality, we develop the first multilevel memetic algorithm to tackle the problem. Key components of our contribution are new, effective multilevel recombination and mutation operations that provide a large amount of diversity. We perform a wide range of experiments on a benchmark set containing instances from application areas such as VLSI design, SAT solving, social networks, and scientific computing. Compared to the state-of-the-art hypergraph partitioning tools hMetis, PaToH, and KaHyPar, our new algorithm computes the best result on almost all instances.
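    The memetic scheme itself can be sketched in a few lines: maintain a population of partitions, recombine and mutate parents to create offspring, and polish every individual with local search before it competes for survival. The sketch below uses a flat (single-level) bipartition, a consensus crossover, and a naive greedy refinement in place of the paper's multilevel recombination and FM-style local search, so it illustrates only the memetic skeleton, not the actual algorithm.

        # Memetic skeleton for hypergraph bipartitioning (a flat, naive
        # illustration; the paper's algorithm is multilevel and far stronger).
        import random

        def cut(edges, part):
            # Number of hyperedges with vertices in both blocks.
            return sum(len({part[v] for v in e}) > 1 for e in edges)

        def local_search(edges, part, eps=0.1):
            # Greedy single-vertex moves under a balance constraint
            # (a crude stand-in for FM-style refinement).
            n = len(part)
            lo = int(n / 2 * (1 - eps))
            hi = n - lo
            best, improved = cut(edges, part), True
            while improved:
                improved = False
                for v in range(n):
                    part[v] ^= 1
                    c = cut(edges, part)
                    if c < best and lo <= sum(part) <= hi:
                        best, improved = c, True
                    else:
                        part[v] ^= 1  # revert the move
            return best, part

        def memetic(edges, n, pop_size=8, generations=50, seed=0):
            random.seed(seed)
            def balanced():
                p = [i % 2 for i in range(n)]
                random.shuffle(p)
                return p
            pop = [local_search(edges, balanced()) for _ in range(pop_size)]
            for _ in range(generations):
                (c1, p1), (c2, p2) = random.sample(pop, 2)
                # Recombination: keep agreed assignments, randomize the rest.
                child = [a if a == b else random.randint(0, 1)
                         for a, b in zip(p1, p2)]
                # Mutation: flip a few vertices to maintain diversity.
                for v in random.sample(range(n), max(1, n // 20)):
                    child[v] ^= 1
                child = local_search(edges, child)
                # Steady-state replacement of the worst individual.
                worst = max(range(pop_size), key=lambda i: pop[i][0])
                if child[0] < pop[worst][0]:
                    pop[worst] = child
            return min(pop)

        # Example: 6 vertices, 4 hyperedges.
        best_cut, best_part = memetic([(0, 1, 2), (2, 3), (3, 4, 5), (0, 5)], 6)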