
    The Requirements Editor RED

    Get PDF

    A scalable parallel finite element framework for growing geometries. Application to metal additive manufacturing

    Get PDF
    This work introduces an innovative parallel, fully-distributed finite element framework for growing geometries and its application to metal additive manufacturing. It is well known that virtual part design and qualification in additive manufacturing require highly accurate multiscale and multiphysics analyses. Only high-performance computing tools are able to handle such complexity in time frames compatible with time-to-market. However, efficiency, without loss of accuracy, has rarely held centre stage in the numerical community. Here, in contrast, the framework is designed to adequately exploit the resources of high-end distributed-memory machines. It is grounded on three building blocks: (1) hierarchical adaptive mesh refinement with octree-based meshes; (2) a parallel strategy to model the growth of the geometry; (3) state-of-the-art parallel iterative linear solvers. Computational experiments consider the heat transfer analysis at the part scale of the printing process with powder-bed technologies. After verification against a 3D benchmark, a strong-scaling analysis assesses performance and identifies the major sources of parallel overhead. A third numerical example examines the efficiency and robustness of (2) for a curved 3D shape. Unprecedented parallelism and scalability were achieved in this work. Hence, this framework contributes to taking on higher complexity and/or accuracy, not only in part-scale simulations of metal or polymer additive manufacturing, but also in welding, sedimentation, atherosclerosis, or any other physical problem where the physical domain of interest grows in time.
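    As a rough illustration of the growth-modeling ingredient (2), the sketch below mimics a common element-birth idea: elements of a background mesh start inactive and are switched on layer by layer as the build height rises, so that only active elements would be assembled at each time step. The data structures and names are hypothetical, not taken from the paper's framework.

    ```python
    # Minimal sketch of an element-birth scheme for growing geometries
    # (hypothetical data structures; not the authors' actual code).
    import numpy as np

    class GrowingMesh:
        """Tracks which elements of a background mesh are 'born' (active)."""

        def __init__(self, element_heights):
            # element_heights: z-coordinate of each element's centroid
            self.element_heights = np.asarray(element_heights)
            self.active = np.zeros(len(self.element_heights), dtype=bool)

        def grow_to(self, build_height):
            """Activate all not-yet-active elements below the build height."""
            newly_born = (~self.active) & (self.element_heights <= build_height)
            self.active |= newly_born
            return np.flatnonzero(newly_born)  # ids of just-activated elements

    # Toy 1D column of elements, "printed" in 0.04-unit-thick layers.
    mesh = GrowingMesh(element_heights=np.arange(0.02, 1.0, 0.04))
    for layer in range(1, 6):
        born = mesh.grow_to(build_height=layer * 0.04)
        # A real solver would assemble only the active elements here.
        print(f"layer {layer}: activated elements {born.tolist()}")
    ```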

    Determination of Black Hole Masses in Galactic Black Hole Binaries using Scaling of Spectral and Variability Characteristics

    Full text link
    We present a study of correlations between X-ray spectral and timing properties observed from a number of Galactic Black Hole (BH) binaries during hard-soft state spectral evolution. We analyze 17 transition episodes from 8 BH sources observed with RXTE. Our scaling technique for BH mass determination uses a correlation between spectral index and quasi-periodic oscillation (QPO) frequency. In addition, we use a correlation between index and the normalization of the disk "seed" component to cross-check the BH mass determination and to estimate the distance to the source. While the index-QPO correlations for two given sources contain information on the ratio of the BH masses in those sources, the index-normalization correlations depend on the ratio of the BH masses and on the squared ratio of the distances. In fact, the index-normalization correlation also discloses the index vs. mass accretion rate saturation effect, given that the normalization of the disk "seed" photon supply is proportional to the disk mass accretion rate. We present arguments that this observationally established index saturation effect is a signature of the bulk motion (converging) flow onto the black hole, which was predicted earlier by dynamical Comptonization theory. We use GRO J1655-40 as the primary reference source, for which the BH mass, distance and inclination angle are evaluated by dynamical measurements with unprecedented precision among Galactic BH sources. We apply our scaling technique to determine BH masses and distances for Cygnus X-1, GX 339-4, 4U 1543-47, XTE J1550-564, XTE J1650-500, H 1743-322 and XTE J1859-226. Good agreement of our results for sources with known values of BH masses and distances provides an independent verification of our scaling technique.
    Comment: 25 pages, 9 figures, 5 tables. Accepted and scheduled for publication in The Astrophysical Journal
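    The scaling arithmetic the abstract describes can be made concrete. A minimal sketch follows, assuming (as in this scaling approach) that at a fixed spectral index the QPO frequency scales inversely with BH mass, and that the disk "seed" normalization scales as the mass accretion rate over distance squared. All input numbers below are illustrative placeholders, not the paper's measured values.

    ```python
    # Hedged sketch of the mass/distance scaling arithmetic
    # (illustrative placeholder numbers, not the paper's data).
    import math

    def scale_mass(m_ref, nu_ref, nu_target):
        """At the same spectral index, QPO frequency ~ 1/M_BH,
        so the target mass follows from the frequency ratio."""
        return m_ref * nu_ref / nu_target

    def scale_distance(d_ref, n_ref, n_target, m_ref, m_target):
        """Seed normalization N ~ Mdot / d^2 with Mdot ~ M at fixed
        index, so N ~ M / d^2; solve for the target distance."""
        return d_ref * math.sqrt((m_target / m_ref) * (n_ref / n_target))

    # Reference source (e.g. GRO J1655-40) with placeholder values:
    m_ref, d_ref = 6.3, 3.2    # solar masses, kpc
    nu_ref, nu_t = 4.0, 2.5    # QPO frequencies (Hz) at the same index
    n_ref, n_t = 1.0, 0.3      # disk normalizations at the same index

    m_t = scale_mass(m_ref, nu_ref, nu_t)
    d_t = scale_distance(d_ref, n_ref, n_t, m_ref, m_t)
    print(f"target BH mass ~ {m_t:.1f} Msun, distance ~ {d_t:.1f} kpc")
    ```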

    Limits on Fundamental Limits to Computation

    Full text link
    An indispensable part of our lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the last fifty years. Such Moore scaling now requires increasingly heroic efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and enrich our understanding of integrated-circuit scaling, we review fundamental limits to computation: in manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, we recall how some limits were circumvented and compare loose limits with tight ones. We also point out that engineering difficulties encountered by emerging technologies may indicate yet-unknown limits.
    Comment: 15 pages, 4 figures, 1 table
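    As one concrete instance of the energy limits such reviews examine, Landauer's principle bounds the energy dissipated per irreversibly erased bit at kT ln 2. The short calculation below evaluates that bound at room temperature; it is a standard textbook figure offered here for scale, not a result taken from this paper.

    ```python
    # Landauer's bound: minimum energy to erase one bit is k_B * T * ln(2).
    import math

    K_B = 1.380649e-23      # Boltzmann constant, J/K (exact SI value)
    T = 300.0               # room temperature, kelvin

    e_bit = K_B * T * math.log(2)   # joules per erased bit
    print(f"Landauer limit at {T:.0f} K: {e_bit:.3e} J per bit")

    # For scale: erasing 10^9 bits at this bound costs only ~3 pJ, many
    # orders of magnitude below what real transistors dissipate today.
    print(f"1e9 bits: {e_bit * 1e9:.3e} J")
    ```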

    BioWorkbench: A High-Performance Framework for Managing and Analyzing Bioinformatics Experiments

    Get PDF
    Advances in sequencing techniques have led to exponential growth in biological data, demanding the development of large-scale bioinformatics experiments. Because these experiments are computation- and data-intensive, they require high-performance computing (HPC) techniques and can benefit from specialized technologies such as Scientific Workflow Management Systems (SWfMS) and databases. In this work, we present BioWorkbench, a framework for managing and analyzing bioinformatics experiments. This framework automatically collects provenance data, including both performance data from workflow execution and data from the scientific domain of the workflow application. Provenance data can be analyzed through a web application that abstracts a set of queries to the provenance database, simplifying access to provenance information. We evaluate BioWorkbench using three case studies: SwiftPhylo, a phylogenetic tree assembly workflow; SwiftGECKO, a comparative genomics workflow; and RASflow, a RASopathy analysis workflow. We analyze each workflow from both computational and scientific domain perspectives, using queries to a provenance and annotation database. Some of these queries are available as a pre-built feature of the BioWorkbench web application. Through the provenance data, we show that the framework is scalable and achieves high performance, reducing the execution time of the case studies by up to 98%. We also show how the application of machine learning techniques can enrich the analysis process.
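    To make the idea of abstracted provenance queries concrete, here is a minimal sketch of the kind of query such a web application could expose: aggregating wall-clock time per workflow activity. The schema, table and column names are invented for illustration and are not BioWorkbench's actual provenance model.

    ```python
    # Sketch of a performance-oriented provenance query over a toy schema
    # (hypothetical table/column names, not BioWorkbench's data model).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE task_exec (
        workflow TEXT,   -- e.g. 'SwiftPhylo'
        activity TEXT,   -- workflow stage, e.g. 'align', 'build_tree'
        seconds  REAL    -- wall-clock duration of one task execution
    );
    INSERT INTO task_exec VALUES
        ('SwiftPhylo', 'align',      120.0),
        ('SwiftPhylo', 'align',      115.0),
        ('SwiftPhylo', 'build_tree', 900.0);
    """)

    # Performance view: where does the workflow spend its time?
    for row in conn.execute("""
        SELECT activity, COUNT(*) AS tasks, SUM(seconds) AS total_s
        FROM task_exec WHERE workflow = 'SwiftPhylo'
        GROUP BY activity ORDER BY total_s DESC
    """):
        print(row)
    ```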

    RED-PL, a Method for Deriving Product Requirements from a Product Line Requirements Model

    No full text
    Software product line (SPL) modeling has proven to be an effective approach to reuse in software development. Several variability approaches have been developed to plan requirements reuse, but few of them actually address the issue of deriving product requirements. Indeed, while the modeling approaches promise requirements reuse, the associated derivation techniques actually focus on deriving and reusing technical product data. This paper presents a method that supports requirements derivation. Its underlying principle is to take advantage of approaches designed for reusing PL requirements and to complement them with a requirements development process, by reuse, for single products. The proposed approach matches users' product requirements with PL requirements models and derives a collection of requirements that is (i) consistent and (ii) optimal with respect to users' priorities and the company's constraints. The proposed methodological process was validated in an industrial setting, by considering the requirements engineering phase of a product line of blood analyzers.
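    A toy sketch of the derivation step described above: rank PL requirements by user priority and keep only a consistent subset, where consistency is modeled here as a set of mutual-exclusion constraints. The data model and the greedy selection are invented simplifications for illustration, not RED-PL's actual matching algorithm.

    ```python
    # Toy requirements-derivation sketch (invented data model; RED-PL
    # itself works on richer PL requirements models and constraints).

    # PL requirements with user-assigned priorities (higher = more wanted).
    pl_requirements = {
        "R1: manual sample loading":    2,
        "R2: automated sample loading": 5,
        "R3: barcode tracking":         4,
    }
    # Company constraints: pairs of mutually exclusive requirements.
    excludes = {("R1: manual sample loading", "R2: automated sample loading")}

    def derive(requirements, excludes):
        """Greedily pick requirements by descending priority, skipping
        any that conflict with an already-selected one."""
        selected = []
        for req, _prio in sorted(requirements.items(), key=lambda kv: -kv[1]):
            conflict = any((req, s) in excludes or (s, req) in excludes
                           for s in selected)
            if not conflict:
                selected.append(req)
        return selected

    print(derive(pl_requirements, excludes))
    # -> R2 and R3 are kept; R1 is dropped because it conflicts with R2.
    ```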