
    Agent-based modeling: a systematic assessment of use cases and requirements for enhancing pharmaceutical research and development productivity.

    A crisis continues to brew within the pharmaceutical research and development (R&D) enterprise: productivity continues to decline as costs rise, despite ongoing, often dramatic scientific and technical advances. To reverse this trend, we offer suggestions for both the expansion and broader adoption of modeling and simulation (M&S) methods. We suggest strategies and scenarios intended to enable new M&S use cases that directly engage R&D knowledge generation and build actionable mechanistic insight, thereby opening the door to enhanced productivity. What M&S requirements must be satisfied to open that door and begin reversing the productivity decline? Can current methods and tools fulfill these requirements, or are new methods necessary? We draw on the relevant, recent literature to provide and explore answers. In so doing, we identify key roles for agent-based and other methods. We assemble a list of requirements that M&S must satisfy to meet the diverse needs distilled from a collection of research, review, and opinion articles. We argue that to realize its full potential, M&S should be actualized within a larger information technology framework, a dynamic knowledge repository, wherein models of various types execute, evolve, and increase in accuracy over time. We offer some details of the issues that must be addressed for such a repository to accrue the capabilities needed to reverse the productivity decline.
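    To make the agent-based ingredient concrete, here is a minimal sketch (in Python) of a generic agent-based simulation loop: agents hold local state, update it each step according to a simple neighbor-dependent rule, and an aggregate observable is recorded over time. The agent class, activation rule, and parameters are illustrative assumptions, not the models proposed in the paper.

    # Minimal agent-based simulation loop; the agent, rule, and parameters are
    # hypothetical illustrations of the method, not the paper's models.
    import random

    class Agent:
        """Toy agent with a single binary state."""
        def __init__(self, p_activate):
            self.active = False
            self.p_activate = p_activate

        def step(self, neighbors):
            # Rule: probabilistically activate if any neighbor is already active.
            if any(n.active for n in neighbors) and random.random() < self.p_activate:
                self.active = True

    def simulate(n_agents=100, n_steps=50, p_activate=0.1, seed=0):
        random.seed(seed)
        agents = [Agent(p_activate) for _ in range(n_agents)]
        agents[0].active = True  # seed a single active agent
        history = []
        for _ in range(n_steps):
            for i, agent in enumerate(agents):
                # Simple ring topology: each agent sees its two neighbors.
                neighbors = [agents[(i - 1) % n_agents], agents[(i + 1) % n_agents]]
                agent.step(neighbors)
            history.append(sum(a.active for a in agents))
        return history

    print(simulate()[-5:])  # number of active agents over the last five steps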

    Gateway Modeling and Simulation Plan

    This plan institutes direction across the Gateway Program and the Element Projects to ensure that Cross Program M&S are produced in a manner that (1) generates the artifacts required for NASA-STD-7009 compliance, (2) ensures interoperability of M&S exchanged and integrated across the program, and (3) drives integrated development efforts to provide cross-domain integrated simulation of the Gateway elements, space environment, and operational scenarios. This direction is flowed down via contractual enforcement to prime contractors and includes both the GMS requirements specified in this plan and the NASA-STD-7009-derived requirements necessary for compliance. Grounding principles for management of Gateway Models and Simulations (M&S) are derived from the Columbia Accident Investigation Board (CAIB) report and the Diaz team report, A Renewed Commitment to Excellence. As an outcome of these reports, and in response to Action 4 of the Diaz team report, the NASA Standard for Models and Simulations, NASA-STD-7009, was developed. The standard establishes M&S requirements for development and use activities to ensure proper capture and communication of M&S pedigree and credibility information to Gateway program decision makers. Through the course of the Gateway program life cycle, M&S will be heavily relied upon to conduct analysis, test products, support operations activities, enable informed decision making, and ultimately to certify the Gateway with an acceptable level of risk to crew and mission. To reduce the risk associated with M&S-influenced decisions, this plan applies the NASA-STD-7009 requirements to produce the artifacts that support credibility assessments and to ensure that information is communicated to program management.
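    As an illustration of the pedigree and credibility capture the plan calls for, the sketch below shows one possible shape for a per-model record that carries assessed credibility factors and pointers to supporting artifacts. The field names, factor names, and example model are assumptions made for illustration, not the normative NASA-STD-7009 artifact set.

    # Illustrative M&S pedigree/credibility record; field names, factor names,
    # and the example model are assumptions, not the NASA-STD-7009 artifact set.
    from dataclasses import dataclass, field

    @dataclass
    class CredibilityFactor:
        name: str        # e.g. "Verification", "Validation", "Input Pedigree"
        level: int       # assessed level on the program's agreed scale
        evidence: str    # pointer to the artifact backing the assessment

    @dataclass
    class ModelRecord:
        model_name: str
        version: str
        owner: str
        intended_use: str
        limitations: str
        factors: list = field(default_factory=list)

        def summary(self):
            worst = min((f.level for f in self.factors), default=0)
            return f"{self.model_name} v{self.version}: lowest assessed factor level = {worst}"

    record = ModelRecord(
        model_name="GatewayThermalSim",   # hypothetical model name
        version="1.2",
        owner="Element Project A",
        intended_use="Steady-state thermal analysis of a habitat element",
        limitations="Not validated for transient eclipse cases",
        factors=[CredibilityFactor("Verification", 3, "VER-RPT-012"),
                 CredibilityFactor("Validation", 2, "VAL-RPT-007")],
    )
    print(record.summary())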

    Innovative teaching of IC design and manufacture using the Superchip platform

    In this paper we describe how an intelligent chip architecture has allowed a large cohort of undergraduate students to be given effective practical insight into IC design by designing and manufacturing their own ICs. To achieve this, an efficient chip architecture, the “Superchip”, has been developed, which allows multiple student designs to be fabricated on a single IC and encapsulated in a standard package without excessive cost in terms of time or resources. We demonstrate how the practical process has been tightly coupled with theoretical aspects of the degree course and how transferable skills are incorporated into the design exercise. Furthermore, the students are introduced at an early stage to team working, real deadlines, and collaborative report writing. This paper provides details of the teaching rationale, design exercise overview, design process, chip architecture, and test regime.

    The prospect of using LES and DES in engineering design, and the research required to get there

    In this paper we try to look into the future to divine how large eddy and detached eddy simulations (LES and DES, respectively) will be used in the engineering design process about 20-30 years from now. Some key challenges specific to the engineering design process are identified, and some of the critical outstanding problems and promising research directions are discussed.
    Comment: Accepted for publication in the Royal Society Philosophical Transactions.

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments (including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers) is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade.
    Comment: Major revision, to appear in SIAM Review.

    Power aware early design stage hardware software co-optimization

    Co-optimizing hardware and software can lead to substantial performance and energy benefits, and is becoming an increasingly important design paradigm. In scientific computing, power constraints increasingly necessitate a return to specialized chips such as Intel’s MIC or IBM’s Blue Gene architectures. To enable hardware/software co-design in early stages of the design cycle, we propose a simulation infrastructure and methodology that combines high-abstraction performance simulation using Sniper with power modeling using McPAT and custom DRAM power models. Sniper/McPAT is fast (simulation speed is around 2 MIPS on an 8-core host machine) because it uses analytical modeling to abstract away core performance during multi-core simulation. We demonstrate Sniper/McPAT’s accuracy through validation against real hardware; we report average performance and power prediction errors of 22.1% and 8.3%, respectively, for a set of SPEComp benchmarks.
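    The coupling pattern described above can be sketched as a fast analytical performance model that emits per-interval activity counts, which a power model then converts into dynamic and static power. The formulas and constants below are toy placeholders for illustration, not the actual Sniper or McPAT interfaces.

    # Toy sketch of the performance-to-power coupling pattern: an analytical core
    # model produces activity counts, and a power model turns them into watts.
    # All formulas and constants are placeholders, not Sniper's or McPAT's.
    from dataclasses import dataclass

    @dataclass
    class IntervalStats:
        cycles: int
        instructions: int
        dram_accesses: int

    def analytical_core_model(instr_count, base_cpi=1.0, miss_rate=0.01, miss_penalty=200):
        """Interval-style estimate: base CPI plus a long-latency miss penalty term."""
        misses = int(instr_count * miss_rate)
        cycles = int(instr_count * base_cpi + misses * miss_penalty)
        return IntervalStats(cycles, instr_count, misses)

    def power_model(stats, freq_hz=2.0e9, e_per_instr=0.5e-9, e_per_dram=20e-9, static_w=2.0):
        """Activity-proportional dynamic energy plus static power, averaged over time."""
        time_s = stats.cycles / freq_hz
        dynamic_j = stats.instructions * e_per_instr + stats.dram_accesses * e_per_dram
        return static_w + dynamic_j / time_s

    stats = analytical_core_model(instr_count=10_000_000)
    print(f"{stats.cycles} cycles, {power_model(stats):.2f} W average power")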

    RPPM: Rapid Performance Prediction of Multithreaded Workloads on Multicore Processors

    Analytical performance modeling is a useful complement to detailed cycle-level simulation for quickly exploring the design space in an early design stage. Mechanistic analytical modeling is particularly interesting as it provides deep insight and does not require the expensive offline profiling that empirical modeling does. Previous work in mechanistic analytical modeling, unfortunately, is limited to single-threaded applications running on single-core processors. This work proposes RPPM, a mechanistic analytical performance model for multi-threaded applications on multicore hardware. RPPM collects microarchitecture-independent characteristics of a multi-threaded workload to predict its performance on a previously unseen multicore architecture. The profile needs to be collected only once to predict performance across a range of processor architectures. We evaluate RPPM's accuracy against simulation and report a performance prediction error of 11.2% on average (23% maximum). We demonstrate RPPM's usefulness for conducting design space exploration experiments as well as for analyzing parallel application performance.
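    The mechanistic idea can be sketched as follows: per-thread cycle counts are built from a base CPI plus penalty terms driven by microarchitecture-independent event rates (measured once), and the multi-threaded execution time adds a synchronization term on top of the slowest thread. The equations and numbers below are a simplified stand-in for the general approach, not RPPM's actual model.

    # Simplified mechanistic performance estimate in the spirit of the abstract;
    # the workload profile and machine parameters are hypothetical, and the
    # equations are a stand-in for the general approach, not RPPM itself.

    def thread_cycles(instructions, event_rates, penalties, base_cpi=1.0):
        """Cycles = instructions * base CPI + sum of (event count * penalty)."""
        cycles = instructions * base_cpi
        for event, rate in event_rates.items():
            cycles += instructions * rate * penalties[event]
        return cycles

    def predicted_time(per_thread_instr, event_rates, penalties, sync_fraction, freq_hz):
        """Multi-threaded time: slowest thread plus a synchronization overhead term."""
        cycles = [thread_cycles(n, event_rates, penalties) for n in per_thread_instr]
        return max(cycles) * (1.0 + sync_fraction) / freq_hz

    # Profile collected once (microarchitecture-independent event rates) and
    # machine parameters varied during design space exploration.
    profile = {"llc_miss": 0.002, "branch_miss": 0.005}
    machine = {"llc_miss": 200, "branch_miss": 15}
    t = predicted_time([1.0e9, 0.9e9, 1.1e9, 1.0e9], profile, machine,
                       sync_fraction=0.05, freq_hz=3.0e9)
    print(f"predicted execution time: {t:.3f} s")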

    Modeling of Induced Hydraulically Fractured Wells in Shale Reservoirs Using Branched Fractals


    Mechanistic modeling of architectural vulnerability factor

    Reliability to soft errors is a significant design challenge in modern microprocessors owing to the exponential increase in the number of transistors on chip and the reduction in operating voltages with each process generation. Architectural Vulnerability Factor (AVF) modeling using microarchitectural simulators enables architects to make informed performance, power, and reliability tradeoffs. However, such simulators are time-consuming and do not reveal the microarchitectural mechanisms that influence AVF. In this article, we present an accurate first-order mechanistic analytical model to compute AVF, developed from the first principles of out-of-order superscalar execution. This model provides insight into the fundamental interactions between the workload and the microarchitecture that together influence AVF. We use the model to perform design space exploration, parametric sweeps, and workload characterization for AVF.
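    As a concrete illustration of the quantity being modeled, the sketch below computes a structure's AVF as the fraction of its capacity occupied by ACE (architecturally correct execution) state, with average occupancy estimated from Little's law. The reorder-buffer numbers are hypothetical and the formula is a first-order simplification, not the article's full mechanistic model.

    # First-order AVF sketch: AVF = ACE bit-cycles / total bit-cycles, with the
    # average ACE occupancy estimated via Little's law. The ROB parameters are
    # hypothetical; this is a simplification, not the article's full model.

    def avg_occupancy(arrival_rate_per_cycle, residence_cycles):
        """Little's law: average resident entries = arrival rate x residence time."""
        return arrival_rate_per_cycle * residence_cycles

    def structure_avf(ace_entries, total_entries):
        """Fraction of the structure's capacity holding ACE state, per cycle."""
        return ace_entries / total_entries

    # Example: 1.5 ACE instructions dispatched per cycle, each resident for 40
    # cycles, in a 128-entry reorder buffer.
    occ = avg_occupancy(arrival_rate_per_cycle=1.5, residence_cycles=40)
    print(f"ROB AVF ~ {structure_avf(min(occ, 128), 128):.2f}")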