A compositional method for reliability analysis of workflows affected by multiple failure modes
We focus on reliability analysis for systems designed as workflow-based compositions of components. Components are characterized by their failure profiles, which account for multiple possible failure modes. A compositional calculus is provided to evaluate the failure profile of a composite system, given the failure profiles of its components. The calculus is described as a syntax-driven procedure that synthesizes a workflow's failure profile. The method is intended as a design-time aid that can help software engineers reason about system reliability in the early stages of development. A simple case study is presented to illustrate the proposed approach.
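The paper's calculus is not reproduced here, but a minimal sketch of the underlying idea, assuming independent failures and a purely sequential two-step workflow, might look as follows. The names `FailureProfile` and `compose_sequential` are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class FailureProfile:
    """Probability of each failure mode; the remainder is the success probability."""
    modes: dict[str, float] = field(default_factory=dict)

    @property
    def p_success(self) -> float:
        return 1.0 - sum(self.modes.values())


def compose_sequential(a: FailureProfile, b: FailureProfile) -> FailureProfile:
    """Sequence A then B: B only runs if A succeeds (independence assumed)."""
    combined = dict(a.modes)  # A's failure modes propagate directly
    for mode, p in b.modes.items():
        combined[mode] = combined.get(mode, 0.0) + a.p_success * p
    return FailureProfile(combined)


# Two workflow steps with different failure modes
step1 = FailureProfile({"timeout": 0.02, "crash": 0.01})
step2 = FailureProfile({"bad_output": 0.05})
workflow = compose_sequential(step1, step2)
print(workflow.modes)      # per-mode failure probabilities of the composition
print(workflow.p_success)  # overall reliability of the two-step workflow
```

A full calculus would also cover branching, parallel, and iterative composition operators, each with its own rule for combining profiles.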
Automated Instruction Stream Throughput Prediction for Intel and AMD Microarchitectures
An accurate prediction of the scheduling and execution of instruction streams is a necessary prerequisite for predicting the in-core performance behavior of throughput-bound loop kernels on out-of-order processor architectures. Such predictions are an indispensable component of analytical performance models, such as the Roofline and the Execution-Cache-Memory (ECM) model, and allow a deep understanding of the performance-relevant interactions between hardware architecture and loop code. We present the Open Source Architecture Code Analyzer (OSACA), a static analysis tool for predicting the execution time of sequential loops comprising x86 instructions under the assumption of an infinite first-level cache and perfect out-of-order scheduling. We show the process of building a machine model from available documentation and semi-automatic benchmarking, and carry it out for the latest Intel Skylake and AMD Zen microarchitectures. To validate the constructed models, we apply them to several assembly kernels and compare runtime predictions with actual measurements. Finally, we give an outlook on how the method may be generalized to new architectures.
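To illustrate the kind of port-based throughput estimate such a machine model enables, here is a deliberately simplified sketch. The port assignments and cycle costs are invented for illustration; this is not OSACA's actual machine model, file format, or API.

```python
# Simplified port-based throughput estimate for a loop body.
# Under the idealizations named in the abstract (all data in L1, perfect
# out-of-order scheduling), the throughput bound is set by the busiest port.
from collections import defaultdict

# instruction -> list of (port, cycles) pairs; uops split across eligible ports
KERNEL = [
    ("vmovupd (load)",  [("p2", 0.5), ("p3", 0.5)]),  # load on either load port
    ("vfmadd",          [("p0", 0.5), ("p1", 0.5)]),  # FMA on port 0 or 1
    ("vmovupd (store)", [("p4", 1.0)]),               # store data
    ("add",             [("p6", 1.0)]),               # loop counter increment
]


def predict_cycles_per_iteration(kernel):
    """Accumulate per-port pressure and return the maximum as the bound."""
    port_pressure = defaultdict(float)
    for _, uops in kernel:
        for port, cycles in uops:
            port_pressure[port] += cycles
    return max(port_pressure.values()), dict(port_pressure)


cycles, pressure = predict_cycles_per_iteration(KERNEL)
print(f"predicted throughput: {cycles:.2f} cycles/iteration")
print(pressure)
```

A real tool additionally has to parse the assembly, resolve which ports each instruction can actually use on a given microarchitecture, and account for loop-carried dependencies.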
A Standard Platform for Testing and Comparison of MDAO Architectures
The Multidisciplinary Design Analysis and Optimization (MDAO) community has developed a multitude of algorithms and techniques, called architectures, for performing optimizations on complex engineering systems that involve coupling between multiple discipline analyses. These architectures seek to efficiently handle optimizations involving computationally expensive analyses across multiple disciplines. We propose a new testing procedure that provides a quantitative and qualitative means of comparison among architectures. The proposed test procedure is implemented within the open source framework OpenMDAO, and comparative results are presented for five well-known architectures: MDF, IDF, CO, BLISS, and BLISS-2000. We also demonstrate how open source software development methods can allow the MDAO community to submit new problems and architectures to keep the test suite relevant.
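To give a flavor of what one such architecture does, here is a bare-bones MDF-style (multidisciplinary feasible) sketch: the optimizer sees a single objective, and every evaluation drives the coupled disciplines to consistency with a fixed-point loop. The disciplines and numbers are invented for illustration; this is neither the paper's test suite nor OpenMDAO's API, just the generic pattern expressed with `scipy.optimize`.

```python
from scipy.optimize import minimize


def discipline1(x, y2):
    return x**2 + y2            # coupling output y1


def discipline2(x, y1):
    return abs(y1) ** 0.5 + x   # coupling output y2


def converge_coupling(x, tol=1e-10, max_iter=100):
    """Gauss-Seidel fixed-point iteration until the coupling variables settle."""
    y1, y2 = 1.0, 1.0
    for _ in range(max_iter):
        y1_new = discipline1(x, y2)
        y2_new = discipline2(x, y1_new)
        if abs(y1_new - y1) < tol and abs(y2_new - y2) < tol:
            break
        y1, y2 = y1_new, y2_new
    return y1, y2


def objective(xv):
    x = xv[0]
    y1, y2 = converge_coupling(x)  # MDF: feasibility enforced at every evaluation
    return y1 + y2 + x**2


result = minimize(objective, x0=[1.0], bounds=[(-2.0, 2.0)])
print(result.x, result.fun)
```

Architectures such as IDF or CO instead expose the coupling variables to the optimizer and enforce consistency through constraints, which is exactly the kind of trade-off a standard test suite can quantify.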
Computational Strategies for Scalable Genomics Analysis.
The revolution in next-generation DNA sequencing technologies is leading to explosive data growth in genomics, posing a significant challenge to the computing infrastructure and software algorithms for genomics analysis. Various big data technologies have been explored to scale up/out current bioinformatics solutions to mine big genomics data. In this review, we survey some of these exciting developments in the applications of parallel distributed computing and special hardware to genomics. We comment on the pros and cons of each strategy in the context of ease of development, robustness, scalability, and efficiency. Although this review is written for an audience from the genomics and bioinformatics fields, it may also be informative for computer scientists with an interest in genomics applications.
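One of the simplest "scale up" strategies such a survey covers is a scatter-gather pattern over genomic regions. The toy below fans placeholder per-chromosome work out across cores; the regions and the per-region task are stand-ins, not a real alignment or variant-calling pipeline.

```python
from multiprocessing import Pool

REGIONS = [f"chr{i}" for i in range(1, 23)] + ["chrX", "chrY"]


def analyze_region(region: str) -> tuple[str, int]:
    """Stand-in for a per-region task (e.g., counting reads in a BAM slice)."""
    # A real implementation would open the data slice for `region` here.
    return region, len(region) * 1000  # fake per-region metric


def scatter_gather(regions, workers=8):
    """Scatter regions across worker processes, gather per-region results."""
    with Pool(processes=workers) as pool:
        results = pool.map(analyze_region, regions)
    return dict(results)


if __name__ == "__main__":
    per_region = scatter_gather(REGIONS)
    print(sum(per_region.values()), "total (toy) metric across", len(per_region), "regions")
```

Scaling out replaces the process pool with a cluster scheduler or a big data framework, which is where the robustness and ease-of-development trade-offs discussed in the review come in.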
Advanced Cyberinfrastructure for Science, Engineering, and Public Policy
Progress in many domains increasingly benefits from our ability to view the systems of those domains through a computational lens, i.e., using computational abstractions of the domains, and from our ability to acquire, share, integrate, and analyze disparate types of data. These advances would not be possible without advanced data and computational cyberinfrastructure and tools for data capture, integration, analysis, modeling, and simulation. However, despite, and perhaps because of, advances in "big data" technologies for data acquisition, management, and analytics, the other largely manual and labor-intensive aspects of the decision-making process, e.g., formulating questions, designing studies, organizing, curating, connecting, correlating, and integrating cross-domain data, drawing inferences, and interpreting results, have become the rate-limiting steps to progress. Advancing the capability and capacity for evidence-based improvements in science, engineering, and public policy requires support for (1) computational abstractions of the relevant domains coupled with computational methods and tools for their analysis, synthesis, simulation, visualization, sharing, and integration; (2) cognitive tools that leverage and extend the reach of human intellect and partner with humans on all aspects of the activity; (3) nimble and trustworthy data cyberinfrastructures that connect and manage a variety of instruments, multiple interrelated data types and associated metadata, data representations, processes, protocols, and workflows, and that enforce applicable security and data access and use policies; and (4) organizational and social structures and processes for collaborative and coordinated activity across disciplinary and institutional boundaries.