6,692 research outputs found

    Early Observations on Performance of Google Compute Engine for Scientific Computing

    Although Cloud computing emerged for business applications in industry, public Cloud services are now widely accepted and encouraged for scientific computing in academia. The recently released Google Compute Engine (GCE) is claimed to support high-performance and computationally intensive tasks, yet few evaluation studies exist to reveal GCE's scientific capabilities. Since fundamental performance benchmarking is the standard strategy for early-stage evaluation of new Cloud services, we followed the Cloud Evaluation Experiment Methodology (CEEM) to benchmark GCE and compare it with Amazon EC2, to help understand GCE's elementary capability for dealing with scientific problems. The experimental results and analyses show both potential advantages of, and possible threats to, applying GCE to scientific computing. For example, compared to Amazon's EC2 service, GCE may better suit applications that require frequent disk operations, while it may not yet be ready for single-VM-based parallel computing. Following the same evaluation methodology, other evaluators can replicate and/or supplement this fundamental evaluation of GCE. Based on these fundamental results, suitable GCE environments can then be established for case studies that solve real science problems. Comment: Proceedings of the 5th International Conference on Cloud Computing Technologies and Science (CloudCom 2013), pp. 1-8, Bristol, UK, December 2-5, 2013
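The disk-operation comparison mentioned above can be approximated with a simple sequential-write microbenchmark run on both VM types. The sketch below is illustrative only, assuming Python is available on the instances; the file path, block size, and block count are arbitrary choices for this example, not parameters taken from the paper's benchmark suite.

```python
import os
import time

def disk_write_throughput(path="bench.tmp", block_size=1 << 20, blocks=256):
    """Sequentially write `blocks` blocks of `block_size` bytes and
    return throughput in MB/s. A hypothetical microbenchmark, not the
    paper's actual tooling."""
    data = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force data to disk so we time real I/O
    elapsed = time.perf_counter() - start
    os.remove(path)
    return (block_size * blocks) / (1 << 20) / elapsed

if __name__ == "__main__":
    # Run the same script on a GCE and an EC2 instance, then compare.
    print(f"Sequential write: {disk_write_throughput():.1f} MB/s")
```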

    DoKnowMe: Towards a Domain Knowledge-driven Methodology for Performance Evaluation

    Software engineering considers performance evaluation to be one of the key portions of software quality assurance. Unfortunately, there seems to be a lack of standard methodologies for performance evaluation even in the scope of experimental computer science. Inspired by the concept of “instantiation” in object-oriented programming, we distinguish the generic performance evaluation logic from the scattered and ad hoc studies in the literature, and develop an abstract evaluation methodology (by analogy with a “class”) that we name Domain Knowledge-driven Methodology (DoKnowMe). By replacing five predefined domain-specific knowledge artefacts, DoKnowMe can be instantiated into specific methodologies (by analogy with “objects”) that guide evaluators in the performance evaluation of different software and even computing systems. We also propose a generic validation framework with four indicators (i.e. usefulness, feasibility, effectiveness and repeatability), and use it to validate DoKnowMe in the Cloud services evaluation domain. Given the positive and promising validation results, we plan to integrate more common evaluation strategies to improve DoKnowMe, and to focus further on the performance evaluation of Cloud autoscaler systems.
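The class/object analogy from the abstract can be made concrete in code. The sketch below is a hypothetical rendering, assuming Python: the abstract class stands for DoKnowMe, its abstract methods stand in for two of the five replaceable knowledge artefacts (the other three are elided for brevity), and the subclass is one instantiation for Cloud services evaluation. None of the method or class names come from the paper.

```python
from abc import ABC, abstractmethod

class DoKnowMe(ABC):
    """Abstract evaluation methodology (the 'class' in the analogy).
    Subclasses plug in domain-specific knowledge artefacts to become
    concrete methodologies (the 'objects')."""

    @abstractmethod
    def domain_metrics(self):
        """Artefact: which performance metrics apply in this domain."""

    @abstractmethod
    def domain_benchmarks(self):
        """Artefact: which benchmarks exercise those metrics."""

    def evaluate(self, system):
        # Generic evaluation logic shared by every instantiation.
        results = {}
        for metric in self.domain_metrics():
            for bench in self.domain_benchmarks():
                results[(metric, bench.__name__)] = bench(system, metric)
        return results

class CloudServicesEvaluation(DoKnowMe):
    """One instantiation: performance evaluation of Cloud services."""

    def domain_metrics(self):
        return ["latency", "disk throughput"]

    def domain_benchmarks(self):
        def placeholder_bench(system, metric):
            # In practice this would invoke a real benchmark tool.
            return f"measure {metric} of {system}"
        return [placeholder_bench]

methodology = CloudServicesEvaluation()  # the "object" made from the "class"
print(methodology.evaluate("some Cloud service"))
```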

    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. To help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers: 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document and presented along with introductory material. Comment: 72 pages

    Aeronautics and space report of the President, 1980 activities

    The year's achievements in the areas of communication, Earth resources, environment, space sciences, transportation, and space energy are summarized, as are current and planned activities in these areas at the various departments and agencies of the Federal Government. Tables show U.S. and world spacecraft records, spacecraft launchings for 1980, and scientific payloads and probes launched 1975-1980. Budget data are included.

    Aeronautics and space report of the President, 1982 activities

    Achievements of the space program are summarized in the areas of communication, Earth resources, environment, space sciences, transportation, aeronautics, and space energy. Space program activities of the various departments and agencies of the Federal Government are discussed in relation to the agencies' goals and policies. Records of U.S. and world spacecraft launchings, successful U.S. launches for 1982, U.S.-launched applications and scientific satellites and space probes since 1975, U.S. and Soviet manned spaceflights since 1961, data on U.S. space launch vehicles, and budget summaries are provided. The national space policy and the aeronautical research and technology policy statements are included.

    HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show hybrid environments are the natural path to getting the best of on-premise and cloud resources: steady (and sensitive) workloads can run on on-premise resources, while peak demand can leverage remote resources in a pay-as-you-go manner. Nevertheless, there are plenty of questions to be answered in HPC cloud, ranging from how to extract the best performance from an unknown underlying platform to which services are essential to make its usage easier. Moreover, the discussion of the right pricing and contractual models to fit both small and large users is relevant for the sustainability of HPC clouds. This paper presents a survey and taxonomy of efforts in HPC cloud and a vision of what we believe lies ahead, including a set of research challenges that, once tackled, can help advance businesses and scientific discoveries. This is particularly relevant given the fast-increasing wave of new HPC applications coming from big data and artificial intelligence. Comment: 29 pages, 5 figures, Published in ACM Computing Surveys (CSUR)
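The hybrid pattern described above (steady and sensitive workloads stay on-premise, peaks burst to the cloud) reduces to a simple placement decision. The sketch below is a toy policy written for illustration, assuming Python; the Job fields, the sensitivity flag, and the threshold logic are invented for this example and do not come from the survey.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cores: int
    sensitive: bool = False  # e.g. handles regulated or proprietary data

def place(job: Job, on_prem_free_cores: int) -> str:
    """Toy hybrid-HPC placement: keep sensitive work and anything that
    fits on-premise; burst the overflow to a pay-as-you-go cloud."""
    if job.sensitive or job.cores <= on_prem_free_cores:
        return "on-premise"
    return "public cloud (burst)"

# Example: the on-premise cluster has 64 idle cores.
print(place(Job("genome-assembly", cores=48), 64))            # on-premise
print(place(Job("monte-carlo-peak", cores=512), 64))          # public cloud (burst)
print(place(Job("payroll-analytics", 8, sensitive=True), 64)) # on-premise
```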

    CEEM: A Practical Methodology for Cloud Services Evaluation

    Given the increasing number of Cloud services available in the market, evaluating candidate Cloud services is crucial and beneficial both for service customers (e.g. cost-benefit analysis) and for providers (e.g. direction of improvement). When it comes to performing any evaluation, a suitable methodology is inevitably required to direct the experimental implementation. Nevertheless, there is still a lack of a sound methodology to guide the evaluation of Cloud services. By borrowing lessons from the evaluation of traditional computing systems, referring to the guidelines for Design of Experiments (DOE), and summarizing existing experiences from real experimental studies, we propose a generic Cloud Evaluation Experiment Methodology (CEEM) for Cloud services evaluation. Furthermore, we have established a pre-experimental knowledge base and specified corresponding suggestions to make this methodology more practical in the Cloud Computing domain. Through evaluating the Google AppEngine Python runtime as a preliminary validation, we show that Cloud evaluators may achieve more rational and convincing experimental results and conclusions by following such an evaluation methodology.
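CEEM is presented as a step-wise experimental workflow. The skeleton below sketches how such a methodology could be driven in code, assuming Python; the step names paraphrase a generic DOE-style evaluation flow rather than reproducing the paper's exact step list, and the runner is purely illustrative.

```python
# Paraphrased, DOE-style evaluation steps (illustrative; not verbatim CEEM).
STEPS = (
    "recognize the evaluation requirement",
    "identify relevant Cloud service features",
    "list and select metrics and benchmarks",
    "list and select experimental factors",
    "design the experiments (DOE guidelines)",
    "implement the experiments",
    "analyze the experimental results",
    "draw conclusions and report",
)

def run_evaluation(service: str) -> list[str]:
    """Walk a service through each step, returning an audit trail so the
    evaluation can be replicated or supplemented by other evaluators."""
    return [f"[{service}] step {i}: {step}" for i, step in enumerate(STEPS, 1)]

for line in run_evaluation("Google AppEngine Python runtime"):
    print(line)
```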

    Fiscal year 1981 scientific and technical reports, articles, papers, and presentations

    This bibliography lists approximately 503 formal NASA technical reports, papers published in technical journals, and presentations by MSFC personnel in FY 1981. It also includes papers by MSFC contractors. Citations announced in the NASA scientific and technical information system are noted.