
    MLPerf Inference Benchmark

    Machine-learning (ML) hardware and software system demand is burgeoning. Driven by ML applications, the number of different ML inference systems has exploded. Over 100 organizations are building ML inference chips, and the systems that incorporate existing models span at least three orders of magnitude in power consumption and five orders of magnitude in performance; they range from embedded devices to data-center solutions. Fueling the hardware are a dozen or more software frameworks and libraries. The myriad combinations of ML hardware and ML software make assessing ML-system performance in an architecture-neutral, representative, and reproducible manner challenging. There is a clear need for industry-wide standard ML benchmarking and evaluation criteria. MLPerf Inference answers that call. In this paper, we present our benchmarking method for evaluating ML inference systems. Driven by more than 30 organizations as well as more than 200 ML engineers and practitioners, MLPerf prescribes a set of rules and best practices to ensure comparability across systems with wildly differing architectures. The first call for submissions garnered more than 600 reproducible inference-performance measurements from 14 organizations, representing over 30 systems that showcase a wide range of capabilities. The submissions attest to the benchmark's flexibility and adaptability. Comment: ISCA 2020
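    The paper's own LoadGen harness is not reproduced here; as a rough illustration of the kind of measurement such a benchmark standardizes (throughput and tail latency of an inference loop), a minimal Python sketch follows. It is not the MLPerf LoadGen API; the `benchmark_inference` function, the warm-up count, and the stand-in model are hypothetical placeholders.

```python
# Illustrative sketch only: a simplified single-stream inference measurement
# in the spirit of an architecture-neutral benchmark harness. NOT the MLPerf
# LoadGen API; all names and counts here are hypothetical placeholders.
import time
import statistics
from typing import Any, Callable, Sequence

def benchmark_inference(run_inference: Callable[[Any], Any],
                        samples: Sequence[Any],
                        warmup: int = 10) -> dict:
    """Measure throughput and tail latency of a single-stream inference loop."""
    # Warm-up iterations so one-time costs (model load, JIT, cache fills) are excluded.
    for s in samples[:warmup]:
        run_inference(s)

    latencies = []
    start = time.perf_counter()
    for s in samples:
        t0 = time.perf_counter()
        run_inference(s)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    latencies.sort()
    p90 = latencies[int(0.9 * (len(latencies) - 1))]
    return {
        "throughput_qps": len(samples) / elapsed,
        "mean_latency_s": statistics.mean(latencies),
        "p90_latency_s": p90,
    }

if __name__ == "__main__":
    # Stand-in "model": replace with a real framework call in practice.
    dummy_model = lambda x: sum(x)
    data = [[float(i)] * 1024 for i in range(1000)]
    print(benchmark_inference(dummy_model, data))
```

    A real submission would additionally fix the scenario (single-stream, multi-stream, server, or offline), the accuracy target, and the query distribution, which is what the benchmark's rules standardize across systems.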

    Empirical Performance Analysis of High Performance Computing Benchmarks Across Variations in Cloud Computing

    High Performance Computing (HPC) applications are data-intensive scientific software requiring significant CPU and data storage capabilities. Researchers have examined the performance of the Amazon Elastic Compute Cloud (EC2) environment across several HPC benchmarks; however, an extensive HPC benchmark study and a comparison between Amazon EC2 and Windows Azure (Microsoft’s cloud computing platform), with metrics such as memory bandwidth, Input/Output (I/O) performance, and communication and computational performance, are largely absent. The purpose of this study is to perform an exhaustive HPC benchmark comparison on the EC2 and Windows Azure platforms. We implement existing benchmarks to evaluate and analyze the performance of two public clouds spanning both IaaS and PaaS types. We use Amazon EC2 and Windows Azure as platforms for hosting HPC benchmarks, with variations in instance type, number of nodes, hardware, and software. This is accomplished by running the STREAM, IOR, and NPB benchmarks on these platforms on a varied number of nodes for small and medium instance types. These benchmarks measure memory bandwidth, I/O performance, and communication and computational performance. Benchmarking cloud platforms provides useful objective measures of their worthiness for HPC applications, in addition to assessing their consistency and predictability in supporting them.
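    As a rough illustration of what a STREAM-style memory-bandwidth measurement does on a single cloud instance, here is a minimal NumPy sketch of the "triad" kernel (a = b + scalar * c). It is not the official STREAM benchmark; the array size, repeat count, and the two-step NumPy evaluation (which moves somewhat more data than a fused C triad) are simplifications for demonstration.

```python
# Illustrative sketch only: a simplified STREAM "triad"-style memory-bandwidth
# measurement, not the official STREAM benchmark. Array size and repeat count
# are arbitrary choices for demonstration on a single instance.
import time
import numpy as np

def stream_triad_bandwidth(n: int = 20_000_000, repeats: int = 10) -> float:
    """Return best-case bandwidth in GB/s for the triad kernel a = b + scalar * c."""
    a = np.zeros(n, dtype=np.float64)
    b = np.random.rand(n)
    c = np.random.rand(n)
    scalar = 3.0

    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.multiply(c, scalar, out=a)  # a = scalar * c
        np.add(a, b, out=a)            # a = b + scalar * c
        best = min(best, time.perf_counter() - t0)

    # The classic triad counts 3 * n * 8 bytes (read b, read c, write a); the
    # two separate NumPy kernels above move a little more than that, so this
    # figure slightly understates the machine's true bandwidth.
    bytes_moved = 3 * n * 8
    return bytes_moved / best / 1e9

if __name__ == "__main__":
    print(f"STREAM-triad-style bandwidth: {stream_triad_bandwidth():.1f} GB/s")
```

    IOR and the NAS Parallel Benchmarks exercise I/O and communication/compute in an analogous way; the study compares such numbers across instance types and node counts on both clouds.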

    Driving Innovation During Times of Growth

    As the official coverage provider, the Cornell HR Review covered the keynote and panel discussions at the Human Capital Association’s (HCA) 9th Annual Symposium. The HCA is a student-run organization within Cornell’s Johnson School and School of Industrial and Labor Relations that strives to drive the future of the HR profession through educational and professional development opportunities across the Cornell community. The symposium provides a forum for students, faculty, and corporate executives to explore the various dimensions of human capital issues prevalent in global business. This year’s symposium topic focused on driving innovation proactively through human resources and across organizations as we recover from the economic crisis of the past several years.

    Creating business value from big data and business analytics: organizational, managerial and human resource implications

    This paper reports on a research project, funded by the EPSRC’s NEMODE (New Economic Models in the Digital Economy, Network+) programme, that explores how organizations create value from their increasingly big data and the challenges they face in doing so. Three case studies are reported of large organizations with a formal business analytics group and data volumes that can be considered ‘big’. The case organizations are MobCo, a mobile telecoms operator, MediaCo, a television broadcaster, and CityTrans, a provider of transport services to a major city. Analysis of the cases is structured around a framework in which data and value creation are mediated by the organization’s business analytics capability. This capability is then studied through a sociotechnical lens of organization/management, process, people, and technology. From the cases, twenty key findings are identified. In the area of data and value creation these are: 1. Ensure data quality; 2. Build trust and permissions platforms; 3. Provide adequate anonymization; 4. Share value with data originators; 5. Create value through data partnerships; 6. Create public as well as private value; 7. Monitor and plan for changes in legislation and regulation. In organization and management: 8. Build a corporate analytics strategy; 9. Plan for organizational and cultural change; 10. Build deep domain knowledge; 11. Structure the analytics team carefully; 12. Partner with academic institutions; 13. Create an ethics approval process; 14. Make analytics projects agile; 15. Explore and exploit in analytics projects. In technology: 16. Use visualization as story-telling; 17. Be agnostic about technology while the landscape is uncertain (i.e., maintain a focus on value). In people and tools: 18. Recognize the data scientist’s personal attributes (curious, problem-focused); 19. Treat the data scientist as a ‘bricoleur’; 20. Acquire and retain data scientists through challenging work. With regard to what organizations should do if they want to create value from their data, the paper further proposes a model of the analytics ecosystem that places the business analytics function in a broad organizational context, and a process model for analytics implementation together with a six-stage maturity model.

    Principles in Patterns (PiP): Piloting of C-CAP - Evaluation of Impact and Implications for System and Process Development

    The Principles in Patterns (PiP) project is leading a programme of innovation and development work intended to explore and develop new technology-supported approaches to curriculum design, approval, and review. It is anticipated that such technology-supported approaches can improve the efficacy of curriculum approval processes at higher education (HE) institutions, thereby improving curriculum responsiveness and enabling improved and rapid review mechanisms that may produce enhancements to pedagogy. Curriculum design in HE is a key "teachable moment" and often remains one of the few occasions when academics will plan and structure their intended teaching. Technology-supported curriculum design therefore presents an opportunity for improving academic quality, pedagogy, and learning impact. Approaches that are innovative in their use of technology offer the promise of an interactive curriculum design process in which the designer is offered system assistance to better adhere to pedagogical best practice, is exposed to novel and high-impact learning designs from which to draw inspiration, and benefits from system support to detect common design issues, many of which can delay curriculum approval and distract academic quality teams from monitoring substantive academic issues. This strand of the PiP evaluation (WP7:38) attempts to understand the impact of the PiP Class and Course Approval Pilot (C-CAP) system within specific stakeholder groups and the extent to which C-CAP is considered to support process improvements. Because process improvements and changes were studied largely quantitatively in a previous, related evaluative strand, this strand gathers additional qualitative data to better understand and verify the business process improvements and changes effected by C-CAP. This report therefore summarises the outcome of C-CAP piloting within a University faculty, presents the methodology used for evaluation, and gives the associated analysis and discussion. More generally, this report constitutes an additional evaluative contribution towards a wider understanding of technology-supported approaches to curriculum design and approval in HE institutions and their potential to improve process transparency, efficiency, and effectiveness.