    nelli: a lightweight frontend for MLIR

    Multi-Level Intermediate Representation (MLIR) is a novel compiler infrastructure that aims to provide modular and extensible components to facilitate building domain-specific compilers. However, since MLIR models programs at an intermediate level of abstraction, and most extant frontends sit at a very high level of abstraction, the semantics and mechanics of the fundamental transformations available in MLIR are difficult to investigate and employ on their own. To address these challenges, we have developed nelli, a lightweight, Python-embedded, domain-specific language for generating MLIR code. nelli leverages existing MLIR infrastructure to develop Pythonic syntax and semantics for various MLIR features. We describe nelli's design goals, discuss key details of our implementation, and demonstrate how nelli enables easily defining and lowering compute kernels to diverse hardware platforms.
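
    For illustration, here is a minimal, self-contained Python sketch of the pattern the abstract describes: a Pythonic builder that emits MLIR text for a simple kernel. The Builder class and vector_add function are invented for this example and are not nelli's actual API.

        # Hypothetical sketch, not nelli's real interface: a tiny Python
        # builder that emits MLIR text for an elementwise vector add.
        class Builder:
            def __init__(self):
                self.lines = []

            def emit(self, line, indent=0):
                self.lines.append("  " * indent + line)

            def __str__(self):
                return "\n".join(self.lines)

        def vector_add(n):
            t = f"memref<{n}xf32>"
            b = Builder()
            b.emit(f"func.func @vadd(%a: {t}, %b: {t}, %out: {t}) {{")
            b.emit("%c0 = arith.constant 0 : index", 1)
            b.emit("%c1 = arith.constant 1 : index", 1)
            b.emit(f"%cn = arith.constant {n} : index", 1)
            b.emit("scf.for %i = %c0 to %cn step %c1 {", 1)
            b.emit(f"%x = memref.load %a[%i] : {t}", 2)
            b.emit(f"%y = memref.load %b[%i] : {t}", 2)
            b.emit("%z = arith.addf %x, %y : f32", 2)
            b.emit(f"memref.store %z, %out[%i] : {t}", 2)
            b.emit("}", 1)
            b.emit("return", 1)
            b.emit("}")
            return str(b)

        # Prints valid MLIR using the func, arith, scf, and memref dialects.
        print(vector_add(16))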

    Reputation Description and Interpretation

    Reputation is an opinion held by others about a particular person, group, organisation, or resource. As a tool, reputation can be used to forecast the reliability of others based on their previous actions; in some domains it can even be used to estimate trustworthiness. Due to the large scale of virtual communities, it is impossible to maintain a meaningful relationship with every member. Reputation systems are designed explicitly to manufacture trust within a virtual community by recording and sharing information regarding past interactions. Reputation systems are becoming increasingly popular and widespread, with the information generated varying considerably between domains. Currently, no formal method to exchange reputation information exists. However, the OpenRep framework, currently under development, is designed to federate reputation information, enabling the transparent exchange of information between reputation systems. This thesis presents a reputation description and interpretation system, designed as a foundation for the OpenRep framework. The system focuses on enabling the consistent and reliable expression and interpretation of reputation information across heterogeneous reputation systems, and includes a strongly typed language, a verification system to validate usage of the language, and an XML-based exchange protocol. In addition to these contributions, three case studies are presented as a means of generating requirements for the description and interpretation system and evaluating the use of the proposed system in a federated reputation environment: an electronic auction, a virtual community, and a social-network-based relationship management service.
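
    As a concrete illustration of what an XML-based exchange message might look like, the following Python sketch serializes a hypothetical reputation statement; the element names and attributes are invented for this example and are not the OpenRep schema.

        # Hypothetical reputation-exchange message; the schema below is
        # illustrative, not OpenRep's actual format.
        import xml.etree.ElementTree as ET

        def reputation_statement(subject, source, score, domain):
            stmt = ET.Element("reputationStatement")
            ET.SubElement(stmt, "subject").text = subject
            ET.SubElement(stmt, "source").text = source
            # A typed score: the domain and scale attributes let a receiving
            # system interpret the value consistently.
            rating = ET.SubElement(stmt, "score", domain=domain, scale="0-1")
            rating.text = f"{score:.2f}"
            return ET.tostring(stmt, encoding="unicode")

        print(reputation_statement("seller42", "auction.example.org", 0.87, "e-auction"))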

    Improving the Performance of Cloud-based Scientific Services

    Cloud computing provides access to a large-scale set of readily available computing resources at the click of a button. The cloud paradigm has commoditised computing capacity and is often touted as a low-cost model for executing and scaling applications. However, there are significant technical challenges associated with selecting, acquiring, configuring, and managing cloud resources, which can restrict the efficient utilisation of cloud capabilities. Scientific computing is increasingly hosted on cloud infrastructure, in which scientific capabilities are delivered to the broad scientific community via Internet-accessible services. This migration from on-premise to on-demand cloud infrastructure is motivated by the sporadic usage patterns of scientific workloads and the potential cost savings of not having to purchase, operate, and manage compute infrastructure, a task that few scientific users are trained to perform. However, cloud platforms are not an automatic solution. Their flexibility derives from an enormous number of services and configuration options, which in turn result in significant complexity for the user. In fact, naïve cloud usage can result in poor performance and excessive costs, which are then passed directly on to researchers. This thesis presents methods for developing efficient cloud-based scientific services. Three real-world scientific services are analysed and a set of common requirements is derived. To address these requirements, the thesis explores automated and scalable methods for inferring network performance, considers various trade-offs (e.g., cost and performance) when provisioning instances, and profiles application performance, all in heterogeneous and dynamic cloud environments. Specifically, network tomography provides the mechanisms to infer network performance in dynamic and opaque cloud networks; cost-aware automated provisioning approaches enable services to consider, in real time, trade-offs such as cost, performance, and reliability; and automated application profiling allows a huge search space of applications, instance types, and configurations to be analysed to determine resource requirements and application performance. Finally, these contributions are integrated into an extensible and modular cloud provisioning and resource management service called SCRIMP. Cloud-based scientific applications and services can subscribe to SCRIMP to outsource their provisioning, usage, and management of cloud infrastructure. Collectively, the approaches presented in this thesis are shown to provide order-of-magnitude cost savings and significant performance improvements when employed by production scientific services.
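
    To make the cost-aware provisioning trade-off concrete, here is a small self-contained Python sketch (not SCRIMP's actual interface; the catalogue figures are invented) that picks the cheapest instance type whose profiled runtime still meets a deadline.

        # Illustrative only: choose the cheapest instance type that meets a
        # deadline, given profiled per-instance runtimes for a workload.
        CATALOGUE = [
            # (instance type, price in $/hour, profiled runtime in hours)
            ("small",  0.05, 9.0),
            ("medium", 0.10, 4.0),
            ("large",  0.40, 1.5),
        ]

        def cheapest_within_deadline(deadline_hours):
            feasible = [(price * runtime, name)
                        for name, price, runtime in CATALOGUE
                        if runtime <= deadline_hours]
            if not feasible:
                raise ValueError("no instance type meets the deadline")
            total_cost, name = min(feasible)
            return name, total_cost

        # With a 5-hour deadline, 'small' is too slow and 'large' costs more
        # in total, so 'medium' wins at $0.40 overall.
        print(cheapest_within_deadline(5.0))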

    Final Report to Governors from the Joint Study Committee and Scientific Professionals

    The intent of this publication of the Arkansas Water Resources Center is to provide a location where a final report on water research to a funding entity can be archived. The States of Arkansas and Oklahoma signed the Second Statement of Joint Principles and Actions in 2013, forming a governors’-appointed ‘Joint Study Committee’ to oversee the ‘Joint Study’ and make recommendations on the phosphorus criteria in Oklahoma’s Scenic Rivers. This publication retains the original format of the report as submitted to the Governors of Arkansas and Oklahoma.

    The Manufacturing Data and Machine Learning Platform: Enabling Real-time Monitoring and Control of Scientific Experiments via IoT

    IoT devices and sensor networks present new opportunities for measuring, monitoring, and guiding scientific experiments. Sensors, cameras, and instruments can be combined to provide previously unachievable insights into the state of ongoing experiments. However, IoT devices can vary greatly in the type, volume, and velocity of the data they generate, making it challenging to fully realize this potential. Indeed, synergizing diverse IoT data streams in near-real time can require the use of machine learning (ML). In addition, new tools and technologies are required to facilitate the collection, aggregation, and manipulation of sensor data in order to simplify the application of ML models and, in turn, fully realize the utility of IoT devices in laboratories. Here we will demonstrate how the Argonne-developed Manufacturing Data and Machine Learning (MDML) platform can analyze and make use of IoT devices in a manufacturing experiment. MDML is designed to standardize the research and operational environment for advanced data analytics and AI-enabled automated process optimization by providing the infrastructure to integrate AI into cyber-physical systems for in situ analysis. We will show that MDML is capable of processing diverse IoT data streams, using multiple computing resources, and integrating ML models to guide an experiment.
    Comment: Two-page demonstration paper. Accepted to WFIoT202
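
    The following minimal Python sketch illustrates the general pattern (it is not MDML's API): device-specific messages are normalised into a common record, buffered in a sliding window, and handed to a stand-in "model" that decides whether to flag the experiment.

        # Illustrative pattern only, not MDML's actual interface.
        from collections import deque

        WINDOW = deque(maxlen=50)  # sliding window of recent readings

        def normalise(message):
            """Map a device-specific payload onto a common (sensor, value) record."""
            return message["sensor"], float(message["value"])

        def model(window):
            """Stand-in for an ML model: flag a high mean temperature."""
            temps = [v for sensor, v in window if sensor == "temperature"]
            return bool(temps) and sum(temps) / len(temps) > 80.0

        def on_message(message):
            WINDOW.append(normalise(message))
            if model(WINDOW):
                print("alert: steer the experiment")

        for m in [{"sensor": "temperature", "value": "82.5"},
                  {"sensor": "pressure", "value": "1.1"},
                  {"sensor": "temperature", "value": "85.0"}]:
            on_message(m)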

    FAIR principles for AI models, with a practical application for accelerated high energy diffraction microscopy

    A concise and measurable set of FAIR (Findable, Accessible, Interoperable, and Reusable) principles for scientific data is transforming the state of practice for data management and stewardship, supporting and enabling discovery and innovation. Learning from this initiative, and acknowledging the impact of artificial intelligence (AI) on the practice of science and engineering, we introduce a set of practical, concise, and measurable FAIR principles for AI models. We showcase how to create and share FAIR data and AI models within a unified computational framework combining the following elements: the Advanced Photon Source at Argonne National Laboratory, the Materials Data Facility, the Data and Learning Hub for Science, funcX, and the Argonne Leadership Computing Facility (ALCF), in particular the ThetaGPU supercomputer and the SambaNova DataScale system at the ALCF AI Testbed. We describe how this domain-agnostic computational framework may be harnessed to enable autonomous AI-driven discovery.
    Comment: 10 pages, 3 figures. Comments welcome
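
    As a rough illustration of what measurable FAIR principles for an AI model can mean in practice, the sketch below assembles a minimal metadata record; the field names are invented for this example and do not reproduce the paper's actual principles or any facility's schema.

        # Hypothetical FAIR-style metadata record for a trained model; every
        # identifier and URL below is a placeholder, not a real resource.
        import json

        model_card = {
            "findable": {"doi": "10.0000/example-model",
                         "title": "Example diffraction model"},
            "accessible": {"download": "https://example.org/models/example-model",
                           "protocol": "HTTPS"},
            "interoperable": {"format": "ONNX",
                              "input_schema": "float32[1,1,512,512]"},
            "reusable": {"license": "MIT",
                         "training_data_doi": "10.0000/example-dataset"},
        }

        print(json.dumps(model_card, indent=2))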