
    A Flexible and Modular Framework for Implementing Infrastructures for Global Computing

    We present a Java software framework for building infrastructures to support the development of applications for systems where mobility and network awareness are key issues. The framework is particularly useful for developing run-time support for languages oriented towards global computing. It enables platform designers to customize communication protocols and network architectures, and it guarantees transparency of name management and code mobility in distributed environments. The key features are illustrated by means of two simple case studies.
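    The abstract names transparency of name management and code mobility as the framework's key guarantees without detailing a mechanism. The toy Java sketch below, with entirely hypothetical class and method names (it is not the paper's actual API), illustrates only the name-transparency idea: application code dispatches work using a logical node name, and a registry standing in for the framework's run-time support resolves it to a physical endpoint.

    import java.util.HashMap;
    import java.util.Map;

    // Toy sketch (hypothetical, not the paper's API): logical node names are
    // resolved through a registry, so application code never handles physical
    // addresses directly.
    public class MobilitySketch {

        /** A unit of work that can be dispatched to a named node. */
        interface MobileTask {
            void runAt(String nodeEndpoint);
        }

        /** Hypothetical registry mapping logical names to node endpoints. */
        static class NameRegistry {
            private final Map<String, String> endpoints = new HashMap<>();

            void register(String logicalName, String physicalEndpoint) {
                endpoints.put(logicalName, physicalEndpoint);
            }

            String resolve(String logicalName) {
                return endpoints.getOrDefault(logicalName, "unknown");
            }
        }

        public static void main(String[] args) {
            NameRegistry registry = new NameRegistry();
            registry.register("clusterA", "tcp://10.0.0.5:9000"); // illustrative endpoint

            MobileTask task = endpoint -> System.out.println("running task at " + endpoint);

            // Application code refers only to the logical name; the registry resolves it.
            task.runAt(registry.resolve("clusterA"));
        }
    }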

    A Workflow for Fast Evaluation of Mapping Heuristics Targeting Cloud Infrastructures

    Resource allocation is today an integral part of cloud infrastructure management, needed to exploit resources efficiently. Cloud infrastructure centers generally use custom-built heuristics to define resource allocations. An immediate requirement for the management tools of these centers is a fast yet reasonably accurate simulation and evaluation platform for defining the resource allocation of cloud applications. This work proposes a framework that allows users to easily specify mappings for cloud applications described in the AMALTHEA format used in the context of the DreamCloud European project, and to assess the quality of these mappings. The two quality metrics provided by the framework are execution time and energy consumption. Comment: 2nd International Workshop on Dynamic Resource Allocation and Management in Embedded, High Performance and Cloud Computing (DREAMCloud 2016) (arXiv:cs/1601.04675).
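    As a rough illustration of the two quality metrics named above, the sketch below scores a hypothetical task-to-processing-unit mapping under strongly simplified assumptions (fixed cycle counts, fixed frequencies and power draws, tasks on the same unit running back to back); it does not read the AMALTHEA format or use the DreamCloud tooling, and all names and numbers are made up.

    import java.util.Map;

    // Minimal sketch of scoring one candidate mapping: execution time is the
    // busy time of the most loaded unit, energy is the sum over units of
    // busy time multiplied by power draw. All values are hypothetical.
    public class MappingEvaluationSketch {

        record Unit(double freqHz, double powerWatts) {}

        public static void main(String[] args) {
            Map<String, Long> taskCycles =
                    Map.of("t0", 4_000_000L, "t1", 9_000_000L, "t2", 2_500_000L);
            Map<String, Unit> units =
                    Map.of("cpu0", new Unit(1.0e9, 2.0), "cpu1", new Unit(0.8e9, 1.2));
            Map<String, String> mapping =           // candidate mapping: task -> unit
                    Map.of("t0", "cpu0", "t1", "cpu1", "t2", "cpu0");

            double executionTime = 0.0;
            double energy = 0.0;
            for (Map.Entry<String, Unit> u : units.entrySet()) {
                double busy = 0.0;
                for (Map.Entry<String, String> m : mapping.entrySet()) {
                    if (m.getValue().equals(u.getKey())) {
                        busy += taskCycles.get(m.getKey()) / u.getValue().freqHz();
                    }
                }
                executionTime = Math.max(executionTime, busy); // slowest unit bounds the run
                energy += busy * u.getValue().powerWatts();    // joules = seconds * watts
            }
            System.out.printf("execution time: %.4f s, energy: %.4f J%n", executionTime, energy);
        }
    }

    A real evaluation workflow would replace the two maps with the parsed application and platform models and would account for communication and scheduling effects, which this sketch deliberately ignores.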

    Secure, reliable and dynamic access to distributed clinical data

    An abundance of statistical and scientific data exists in the area of clinical and epidemiological studies. Much of this data is distributed across regional, national and international boundaries, with different policies on access and usage, and with a multitude of different schemata for the data, often complicated by the variety of supporting clinical coding schemes. This prevents the wide-scale collation and analysis of such data, which is often needed to infer clinical outcomes and to determine the often moderate effect of drugs. Through grid technologies it is possible to overcome the barriers introduced by the distribution of heterogeneous data and services. However, reliability, dynamicity and fine-grained security are essential in this domain and are not typically offered by current grids. The MRC-funded VOTES project (Virtual Organisations for Trials and Epidemiological Studies) has implemented a prototype infrastructure specifically designed to meet these challenges. This paper describes this ongoing implementation effort and the lessons learned in building grid frameworks for and within a clinical environment.

    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers: 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document and are presented along with introductory material. Comment: 72 pages.

    Challenges for the comprehensive management of cloud services in a PaaS framework

    The 4CaaSt project aims at developing a PaaS framework that enables flexible definition, marketing, deployment and management of Cloud-based services and applications. The major innovations proposed by 4CaaSt are the blueprint and its lifecycle management, a one-stop shop for Cloud services, and PaaS-level resource management featuring elasticity. 4CaaSt also provides a portfolio of ready-to-use Cloud-native services and Cloud-aware immigrant technologies.

    Modular architecture providing convergent and ubiquitous intelligent connectivity for networks beyond 2030

    The transition of networks to support forthcoming beyond-5G (B5G) and 6G services introduces a number of important architectural challenges that force an evolution of existing operational frameworks. Current networks have introduced technical paradigms such as network virtualization, programmability and slicing, a trend known as network softwarization. Forthcoming B5G and 6G services that impose stringent requirements will motivate a new radical change, augmenting those paradigms with the idea of smartness and pursuing an overall optimization of the usage of network and compute resources in a zero-trust environment. This paper presents a modular architecture under the concept of Convergent and UBiquitous Intelligent Connectivity (CUBIC), conceived to facilitate this transition. CUBIC intends to investigate and innovate on the usage, combination and development of novel technologies to accompany the migration of existing networks towards CUBIC solutions, leveraging Artificial Intelligence (AI) mechanisms and Machine Learning (ML) tools in a totally secure environment.

    Hydrological Models as Web Services: An Implementation using OGC Standards

    Presentation for HIC 2012, the 10th International Conference on Hydroinformatics, "Understanding Changing Climate and Environment and Finding Solutions", Hamburg, Germany, July 14-18, 2012.

    Agent-based techniques for National Infrastructure Simulation

    Thesis (S.M.) by Kenny Lin, Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2002. Includes bibliographical references (leaves 35-37).
    Modern society is dependent upon its networks of infrastructure. These networks have grown in size and complexity to become interdependent, creating within them hidden vulnerabilities. The critical nature of these infrastructures has led to the establishment of the National Infrastructure Simulation and Analysis Center (NISAC) by the United States Government. The goal of NISAC is to provide the simulation capability to understand infrastructure interdependencies, detect vulnerabilities, and provide infrastructure planning and crisis response assistance. This thesis examines recent techniques for simulation and analyzes their suitability for the national infrastructure simulation problem. Variable-based and agent-based simulation models are described and compared. The bottom-up approach of the agent-based model is found to be more suitable than the top-down approach of the variable-based model. Supercomputer and distributed (grid) computing solutions are explored. Both are found to be valid solutions and to have complementary strengths. Software architectures for implementation, such as the traditional object-oriented approach and the web service model, are examined. Solutions to meet NISAC objectives using the agent-based simulation model implemented with web services and a combination of hardware configurations are proposed.
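    To make the thesis's contrast between the top-down variable-based model and the bottom-up agent-based model concrete, the minimal sketch below (with hypothetical component names, dependencies and a deliberately crude failure rule) lets a failure propagate through locally defined agents, so the interdependency effect emerges from per-agent rules rather than from a system-level equation.

    import java.util.List;

    // Illustrative bottom-up sketch: each infrastructure component is an agent
    // that updates its own state from the state of the components it depends on.
    // Topology and rules are hypothetical, not taken from the thesis.
    public class InfrastructureAgentsSketch {

        static class Agent {
            final String name;
            final List<Integer> dependsOn; // indices of upstream agents
            boolean operational = true;

            Agent(String name, List<Integer> dependsOn) {
                this.name = name;
                this.dependsOn = dependsOn;
            }

            /** Local rule: fail if any upstream dependency has failed. */
            void step(List<Agent> all) {
                for (int i : dependsOn) {
                    if (!all.get(i).operational) {
                        operational = false;
                    }
                }
            }
        }

        public static void main(String[] args) {
            List<Agent> grid = List.of(
                    new Agent("power-plant", List.of()),
                    new Agent("substation", List.of(0)),
                    new Agent("telecom-switch", List.of(1)),
                    new Agent("water-pumping", List.of(1, 2)));

            grid.get(0).operational = false; // inject a failure at the source

            // The cascade advances at most one hop per tick, so grid.size() ticks
            // are enough for it to settle.
            for (int tick = 0; tick < grid.size(); tick++) {
                grid.forEach(a -> a.step(grid));
            }
            grid.forEach(a -> System.out.println(a.name + ": " + (a.operational ? "up" : "down")));
        }
    }

    In an actual NISAC-scale simulation the agents would be far richer and would run distributed, for example behind web service interfaces as the thesis proposes, but the bottom-up structure is the same.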