
    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the set of user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable was produced in parallel with the user consultation carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can configure to behave as software routers or end nodes, onto which they can deploy the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
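
    The deliverable itself defines the slice taxonomy; purely as an illustration of the idea, the Python sketch below models a slice as a set of virtual instances carved out of physical resources. All class and attribute names are hypothetical, not FEDERICA nomenclature.

    from dataclasses import dataclass, field

    @dataclass
    class PhysicalResource:
        name: str          # a substrate element, e.g. a switch, link or server
        capacity: float    # total capacity in some unit (Gb/s, vCPUs, ...)
        allocated: float = 0.0

        def carve(self, amount: float) -> "VirtualResource":
            # Reserve a partition (virtual instance) of this physical resource.
            if self.allocated + amount > self.capacity:
                raise ValueError(f"{self.name}: not enough free capacity")
            self.allocated += amount
            return VirtualResource(substrate=self, share=amount)

    @dataclass
    class VirtualResource:
        substrate: PhysicalResource
        share: float

    @dataclass
    class Slice:
        # What the user sees as a real network under his/her domain:
        # in fact a set of virtual instances mapped onto the substrate.
        owner: str
        resources: list = field(default_factory=list)

        def add(self, physical: PhysicalResource, amount: float) -> None:
            self.resources.append(physical.carve(amount))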

    Embedding Requirements within the Model Driven Architecture

    The Model Driven Architecture (MDA) brings benefits to software development, among them the potential for connecting software models with the business domain. This paper focuses on the upstream, or Computation Independent Model (CIM), phase of the MDA. Our contention is that, whilst there are many models and notations available within the CIM phase, those that are currently popular and supported by the Object Management Group (OMG) may not be the most useful notations for business analysts, nor sufficient to fully support software requirements and specification. Therefore, with specific emphasis on the value of the Business Process Modelling Notation (BPMN) for business analysts, this paper provides an example of a typical CIM approach before describing an approach that incorporates specific requirements techniques. A framework extension to the MDA is then introduced, which embeds requirements and specification within the CIM, thus further enhancing the utility of MDA by providing a more complete method for business analysis.
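
    As a rough illustration of what embedding requirements within the CIM could mean in practice, the sketch below keeps explicit trace links between requirement records and the BPMN elements that realise them. The names are invented for this example and are not part of the OMG specifications or the paper's framework.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Requirement:
        req_id: str
        text: str

    @dataclass(frozen=True)
    class BpmnElement:
        element_id: str
        kind: str  # e.g. "task", "gateway", "event"

    # Trace matrix: which CIM-level process elements realise which requirements.
    trace: dict[str, list] = {}

    def link(req: Requirement, elem: BpmnElement) -> None:
        trace.setdefault(req.req_id, []).append(elem)

    r1 = Requirement("R1", "Orders over 500 EUR require manager approval")
    link(r1, BpmnElement("approve_order", "task"))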

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters systems, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, by definition MTC applications are structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O, and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
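
    A minimal sketch of the task structure described above: discrete tasks whose explicit input/output dependencies form a directed acyclic graph, dispatched in dependency order. This is a generic illustration, not code from the report.

    # Each task maps to the set of tasks it depends on (the graph edges).
    deps = {
        "fetch_a": set(),
        "fetch_b": set(),
        "transform": {"fetch_a", "fetch_b"},
        "analyze": {"transform"},
    }

    def dispatch_order(deps):
        # Repeatedly dispatch every task whose dependencies are all done.
        remaining = {t: set(d) for t, d in deps.items()}
        done, order = set(), []
        while remaining:
            ready = [t for t, d in remaining.items() if d <= done]
            if not ready:
                raise ValueError("cycle in task graph")
            for t in ready:
                order.append(t)
                done.add(t)
                del remaining[t]
        return order

    print(dispatch_order(deps))  # ['fetch_a', 'fetch_b', 'transform', 'analyze']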

    Sharing a conceptual model of grid resources and services

    Grid technologies aim at enabling coordinated resource-sharing and problem-solving capabilities over local and wide area networks, spanning locations, organizations, machine architectures and software boundaries. The heterogeneity of the involved resources and the need for interoperability among different grid middlewares require the sharing of a common information model. Abstractions of different flavors of resources and services, and conceptual schemas of domain-specific entities, require a collaborative effort in order to enable coherent cooperation among information services. With this paper, we present the result of our experience in grid resource and service modelling carried out within the Grid Laboratory Uniform Environment (GLUE) effort, a collaboration of US and EU High Energy Physics projects working towards grid interoperability. The first implementation-neutral agreements are presented, covering services such as batch computing and the storage manager, and resources such as the cluster/sub-cluster/host hierarchy and the storage library. Design guidelines and operational results are described, together with open issues and future evolutions.
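
    As an illustration of the kind of hierarchy GLUE standardises (cluster, sub-cluster, host), the sketch below encodes it as simple Python classes. The attribute names are simplified placeholders rather than the actual GLUE schema attributes.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Host:
        name: str
        cpu_count: int
        memory_mb: int

    @dataclass
    class SubCluster:
        # A homogeneous group of hosts within a cluster.
        name: str
        hosts: List[Host] = field(default_factory=list)

    @dataclass
    class Cluster:
        name: str
        subclusters: List[SubCluster] = field(default_factory=list)

        def total_cpus(self) -> int:
            # Aggregate a cluster-level attribute from the host level.
            return sum(h.cpu_count for sc in self.subclusters for h in sc.hosts)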

    Algorithms for advance bandwidth reservation in media production networks

    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
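
    To make the approach concrete, here is a toy advance-reservation ILP on a single link, written with the PuLP library: binary variables decide which requests to admit, subject to the link capacity in every time slot. This is a simplified stand-in, not the paper's formulation.

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum

    capacity = 10          # link capacity, e.g. in Gb/s
    horizon = 8            # number of time slots in the scheduling window
    requests = [           # (start_slot, end_slot, bandwidth), known in advance
        (0, 4, 6),
        (2, 6, 5),
        (4, 8, 7),
    ]

    prob = LpProblem("advance_reservation", LpMaximize)
    accept = [LpVariable(f"accept_{i}", cat="Binary") for i in range(len(requests))]

    # Objective: maximise the total admitted bandwidth-time.
    prob += lpSum(a * bw * (end - start)
                  for a, (start, end, bw) in zip(accept, requests))

    # Capacity constraint for every time slot in the window.
    for t in range(horizon):
        prob += lpSum(a * bw for a, (start, end, bw) in zip(accept, requests)
                      if start <= t < end) <= capacity

    prob.solve()
    print([int(a.value()) for a in accept])  # which requests were admitted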

    Dynamic Model-based Management of Service-Oriented Infrastructure

    Models are an effective tool for systems and software design. They allow software architects to abstract from non-relevant details. Those qualities are also useful for the technical management of networks, systems and software, such as those that compose service-oriented architectures. Models can provide a set of well-defined abstractions over the distributed heterogeneous service infrastructure that enable its automated management. We propose to use the managed system as a source of dynamically generated runtime models, and to decompose management processes into a composition of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes, and transforms system models to apply the required actions, while remaining oblivious to the low-level details. An instrumentation layer automatically builds these models and translates the planned management actions into operations on the system. We illustrate these concepts with a distributed service update operation.
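
    A hypothetical sketch of the core idea, management as a composition of transformations over a runtime model; every name here is illustrative, not the paper's architecture.

    from typing import Callable, Dict, List

    Model = Dict[str, str]                 # toy runtime model: service -> version
    Transformation = Callable[[Model], Model]

    def plan_update(target_version: str) -> Transformation:
        # A transformation that yields the desired post-update model.
        def transform(model: Model) -> Model:
            return {svc: target_version for svc in model}
        return transform

    def manage(model: Model, pipeline: List[Transformation]) -> Model:
        # A management process as a composition of model transformations.
        for step in pipeline:
            model = step(model)
        return model

    runtime_model = {"billing": "1.2", "catalog": "1.2"}  # built by instrumentation
    desired = manage(runtime_model, [plan_update("1.3")])
    # The instrumentation layer would diff runtime_model against desired
    # and carry out the resulting actions on the actual system.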

    Scenarios for the development of smart grids in the UK: literature review

    Smart grids are expected to play a central role in any transition to a low-carbon energy future, and much research is currently underway on practically every area of smart grids. However, it is evident that even basic aspects, such as theoretical and operational definitions, have yet to be agreed upon and clearly defined. Some aspects (efficient management of supply, including intermittent supply; two-way communication between the producer and user of electricity; use of IT technology to respond to and manage demand; and ensuring safe and secure electricity distribution) are more commonly accepted than others (such as smart meters) in defining what comprises a smart grid. It is clear that smart grid developments enjoy political and financial support at both UK and EU levels, and from the majority of related industries. The reasons for this vary and include the hope that smart grids will facilitate the achievement of carbon reduction targets, create new employment opportunities, and reduce costs related to energy generation (fewer power stations) and distribution (fewer losses and better stability). However, smart grid development depends on additional factors beyond the energy industry. These relate to issues of public acceptability of the relevant technologies and associated risks (e.g. data safety, privacy, cyber security), pricing, competition, and regulation, implying the involvement of a wide range of players such as industry, regulators and consumers. The above constitute a complex set of variables and actors, and interactions between them. In order to explore possible ways of deploying smart grids, the use of scenarios is particularly appropriate, as they can incorporate several parameters and variables into a coherent storyline. Scenarios have been used previously in the context of smart grids, but have traditionally focused on factors such as economic growth or policy evolution. Important additional socio-technical aspects of smart grids emerge from the literature review in this report and therefore need to be incorporated in our scenarios. These can be grouped into four (interlinked) main categories: supply-side aspects, demand-side aspects, policy and regulation, and technical aspects.

    Measuring the Use of the Active and Assisted Living Prototype CARIMO for Home Care Service Users: Evaluation Framework and Results

    To address the challenges of aging societies, various information and communication technology (ICT)-based systems for older people have been developed in recent years. Currently, the evaluation of these so-called active and assisted living (AAL) systems usually focuses on analyses of usability and acceptance, while some also assess their impact. Little is known about the actual take-up of these assistive technologies. This paper presents a framework for measuring take-up by analyzing the actual usage of AAL systems. This evaluation framework covers detailed information regarding the entire process, including usage data logging, data preparation, and usage data analysis. We applied the framework to the AAL prototype CARIMO to measure its take-up during an eight-month field trial in Austria and Italy. The framework was designed to guide systematic, comparable, and reproducible usage data evaluation in the AAL field; however, the general applicability of the framework has yet to be validated.
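
    As an illustration of the usage-data pipeline the framework covers (logging, preparation, analysis), the sketch below derives a simple per-user activity measure from an event log. Field names and the metric are invented for this example, not taken from CARIMO.

    import csv
    from collections import defaultdict

    def active_days_per_user(log_path: str) -> dict:
        # Count distinct active days per user from an event-log CSV with
        # columns: user_id, timestamp (ISO 8601), feature.
        days = defaultdict(set)
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                days[row["user_id"]].add(row["timestamp"][:10])  # date part
        return {user: len(d) for user, d in days.items()}

    # Take-up could then be reported as active days over trial length, e.g.
    # takeup = {u: n / trial_days for u, n in active_days_per_user("log.csv").items()}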