
    DISCO: Distributed Multi-domain SDN Controllers

    Modern multi-domain networks now span datacenter networks, enterprise networks, customer sites and mobile entities. Such networks are critical and must therefore be resilient, scalable and easily extensible. The emergence of Software-Defined Networking (SDN) protocols, which make it possible to decouple the data plane from the control plane and to program the network dynamically, opens up new ways to architect such networks. In this paper, we propose DISCO, an open and extensible DIstributed SDN COntrol plane able to cope with the distributed and heterogeneous nature of modern overlay networks and wide area networks. DISCO controllers manage their own network domains and communicate with each other to provide end-to-end network services. This communication is based on a unique lightweight and highly manageable control channel used by agents to self-adaptively share aggregated network-wide information. We implemented DISCO on top of the Floodlight OpenFlow controller and the AMQP protocol. We demonstrated how DISCO's control plane dynamically adapts to heterogeneous network topologies while being resilient enough to survive disruptions and attacks and to provide classic functionalities such as end-point migration and network-wide traffic engineering. The experimental results we present are organized around three use cases: inter-domain topology disruption, end-to-end priority service requests and virtual machine migration.
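
    DISCO itself is implemented in Java on Floodlight, so the following is only a minimal Python sketch, using the pika AMQP client, of the general pattern the abstract describes: a per-domain agent publishing aggregated network-wide summaries over a lightweight control channel and consuming its peers' updates. The exchange name, message fields and values are illustrative assumptions, not DISCO's actual wire format.

```python
# Illustrative sketch only: how a per-domain agent might share aggregated
# network-wide state over an AMQP control channel. Exchange and message
# fields are assumptions, not DISCO's actual protocol.
import json
import pika

DOMAIN_ID = "domain-A"  # hypothetical local domain name

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# One fanout exchange per information type (here: aggregated reachability).
channel.exchange_declare(exchange="disco.reachability", exchange_type="fanout")

def publish_summary(prefixes, capacity_mbps):
    """Advertise an aggregated view of the local domain to peer controllers."""
    message = {
        "domain": DOMAIN_ID,
        "prefixes": prefixes,           # e.g. ["10.1.0.0/16"]
        "capacity_mbps": capacity_mbps, # residual inter-domain capacity estimate
    }
    channel.basic_publish(
        exchange="disco.reachability",
        routing_key="",
        body=json.dumps(message),
    )

def on_peer_update(ch, method, properties, body):
    """Consume summaries advertised by peer domain controllers."""
    update = json.loads(body)
    if update["domain"] != DOMAIN_ID:
        print(f"peer {update['domain']} reachable prefixes: {update['prefixes']}")

queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(exchange="disco.reachability", queue=queue)
channel.basic_consume(queue=queue, on_message_callback=on_peer_update, auto_ack=True)

publish_summary(["10.1.0.0/16"], capacity_mbps=800)
channel.start_consuming()
```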

    Data as a Service (DaaS) for sharing and processing of large data collections in the cloud

    Data as a Service (DaaS) is among the latest kinds of services being investigated in the Cloud computing community. The main aim of DaaS is to overcome the limitations of state-of-the-art approaches in data technologies, according to which data is stored in and accessed from repositories whose location is known and is relevant for sharing and processing. Besides limiting data sharing, current approaches also fail to fully separate/decouple software services from data and thus impose limitations on interoperability. In this paper we propose a DaaS approach for intelligent sharing and processing of large data collections, with the aim of abstracting the data location (making it irrelevant to the needs of sharing and accessing) and of fully decoupling the data from its processing. Our goal is to build a Cloud computing platform offering DaaS to support large communities of users that need to share, access, and process data in order to collectively build knowledge from it. We exemplify the approach with large data collections from the health and biology domains.
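
    A minimal sketch of the location-abstraction idea described above, assuming a hypothetical catalogue-backed service: clients name a data collection by a logical identifier and never learn where its replicas live, so processing stays decoupled from storage. All class, method and dataset names are invented for illustration, not an API from the paper.

```python
# Minimal sketch of the location-abstraction idea: clients name data by a
# logical identifier and never see where it is stored. Names are hypothetical.
from abc import ABC, abstractmethod

class DataService(ABC):
    """Logical access point for a data collection, independent of location."""

    @abstractmethod
    def fetch(self, dataset_id: str) -> bytes:
        ...

class CloudDataService(DataService):
    def __init__(self, catalogue: dict):
        # catalogue maps logical dataset ids to physical replica locations,
        # e.g. {"genome-042": "s3://bucket-eu/genome-042"}; clients never
        # consult it directly.
        self._catalogue = catalogue

    def fetch(self, dataset_id: str) -> bytes:
        location = self._catalogue[dataset_id]   # resolved inside the service
        return self._read_replica(location)

    def _read_replica(self, location: str) -> bytes:
        # Placeholder transport: a real platform would dispatch on the URI
        # scheme (object store, grid storage element, local cache, ...).
        return f"<contents of {location}>".encode()

# A processing service receives only the logical id, so data and processing
# stay decoupled and replicas can move without breaking clients.
service = CloudDataService({"genome-042": "s3://bucket-eu/genome-042"})
payload = service.fetch("genome-042")
print(payload)
```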

    High-Throughput Computing on High-Performance Platforms: A Case Study

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan---a DOE leadership facility---in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next-generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons on how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.
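
    Pilot-style workload managers such as PanDA typically integrate with machines like Titan through a pull model: workers occupying an allocation fetch payloads from a shared queue until it drains, rather than having work pushed to them. The toy Python sketch below illustrates that pattern with an in-process queue; it is a generic illustration under that assumption, not code from the ATLAS or PanDA stack.

```python
# Toy illustration of the pull-style execution model behind pilot systems:
# workers launched on a resource claim payloads from a shared queue until it
# drains. Generic sketch only; payload names are made up.
import queue
import threading

task_queue = queue.Queue()
for event_range in range(8):
    task_queue.put(f"simulate-events-{event_range}")  # hypothetical payload ids

def pilot(worker_id: int) -> None:
    """A pilot occupies a slot (e.g. a backfill allocation) and pulls work."""
    while True:
        try:
            payload = task_queue.get_nowait()
        except queue.Empty:
            return  # no work left: release the slot instead of idling
        print(f"pilot {worker_id} running {payload}")
        task_queue.task_done()

threads = [threading.Thread(target=pilot, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
task_queue.join()
```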

    Commercial-off-the-shelf simulation package interoperability: Issues and futures

    Commercial-Off-The-Shelf Simulation Packages (CSPs) are widely used in industry to simulate discrete-event models. Interoperability of CSPs requires the use of distributed simulation techniques. The literature presents many examples of achieving CSP interoperability using bespoke solutions. However, for the wider adoption of CSP-based distributed simulation it is essential that, first and foremost, a standard for CSP interoperability be created and, secondly, that this standard be adhered to by CSP vendors. This advanced tutorial concerns an emerging standard relating to CSP interoperability. It gives an overview of the standard and presents case studies that implement some of its proposals. Furthermore, interoperability is discussed in relation to large and complex models developed using CSPs that require large amounts of computing resources. It is hoped that this tutorial will inform the simulation community of the issues associated with CSP interoperability, the importance of these standards, and their future.
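
    A central concern in distributed simulation, and hence in CSP interoperability, is time synchronisation: no package may process an event earlier than a message another package could still send it. The sketch below illustrates that conservative time-advance idea with a toy coordinator that grants every simulator the minimum requested time. It is a generic illustration only, not an implementation of the HLA-based CSP interoperability standards; all federate and event names are invented.

```python
# Generic sketch of conservative time coordination between interoperating
# simulation packages: no federate advances past the smallest requested time,
# so cross-package events cannot arrive in a federate's past.
class Federate:
    def __init__(self, name: str, event_times):
        self.name = name
        self.pending = sorted(event_times)  # local event list (simulation time)
        self.clock = 0.0

    def next_request(self) -> float:
        return self.pending[0] if self.pending else float("inf")

    def advance_to(self, grant: float) -> None:
        self.clock = grant
        while self.pending and self.pending[0] <= grant:
            t = self.pending.pop(0)
            print(f"{self.name}: event at t={t}")

def coordinate(federates) -> None:
    while True:
        requests = [f.next_request() for f in federates]
        grant = min(requests)          # lower bound on any future event
        if grant == float("inf"):
            break                      # all event lists drained
        for f in federates:
            f.advance_to(grant)

coordinate([
    Federate("assembly-line", [1.0, 4.0, 9.0]),
    Federate("warehouse", [2.5, 4.0, 6.0]),
])
```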

    Development of grid frameworks for clinical trials and epidemiological studies

    E-Health initiatives such as electronic clinical trials and epidemiological studies require access to and usage of a range of both clinical and other data sets. Such data sets are typically only available across many heterogeneous domains where a plethora of often legacy-based or in-house/bespoke IT solutions exist. Considerable efforts and investments are being made across the UK to upgrade the IT infrastructures of the National Health Service (NHS), such as the National Programme for IT in the NHS (NPfIT) [1]. However, currently independent and largely non-interoperable IT solutions exist across hospitals, trusts, disease registries and GP practices; this includes security as well as more general compute and data infrastructures. Grid technology allows issues of distribution and heterogeneity to be overcome; however, the clinical trials domain places special demands on security and data which the Grid community has hitherto not satisfactorily addressed. These challenges are often common across many studies and trials, hence the development of a re-usable framework for the creation and subsequent management of such infrastructures is highly desirable. In this paper we present the challenges in developing such a framework and outline initial scenarios and prototypes developed within the MRC-funded Virtual Organisations for Trials and Epidemiological Studies (VOTES) project [2].
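
    One recurring requirement in such a framework is making per-study, per-role authorisation decisions before clinical data crosses administrative domains. The sketch below shows a generic attribute-based check of that kind; the roles, study identifiers and policy rules are hypothetical and do not reflect the actual VOTES design.

```python
# Generic illustration of per-study, per-role authorisation before releasing
# clinical data across domains. Roles, studies and rules are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Credential:
    user: str
    home_domain: str      # issuing site, e.g. a trust or disease registry
    role: str             # e.g. "trial-statistician", "recruiting-nurse"
    study_id: str         # the study the assertion was issued for

# Policy: which roles may see which views of a study's data.
POLICY = {
    ("DEMO-TRIAL", "trial-statistician"): {"anonymised-outcomes"},
    ("DEMO-TRIAL", "recruiting-nurse"): {"eligibility-fields"},
}

def authorise(cred: Credential, study_id: str, view: str) -> bool:
    """Grant access only if the credential was issued for this study and the
    role is allowed to see the requested view."""
    if cred.study_id != study_id:
        return False
    return view in POLICY.get((study_id, cred.role), set())

cred = Credential("dr_smith", "glasgow-trust", "trial-statistician", "DEMO-TRIAL")
print(authorise(cred, "DEMO-TRIAL", "anonymised-outcomes"))  # True
print(authorise(cred, "DEMO-TRIAL", "eligibility-fields"))   # False
```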

    Cloudbus Toolkit for Market-Oriented Cloud Computing

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing an SDK (Software Development Kit) for the construction of Cloud applications and their deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of third-party Cloud brokering services for building content delivery networks and e-Science applications and deploying them on the capabilities of IaaS providers such as Amazon, along with Grid mashups; (iv) CloudSim, supporting modelling and simulation of Clouds for performance studies; (v) energy-efficient resource allocation mechanisms and techniques for the creation and management of Green Clouds; and (vi) pathways for future research.
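
    The market-oriented resource management theme running through items (1)-(3) can be illustrated with a toy SLA-driven broker decision: choose the cheapest resource offer that still meets a deadline and budget. The Python sketch below is a made-up example of that idea only; it is not the API of Aneka, CloudSim or any other Cloudbus component.

```python
# Toy sketch of SLA-oriented, market-based resource selection: pick the
# cheapest offer that can still meet the customer's deadline and budget.
# Offers and SLA fields are invented for illustration.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Offer:
    provider: str
    cores: int
    price_per_core_hour: float

@dataclass
class SLA:
    core_hours: float      # total work the application needs
    deadline_hours: float  # wall-clock time the customer will accept
    budget: float          # maximum spend

def select(offers: List[Offer], sla: SLA) -> Optional[Offer]:
    feasible = []
    for o in offers:
        runtime = sla.core_hours / o.cores
        cost = sla.core_hours * o.price_per_core_hour
        if runtime <= sla.deadline_hours and cost <= sla.budget:
            feasible.append((cost, o))
    if not feasible:
        return None
    return min(feasible, key=lambda pair: pair[0])[1]  # cheapest feasible offer

offers = [
    Offer("public-cloud-east", cores=64, price_per_core_hour=0.06),
    Offer("private-cluster", cores=16, price_per_core_hour=0.02),
]
best = select(offers, SLA(core_hours=320, deadline_hours=8, budget=25))
print(best.provider if best else "no offer meets the SLA")
```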

    A JSON Token-Based Authentication and Access Management Schema for Cloud SaaS Applications

    Cloud computing is significantly reshaping the computing industry, built around core concepts such as virtualization, processing power, connectivity and elasticity to store and share IT resources via a broad network. It has emerged as the key technology that unleashes the potency of Big Data, Internet of Things, Mobile and Web Applications, and other related technologies, but it also comes with its challenges, such as governance, security, and privacy. This paper focuses on the security and privacy challenges of cloud computing, with specific reference to user authentication and access management for cloud SaaS applications. The suggested model uses a framework that harnesses the stateless and secure nature of JWT for client authentication and session management. Furthermore, authorized access to protected cloud SaaS resources is efficiently managed. Accordingly, a Policy Match Gate (PMG) component and a Policy Activity Monitor (PAM) component have been introduced. In addition, other subcomponents such as a Policy Validation Unit (PVU) and a Policy Proxy DB (PPDB) have also been established for optimized service delivery. A theoretical analysis of the proposed model portrays a system that is secure, lightweight and highly scalable for improved cloud resource security and management.
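
    As a concrete illustration of the token-handling side of such a schema, the sketch below issues and validates a JWT with the PyJWT library and adds a toy role/permission lookup standing in for a Policy Match Gate-style check. The secret, claims and policy table are assumptions made for the example; the paper defines the actual components and flows.

```python
# Minimal sketch of JWT-based authentication plus a toy policy check standing
# in for a PMG-style component. Secret, claims and the policy table are
# illustrative assumptions, not the schema from the paper.
import datetime
import jwt  # PyJWT

SECRET = "replace-with-a-strong-key"  # in practice: per-tenant, rotated, kept in a vault

def issue_token(user_id: str, tenant: str, roles) -> str:
    claims = {
        "sub": user_id,
        "tenant": tenant,
        "roles": roles,
        "exp": datetime.datetime.now(datetime.timezone.utc)
               + datetime.timedelta(minutes=30),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

# Hypothetical policy table: which roles may perform which action per resource.
POLICY = {("reports", "read"): {"analyst", "admin"}, ("reports", "delete"): {"admin"}}

def authorise(token: str, resource: str, action: str) -> bool:
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])  # also checks exp
    except jwt.InvalidTokenError:
        return False
    allowed_roles = POLICY.get((resource, action), set())
    return bool(allowed_roles & set(claims.get("roles", [])))

token = issue_token("alice", "tenant-42", ["analyst"])
print(authorise(token, "reports", "read"))    # True
print(authorise(token, "reports", "delete"))  # False
```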