
    Hardware as a service - enabling dynamic, user-level bare metal provisioning of pools of data center resources.

    We describe a “Hardware as a Service (HaaS)” tool for isolating pools of compute, storage, and networking resources. The goal of HaaS is to enable dynamic and flexible, user-level provisioning of pools of resources at the so-called “bare-metal” layer. It allows experimental or untrusted services to co-exist alongside trusted services. Because HaaS functions only as a resource isolation system, users are free to choose between different scheduling and provisioning systems and to manage isolated resources as they see fit. We describe key HaaS use cases and features. We show how HaaS can provide a valuable, and somewhat overlooked, layer in the software architecture of modern data center management. Documentation and source code for the HaaS software are available at https://github.com/CCI-MOC/haas. Partial support for this work was provided by the MassTech Collaborative Research Matching Grant Program, National Science Foundation award #1347525, and several commercial partners of the Mass Open Cloud, who may be found at http://www.massopencloud.org. http://www.ieee-hpec.org/2014/CD/index_htm_files/FinalPapers/116.pd
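    The abstract describes HaaS purely as an isolation layer: it tracks which bare-metal nodes belong to which pool and leaves scheduling and provisioning to the pool owner. The sketch below is only an illustration of that bookkeeping idea; it is not the actual HaaS/HIL API, and every class and method name in it is hypothetical.

```python
# Hypothetical sketch of bare-metal pool isolation bookkeeping.
# This is NOT the real HaaS/HIL API; all names are invented for illustration.

class ResourcePool:
    """A named, isolated pool of bare-metal nodes owned by one project."""

    def __init__(self, owner: str):
        self.owner = owner
        self.nodes: set[str] = set()


class PoolRegistry:
    """Tracks free nodes and which pool each allocated node belongs to."""

    def __init__(self, all_nodes):
        self.free = set(all_nodes)
        self.pools: dict[str, ResourcePool] = {}

    def create_pool(self, name: str, owner: str) -> None:
        self.pools[name] = ResourcePool(owner)

    def allocate(self, pool_name: str, node: str) -> None:
        # Isolation invariant: a node belongs to at most one pool at a time.
        if node not in self.free:
            raise ValueError(f"node {node} is not free")
        self.free.remove(node)
        self.pools[pool_name].nodes.add(node)

    def release(self, pool_name: str, node: str) -> None:
        self.pools[pool_name].nodes.discard(node)
        self.free.add(node)


# Usage: the registry only isolates resources; how nodes are imaged,
# scheduled, or provisioned is left entirely to the pool owner.
registry = PoolRegistry(["node1", "node2", "node3"])
registry.create_pool("experiment-a", owner="alice")
registry.allocate("experiment-a", "node1")
```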

    Peer-to-Peer Based Trading and File Distribution for Cloud Computing

    In this dissertation we take a peer-to-peer approach to deal with two specific issues, fair trading and file distribution, arising from data management for cloud computing. In a mobile cloud computing environment, cloud providers may collaborate with each other and essentially organize some dedicated resources as a peer-to-peer sharing system. One well-known problem in such peer-to-peer systems with exchange of resources is free riding. Providing incentives for peers to contribute to the system is an important issue in peer-to-peer systems. We design a reputation-based fair trading mechanism that favors peers with higher reputation. Based on the definition of the reputation used in the system, we derive a fair trading policy. We evaluate the performance of reputation-based trading mechanisms and highlight the scenarios in which they can make a difference. Distributing data efficiently to the resources within a cloud, or to different collaborating clouds, is another issue in cloud computing. The delivery efficiency depends on the characteristics of the network links available among these network nodes and on the mechanism that takes advantage of them. Our study is based on the Global Environment for Network Innovations (GENI), a testbed for researchers to build a virtual laboratory at scale to explore future Internets. Our study consists of two parts. First, we characterize the links in the GENI network. Even though GENI has been used in many research and education projects, there has been no systematic study of what to expect from the GENI testbeds from a performance perspective. The goal is to characterize the links of the GENI networks and provide guidance for GENI experiments. Second, we propose a peer-to-peer approach to file distribution for cloud computing. We develop a mechanism that uses multiple delivery trees as the distribution structure, taking into consideration the measured performance information in the GENI network. Files are divided into chunks to improve parallelism among different delivery trees. With a strict scheduling mechanism for each chunk, we can reduce the overall time for getting the file to all relevant nodes. We evaluate the proposed mechanism and show that it can significantly reduce the overall delivery time.
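    The abstract does not reproduce the exact trading policy derived in the dissertation, so the following is only a generic sketch of the reputation-weighted exchange idea it describes: peers with higher reputation receive more of what they request, and reputation tracks the ratio of contribution to consumption. The threshold, scaling rule, and smoothing factor below are illustrative assumptions.

```python
# Hedged sketch of reputation-weighted fair trading (not the dissertation's
# exact policy; the formula, threshold, and smoothing factor are illustrative).

def units_granted(requester_reputation: float, requested_units: int,
                  min_reputation: float = 0.2) -> int:
    """Grant more resources to peers with higher reputation to discourage free riding."""
    if requester_reputation < min_reputation:
        return 0  # refuse service to likely free riders
    # Scale the grant linearly with reputation, capped at the request.
    return min(requested_units, int(requested_units * requester_reputation))


def updated_reputation(old: float, contributed: int, consumed: int,
                       alpha: float = 0.1) -> float:
    """Exponentially weighted contribution-to-consumption ratio, clipped to [0, 1]."""
    ratio = contributed / max(consumed, 1)
    return min(1.0, (1 - alpha) * old + alpha * min(ratio, 1.0))


print(units_granted(0.8, 100))  # a well-reputed peer gets 80 of the 100 units requested
print(units_granted(0.1, 100))  # a low-reputation peer is refused
```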

    Security, Privacy and Safety Risk Assessment for Virtual Reality Learning Environment Applications

    Social Virtual Reality based Learning Environments (VRLEs) such as vSocial render instructional content in a three-dimensional immersive computer experience for training youth with learning impediments. There is limited prior work exploring attack vulnerability in VR technology, and hence there is a need for systematic frameworks to quantify risks corresponding to security, privacy, and safety (SPS) threats. The SPS threats can adversely impact the educational user experience and hinder delivery of VRLE content. In this paper, we propose a novel risk assessment framework that utilizes attack trees to calculate a risk score for varied VRLE threats, with the rate and duration of threats as inputs. We compare the impact of a well-constructed attack tree with an ad hoc attack tree to study the trade-offs between the overheads of managing attack trees and the cost of risk mitigation when vulnerabilities are identified. We use a vSocial VRLE testbed in a case study to showcase the effectiveness of our framework and demonstrate how a suitable attack tree formalism can result in a safer, more privacy-preserving, and more secure VRLE system. Comment: To appear in the CCNC 2019 Conference
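    The abstract states that risk scores are computed from attack trees with threat rate and duration as inputs, but does not give the scoring function itself. The sketch below is a standard attack-tree evaluation used purely for illustration: leaf likelihood is modeled as rate times duration (capped at 1), AND gates multiply child scores, and OR gates combine them as the complement of all children failing. None of these modeling choices should be read as the paper's actual formulas.

```python
# Generic attack-tree evaluation sketch; the leaf model and gate rules are
# illustrative assumptions, not the paper's exact scoring function.
from dataclasses import dataclass, field
from math import prod


@dataclass
class Node:
    gate: str = "LEAF"             # "LEAF", "AND", or "OR"
    rate: float = 0.0              # threat occurrences per hour (leaves only)
    duration: float = 0.0          # exposure window in hours (leaves only)
    children: list["Node"] = field(default_factory=list)


def risk(node: Node) -> float:
    if node.gate == "LEAF":
        return min(1.0, node.rate * node.duration)
    child_risks = [risk(c) for c in node.children]
    if node.gate == "AND":         # all sub-attacks must succeed
        return prod(child_risks)
    # OR gate: at least one sub-attack succeeds
    return 1.0 - prod(1.0 - r for r in child_risks)


# Example: disrupt a VRLE session either by packet flooding or by hijacking
# a session, where hijacking requires both credential theft and replay.
tree = Node("OR", children=[
    Node(rate=0.05, duration=2.0),                         # flooding
    Node("AND", children=[Node(rate=0.01, duration=4.0),   # credential theft
                          Node(rate=0.5, duration=1.0)]),  # replay
])
print(f"Root risk score: {risk(tree):.3f}")
```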

    NETEMBED: A Network Resource Mapping Service for Distributed Applications

    Emerging configurable infrastructures such as large-scale overlays and grids, distributed testbeds, and sensor networks comprise diverse sets of available computing resources (e.g., CPU and OS capabilities and memory constraints) and network conditions (e.g., link delay, bandwidth, loss rate, and jitter) whose characteristics are both complex and time-varying. At the same time, distributed applications to be deployed on these infrastructures exhibit increasingly complex constraints and requirements on the resources they wish to utilize. Examples include selecting nodes and links to schedule an overlay multicast file transfer across the Grid, or embedding a network experiment with specific resource constraints in a distributed testbed such as PlanetLab. Thus, a common problem facing the efficient deployment of distributed applications on these infrastructures is that of "mapping" application-level requirements onto the network in such a manner that the requirements of the application are realized, assuming that the underlying characteristics of the network are known. We refer to this problem as the network embedding problem. In this paper, we propose a new approach to tackle this combinatorially hard problem. Thanks to a number of heuristics, our approach greatly improves performance and scalability over previously existing techniques. It does so by pruning large portions of the search space without overlooking any valid embedding. We present a construction that allows a compact representation of candidate embeddings, which is maintained by carefully controlling the order in which candidate mappings are inserted and invalid mappings are removed. We present an implementation of our proposed technique, which we call NETEMBED – a service that identifies feasible mappings of a virtual network configuration (the query network) to an existing real infrastructure or testbed (the hosting network). We present results of extensive performance evaluation experiments of NETEMBED using several combinations of real and synthetic network topologies. Our results show that our NETEMBED service is quite effective in identifying one (or all) possible embeddings for quite sizable queries and hosting networks – much larger than what any of the existing techniques or services are able to handle. National Science Foundation (CNS Cybertrust 0524477, NSF CNS NeTS 0520166, NSF CNS ITR 0205294, EIA RI 0202067)
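    To make the "network embedding" problem concrete, the sketch below maps query nodes to hosting nodes under CPU and per-link bandwidth constraints, pruning candidate hosts before searching. It does not reproduce NETEMBED's compact candidate-set construction or its heuristics; it is a plain backtracking search under assumed, simplified constraint models (one-to-one node mapping, direct host links only).

```python
# Simplified network-embedding sketch: map each query node to a distinct
# hosting node so that CPU capacities and per-link bandwidth demands are met.
# This is an illustration only, not NETEMBED's algorithm or data structures.

def embed(query_nodes, query_links, host_cpu, host_bw):
    """query_nodes: {name: cpu_demand}
    query_links:  {(u, v): bandwidth_demand}
    host_cpu:     {name: cpu_capacity}
    host_bw:      {(a, b): bandwidth_capacity} (symmetric keys assumed)
    Returns a mapping {query_node: host_node} or None."""

    # Prune: each query node keeps only hosts with enough CPU.
    candidates = {q: [h for h, cap in host_cpu.items() if cap >= cpu]
                  for q, cpu in query_nodes.items()}

    def link_ok(mapping, u, v):
        hu, hv = mapping.get(u), mapping.get(v)
        if hu is None or hv is None:
            return True  # other endpoint not mapped yet
        key = (hu, hv) if (hu, hv) in host_bw else (hv, hu)
        return host_bw.get(key, 0) >= query_links[(u, v)]

    def search(remaining, mapping):
        if not remaining:
            return dict(mapping)
        q = remaining[0]
        for h in candidates[q]:
            if h in mapping.values():
                continue  # each host is used at most once in this sketch
            mapping[q] = h
            if all(link_ok(mapping, u, v) for (u, v) in query_links if q in (u, v)):
                result = search(remaining[1:], mapping)
                if result:
                    return result
            del mapping[q]
        return None

    return search(list(query_nodes), {})


# Usage: embed a two-node query with a 100 Mb/s link into a three-node host.
print(embed({"q1": 2, "q2": 4}, {("q1", "q2"): 100},
            {"h1": 4, "h2": 8, "h3": 2}, {("h1", "h2"): 200}))
```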

    Dovetail: Stronger Anonymity in Next-Generation Internet Routing

    Current low-latency anonymity systems use complex overlay networks to conceal a user's IP address, introducing significant latency and network efficiency penalties compared to normal Internet usage. Rather than obfuscating network identity through higher level protocols, we propose a more direct solution: a routing protocol that allows communication without exposing network identity, providing a strong foundation for Internet privacy, while allowing identity to be defined in those higher level protocols where it adds value. Given current research initiatives advocating "clean slate" Internet designs, an opportunity exists to design an internetwork layer routing protocol that decouples identity from network location and thereby simplifies the anonymity problem. Recently, Hsiao et al. proposed such a protocol (LAP), but it does not protect the user against a local eavesdropper or an untrusted ISP, which will not be acceptable for many users. Thus, we propose Dovetail, a next-generation Internet routing protocol that provides anonymity against an active attacker located at any single point within the network, including the user's ISP. A major design challenge is to provide this protection without including an application-layer proxy in data transmission. We address this challenge in path construction by using a matchmaker node (an end host) to overlap two path segments at a dovetail node (a router). The dovetail then trims away part of the path so that data transmission bypasses the matchmaker. Additional design features include the choice of many different paths through the network and the joining of path segments without requiring a trusted third party. We develop a systematic mechanism to measure the topological anonymity of our designs, and we demonstrate the privacy and efficiency of our proposal by simulation, using a model of the complete Internet at the AS level.
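    The path-joining step described above can be illustrated at the level of router lists: a head segment runs from the source toward the matchmaker, a tail segment runs from the matchmaker to the destination, the two meet at a shared router (the dovetail), and the loop through the matchmaker is trimmed away. The sketch below shows only this list manipulation; the real protocol's cryptographic path construction and AS-level policy handling are omitted, and all node names are hypothetical.

```python
# Illustrative sketch of Dovetail-style path joining and trimming.

def join_at_dovetail(head_segment, tail_segment):
    """head_segment: routers from the source up to the matchmaker (inclusive).
    tail_segment: routers from the matchmaker to the destination (inclusive).
    Returns the trimmed end-to-end path, or None if no dovetail exists."""
    tail_index = {router: i for i, router in enumerate(tail_segment)}
    # Pick the latest router on the head segment that also appears on the
    # tail segment; that shared router acts as the dovetail.
    for i in range(len(head_segment) - 1, -1, -1):
        router = head_segment[i]
        if router in tail_index and router != tail_segment[0]:
            return head_segment[:i + 1] + tail_segment[tail_index[router] + 1:]
    return None


head = ["src", "r1", "r2", "r3", "matchmaker"]
tail = ["matchmaker", "r5", "r2", "r6", "dst"]
# r2 is the dovetail: the final path avoids the matchmaker entirely.
print(join_at_dovetail(head, tail))   # ['src', 'r1', 'r2', 'r6', 'dst']
```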

    Rate-Optimal Design of a Wireless Backhaul Network Using TV White Space

    The penetration of wireless broadband services in remote areas has primarily been limited by the lack of economic incentives that service providers encounter in sparsely populated areas. In addition, wireless backhaul links such as satellite and microwave are either expensive or require strict line-of-sight communication, making them unattractive. TV white space channels, with their desirable radio propagation characteristics, can provide an excellent alternative for engineering backhaul networks in areas that lack abundant infrastructure. Specifically, TV white space channels can provide "free wireless backhaul pipes" to transport aggregated traffic from broadband sources to fiber access points. In this paper, we investigate the feasibility of multi-hop wireless backhaul in the available white space channels by using noncontiguous Orthogonal Frequency Division Multiple Access (NC-OFDMA) transmissions between fixed backhaul towers. Specifically, we consider joint power control, scheduling, and routing strategies to maximize the minimum rate across broadband towers in the network. Depending on the population density and traffic demands of the location under consideration, we discuss the suitable choice of cell size for the backhaul network. Using the example of available TV white space channels in Wichita, Kansas (a small city located in the central USA), we provide illustrative numerical examples for designing such a wireless backhaul network.
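    The paper's joint power-control, scheduling, and routing optimization is not reproduced here. The sketch below only shows the per-link rate model such a design typically builds on: an NC-OFDMA backhaul link aggregates several noncontiguous TV white space channels, and its rate is the sum of Shannon capacities over the assigned channels. The bandwidths, transmit power, path loss, and noise density used in the example are illustrative assumptions.

```python
# Per-link NC-OFDMA rate sketch: sum of Shannon capacities over the
# noncontiguous white space channels aggregated onto one backhaul link.
from math import log2


def link_rate_bps(channels, noise_dbm_per_hz=-174.0):
    """channels: list of (bandwidth_hz, tx_power_dbm, path_loss_db) tuples,
    one per white space channel aggregated onto this backhaul link."""
    total = 0.0
    for bandwidth_hz, tx_power_dbm, path_loss_db in channels:
        noise_mw = 10 ** (noise_dbm_per_hz / 10) * bandwidth_hz
        rx_mw = 10 ** ((tx_power_dbm - path_loss_db) / 10)
        snr = rx_mw / noise_mw
        total += bandwidth_hz * log2(1 + snr)  # Shannon capacity of this channel
    return total


# Two aggregated 6 MHz TV channels, 30 dBm transmit power, ~120 dB path loss.
rate = link_rate_bps([(6e6, 30.0, 120.0), (6e6, 30.0, 120.0)])
print(f"{rate / 1e6:.1f} Mb/s")
```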

    Design of an emulation framework for evaluating large-scale open content aware networks

    The popularity of multimedia services has resulted in new revenue opportunities for network and service providers but has also introduced important new challenges. The large amount of resources required and the stringent quality requirements imposed by multimedia services have triggered the need for open content aware networks, where specific management algorithms that optimize the delivery of multimedia services can be dynamically plugged in when required. In the past, a plethora of algorithms have been proposed, ranging from specific cache algorithms to video client heuristics, each optimized for a specific multimedia service type and its corresponding delivery. However, it remains difficult to accurately characterize the performance of these algorithms and investigate the impact of an actual deployment in multimedia services. In this paper, we present a framework that allows evaluating the performance of such algorithms for open content aware networks. The proposed evaluation framework has two important advantages. First, it performs an emulation of the novel algorithms instead of using a simulation approach, which is often how performance is characterized. Second, the emulation framework allows evaluating the impact of combining different multimedia algorithms with each other. We present the architecture of the emulation framework and discuss the main software components used. Furthermore, we present a performance evaluation of an illustrative use case, which identifies the need for emulation-based evaluation.
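    The paper's actual architecture and software components are not reproduced here. The sketch below only illustrates the plug-in idea the abstract describes: content-aware algorithms (here, a cache admission policy and a video client heuristic) are written against a small interface and can then be combined within a single emulated run. All interfaces, class names, and numbers are hypothetical.

```python
# Minimal plug-in sketch for combining content-aware algorithms in one
# emulated run; interfaces and values are illustrative assumptions only.
from abc import ABC, abstractmethod


class CachePolicy(ABC):
    @abstractmethod
    def admit(self, segment_id: str, size_bytes: int) -> bool: ...


class ClientHeuristic(ABC):
    @abstractmethod
    def pick_bitrate(self, throughput_bps: float, buffer_s: float) -> int: ...


class AdmitAll(CachePolicy):
    def admit(self, segment_id, size_bytes):
        return True  # naive baseline: cache every segment


class BufferBasedClient(ClientHeuristic):
    def pick_bitrate(self, throughput_bps, buffer_s):
        # Simple buffer-based rule: switch up only when the buffer is healthy.
        ladder = [1_000_000, 3_000_000, 6_000_000]
        return ladder[2] if buffer_s > 20 else ladder[1] if buffer_s > 8 else ladder[0]


def emulate(cache: CachePolicy, client: ClientHeuristic, trace):
    """trace: list of (throughput_bps, buffer_s) samples from the emulated network."""
    for i, (throughput, buffer_s) in enumerate(trace):
        bitrate = client.pick_bitrate(throughput, buffer_s)
        cached = cache.admit(f"segment-{i}", bitrate * 4 // 8)  # 4-second segments
        print(f"segment-{i}: bitrate={bitrate} cached={cached}")


emulate(AdmitAll(), BufferBasedClient(), [(5e6, 25.0), (2e6, 6.0)])
```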