99 research outputs found

    UKAIRO: internet-scale bandwidth detouring

    The performance of content distribution on the Internet is crucial for many services. While popular content can be delivered efficiently to users by caching it in content delivery networks, the distribution of less popular content is often constrained by the bandwidth of the Internet path between the content server and the client. Neither party can influence the selected path, so clients may have to download content along a path that is congested or has limited capacity. We describe UKAIRO, a system that reduces Internet download times by using detour paths with higher TCP throughput. UKAIRO first discovers detour paths among an overlay network of potential detour hosts and then transparently diverts HTTP connections via these hosts to improve the throughput of clients downloading from content servers. Our evaluation shows that, by performing infrequent bandwidth measurements between 50 randomly selected PlanetLab hosts, UKAIRO can identify and exploit detour paths that increase the median bandwidth to public Internet web servers by up to 80%.
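    The core selection rule is simple: a one-hop detour is only as fast as its slower segment. Below is a minimal sketch of that idea in Python; it illustrates the detour-selection principle described above, not UKAIRO's implementation, and the `bw` measurement callback and host arguments are assumptions.

```python
# Hypothetical sketch of bandwidth-detour selection: the throughput of a
# one-hop detour is limited by its slower segment, so a detour host d is
# worth using only if min(bw(src, d), bw(d, dst)) beats the direct path.
def best_detour(src, dst, detour_hosts, bw):
    """bw(a, b) returns the measured TCP throughput from a to b."""
    direct = bw(src, dst)
    best_host, best_bw = None, direct
    for d in detour_hosts:
        detour_bw = min(bw(src, d), bw(d, dst))
        if detour_bw > best_bw:
            best_host, best_bw = d, detour_bw
    return best_host, best_bw  # best_host is None if the direct path wins
```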

    A skewness-aware matrix factorization approach for mesh-structured cloud services

    Online cloud services need to fulfill clients' requests quickly and at scale. State-of-the-art cloud services are increasingly deployed as a distributed service mesh, in which service-to-service communication is frequent. Unfortunately, problematic events may occur between any pair of nodes in the mesh, so it is vital to maximize network visibility. A state-of-the-art approach is to model pairwise RTTs with a latent factor model represented as a low-rank matrix factorization. A latent factor corresponds to a rank-1 component in the factorization model and is shared by all node pairs. However, different node pairs usually experience a skewed set of hidden factors, which should be fully considered in the model. In this paper, we propose a skewness-aware matrix factorization method named SMF. We decompose the matrix factorization into basic units of rank-one latent factors, and progressively combine rank-one factors for different node pairs. We present a unifying framework to automatically and adaptively select the rank-one factors for each node pair, which not only preserves the low rankness of the matrix model but also adapts to skewed network latency distributions. Over real-world RTT data sets, SMF significantly improves the relative error by a factor of 0.2× to 10×, converges quickly and stably, and compactly captures fine-grained local and global network latency structures.
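    To make the rank-one decomposition concrete, here is a hedged NumPy sketch of reconstructing an RTT matrix from rank-one factors with a per-pair selection mask, which is the skewness-aware idea the abstract describes; the array shapes and the binary mask `S` are illustrative assumptions, not the authors' SMF code.

```python
import numpy as np

# Illustrative sketch (not the authors' SMF implementation): approximate an
# RTT matrix as a sum of rank-one factors u_k v_k^T, where each node pair
# (i, j) may use only a subset of factors, captured by a binary mask S.
def skewed_reconstruction(U, V, S):
    """U: (n, r) left factors, V: (n, r) right factors,
    S: (n, n, r) per-pair factor selection (1 = factor active)."""
    n, r = U.shape
    R = np.zeros((n, n))
    for k in range(r):
        rank_one = np.outer(U[:, k], V[:, k])   # k-th rank-1 component
        R += S[:, :, k] * rank_one              # applied only where selected
    return R
```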

    Improving End-to-End Internet Performance by Detouring

    The Internet provides a best-effort service, which yields a robust, fault-tolerant network. However, the performance of the paths found by regular Internet routing is suboptimal, so applications rarely achieve all the benefits that the Internet can provide. The problem is made more difficult because the Internet is formed of competing ISPs, which have little incentive to reveal information about the performance of Internet paths; as a result, the Internet is sometimes referred to as a ‘black box’. Detouring uses routing overlay networks to find alternative paths (or detour paths) that can improve reliability, latency and bandwidth. Previous work has shown that detouring can improve Internet performance. However, one important issue remains: how can these detour paths be found without conducting large-scale measurements? In this thesis, we describe practical methods for discovering detour paths that improve specific performance metrics and that scale to the Internet. In particular, we concentrate on two metrics, latency and bandwidth, which are arguably the two most important performance metrics for end-user applications. Taking advantage of the Internet topology, we show how nodes can learn about segments of Internet paths that can be exploited by detouring, leading to reduced path latencies. Next, we investigate bandwidth detouring, revealing constructive detour properties and effective mechanisms for detouring paths in overlay networks. This leads to Ukairo, our bandwidth detouring platform that is scalable to the Internet, and to tcpChiryo, which predicts bandwidth in an overlay network by measuring a small portion of the network.
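    A latency detour exists exactly where Internet paths violate the triangle inequality. The following sketch, assuming a dict-of-dicts `rtt` of measured round-trip times, illustrates the one-hop case; it is a simplified illustration, not the thesis's discovery mechanism.

```python
# Hypothetical illustration of latency detouring: Internet paths often
# violate the triangle inequality, so routing a->b->c can beat a->c directly.
def latency_detours(rtt, a, c):
    """rtt[x][y] holds measured round-trip times; returns relay nodes b
    for which the one-hop detour a->b->c is faster than the direct path."""
    direct = rtt[a][c]
    return [b for b in rtt
            if b not in (a, c) and rtt[a][b] + rtt[b][c] < direct]
```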

    A new, evidence-based, theory for knowledge reuse in security risk analysis

    Security risk analysis (SRA) is a key activity in software engineering but requires heavy manual effort. Community knowledge in the form of security patterns or security catalogs can be used to support the identification of threats and security controls. However, no evidence-based theory exists about the effectiveness of security catalogs when used for security risk analysis. We adopt a grounded theory approach to propose a conceptual, revised and refined theory of SRA knowledge reuse. The theory refinement is backed by evidence gathered from interviews with experts (20) and controlled experiments with both experts (15) and novice analysts (18). We conclude the paper by providing insights into the use of catalogs and managerial implications.

    Network-on-Chip

    Bus-based interconnections face limitations in scalability, latency, bandwidth, and power consumption when supporting the huge number of on-chip resources, resulting in a communication bottleneck. These challenges can be efficiently addressed by a network-on-chip (NoC) system. This book gives a detailed analysis of various on-chip communication architectures and covers different aspects of NoCs, including their potential, architecture, technical challenges, optimization, design exploration, and research directions. In addition, it discusses current and future trends that could make an impactful and meaningful contribution to the research and design of on-chip communications and NoC systems.

    Cross-VM network attacks & their countermeasures within cloud computing environments

    Cloud computing is a contemporary model in which computing resources are dynamically scaled up and down for customers, hosted within large-scale multi-tenant systems. These resources are delivered to customers as improved, cost-effective services available on request. As one of the main trends of the IT industry in recent years, cloud computing has gained momentum and started to transform the way enterprises build and offer IT solutions. The primary motivation for using the cloud computing model is cost-effectiveness, which can compel Information and Communication Technologies (ICT) organizations to shift their sensitive data and critical infrastructure to cloud environments. Because of the complex nature of the underlying infrastructure, cloud environments face a large number of challenges, such as misconfigurations, cyber-attacks, rootkits and malware instances, which manifest themselves as serious threats. These threats noticeably reduce the general trustworthiness, reliability and accessibility of the cloud. Security is the primary concern of a cloud service model, yet a number of significant challenges have revealed that cloud environments are not as secure as one would expect, and there is only a limited understanding of how to offer secure services in a cloud model that can counter such challenges. This highlights the question of what constitutes a threat in the cloud model. One of the main threats stems from cost-effectiveness itself: cloud providers normally reduce cost by sharing infrastructure between multiple untrusted VMs, and this sharing has led to several problems, including co-location attacks. Cloud providers mitigate co-location attacks by introducing isolation, so that a guest VM cannot interfere with its host machine or with other guest VMs running on the same system. Such isolation is one of the prime foundations of cloud security for major public providers. However, these logical boundaries are not impenetrable. A myriad of previous studies have demonstrated how co-resident VMs can be vulnerable to attacks through shared file systems, cache side-channels, or compromise of the hypervisor layer using rootkits. Thus, cross-VM attacks remain possible: an attacker uses one VM to control or access other VMs on the same hypervisor, and multiple methods have been devised for strategic VM placement in order to exploit co-residency. Despite the clear potential of co-location attacks for abusing shared memory and disk, fine-grained cross-VM network-channel attacks have not yet been demonstrated. Current network-based attacks exploit existing vulnerabilities in networking technologies, such as ARP spoofing and DNS poisoning, which are difficult to use for VM-targeted attacks. The most commonly discussed network-based challenges focus on the fact that cloud providers place more layers of isolation between co-resident VMs than in non-virtualized settings, because the attacker and victim are often assigned to separate segments of virtual networks. However, it has been demonstrated that this is not necessarily sufficient to prevent manipulation of a victim VM's traffic. This thesis presents a comprehensive method and empirical analysis of the advancement of co-location attacks, in which a malicious VM can negatively affect the security and privacy of other co-located VMs by breaching the security perimeter of the cloud model.

    In such a scenario, it is imperative for a cloud provider to secure access to data appropriately so that it reaches the correct destination. The primary contribution of the work presented in this thesis is to introduce two innovative attack models against leading cloud models, impersonation and privilege escalation, that successfully breach the security perimeter of cloud models, and to propose countermeasures that block such attacks. The first attack model revealed in this thesis is a combination of impersonation and mirroring. This experimental setting exploits the network channel of the cloud model and successfully redirects the network traffic of other co-located VMs. The main contribution of this attack model is to find a gap in the contemporary cloud network architecture that an attacker can exploit. Prior research has also exploited the network channel using ARP poisoning and spoofing, but all such attack schemes have been countered, as modern cloud providers place more layers of security features than in preceding settings. Impersonation instead relies on already existing, regular network devices in order to mislead the security perimeter of the cloud model. The other contribution of this thesis is a privilege-escalation attack in which a non-root user escalates their privilege level by using a return-oriented programming (ROP) technique on the network channel and takes control of the management domain, through which the attacker can control other co-located VMs that they are not authorized to access. Finally, a countermeasure is proposed that directly modifies the open-source code of the cloud model and can inhibit all such attacks.
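    As one small illustration of the class of defense involved (not the thesis's actual modification of the cloud platform's source code), a provider-side monitor can flag ARP announcements that rebind a known IP address to a new MAC, the signature of the impersonation-and-mirroring redirection described above; the data structures here are hypothetical.

```python
# Hypothetical defensive sketch, not the thesis's countermeasure: flag
# ARP announcements that rebind a known IP address to a new MAC address,
# a classic symptom of the traffic-redirection attacks described above.
def check_arp_binding(bindings, ip, mac):
    """bindings: dict mapping IP -> trusted MAC learned at VM provisioning."""
    known = bindings.get(ip)
    if known is None:
        bindings[ip] = mac          # first sighting: record the binding
        return True
    if known != mac:
        return False                # rebinding attempt: possible spoofing
    return True
```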

    High-Performance Placement and Routing for the Nanometer Scale.

    Modern semiconductor manufacturing facilitates single-chip electronic systems that only five years ago required ten to twenty chips. Naturally, design complexity has grown within this period. In contrast to this growth, it is becoming common in the industry to limit design team size, which places a heavier burden on design automation tools. Our work identifies new objectives, constraints and concerns in the physical design of systems-on-chip, and develops new computational techniques to address them. In addition to faster and more relevant design optimizations, we demonstrate that traditional design flows based on "separation of concerns" produce unnecessarily suboptimal layouts. We develop new integrated optimizations that streamline traditional chains of loosely linked design tools. In particular, we bridge the gap between mixed-size placement and routing by updating the objective of global and detail placement to a more accurate estimate of routed wirelength. To this we add sophisticated whitespace allocation, and the combination provides increased routability, faster routing, shorter routed wirelength, and the best via counts of published techniques. To further improve post-routing design metrics, we present new global routing techniques based on Discrete Lagrange Multipliers (DLM), which produce the best routed-wirelength results on recent benchmarks. Our work culminates in the integration of our routing techniques within an incremental placement flow to improve detailed routing solutions, shrink die sizes and reduce total chip cost. Not only do our techniques improve the quality and cost of designs, but in many cases they also simplify design automation software implementation. Ultimately, we reduce the time needed for design closure through improved tool fidelity and the use of our incremental techniques for placement and routing.

    Ph.D. dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/64639/1/royj_1.pd
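    The essence of a Lagrange-multiplier formulation of global routing is that each edge of the routing graph carries a penalty that grows with capacity overflow, so later rip-up-and-reroute passes avoid congested regions. The sketch below illustrates that update rule only; it is a simplification under assumed data structures, not the dissertation's DLM router.

```python
# Illustrative sketch of the Lagrange-multiplier idea behind congestion-driven
# global routing (not the dissertation's DLM implementation): each routing
# edge keeps a multiplier that grows with overflow, steering later
# rip-up-and-reroute passes away from congested edges.
def update_multipliers(lambdas, usage, capacity, step=0.1):
    """lambdas, usage, capacity: dicts keyed by routing-graph edge."""
    for e in lambdas:
        overflow = usage[e] - capacity[e]
        # Multipliers stay non-negative and accumulate only on overflowed edges.
        lambdas[e] = max(0.0, lambdas[e] + step * overflow)
    return lambdas

def edge_cost(base_cost, lambdas, e):
    return base_cost[e] + lambdas[e]  # penalized cost seen by the router
```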

    Variational Curriculum Reinforcement Learning for Unsupervised Discovery of Skills

    Mutual information-based reinforcement learning (RL) has been proposed as a promising framework for acquiring complex skills autonomously, without a task-oriented reward function, through mutual information (MI) maximization or variational empowerment. However, learning complex skills is still challenging because the order in which skills are trained can largely affect sample efficiency. Motivated by this, we recast variational empowerment as curriculum learning in goal-conditioned RL with an intrinsic reward function, which we name Variational Curriculum RL (VCRL). From this perspective, we propose a novel approach to unsupervised skill discovery based on information theory, called Value Uncertainty Variational Curriculum (VUVC). We prove that, under regularity conditions, VUVC accelerates the increase of entropy in the visited states compared to the uniform curriculum. We validate the effectiveness of our approach on complex navigation and robotic manipulation tasks in terms of sample efficiency and state coverage speed. We also demonstrate that the skills discovered by our method successfully complete a real-world robot navigation task in a zero-shot setup, and that incorporating these skills with a global planner further increases performance.

    Comment: ICML 2023. First two authors contributed equally. Code at https://github.com/seongun-kim/vcr
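    The curriculum idea can be illustrated by goal sampling weighted by value uncertainty, estimated here as the disagreement of a value-function ensemble. This is a hedged sketch of that principle under assumed interfaces (`candidate_goals`, `value_ensemble`), not the released VCRL/VUVC code.

```python
import numpy as np

# Hedged sketch of the curriculum idea (not the released VUVC code): prefer
# goals where an ensemble of value functions disagrees most, i.e. where the
# agent's competence is most uncertain, instead of sampling goals uniformly.
def sample_goal(candidate_goals, value_ensemble, rng=np.random):
    """value_ensemble: list of callables mapping a goal to a value estimate."""
    values = np.array([[v(g) for v in value_ensemble] for g in candidate_goals])
    uncertainty = values.std(axis=1) + 1e-8   # ensemble disagreement per goal
    probs = uncertainty / uncertainty.sum()   # normalize into a distribution
    return candidate_goals[rng.choice(len(candidate_goals), p=probs)]
```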

    Assessment of Carbon Emissions of Road Projects and Development of a Framework for Carbon Footprint Calculation of Roads in the City Of Abu Dhabi

    Climate change has become a global issue affecting the environment and human health. Transportation is a major contributor to greenhouse gas (GHG) emissions, with road transport responsible for more than half of them. The main objective of this thesis was to estimate the carbon footprint associated with road projects in the city of Abu Dhabi following a comprehensive approach that considers all activities within the life cycle of roads. Three cases were considered: the Al Rahba City internal road network, the upgrading of Al Salam Street, and the widening of the Eastern Corniche Road. A carbon footprint estimation model (referred to as RoadCO₂) was developed to estimate the GHG emissions of the three road cases. The model considers emissions from all phases of road projects and reports them in terms of carbon dioxide equivalent (CO₂eq). The methodology suggested by the Intergovernmental Panel on Climate Change (IPCC) was adopted in constructing the model. Results revealed that the total emissions from the construction of the investigated road cases are about 43, 292, and 16 thousand tons CO₂eq, respectively. Equipment used in construction contributed about 70%, 15% and 21% of the total emissions of the construction phase, respectively; the rest of the construction-phase emissions originated from the use of construction materials and their associated transport. The upgrading of Al Salam Street produced the highest emissions from construction materials due to the construction of a tunnel. Annual total emissions during the operation phase of Al Salam Street were estimated at over 108 thousand tons CO₂eq/yr, whereas those for the Al Rahba City internal roads were about 15 thousand tons CO₂eq/yr, and those for the Corniche Road were 91 thousand tons CO₂eq/yr. For all three cases, emissions were generated mainly during the operation phase (94% or more), with the main contributor being vehicle movement, followed to a lesser extent by street lighting.
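    The IPCC-style accounting that such a model follows reduces, at its core, to multiplying activity data by emission factors and summing to CO₂eq. The sketch below shows that skeleton with placeholder values; the factor numbers and source names are illustrative assumptions, not RoadCO₂'s calibrated data.

```python
# Minimal sketch of IPCC-style accounting as described above: emissions are
# activity data multiplied by an emission factor, summed across sources and
# reported in tons of CO2-equivalent. Factor values are placeholders, not
# the RoadCO2 model's calibrated data.
EMISSION_FACTORS = {          # hypothetical factors, tCO2eq per unit
    "diesel_litres": 0.00268,
    "asphalt_tonnes": 0.061,
    "electricity_kwh": 0.0004,
}

def road_emissions(activity_data):
    """activity_data: dict mapping source name -> quantity consumed."""
    return sum(qty * EMISSION_FACTORS[src] for src, qty in activity_data.items())

# Example: fuel burned by construction equipment plus paving material.
total = road_emissions({"diesel_litres": 2_000_000, "asphalt_tonnes": 50_000})
```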