
    Efficient HTTP based I/O on very large datasets for high performance computing with the libdavix library

    Remote data access for data analysis in high performance computing is commonly done with specialized data access protocols and storage systems. These protocols are highly optimized for high throughput on very large datasets, multi-stream transfers, high availability, low latency and efficient parallel I/O. The purpose of this paper is to describe how we have adapted a generic protocol, the Hypertext Transfer Protocol (HTTP), to make it a competitive alternative for high performance I/O and data analysis applications in a global computing grid: the Worldwide LHC Computing Grid. In this work, we first analyze the design differences between the HTTP protocol and the most common high performance I/O protocols, pointing out the main performance weaknesses of HTTP. Then, we describe in detail how we solved these issues. Our solutions have been implemented in a toolkit called davix, available through several recent Linux distributions. Finally, we describe the results of our benchmarks, where we compare the performance of davix against an HPC-specific protocol for a data analysis use case. Comment: Presented at Very Large Data Bases (VLDB) 2014, Hangzhou.
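    The core of the approach above is making ordinary HTTP competitive for parallel reads on large remote files. The snippet below is a minimal illustrative sketch of that general idea using plain HTTP Range requests issued concurrently; it is not the davix API, and the URL and chunk size are placeholder assumptions.

```python
# Minimal sketch: concurrent reads of byte ranges from a large file over plain
# HTTP, illustrating the general idea of parallel I/O via HTTP Range requests
# (not the davix API itself). The URL is a placeholder.
import concurrent.futures
import urllib.request

URL = "https://example.org/dataset.root"   # hypothetical dataset location
CHUNK = 4 * 1024 * 1024                    # 4 MiB per request

def read_range(offset, length):
    """Fetch [offset, offset+length) with a single HTTP Range request."""
    req = urllib.request.Request(URL)
    req.add_header("Range", f"bytes={offset}-{offset + length - 1}")
    with urllib.request.urlopen(req) as resp:
        return offset, resp.read()

def parallel_read(offsets_lengths, workers=8):
    """Issue several Range requests concurrently and reassemble in order."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda ol: read_range(*ol), offsets_lengths)
    return b"".join(data for _, data in sorted(parts))

if __name__ == "__main__":
    # Read the first 16 MiB of the remote file in four concurrent chunks.
    ranges = [(i * CHUNK, CHUNK) for i in range(4)]
    blob = parallel_read(ranges)
    print(len(blob), "bytes fetched")
```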

    ENABLING SMART CITY SERVICES FOR HETEROGENEOUS WIRELESS NETWORKS

    A city can be transformed into a smart city if there is a resource-rich and reliable communication infrastructure available. A smart city in effect improves the quality of life of citizens by providing the means to convert existing solutions into smart ones. Thus, there is a need for finding a suitable network structure that is capable of providing sufficient capacity and satisfactory quality-of-service (QoS) in terms of latency and reliability. In this thesis, we propose a wireless network structure for smart cities. Our proposed network provides two wireless interfaces for each smart city node: one connects to a public WiFi network, while the other connects to a cellular network (such as LTE). Multi-homing allows different applications to use the two interfaces simultaneously and provides the necessary redundancy in case the connection on one interface is lost. The performance of our proposed network structure is investigated using comprehensive ns-2 computer simulations. In this study, high data rate real-time and low data rate non-real-time applications are considered. The effect of a wide range of network parameters is tested, such as the WiFi transmission rate, the LTE transmission rate, the number of real-time and non-real-time nodes, the application traffic rate, and different wireless propagation models. We focus on critical QoS parameters such as packet delivery delay and packet loss, and we also measure the energy consumed in packet transmission. Compared with a single-interface WiFi-based or LTE-based network, our simulation results show the superiority of the proposed network structure in satisfying QoS with lower latency and lower packet loss. We also find that the proposed multi-homing structure enables smart city sensors and other applications to realize greener communication by consuming less transmission power than single-interface networks.
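    As a rough illustration of the dual-interface idea, the sketch below shows a toy per-flow interface-selection policy with failover between a WiFi and an LTE link. The class layout, delay estimates and preference rules are assumptions made for illustration, not the policy evaluated in the thesis.

```python
# Illustrative sketch only: a toy policy for mapping smart-city traffic onto
# two interfaces (WiFi and LTE), with failover when one link is down.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    up: bool
    est_delay_ms: float   # current delay estimate for this interface

def pick_interface(realtime: bool, wifi: Link, lte: Link) -> Link:
    """Prefer the lower-delay link for real-time flows; otherwise prefer
    WiFi for cost, falling back to whichever interface is still up."""
    candidates = [l for l in (wifi, lte) if l.up]
    if not candidates:
        raise RuntimeError("no interface available")
    if realtime:
        return min(candidates, key=lambda l: l.est_delay_ms)
    # non-real-time: cheap interface first, redundancy via fallback
    return wifi if wifi.up else lte

if __name__ == "__main__":
    wifi = Link("wifi0", up=True, est_delay_ms=35.0)
    lte = Link("lte0", up=True, est_delay_ms=60.0)
    print(pick_interface(realtime=True, wifi=wifi, lte=lte).name)   # wifi0
    print(pick_interface(realtime=False, wifi=wifi, lte=lte).name)  # wifi0
```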

    Glowbal IP: An Adaptive and Transparent IPv6 Integration in the Internet of Things


    Resilient network design: Challenges and future directions

    This paper highlights the complexity and challenges of providing reliable services in the evolving communications infrastructure. The hurdles in providing end-to-end availability guarantees are discussed and research problems identified. Avenues for overcoming some of the examined challenges are presented. This includes the use of a highly available network spine embedded in a physical network, together with efficient cross-layer mapping, to offer survivability and differentiation of traffic into classes of resilience. © 2013 Springer Science+Business Media New York
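    One simple way to picture the network-spine idea is to select, from the physical topology, a spanning subgraph built from the most available links. The sketch below does this by computing a minimum spanning tree over -log(availability); the topology and availability figures are invented for illustration, and this is not the construction used in the paper.

```python
# Hedged sketch: extract a "spine" of highly available links from a physical
# topology as the spanning tree maximizing the product of link availabilities
# (equivalently, a minimum spanning tree over -log(availability)).
import math
import networkx as nx

G = nx.Graph()
links = [
    ("A", "B", 0.9999), ("B", "C", 0.999), ("A", "C", 0.995),
    ("C", "D", 0.9995), ("B", "D", 0.99),
]
for u, v, avail in links:
    G.add_edge(u, v, availability=avail, cost=-math.log(avail))

spine = nx.minimum_spanning_tree(G, weight="cost")
print(sorted(spine.edges()))   # edges chosen for the high-availability spine
```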

    Gateway selection in multi-hop wireless networks

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 61-63). This thesis describes the implementation of MultiNAT, an application that attempts to provide the benefits of client multi-homing while requiring minimal client configuration, and the evaluation of a novel link-selection algorithm, called AvMet, that significantly outperforms earlier multi-homing methods. The main motivation behind MultiNAT is the growing popularity of cheap broadband Internet connections, which are still not reliable enough for important applications. The increasing prevalence of wireless networks, with their attendant unpredictability and high rates of loss, is further exacerbating the situation. Recent work has shown that multi-homing can increase both Internet performance and the end-to-end availability of Internet services. Most previous solutions have required complicated client configuration or have routed packets through dedicated overlay networks; MultiNAT attempts to provide a simpler solution. MultiNAT automatically forwards connection attempts over all local interfaces and uses the resulting connection establishment times along with link-selection metrics to select which interface to use. MultiNAT is able to sustain transfer speeds in excess of 4 megabytes per second while imposing only an extra 150 microseconds of latency per packet. MultiNAT supports a variety of link-selection metrics, each with its own strengths and weaknesses. The MONET race-based scheme works well in wired networks, but is misled by the unpredictable nature of wireless losses. The ETT metric performs relatively well at finding high-throughput paths in multi-hop wireless networks, but can be incorrect when faced with heavy load. Unfortunately, neither of these metrics addresses end-to-end performance when packets traverse both wired and wireless networks. To fill this need, we propose AvMet, a link-selection scheme that tracks past connection history in order to improve current predictions. We evaluate AvMet on a variety of network configurations and find that AvMet is not misled by wireless losses. AvMet is able to outperform existing predictors in all network configurations and can improve end-to-end availability by up to half an order of magnitude. by Rohit Navalgund Rao.
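    To make the race-based selection idea concrete, the sketch below opens a TCP connection to the same destination from each local interface (by binding to that interface's source address) and keeps whichever handshake completes first, in the spirit of the MONET scheme mentioned above. The destination, interface names and source addresses are placeholder assumptions, not values from the thesis.

```python
# Rough sketch of race-based link selection: connect from every local
# interface in parallel and keep the fastest successful handshake.
import socket
import time
from concurrent.futures import ThreadPoolExecutor

DEST = ("example.org", 80)                                   # hypothetical destination
SOURCES = {"wifi0": "192.168.1.10", "eth0": "10.0.0.10"}     # assumed local IPs

def try_connect(iface, src_ip, timeout=3.0):
    """Attempt a TCP handshake from one interface; return its elapsed time."""
    start = time.monotonic()
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.bind((src_ip, 0))        # force the connection out of this interface
        s.connect(DEST)
        return iface, time.monotonic() - start, s
    except OSError:
        s.close()
        return iface, float("inf"), None

def race():
    """Race all interfaces and return the winner plus its open socket."""
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        results = list(pool.map(lambda kv: try_connect(*kv), SOURCES.items()))
    results.sort(key=lambda r: r[1])            # fastest handshake first
    winner, elapsed, sock = results[0]
    for _, _, s in results[1:]:
        if s:
            s.close()                           # drop the losing connections
    return winner, elapsed, sock
```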

    PERFORMANCE STUDY FOR CAPILLARY MACHINE-TO-MACHINE NETWORKS

    Communication technologies are witnessing a wide and rapid spread of wireless machine-to-machine (M2M) communications, which enable data transfer among devices without human intervention. Capillary M2M networks represent a candidate for providing reliable M2M connectivity. In this thesis, we propose a wireless network architecture that aims at supporting a wide range of M2M applications (either real-time or non-real-time) with an acceptable quality-of-service (QoS) level. The architecture uses capillary gateways to reduce the number of devices communicating directly with a cellular network such as LTE. Moreover, the proposed architecture reduces the traffic load on the cellular network by providing capillary gateways with dual wireless interfaces: one interface is connected to the cellular network, whereas the other communicates with the intended destination via a WiFi-based mesh backbone for cost-effectiveness. We study the performance of our proposed architecture with the aid of the ns-2 simulator. An M2M capillary network is simulated in different scenarios by varying multiple factors that affect system performance. The simulation results measure average packet delay and packet loss to evaluate the QoS of the proposed architecture. Our results reveal that the proposed architecture can satisfy the required level of QoS with a low traffic load on the cellular network, and that it outperforms both a cellular-based and a WiFi-based capillary M2M network. This implies a low cost of operation for the service provider while meeting a high-bandwidth service level agreement. In addition, we investigate how the proposed architecture behaves with different factors such as the number of capillary gateways, different application traffic rates, the number of backbone routers with different routing protocols, the number of destination servers, and the data rates provided by the LTE and WiFi technologies. Furthermore, the simulation results show that the proposed architecture remains reliable in terms of packet delay and packet loss even under a large number of nodes and high application traffic rates.
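    The sketch below illustrates, in a very simplified form, the aggregation role a capillary gateway plays: local M2M devices hand their readings to the gateway, which batches them and forwards a single uplink message instead of one cellular connection per device. The message format, batch size and uplink stand-in are assumptions for illustration only.

```python
# Toy capillary gateway: collect device reports and forward them in batches
# over a single uplink (here a stand-in callable) instead of per-device links.
import json
import time

class CapillaryGateway:
    def __init__(self, batch_size=10, uplink=print):
        self.batch_size = batch_size
        self.uplink = uplink          # stand-in for the WiFi-mesh/LTE uplink
        self.buffer = []

    def on_device_report(self, device_id, payload):
        """Collect a reading from a local M2M device."""
        self.buffer.append({"dev": device_id, "data": payload, "ts": time.time()})
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Send one aggregated uplink message for the whole batch."""
        if self.buffer:
            self.uplink(json.dumps({"batch": self.buffer}))
            self.buffer = []

if __name__ == "__main__":
    gw = CapillaryGateway(batch_size=3)
    for i in range(7):
        gw.on_device_report(f"sensor-{i % 3}", {"temp": 20 + i})
    gw.flush()   # push any remaining readings
```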

    White Paper for Research Beyond 5G

    The document considers both research in the scope of evolutions of the 5G systems (for the period around 2025) and some alternative/longer-term views (with later outcomes, or leading to substantially different design choices). The document reflects on four main system areas: fundamental theory and technology; radio and spectrum management; system design; and alternative concepts. The result of this exercise can be broken into two different strands: one focused on the evolution of technologies that are already under development for 5G systems, but that will remain research areas in the future (with “more challenging” requirements and specifications); the other highlighting technologies that are not really considered for deployment today, or that will be essential for addressing problems that do not yet exist but will become apparent when 5G systems begin their widespread deployment.

    LTE Optimization and Resource Management in Wireless Heterogeneous Networks

    Mobile communication technology is evolving at a great pace. The development of the Long Term Evolution (LTE) mobile system by 3GPP is one of the milestones in this direction. This work highlights a few areas in the LTE radio access network where the proposed innovative mechanisms can substantially improve overall LTE system performance. In order to further extend the capacity of LTE networks, integration with non-3GPP networks (e.g., WLAN, WiMAX) is also proposed in this work. Moreover, it is discussed how bandwidth resources should be managed in such heterogeneous networks. The work proposes a comprehensive system architecture as an overlay of the 3GPP-defined SAE architecture, effective resource management mechanisms, and a Linear Programming-based analytical solution to the optimal network resource allocation problem. In addition, alternative computationally efficient heuristic-based algorithms have been designed to achieve near-optimal performance.
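    As a hedged illustration of what a Linear Programming formulation for bandwidth allocation in such a heterogeneous network can look like, the sketch below splits each user's demand across an LTE and a WLAN carrier so that total carried traffic is maximized subject to per-network capacity. The capacities, demands and the scipy formulation are assumptions for illustration, not the model developed in the thesis.

```python
# Minimal LP sketch: allocate bandwidth to users across LTE and WLAN,
# maximizing total carried traffic under capacity and demand constraints.
from scipy.optimize import linprog

demands = [4.0, 6.0, 3.0]        # Mbit/s requested by each of 3 users (assumed)
cap_lte, cap_wlan = 8.0, 6.0     # Mbit/s available on each network (assumed)

n = len(demands)
# Variables: x = [lte_0..lte_{n-1}, wlan_0..wlan_{n-1}], all >= 0.
c = [-1.0] * (2 * n)             # maximize total allocation -> minimize -sum(x)

A_ub, b_ub = [], []
A_ub.append([1.0] * n + [0.0] * n); b_ub.append(cap_lte)    # LTE capacity
A_ub.append([0.0] * n + [1.0] * n); b_ub.append(cap_wlan)   # WLAN capacity
for i in range(n):                                          # per-user demand cap
    row = [0.0] * (2 * n)
    row[i] = 1.0
    row[n + i] = 1.0
    A_ub.append(row); b_ub.append(demands[i])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (2 * n), method="highs")
print("carried traffic:", -res.fun, "Mbit/s")
print("allocation:", res.x.round(2))
```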

    Managed access dependability for critical services in wireless inter domain environment

    The Information and Communications Technology (ICT) industry has over the last decades changed, and still continues to affect, the way people interact with each other and how they access and share information, services and applications in a global market characterized by constant change and evolution. For a networked and highly dynamic society, with consumers and market actors providing infrastructure, networks, services and applications, the mutual dependencies of failure-free operation are becoming more and more complex. Service Level Agreements (SLAs) between the various actors and users may be used to describe the offerings along with price schemes and promises regarding the delivered quality. However, there is no guarantee of failure-free operation, whatever efforts and means are deployed. A system fails for a number of reasons, but automatic fault handling mechanisms and operational procedures may be used to decrease the probability of service interruptions. The global number of mobile broadband Internet subscriptions surpassed the number of broadband subscriptions over fixed technologies in 2010. The User Equipment (UE) has become a powerful device supporting a number of wireless access technologies, and always-best-connected opportunities have become a reality. Some services, e.g. health care, smart power grid control and surveillance/monitoring, called critical services in this thesis, put high requirements on service dependability. A definition of dependability is the ability to deliver services that can justifiably be trusted. For critical services, the access networks become crucial factors for achieving high dependability. A major challenge in a multi-operator, multi-technology wireless environment is the mobility of the user, which necessitates handovers according to the physical movement. This thesis proposes an approach for optimizing the dependability of critical services in a multi-operator, multi-technology wireless environment. The approach allows the service availability and continuity to be predicted in real time, which is considered crucial for critical services. To increase the dependability of critical services, dual homing is proposed, where combinations of access points, possibly owned by different operators and using different technologies, are optimized for the specific location and movement of the user. A central part of the thesis is how to ensure the disjointness of physical and logical resources, which is essential for realizing the dependability gain offered by dual homing. To address the interdependency issues between physical and logical resources, a study of Operations, Administration, and Maintenance (OA&M) processes related to the access network of a commercial Global System for Mobile Communications (GSM)/Universal Mobile Telecommunications System (UMTS) operator was performed. The insight obtained from the study provided valuable information about the interwoven dependencies between different actors in the service delivery chain. Based on this insight, a technology-neutral information model of physical and logical resources in the access networks is proposed. The model is used for service availability and continuity prediction and to unveil interdependencies between infrastructure resources, and it is proposed as an extension of the Media Independent Handover (MIH) framework. A field trial in a commercial network was conducted to verify the feasibility of retrieving the model-related information from the operators' Operational Support Systems (OSSs) and to emulate the extension and usage of the MIH framework. The thesis also proposes how measurement reports from the UE and network signaling can be used to define virtual cells as part of the proposed extension of the MIH framework. Virtual cells are limited geographical areas where the radio conditions are homogeneous and which have radio coverage from a number of access points. A Markovian model is proposed for predicting the service continuity of a dual-homed critical service, where both the infrastructure and the radio links are considered. A dependability gain is obtained by choosing a globally optimal sequence of access points. Great emphasis has been placed on developing computationally efficient techniques and near-optimal solutions, which are considered important for predicting service continuity in real time for critical services. The proposed techniques for obtaining the globally optimal sequence of access points may be used by handover and multi-homing mechanisms/protocols for timely handover decisions and access point selection. With the proposed extension of the MIH framework, a globally optimal sequence of access points providing the highest reliability may be predicted in real time.
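    As a small numerical illustration of why dual homing helps, the sketch below models each access point as a two-state (up/down) process with a failure rate and a repair rate, assumes independent failures, and picks the pair of access points whose joint downtime probability is lowest. The access points and rates are invented for illustration; this is not the Markovian model developed in the thesis.

```python
# Toy dual-homing dependability estimate: a dual-homed service fails only
# when both chosen access points are down at the same time (independence
# assumed). Each AP has steady-state unavailability lam / (lam + mu).
from itertools import combinations

# access point -> (failures per hour, repairs per hour); assumed values
aps = {
    "AP_lte_opA": (0.002, 2.0),
    "AP_wifi_opA": (0.010, 4.0),
    "AP_lte_opB": (0.003, 1.5),
}

def unavailability(lam, mu):
    """Steady-state probability that a single access point is down."""
    return lam / (lam + mu)

def best_dual_homing(aps):
    """Pick the pair of access points minimizing joint downtime probability."""
    best = None
    for a, b in combinations(aps, 2):
        u = unavailability(*aps[a]) * unavailability(*aps[b])
        if best is None or u < best[0]:
            best = (u, (a, b))
    return best

if __name__ == "__main__":
    u, pair = best_dual_homing(aps)
    print(f"best pair {pair}: availability {1 - u:.9f}")
```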