227 research outputs found

    Study of TCP Issues over Wireless and Implementation of iSCSI over Wireless for Storage Area Networks

    The Transmission Control Protocol (TCP) has proven effective in classical wired networks, showing an ability to adapt to modern, high-speed networks and to scenarios for which it was not originally designed. Wireless access to the Internet requires that data reliability be preserved while information is transmitted over the radio channel. Automatic repeat request (ARQ) schemes and TCP mechanisms are commonly used for error control at the link layer and at the transport layer, respectively. TCP/IP is becoming a communication standard [1]. It was initially designed to provide reliable transmission over the IP protocol, operating principally in wired networks. Wireless networks are becoming increasingly ubiquitous, and we have witnessed exceptional growth in heterogeneous networks. This report considers the problem of supporting TCP, the Internet data transport protocol, over a lossy wireless link whose characteristics vary over time. Experimental results from a wireless test bed in a research laboratory are reported.
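
    The abstract does not describe the measurement tooling, but a test bed such as the one reported typically measures TCP goodput over the radio link. A minimal sketch of such a probe, in Python, is shown below; the server address and port are placeholders, and this illustrates the kind of measurement involved rather than the authors' actual harness.

```python
#!/usr/bin/env python3
"""Minimal TCP goodput probe for a lossy wireless link (illustrative sketch)."""
import socket
import time

CHUNK = 64 * 1024        # bytes per send() call
DURATION = 10.0          # seconds to transmit

def measure_goodput(host: str, port: int) -> float:
    """Send data for DURATION seconds and return achieved goodput in Mbit/s."""
    payload = b"\x00" * CHUNK
    sent = 0
    with socket.create_connection((host, port)) as sock:
        # Disable Nagle so the sender does not coalesce writes;
        # retransmissions on the lossy link are handled by TCP itself.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        start = time.monotonic()
        while time.monotonic() - start < DURATION:
            sock.sendall(payload)
            sent += CHUNK
        elapsed = time.monotonic() - start
    return sent * 8 / elapsed / 1e6

if __name__ == "__main__":
    # 192.0.2.10:5001 is a placeholder sink that discards received bytes.
    print(f"{measure_goodput('192.0.2.10', 5001):.1f} Mbit/s")
```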

    Learning network storage curriculum with experimental case based on embedded systems

    In this paper, we present an experimental case for the course Network Storage and Security that improved the learning outcomes of our students. The newly designed, experiment-based content is merged into the existing course to help students gain practical experience with network storage. The experiments aim to build a network storage system from readily available resources rather than specialized network storage equipment. Technically, students learn general practical knowledge of network storage through iSCSI (a network storage protocol based on IP technology) as well as embedded systems technology. Through the experimental case, we found that it substantially enhances students' comprehensive and practical abilities, develops their teamwork spirit and creativity, and, in particular, improves learning outcomes in the network storage curriculum. These learning and thinking methods can also be generalized and applied to other computer science courses.
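
    The course builds its storage system on iSCSI, but the underlying idea of exporting raw blocks over an IP network can be illustrated without a full iSCSI stack. The following Python sketch is a deliberately simplified, hypothetical classroom exercise, not the iSCSI protocol itself: a server that serves fixed-size blocks from a backing file over TCP (port 3260 is borrowed from iSCSI for flavour; the backing file is assumed to exist).

```python
#!/usr/bin/env python3
"""Toy block server: serves 4 KiB blocks from a backing file over TCP.

A classroom-style illustration of IP-based block storage; real iSCSI
adds sessions, SCSI commands, login negotiation, and error recovery.
"""
import socketserver
import struct

BLOCK_SIZE = 4096
BACKING_FILE = "disk.img"   # hypothetical pre-created backing store

class BlockHandler(socketserver.StreamRequestHandler):
    def handle(self):
        with open(BACKING_FILE, "r+b") as disk:
            while True:
                header = self.rfile.read(9)   # opcode(1) + block number(8)
                if len(header) < 9:
                    return                    # client disconnected
                op, block = struct.unpack("!BQ", header)
                disk.seek(block * BLOCK_SIZE)
                if op == 0:                   # READ request
                    self.wfile.write(disk.read(BLOCK_SIZE))
                elif op == 1:                 # WRITE request
                    disk.write(self.rfile.read(BLOCK_SIZE))

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 3260), BlockHandler) as srv:
        srv.serve_forever()
```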

    CloudJet4BigData: Streamlining Big Data via an Accelerated Socket Interface

    Big data applications need to feed users fresh processing results, and cloud platforms can be used to speed them up. This paper describes a new data communication protocol, CloudJet, for long-distance, large-volume big data access, designed to alleviate the large latencies encountered when sharing big data resources in the cloud. It encapsulates a dynamic multi-stream/multi-path engine at the socket level that conforms to the Portable Operating System Interface (POSIX) and can therefore accelerate any POSIX-compatible application across IP-based networks. In real-world tests, CloudJet accelerated typical big data applications such as very large databases (VLDB), data mining, media streaming, and office applications by up to tenfold.
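
    The abstract does not disclose CloudJet's internals, but the core idea of a multi-stream engine, striping one logical transfer across several parallel TCP connections to better fill a long fat pipe, can be sketched as follows. The host, port, and stream count are placeholders, and CloudJet itself works transparently at the POSIX socket layer rather than in application code as here.

```python
#!/usr/bin/env python3
"""Stripe one logical payload across several parallel TCP streams (sketch)."""
import socket
from concurrent.futures import ThreadPoolExecutor

HOST, PORT, STREAMS = "198.51.100.7", 9000, 4   # placeholders

def send_stripe(data: bytes) -> int:
    """Send one contiguous stripe over its own TCP connection."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(data)
    return len(data)

def parallel_send(payload: bytes) -> int:
    """Split the payload into contiguous stripes and transmit them
    concurrently, one connection per stripe; returns bytes sent."""
    step = -(-len(payload) // STREAMS)           # ceiling division
    stripes = [payload[i:i + step] for i in range(0, len(payload), step)]
    with ThreadPoolExecutor(max_workers=STREAMS) as pool:
        return sum(pool.map(send_stripe, stripes))

if __name__ == "__main__":
    print(parallel_send(b"x" * (32 * 1024 * 1024)), "bytes sent")
```

    A real engine would also tag each stripe with its offset so the receiver can reassemble the payload, and would adapt the stream count to the measured path; CloudJet additionally exploits multiple network paths.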

    M2: Malleable Metal as a Service

    Existing bare-metal cloud services that provide users with physical nodes have a number of serious disadvantages compared with their virtual alternatives, including slow provisioning times, difficulty for users in releasing nodes and then reusing them to handle changes in demand, and poor tolerance of failures. We introduce M2, a bare-metal cloud service that uses network-mounted boot drives to overcome these disadvantages. We describe the architecture and implementation of M2 and compare its agility, scalability, and performance to those of existing systems. We show that M2 can reduce provisioning time by over 50% while offering richer functionality and comparable run-time performance with respect to tools that provision images onto local disks. M2 is open source and available at https://github.com/CCI-MOC/ims.
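
    The abstract attributes M2's agility to network-mounted boot drives rather than copying images onto local disks. One common mechanism for this style of fast provisioning is a copy-on-write clone of a read-only golden image, as in the hypothetical sketch below; the file names are placeholders and this illustrates the general technique, not M2's actual implementation (which is documented in the repository above).

```python
#!/usr/bin/env python3
"""Provision a node's boot drive as a copy-on-write overlay (illustration)."""
import subprocess

GOLDEN = "golden-ubuntu.qcow2"    # hypothetical read-only base image

def provision(node: str) -> str:
    """Create a thin qcow2 overlay backed by the golden image.

    Only blocks the node writes are stored in the overlay, so creation
    is near-instant regardless of the base image size.
    """
    overlay = f"{node}-boot.qcow2"
    subprocess.run(
        # -F declares the backing format, required by recent qemu-img.
        ["qemu-img", "create", "-f", "qcow2",
         "-b", GOLDEN, "-F", "qcow2", overlay],
        check=True,
    )
    return overlay

if __name__ == "__main__":
    print("boot drive:", provision("node01"))
```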

    Cheetah: An Economical Distributed RAM Drive

    Current hard drive technology shows a widening gap between the capacity to store vast amounts of data and the ability to process it. To overcome this secular trend, we explore the use of available distributed RAM resources to effectively replace a mechanical hard drive. The essential approach is a distributed Linux block device that spreads its blocks across spare RAM on a cluster and transfers blocks over the network. The presented solution is LAN-scalable, easy to deploy, and faster than a commodity hard drive. The specific driving problem is I/O-intensive applications, particularly digital forensics. The prototype implementation is a Linux 2.4 kernel module that serves Unix-based clients. It features an adaptive prefetching scheme that fetches future data blocks for each read request. We present experimental results, based on generic benchmarks as well as digital forensic applications, that demonstrate significant performance gains over commodity hard drives.
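
    The abstract names an adaptive prefetching scheme but gives no algorithm. A plausible minimal form, sketched below in Python, fetches the next few sequential blocks on every read and grows or shrinks that window depending on whether prefetched blocks are actually hit; the window bounds and the fetch_remote callable are assumptions for illustration, not Cheetah's kernel code.

```python
"""Adaptive sequential prefetcher (illustrative user-space model)."""

class Prefetcher:
    def __init__(self, fetch_remote, min_win=1, max_win=32):
        self.fetch_remote = fetch_remote   # callable: block number -> bytes
        self.min_win, self.max_win = min_win, max_win
        self.window = min_win              # current prefetch depth
        self.cache = {}                    # block number -> prefetched data

    def read(self, block: int) -> bytes:
        if block in self.cache:
            # Prefetch hit: the sequential guess paid off, so look
            # further ahead on subsequent reads.
            data = self.cache.pop(block)
            self.window = min(self.window * 2, self.max_win)
        else:
            # Miss: fetch on demand and shrink the window.
            data = self.fetch_remote(block)
            self.window = max(self.window // 2, self.min_win)
        # A real driver would issue these prefetches asynchronously.
        for b in range(block + 1, block + 1 + self.window):
            if b not in self.cache:
                self.cache[b] = self.fetch_remote(b)
        return data
```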

    Fairness in a data center

    Existing data centers utilize several networking technologies in order to handle the performance requirements of different workloads. Maintaining diverse networking technologies increases complexity and is not cost-effective, hence the current trend to converge all traffic onto a single networking fabric. Ethernet is both cost-effective and ubiquitous, and it has therefore been chosen for the converged fabric. However, traditional Ethernet does not satisfy the needs of all traffic workloads, largely because of its lossy nature, and has to be enhanced to allow for full convergence. The resulting technology, Data Center Bridging (DCB), is a new set of IEEE standards that make Ethernet lossless even in the presence of congestion. As with any new networking technology, it is critical to analyze how the different protocols within DCB interact with each other and how each protocol interacts with existing technologies in other layers of the protocol stack. This dissertation presents two novel schemes that address critical issues in DCB networks: fairness with respect to packet lengths, and fairness with respect to flow control and bandwidth utilization. The Deficit Round Robin with Adaptive Weight Control (DRR-AWC) algorithm actively monitors the incoming streams and adjusts the scheduling weights of the outbound port. The algorithm was implemented on a real DCB switch and shown to increase fairness for traffic consisting of mixed-length packets. Targeted Priority-based Flow Control (TPFC) provides a hop-by-hop flow control mechanism that throttles aggressor streams while allowing victim streams to continue unimpeded. Two variants of the targeting mechanism within TPFC are presented and their performance is evaluated through simulation.
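
    As a reference point for the first scheme, classic Deficit Round Robin can be sketched in a few lines; DRR-AWC extends it by monitoring incoming streams and adjusting the per-queue weights (the quantum values below) at run time. The sketch is plain DRR with a comment marking where such adaptation would occur, an illustration rather than the dissertation's switch implementation.

```python
"""Deficit Round Robin over per-flow packet queues (illustration)."""
from collections import deque

class DRRScheduler:
    def __init__(self, quanta):
        # quanta[i]: bytes of credit queue i earns per round (its weight).
        self.quanta = list(quanta)
        self.deficits = [0] * len(quanta)
        self.queues = [deque() for _ in quanta]

    def enqueue(self, flow: int, packet: bytes) -> None:
        self.queues[flow].append(packet)

    def next_packets(self):
        """One full round: yield every packet each queue's credit allows."""
        for i, q in enumerate(self.queues):
            if not q:
                continue
            self.deficits[i] += self.quanta[i]
            while q and len(q[0]) <= self.deficits[i]:
                pkt = q.popleft()
                self.deficits[i] -= len(pkt)
                yield i, pkt
            if not q:
                self.deficits[i] = 0   # an emptied queue keeps no credit
            # A DRR-AWC-style extension would adjust self.quanta here,
            # based on observed per-flow packet-length statistics.
```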

    Peak Performance – Remote Memory Revisited

    Many database systems share a need for large amounts of fast storage. However, economies of scale limit the utility of extending a single machine with an arbitrary amount of memory. The recent broad availability of the zero-copy data transfer protocol RDMA over low-latency, high-throughput network interconnects such as InfiniBand prompts us to revisit the long-proposed use of memory provided by remote machines. In this paper, we present a solution that makes use of remote memory without modifying the operating system, and we investigate its impact on database performance.
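
    Real RDMA code involves verbs, queue pairs, and registered memory regions, so the sketch below abstracts the transport behind two callables and shows only the buffer-management idea: keep hot pages in a local LRU pool and spill cold ones to remote memory. The callables and capacity are assumptions; over InfiniBand they would be one-sided RDMA reads and writes.

```python
"""LRU buffer pool that spills cold pages to remote memory (illustration)."""
from collections import OrderedDict

class RemoteMemoryPool:
    def __init__(self, capacity_pages, remote_put, remote_get):
        # remote_put/remote_get stand in for the RDMA write/read path.
        self.capacity = capacity_pages
        self.local = OrderedDict()            # page id -> bytes (LRU order)
        self.remote_put, self.remote_get = remote_put, remote_get

    def get(self, page_id: int) -> bytes:
        if page_id in self.local:
            self.local.move_to_end(page_id)   # refresh LRU position
            return self.local[page_id]
        page = self.remote_get(page_id)       # fault in from remote memory
        self.put(page_id, page)
        return page

    def put(self, page_id: int, page: bytes) -> None:
        self.local[page_id] = page
        self.local.move_to_end(page_id)
        while len(self.local) > self.capacity:
            victim, data = self.local.popitem(last=False)
            self.remote_put(victim, data)     # evict coldest page remotely
```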

    Evaluation of type-1 hypervisors on desktop-class virtualization hosts

    System virtualization has become a fundamental IT tool, whether as type-2/hosted virtualization, mostly used by end users on their personal computers, or as type-1/bare-metal virtualization, well established in IT departments and used throughout modern datacenters as the very foundation of cloud computing. Although bare-metal virtualization is meant to be deployed on server-grade hardware (for performance, stability, and reliability reasons), properly configured desktop-class systems or workstations are often used as virtualization servers because of their attractive performance/cost ratio. This paper presents the results of a study conducted on commodity virtualization servers, aiming to assess the performance of a representative set of the type-1 platforms most used today: VMware ESXi, Citrix XenServer, Microsoft Hyper-V, oVirt, and Proxmox. Hypervisor performance is measured indirectly through synthetic benchmarks performed on Windows 10 LTSB and Ubuntu Server 16.04 guests: PassMark for Windows, UnixBench for Linux, and the cross-platform Flexible I/O Tester and iPerf3 benchmarks. The evaluation results may be used to guide the choice of the best-performing type-1 platform, depending on the predominant guest OS, its performance patterns (CPU-bound, I/O-bound, or balanced), its storage type (local/remote), and the required network-level performance.
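
    A study of this kind typically scripts its benchmark runs so that every guest is measured identically. The hypothetical sketch below drives one of the tools the paper names, iPerf3, from Python and extracts the achieved bitrate from its JSON output; the server address is a placeholder, and this illustrates the general approach rather than the authors' scripts.

```python
#!/usr/bin/env python3
"""Run an iPerf3 client against a guest and report throughput (sketch)."""
import json
import subprocess

def iperf3_mbps(server: str, seconds: int = 10) -> float:
    """Return sender throughput in Mbit/s from one iperf3 run (-J = JSON)."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    return report["end"]["sum_sent"]["bits_per_second"] / 1e6

if __name__ == "__main__":
    print(f"{iperf3_mbps('10.0.0.42'):.0f} Mbit/s")   # placeholder guest IP
```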

    Replication and Caching Systems for the support of VMs stored in File Systems with Snapshots

    Recently, in a relatively short timeframe, there have been fundamental changes in the way computing power is used. Virtualisation technology has changed both the model of a data centre's infrastructure and the way physical computers are managed. This shift is a consequence of today's fast deployment rate of Virtual Machines (VMs) in high-consolidation environments with minimal need for human management. New approaches to virtualisation techniques are being developed at a surprisingly fast rate, leading to an exciting and vibrant new ecosystem of platforms and services. The big industry players are tackling problems such as desktop virtualisation with moderate success, but they largely ignore the computational power already present in their clients' infrastructures, opting instead for costly solutions based on powerful new machines. There is still room for improvement in Virtual Desktop Infrastructure (VDI) and for new architectures that take advantage of the computational power available at the user's desk with minimal management effort; Infrastructure for Client-Based Desktops (iCBD) is one such project. This thesis focuses on the development of mechanisms for the replication and caching of VM images stored in a local filesystem, albeit one with the ability to perform snapshots. The work addresses several challenges: the proposed architecture must be entirely distributed and fully integrated with the existing client-based VDI platform, and it must cope efficiently with very large read-only files (some of them snapshots) and handle their multiple versions. It also explores the challenges and advantages of deploying such a system on a high-throughput network, with both high availability and scalability, while efficiently supporting a large number of users (and their workstations).
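
    Because the images in question are large, read-only, and exist in many closely related versions (snapshots), a natural caching building block is content-addressed chunking, in which identical chunks shared between versions are stored and fetched only once. The sketch below illustrates that idea; the chunk size and the fetch_chunk callable are assumptions, and this is a conceptual illustration rather than the iCBD implementation.

```python
"""Content-addressed chunk cache for read-only image versions (sketch)."""
import hashlib

CHUNK_SIZE = 1 << 20    # 1 MiB chunks (assumed)

def chunk_digests(path: str):
    """Return the list of SHA-256 digests describing one image version."""
    digests = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digests.append(hashlib.sha256(chunk).hexdigest())
    return digests

class ChunkCache:
    """Chunks shared by several snapshots are fetched and stored once."""
    def __init__(self, fetch_chunk):
        self.fetch_chunk = fetch_chunk    # callable: digest -> bytes
        self.store = {}                   # digest -> chunk data

    def materialise(self, digests, out_path: str) -> None:
        """Rebuild one image version locally from its digest list."""
        with open(out_path, "wb") as out:
            for d in digests:
                if d not in self.store:
                    self.store[d] = self.fetch_chunk(d)   # cache miss
                out.write(self.store[d])
```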