140 research outputs found

    Securing Internet Protocol (IP) Storage: A Case Study

    Storage networking technology has enjoyed strong growth in recent years, but the security concerns and threats facing networked data have grown equally fast. Today, many potential threats target storage networks, including data modification, destruction and theft, denial-of-service (DoS) attacks, malware, hardware theft and unauthorized access, among others. For a Storage Area Network (SAN) to be secure, each of these threats must be individually addressed. In this paper, we present a comparative study by implementing different security methods in an IP storage network. (Comment: 10 pages, IJNGN Journal.)

    CloudJet4BigData: Streamlining Big Data via an Accelerated Socket Interface

    Big data applications need to feed users with fresh processing results, and cloud platforms can be used to speed them up. This paper describes a new data communication protocol (CloudJet) for long-distance, large-volume big data access operations, which alleviates the large latencies encountered when sharing big data resources in the clouds. It encapsulates a dynamic multi-stream/multi-path engine at the socket level, which conforms to the Portable Operating System Interface (POSIX) and can thereby accelerate any POSIX-compatible application across IP-based networks. It was demonstrated that CloudJet accelerates typical big data applications such as very large databases (VLDB), data mining, media streaming and office applications by up to tenfold in real-world tests.
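
    The socket-level multi-stream idea can be illustrated with a minimal sketch. This is a hypothetical Python illustration, not the CloudJet implementation; the host, port, chunking policy and 8-byte offset header are assumptions made for the example.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of socket-level multi-streaming (not the CloudJet code):
# a large payload is split into chunks, each chunk is sent over its own TCP
# connection, and an 8-byte offset header lets the receiver reassemble them.
def send_multistream(host: str, port: int, payload: bytes, streams: int = 4) -> None:
    chunk = (len(payload) + streams - 1) // streams

    def send_chunk(i: int) -> None:
        offset = i * chunk
        data = payload[offset:offset + chunk]
        with socket.create_connection((host, port)) as s:
            s.sendall(offset.to_bytes(8, "big") + data)

    with ThreadPoolExecutor(max_workers=streams) as pool:
        list(pool.map(send_chunk, range(streams)))
```

    A real engine would additionally steer the streams over different network paths and interpose transparently behind the standard POSIX socket calls, as the abstract describes.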

    Benchmarking of bare metal virtualization platforms on commodity hardware

    In recent years, system virtualization became a fundamental IT tool, whether it is type-2/hosted virtualization, mostly exploited by end-users in their personal computers, or type-1/bare metal, well established in IT departments and thoroughly used in modern datacenters as the very foundation of cloud computing. Though bare metal virtualization is meant to be deployed on server-grade hardware (for performance, stability and reliability reasons), properly configured desktop-class systems are often used as virtualization “servers”, due to their attractive performance/cost ratio. This paper presents the results of a study conducted on such systems, on the performance of Windows 10 and Ubuntu Server 16.04 guests when deployed in what we believe are the type-1 platforms most in use today: VMware ESXi, Citrix XenServer, Microsoft Hyper-V, and KVM-based (represented by oVirt and Proxmox). Performance is measured using three synthetic benchmarks: PassMark for Windows, UnixBench for Ubuntu Server, and the cross-platform Flexible I/O Tester. The benchmark results may be used to choose the most adequate type-1 platform (performance-wise), depending on the guest OS, its performance requirements (CPU-bound, IO-bound, or balanced) and the storage type (local/remote) used.
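
    As an illustration of how such storage measurements can be scripted, the sketch below drives the Flexible I/O Tester (fio) from Python; the job parameters are examples chosen for the sketch, not the settings used in the study.

```python
import json
import subprocess

# Example fio run (illustrative parameters, not the study's configuration):
# 4 KiB random reads against a test file, with results captured as JSON.
def fio_randread_iops(target_file: str, runtime_s: int = 60) -> float:
    cmd = [
        "fio", "--name=randread", "--filename=" + target_file,
        "--rw=randread", "--bs=4k", "--size=1G", "--direct=1",
        "--time_based", "--runtime=" + str(runtime_s),
        "--output-format=json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)
    return report["jobs"][0]["read"]["iops"]   # aggregate read IOPS of the single job
```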

    Evaluation of type-1 hypervisors on desktop-class virtualization hosts

    System virtualization has become a fundamental IT tool, whether it is type-2/hosted virtualization, mostly exploited by end-users in their personal computers, or type-1/bare metal, well established in IT departments and thoroughly used in modern datacenters as the very foundation of cloud computing. Though bare metal virtualization is meant to be deployed on server-grade hardware (for performance, stability and reliability reasons), properly configured desktop-class systems or workstations are often used as virtualization servers, due to their attractive performance/cost ratio. This paper presents the results of a study conducted on commodity virtualization servers, aiming to assess the performance of a representative set of the type-1 platforms most in use today: VMware ESXi, Citrix XenServer, Microsoft Hyper-V, oVirt and Proxmox. Hypervisor performance is indirectly measured through synthetic benchmarks performed on Windows 10 LTSB and Linux Ubuntu Server 16.04 guests: PassMark for Windows, UnixBench for Linux, and the cross-platform Flexible I/O Tester and iPerf3 benchmarks. The evaluation results may be used to guide the choice of the best type-1 platform (performance-wise), depending on the predominant guest OS, the performance patterns (CPU-bound, IO-bound, or balanced) of that OS, its storage type (local/remote) and the required network-level performance.
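
    The added network-level measurement can be scripted in the same way; the sketch below assumes an iperf3 server is already listening on the target host, and the duration is an arbitrary example value.

```python
import json
import subprocess

# Example iPerf3 client run (assumes `iperf3 -s` is already running on `server`):
# returns the achieved sender throughput in Gbit/s from the JSON report.
def iperf3_gbps(server: str, seconds: int = 10) -> float:
    result = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    return report["end"]["sum_sent"]["bits_per_second"] / 1e9
```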

    Cheetah: An Economical Distributed RAM Drive

    Current hard drive technology shows a widening gap between the ability to store vast amounts of data and the ability to process that data. To overcome the problems of this secular trend, we explore the use of available distributed RAM resources to effectively replace a mechanical hard drive. The essential approach is a distributed Linux block device that spreads its blocks throughout spare RAM on a cluster and transfers blocks using network capacity. The presented solution is LAN-scalable, easy to deploy, and faster than a commodity hard drive. The specific driving problem is I/O-intensive applications, particularly digital forensics. The prototype implementation is a Linux 2.4 kernel module that connects to Unix-based clients. It features an adaptive prefetching scheme that fetches future data blocks on each read request. We present experimental results, based on generic benchmarks as well as digital forensic applications, that demonstrate significant performance gains over commodity hard drives.
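
    The block-distribution and prefetching idea can be sketched in user space as follows. This is a hypothetical illustration, not the kernel module itself; the node interface, modulo striping policy and prefetch depth are assumptions made for the example.

```python
# Hypothetical user-space sketch of a distributed RAM block device: blocks are
# striped across the spare RAM of cluster nodes, and each read also prefetches
# the next few blocks in anticipation of sequential access (the real system does
# this inside a Linux 2.4 kernel module, and would prefetch asynchronously).
class DistributedRamDisk:
    def __init__(self, nodes, prefetch_depth: int = 4):
        self.nodes = nodes                  # each node exposes get(block) / put(block, data)
        self.prefetch_depth = prefetch_depth
        self.cache = {}                     # locally cached (prefetched) blocks

    def _node_for(self, block: int):
        return self.nodes[block % len(self.nodes)]   # simple modulo striping

    def write_block(self, block: int, data: bytes) -> None:
        self._node_for(block).put(block, data)

    def read_block(self, block: int) -> bytes:
        data = self.cache.pop(block, None)
        if data is None:
            data = self._node_for(block).get(block)
        # Prefetch the following blocks so a sequential reader finds them locally.
        for b in range(block + 1, block + 1 + self.prefetch_depth):
            if b not in self.cache:
                self.cache[b] = self._node_for(b).get(b)
        return data
```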

    HyperSCSI: Design and development of a new protocol for storage networking

    Ph.D. thesis (Doctor of Philosophy)

    SDN Enabled Network Efficient Data Regeneration for Distributed Storage Systems

    Distributed Storage Systems (DSSs) have seen increasing levels of deployment in data centers and in cloud storage networks. DSSs provide efficient and cost-effective ways to store large amounts of data. To ensure reliability and resilience to failures, DSSs employ mirroring and coding schemes at the block and file level. While mirroring techniques provide an efficient way to recover lost data, they do not utilize disk space efficiently, resulting in large storage overheads. Coding techniques, on the other hand, provide a better way to recover data, as they reduce the amount of storage space required for data recovery purposes. However, the current recovery process for coded data is not efficient, due to the need to transfer large amounts of data to regenerate what was lost in a failure; this causes significant delays and excessive network traffic, creating a major performance bottleneck. In this thesis, we propose a new architecture for efficient data regeneration in distributed storage systems. A key idea of our architecture is to enable network switches to perform network coding operations, i.e., to combine the packets they receive over incoming links and forward the resulting packet towards the destination, in a principled manner. Another key element of our framework is a transport-layer reverse multicast protocol that takes advantage of network coding to minimize the rebuild time required to transmit the data, by allowing more efficient utilization of network bandwidth. The new architecture is supported using the principles of Software Defined Networking (SDN), making extensions where required. To enable the switches to perform network coding operations, we propose an extension of the packet-processing pipeline in the dataplane of a software switch. Our testbed experiments show that the proposed architecture results in modest performance gains.
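
    The coding operation delegated to the switches can be illustrated with a toy example: in the simplest linear code over GF(2), a switch XORs equal-length payloads arriving on its incoming links into one coded packet. The sketch below shows only this XOR case; the architecture itself is more general.

```python
# Toy illustration of in-network coding: a switch XORs the payloads received on
# its incoming links and forwards a single combined packet (GF(2) case only).
def xor_combine(payloads: list[bytes]) -> bytes:
    assert payloads and all(len(p) == len(payloads[0]) for p in payloads)
    coded = bytearray(payloads[0])
    for p in payloads[1:]:
        for i, byte in enumerate(p):
            coded[i] ^= byte
    return bytes(coded)

# A destination that already holds all but one of the original payloads can
# recover the missing one by XOR-ing the known payloads out of the coded packet.
def recover_missing(coded: bytes, known: list[bytes]) -> bytes:
    return xor_combine([coded, *known])
```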

    Fairness in a data center

    Existing data centers utilize several networking technologies in order to handle the performance requirements of different workloads. Maintaining diverse networking technologies increases complexity and is not cost effective. This results in the current trend to converge all traffic into a single networking fabric. Ethernet is both cost-effective and ubiquitous, and as such it has been chosen as the technology of choice for the converged fabric. However, traditional Ethernet does not satisfy the needs of all traffic workloads, for the most part, due to its lossy nature and, therefore, has to be enhanced to allow for full convergence. The resulting technology, Data Center Bridging (DCB), is a new set of standards defined by the IEEE to make Ethernet lossless even in the presence of congestion. As with any new networking technology, it is critical to analyze how the different protocols within DCB interact with each other as well as how each protocol interacts with existing technologies in other layers of the protocol stack. This dissertation presents two novel schemes that address critical issues in DCB networks: fairness with respect to packet lengths and fairness with respect to flow control and bandwidth utilization. The Deficit Round Robin with Adaptive Weight Control (DRR-AWC) algorithm actively monitors the incoming streams and adjusts the scheduling weights of the outbound port. The algorithm was implemented on a real DCB switch and shown to increase fairness for traffic consisting of mixed-length packets. Targeted Priority-based Flow Control (TPFC) provides a hop-by-hop flow control mechanism that restricts the flow of aggressor streams while allowing victim streams to continue unimpeded. Two variants of the targeting mechanism within TPFC are presented and their performance evaluated through simulation.
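
    For reference, plain Deficit Round Robin, the starting point of DRR-AWC, can be sketched as below. The quanta are fixed here, whereas DRR-AWC adapts them by monitoring the incoming packet-length mix; the quanta and packet sizes in the example are arbitrary illustration values.

```python
from collections import deque

# Minimal Deficit Round Robin sketch with static quanta; DRR-AWC additionally
# adapts the quanta based on the observed packet-length mix of incoming streams.
def drr_schedule(queues, quanta, rounds):
    """queues: deques of packet lengths (bytes); quanta: per-queue quantum (bytes)."""
    deficits = [0] * len(queues)
    sent = []                                   # (queue index, packet length) in send order
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0                 # empty queues do not accumulate credit
                continue
            deficits[i] += quanta[i]
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt
                sent.append((i, pkt))
    return sent

# Example: jumbo frames and small packets sharing one outbound port.
order = drr_schedule([deque([9000, 9000]), deque([64] * 20)], quanta=[1500, 1500], rounds=8)
```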

    Performance analysis of an iSCSI block device in virtualized environment

    Virtualization is new to the telecom domain, but it is already well established in the IT sector, where its benefits have been proven; this draws other sectors' attention towards it, and telecom organizations are now also focusing on virtualization to reap its full benefits. The main focus of this thesis is a performance analysis of a block storage device in a virtualized environment. Storage performance plays a vital role in the telecom sector: the performance and reliability of the storage device are important factors in fulfilling client requests with minimum latency. The thesis comprises three main parts. The first, a literature study, surveys the different storage networking possibilities and the storage protocols used to establish communication between servers and storage in a storage area network; this study indicated that the Internet Small Computer System Interface (iSCSI) has more advantages than the other approaches. The second part covers the design of a storage area network (SAN) solution in which storage is offered by an iSCSI storage server that exposes a block-level storage device to the compute server. The performance of the different iSCSI targets available on the market was compared, and the Linux-IO Target (LIO) was found to offer the best performance and reliability. The storage server was implemented as a virtual machine for better resource utilization, so the hypervisor was also studied and the different networking options for virtual machines were compared. The final part optimizes the SAN solution: multipathing, different caching options and the different driver options provided by the Kernel-based Virtual Machine (KVM) / Quick Emulator (QEMU) stack were considered for optimization.
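
    As an illustration of the optimization step, the sketch below starts a KVM/QEMU guest whose disk is an iSCSI LUN, parameterized by the QEMU disk cache mode so that different caching options can be compared. The portal address, target IQN and resource sizes are placeholders, not values from the thesis.

```python
import subprocess

# Hypothetical helper that boots a KVM guest from an iSCSI LUN while varying the
# QEMU disk cache mode, so the caching options discussed above can be compared.
# The portal address and target IQN below are placeholders.
def start_guest(cache_mode: str = "none") -> subprocess.Popen:
    drive = (
        "file=iscsi://192.0.2.10/iqn.2003-01.org.linux-iscsi.storage:disk0/0,"
        f"format=raw,if=virtio,cache={cache_mode}"
    )
    cmd = [
        "qemu-system-x86_64", "-enable-kvm",
        "-m", "2048", "-smp", "2",
        "-drive", drive,
    ]
    return subprocess.Popen(cmd)

# e.g. compare guests started with cache_mode "none", "writethrough" and "writeback"
```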

    U-LITE: 6 years of scientific computing at LNGS

    The computing infrastructure of the Laboratori Nazionali del Gran Sasso (LNGS) is the primary platform for data storage, analysis, computing and simulation of the LNGS-based experiments, which are part of the research activities of the Istituto Nazionale di Fisica Nucleare (INFN). The groups running these experiments have diverse needs and adopt different approaches in developing the computing frameworks that support their activities. Since the emergence of the Cloud paradigm, the Computing and Network Service has built on its experience in operating and managing the LNGS computing infrastructure to develop U-LITE, a versatile environment capable of hosting such a varied ecosystem while providing LNGS scientific users with a familiar computing interface that hides the complexities of modern data center management. Over the last six years U-LITE has proved to be a valuable tool for the LNGS experiments, and it provides an example of effective use of the Cloud computing approach in a real scientific context.