
    Design and implementation of an electro-optical backplane with pluggable in-plane connectors

    The design, implementation and characterisation of an electro-optical backplane and an active pluggable in-plane optical connector technology are presented. The connection architecture adopted allows line cards to be mated to and unmated from a passive electro-optical backplane with embedded polymeric waveguides. The active connectors incorporate a photonics interface operating at 850 nm and a mechanism to passively align the interface to the optical waveguides embedded in the backplane. A demonstration platform has been constructed to assess the viability of embedded electro-optical backplane technology in dense data storage systems. The demonstration platform includes four switch cards, which connect both optically and electronically to the electro-optical backplane in a chassis. These switch cards are controlled by a single board computer across a Compact PCI bus on the backplane. The electro-optical backplane comprises copper layers for power and low-speed bus communication and one polymeric optical layer, wherein waveguides have been patterned by a direct laser writing scheme. The optical waveguide design includes densely arrayed multimode waveguides with a centre-to-centre pitch of 250 µm between adjacent channels, multiple cascaded waveguide bends, non-orthogonal crossovers and in-plane connector interfaces. In addition, a novel passive alignment method has been employed to simplify high-precision assembly of the optical receptacles on the backplane. The in-plane connector interface is based on a two-lens free-space coupling solution, which reduces susceptibility to contamination. Successful transfer of 10.3 Gb/s data along multiple waveguides in the electro-optical backplane has been demonstrated and characterised.

    nbodykit: an open-source, massively parallel toolkit for large-scale structure

    We present nbodykit, an open-source, massively parallel Python toolkit for analyzing large-scale structure (LSS) data. Using Python bindings of the Message Passing Interface (MPI), we provide parallel implementations of many commonly used algorithms in LSS. nbodykit is both an interactive and scalable piece of scientific software, performing well in a supercomputing environment while still taking advantage of the interactive tools provided by the Python ecosystem. Existing functionality includes estimators of the power spectrum, 2- and 3-point correlation functions, a Friends-of-Friends grouping algorithm, mock catalog creation via the halo occupation distribution technique, and approximate N-body simulations via the FastPM scheme. The package also provides a set of distributed data containers, insulated from the algorithms themselves, that enable nbodykit to provide a unified treatment of both simulation and observational data sets. nbodykit can be easily deployed in a high-performance computing environment, overcoming some of the traditional difficulties of using Python on supercomputers. We provide performance benchmarks illustrating the scalability of the software. The modular, component-based approach of nbodykit allows researchers to easily build complex applications using its tools. The package is extensively documented at http://nbodykit.readthedocs.io, which also includes an interactive set of example recipes for new users to explore. As open-source software, we hope nbodykit provides a common framework for the community to use and develop in confronting the analysis challenges of future LSS surveys. Comment: 18 pages, 7 figures. Feedback very welcome. Code available at https://github.com/bccp/nbodykit and, for documentation, see http://nbodykit.readthedocs.io.
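    As a flavour of how the toolkit is used, here is a minimal sketch of estimating a 1-d power spectrum from a mock catalog, following the pattern in the nbodykit documentation (http://nbodykit.readthedocs.io). The cosmology, catalog parameters and binning below are illustrative choices rather than values from the paper, and the exact keyword names should be checked against the installed version.

```python
# Minimal nbodykit sketch: generate a log-normal mock catalog and measure P(k).
# All numeric parameters here are illustrative, not taken from the paper.
from nbodykit.lab import cosmology, LogNormalCatalog, FFTPower

# Linear power spectrum used to seed the mock catalog.
cosmo = cosmology.Planck15
Plin = cosmology.LinearPower(cosmo, redshift=0.55, transfer='EisensteinHu')

# Log-normal mock catalog; the particles are distributed across MPI ranks.
cat = LogNormalCatalog(Plin=Plin, nbar=3e-4, BoxSize=1380.0,
                       Nmesh=256, bias=2.0, seed=42)

# FFT-based power spectrum estimator (the catalog is painted to a mesh internally).
result = FFTPower(cat, mode='1d', Nmesh=256, dk=0.005, kmin=0.01)
Pk = result.power            # binned statistic with 'k', 'power', 'modes'

print(Pk['k'][:5], Pk['power'][:5].real)
```

    Run under MPI (e.g. mpirun -n 16 python measure_pk.py), the same script scales out unchanged; this is the interactive-yet-scalable usage pattern the abstract describes.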

    Measuring the Impact of Spectre and Meltdown

    The Spectre and Meltdown flaws in modern microprocessors represent a new class of attacks that have been difficult to mitigate. The mitigations that have been proposed have known performance impacts. The reported magnitude of these impacts varies depending on the industry sector and expected workload characteristics. In this paper, we measure the performance impact on several workloads relevant to HPC systems. We show that the impact can be significant on both synthetic and realistic workloads. We also show that the performance penalties are difficult to avoid even in dedicated systems where security is a lesser concern.
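    The mitigation cost shows up mainly on user/kernel transitions (e.g. KPTI on Meltdown-affected CPUs), so a common synthetic probe is simply timing a tight loop of cheap system calls with mitigations on and off. The sketch below is such a probe, written for illustration; it is not the benchmark suite used in the paper.

```python
# Illustrative syscall microbenchmark (not the paper's benchmarks): page-table
# isolation and related mitigations add overhead to every user/kernel crossing,
# so the average cost of a cheap, cache-hot system call is a simple proxy.
import os
import time

def ns_per_syscall(n=1_000_000):
    """Average nanoseconds per os.stat() call (one system call each)."""
    start = time.perf_counter_ns()
    for _ in range(n):
        os.stat('/')
    return (time.perf_counter_ns() - start) / n

if __name__ == '__main__':
    # Compare the figure on the same machine booted with and without
    # mitigations (e.g. the Linux 'mitigations=off' boot parameter).
    print(f"~{ns_per_syscall():.0f} ns per stat() call")
```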

    CRAID: Online RAID upgrades using dynamic hot data reorganization

    Current algorithms used to upgrade RAID arrays typically require large amounts of data to be migrated, even those that move only the minimum amount of data required to keep a balanced data load. This paper presents CRAID, a self-optimizing RAID array that performs an online block reorganization of frequently used, long-term accessed data in order to reduce this migration even further. To achieve this objective, CRAID tracks frequently used, long-term data blocks and copies them to a dedicated partition spread across all the disks in the array. When new disks are added, CRAID only needs to extend this process to the new devices to redistribute this partition, thus greatly reducing the overhead of the upgrade process. In addition, the reorganized access patterns within this partition improve the array’s performance, amortizing the copy overhead and allowing CRAID to offer performance competitive with traditional RAIDs. We describe CRAID’s motivation and design and evaluate it by replaying seven real-world workloads, including a file server, a web server and a user share. Our experiments show that CRAID can successfully detect hot data variations and begin using new disks as soon as they are added to the array. Moreover, the use of a dedicated partition improves the sequentiality of relevant data accesses, which amortizes the cost of reorganizations. Finally, we show that a full-HDD CRAID array with a small distributed partition (<1.28% per disk) can compete in performance with an ideally restriped RAID-5 and a hybrid RAID-5 with a small SSD cache.
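    The core mechanism is easiest to see in miniature: track per-block access frequency, keep the hottest blocks in a small dedicated region striped over every disk currently in the array, and restripe only that region when a disk is added. The toy sketch below illustrates that idea only; it is not CRAID's actual data structures or placement policy.

```python
# Toy illustration of the CRAID idea (not the paper's implementation):
# hot blocks live in a small dedicated region striped over all current disks,
# so adding a disk only requires redistributing that region.
from collections import Counter

class HotRegion:
    def __init__(self, num_disks, capacity_blocks):
        self.main_disks = num_disks      # main-array layout stays untouched
        self.hot_disks = num_disks       # hot region spans all current disks
        self.capacity = capacity_blocks  # size of the dedicated partition
        self.freq = Counter()            # block -> access count
        self.hot = set()                 # blocks currently in the hot region

    def access(self, block):
        """Record an access and refresh the set of hottest blocks."""
        self.freq[block] += 1
        self.hot = {b for b, _ in self.freq.most_common(self.capacity)}

    def placement(self, block):
        """Map a block to (region, disk) using simple modulo striping."""
        if block in self.hot:
            return ('hot-region', block % self.hot_disks)
        return ('main-array', block % self.main_disks)

    def add_disk(self):
        # Only the small dedicated partition is restriped over the new disk.
        self.hot_disks += 1

# After add_disk(), only hot-region placements change.
r = HotRegion(num_disks=4, capacity_blocks=2)
for b in (7, 7, 7, 3, 3, 9):
    r.access(b)
r.add_disk()
print(r.placement(7), r.placement(9))   # hot block 7 vs. cold block 9
```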

    Performance Considerations for Gigabyte per Second Transcontinental Disk-to-Disk File Transfers

    Moving data from CERN to Pasadena at a gigabyte per second using the next-generation Internet requires good networking and good disk IO. Ten Gbps Ethernet and OC192 links are in place, so now it is simply a matter of programming. This report describes our preliminary work and measurements in configuring the disk subsystem for this effort. Using 24 SATA disks at each endpoint, we are able to locally read and write an NTFS volume striped across 24 disks at 1.2 GBps. A 32-disk stripe delivers 1.7 GBps. Experiments on higher-performance and higher-capacity systems deliver up to 3.5 GBps.
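    A quick back-of-the-envelope check of those figures (assuming throughput scales roughly linearly with stripe width) gives the per-disk rate each stripe must sustain:

```python
# Per-disk throughput implied by the aggregate rates quoted in the abstract,
# assuming sequential striped I/O scales roughly linearly with disk count.
measurements_gbps = {24: 1.2, 32: 1.7}   # disks -> measured GB/s

for disks, rate in measurements_gbps.items():
    per_disk_mbs = rate * 1000 / disks
    print(f"{disks} disks at {rate} GB/s -> ~{per_disk_mbs:.0f} MB/s per disk")

# 24 disks -> ~50 MB/s per disk, 32 disks -> ~53 MB/s per disk: roughly the
# sequential rate of a single SATA disk of that era, so saturating a 1 GB/s
# link needs essentially the whole stripe at each endpoint.
```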

    Efficient memory management in VOD disk array servers using Per-Storage-Device buffering

    We present a buffering technique that reduces video-on-demand server memory requirements by more than one order of magnitude. This technique, Per-Storage-Device Buffering (PSDB), is based on the allocation of a fixed number of buffers per storage device, as opposed to existing solutions based on per-stream buffer allocation. The combination of this technique with disk array servers is studied in detail, as is the influence of variable bit rate (VBR) streams. We also present an interleaved data placement strategy, Constant Time Length Declustering, that results in optimal performance in the service of VBR streams. PSDB is evaluated by extensive simulation of a disk array server model that incorporates a simulation-based admission test. This research was supported in part by the National R&D Program of Spain, Project Number TIC97-0438.
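    The scaling argument behind the memory saving is simple: per-stream buffering grows with the number of clients, while PSDB grows only with the (much smaller) number of storage devices. The sketch below makes that concrete with illustrative numbers; the stream count, disk count and buffer sizes are assumptions chosen for the example, not values from the paper.

```python
# Buffer memory under per-stream vs. per-storage-device allocation.
# All numbers below are illustrative assumptions, not the paper's parameters.
streams            = 1000    # concurrent VoD clients
disks              = 16      # disks in the array
buffers_per_stream = 2       # classic double buffering per client
buffers_per_disk   = 4       # fixed buffer pool per storage device (PSDB)
buffer_mb          = 1.5     # size of one service-round buffer in MB

per_stream_mb = streams * buffers_per_stream * buffer_mb
psdb_mb       = disks * buffers_per_disk * buffer_mb

print(f"per-stream buffering: {per_stream_mb:7.0f} MB")
print(f"PSDB buffering      : {psdb_mb:7.0f} MB")
print(f"reduction           : {per_stream_mb / psdb_mb:7.1f}x")
# Memory no longer grows with the number of clients, which is where the
# order-of-magnitude reduction claimed in the abstract comes from.
```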

    Lemon: an MPI parallel I/O library for data encapsulation using LIME

    We introduce Lemon, an MPI parallel I/O library that is intended to allow for efficient parallel I/O of both binary data and metadata on massively parallel architectures. Motivated by the demands of the Lattice Quantum Chromodynamics community, the data is stored in the SciDAC Lattice QCD Interchange Message Encapsulation (LIME) format. This format allows for storing large blocks of binary data and corresponding metadata in the same file. Although designed for LQCD needs, the format might be useful for any application with this type of data profile. The design, implementation and application of Lemon are described. We conclude by presenting the excellent scaling properties of Lemon on state-of-the-art high-performance computers.
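    Lemon itself is a C library and its API is not reproduced here; the underlying technique, though, is collective MPI-IO in which every rank writes its own chunk of a single shared file at a computed offset. Below is a minimal mpi4py sketch of that pattern (file name and sizes are illustrative).

```python
# Sketch of collective MPI-IO, the technique Lemon builds on (this is mpi4py,
# not Lemon's C API). Each rank writes its slice of one shared binary file.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank owns a contiguous slice of the global data (size is illustrative).
local = np.full(1 << 20, rank, dtype=np.float64)

fh = MPI.File.Open(comm, 'lattice.bin',
                   MPI.MODE_WRONLY | MPI.MODE_CREATE)

# Collective write: offsets are derived from the rank so the chunks land
# back-to-back in rank order within the single shared file.
fh.Write_at_all(rank * local.nbytes, local)
fh.Close()
```

    Run with, e.g., mpiexec -n 8 python write_shared.py; Lemon applies the same kind of collective access to LIME-formatted records so the binary payload and its metadata end up in one file.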