145 research outputs found

    Fairness in a data center

    Existing data centers use several networking technologies to meet the performance requirements of different workloads. Maintaining diverse networking technologies increases complexity and is not cost effective, which has driven the current trend of converging all traffic onto a single networking fabric. Ethernet is both cost-effective and ubiquitous, and as such it has been chosen as the technology of choice for the converged fabric. However, traditional Ethernet does not satisfy the needs of all traffic workloads, largely because of its lossy nature, and therefore has to be enhanced to allow for full convergence. The resulting technology, Data Center Bridging (DCB), is a new set of standards defined by the IEEE to make Ethernet lossless even in the presence of congestion. As with any new networking technology, it is critical to analyze how the different protocols within DCB interact with each other as well as how each protocol interacts with existing technologies in other layers of the protocol stack. This dissertation presents two novel schemes that address critical issues in DCB networks: fairness with respect to packet lengths and fairness with respect to flow control and bandwidth utilization. The Deficit Round Robin with Adaptive Weight Control (DRR-AWC) algorithm actively monitors the incoming streams and adjusts the scheduling weights of the outbound port. The algorithm was implemented on a real DCB switch and shown to increase fairness for traffic consisting of mixed-length packets. Targeted Priority-based Flow Control (TPFC) provides a hop-by-hop flow control mechanism that restricts the flow of aggressor streams while allowing victim streams to continue unimpeded. Two variants of the targeting mechanism within TPFC are presented and their performance is evaluated through simulation.
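
    The DRR-AWC mechanism is described only at a high level in the abstract. As a purely illustrative aside, the sketch below shows how a deficit-round-robin scheduler with a simple adaptive weight adjustment might look; the per-stream target shares, the byte quantum, and the weight-update rule are assumptions for illustration, not the author's implementation.

```python
from collections import deque

# Minimal sketch (not the dissertation's implementation) of Deficit Round Robin
# with a simple adaptive weight adjustment. Each stream has a FIFO queue of
# packet lengths and a target share of the outbound bandwidth.

class DRRAWCSketch:
    def __init__(self, streams, base_quantum=1500):
        # streams: dict of stream name -> target share (fractions summing to 1.0)
        self.queues = {s: deque() for s in streams}
        self.target = dict(streams)
        self.weight = {s: 1.0 for s in streams}   # adaptively adjusted weights
        self.deficit = {s: 0 for s in streams}
        self.sent_bytes = {s: 0 for s in streams}
        self.base_quantum = base_quantum

    def enqueue(self, stream, pkt_len):
        self.queues[stream].append(pkt_len)

    def adapt_weights(self):
        # Raise the weight of streams that received less than their target
        # share of the output bytes, lower it for streams that got more.
        total = sum(self.sent_bytes.values()) or 1
        for s in self.queues:
            observed = self.sent_bytes[s] / total
            error = self.target[s] - observed
            self.weight[s] = max(0.1, self.weight[s] + 0.5 * error)

    def round(self):
        # One DRR round over all backlogged queues; returns (stream, pkt_len) pairs sent.
        sent = []
        for s, q in self.queues.items():
            if not q:
                continue
            self.deficit[s] += int(self.base_quantum * self.weight[s])
            while q and q[0] <= self.deficit[s]:
                pkt = q.popleft()
                self.deficit[s] -= pkt
                self.sent_bytes[s] += pkt
                sent.append((s, pkt))
            if not q:
                self.deficit[s] = 0   # standard DRR: reset deficit when a queue empties
        self.adapt_weights()
        return sent
```

    A scheduler along these lines tends to equalize byte-level throughput even when some streams send only short packets, which is the kind of packet-length fairness the abstract refers to.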

    Standards in the data storage industry: emergence, sustainability, and the battle for platform leadership

    Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, 2005. Includes bibliographical references (p. 127-130). In order to cope with the continuous increase of magnetically stored data and of mission-critical dependence on this data, storage systems must similarly increase their functionality offerings, and with that, their complexity. The efficient management of the heterogeneous and complex aggregation of these systems is becoming one of the major challenges for IT customers. At the same time, hardware is becoming commoditized, and the industry is looking towards software for additional revenue generation. This document examines proprietary as well as open-standards attempts at solving the interoperability problem. The first attempt was made by EMC when it developed WideSky, a middleware software layer that would be able to manage third-party hardware. It is shown that the aim was to eventually transform this middleware into a de facto standard and thereby establish platform leadership in the industry. The WideSky effort failed, and the analysis of this failure attributes it to a lack of industry support and an inability to establish a sustainable value chain. Meanwhile, the industry players rallied around the SNIA body and adopted the SMI specification (SMI-S) as a standard. SMI-S adoption is on the rise, but although it has the formal backing of most storage industry firms, it has not yet fulfilled its promise of enabling centralized management of heterogeneous systems. This is partly because the functionality it provides still lags behind that of native APIs. Moreover, client adoption and the availability of client products that can be directly used by IT customers are still very limited. However, an examination of the dynamics surrounding this standard shows how SMI-S will benefit greatly from learning effects and network externalities as it continues to grow, and although it lags in traditional functionality, it offers an ancillary functionality, interoperability, that is missing from current non-standardized software interfaces. The adoption tipping point is highly dependent on whether or not the value chain can be established before vendors start dropping support for the specification. It is proposed that a positive tipping of the market will make SMI-S a disruptive technology that has the potential of becoming the dominant design for storage management interfaces. By Jean-Claude Jacques Saghbini. S.M.

    Assessing the Utility of a Personal Desktop Cluster

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated compute cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren: a shared datacenter resource that resides in a machine room. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and the cluster supercomputer, provide the motivation for a personal desktop cluster workstation: a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 “pizza box” workstation. In this paper, we present the hardware and software architecture of such a solution as well as its prowess as a development platform for parallel codes. In short, imagine a 12-node personal desktop cluster that achieves 14 Gflops on Linpack but sips only 150-180 watts of power, resulting in a performance-power ratio that is over 300% better than our test SMP platform.
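
    Taking the quoted figures at face value, the cluster delivers roughly 78-93 Mflops per watt. The snippet below is only a back-of-the-envelope check of that ratio; the 300% comparison against the SMP platform cannot be reproduced here because the SMP's figures are not given in the abstract.

```python
# Back-of-the-envelope check of the quoted figures (14 Gflops at 150-180 W).
# The paper's SMP comparison point is not given here, so only the cluster's
# performance-power ratio is computed.
linpack_gflops = 14.0
power_watts = (150.0, 180.0)

for w in power_watts:
    mflops_per_watt = linpack_gflops * 1000 / w
    print(f"{w:.0f} W -> {mflops_per_watt:.0f} Mflops/W")
# Roughly 78-93 Mflops per watt for the 12-node desktop cluster.
```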

    Digital watermarking for compact discs and their effect on the error correction system

    A new technique, based on current compact disc technology, to image the transparent surface of a compact disc, or additionally the reflective information layer, has been designed, implemented and evaluated. This technique (the image capture technique) has been tested and successfully applied to the detection of mechanically introduced compact disc watermarks and biometric information with a resolution of 1.6 µm x 14 µm. Software has been written which, when used with the image capture technique, recognises a compact disc based on its error distribution. The software detects digital watermarks which cause either laser signal distortions or decoding error events. Watermarks serve as secure media identifiers. The complete channel coding of a Compact Disc Audio system, including EFM modulation, error correction and interleaving, has been implemented in software. The performance of the error correction system of the compact disc has been assessed using this simulation model. An embedded data channel holding watermark data has been investigated. The covert channel is implemented by means of the error-correction ability of the Compact Disc system and was realised by the aforementioned techniques, such as engraving the reflective layer or the polycarbonate substrate layer. Computer simulations show that watermarking schemes composed of regularly distributed single errors impose a minimum effect on the error correction system. Error rates increase by a factor of ten if regular single-symbol errors per frame are introduced; all other patterns increase the overall error rates further. Results show that background signal noise has to be reduced by 60% to account for the additional burden of this optimal watermark pattern. Two decoding strategies, usually employed in modern CD decoders, have been examined. The simulations take into account emulated bursty background noise as it appears on user-handled discs. Variations in output error rates, depending on the decoder and the type of background noise, became apparent. At low error rates (r < 0.003) the output symbol error rate for a bursty background differs by 20% depending on the decoder. The difference between a typical burst error distribution caused by user handling and a non-burst error distribution was found to be approximately 1% with the higher-performing decoder. Simulation results show that the drop in error-correction rates due to the presence of a watermark pattern depends quantitatively on the characteristic type of the background noise. A four times smaller change to the overall error rate was observed when adding a regular watermark pattern to characteristic background noise caused by user handling than when adding it to a non-bursty background.
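
    The thesis models the full CD channel coding chain; a much simpler capability model already illustrates why regularly spaced single-symbol errors are the gentlest watermark pattern: a frame whose decoder corrects up to T symbol errors fails only when watermark and background errors pile up in the same frame. The sketch below is a hypothetical toy, not the thesis simulator; the frame size, correction capability, and noise model are assumed values.

```python
import random

# Toy capability model (not the thesis' CIRC simulator): frames of 32 symbols
# whose decoder corrects up to T symbol errors per frame. We compare a watermark
# that adds one regularly spaced symbol error per frame against one that adds a
# burst of errors to every tenth frame, on top of random background symbol noise.

FRAME_SYMBOLS = 32        # assumed frame size
T = 2                     # assumed per-frame correction capability
FRAMES = 50_000
BACKGROUND_RATE = 0.003   # assumed per-symbol background error probability

def uncorrectable_fraction(extra_errors_for_frame):
    bad = 0
    for i in range(FRAMES):
        background = sum(random.random() < BACKGROUND_RATE for _ in range(FRAME_SYMBOLS))
        if background + extra_errors_for_frame(i) > T:
            bad += 1
    return bad / FRAMES

baseline = uncorrectable_fraction(lambda i: 0)
regular  = uncorrectable_fraction(lambda i: 1)                          # one error per frame, spread out
bursty   = uncorrectable_fraction(lambda i: 10 if i % 10 == 0 else 0)   # same total errors, bunched

print(f"no watermark      : {baseline:.4%} uncorrectable frames")
print(f"regular watermark : {regular:.4%}")
print(f"bursty watermark  : {bursty:.4%}")
```

    Spreading the watermark errors keeps each frame within the decoder's correction budget, whereas bunching the same number of errors overwhelms individual frames, which mirrors the abstract's finding that regular single-symbol patterns disturb the error correction system least.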

    Portable Computer Technology (PCT) Research and Development Program Phase 2

    This project report focuses on: (1) the design and development of two Advanced Portable Workstation 2 (APW 2) units, which incorporate advanced technology features such as a low-power Pentium processor, a high-resolution color display, National Television System Committee (NTSC) video handling capabilities, a Personal Computer Memory Card International Association (PCMCIA) interface, and Small Computer System Interface (SCSI) and Ethernet interfaces; (2) the use of these units to integrate and demonstrate advanced wireless network and portable video capabilities; and (3) the qualification of the APW 2 systems for use in specific experiments aboard the Mir Space Station. A major objective of the PCT Phase 2 program was to help guide future choices in computing platforms and techniques for meeting National Aeronautics and Space Administration (NASA) mission objectives, with a focus on the development of optimal configurations of computing hardware, software applications, and network technologies for use on NASA missions.

    Formal methods for functional verification of cache-coherent systems-on-chip

    State-of-the-art System-on-Chip (SoC) architectures integrate many different components, such as processors, accelerators, memories, and I/O blocks; some of those components, but not all, may have caches. Because the effort of validation with the simulation-based techniques currently used in industry grows exponentially with the complexity of the SoC, this thesis investigates the use of formal verification techniques in this context. More precisely, we use the CADP toolbox to develop and validate a generic formal model of a heterogeneous cache-coherent SoC compliant with the recent AMBA 4 ACE specification proposed by ARM. We use a constraint-oriented specification style to model the general requirements of the specification. We verify system properties on both the constrained and the unconstrained models to detect cache-coherency corner cases. We take advantage of the parametrization of the proposed model to produce a comprehensive set of counterexamples for properties that are not satisfied in the unconstrained model. The results of formal verification are then used to improve the industrial simulation-based verification techniques in two respects. On the one hand, we suggest using the formal model to assess the sanity of an interface verification unit. On the other hand, in order to generate clever semi-directed test cases from temporal-logic properties, we propose a two-step approach: the first step generates system-level abstract test cases using the model-based testing tools of the CADP toolbox; the second step refines those tests into interface-level concrete test cases that can be executed at RTL level with a commercial Coverage-Directed Test Generation tool. We found that our approach helps in the transition between interface-level and system-level verification, facilitates the validation of system-level properties, and enables early detection of bugs in both the SoC and the commercial test bench.
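
    The actual model is a full formal specification of the ACE protocol for the CADP toolbox and is not reproduced in the abstract. As a purely illustrative aside, the sketch below conveys the general idea of checking a coherence invariant by explicit-state exploration on a deliberately tiny, hypothetical MSI-style model (two caches, one line); the states, transitions, and invariant are assumptions, not the thesis' model.

```python
from collections import deque

# Illustrative only: a tiny MSI-style model with two caches sharing one line.
# Each cache is in state I (Invalid), S (Shared), or M (Modified). We explore
# all reachable states and check the single-writer invariant: never two caches
# in M, and never M together with S.

def successors(state):
    # state: tuple of per-cache states, e.g. ('I', 'S')
    for i, s in enumerate(state):
        if s in ('I', 'S'):
            # local write: this cache goes to M, all others are invalidated
            yield tuple('M' if j == i else 'I' for j in range(len(state)))
            # local read: this cache goes to S, an M elsewhere is downgraded to S
            yield tuple('S' if j == i else ('S' if state[j] == 'M' else state[j])
                        for j in range(len(state)))
        # eviction: drop the line from this cache
        yield tuple('I' if j == i else state[j] for j in range(len(state)))

def coherent(state):
    return not (state.count('M') > 1 or ('M' in state and 'S' in state))

def check(initial=('I', 'I')):
    # Breadth-first exploration of the reachable state space.
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        if not coherent(s):
            return s  # counterexample state
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None

print("counterexample:", check())  # expected: None for this toy protocol
```

    CADP works on a full process-algebraic model and checks temporal-logic properties with its model checkers rather than a hand-written invariant; the toy above only conveys the reachability-plus-invariant idea behind detecting coherence corner cases.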

    Middeck Active Control Experiment (MACE), phase A

    A rationale was derived to determine which structural experiments are sufficient to verify the design of structures employing Controlled Structures Technology. A survey of proposed NASA missions was undertaken to identify candidate test articles for use in the Middeck Active Control Experiment (MACE). The survey revealed that potential test articles could be classified into one of three roles: development, demonstration, and qualification, depending on the maturity of the technology and the mission the structure must fulfill. A set of criteria was derived that allowed determination of which role a potential test article must fulfill. A review of the capabilities and limitations of the STS middeck was conducted. A reference design for the MACE test article was presented. Computing requirements for running typical closed-loop controllers were determined, and various computer configurations were studied. The various components required to manufacture the structure were identified. A management plan was established for the remainder of the program: experiment development, flight and ground systems development, and integration to the carrier. Procedures for configuration control, fiscal control, and safety, reliability, and quality assurance were developed.

    Digital document imaging systems: An overview and guide

    This document aids NASA managers in planning the selection of a Digital Document Imaging System (DDIS) as a possible solution for document information processing and storage. Intended to serve as a manager's guide, it contains basic information on digital imaging systems, technology, equipment standards, issues of interoperability and interconnectivity, and issues related to selecting appropriate imaging equipment based upon well-defined needs.

    HyperSCSI: Design and development of a new protocol for storage networking

    Ph.D. (Doctor of Philosophy) thesis.