11,455 research outputs found

    Soundings: the Newsletter of the Monterey Bay Chapter of the American Cetacean Society. 2006

    Get PDF
    (PDF contains 88 pages.)

    Soundings: the Newsletter of the Monterey Bay Chapter of the American Cetacean Society. 2004

    Get PDF
    (PDF contains 92 pages.)

    Hardware-Assisted Dependable Systems

    Get PDF
    Unpredictable hardware faults and software bugs lead to application crashes, incorrect computations, unavailability of internet services, data losses, malfunctioning components, and, consequently, financial losses or even loss of human life. In particular, faults in microprocessors (CPUs) and memory corruption bugs are among the major unresolved issues of today. CPU faults may result in benign crashes and, more problematically, in silent data corruptions that can lead to catastrophic consequences, silently propagating from component to component and finally shutting down the whole system. Similarly, memory corruption bugs (memory-safety vulnerabilities) may result in a benign application crash but may also be exploited by a malicious hacker to gain control over the system or leak confidential data. Both classes of errors are notoriously hard to detect and tolerate. The usual mitigation strategy is to apply ad-hoc local patches: checksums to protect specific computations against hardware faults and bug fixes to protect programs against known vulnerabilities. This strategy is unsatisfactory since it is prone to errors, requires significant manual effort, and protects only against anticipated faults. At the other extreme, Byzantine Fault Tolerance solutions defend against all kinds of hardware and software errors, but are prohibitively expensive in terms of resources and performance overhead. In this thesis, we examine and propose five techniques to protect against hardware CPU faults and software memory-corruption bugs. All these techniques are hardware-assisted: they use recent advancements in CPU designs and modern CPU extensions. Three of these techniques target hardware CPU faults and rely on specific CPU features: ∆-encoding efficiently utilizes instruction-level parallelism of modern CPUs, Elzar re-purposes Intel AVX extensions, and HAFT builds on Intel TSX instructions. The remaining two target software bugs: SGXBounds detects vulnerabilities inside Intel SGX enclaves, and “MPX Explained” analyzes the recent Intel MPX extension to protect against buffer overflow bugs. Our techniques achieve three goals: transparency, practicality, and efficiency. All our systems are implemented as compiler passes that transparently harden unmodified applications against hardware faults and software bugs. They are practical since they rely on commodity CPUs and require no specialized hardware or operating system support. Finally, they are efficient because they use hardware assistance in the form of CPU extensions to lower the performance overhead.
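    The instruction-level redundancy that techniques such as ∆-encoding and HAFT build on can be pictured with a minimal sketch: execute the computation twice and compare the two copies before the result leaves the protected region, so a transient CPU fault that corrupts only one copy is detected instead of silently propagating. The code below illustrates only this general pattern under simplified, assumed conditions; it is not the thesis implementation, which works as a compiler pass and uses CPU extensions to keep the overhead low.

```c
/* Minimal sketch of duplication-with-comparison for detecting transient CPU
 * faults (illustration only, not the Delta-encoding/HAFT compiler passes). */
#include <stdio.h>
#include <stdlib.h>

static long checked_sum(const long *a, size_t n)
{
    /* Two shadow copies of the computation; a fault hitting only one of the
     * instruction streams makes them diverge.  A real hardening pass keeps
     * the copies in disjoint registers so the optimizer cannot merge them. */
    long sum1 = 0, sum2 = 0;

    for (size_t i = 0; i < n; i++) {
        sum1 += a[i];
        sum2 += a[i];
    }

    /* Compare before the value escapes the protected region. */
    if (sum1 != sum2) {
        fprintf(stderr, "silent data corruption detected\n");
        abort();
    }
    return sum1;
}

int main(void)
{
    long data[] = { 1, 2, 3, 4, 5 };
    printf("%ld\n", checked_sum(data, 5)); /* prints 15 */
    return 0;
}
```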

    Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    Get PDF
    This report describes the Army Fault Tolerant Architecture (AFTA) hardware architecture and components as well as its operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common-mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using the VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.
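    To give a flavor of the analytical reliability models mentioned above, the sketch below evaluates the textbook reliability of a triple-redundant, majority-voted channel, R_TMR = 3R^2 - 2R^3 with R = e^(-lambda*t). The failure rate and the triplex configuration are illustrative assumptions for this sketch, not figures or design details taken from the AFTA report.

```c
/* Back-of-the-envelope reliability of a triplex majority-voted channel,
 * assuming exponentially distributed lane failures and a perfect voter.
 * Illustration of the kind of analytical model discussed, not AFTA's models. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double lambda = 1e-4;              /* assumed lane failure rate, per hour */
    const double hours[] = { 1.0, 10.0, 100.0 };

    for (int i = 0; i < 3; i++) {
        double r = exp(-lambda * hours[i]);       /* single-lane reliability R(t)  */
        double r_tmr = 3 * r * r - 2 * r * r * r; /* survives if 2 of 3 lanes work */
        printf("t = %6.1f h: simplex %.8f, triplex %.8f\n", hours[i], r, r_tmr);
    }
    return 0;   /* compile with -lm */
}
```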

    Spartan Daily, March 14, 1978

    Get PDF
    Volume 70, Issue 29

    Development of a Coupled Neutronics/Thermal-Hydraulics/Fuel Thermo-Mechanics Multiphysics Tool for Best-Estimate PWR Core Simulations

    Get PDF
    A detailed analysis of reactor core behavior must account for the mutual interaction of the core's neutronic, thermal-hydraulic, and thermo-mechanical properties. Over the past decade, coupled neutronics/thermal-hydraulics simulations have become a standard for computing the operational behavior of reactor cores. The influence of thermo-mechanical fuel behavior on the results of core simulations, however, has not yet been well studied and has only been analyzed in recent years. The conductance of the gap between fuel and cladding and the thermal conductivity of the fuel can only be modeled accurately by a thermo-mechanics code; these quantities vary over a wide range during the lifetime of the fuel rods in the reactor. Moreover, these properties directly affect the calculation of fuel and coolant temperatures, so their correct prediction matters in a best-estimate simulation. This thesis describes the development of a multiphysics tool that couples a neutronics code, a thermal-hydraulics code, and a fuel thermo-mechanics code. The tool's ability to model the irradiation-dependent thermo-mechanical properties enables a more accurate description of the operational characteristics of a reactor core. The verification and validation work carried out to demonstrate the increased predictive accuracy and performance of the new multiphysics tool is presented. In particular, the analysis of reactivity-initiated accidents (RIA) for a full-core PWR is discussed. For this design-basis accident, safety criteria such as added fuel enthalpy must be demonstrated to satisfy regulatory requirements. Compared with the results of traditional methods, this investigation showed a significant impact of the new multiphysics tool on the prediction of safety-relevant parameters. It also demonstrates the importance of accounting for fuel thermo-mechanics in best-estimate simulations. Large local deviations in fuel centerline temperatures in a full-power simulation, a significant increase in the predicted power peak in a hot-zero-power RIA transient, and an increase in the predicted added fuel enthalpy are found when using the newly developed coupled code PARCS-SUBCHANFLOW-TRANSURANUS. In addition to the main topic of the thesis, a methodology for predicting local thermal-hydraulic safety parameters that exploits the neutronics/subchannel thermal-hydraulics coupling was implemented. The implementation was compared with a higher-order Monte Carlo/subchannel solution, showing a difference of less than 2% in the power prediction in most regions while requiring orders of magnitude less computation time at similar accuracy. This capability has the potential to be extended in the future by adding a coupled thermo-mechanical analysis at the subchannel level as well. The analyses performed, enabled by the coupled simulation of neutronics, thermal-hydraulics, and fuel-rod mechanics, have shown the importance of such a holistic approach: it allows the irradiation-dependent thermo-mechanical parameters to be taken into account in the core simulation. The newly developed tool, which performs coupled PARCS-SUBCHANFLOW-TRANSURANUS simulations, paves the way for future best-estimate analysis of nuclear reactor cores.
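    Couplings of this kind are often realized as an operator-split fixed-point (Picard) iteration that exchanges fields such as power, coolant temperature, and fuel temperature between the solvers until they stop changing. The sketch below shows only that generic exchange loop with made-up single-node stand-ins for the solvers; it is not the actual PARCS-SUBCHANFLOW-TRANSURANUS coupling, whose data exchange and physics are far richer.

```c
/* Generic sketch of a fixed-point (Picard) exchange loop between a neutronics,
 * a thermal-hydraulics and a fuel thermo-mechanics solver.  The "solvers" are
 * single-node stand-ins; values and units are purely illustrative. */
#include <math.h>
#include <stdio.h>

static double solve_neutronics(double fuel_temp)      { return 100.0 - 0.01 * fuel_temp; } /* power, %      */
static double solve_thermal_hydraulics(double power)  { return 550.0 + 0.5 * power; }      /* coolant T, K  */
static double solve_fuel_mechanics(double power, double coolant_temp)
{
    return coolant_temp + 8.0 * power;                 /* fuel centerline T, K */
}

int main(void)
{
    double fuel_temp = 900.0;   /* initial guess, K */
    double power = 0.0, coolant_temp = 0.0;

    for (int it = 0; it < 50; it++) {
        power        = solve_neutronics(fuel_temp);
        coolant_temp = solve_thermal_hydraulics(power);
        double updated = solve_fuel_mechanics(power, coolant_temp);

        if (fabs(updated - fuel_temp) < 1e-6) {   /* exchanged field has converged */
            fuel_temp = updated;
            printf("converged after %d iterations\n", it + 1);
            break;
        }
        fuel_temp = updated;
    }
    printf("power %.2f %%, coolant %.2f K, fuel %.2f K\n", power, coolant_temp, fuel_temp);
    return 0;
}
```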

    James Wilson: Presbyterian, Anglican, Thomist, or Deist?: Does it Matter? (Chapter 7 of The Founders on God and Government)

    Full text link
    Excerpt: James Wilson is buried in America's Westminster Abbey: Christ Church, Philadelphia. This Anglican church is only blocks away from the First Presbyterian Church in Philadelphia, where Wilson rented a pew until the end of his life. Some scholars report that Wilson joined the Anglican Communion in 1778, perhaps at the behest of one of his best friends, William White, the first Anglican bishop of Philadelphia. Others claim that he never abandoned the Presbyterianism of his native Scotland. Still others pay no attention to his denominational commitments, arguing that he was actually a Thomist or a deist. Finally, some scholars say nothing about his religious identification or beliefs, apparently concluding that these things are unrelated to his political and legal accomplishments.

    Architectural Principles for Database Systems on Storage-Class Memory

    Get PDF
    Database systems have long been optimized to hide the higher latency of storage media, yielding complex persistence mechanisms. With the advent of large DRAM capacities, it became possible to keep a full copy of the data in DRAM. Systems that leverage this possibility, such as main-memory databases, keep two copies of the data in two different formats: one in main memory and the other one in storage. The two copies are kept synchronized using snapshotting and logging. This main-memory-centric architecture yields nearly two orders of magnitude faster analytical processing than traditional, disk-centric ones. The rise of Big Data emphasized the importance of such systems with an ever-increasing need for more main memory. However, DRAM is hitting its scalability limits: It is intrinsically hard to further increase its density. Storage-Class Memory (SCM) is a group of novel memory technologies that promise to alleviate DRAM’s scalability limits. They combine the non-volatility, density, and economic characteristics of storage media with the byte-addressability and a latency close to that of DRAM. Therefore, SCM can serve as persistent main memory, thereby bridging the gap between main memory and storage. In this dissertation, we explore the impact of SCM as persistent main memory on database systems. Assuming a hybrid SCM-DRAM hardware architecture, we propose a novel software architecture for database systems that places primary data in SCM and directly operates on it, eliminating the need for explicit IO. This architecture yields many benefits: First, it obviates the need to reload data from storage to main memory during recovery, as data is discovered and accessed directly in SCM. Second, it allows replacing the traditional logging infrastructure by fine-grained, cheap micro-logging at data-structure level. Third, secondary data can be stored in DRAM and reconstructed during recovery. Fourth, system runtime information can be stored in SCM to improve recovery time. Finally, the system may retain and continue in-flight transactions in case of system failures. However, SCM is no panacea as it raises unprecedented programming challenges. Given its byte-addressability and low latency, processors can access, read, modify, and persist data in SCM using load/store instructions at a CPU cache line granularity. The path from CPU registers to SCM is long and mostly volatile, including store buffers and CPU caches, leaving the programmer with little control over when data is persisted. Therefore, there is a need to enforce the order and durability of SCM writes using persistence primitives, such as cache line flushing instructions. This in turn creates new failure scenarios, such as missing or misplaced persistence primitives. We devise several building blocks to overcome these challenges. First, we identify the programming challenges of SCM and present a sound programming model that solves them. Then, we tackle memory management, as the first required building block to build a database system, by designing a highly scalable SCM allocator, named PAllocator, that fulfills the versatile needs of database systems. Thereafter, we propose the FPTree, a highly scalable hybrid SCM-DRAM persistent B+-Tree that bridges the gap between the performance of transient and persistent B+-Trees. Using these building blocks, we realize our envisioned database architecture in SOFORT, a hybrid SCM-DRAM columnar transactional engine. 
We propose an SCM-optimized MVCC scheme that eliminates write-ahead logging from the critical path of transactions. Since SCM-resident data is near-instantly available upon recovery, the new recovery bottleneck is rebuilding DRAM-based data. To alleviate this bottleneck, we propose a novel recovery technique that achieves nearly instant responsiveness of the database by accepting queries right after recovering SCM-based data, while rebuilding DRAM-based data in the background. Additionally, SCM brings new failure scenarios that existing testing tools cannot detect. Hence, we propose an online testing framework that is able to automatically simulate power failures and detect missing or misplaced persistence primitives. Finally, our proposed building blocks can serve to build more complex systems, paving the way for future database systems on SCM.
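    The ordering problem described in this abstract, where stores reach SCM only after passing through volatile store buffers and CPU caches, is what the persistence primitives are for. The fragment below is a minimal sketch of the usual publish-then-commit pattern using the CLFLUSHOPT and SFENCE intrinsics; it is not SOFORT or PAllocator code, the record layout is a made-up example, and production code would typically rely on a persistent-memory library instead.

```c
/* Minimal sketch of ordering and persisting SCM stores with cache-line
 * flushes and a store fence (compile with -mclflushopt).  Illustration of
 * the persistence primitives discussed above, not the dissertation's code. */
#include <immintrin.h>
#include <stdint.h>

typedef struct {
    uint64_t value;
    uint64_t valid;    /* commit flag: readers trust value only if valid == 1 */
} pmem_record;

static void persist(void *addr)
{
    _mm_clflushopt(addr);   /* write the cache line containing addr back to memory */
    _mm_sfence();           /* complete the flush before any later store           */
}

void publish(pmem_record *rec, uint64_t v)
{
    rec->value = v;
    persist(&rec->value);   /* the payload must be durable first ...               */

    rec->valid = 1;
    persist(&rec->valid);   /* ... only then is the commit flag made durable       */
}
```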

    Designing Low Cost Error Correction Schemes for Improving Memory Reliability

    Get PDF
    Memory systems are becoming increasingly error-prone, and thus guaranteeing their reliability is a major challenge. In this dissertation, new techniques to improve the reliability of both 2D and 3D dynamic random access memory (DRAM) systems are presented. The proposed schemes have higher reliability than current systems, with lower power, better performance, and lower hardware cost. First, a low-overhead solution that improves the reliability of commodity DRAM systems with no change in the existing memory architecture is presented. Specifically, five erasure and error correction (E-ECC) schemes are proposed that provide at least Chipkill-Correct protection for x4 (Schemes 1, 2 and 3), x8 (Scheme 4) and x16 (Scheme 5) DRAM systems. All schemes have superior error correction performance due to the use of strong symbol-based codes. In addition, the use of erasure codes extends the lifetime of the 2D DRAM systems. Next, two error correction schemes are presented for 3D DRAM memory systems. The first scheme is a rate-adaptive, two-tiered error correction scheme (RATT-ECC) that provides strong reliability (10^10x reduction in raw FIT rate) for an HBM-like 3D DRAM system that services CPU applications. The rate-adaptive feature of RATT-ECC enables permanent bank failures to be handled through sparing. It can also be used to significantly reduce the refresh power consumption without decreasing the reliability and timing performance. The second scheme is a two-tiered error correction scheme (Config-ECC) that supports different-sized accesses in GPU applications with strong reliability. It addresses the mismatch between data access size and fixed-size ECC schemes by designing a flexible, product-code-based scheme. Config-ECC is built around a core unit designed for 32B accesses, with a simple extension to support 64B and 128B accesses. Compared to fixed 32B and 64B ECC schemes, Config-ECC reduces the failure in time (FIT) rate by 200x and 20x, respectively. It also reduces the memory energy by 17% (in the dynamic mode) and 21% (in the static mode) compared to a state-of-the-art fixed 64B ECC scheme.
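    For contrast with the symbol-based codes proposed above, the sketch below implements a conventional single-error-correcting Hamming(7,4) code, about the simplest form of memory ECC. It is a baseline for illustration only and not the E-ECC, RATT-ECC, or Config-ECC schemes from the dissertation.

```c
/* Single-error-correcting Hamming(7,4) encoder and corrector (illustrative
 * baseline only).  Code word layout, 1-based positions: p1 p2 d1 p4 d2 d3 d4. */
#include <stdio.h>

static void encode(const int d[4], int c[7])
{
    c[2] = d[0]; c[4] = d[1]; c[5] = d[2]; c[6] = d[3];
    c[0] = d[0] ^ d[1] ^ d[3];   /* p1 covers positions 1,3,5,7 */
    c[1] = d[0] ^ d[2] ^ d[3];   /* p2 covers positions 2,3,6,7 */
    c[3] = d[1] ^ d[2] ^ d[3];   /* p4 covers positions 4,5,6,7 */
}

/* Returns the 1-based position of a corrected single-bit error, 0 if clean. */
static int correct(int c[7])
{
    int s1 = c[0] ^ c[2] ^ c[4] ^ c[6];
    int s2 = c[1] ^ c[2] ^ c[5] ^ c[6];
    int s4 = c[3] ^ c[4] ^ c[5] ^ c[6];
    int syndrome = s1 + 2 * s2 + 4 * s4;   /* points at the flipped position */

    if (syndrome)
        c[syndrome - 1] ^= 1;              /* flip the faulty bit back */
    return syndrome;
}

int main(void)
{
    int data[4] = { 1, 0, 1, 1 };
    int code[7];

    encode(data, code);
    code[5] ^= 1;                          /* inject a single-bit fault */
    printf("corrected position %d, data %d%d%d%d\n",
           correct(code), code[2], code[4], code[5], code[6]);
    return 0;                              /* -> corrected position 6, data 1011 */
}
```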

    The Journeyers

    Get PDF
    On May 3, 1932, Minnie Zenkel’s Original Yiddish Puppet Theater, located in the heart of the Lower East Side’s “Yiddish Rialto,” burns down under mysterious circumstances. The police suspect arson but there are no persons of interest, and the theater’s namesake, a twenty-year-old female puppeteer, disappears just after the fire; some believe she has stolen the theater’s original scripts in an act of revenge. Eighty years later, the successor puppet theater once again finds itself without a home when it receives word that developers want to raze the theater, now in Tribeca, and construct a forty-foot hotel. Against this backdrop we meet Jorie Goldman, who has been laid off from her long-time associate position at a prestigious law firm and finds temporary employment with a land-use lawyer hired to stop the developers. Like the neighborhood she is fighting to save, Jorie struggles with her own questions of identity. Bisexual and single, Jorie hasn’t yet fully “come of age,” in part because of her tumultuous childhood; when Jorie was thirteen, her younger sister died of leukemia, prompting her parents’ divorce. In attempting to save the puppet theater from destruction, Jorie will be forced to confront her past and the related fears that prevent her from finding success in her career and a lasting love. Centered on the changing physical landscape of New York City and incorporating elements of puppetry, Broadway, Yiddish, and law, this is ultimately a journey of self-discovery. On this journey, Jorie will meet a cast of characters related to the future of 31 Desbrosses Street who also wrestle with self-identity. Susan Fiske, a Korean adoptee raised by white Connecticuters, is the chairwoman of the zoning board that has the ultimate say over the fate of the building, yet she also has an undisclosed personal interest in the outcome of the case. Biz Colton, the current owner of the building and a famous Broadway actor, bumps up against ghosts from his own past when he decides whether to sell his interest in the property. Finally, Jorie finds a love interest in Ella Leider, an academic and member of the puppet theater, who is searching for Minnie Zenkel’s lost scripts.