92 research outputs found

    Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    Described here is the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions
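
    As an aside not drawn from the report itself, the sketch below shows the flavor of closed-form reliability model such an analysis relies on, for a generic 2-out-of-3 voting channel; the failure rate and mission time are hypothetical placeholders, not AFTA figures.

```python
import math

# Illustrative only: a generic triple-modular-redundant (TMR) reliability model
# of the kind the report's analytical models generalize. The failure rate and
# mission time below are hypothetical placeholders, not AFTA parameters.

def tmr_reliability(failure_rate_per_hour: float, mission_hours: float) -> float:
    """System survives while at least 2 of 3 independent channels survive."""
    r = math.exp(-failure_rate_per_hour * mission_hours)  # single-channel reliability
    return 3 * r**2 - 2 * r**3                            # R^3 + 3R^2(1 - R)

simplex = math.exp(-1e-4 * 10.0)          # single channel, same assumptions
triplex = tmr_reliability(1e-4, 10.0)
print(f"simplex: {simplex:.6f}  triplex: {triplex:.6f}")
```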

    NASA Space Engineering Research Center Symposium on VLSI Design

    The NASA Space Engineering Research Center (SERC) is proud to offer, at its second symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories and the electronics industry. These featured speakers share insights into next generation advances that will serve as a basis for future VLSI design. Questions of reliability in the space environment along with new directions in CAD and design are addressed by the featured speakers

    Driving the Network-on-Chip Revolution to Remove the Interconnect Bottleneck in Nanoscale Multi-Processor Systems-on-Chip

    The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called SYSTEM-ON-CHIP (SoC) or MULTI-PROCESSOR SYSTEM-ON-CHIP (MP-SoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. NETWORKS-ON-CHIP (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
    • The design of the NoC architecture needs to strike the best tradeoff among performance, features, and the tight area and power constraints of the on-chip domain.
    • Simulation and verification infrastructure must be put in place to explore, validate, and optimize NoC performance.
    • NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
    • Given their global, distributed nature, it is especially important to evaluate the physical implementation of NoCs, assessing their suitability for next-generation designs and their area and power costs.
    This dissertation performs a design space exploration of network-on-chip architectures, in order to point out the trade-offs associated with the design of each individual network building block and with the design of the network topology overall. The design space exploration is preceded by a comparative analysis of state-of-the-art interconnect fabrics with one another and with early network-on-chip prototypes. The ultimate objective is to point out the key advantages that NoC realizations provide with respect to state-of-the-art communication infrastructures, and the challenges that lie ahead in order to make this new interconnect technology a reality. Among the latter, technology-related challenges are emerging that call for dedicated design techniques at all levels of the design hierarchy, in particular leakage power dissipation and the containment of process variations and their effects. The achievement of the above objectives was enabled by a NoC simulation environment for cycle-accurate modelling and simulation and by a back-end facility for the study of NoC physical implementation effects. Overall, all the results provided by this work have been validated on actual silicon layout.
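
    By way of illustration only (the parameter ranges, hop and wiring estimates, and cost weights below are assumptions, not the dissertation's cycle-accurate models), a design-space sweep of the kind described can be sketched as a small script that scores a handful of mesh configurations with a toy cost model:

```python
# Toy design-space sweep over 16-core mesh NoC configurations. Illustrative only.

def mesh_metrics(rows: int, cols: int, flit_bits: int):
    links = rows * (cols - 1) + cols * (rows - 1)   # links in a 2D mesh
    avg_hops = (rows + cols) / 3.0                  # rough mean Manhattan distance
    wiring = links * flit_bits                      # arbitrary wiring-cost unit
    return wiring, avg_hops

candidates = []
for rows, cols in [(2, 8), (4, 4), (8, 2)]:         # candidate 16-core meshes
    for flit_bits in (32, 64, 128):
        wiring, hops = mesh_metrics(rows, cols, flit_bits)
        cost = 0.5 * wiring / 1000 + 0.5 * hops      # arbitrary weighting of area vs. latency
        candidates.append((cost, rows, cols, flit_bits))

for cost, rows, cols, flit_bits in sorted(candidates)[:3]:
    print(f"{rows}x{cols} mesh, {flit_bits}-bit flits: cost {cost:.2f}")
```

    A real exploration would replace the toy cost model with cycle-accurate simulation and post-layout area and power figures, which is precisely the infrastructure the dissertation describes.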

    Idea into Practice: How Well Does U.S. Patent Law Implement Modern Innovation Theory, 12 J. Marshall Rev. Intell. Prop. L. 644 (2013)

    The U.S. Supreme Court’s decision in Graham v. John Deere (1966) placed neoclassical economic insights at the heart of modern patent law. But economic theory has moved on. Since the 1990s, legal scholars have repeatedly mined the discipline to propose ad hoc rules for individual industries, such as biotech and software. So far, however, they have almost always ignored the literature’s broader lessons for doctrine. This article asks how well today’s patent doctrine follows and occasionally departs from modern economic principles. The analysis begins by reviewing what neoclassical economists have learned about innovation since the 1970s. Legal scholars usually divide this literature into a half-dozen competing and distinct “theories.” Naively, this seems to suggest that any patent doctrines based on these theories must be similarly fragmented. This article offers a way out: far from being in conflict, the putatively separate “theories” share so many common assumptions and mathematical methods that they can usefully be analyzed as special cases of a single underlying theory. Furthermore, much of this theory is known. In particular, it predicts that any economically efficient patent system must accomplish three tasks: (1) limiting reward to non-obvious inventions; (2) choosing patent breadth to balance the benefits of innovation against the costs of monopoly; and (3) prescribing rules for allocating patent rewards where multiple inventors contribute to a shared technology. Remarkably, patent doctrine uses Graham’s PHOSITA concept to address all three principles. This means that doctrinal solutions for one principle can have unintended impacts on the others. This article shows that any doctrinal architecture built on Graham’s PHOSITA test automatically allocates reward among successive inventors. Though reasonable, these default outcomes fall short of the economic ideal. This article analyzes how changes in the Utility, Blocking Patents, Reverse Doctrine of Equivalents, and the Written Description doctrines can mitigate this problem. However, other gaps are inherent and cannot be eliminated without abandoning Graham itself. This radically revised architecture would probably cause more problems than it solves

    Technological development, strategic behavior and government policy in information technology industries

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Political Science, 1989. Includes bibliographical references. By Charles H. Ferguson.

    Collaborative, Trust-Based Security Mechanisms for a National Utility Intranet

    This thesis investigates security mechanisms for utility control and protection networks using IP-based protocol interaction. It proposes flexible, cost-effective solutions in strategic locations to protect transitioning legacy and full IP-standards architectures. It also demonstrates how operational signatures can be defined to enact organizationally-unique standard operating procedures for zero failure in environments with varying levels of uncertainty and trust. The research evaluates layering encryption, authentication, traffic filtering, content checks, and event correlation mechanisms over time-critical primary and backup control/protection signaling to prevent disruption by internal and external malicious activity or errors. Finally, it shows how a regional/national implementation can protect private communities of interest and foster a mix of both centralized and distributed emergency prediction, mitigation, detection, and response with secure, automatic peer-to-peer notifications that share situational awareness across control, transmission, and reliability boundaries and prevent wide-spread, catastrophic power outages
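
    As a purely illustrative sketch of what an organizationally defined operational signature might look like (the message fields, the rule, and the trusted-peer list are assumptions, not the thesis's design), a layered accept/reject check over a control command could be expressed as:

```python
from dataclasses import dataclass

# Illustrative sketch: a control command is accepted only when layered
# conditions all hold (trusted source, authentication, corroborating telemetry).

@dataclass
class ControlEvent:
    source: str          # originating substation or operator console
    command: str         # e.g. "OPEN_BREAKER"
    authenticated: bool  # message authentication check passed
    corroborated: bool   # matching telemetry seen within the correlation window

def signature_allows(event: ControlEvent, trusted_sources: set) -> bool:
    """Trip commands must be authenticated, from a trusted peer, and corroborated."""
    if event.command == "OPEN_BREAKER":
        return (event.source in trusted_sources
                and event.authenticated
                and event.corroborated)
    return event.authenticated

trusted = {"substation-A", "control-center-1"}
ev = ControlEvent("substation-A", "OPEN_BREAKER", authenticated=True, corroborated=False)
print(signature_allows(ev, trusted))   # False: held pending corroborating telemetry
```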

    Fintech and the Innovation Trilemma

    Whether in response to roboadvising, artificial intelligence, or crypto-currencies like Bitcoin, regulators around the world have made it a top policy priority to supervise the exponential growth of financial technology (or fintech) in the post-Crisis era. However, applying traditional regulatory strategies to new technological ecosystems has proven conceptually difficult. Part of the challenge lies in the tradeoffs involved in regulating innovations that could conceivably both help and hurt consumers and market participants alike. Problems also arise from the common assumption that today’s fintech is a mere continuation of the story of innovation that has shaped finance for centuries. This Article provides a novel theoretical framework for understanding and regulating fintech by showing how the supervision of financial innovation is invariably bound by what can be described as a policy Trilemma. Specifically, we argue that when seeking to provide clear rules, maintain market integrity, and encourage financial innovation, regulators have long been able to achieve, at best, two out of the three goals. Moreover, today’s innovations exacerbate the tradeoffs historically embodied in the Trilemma by either reconfiguring or disintermediating traditional financing operations and the discrete services supporting them, thereby introducing unprecedented uncertainty as to their risks and benefits. This Article thus proceeds to catalogue the strategies taken by regulatory authorities to navigate the Trilemma, and posits them as operating across a spectrum of interrelated responses. It then proposes supplemental administrative tools to support not only market, but also regulatory data gathering and experimentation.

    New Hardware Architecture for Low-Cost Functional Test Systems: Applications to HDMI generation

    Development of a new hardware architecture for functional PCB test equipment, together with a proof-of-concept prototype for HDMI generation.
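
    For illustration, and using the standard CEA-861 timing figures for 1920x1080 @ 60 Hz rather than anything taken from the thesis prototype, the pixel clock such an HDMI generator must synthesize for a given video mode can be computed as:

```python
# Illustrative only: pixel clock and TMDS lane rate for a given video mode.
# Timing values are the published CEA-861 figures for 1080p60, not thesis data.

def pixel_clock_hz(h_active, h_blank, v_active, v_blank, refresh_hz):
    return (h_active + h_blank) * (v_active + v_blank) * refresh_hz

clk = pixel_clock_hz(h_active=1920, h_blank=280, v_active=1080, v_blank=45, refresh_hz=60)
print(clk / 1e6, "MHz pixel clock")          # 148.5 MHz
print(clk * 10 / 1e9, "Gb/s per TMDS lane")  # 8b/10b encoding -> 1.485 Gb/s
```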

    Side-channel attacks and countermeasures in the design of secure IC's devices for cryptographic applications

    Many devices in daily use have to guarantee the protection of sensitive data. Sensitive data are encrypted under a secret key, so that only the key holder can recover them. For this reason, protecting the cipher key against possible attacks becomes a central issue. Research activities in hardware cryptography are devoted to finding new countermeasures against various attack scenarios and, at the same time, to studying new attack methodologies. During the PhD, three different logic families to counteract Power Analysis were presented and a novel class of attacks was studied. Moreover, two different activities related to Random Number Generators have been addressed.
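
    As a hedged illustration of the style of power-analysis attack such countermeasures target (the leakage model, noise level, and key below are synthetic assumptions, not the thesis's measurement setup), a toy correlation power analysis can be sketched as follows; a real attack would target a nonlinear operation such as the AES S-box output rather than the linear toy target used here.

```python
import numpy as np

# Toy correlation power analysis (CPA) on synthetic traces. Illustrative only:
# leakage = HammingWeight(plaintext XOR secret_key) + Gaussian noise.

rng = np.random.default_rng(0)
hw = np.array([bin(x).count("1") for x in range(256)])     # Hamming-weight table

SECRET_KEY = 0x3C
N_TRACES = 2000
plaintexts = rng.integers(0, 256, N_TRACES)
traces = hw[plaintexts ^ SECRET_KEY] + rng.normal(0.0, 1.0, N_TRACES)

# Correlate the measured leakage against the model for every key guess
correlations = np.array([
    np.corrcoef(hw[plaintexts ^ guess], traces)[0, 1] for guess in range(256)
])
print(hex(int(correlations.argmax())))   # expected to print 0x3c
```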