Evaluating the reliability of NAND multiplexing with PRISM
Probabilistic model checking is a formal verification technique for analyzing the reliability and performance of systems that exhibit stochastic behavior. In this paper, we demonstrate the applicability of this approach, and in particular of the probabilistic model checking tool PRISM, to the evaluation of the reliability and redundancy of defect-tolerant systems in the field of computer-aided design. We illustrate the technique with an example due to von Neumann, namely NAND multiplexing. We show how, having constructed a model of a defect-tolerant system incorporating probabilistic assumptions about its defects, it is straightforward to compute a range of reliability measures and investigate how they are affected by slight variations in the behavior of the system. This allows a designer to evaluate, for example, the tradeoff between redundancy and reliability in the design. We also highlight errors in analytically computed reliability bounds recently published for the same case study.
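As a companion to the abstract, here is a minimal Monte Carlo sketch of one NAND multiplexing stage plus a restorative organ, written in Python rather than in the PRISM modelling language the paper actually uses; the bundle size, fault rate and names (nand_stage, eps) are illustrative assumptions, not the paper's model.

import random

def nand_stage(x_bundle, y_bundle, eps, rng):
    """One NAND multiplexing stage: randomly pair the wires of the two
    input bundles (von Neumann's permutation unit), NAND each pair,
    and invert each gate's output with probability eps (a gate fault)."""
    y_perm = rng.sample(y_bundle, len(y_bundle))  # random pairing of wires
    out = []
    for x, y in zip(x_bundle, y_perm):
        z = not (x and y)              # ideal NAND
        if rng.random() < eps:         # faulty gate inverts its output
            z = not z
        out.append(z)
    return out

rng = random.Random(1)
N, eps = 100, 0.01                     # bundle size and fault rate: assumptions
x = [True] * N                         # both inputs stimulated -> ideal output is 0
y = [True] * N
z = nand_stage(x, y, eps, rng)         # executive stage
for _ in range(2):                     # restorative organ: an even number of
    z = nand_stage(z, z, eps, rng)     # self-NAND stages preserves the value
frac = sum(z) / N
print(f"fraction of stimulated output wires: {frac:.2f}")  # small if reliable

In PRISM one would instead encode the stage as a discrete-time Markov chain and query, for example, the probability that fewer than 10% of output wires are stimulated; the simulation above only approximates such a measure by sampling.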
Effective Monte Carlo simulation on System-V massively parallel associative string processing architecture
We show that the latest version of the massively parallel associative string processing architecture (System-V) is applicable to fast Monte Carlo simulation, provided an effective on-processor random number generator is implemented. Our lagged Fibonacci generator can produce random numbers on a processor string of 12K PEs. The time-dependent Monte Carlo algorithm for the one-dimensional non-equilibrium kinetic Ising model runs 80 times faster than the corresponding serial algorithm on a 300 MHz UltraSparc.
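The abstract's key ingredient is the per-processor lagged Fibonacci generator. The paper does not state its lags or combining operation, so the following Python sketch uses the classic additive lags (24, 55); on System-V each PE would run one such recurrence in lockstep.

class LaggedFibonacci:
    """Additive lagged Fibonacci generator:
        x[n] = (x[n-24] + x[n-55]) mod 2**32
    The lags (24, 55) and the seeding scheme are classic textbook choices,
    not necessarily those of the System-V implementation."""

    SHORT_LAG, LONG_LAG, MOD = 24, 55, 2**32

    def __init__(self, seed=12345):
        # Fill the lag table with a simple LCG (illustrative seeding only).
        state, self.buf = seed, []
        for _ in range(self.LONG_LAG):
            state = (1664525 * state + 1013904223) % self.MOD
            self.buf.append(state)
        self.i = 0  # index of the oldest entry, x[n-55]

    def next_u32(self):
        # buf[(i+k) % 55] holds x[n-55+k]; k = 55-24 gives x[n-24].
        new = (self.buf[(self.i + self.LONG_LAG - self.SHORT_LAG) % self.LONG_LAG]
               + self.buf[self.i]) % self.MOD
        self.buf[self.i] = new                   # overwrite the oldest entry
        self.i = (self.i + 1) % self.LONG_LAG
        return new

    def next_float(self):
        return self.next_u32() / self.MOD        # uniform in [0, 1)

gen = LaggedFibonacci(seed=2024)
print([gen.next_float() for _ in range(3)])      # e.g. spin-flip decisions

A generator of this shape needs only one addition and two table lookups per number, which is why it maps well onto a long string of simple processing elements.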
A Benes Based NoC Switching Architecture for Mixed Criticality Embedded Systems
Multi-core, Mixed Criticality Embedded (MCE) real-time systems require high timing precision and predictability to guarantee that there will be no interference between tasks. These guarantees are necessary in application areas such as avionics and automotive, where task interference or missed deadlines could be catastrophic and safety requirements are strict. In modern multi-core systems, the interconnect becomes a potential point of uncertainty, introducing major challenges in proving that behaviour is always within specified constraints and limiting the scope for growing system performance by adding more tasks or providing more computational resources to existing tasks.
We present MCENoC, a Network-on-Chip (NoC) switching architecture that overcomes this with predictable, formally verifiable timing behaviour that is consistent across the whole NoC. We show how the fundamental properties of Benes networks benefit MCE applications and meet our architecture requirements. Using SystemVerilog Assertions (SVA), formal properties are defined that aid the refinement of the design's specification and enable the implementation to be exhaustively formally verified. We demonstrate the performance of the design in terms of size, throughput and predictability, and discuss the application-level considerations needed to exploit this architecture.
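The "fundamental properties" the abstract leans on are structural: an N-port Benes network has 2*log2(N) - 1 stages of N/2 two-by-two switches, is rearrangeably non-blocking, and gives every packet the same hop count. The Python sketch below illustrates these dimensions; MCENoC's actual port count and switch parameters are not given in the abstract.

import math

def benes_dimensions(n_ports: int):
    """Dimensions of an N x N Benes network (N a power of two):
    2*log2(N) - 1 stages, each of N/2 2x2 switches. Every source-to-
    destination path crosses all stages, so the hop count is fixed and
    input-independent -- the property that makes latency predictable."""
    assert n_ports >= 2 and n_ports & (n_ports - 1) == 0, "N must be a power of 2"
    stages = 2 * int(math.log2(n_ports)) - 1
    switches = stages * (n_ports // 2)
    return stages, switches

for n in (4, 8, 16, 64):
    stages, switches = benes_dimensions(n)
    print(f"N={n:3d}: {stages:2d} stages, {switches:4d} switches, "
          f"every path is exactly {stages} hops")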
Reconfiguration for Fault Tolerance and Performance Analysis
Architecture reconfiguration, the ability of a system to alter the active interconnection among modules, has a history of different purposes and strategies. Its purposes range from the relatively simple desire to formalize procedures that all processes have in common, to reconfiguration for improved fault tolerance, to reconfiguration for performance enhancement, either through the simple maximizing of system use or through sophisticated notions of wedding topology to the specific needs of a given process.
Strategies range from straightforward redundancy by means of an identical backup system to intricate structures employing multistage interconnection networks. The present discussion surveys the more important contributions to developments in reconfigurable architecture. The strategy here is, in a sense, to approach the field from a historical perspective, with the goal of developing a more coherent theory of reconfiguration. First, the Turing and von Neumann machines are discussed from the perspective of system reconfiguration, and it is seen that this early and important theoretical work contains little that anticipates reconfiguration. Then some early developments in reconfiguration are analyzed, including the work of Estrin and associates on the fixed plus variable restructurable computer system, the attempt by Miller and Cocke to theorize about configurable computers, and the work of Reddi and Feustel on their restructurable computer system.
The discussion then focuses on the most sustained systems for fault tolerance and performance enhancement that have been proposed. An attempt is made to define fault tolerance and to investigate some of the strategies used to achieve it. By examining four different systems, the Tandem computer, the C.vmp system, the Extra Stage Cube, and the Gamma network, the move from dynamic redundancy to reconfiguration is observed. Then reconfiguration for performance enhancement is discussed. After a survey of some proposals, the discussion focuses on the most sustained systems that have been proposed: PASM, the DC architecture, the Star local network, and the NYU Ultracomputer. The discussion is organized around a comparison of control, scheduling, communication, and network topology.
Finally, comparisons are drawn between fault tolerance and performance enhancement, in order to clarify the notion of reconfiguration and to reveal both the common ground of fault tolerance and performance enhancement and the areas in which they diverge. The conclusion attempts to derive from this survey and analysis some observations on the nature of reconfiguration, as well as some remarks on areas in which further research is needed.
Indirect interconnection networks for high performance routers/switches
Routers form the backbone of the Internet; their kernel, the structure and configuration (scheduler) of the backplane (or switching fabrics), dominates the routers' performance, scalability, reliability and cost. As higher performance is required with the rapid development of network applications, router architecture has also evolved from the shared backplane to the switched backplane, which mainly uses indirect interconnection networks. The indirect interconnection networks include the crossbar, MIN (multistage interconnection networks) and some other irregular topologies. At present, most of today's routers and switches are implemented on a single crossbar with a symmetric buffer architecture. In the first part of this dissertation, we introduce a novel asymmetric buffer architecture for the crossbar, in which a new port and a local shared bus are added. We then evaluate its performance and simulate it under different bus arbitration and buffer management algorithms. Our studies indicate a great improvement in throughput and a low drop rate, so we can save a lot of expensive link bandwidth and decrease the probability of congestion in the network.
Single-crossbar complexity grows as O(N^2) in terms of crosspoint count, which becomes unacceptable for scalability as the port number (N) increases. A delta-class self-routing MIN with complexity O(N log2 N) has been widely used in ATM switches, but the reduction in crosspoint count results in considerable internal blocking. A number of scalable methods have been proposed to solve this problem. One of them uses more stages with a recirculation architecture to reroute deflected packets, which greatly increases latency. In the second part of this dissertation, we propose an interleaved multistage switching fabrics architecture and assess its throughput with an analytical model and simulations. We compare this novel scheme with some previous parallel architectures and show its benefits. In extensive simulations under different traffic patterns and fault models, our interleaved architecture achieves better performance than its single-panel-fabric counterpart. Our interleaved scheme achieves speedups (over the single panel fabric) of 3.4 and 2.25 under uniform and hot-spot traffic patterns, respectively, at maximum load (p=1). Moreover, the interleaved fabrics show great tolerance against internal hardware failures.
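To make the scaling argument concrete, here is a short Python sketch comparing crosspoint counts for a single crossbar (O(N^2)) and a delta-class MIN of 2x2 switches (O(N log2 N)); the dissertation's interleaved fabric itself is not modelled here.

import math

def crossbar_crosspoints(n: int) -> int:
    """Single N x N crossbar: one crosspoint per input/output pair."""
    return n * n

def delta_min_crosspoints(n: int) -> int:
    """Delta-class MIN of 2x2 switches: log2(N) stages of N/2 switches,
    4 crosspoints per switch, i.e. 2 * N * log2(N) overall."""
    stages = int(math.log2(n))
    return stages * (n // 2) * 4

for n in (16, 64, 256, 1024):
    print(f"N={n:5d}: crossbar {crossbar_crosspoints(n):8d} crosspoints, "
          f"delta MIN {delta_min_crosspoints(n):6d}")

The widening gap is precisely the tradeoff the abstract describes: the MIN saves crosspoints but gives up the crossbar's freedom from internal blocking.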
NASA Tech Briefs, April 2003
Topics include: Tool for Bending a Metal Tube Precisely in a Confined Space; Multiple-Use Mechanisms for Attachment to Seat Tracks; Force-Measuring Clamps; Cellular Pressure-Actuated Joint; Block QCA Fault-Tolerant Logic Gates; Hybrid VLSI/QCA Architecture for Computing FFTs; Arrays of Carbon Nanotubes as RF Filters in Waveguides; Carbon Nanotubes as Resonators for RF Spectrum Analyzers; Software for Viewing Landsat Mosaic Images; Updated Integrated Mission Program; Software for Sharing and Management of Information; Optical-Quality Thin Polymer Membranes; Rollable Thin Shell Composite-Material Paraboloidal Mirrors; Folded Resonant Horns for Power Ultrasonic Applications; Touchdown Ball-Bearing System for Magnetic Bearings; Flux-Based Deadbeat Control of Induction-Motor Torque; Block Copolymers as Templates for Arrays of Carbon Nanotubes; Throttling Cryogen Boiloff To Control Cryostat Temperature; Collaborative Software Development Approach Used to Deliver the New Shuttle Telemetry Ground Station; Turbulence in Supercritical O2/H2 and C7H16/N2 Mixing Layers; and Time-Resolved Measurements in Optoelectronic Microbioanal
Basic hardware interconnection mechanisms for building multiple microcomputer systems
This report presents the current results of a research project which has been concerned with methods for designing and implementing multiple microcomputer systems. The design method is based upon identifying hardware interconnection primitives which may be used to construct the interconnection subsystem which characterizes a given multicomputer architecture. An actual experimental system has been constructed which will permit building nine of ten systems in the Anderson and Jensen architecture taxonomy. (Author)
Supported in part by the Department of Electrical Engineering and Computer Science, University of Connecticut, Storrs, Connecticut, and in part by the Department of Computer Science, Naval Postgraduate School, Monterey, California.
http://archive.org/details/basichardwareint00care
Approved for public release; distribution is unlimited.