
    Hybrid receiver study

    The results are presented of a four-month study to design a hybrid analog/digital receiver for outer-planet mission probe communication links. The scope of this study includes functional design of the receiver; comparisons between analog and digital processing; hardware tradeoffs for key components, including frequency generators, A/D converters, and digital processors; development and simulation of the processing algorithms for acquisition, tracking, and demodulation; and detailed design of the receiver to determine its size, weight, power, reliability, and radiation hardness. In addition, an evaluation was made of the receiver's capability to perform accurate measurement of signal strength and frequency for radio science missions.
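    As a concrete illustration of the acquisition step mentioned above, the sketch below performs FFT-based coarse carrier acquisition on a noisy complex baseband tone. The sample rate, carrier offset, and noise level are illustrative assumptions, not parameters from the study.

        # A minimal sketch of FFT-based coarse carrier acquisition
        # (assumed parameters; not the study's actual algorithm).
        import numpy as np

        fs = 10_000.0        # sample rate in Hz (assumed)
        f_carrier = 1_234.0  # true carrier offset in Hz (assumed)
        n = 4096

        t = np.arange(n) / fs
        tone = np.exp(2j * np.pi * f_carrier * t)
        noisy = tone + 0.5 * (np.random.randn(n) + 1j * np.random.randn(n))

        # Coarse acquisition: pick the strongest FFT bin. Resolution is
        # fs/n (about 2.4 Hz here); a tracking loop would then refine it.
        spectrum = np.abs(np.fft.fft(noisy))
        f_hat = np.fft.fftfreq(n, 1 / fs)[np.argmax(spectrum)]
        print(f"estimated carrier offset: {f_hat:.1f} Hz")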

    Limits on Fundamental Limits to Computation

    An indispensable part of our lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the last fifty years. Such Moore scaling now requires increasingly heroic efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and enrich our understanding of integrated-circuit scaling, we review fundamental limits to computation: in manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, we recall how some limits were circumvented and compare loose and tight limits. We also point out that engineering difficulties encountered by emerging technologies may indicate yet-unknown limits. (15 pages, 4 figures, 1 table)
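    To make the doubling claim concrete, a quick back-of-the-envelope calculation; the two-year cadence is an assumed round number, not a figure from the paper.

        # Fifty years of density doubling at an assumed ~2-year cadence.
        years = 50
        doubling_period = 2  # years per doubling (assumption)
        doublings = years // doubling_period
        print(f"{2 ** doublings:,}x density increase")  # 33,554,432x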

    Desynchronization: Synthesis of asynchronous circuits from synchronous specifications

    Asynchronous implementation techniques, which measure logic delays at run time and activate registers accordingly, are inherently more robust than their synchronous counterparts, which estimate worst-case delays at design time and constrain the clock cycle accordingly. Desynchronization is a new paradigm to automate the design of asynchronous circuits from synchronous specifications, thus permitting widespread adoption of asynchronicity without requiring special design skills or tools. In this paper, we first study different protocols for desynchronization and formally prove their correctness, using techniques originally developed for distributed deployment of synchronous language specifications. We also provide a taxonomy of existing protocols for asynchronous latch controllers, covering in particular the four-phase handshake protocols devised in the literature for micropipelines. We then propose a new controller that exhibits provably maximal concurrency, and analyze the performance of desynchronized circuits with respect to the original synchronous optimized implementation. We finally prove the feasibility and effectiveness of our approach by showing its application to a set of real designs, including a complete implementation of the DLX microprocessor architecture.
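    For readers unfamiliar with the four-phase handshake protocols surveyed above, the sketch below simulates one return-to-zero transfer cycle between a sender and a receiver latch controller. The channel names (req, ack) and the sequential scheduling are conventional illustrations, not the paper's actual controller.

        # A minimal sketch of a four-phase (return-to-zero) handshake,
        # as used in micropipelines; not the paper's proposed controller.
        def four_phase_transfer(data_items):
            """Transfer items one at a time; each transfer takes four edges:
            req rise -> ack rise -> req fall -> ack fall."""
            req = ack = False
            received = []
            for item in data_items:
                bus = item           # Phase 1: sender places data,
                req = True           #          then raises req.
                received.append(bus) # Phase 2: receiver latches data,
                ack = True           #          then raises ack.
                req = False          # Phase 3: sender sees ack, lowers req.
                ack = False          # Phase 4: receiver lowers ack; idle.
            return received

        assert four_phase_transfer([1, 2, 3]) == [1, 2, 3]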

    A comprehensive comparison between design for testability techniques for total dose testing of flash-based FPGAs

    Radiation sources exist in many environments where electronic devices operate, and correct device operation is usually affected negatively by radiation. The resultant radiation effect manifests in several forms depending on the operating environment of the device, such as the total ionizing dose (TID) effect, or single event effects (SEEs) such as single event upset (SEU), single event gate rupture (SEGR), and single event latch-up (SEL). CMOS circuits and floating-gate MOS circuits suffer from increased delay and leakage current due to the TID effect, which may compromise the proper operation of the integrated circuit. Exhaustive testing is needed for devices operating in harsh conditions, such as space and military applications, to ensure correct operation in the worst circumstances. The use of worst case test vectors (WCTVs) for testing is strongly recommended by MIL-STD-883, method 1019, the standard describing the procedure for testing electronic devices under radiation. However, the difficulty of generating these test vectors hinders their use in radiation testing. Testing digital circuits in industry is nowadays usually done using design for testability (DFT) techniques, as they are very mature and can be relied on. DFT techniques include, but are not limited to, the ad-hoc technique, built-in self test (BIST), muxed-D scan, clocked scan, and enhanced scan. DFT is usually used with automatic test pattern generation (ATPG) software to generate test vectors for application specific integrated circuits (ASICs), especially sequential circuits, against faults such as stuck-at faults and path delay faults. Despite all these recommendations for DFT, radiation testing has not yet benefited from this reliable technology. Also, with the big variation in DFT techniques, choosing the right technique is the bottleneck to achieving the best results for TID testing. In this thesis, a comprehensive comparison between different DFT techniques for TID testing of flash-based FPGAs is made to help designers choose the DFT technique best suited to their application. The comparison covers the muxed-D scan, clocked scan, and enhanced scan techniques, and is carried out using the ISCAS-89 benchmark circuits. Points of comparison include FPGA resource utilization, difficulty of design bring-up, delay added by the DFT logic, and robust testable paths in each technique.
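    To illustrate the muxed-D scan style compared above, the sketch below models a scan flip-flop (a 2:1 mux in front of the D input) and serially loads a test pattern through a small chain. The names (scan_enable, scan_in) follow common DFT convention and are not taken from the thesis.

        # A minimal sketch of a muxed-D scan cell and scan-chain shift;
        # illustrative only, not the thesis's actual test setup.
        class MuxedDScanFF:
            def __init__(self):
                self.q = 0

            def clock(self, d, scan_in, scan_enable):
                # Mux selects functional data (d) in normal mode,
                # or the previous cell's output in scan mode.
                self.q = scan_in if scan_enable else d
                return self.q

        def shift_in(chain, pattern):
            """Serially load a test pattern (scan_enable=1), one clock per bit."""
            for bit in pattern:
                carry = bit
                for ff in chain:
                    prev_q = ff.q  # capture old output before clocking,
                    ff.clock(d=0, scan_in=carry, scan_enable=True)
                    carry = prev_q  # to emulate simultaneously clocked FFs
            return [ff.q for ff in chain]

        chain = [MuxedDScanFF() for _ in range(4)]
        # The first bit shifted ends up deepest in the chain.
        print(shift_in(chain, [1, 0, 1, 1]))  # [1, 1, 0, 1]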

    Impact Assessment of Hypothesized Cyberattacks on Interconnected Bulk Power Systems

    The first-ever cyberattack on the Ukrainian power grid proved its devastating potential through the hacking of critical cyber assets. With administrative privileges giving access to substation networks and local control centers, one intelligent form of coordinated cyberattack is to execute a series of disruptive switching actions on multiple substations using compromised supervisory control and data acquisition (SCADA) systems. These actions can cause significant impacts to an interconnected power grid. Unlike previous power blackouts, such high-impact initiating events can aggravate operating conditions, initiating instability that may lead to system-wide cascading failure. A systematic evaluation of "nightmare" scenarios is highly desirable for asset owners to manage and prioritize the maintenance of, and investment in, protecting their cyberinfrastructure. This survey paper is a conceptual expansion of the real-time monitoring, anomaly detection, impact analyses, and mitigation (RAIM) framework that emphasizes the resulting impacts on both the steady-state and dynamic aspects of power system stability. Hypothesized scenarios combine steady-state combinatorial analyses of substation/component outages with the dynamics of the sequential switching orders within each permutation. The expanded framework includes (1) critical/noncritical combination verification, (2) cascade confirmation, and (3) combination re-evaluation. This paper ends with a discussion of the open issues for metrics and future design pertaining to the impact quantification of cyber-related contingencies.
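    The combinatorial analysis described above can be pictured as enumerating outage sets and their switching orders. The sketch below does this for hypothetical N-k substation combinations; the substation names and case size are illustrative placeholders, not data from the paper.

        # A minimal sketch of enumerating hypothesized switching-attack
        # cases: outage combinations plus their sequential switching
        # orders (permutations). Names and sizes are assumptions.
        from itertools import combinations, permutations

        substations = ["S1", "S2", "S3", "S4", "S5"]

        def enumerate_attack_cases(k):
            """Yield each k-substation outage combination, expanded into
            every sequential switching order for dynamic study."""
            for combo in combinations(substations, k):
                for order in permutations(combo):
                    yield order

        # N-2 screening: C(5,2) = 10 combinations, each with 2! orders.
        cases = list(enumerate_attack_cases(2))
        print(len(cases))  # 20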