
    Study of spin-scan imaging for outer planets missions

    The constraints imposed on the Outer Planet Missions (OPM) imager design are of critical importance. Imager system modeling analyses define the important parameters and provide a systematic means of performing trade-offs for specific Jupiter orbiter missions. Possible image sequence plans for Jupiter missions are discussed in detail, including a series of orbits that allows repeated near encounters with three of the Jovian satellites. The data handling involved in image processing is discussed, and it is shown that only minimal processing is required for the majority of images on a Jupiter orbiter mission.

    Reducing Power During Manufacturing Test Using Different Architectures

    Power consumption during manufacturing test can be several times higher than in functional mode. Excessive test power can cause IR drop, overheating, and early aging of the chips. This dissertation introduces three architectures that reduce test power in general settings as well as in specific scenarios such as field test. In the first architecture, scan chains are divided into several segments, and each segment has a control bit that enables capture only when new faults are detectable on that segment for the current pattern; otherwise the segment is disabled to reduce capture power. The control bits are grouped into one or more control chains. To avoid the extra pin(s) required to shift data into the control chain(s) and the significant post-processing of the first architecture, a second architecture stitches the control bits into the chains they control as EECBs (embedded enable capture bits) between the segments. This allows an ATPG tool to automatically generate the appropriate EECB values for each pattern while maintaining fault coverage, and it also works in the presence of an on-chip decompressor. The last architecture focuses primarily on self-test of a device in a 3D stacked IC when an existing FPGA in the stack can be programmed as a tester. We show that the energy expended during test is significantly less than would be required using low-power patterns fed by an on-chip decompressor for the same very short scan chains.
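    As a rough illustration of the capture-gating idea in the first architecture, the Python sketch below enables capture in a segment only for patterns that detect new faults there. All names and the input format are illustrative assumptions; none of this code is from the dissertation.

        # Hypothetical sketch: per-pattern capture enables for scan segments.
        # fault_detects[p][s] = set of faults pattern p detects in segment s
        # (assumed input format, not from the dissertation).
        def choose_capture_enables(patterns, fault_detects, num_segments):
            detected = set()                 # faults covered so far
            enables = []
            for p in patterns:
                bits = []
                for s in range(num_segments):
                    new_faults = fault_detects[p][s] - detected
                    bits.append(1 if new_faults else 0)  # capture only if useful
                    detected |= new_faults
                enables.append(bits)
            return enables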

    Digital image compression


    A Lightweight N-Cover Algorithm For Diagnostic Fail Data Minimization

    The increasing design complexity of modern ICs has made it extremely difficult and expensive to test them comprehensively. As the transistor count and density of circuits increase, a large volume of fail data is collected by the tester for a single failing IC. The diagnosis procedure analyzes this fail data to provide valuable information about the defects that may have caused the circuit to fail. However, without feedback from the diagnosis procedure, the tester often collects fail data that is not useful for identifying the defects in the failing circuit. This not only consumes tester memory but also increases tester data-logging time and diagnosis run time. In this work, we present an algorithm to minimize the amount of fail data used for high-quality diagnosis of failing ICs. The algorithm analyzes the outputs at which tests failed and determines which failing tests can be eliminated from the fail data without compromising diagnosis accuracy. It is used as a preprocessing step between the tester data logs and the diagnosis procedure. The performance of the algorithm was evaluated using fail data from industry-manufactured ICs. Experiments demonstrate that, on average, 43% of the fail data was eliminated while maintaining an average diagnosis accuracy of 93%; with this reduction, diagnosis speed also increased by 46%.
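    The abstract does not give the algorithm itself; the following is a minimal greedy sketch of one N-cover style reduction consistent with the description, where each failing output stays covered by at least n retained failing tests. All names and the data format are assumptions.

        # Toy N-cover style fail-data reduction (illustrative only).
        def minimize_fail_data(failing_tests, n_cover=2):
            """failing_tests: test id -> set of outputs at which it failed.
            Returns the subset of tests retained for diagnosis."""
            need = {}                        # output -> remaining required covers
            for outs in failing_tests.values():
                for o in outs:
                    need[o] = n_cover
            kept = []
            # Greedily prefer tests that fail at many outputs.
            for t in sorted(failing_tests, key=lambda t: -len(failing_tests[t])):
                useful = {o for o in failing_tests[t] if need[o] > 0}
                if useful:
                    kept.append(t)
                    for o in useful:
                        need[o] -= 1
            return kept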

    Scalable coding of HDTV pictures using the MPEG coder

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (leaves 118-121). By Adnan Husain Lawai.

    Advanced Television Research Program

    Contains reports on ten research projects. Supported by: National Science Foundation Grant MIP 87-14969; National Science Foundation Fellowship; Advanced Television Research Program; AT&T Bell Laboratories Doctoral Support Program; Kodak Fellowship; U.S. Air Force - Electronic Systems Division Contract F19628-89-K-004.

    Survey on Combinatorial Register Allocation and Instruction Scheduling

    Register allocation (mapping variables to processor registers or memory) and instruction scheduling (reordering instructions to increase instruction-level parallelism) are essential tasks for generating efficient assembly code in a compiler. In the last three decades, combinatorial optimization has emerged as an alternative to traditional, heuristic algorithms for these two tasks. Combinatorial approaches can deliver solutions that are optimal according to a model, can precisely capture trade-offs between conflicting decisions, and are more flexible, at the expense of increased compilation time. This paper provides an exhaustive literature review and a classification of combinatorial optimization approaches to register allocation and instruction scheduling, with a focus on the techniques most applied in this context: integer programming, constraint programming, partitioned Boolean quadratic programming, and enumeration. Researchers in compilers and combinatorial optimization can benefit from identifying developments, trends, and challenges in the area; compiler practitioners may discern opportunities and grasp the potential benefit of applying combinatorial optimization.
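    To make the combinatorial formulation concrete, here is a toy backtracking (constraint-programming flavored) register allocator over an interference graph. It illustrates the kind of model the surveyed approaches solve, not any specific system from the survey; all names are assumptions.

        # Toy combinatorial register allocation: variables with overlapping
        # live ranges (edges in `interference`) must get different registers.
        def allocate(interference, k):
            variables = list(interference)
            assignment = {}

            def backtrack(i):
                if i == len(variables):
                    return True
                v = variables[i]
                for reg in range(k):
                    # Constraint: no interfering neighbor holds this register.
                    if all(assignment.get(u) != reg for u in interference[v]):
                        assignment[v] = reg
                        if backtrack(i + 1):
                            return True
                        del assignment[v]    # undo and try the next register
                return False

            return assignment if backtrack(0) else None   # None => spill needed

        # Example: a-b and b-c interfere, so two registers suffice.
        print(allocate({"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}, k=2))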

    Synthesis for circuit reliability

    Electrical and Computer Engineering

    Efficient Algorithms for Large-Scale Image Analysis

    This work develops highly efficient algorithms for analyzing large images, with applications including object-based change detection and screening. The algorithms are 10-100 times as fast as existing software, sometimes even outperforming FPGA/GPU hardware, because they are designed to suit the computer architecture. This thesis describes the implementation details and the underlying algorithm-engineering methodology, so that both may also be applied to other applications.
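    The thesis does not include code here; as a sketch of the architecture-aware design it describes, the toy change-detection screen below processes two images in cache-sized tiles so the working set stays in cache. The tile size and the difference score are assumptions for illustration.

        # Toy tiled change screen: per-tile sum of absolute differences.
        def tiled_abs_diff(a, b, tile=64):
            h, w = len(a), len(a[0])         # images as lists of rows
            scores = {}
            for ty in range(0, h, tile):
                for tx in range(0, w, tile):
                    s = 0
                    for y in range(ty, min(ty + tile, h)):
                        row_a, row_b = a[y], b[y]
                        for x in range(tx, min(tx + tile, w)):
                            s += abs(row_a[x] - row_b[x])
                    scores[(ty, tx)] = s     # high score => candidate change
            return scores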