
    Hardware for digitally controlled scanned probe microscopes

    The design and implementation of a flexible and modular digital control and data acquisition system for scanned probe microscopes (SPMs) is presented. The measured performance of the system shows it to be capable of 14-bit data acquisition at a 100-kHz rate and a full 18-bit output resolution, resulting in less than 0.02-Å rms position noise while maintaining a scan range in excess of 1 µm in both the X and Y dimensions. This level of performance achieves the goal of making the noise of the microscope control system an insignificant factor for most experiments. The adaptation of the system to various types of SPM experiments is discussed. Advances in audio electronics and digital signal processors have made the construction of such high-performance systems possible at low cost.
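    As a sanity check on the quoted figures, an 18-bit output spanning a 1-µm axis corresponds to a step of roughly 0.04 Å, so a sub-LSB rms noise figure is plausible once quantization noise is spread and filtered. The sketch below redoes that arithmetic; the 1-µm range and 18-bit resolution come from the abstract, while the LSB/sqrt(12) conversion assumes an ideal uniform quantizer, which is an added assumption rather than a claim from the paper.

```python
# Back-of-the-envelope check of the scan resolution figures quoted above.
# Assumption: an 18-bit output mapped linearly onto a 1-um scan axis.
scan_range_angstrom = 1e4                # 1 um expressed in angstroms
dac_bits = 18

lsb = scan_range_angstrom / 2**dac_bits  # size of one output step, ~0.038 A
# For an ideal uniform quantizer, rms quantization noise is LSB / sqrt(12).
quantization_noise_rms = lsb / (12 ** 0.5)

print(f"LSB step: {lsb:.3f} A")
print(f"Ideal quantization noise: {quantization_noise_rms:.3f} A rms")  # ~0.011 A
```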

    Novel Front-end Electronics for Time Projection Chamber Detectors

    This work was carried out at the European Organization for Nuclear Research (CERN) and forms part of the European research project for future linear accelerators (EUDET). In particle physics there are different categories of particle detectors. The design presented here focuses on a particular type of particle-tracking detector, the TPC (Time Projection Chamber), which provides a three-dimensional image of the electrically charged particles crossing its gaseous volume. The thesis includes a study of the goals for future detectors, summarizing the parameters that a data acquisition system must meet in those cases. These requirements are then compared with the readout systems currently used in different TPC detectors, and it is concluded that none of them meets the demanding conditions. Some of the main goals for future TPC detectors are a very high level of integration, an increased number of channels, faster electronics, and very low power. The main drawback of the existing state-of-the-art systems is the use of several integrated circuits in the acquisition chain, which makes it impossible to reach the very high level of integration required for future detectors. Moreover, increasing the number of channels and the sampling frequency would raise the power consumption, and consequently the required cooling (where cooling is possible at all), to unacceptable values. One of the novelties presented is the integration of the entire acquisition chain (analog input filters, analog-to-digital converter (ADC), and digital signal processing) in a single integrated circuit in 130 nm technology; this chip is the first to achieve such a high level of integration for TPC detectors. In addition, a detailed analysis of the signal-processing filters is presented, whose most important objectives are the reduction… García García, EJ. (2012). Novel Front-end Electronics for Time Projection Chamber Detectors [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16980
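    The acquisition chain described above combines analog input filtering, an ADC, and digital signal processing on one die. As a generic illustration of the digital shaping that such a chain might perform, the sketch below applies a first-order CR-RC shaper to a sampled detector pulse; the CR-RC form, the sampling rate, and the time constants are common textbook choices used here as assumptions for illustration, not the filters analyzed in the thesis.

```python
import numpy as np

# Illustrative readout-chain stage: a sampled detector current pulse passed
# through a simple digital CR-RC shaping filter. All parameters are generic
# examples, not values taken from the thesis.
fs = 40e6            # sampling rate in Hz (assumed)
tau = 100e-9         # shaping time constant in seconds (assumed)
n = 256
t = np.arange(n) / fs

pulse = np.exp(-t / 25e-9)          # idealized fast exponential input pulse

a = np.exp(-1.0 / (fs * tau))
hp = np.zeros(n)                    # CR (high-pass, differentiating) stage
lp = np.zeros(n)                    # RC (low-pass, integrating) stage
for i in range(1, n):
    hp[i] = a * (hp[i - 1] + pulse[i] - pulse[i - 1])
    lp[i] = a * lp[i - 1] + (1 - a) * hp[i]

print("peaking sample:", int(np.argmax(lp)), "peak amplitude:", round(float(lp.max()), 4))
```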

    An EMI characterization and modeling study for consumer electronics and integrated circuits

    As internet-of-things (IoT) applications surge, wireless connectivity becomes an essential part of the network. The smart home, one of the most promising application scenarios of IoT, will improve our quality of life enormously. However, electromagnetic interference (EMI) to the receiving antenna, either from another electronic product or from a module or an integrated circuit (IC) inside the same wireless device, will degrade the performance of wireless connectivity, thus influencing the user experience. Characterization and modeling of the EMI therefore become increasingly important. In the first part, an improved method to extract equivalent dipoles from magnitude-only electromagnetic-field data, based on the genetic algorithm and a back-and-forth iteration algorithm, is proposed. The method provides an automatic flow to extract the equivalent dipoles from electromagnetic-field data on arbitrarily shaped scanning surfaces and minimizes the number of extracted dipoles. In the second part, both the differential-mode (DM) and common-mode (CM) EMI below 1 MHz from the ac-dc power supply in an LED TV are analyzed and modeled. Through joint time-frequency analysis, the drain-to-source voltage of the power MOSFET in the power factor correction (PFC) converter is identified as the dominant noise source of both CM and DM EMI below 1 MHz from the power supply. The current paths of DM and CM EMI are explained and modeled by a linear equivalent circuit model. In the last part, the noise source and current path of the conducted CM EMI noise from a Qi-compliant wireless power transfer (WPT) system for mobile applications are analyzed. The analysis and modeling explain the mechanism of the CM EMI noise and provide guidelines to reduce it.
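    The first part extracts equivalent dipoles from magnitude-only field data using a genetic algorithm together with a back-and-forth iteration. The sketch below illustrates one common way such a back-and-forth step can be organized: alternate between borrowing phase from the current dipole model and re-solving a linear least-squares problem for the dipole moments. The transfer matrix, the random initialization, and the alternation itself are assumptions used for illustration, not the dissertation's exact algorithm (which also optimizes dipole positions and count via the genetic algorithm).

```python
import numpy as np

# Illustrative "back-and-forth" fit of dipole moments to magnitude-only field data.
# A is an assumed (n_points x n_dipoles) complex transfer matrix mapping dipole
# moments to field samples; building it from dipole positions and Green's
# functions is omitted here.
def fit_dipoles_magnitude_only(A, field_mag, iterations=50):
    rng = np.random.default_rng(0)
    n_dipoles = A.shape[1]
    moments = rng.standard_normal(n_dipoles) + 1j * rng.standard_normal(n_dipoles)
    for _ in range(iterations):
        model_field = A @ moments
        # Keep the measured magnitudes, borrow the phase from the current model.
        target = field_mag * np.exp(1j * np.angle(model_field))
        # Linear least-squares update of the complex dipole moments.
        moments, *_ = np.linalg.lstsq(A, target, rcond=None)
    return moments
```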

    A mixed-signal ASIC for time and charge measurements with GEM detectors

    The abstract is provided in the attached file.

    Test Strategies for Low Power Devices

    Ultra-low-power devices are being developed for embedded applications in bio-medical electronics, wireless sensor networks, environment monitoring and protection, etc. The testing of these low-cost, low-power devices is a daunting task. Depending on the target application, there are stringent guidelines on the number of defective parts per million shipped devices. At the same time, since such devices are cost-sensitive, test cost is a major consideration. Since system-level power-management techniques are employed in these devices, test generation must be power-management-aware to avoid stressing the power distribution infrastructure in the test mode. Structural test techniques such as scan test, with or without compression, can result in excessive heat dissipation during testing and damage the package. False failures may result from the electrical and thermal stressing of the device in the test mode of operation, leading to yield loss. This paper considers different aspects of testing low-power devices and some new techniques to address these problems. Presented at Design, Automation and Test in Europe (DATE '08), 10-14 March 2008, Munich, Germany.

    Network-on-Chip

    Addresses the Challenges Associated with System-on-Chip Integration. Network-on-Chip: The Next Generation of System-on-Chip Integration examines the current issues restricting chip-on-chip communication efficiency, and explores Network-on-Chip (NoC), a promising alternative that equips designers with the capability to produce a scalable, reusable, and high-performance communication backbone by allowing for the integration of a large number of cores on a single system-on-chip (SoC). This book provides a basic overview of topics associated with NoC-based design: communication infrastructure design, communication methodology, evaluation framework, and mapping of applications onto NoC. It details the design and evaluation of different proposed NoC structures, low-power techniques, signal integrity and reliability issues, application mapping, testing, and future trends. Utilizing examples of chips that have been implemented in industry and academia, this text presents the full architectural design of components verified through implementation in industrial CAD tools. It describes NoC research and developments, incorporates theoretical proofs strengthening the analysis procedures, and includes algorithms used in NoC design and synthesis. In addition, it considers other upcoming NoC issues, such as low-power NoC design, signal integrity issues, NoC testing, reconfiguration, synthesis, and 3-D NoC design. This text comprises 12 chapters and covers:
    - The evolution of NoC from SoC, and its research and developmental challenges
    - NoC protocols, elaborating flow control, available network topologies, routing mechanisms, fault tolerance, quality-of-service support, and the design of network interfaces
    - The router design strategies followed in NoCs
    - The evaluation mechanism of NoC architectures
    - The application mapping strategies followed in NoCs
    - Low-power design techniques specifically followed in NoCs
    - The signal integrity and reliability issues of NoC
    - The details of NoC testing strategies reported so far
    - The problem of synthesizing application-specific NoCs
    - Reconfigurable NoC design issues
    - Direction of future research and development in the field of NoC
    Network-on-Chip: The Next Generation of System-on-Chip Integration covers the basic topics, technology, and future trends relevant to NoC-based design, and can be used by engineers, students, researchers, and other industry professionals interested in computer architecture, embedded systems, and parallel/distributed systems.
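    Among the topics listed are the routing mechanisms used to move packets across the on-chip network. As a concrete, generic illustration (a textbook baseline, not an algorithm taken from this book), the sketch below implements dimension-ordered XY routing on a 2D mesh, where a packet first moves along X to the destination column and then along Y.

```python
# Dimension-ordered (XY) routing on a 2D-mesh NoC: route along X until the
# destination column is reached, then along Y. Shown as a generic baseline
# example of an NoC routing mechanism; not taken from the book.

def xy_route(src, dst):
    """Return the list of (x, y) routers a packet visits, endpoints included."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                  # X dimension first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                  # then the Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (3, 2)))
# [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
```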

    Embedding Logic and Non-volatile Devices in CMOS Digital Circuits for Improving Energy Efficiency

    Static CMOS logic has remained the dominant design style of digital systems for more than four decades due to its robustness and near-zero standby current. Static CMOS logic circuits consist of a network of combinational logic cells and clocked sequential elements, such as latches and flip-flops, that are used for sequencing computations over time. The majority of the digital design techniques to reduce power, area, and leakage over the past four decades have focused almost entirely on optimizing the combinational logic. This work explores alternate architectures for the flip-flops to improve the overall circuit performance, power, and area. It consists of three main sections. First is the design of a multi-input configurable flip-flop structure with embedded logic. A conventional D-type flip-flop may be viewed as realizing an identity function, in which the output is simply the value of the input sampled at the clock edge. In contrast, the proposed multi-input flip-flop, named PNAND, can be configured to realize one of a family of Boolean functions called threshold functions. In essence, the PNAND is a circuit implementation of the well-known binary perceptron. Unlike other reconfigurable circuits, a PNAND can be configured by simply changing the assignment of signals to its inputs. Using a standard cell library of such gates, a technology mapping algorithm can be applied to transform a given netlist into one with an optimal mixture of conventional logic gates and threshold gates. This approach was used to fabricate a 32-bit Wallace tree multiplier and a 32-bit Booth multiplier in 65nm LP technology. Simulation and chip measurements show more than 30% improvement in dynamic power and more than 20% reduction in core area. The functional yield of the PNAND decreases with geometry and voltage scaling. The second part of this research investigates the use of two mechanisms to improve the robustness of the PNAND circuit architecture: one is the use of forward and reverse body biases to change the device threshold, and the other is the use of RRAM devices for low-voltage operation. The third part of this research focuses on the design of flip-flops with non-volatile storage. Spin-transfer torque magnetic tunnel junctions (STT-MTJ) are integrated with both the conventional D-flip-flop and the PNAND circuits to implement non-volatile logic (NVL). These non-volatile-storage-enhanced flip-flops are able to save the state of the system locally when a power interruption occurs. However, manufacturing variations in the STT-MTJs and in the CMOS transistors significantly reduce the yield, leading to an overly pessimistic design and, consequently, higher energy consumption. A detailed analysis of the design trade-offs in the driver circuitry for performing backup and restore, and a novel method to design the energy-optimal driver for a given yield, are presented. Efficient designs of two non-volatile flip-flop (NVFF) circuits are presented, in which the backup time is determined on a per-chip basis, minimizing the energy wastage while satisfying the yield constraint. To achieve a yield of 98%, the conventional approach would have to expend nearly 5X more energy than the minimum required, whereas the proposed tunable approach expends only 26% more energy than the minimum. A non-volatile threshold-gate architecture, NV-TLFF, is designed with the same backup and restore circuitry in 65nm technology. The embedded logic in NV-TLFF compensates for the performance overhead of NVL. This leads to the possibility of zero-overhead non-volatile datapath circuits. An 8-bit multiply-and-accumulate (MAC) unit is designed to demonstrate the performance benefits of the proposed architecture. Based on the results of HSPICE simulations, the MAC circuit with the proposed NV-TLFF cells is shown to consume at least 20% less power and area as compared to the circuit designed with conventional DFFs, without sacrificing any performance.
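    The PNAND cell above realizes threshold functions, i.e., it behaves as a binary perceptron whose output is 1 when a weighted sum of its binary inputs reaches a threshold. The sketch below evaluates such a function in software; the weights and threshold are an illustrative example (a 3-input majority function), not a specific PNAND configuration from the dissertation, where the realized function is selected by input assignment rather than explicit arithmetic.

```python
# A threshold (perceptron) function: output 1 when the weighted sum of the
# binary inputs reaches the threshold T. The weights and threshold below form
# a 3-input majority gate and are illustrative only.

def threshold_gate(inputs, weights, T):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= T)

# Enumerate the truth table of the majority function (weights 1,1,1 and T = 2).
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print((a, b, c), "->", threshold_gate((a, b, c), (1, 1, 1), 2))
```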

    Research on performance enhancement for electromagnetic analysis and power analysis in cryptographic LSI

    Degree system: new; Report number: Kou 3785; Degree type: Doctor of Engineering; Date of conferral: 2012/11/19; Waseda University diploma number: Shin 6161. Waseda University.

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration level for integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, specifications of the converters in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement for certain applications and/or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling (the short-channel effects) are also looming large as technology strides into the deep-submicron regime. The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented with different requirements on each function. Practical realizations show the trend that, to first order, converter power is directly proportional to sampling rate. However, the required power dissipation grows nonlinearly as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling rate specification. This thesis is unique in that it covers the whole spectrum of design, test, debugging, and calibration of multi-step A/D converters; it incorporates the development of circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter, and to enhance testing and debugging potential to detect errors dynamically, to isolate and confine faults, and to recover from and compensate for the errors continuously. The power efficiency achievable at high resolution in a multi-step converter, by combining parallelism and calibration and exploiting low-voltage circuit techniques, is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process. Lower power supply voltages significantly reduce noise margins and increase variations in process, device, and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those intended at the end of the design process. Those defects can cause various types of malfunctioning, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs associated with testing and debugging of complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated in the proposed ATPG, DfT and BIST methodologies.
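    The pipeline and two-step/multi-step architectures mentioned above resolve a few bits per stage and pass an amplified residue to the next stage. The sketch below models that process for an ideal converter; the stage count, per-stage resolution, and reference voltage are generic illustration values, not the architecture of the 12-bit, 80 MS/s prototype described in the thesis.

```python
# Illustrative ideal pipeline/multi-step A/D conversion: each stage resolves
# `bits_per_stage` bits with a coarse sub-ADC, subtracts the corresponding
# sub-DAC value, and amplifies the residue for the next stage. Parameters are
# generic, not those of the prototype described above.

def pipeline_adc(vin, vref=1.0, stages=6, bits_per_stage=2):
    code = 0
    residue = vin
    levels = 1 << bits_per_stage
    for _ in range(stages):
        d = min(int(residue / vref * levels), levels - 1)   # coarse sub-ADC decision
        code = (code << bits_per_stage) | d
        residue = (residue - d * vref / levels) * levels    # ideal MDAC: subtract, amplify
    return code

print(pipeline_adc(0.37))   # 12-bit code for a 0.37 V input with vref = 1 V -> 1515
```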
    Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation also imposes new challenges for test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process parameter variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good estimates of the parameters for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digital enhancing calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows for efficient usage of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation function calculation nor matrix inversion. The presented foreground calibration algorithm does not need any dedicated test signal and does not take up any part of the conversion time; it works continuously and with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from the silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
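    The error-estimation filter described above is characterized by a small number of operations per iteration and by avoiding correlation-function computation and matrix inversion, which matches the profile of an LMS-style update. The sketch below shows that style of adaptive filter; treating the thesis's algorithm as LMS-like is an assumption made for illustration, and the tap count, step size, and test signal are invented for the example.

```python
import numpy as np

# LMS-style adaptive filter update: one multiply-accumulate pass per iteration,
# no correlation matrices or inversions. Shown as a generic example of the class
# of algorithm described above, not the thesis's exact method.

def lms_step(weights, x, desired, mu=0.01):
    y = np.dot(weights, x)          # filter output
    e = desired - y                 # estimation error
    return weights + mu * e * x, e  # gradient-style weight update

# Example: identify a 4-tap system from noisy observations.
rng = np.random.default_rng(1)
true_w = np.array([0.5, -0.2, 0.1, 0.05])
w = np.zeros(4)
buf = np.zeros(4)
for _ in range(5000):
    buf = np.roll(buf, 1)
    buf[0] = rng.standard_normal()
    d = float(np.dot(true_w, buf)) + 0.01 * rng.standard_normal()
    w, _ = lms_step(w, buf, d)
print(np.round(w, 3))               # converges close to true_w
```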

    Fast Access Data Acquisition System
