
    DESIGN AUTOMATION FOR LOW POWER RFID TAGS

    Radio Frequency Identification (RFID) tags are small, wireless devices capable of automated item identification, used in a variety of applications including supply chain management, asset management, automatic toll collection (EZ Pass), etc. However, designing these custom systems with traditional methods can take a hardware engineer months to develop and debug. In this dissertation, an automated, low-power flow for the design of RFID tags has been developed, implemented and validated. This dissertation presents the RFID Compiler, which permits high-level design entry using a simple description of the desired primitives and their behavior in ANSI-C. The compiler has different back-ends capable of targeting microprocessor-based or custom hardware-based tags. For the hardware-based tag, the back-end automatically converts the user-supplied behavior in C to low-power synthesizable VHDL optimized for RFID applications. The compiler also integrates a fast, high-level power macromodeling flow, which can be used to generate power estimates within 15% of industry CAD tools and to optimize the primitives and/or the behaviors. Compared to conventional practices, the RFID Compiler lets the user develop the entire design in a matter of days or weeks. The compiler has been used to implement standards such as ANSI, ISO 18000-7, ISO 18000-6C and ISO 18185-7. The automatically generated tag designs were validated by targeting microprocessors such as the AD Chips EISC and FPGAs such as the Xilinx Spartan 3. The corresponding ASIC implementation is comparable to conventionally designed commercial tags in terms of energy and area. Thus, the RFID Compiler permits the design of power-efficient, custom RFID tags by a wider audience with a dramatically reduced design cycle.
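    The abstract does not reproduce the compiler's actual input format, but to illustrate the idea of describing a tag primitive's behavior in ANSI-C, a hypothetical "read identifier" primitive might look like the following minimal sketch (all type, field and function names here are invented for illustration, not taken from the RFID Compiler):

```c
/* Hypothetical sketch of a tag primitive described in ANSI-C.
 * The RFID Compiler's real input conventions are not given in the
 * abstract; tag_state_t, prim_read_id, etc. are invented names. */
#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t tag_id[8];      /* unique tag identifier   */
    uint8_t user_mem[32];   /* small user memory bank  */
} tag_state_t;

/* Behavior of a "read identifier" primitive: copy the tag ID into the
 * response buffer and return the number of bytes to transmit.        */
int prim_read_id(const tag_state_t *tag, uint8_t *resp, int resp_max)
{
    if (resp_max < (int)sizeof(tag->tag_id))
        return -1;                       /* response buffer too small   */
    memcpy(resp, tag->tag_id, sizeof(tag->tag_id));
    return (int)sizeof(tag->tag_id);     /* bytes to send to the reader */
}
```

    In the flow described above, a behavioral description of this kind would be the high-level entry point from which the back-end generates either microprocessor code or low-power synthesizable VHDL.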

    Platform-based design, test and fast verification flow for mixed-signal systems on chip

    This research provides methodologies to enhance the design phase of mixed-signal systems on chip, from architectural space exploration and system study to verification of the whole system. At the beginning of the work, some innovative digital IPs were designed to provide efficient signal conditioning for sensor systems on-chip; these have been included in commercial products. The main focus then shifted to the creation of a re-usable and versatile test of the device after tape-out, which is close to becoming one of the major cost factors for IC companies, strongly linking it to the models' test-benches to avoid re-design phases and multi-environment scenarios and producing a very effective approach to a single, fast and reliable multi-level verification environment. These works have generated several publications in the scientific literature.

    The compound scenario concerning the development of sensor systems is presented in Chapter 1, together with an overview of the related market, with a particular focus on the latest MEMS and MOEMS technology devices and their applications in various segments.

    Chapter 2 introduces the state of the art for sensor interfaces: the generic sensor interface concept (based on sharing the same electronics among similar applications, achieving cost savings at the expense of area and performance loss) versus the Platform Based Design methodology, which overcomes the drawbacks of the classic solution by keeping generality at the highest design layers and customizing the platform for a target sensor to achieve optimized performance. An evolution of Platform Based Design, achieved by implementing the ISIF (Intelligent Sensor InterFace) platform in silicon, is then presented. ISIF is a highly configurable mixed-signal chip which allows designers to perform an effective design space exploration and to evaluate system performance directly on silicon, avoiding the critical and time-consuming analysis required by the standard platform-based approach.

    Chapter 3 describes the design of a smart sensor interface for conditioning next-generation MOEMS. The adoption of a new, high-performance and highly integrated technology allows us to integrate not only a versatile platform but also a powerful ARM processor and various IPs, so that the platform can serve not only as a conditioning platform but also as a processing unit for the application. A description of the various blocks is given, with particular emphasis on the IP developed to grant the highest degree of flexibility with the minimum area occupation. The architectural space evaluation and the application prototyping with ISIF enabled an effective, rapid and low-risk development of a new high-performance platform, achieving a flexible sensor system for MEMS and MOEMS monitoring and conditioning. The platform has been designed to cover very challenging test-benches, such as a laser-based projector device. In this way the platform can handle not only the sensor but also the whole system built around it, reducing the need for further electronics and providing an efficient test bench for the algorithms developed to drive the system. The high costs in ASIC development are mainly related to re-design phases caused by missing complete top-level tests, since the analog and digital design flows are verified separately.
Starting from these considerations, the last chapter presents a complete test environment for complex mixed-signal chips. A semi-automatic VHDL-AMS flow providing a fully matching top-level is described, and then an evolution towards fast, self-checking test development for both model and real-chip verification is proposed. By introducing a Python interface, the designer can easily perform interactive tests covering the verification of all features (e.g. calibration and trimming) during the design phase and then check them all with the same environment on the real chip after tape-out. This strategy has been tested on a 3D gyroscope for consumer applications, in collaboration with SensorDynamics AG.
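The thesis itself uses a Python interface for these self-checking tests; purely as a language-agnostic sketch of the underlying idea of reusing one test against both the simulation model and the real chip, register access can be hidden behind a small interface so the same check runs in either environment (all names and the register address below are hypothetical):

```c
/* Conceptual sketch, not the thesis' Python interface: one self-checking
 * test reused for both a simulation model and the real chip by abstracting
 * register access. All names and addresses are hypothetical.             */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t (*read_reg)(uint32_t addr);
    void     (*write_reg)(uint32_t addr, uint32_t value);
} dut_if_t;   /* bound either to the VHDL-AMS model or to the silicon */

/* Example self-checking test: write a trimming register and read it back. */
int test_trim_register(const dut_if_t *dut)
{
    const uint32_t TRIM_ADDR = 0x0040u;      /* hypothetical address */
    dut->write_reg(TRIM_ADDR, 0x5Au);
    uint32_t got = dut->read_reg(TRIM_ADDR);
    if (got != 0x5Au) {
        printf("trim test FAILED: got 0x%X\n", got);
        return 1;
    }
    printf("trim test passed\n");
    return 0;
}
```

The same test body can then be driven interactively during the design phase against the model and re-run unchanged on the taped-out device.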

    A VLSI synthesis of a Reed-Solomon processor for digital communication systems

    The Reed-Solomon codes have been widely used in digital communication systems such as computer networks, satellites, VCRs, mobile communications and high-definition television (HDTV), in order to protect digital data against erasures, random and burst errors during transmission. Since the encoding and decoding algorithms for such codes are computationally intensive, special-purpose hardware implementations are often required to meet real-time requirements.

    One motivation for this thesis is to investigate and introduce reconfigurable Galois field arithmetic structures which exploit the symmetric properties of available architectures. Another is to design and implement an RS encoder/decoder ASIC which can support a wide family of RS codes.

    An m-programmable Galois field multiplier which uses the standard basis representation of the elements is first introduced. It is then demonstrated that the exponentiator can be used to implement a fast inverter which outperforms the available inverters in GF(2^m). Using these basic structures, an ASIC design and synthesis of a reconfigurable Reed-Solomon encoder/decoder processor which implements a large family of RS codes is proposed. The design is parameterized in terms of the block length n, Galois field symbol size m, and error correction capability t for the various RS codes. The design has been captured using the VHDL hardware description language and mapped onto CMOS standard cells available in the 0.8-µm BiCMOS design kits for Cadence and Synopsys tools. The experimental chip contains 218,206 logic gates and supports values of the Galois field symbol size m = 3, 4, 5, 6, 7, 8 and error correction capability t = 1, 2, 3, ..., 16. Thus, the block length n is variable from 7 to 255. The error correction capability t and the Galois field symbol size m are pin-selectable.

    Since low design complexity and high throughput are desired in the VLSI chip, the algebraic decoding technique has been investigated instead of time-domain or transform-domain decoding. The encoder uses a self-reciprocal generator polynomial which structures the codewords in a systematic form. At the beginning of the decoding process, received words are stored in a first-in-first-out (FIFO) buffer as they enter the syndrome module. The Berlekamp-Massey algorithm is used to determine both the error locator and error evaluator polynomials. The Chien search and Forney's algorithm operate sequentially to solve for the error locations and error values respectively. The error values are exclusive-ORed with the buffered messages to correct the errors as the processed data leave the chip.
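    For readers unfamiliar with the arithmetic underlying such a multiplier, the following minimal sketch shows standard-basis multiplication in GF(2^8), the field most commonly used for byte-oriented RS codes. The thesis' multiplier is m-programmable (m = 3 to 8); this sketch fixes m = 8 and uses the common field polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D) purely for illustration:

```c
/* Illustrative standard-basis multiplication in GF(2^8) using the field
 * polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D). The hardware multiplier in
 * the thesis is m-programmable; this software sketch fixes m = 8.        */
#include <stdint.h>

static uint8_t gf256_mul(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    while (b) {
        if (b & 1u)
            p ^= a;                 /* "add" (XOR) the current multiple   */
        /* multiply a by x, reducing modulo the field polynomial */
        uint8_t carry = a & 0x80u;
        a <<= 1;
        if (carry)
            a ^= 0x1Du;             /* low byte of 0x11D                  */
        b >>= 1;
    }
    return p;
}
```

    The decoder stages named above (syndrome computation, Berlekamp-Massey, Chien search, Forney) are all built from repeated applications of exactly this kind of GF(2^m) multiply-accumulate, which is why a compact, reconfigurable multiplier dominates the design.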

    Design and Implementation of Fault Tolerant Adders on Field Programmable Gate Arrays

    Fault tolerance on various adder architectures implemented on Field Programmable Gate Arrays (FPGAs) is studied in this thesis. This involves developing error detection and correction techniques for the sparse Kogge-Stone adder and comparing them with Triple Modular Redundancy (TMR) techniques. Fault tolerance is implemented on a Kogge-Stone adder by taking advantage of the inherent redundancy in the carry tree. On a sparse Kogge-Stone adder, fault tolerance is realized by introducing additional ripple carry adders into the design. The implementation of this fault tolerance approach on the sparse Kogge-Stone adder was completed and verified by introducing faults either in the ripple carry adders or in the carry tree. Two types of Xilinx FPGAs were used in this study: the Spartan 3E and the Virtex 5. The fault tolerant adders were analyzed in terms of their delay and resource utilization as a function of adder width. The results of this research provide important design guidelines for the implementation of fault tolerant adders on FPGAs. The Triple Modular Redundancy Ripple Carry Adder (TMR-RCA) is the most resource-efficient approach for fault tolerant design on an FPGA due to its simplicity and its ability to take advantage of the fast-carry chain. However, for very large bit widths, there are indications that the sparse Kogge-Stone adder offers superior performance over an RCA when implemented on an FPGA. Two fault tolerant approaches were implemented using a sparse Kogge-Stone architecture. First, a fault tolerant sparse Kogge-Stone adder was designed by taking advantage of the existing ripple carry adders in the architecture and adopting an approach similar to the TMR-RCA, inserting two additional ripple carry adders into the design. Second, a graceful degradation approach was implemented with the sparse Kogge-Stone adder. In this approach, a faulty block is permanently replaced with a spare block. As the spare block is initially used for fault checking, the fault tolerant capability of the circuit is degraded in order to continue fault-free operation. Compared with the fault tolerant Kogge-Stone adder, the graceful degradation approach reduces adder delay by approximately 1 ns in measured results and 2 ns in synthesis results, independent of bit width, while the resource utilization of the two adders is similar.
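    As a quick reference for the TMR baseline against which the Kogge-Stone approaches are compared, the scheme amounts to triplicating the adder and taking a bitwise majority vote of the three results, so that a fault confined to any single replica is masked. A minimal software sketch of the voting logic (the thesis implements this as hardware on the FPGA; names here are illustrative only):

```c
/* Minimal sketch of the TMR idea applied to an adder: three independent
 * results are combined by a bitwise majority vote, masking a fault in any
 * single replica. In hardware these would be three separate adders.     */
#include <stdint.h>

static uint32_t majority3(uint32_t a, uint32_t b, uint32_t c)
{
    /* each output bit is 1 iff at least two of the three inputs are 1 */
    return (a & b) | (a & c) | (b & c);
}

static uint32_t tmr_add(uint32_t x, uint32_t y)
{
    /* modeled as three redundant computations of the same sum */
    uint32_t s0 = x + y;
    uint32_t s1 = x + y;
    uint32_t s2 = x + y;
    return majority3(s0, s1, s2);
}
```

    The sparse Kogge-Stone approaches described above aim for the same single-fault masking while reusing redundancy already present in the carry tree and its ripple carry adders, rather than triplicating the entire adder.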

    Digital electronic predistortion for optical communications

    The distortion of optical signals has long been an issue limiting the performance of communication systems. As transmission speeds increase, the effects of distortion become more prominent. Because of this, methods known from digital signal processing (DSP) are being introduced to compensate for them. Applying DSP to improve optical signals has been limited by the discrepancy between digital signal processing speeds and optical transmission speeds. However, sufficiently fast Field Programmable Gate Arrays (FPGAs) have now become available, making DSP experiments for optical transmission possible without costly ASIC implementation. This thesis focuses on Look-Up Table (LUT) based digital Electronic Predistortion (EPD) for optical transmission. Because it is only one of many possible implementations of EPD, it has to be placed in context with other EPD techniques and other distortion-combating techniques in general, especially since the different techniques can be combined. Building an actual transmitter means that compromises and decisions have to be made in the design and implementation of an EPD-based system. These are based on balancing the desire for optimal performance against technological and economic limitations, and are partly guided by optical simulations used to assess the performance. This thesis describes a novel experimental transmitter, built as part of this research, that applies LUT-based EPD to an optical signal. The experimental transmitter consists of a digital design (written in a hardware description language) for a pair of FPGAs and an analogue optical/electronic setup including two standard DAC integrated circuits. The DSP in the transmitter compensated for both chromatic dispersion and self-phase modulation. We achieved transmission of 10.7 Gb/s non-return-to-zero (NRZ) signals with a +4 dBm launch power over 450 km, keeping the required optical signal-to-noise ratio (OSNR) for a bit-error rate of 2x10^{-3} below 11 dB. In doing so we showed experimentally, for the first time, that nonlinear effects can be compensated with this approach and that the FPGA-DAC combination is a viable approach for an experimental setup.
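    The abstract does not detail the LUT organization, but the general principle of LUT-based predistortion is that each transmitted sample is looked up from a table indexed by a sliding window of neighbouring data bits, so that inter-symbol effects of dispersion are pre-compensated by precomputed waveform values instead of run-time filtering. A rough sketch under assumed parameters (the window length, table contents and names below are placeholders, not the thesis' actual design):

```c
/* Rough sketch of LUT-based electronic predistortion: one predistorted DAC
 * sample per bit, looked up from a table indexed by a sliding window of
 * data bits. WINDOW_BITS and the table contents are placeholders.        */
#include <stdint.h>
#include <stddef.h>

#define WINDOW_BITS 5                        /* assumed channel memory      */
#define LUT_SIZE    (1u << WINDOW_BITS)

/* lut[] would hold precomputed DAC codes for every bit pattern; it is left
 * zero-initialized here purely as a placeholder.                          */
static uint16_t lut[LUT_SIZE];

static void predistort(const uint8_t *bits, size_t n, uint16_t *dac_out)
{
    uint32_t window = 0;
    for (size_t i = 0; i < n; i++) {
        window = ((window << 1) | (bits[i] & 1u)) & (LUT_SIZE - 1u);
        dac_out[i] = lut[window];            /* table lookup replaces DSP math */
    }
}
```

    Shifting the computation into the table is what makes the scheme attractive for an FPGA-DAC transmitter: at run time only a narrow lookup per bit is needed, regardless of how complex the precomputed compensation is.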

    Automatic low-cost IP watermarking technique based on output mark insertions

    Today, although intellectual properties (IP) and their reuse are common, their use raises design security issues: illegal copying, counterfeiting, and reverse engineering. IP watermarking is an efficient way to detect an unauthorized IP copy or a counterfeit. In this context, many interesting solutions have been proposed. However, few combine the watermarking process with synthesis. This article presents a new solution: automatic, low-cost IP watermarking included in the high-level synthesis process. The proposed method differs from those cited in the literature in that the mark is not a material addition to the hardware, but is based on mathematical relationships between numeric input and output values at specified times. Implementation results on a Xilinx Virtex-5 FPGA show that the proposed solution requires lower area and timing overhead than existing solutions.
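    The abstract does not state which relationship is embedded, so the following is only a hypothetical illustration of how a relationship-based mark might be checked: at a few agreed clock cycles, the IP's output is expected to satisfy a simple arithmetic relation with its inputs, and an auditor verifies that relation on a recorded trace. The relation, structure names and key below are all invented for illustration:

```c
/* Purely hypothetical illustration of checking a relationship-based
 * watermark; the actual relation used by the cited method is not
 * described in the abstract.                                          */
#include <stdint.h>

typedef struct {
    uint32_t cycle;       /* clock cycle at which the mark is observable */
    uint32_t in_a, in_b;  /* stimulus applied at that cycle              */
    uint32_t out;         /* output observed at that cycle               */
} trace_point_t;

/* Returns 1 if every recorded point satisfies the (hypothetical) relation
 * out == (in_a + in_b) ^ key, i.e. the mark is present.                  */
static int watermark_present(const trace_point_t *tp, int n, uint32_t key)
{
    for (int i = 0; i < n; i++) {
        if (tp[i].out != ((tp[i].in_a + tp[i].in_b) ^ key))
            return 0;
    }
    return n > 0;
}
```

    Because such a mark lives in the numeric behavior at chosen instants rather than in added hardware, it can be woven in during high-level synthesis with little area or timing cost, which is the point of the proposed approach.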