6 research outputs found

    Digital signal processing: the impact of convergence on education, society and design flow

    The design and development of real-time, memory- and processor-hungry digital signal processing (DSP) systems was for decades accomplished on general-purpose microprocessors. The increasing need for high-performance DSP systems eventually made these microprocessors unattractive for such implementations. Various attempts to improve performance led to the use of dedicated digital signal processing devices such as DSP processors and the former heavyweight champion of electronics design, the Application-Specific Integrated Circuit (ASIC). The advent of RAM-based Field Programmable Gate Arrays (FPGAs) has changed the DSP design flow. Software algorithm designers can now take their DSP algorithms from inception right through to hardware implementation, thanks to the increasing availability of software-to-hardware design flows and hardware/software co-design. This has created industry demand for graduates with strong skills in both Electrical Engineering and Computer Science. This paper evaluates the impact of technology on DSP-based designs and hardware design languages, and examines how graduate and undergraduate courses have changed to suit this transition.
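
    To make the kind of algorithm concrete, the sketch below shows a small fixed-point FIR filter in C, a typical DSP kernel of the sort that is moved from a general-purpose processor to an FPGA; the tap count and coefficient values are illustrative only and are not taken from the paper.

        #include <stdint.h>
        #include <stdio.h>

        #define NUM_TAPS 4

        /* Illustrative low-pass coefficients in Q15 fixed-point format (each ~0.25). */
        static const int16_t coeff[NUM_TAPS] = { 8192, 8192, 8192, 8192 };

        /* Compute one output sample from the NUM_TAPS most recent input samples. */
        static int16_t fir_step(const int16_t *history)
        {
            int32_t acc = 0;
            for (int i = 0; i < NUM_TAPS; ++i)
                acc += (int32_t)coeff[i] * history[i];   /* multiply-accumulate */
            return (int16_t)(acc >> 15);                 /* scale back from Q15 */
        }

        int main(void)
        {
            int16_t history[NUM_TAPS] = { 1000, 2000, 3000, 4000 };
            printf("y = %d\n", fir_step(history));       /* prints the average, 2500 */
            return 0;
        }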

    FPGA design methodology for industrial control systems—a review

    This paper reviews the state of the art of field-programmable gate array (FPGA) design methodologies, with a focus on industrial control system applications. It starts with an overview of FPGA technology development, followed by a presentation of design methodologies, development tools, and relevant CAD environments, including portable hardware description languages and system-level programming/design tools. These tools enable a holistic functional approach, with the major advantage of providing a single modeling and evaluation environment for complete industrial electronics systems. Three main design rules are then presented: algorithm refinement, modularity, and a systematic search for the best compromise between control performance and architectural constraints. An overview of the contributions and limits of FPGAs is also given, followed by a short survey of FPGA-based intelligent controllers for modern industrial systems. Finally, two complete and timely case studies are presented to illustrate the benefits of an FPGA implementation when using the proposed system modeling and design methodology: direct torque control of induction motor drives, and fuzzy-logic control of a diesel-driven stand-alone synchronous generator.
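
    As a rough illustration of the first case study (not the paper's implementation), classic direct torque control quantises the torque and flux errors with hysteresis comparators and uses the results to index an inverter switching table; the C sketch below uses placeholder band widths and table entries, and omits the flux-sector selection a real controller needs.

        #include <stdio.h>

        typedef struct {
            double band;   /* half-width of the hysteresis band */
            int    state;  /* last comparator output (0 or 1)   */
        } hysteresis_t;

        /* Two-level hysteresis comparator: holds its previous output inside the band. */
        static int hysteresis(hysteresis_t *h, double error)
        {
            if (error >  h->band) h->state = 1;   /* demand an increase */
            if (error < -h->band) h->state = 0;   /* demand a decrease  */
            return h->state;
        }

        int main(void)
        {
            hysteresis_t flux   = { .band = 0.01, .state = 0 };
            hysteresis_t torque = { .band = 0.05, .state = 0 };

            /* Placeholder switching table: [flux demand][torque demand] -> voltage vector. */
            static const int vector_table[2][2] = { { 4, 2 },
                                                    { 5, 1 } };

            int df = hysteresis(&flux,   0.02);    /* flux error above band:   raise flux    */
            int dt = hysteresis(&torque, -0.10);   /* torque error below band: lower torque  */
            printf("selected voltage vector: V%d\n", vector_table[df][dt]);
            return 0;
        }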

    Efficient design and implementation of image processing algorithms on reconfigurable hardware using Handel-C

    Computer manipulation of images is generally referred to as Digital Image Processing (DIP). DIP is used in a variety of applications, including video surveillance, target recognition, and image enhancement. These applications are usually implemented in software but may use special-purpose hardware for speed. With advances in VLSI technology, hardware implementation has become an attractive alternative. Assigning complex computation tasks to hardware and exploiting the parallelism and pipelining in algorithms yield significant speedups in running time. In this thesis, image processing algorithms such as the median filter, basic morphological operators, convolution, and edge detection are implemented on an FPGA. A pipelined architecture of these algorithms is presented; the proposed architectures are capable of producing one output on every clock cycle. The hardware modeling was accomplished using Handel-C (DK2 environment). The algorithms were tested on standard image processing benchmarks, and the results are compared with those obtained in software.
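
    For reference, a minimal C model of one of these kernels, the 3x3 median filter, is sketched below; this is only an illustrative software version (the thesis implements the operator in Handel-C with a pipelined, one-output-per-cycle architecture), and border handling is simplified by skipping edge pixels.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        static int cmp_u8(const void *a, const void *b)
        {
            return (int)*(const uint8_t *)a - (int)*(const uint8_t *)b;
        }

        /* Median of the 3x3 neighbourhood centred at (x, y); border pixels are skipped. */
        static uint8_t median3x3(const uint8_t *img, int width, int x, int y)
        {
            uint8_t window[9];
            int k = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    window[k++] = img[(y + dy) * width + (x + dx)];
            qsort(window, 9, sizeof window[0], cmp_u8);  /* hardware would use a sorting network */
            return window[4];                            /* middle element is the median */
        }

        int main(void)
        {
            /* 4x4 toy image with one impulse-noise pixel (value 255). */
            const uint8_t img[16] = { 10,  12, 11, 13,
                                      12, 255, 11, 12,
                                      11,  12, 13, 10,
                                      13,  11, 12, 12 };
            printf("filtered centre pixel: %u\n", median3x3(img, 4, 1, 1));  /* prints 12 */
            return 0;
        }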

    High-level synthesis of triple modular redundant FPGA circuits with energy efficient error recovery mechanisms

    There is a growing interest in deploying commercial SRAM-based Field Programmable Gate Array (FPGA) circuits in space due to their low cost, reconfigurability, high logic capacity and rich I/O interfaces. However, their configuration memory (CM) is vulnerable to ionising radiation, which raises the need for effective fault-tolerant design techniques. This thesis provides the following contributions to mitigate the negative effects of soft errors in SRAM FPGA circuits. Triple Modular Redundancy (TMR) with periodic CM scrubbing or module-based CM error recovery (MER) are popular techniques for mitigating soft errors in FPGA circuits. However, this thesis shows that MER does not recover CM soft errors in logic instantiated outside the reconfigurable regions of TMR modules. To address this limitation, a hybrid error recovery mechanism, namely FMER, is proposed. FMER uses selective periodic scrubbing and MER to recover CM soft errors inside and outside the reconfigurable regions of TMR modules, respectively. Experimental results indicate that TMR circuits with FMER achieve higher dependability with less energy consumption than those using periodic scrubbing or MER alone. An essential component of MER and FMER is the reconfiguration control network (RCN), which transfers the minority reports of the TMR components, i.e., which TMR module, if any, needs recovery, to the FPGA's reconfiguration controller (RC). Although several reliable RCs have been proposed, a study of reliable RCNs has not previously been reported. This thesis fills this research gap by proposing a technique that transfers the circuit's minority reports to the RC via the configuration layer of the FPGA. This reduces the resource utilisation of the RCN and therefore its failure rate. Results show that the proposed RCN achieves higher reliability than alternative RCN architectures reported in the literature. The last contribution of this thesis is a high-level synthesis (HLS) tool, namely TLegUp, developed within the LegUp HLS framework. TLegUp triplicates Xilinx 7-series FPGA circuits during HLS rather than during the pre- or post-synthesis register-transfer-level flow stages, as existing computer-aided design tools do. Results show that TLegUp can generate non-partitioned TMR circuits with 500x less soft-error sensitivity than functionally equivalent non-triplicated baseline circuits, while utilising 3-4x more resources and operating at an 11% lower frequency.
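
    As background for the TMR and minority-report mechanisms discussed above, the C sketch below shows a bitwise 2-out-of-3 majority voter that also reports which replica (if any) disagrees; it is a behavioural illustration under a single-fault assumption, not the thesis's hardware, and all names and word widths are illustrative.

        #include <stdint.h>
        #include <stdio.h>

        typedef struct {
            uint32_t voted;     /* bitwise majority of the three replica outputs      */
            int      minority;  /* index of the disagreeing replica, or -1 if none     */
        } tmr_result_t;

        /* Vote three replica outputs and identify the minority (assumes at most one fault). */
        static tmr_result_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
        {
            tmr_result_t r;
            r.voted = (a & b) | (a & c) | (b & c);   /* bitwise 2-out-of-3 majority */
            if (a == b && b == c)   r.minority = -1;
            else if (a != r.voted)  r.minority = 0;
            else if (b != r.voted)  r.minority = 1;
            else                    r.minority = 2;
            return r;
        }

        int main(void)
        {
            /* Replica 2 has suffered an upset; the voter masks it and flags it for recovery. */
            tmr_result_t r = tmr_vote(0xCAFEu, 0xCAFEu, 0xCAF0u);
            printf("voted=0x%X minority=%d\n", r.voted, r.minority);
            return 0;
        }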

    Features extraction for low-power face verification

    Mobile communication devices now available on the market, such as so-called smartphones, are far more advanced than the first cellular phones that became very popular a decade ago. In addition to their historical purpose, namely enabling wireless vocal communications to be established nearly everywhere, they now provide most of the functionalities offered by computers. As such, they hold an ever-increasing amount of personal information and confidential data. However, the authentication method employed to prevent unauthorized access to the device is still based on the same PIN code mechanism, which is often set to an easy-to-guess combination of digits, or even disabled altogether. Stronger security can be achieved by resorting to biometrics, which verifies the identity of a person based on intrinsic physical or behavioral characteristics. Since most mobile phones are now equipped with an image sensor to provide digital camera functionality, biometric authentication based on the face modality is very interesting, as it does not require a dedicated sensor, unlike fingerprint verification for example. Its perceived intrusiveness is furthermore very low, and it is generally well accepted by users. The deployment of face verification on mobile devices, however, requires overcoming two major challenges, which are the main issues addressed in this PhD thesis. Firstly, images acquired by a handheld device in an uncontrolled environment exhibit strong variations in illumination conditions. The extracted features on which biometric identification is based must therefore be robust to such perturbations. Secondly, the amount of energy available on battery-powered mobile devices is tightly constrained, calling for algorithms with low computational complexity and for highly optimized implementations. To reduce the dependency on illumination conditions, a low-complexity normalization technique for feature extraction based on mathematical morphology is introduced in this thesis and evaluated in conjunction with the Elastic Graph Matching (EGM) algorithm. Robustness to other perturbations, such as occlusions or geometric transformations, is also assessed, and several improvements are proposed. In order to minimize power consumption, the hardware architecture of a coprocessor dedicated to feature extraction is proposed and described in VHDL. This component is designed to be integrated into a System-on-Chip (SoC) implementing the complete face verification process, including image acquisition, thereby enabling biometric face authentication to be performed entirely on the mobile device. Comparison of the proposed solution with state-of-the-art academic results and recently disclosed commercial products shows that the chosen approach is indeed considerably more energy-efficient.
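
    As a loose illustration of how mathematical morphology can suppress slowly varying illumination before feature extraction (a generic 1-D top-hat sketch, not the thesis's normalization technique), the C code below subtracts a morphological opening, used here as a background estimate, from a toy scanline; all sizes and sample values are placeholders.

        #include <stdint.h>
        #include <stdio.h>

        #define N 8   /* number of samples in the toy 1-D "scanline" */

        /* Grayscale erosion with a 3-sample structuring element, replicating borders. */
        static void erode3(const uint8_t *in, uint8_t *out)
        {
            for (int i = 0; i < N; ++i) {
                uint8_t l = in[i > 0 ? i - 1 : 0];
                uint8_t m = in[i];
                uint8_t r = in[i < N - 1 ? i + 1 : N - 1];
                uint8_t v = l < m ? l : m;
                out[i] = v < r ? v : r;
            }
        }

        /* Grayscale dilation with the same 3-sample structuring element. */
        static void dilate3(const uint8_t *in, uint8_t *out)
        {
            for (int i = 0; i < N; ++i) {
                uint8_t l = in[i > 0 ? i - 1 : 0];
                uint8_t m = in[i];
                uint8_t r = in[i < N - 1 ? i + 1 : N - 1];
                uint8_t v = l > m ? l : m;
                out[i] = v > r ? v : r;
            }
        }

        int main(void)
        {
            /* Toy scanline: a bright feature riding on a slowly rising background. */
            uint8_t img[N] = { 40, 45, 50, 120, 125, 60, 65, 70 };
            uint8_t ero[N], open[N];

            erode3(img, ero);        /* opening = dilation of the erosion */
            dilate3(ero, open);
            for (int i = 0; i < N; ++i)
                printf("%d ", img[i] - open[i]);   /* top-hat: feature minus background */
            printf("\n");
            return 0;
        }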