688 research outputs found

    System-on-chip Computing and Interconnection Architectures for Telecommunications and Signal Processing

    This dissertation proposes novel architectures and design techniques targeting SoC building blocks for telecommunications and signal processing applications. Hardware implementation of Low-Density Parity-Check (LDPC) decoders is approached at both the algorithmic and the architectural level. LDPC codes are a promising coding scheme for future communication standards due to their outstanding error correction performance. This work proposes a methodology for analyzing the effects of finite precision arithmetic on error correction performance and hardware complexity; the methodology is employed throughout the co-design of the decoder. First, a low-complexity check node based on the P-output decoding principle is designed and characterized on a CMOS standard-cell library. Results demonstrate an implementation loss below 0.2 dB down to a BER of 10^-8 and complexity savings of up to 59% with respect to other works in the recent literature. High-throughput and low-latency issues are addressed with modified single-phase decoding schedules. A new "memory-aware" schedule is proposed that requires as little as 20% of the memory needed by traditional two-phase flooding decoding; in addition, throughput is doubled and logic complexity is reduced by 12%. These advantages are traded off against error correction performance, making the solution attractive only for long codes, such as those adopted in the DVB-S2 standard. The "layered decoding" principle is extended to codes not specifically conceived for this technique. The proposed architectures exhibit complexity savings in the order of 40% for both area and power consumption, while the implementation loss is smaller than 0.05 dB.

    Most modern communication standards employ Orthogonal Frequency Division Multiplexing (OFDM) as part of their physical layer. The core of OFDM is the Fast Fourier Transform (FFT) and its inverse, which are in charge of symbol (de)modulation. Requirements on throughput and energy efficiency call for hardware FFT implementations, while the ubiquity of the FFT suggests the design of parametric, re-configurable and re-usable IP hardware macrocells. In this context, this thesis describes an FFT/IFFT core compiler particularly suited to the implementation of OFDM communication systems. The tool employs an accuracy-driven configuration engine which automatically profiles the internal arithmetic and generates a core with minimum operand bit-widths and thus minimum circuit complexity. The engine performs a closed-loop optimization over three different internal arithmetic models (fixed-point, block floating-point and convergent block floating-point), using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies encompassing all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and for commercially available FPGA devices. Compared with other FFT core compilers, the proposed environment produces macrocells with lower circuit complexity and the same system-level performance (throughput, transform size and numerical accuracy).
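    The accuracy-driven engine can be pictured as a closed search loop around a bit-accurate model of the datapath. The Python sketch below illustrates the idea for the fixed-point model only (the actual compiler also evaluates block floating-point and convergent block floating-point), and all names here, such as quantize and minimum_bit_width, are illustrative rather than taken from the tool.

        import numpy as np

        def quantize(x, frac_bits):
            # Round to a fixed-point grid with frac_bits fractional bits;
            # np.round on complex data rounds real and imaginary parts separately.
            scale = 2.0 ** frac_bits
            return np.round(x * scale) / scale

        def fixed_point_fft(x, frac_bits):
            # Toy radix-2 DIT FFT with quantization after every operation,
            # standing in for the compiler's bit-accurate datapath model.
            n = len(x)
            if n == 1:
                return x
            even = fixed_point_fft(x[0::2], frac_bits)
            odd = fixed_point_fft(x[1::2], frac_bits)
            tw = quantize(np.exp(-2j * np.pi * np.arange(n // 2) / n), frac_bits)
            t = quantize(tw * odd, frac_bits)
            return np.concatenate([quantize(even + t, frac_bits),
                                   quantize(even - t, frac_bits)])

        def minimum_bit_width(signal, snr_budget_db, max_bits=24):
            # Closed-loop optimization: grow the operand bit-width until the
            # user-supplied numerical accuracy budget (SNR in dB) is met.
            reference = np.fft.fft(signal)
            for bits in range(4, max_bits + 1):
                err = fixed_point_fft(signal, bits) - reference
                snr_db = 10 * np.log10(np.sum(np.abs(reference) ** 2)
                                       / np.sum(np.abs(err) ** 2))
                if snr_db >= snr_budget_db:
                    return bits, snr_db
            raise ValueError("accuracy budget not reachable within max_bits")

        rng = np.random.default_rng(0)
        x = rng.standard_normal(256) + 1j * rng.standard_normal(256)
        print(minimum_bit_width(x / np.max(np.abs(x)), snr_budget_db=60.0))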
    The final part of this dissertation focuses on the Network-on-Chip (NoC) design paradigm, whose goal is to build scalable communication infrastructures connecting hundreds of cores. A low-complexity link architecture for mesochronous on-chip communication is discussed. The link relaxes the skew constraints in clock tree synthesis and enables frequency speed-up, power consumption reduction and faster back-end turnarounds. The proposed architecture reaches a maximum clock frequency of 1 GHz on a 65 nm low-leakage CMOS standard-cell library. In a complex test case with a full-blown NoC infrastructure, the link overhead is only 3% of chip area and 0.5% of leakage power consumption. Finally, a new methodology, named metacoding, is proposed. Metacoding generates correct-by-construction, technology-independent RTL codebases for NoC building blocks. The RTL coding phase is abstracted and modeled with an Object Oriented framework, integrated within a commercial tool for IP packaging (the Synopsys CoreTools suite). Compared with traditional coding styles based on pre-processor directives, metacoding produces 65% smaller codebases and reduces the number of configurations to verify by up to three orders of magnitude.
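    As a rough illustration of the metacoding idea, the Python sketch below replaces pre-processor directives with an object model that can only emit valid, technology-independent RTL. The class and the FIFO example are hypothetical; the real framework is integrated with the Synopsys CoreTools suite rather than standalone.

        class FifoBuffer:
            # Object model of a parametric NoC FIFO; emit() renders VHDL.
            # Invalid configurations are rejected before any code exists,
            # which is the "correct by construction" part of the approach.
            def __init__(self, name, width, depth):
                if depth <= 0 or depth & (depth - 1):
                    raise ValueError("depth must be a power of two")
                self.name, self.width, self.depth = name, width, depth

            def emit(self):
                addr_bits = self.depth.bit_length() - 1  # log2(depth)
                return (
                    f"entity {self.name} is\n"
                    f"  generic (WIDTH : integer := {self.width};\n"
                    f"           ADDR_BITS : integer := {addr_bits});\n"
                    f"  port (clk, rst, push, pop : in  std_logic;\n"
                    f"        din  : in  std_logic_vector(WIDTH - 1 downto 0);\n"
                    f"        dout : out std_logic_vector(WIDTH - 1 downto 0));\n"
                    f"end entity {self.name};\n"
                )

        # One object model covers every legal configuration of the block.
        print(FifoBuffer("noc_fifo", width=32, depth=8).emit())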

    Distributed product development approaches and system for achieving optimal design.

    The research in this dissertation attempts to provide theoretical approaches and design systems that support engineers who are located in different places and belong to different teams or companies in working collaboratively on product development.

    New paradigms, along with accompanying approaches and software systems, are necessary to support the collaborative design work, in a distributed design environment, of multidisciplinary engineering teams who have different knowledge, experience, and skills. Current research generally focuses on the development of online collaborative tools and of software frameworks that integrate and coordinate these tools. However, a gap exists between the needs of a distributed collaborative design paradigm and current collaborative design tools. On one side, design methodologies facilitating engineering teams' decision making are not well developed. In a distributed collaborative design paradigm, each team holds its own perspective on the product realization problem, and each team seeks design decisions that maximize the design performance in its own discipline. Design methodologies that coordinate these separate design decisions are essential to achieve successful collaboration. On the other side, the design of products is becoming more complex, and organizing a complex design process is a major obstacle to applying a distributed collaborative design paradigm in practice. Therefore, the principal research goal of this dissertation is to develop a collaborative multidisciplinary decision making methodology and a design process modeling technique that bridge the gap between the collaborative design paradigm and current collaborative design systems.

    In this dissertation, three major challenges are identified in the realization of a collaborative design paradigm: (i) development of a design method that supports multidisciplinary design teams in collaboratively solving coupled design problems, (ii) development of process modeling techniques that represent and improve complex collaborative design processes, and (iii) implementation of a testbed system that demonstrates the feasibility of enhancing current design systems to satisfy the needs of organizing a collaborative design process for collaborative decision making and the associated design activities.

    To overcome the first challenge, decision templates are constructed to exchange design information among interacting disciplines. Three game protocols from game theory are utilized to categorize the collaboration in decision making, and design formulations are used to capture the design freedom among coupled design activities.

    The second challenge is addressed by developing a collaborative design process modeling technique based on Petri nets. Petri nets are used to describe complex design processes and to construct different design process alternatives; these alternative Petri-net models are then analyzed to evaluate the alternatives and to select the appropriate process.

    The third challenge, implementation of a collaborative design testbed, is addressed by integrating existing Petri-net modeling tools into the design system. The testbed incorporates optimization software, collaborative design tools, and management software for product and process design to support group design activities.

    Two product realization examples are presented to demonstrate the applicability of the research and of the collaborative testbed. A simplified manipulator design example is used to explain collaborative decision making and design process organization, and a reverse engineering design example is introduced to verify the application of the collaborative design paradigm with design support systems in practice.
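    A minimal sketch of the place/transition machinery behind such process models, in Python: places hold tokens, and a transition fires when all its input places are marked. The class and the example tasks are illustrative, not taken from the dissertation.

        class PetriNet:
            def __init__(self, marking):
                self.marking = dict(marking)   # place name -> token count
                self.transitions = {}          # name -> (input places, output places)

            def add_transition(self, name, inputs, outputs):
                self.transitions[name] = (inputs, outputs)

            def enabled(self, name):
                inputs, _ = self.transitions[name]
                return all(self.marking.get(p, 0) > 0 for p in inputs)

            def fire(self, name):
                # Consume one token per input place, produce one per output place.
                inputs, outputs = self.transitions[name]
                assert self.enabled(name), f"{name} is not enabled"
                for p in inputs:
                    self.marking[p] -= 1
                for p in outputs:
                    self.marking[p] = self.marking.get(p, 0) + 1

        # Alternative orderings of the same design tasks become different nets,
        # which can then be compared before committing to a process.
        net = PetriNet({"spec_ready": 1})
        net.add_transition("concept_design", ["spec_ready"], ["concept_done"])
        net.add_transition("detail_design", ["concept_done"], ["design_done"])
        net.fire("concept_design")
        net.fire("detail_design")
        print(net.marking)   # {'spec_ready': 0, 'concept_done': 0, 'design_done': 1}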

    An Adaptive Design Methodology for Reduction of Product Development Risk

    Embedded systems' interaction with the environment inherently complicates the understanding of requirements and their correct implementation. Moreover, product uncertainty is highest during the early stages of development. Design verification is an essential step in the development of any system, especially for embedded systems. This paper introduces a novel adaptive design methodology which incorporates step-wise prototyping and verification. With each adaptive step, the product-realization level is enhanced while the level of product uncertainty decreases, thereby reducing the overall costs. The backbone of this framework is the development of a Domain Specific Operational (DOP) Model and the associated Verification Instrumentation for Test and Evaluation, developed on the basis of the DOP model. Together they generate functionally valid test sequences for carrying out prototype evaluation. The application of this method is sketched with the help of a case study, a 'Multimode Detection Subsystem'. Design methodologies can be compared by defining and computing a generic performance criterion such as the Average design-cycle Risk. For the case study, computing the Average design-cycle Risk shows that the adaptive method reduces the product development risk for a small increase in the total design cycle time.

    Comment: 21 pages, 9 figures
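    The abstract does not reproduce the formula behind the Average design-cycle Risk, so the Python sketch below assumes a common form, the expected rework cost per stage (defect probability times rework cost) averaged over the design cycle; the paper's exact definition may differ, and all numbers are illustrative.

        def average_design_cycle_risk(stages):
            # stages: list of (defect_probability, rework_cost) per design stage.
            risks = [p * cost for p, cost in stages]
            return sum(risks) / len(risks)

        # Step-wise prototyping drives uncertainty down early, so the later
        # (more expensive) stages carry a lower defect probability, at the
        # price of a small extra cost per stage for the added verification.
        waterfall = [(0.40, 1.0), (0.35, 3.0), (0.30, 9.0)]
        adaptive  = [(0.40, 1.2), (0.20, 3.2), (0.08, 9.2)]
        print(average_design_cycle_risk(waterfall))   # higher risk
        print(average_design_cycle_risk(adaptive))    # lower, per the paper's claim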

    FPGA in image processing supported by IOPT-Flow

    Image processing is widely used across diverse industries, and the OpenCV library is one of the tools most widely used to perform it. Although image processing algorithms can be implemented in software, they can also be implemented in hardware, where in some cases the execution time can be shorter than that achieved in software. The main goal of this work is to evaluate the use of VHDL, DS-Pnets, and IOPT-Flow to develop image processing systems in hardware, on FPGA-based platforms. To enable this, a validation platform was developed. A set of image processing algorithms was specified during this work in VHDL and/or in DS-Pnets, and validated using the IOPT-Flow validation tool and/or the Xilinx ISE Simulator. The automatic VHDL code generator from the IOPT-Flow framework was used to translate DS-Pnet models into implementation code, and the FPGA-based implementations were compared with software implementations supported by the OpenCV library. The created DS-Pnet models were added to a folder of the IOPT-Flow editor to create an image processing library. It was possible to conclude that DS-Pnets and their associated IOPT-Flow tools support, and simplify, the development of image processing systems. These tools are available online at http://gres.uninova.pt/iopt-flow/
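    As a concrete example of the software baseline used in such a comparison, the Python sketch below times a Sobel edge filter with the OpenCV library; the abstract does not name the individual algorithms, so the filter choice and frame size are assumptions made here for illustration.

        import time

        import cv2
        import numpy as np

        # Synthetic 1080p grayscale frame standing in for real input data.
        img = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)

        start = time.perf_counter()
        gx = cv2.Sobel(img, cv2.CV_16S, 1, 0, ksize=3)   # horizontal gradient
        gy = cv2.Sobel(img, cv2.CV_16S, 0, 1, ksize=3)   # vertical gradient
        edges = cv2.addWeighted(cv2.convertScaleAbs(gx), 0.5,
                                cv2.convertScaleAbs(gy), 0.5, 0)
        elapsed = time.perf_counter() - start
        print(f"software Sobel on a 1080p frame: {elapsed * 1e3:.2f} ms")

    Per-frame timings like this one give the software reference against which the DS-Pnet/VHDL implementations on FPGA can be evaluated.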

    Interactive modelling and simulation of heterogeneous systems
