
    FPGA design methodology for industrial control systems—a review

    This paper reviews the state of the art of field-programmable gate array (FPGA) design methodologies with a focus on industrial control system applications. The paper starts with an overview of FPGA technology development, followed by a presentation of design methodologies, development tools and relevant CAD environments, including the use of portable hardware description languages and system-level programming/design tools. These tools enable a holistic functional approach, with the major advantage of setting up a unified modeling and evaluation environment for complete industrial electronics systems. Three main design rules are then presented: algorithm refinement, modularity, and a systematic search for the best compromise between control performance and architectural constraints. An overview of the contributions and limits of FPGAs is also given, followed by a short survey of FPGA-based intelligent controllers for modern industrial systems. Finally, two complete and timely case studies are presented to illustrate the benefits of an FPGA implementation when using the proposed system modeling and design methodology: direct torque control for induction motor drives, and the control of a diesel-driven stand-alone synchronous generator with the help of fuzzy logic.
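    As a toy illustration of the third design rule above (the systematic search for a compromise between control performance and architectural constraints), the sketch below explores fixed-point word lengths of a control datapath under an FPGA resource budget. The accuracy and resource models, and all names, are hypothetical placeholders, not taken from the paper.

```python
# Illustrative sketch (not from the paper): search for the best compromise
# between control performance and architectural constraints, modeled here as
# fixed-point word-length refinement of a control datapath.
from dataclasses import dataclass

@dataclass
class Candidate:
    wordlength: int       # fixed-point bits used in the datapath
    control_error: float  # e.g. RMS tracking error from simulation (hypothetical)
    luts: int             # estimated FPGA resource usage (hypothetical model)

def explore(wordlengths, simulate, estimate_luts, max_luts):
    """Keep the most accurate candidate that still fits the FPGA budget."""
    best = None
    for w in wordlengths:
        cand = Candidate(w, simulate(w), estimate_luts(w))
        if cand.luts <= max_luts and (best is None or cand.control_error < best.control_error):
            best = cand
    return best

# Example with toy models: error shrinks and LUT cost grows with word length.
best = explore(range(8, 25),
               simulate=lambda w: 2.0 ** (-w),    # placeholder accuracy model
               estimate_luts=lambda w: 120 * w,   # placeholder resource model
               max_luts=2400)
print(best)
```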

    Enhancing Real-time Embedded Image Processing Robustness on Reconfigurable Devices for Critical Applications

    Nowadays, image processing is increasingly used in several application fields, such as biomedical, aerospace, or automotive. Within these fields, image processing serves both non-critical and critical tasks. For example, in automotive, cameras are becoming key sensors for increasing car safety, driving assistance and driving comfort. They have been employed for infotainment (non-critical), as well as for some driver assistance tasks (critical), such as Forward Collision Avoidance, Intelligent Speed Control, or Pedestrian Detection. The complexity of these algorithms poses a challenge for real-time image processing systems, requiring a high computing capacity that is usually not available in embedded processors. Hardware acceleration is therefore crucial, and devices such as Field Programmable Gate Arrays (FPGAs) best fit the growing demand for computational capability. These devices can assist embedded processors by significantly speeding up computationally intensive software algorithms. Moreover, critical applications impose strict requirements not only on real-time behavior, but also on device reliability and algorithm robustness. Technology scaling is highlighting reliability problems related to aging phenomena and to the increasing sensitivity of digital devices to external radiation events, which can cause transient or even permanent faults. These faults can lead to wrong information being processed or, in the worst case, to a dangerous system failure. In this context, the reconfigurable nature of FPGA devices can be exploited to increase system reliability and robustness by leveraging Dynamic Partial Reconfiguration features. The research work presented in this thesis focuses on the development of techniques for implementing efficient and robust real-time embedded image processing hardware accelerators and systems for mission-critical applications. Three main challenges have been faced and are discussed, along with the proposed solutions, throughout the thesis: (i) achieving real-time performance, (ii) enhancing algorithm robustness, and (iii) increasing overall system dependability. To ensure real-time performance, efficient FPGA-based hardware accelerators implementing selected image processing algorithms have been developed. The functionalities offered by the target technology and the algorithms' characteristics have been constantly taken into account while designing the accelerators, in order to efficiently tailor the algorithms' operations to the available hardware resources. On the other hand, the key idea for increasing the robustness of image processing algorithms is to introduce self-adaptivity at the algorithm level, in order to maintain, or even improve, the quality of results over a wide range of input conditions that are not always fully predictable at design time (e.g., noise level variations). This has been accomplished by measuring some characteristics of the input images at run time, and then tuning the algorithm parameters based on these estimations. The dynamic reconfiguration features of modern reconfigurable FPGAs have been extensively exploited to integrate run-time adaptivity into the designed hardware accelerators. Tools and methodologies have also been developed to increase overall system dependability during reconfiguration processes, thus providing safe run-time adaptation mechanisms.
In addition, taking into account the target technology and the environments in which the developed hardware accelerators and systems may be employed, dependability issues have been analyzed, leading to the development of a platform for quickly assessing the reliability and characterizing the behavior of hardware accelerators implemented on reconfigurable FPGAs when they are affected by such faults.
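    The self-adaptivity idea described above (estimating input characteristics at run time and tuning algorithm parameters accordingly) can be sketched as follows. This is a minimal software illustration with assumed details; the noise estimator and the parameter mapping are placeholders, not the accelerators developed in the thesis.

```python
# Illustrative sketch (assumptions, not the thesis design): estimate the noise
# level of each incoming frame at run time and derive a filter parameter from it.
import numpy as np

def estimate_noise_sigma(frame: np.ndarray) -> float:
    """Rough noise estimate: median absolute deviation of horizontal pixel
    differences, scaled to approximate a Gaussian sigma."""
    diff = np.diff(frame.astype(np.float32), axis=1)
    return 1.4826 * np.median(np.abs(diff - np.median(diff))) / np.sqrt(2.0)

def choose_filter_strength(sigma: float) -> int:
    """Map the estimated noise level to a filter window size (hypothetical mapping)."""
    if sigma < 5:
        return 3
    elif sigma < 15:
        return 5
    return 7

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in input frame
window = choose_filter_strength(estimate_noise_sigma(frame))
print("selected window size:", window)
```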

    Image sequence restoration by median filtering

    Median filters are non-linear filters that fall into the generic category of order-statistic filters. They are widely used for reducing random defects, commonly characterized by impulse or salt-and-pepper noise, in a single image. Motion estimation is the process of estimating the displacement vector between like pixels in the current frame and the reference frame. When dealing with a motion sequence, the motion vectors are the key to operating on corresponding pixels in several frames. This work explores the use of various motion estimation algorithms in combination with various median filter algorithms to provide noise suppression. The results are compared using two sets of metrics: performance-based and objective image quality-based. These results are used to determine the best motion estimation / median filter combination for image sequence restoration. The primary goals of this work are to implement a motion estimation and median filter algorithm in hardware and to develop and benchmark a flexible software alternative for the restoration process. Two median filter algorithms are unique to this work. The first is a modification of a single-frame adaptive median filter that applies motion compensation and temporal concepts. The other is an adaptive extension of the multi-level (ML3D) filter, called the adaptive multi-level (AML3D) filter. The extension provides adaptable filter window sizes to the multiple filter sets that comprise the ML3D filter. The adaptive median filter is capable of filtering an image in 26.88 seconds per frame and yields a PSNR improvement of 5.452 dB. The AML3D is capable of filtering an image in 14.73 seconds per frame and yields a PSNR improvement of 6.273 dB. The AML3D is therefore a suitable alternative to the other median filters.
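    For reference, a minimal sketch of a single-frame adaptive median filter, the baseline that the first of the two filters above modifies with motion-compensated, temporal concepts, is shown below. It is a plain software illustration, not the hardware implementation benchmarked in the work.

```python
# Illustrative single-frame adaptive median filter: the window grows until the
# local median is not itself an impulse, then the centre pixel is replaced only
# if it looks like an impulse. (The temporal/motion-compensated extension of the
# thesis is not shown.)
import numpy as np

def adaptive_median(img: np.ndarray, max_window: int = 7) -> np.ndarray:
    pad = max_window // 2
    padded = np.pad(img, pad, mode="edge").astype(np.int32)
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            for win in range(3, max_window + 1, 2):
                r = win // 2
                block = padded[y + pad - r:y + pad + r + 1,
                               x + pad - r:x + pad + r + 1]
                zmin, zmed, zmax = block.min(), int(np.median(block)), block.max()
                if zmin < zmed < zmax:                  # median is not an impulse
                    if not (zmin < img[y, x] < zmax):   # centre pixel is an impulse
                        out[y, x] = zmed
                    break
                # otherwise grow the window and try again
            else:
                out[y, x] = zmed                        # window maxed out: use last median
    return out
```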

    Design Techniques for Energy-Quality Scalable Digital Systems

    Energy efficiency is one of the key design goals in modern computing. Increasingly complex tasks are being executed on mobile devices and Internet of Things end-nodes, which are expected to operate for long time intervals, on the order of months or years, with the limited energy budgets provided by small form-factor batteries. Fortunately, many such tasks are error resilient, meaning that they can tolerate some relaxation in the accuracy, precision or reliability of internal operations without a significant impact on the overall output quality. The error resilience of an application may derive from a number of factors. The processing of analog sensor inputs measuring quantities from the physical world may not always require maximum precision, as the amount of information that can be extracted is limited by the presence of external noise. Outputs destined for human consumption may also contain small or occasional errors, thanks to the limited capabilities of our vision and hearing systems. Finally, some computational patterns commonly found in domains such as statistics, machine learning and operational research naturally tend to reduce or eliminate errors. Energy-Quality (EQ) scalable digital systems systematically trade off the quality of computations against energy efficiency, by relaxing the precision, the accuracy, or the reliability of internal software and hardware components in exchange for energy reductions. This design paradigm is believed to offer one of the most promising solutions to the pressing need for low-energy computing. Despite these high expectations, the current state of the art in EQ scalable design suffers from important shortcomings. First, the great majority of techniques proposed in the literature focus only on processing hardware and software components. Nonetheless, for many real devices, processing contributes only a small portion of the total energy consumption, which is dominated by other components (e.g. I/O, memory or data transfers). Second, in order to fulfill its promises and become widespread in commercial devices, EQ scalable design needs to achieve industrial-level maturity. This involves moving from purely academic research based on high-level models and theoretical assumptions to engineered flows compatible with existing industry standards. Third, the time-varying nature of error tolerance, both among different applications and within a single task, should become more central in the proposed design methods. This involves designing “dynamic” systems in which the precision or reliability of operations (and consequently their energy consumption) can be tuned at runtime, rather than “static” solutions in which the output quality is fixed at design time. This thesis introduces several new EQ scalable design techniques for digital systems that take the previous observations into account. Besides processing, the proposed methods apply the principles of EQ scalable design also to interconnects and peripherals, which are often relevant contributors to the total energy in sensor nodes and mobile systems, respectively. Regardless of the target component, the presented techniques pay special attention to the accurate evaluation of the benefits and overheads deriving from EQ scalability, using industrial-level models, and to the integration with existing standard tools and protocols. Moreover, all the works presented in this thesis allow the dynamic reconfiguration of output quality and energy consumption.
More specifically, the contribution of this thesis is divided into three parts. In the first body of work, the design of EQ scalable modules for processing hardware datapaths is considered. Three design flows are presented, targeting different technologies and exploiting different ways to achieve EQ scalability, i.e. timing-induced errors and precision reduction. These works are inspired by previous approaches from the literature, namely Reduced-Precision Redundancy and Dynamic Accuracy Scaling, which are re-thought to make them compatible with standard Electronic Design Automation (EDA) tools and flows, providing solutions to overcome their main limitations. The second part of the thesis investigates the application of EQ scalable design to serial interconnects, which are the de facto standard for data exchanges between processing hardware and sensors. In this context, two novel bus encodings are proposed, called Approximate Differential Encoding and Serial-T0, which exploit the statistical characteristics of data produced by sensors to reduce the energy consumption on the bus at the cost of controlled data approximations. The two techniques achieve different results for data of different origins, but share the common features of allowing runtime reconfiguration of the allowed error and of being compatible with standard serial bus protocols. Finally, the last part of the manuscript is devoted to the application of EQ scalable design principles to displays, which are often among the most energy-hungry components in mobile systems. The two proposals in this context leverage the emissive nature of Organic Light-Emitting Diode (OLED) displays to save energy by altering the displayed image, thus inducing an output quality reduction that depends on the amount of such alteration. The first technique implements an image-adaptive form of brightness scaling, whose outputs are optimized for the balance between power consumption and similarity with the input. The second approach achieves concurrent power reduction and image enhancement by means of an adaptive polynomial transformation. Both solutions focus on minimizing the overheads associated with a real-time implementation of the transformations in software or hardware, so that these do not offset the savings in the display. For each of these three topics, results show that the aforementioned goal of building EQ scalable systems that are compatible with existing best practices and mature enough to be integrated in commercial devices can be effectively achieved. Moreover, they also show that very simple and similar principles can be applied to design EQ scalable versions of different system components (processing, peripherals and I/O), and to equip these components with knobs for the runtime reconfiguration of the energy versus quality tradeoff.
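    The following sketch illustrates the general idea behind an error-bounded differential encoding of sensor data on a serial bus, in the spirit of the interconnect techniques described above. It is an assumed, simplified scheme with a runtime-configurable error threshold; it is not the actual Approximate Differential Encoding or Serial-T0 algorithm from the thesis.

```python
# Illustrative sketch (assumption, not the thesis's exact encoding): a differential
# serial encoding in which small sample-to-sample differences are transmitted as
# zero, trading a bounded approximation error for fewer bit transitions (and hence
# lower bus energy). max_error acts as the runtime-reconfigurable quality knob.

def encode(samples, max_error):
    last_sent, stream = 0, []
    for s in samples:
        diff = s - last_sent
        if abs(diff) <= max_error:
            diff = 0             # suppress small differences to keep the bus quiet
        stream.append(diff)
        last_sent += diff        # the decoder reconstructs exactly this value
    return stream

def decode(stream):
    value, out = 0, []
    for diff in stream:
        value += diff
        out.append(value)
    return out

samples = [100, 101, 103, 110, 111, 111, 109]
encoded = encode(samples, max_error=2)
print(encoded)            # many zeros -> fewer transitions on the line
print(decode(encoded))    # reconstruction stays within the configured error bound
```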

    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep-submicron era, resilient adaptable processing systems are sought that maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here, an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by the hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, a Motion Estimation (ME) engine, a Finite Impulse Response (FIR) filter, a Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low motion-activity scenes to 12.5% for high motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
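    A minimal sketch of health-metric-driven fault isolation through reconfiguration, in the spirit of the schemes described above, is given below. The divide-and-conquer strategy, the callback, and the threshold are illustrative assumptions; the actual FaDReS/PURE flows and their bounds are defined in the dissertation.

```python
# Illustrative sketch (not the FaDReS/PURE implementation): fault isolation driven
# only by a runtime health metric. Each trial relocates a subset of the design onto
# spare fabric ("reconfiguration") and checks whether the metric recovers, so the
# faulty region is found in a bounded, logarithmic number of reconfigurations.

def isolate_faulty_region(regions, relocate_and_measure, healthy_threshold):
    """regions: list of reconfigurable region ids.
    relocate_and_measure(moved): hypothetical callback that reconfigures the listed
    regions onto spares and returns the observed health metric."""
    suspects = list(regions)
    reconfigs = 0
    while len(suspects) > 1:
        half = suspects[:len(suspects) // 2]
        reconfigs += 1
        if relocate_and_measure(half) >= healthy_threshold:
            suspects = half                 # relocating this half restored health
        else:
            suspects = suspects[len(half):] # fault must be in the other half
    return suspects[0], reconfigs

# Toy model: region 5 is faulty; health recovers only when it is relocated.
faulty = 5
metric = lambda moved: 32.0 if faulty in moved else 26.0   # e.g. a PSNR-like metric
print(isolate_faulty_region(range(8), metric, healthy_threshold=30.0))
```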

    Adaptive Computationally Scalable Motion Estimation for the Hardware H.264/AVC Encoder


    Neuromorphic deep convolutional neural network learning systems for FPGA in real time

    Deep Learning algorithms have become one of the best approaches for pattern recognition in several fields, including computer vision, speech recognition, natural language processing, and audio recognition, among others. For vision tasks, convolutional neural networks stand out due to their relatively simple supervised training and their efficiency at extracting features from a scene. Nowadays, several implementations of convolutional neural network accelerators exist that manage to run these networks in real time. However, the number of operations and the power consumption of these implementations can be reduced by using a different processing paradigm, namely neuromorphic engineering. Neuromorphic engineering studies the behavior of biological neural processing systems with the purpose of designing analog, digital or mixed-signal systems that solve problems in ways inspired by how the human brain performs complex tasks, replicating the behavior and properties of biological neurons. Neuromorphic engineering tries to answer how our brain is capable of learning and performing complex tasks with high efficiency under the paradigm of spike-based computation. This thesis explores both frame-based and spike-based processing paradigms for the development of hardware architectures for visual pattern recognition based on convolutional neural networks. In this work, two FPGA implementations of convolutional neural network accelerator architectures for the frame-based paradigm, using OpenCL and SoC technologies, are presented. These are followed by a novel neuromorphic convolution processor for the spike-based processing paradigm, which implements the behaviour of the leaky integrate-and-fire neuron model. Furthermore, it reads the data in rows and is able to process multiple layers on the same chip. Finally, a novel FPGA implementation of the Hierarchy of Time Surfaces algorithm and a new memory model for spike-based systems are proposed.
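    As a point of reference, a software sketch of the leaky integrate-and-fire neuron model mentioned above is shown below. The parameters are arbitrary and the code is only a behavioural illustration, not the neuromorphic convolution processor itself.

```python
# Illustrative leaky integrate-and-fire neuron: the membrane potential leaks,
# integrates weighted input spikes, and emits an output spike (then resets)
# whenever it crosses the firing threshold. Parameters are arbitrary.
def lif_neuron(input_spikes, weight=0.4, leak=0.95, threshold=1.0):
    """Return the output spike train for a single neuron and input channel."""
    membrane, out = 0.0, []
    for spike in input_spikes:
        membrane = membrane * leak + weight * spike   # leak, then integrate
        if membrane >= threshold:                     # fire and reset
            out.append(1)
            membrane = 0.0
        else:
            out.append(0)
    return out

print(lif_neuron([1, 0, 1, 1, 0, 1, 1, 1]))   # e.g. [0, 0, 0, 1, 0, 0, 0, 1]
```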

    Micro Aerial Vehicles (MAV) Assured Navigation in Search and Rescue Missions: Robust Localization, Mapping and Detection

    This Master's thesis describes developments in robust localization, mapping and detection algorithms for Micro Aerial Vehicles (MAVs). The localization method proposes a seamless indoor-outdoor multi-sensor architecture. This algorithm is capable of using all or a subset of its sensor inputs to determine a platform's position, velocity and attitude (PVA). It relies on the Inertial Measurement Unit as the core sensor and monitors the status and observability of the secondary sensors to select the most suitable estimator strategy for each situation. Furthermore, it ensures a smooth transition between filter structures. This document also describes the integration mechanism for a set of common sensors such as GNSS receivers, laser scanners, and stereo and mono cameras. The mapping algorithm provides a fully automated, fast aerial mapping pipeline. It speeds up the process by pre-selecting the images using the flight plan and the onboard localization. Furthermore, it relies on Structure from Motion (SfM) techniques to produce an optimized 3D reconstruction of camera locations and sparse scene geometry. These outputs are used to compute the perspective transformations that project the raw images onto the ground and produce a geo-referenced map. Finally, these maps are fused with other domains in collaborative UGV and UAV mapping algorithms. The real-time aerial detection of victims is based on a thermal camera. The algorithm is composed of three steps. First, a normalization of the image is performed to suppress the background and extract the regions of interest. Then, the victim detection and tracking steps produce the real-time geo-referenced locations of the detections. The thesis also proposes the concept of a MAV Copilot, a payload composed of a set of sensors and algorithms that enhances the capabilities of any commercial MAV. To develop and validate these contributions, a prototype of a search and rescue MAV and of the Copilot has been developed. These developments have been validated in three large-scale demonstrations of search and rescue operations in the context of the European project ICARUS: a shipwreck in Lisbon (Portugal), an earthquake in Marche (Belgium), and the Fukushima nuclear disaster scenario in the euRathlon 2015 competition in Piombino (Italy).
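    A minimal sketch of the first step of the victim-detection pipeline (normalizing a thermal frame and extracting hot regions of interest) could look as follows. The estimator and threshold are assumptions made for illustration, not the thesis implementation.

```python
# Illustrative sketch (assumed details, not the thesis code): normalize a thermal
# frame to suppress the background and extract hot regions of interest, the first
# step of the victim-detection pipeline described above.
import numpy as np

def extract_hot_rois(thermal: np.ndarray, k: float = 2.5):
    """Return a boolean mask of pixels significantly warmer than the background."""
    background = np.median(thermal)
    spread = np.std(thermal) + 1e-6
    normalized = (thermal - background) / spread     # background-subtracted, unit-spread frame
    return normalized > k                            # hypothetical hotspot threshold

frame = np.random.normal(20.0, 1.0, (120, 160))      # stand-in thermal frame (deg C)
frame[40:48, 60:70] += 15.0                          # synthetic warm body
mask = extract_hot_rois(frame)
print("hot pixels found:", int(mask.sum()))
```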

    Architecture and algorithms for the implementation of digital wireless receivers in FPGA and ASIC: ISDB-T and DVB-S2 cases

    The first generation of terrestrial Digital Television (DTV) has been in service for over a decade. By 2013, several countries had already completed the transition from analog to digital TV broadcasting, most of them in Europe. In South America, after several studies and trials, Brazil adopted the Japanese standard with some innovations. Japan and Brazil started Digital Terrestrial Television Broadcasting (DTTB) services in December 2003 and December 2007 respectively, using Integrated Services Digital Broadcasting - Terrestrial (ISDB-T), also known as ARIB STD-B31. In June 2005, the Committee for the Information Technology Area (CATI) of the Brazilian Ministry of Science, Technology and Innovation (MCTI) approved the incorporation of the IC-Brazil Program into the National Program for Microelectronics (PNM). The main goals of IC-Brazil are the formal qualification of IC designers, support for the creation of semiconductor companies focused on IC projects within Brazil, and the attraction of semiconductor companies focused on the design and development of ICs in Brazil. The work presented in this thesis originated from the unique momentum created by the combination of the birth of digital television in Brazil and the creation of the IC-Brazil Program by the Brazilian government. Without this combination it would not have been possible to carry out these kinds of projects in Brazil. These projects have been a long and costly journey, albeit a scientifically and technologically worthy one, towards a state-of-the-art, low-complexity Brazilian DTV integrated circuit with good economy-of-scale prospects, since at the beginning of this project the ISDB-T standard had not yet been adopted by several countries, unlike DVB-T. During the development of the ISDB-T receiver proposed in this thesis, it became clear that, due to the continental dimensions of Brazil, DTTB would not be enough to cover the entire country with an open DTV signal, especially in remote locations far from high-density urban regions. The Eldorado Research Institute and Idea! Electronic Systems then foresaw that, in the near future, there would be an open distribution system for high-definition DTV over satellite in Brazil. Based on that, the Eldorado Research Institute decided that it would be necessary to create a new ASIC for broadcast satellite reception. At that time the DVB-S2 standard was the strongest candidate, and this assumption still holds today. Therefore, a new round of funding was requested from the MCTI, which was granted, in order to start the new project. This thesis discusses in detail the architecture and algorithms proposed for the implementation of a low-complexity Intermediate Frequency (IF) ISDB-T receiver on a CMOS Application Specific Integrated Circuit (ASIC). The architecture proposed here relies heavily on the COordinate Rotation DIgital Computer (CORDIC) algorithm, a simple and efficient algorithm well suited to VLSI implementations. The receiver copes with the impairments inherent to wireless channel transmission and to the receiver crystals. The thesis also discusses the methodology adopted and presents the implementation results. The receiver performance is presented and compared to that obtained by means of simulations. Furthermore, the thesis also presents the architecture and algorithms for a DVB-S2 receiver targeting ASIC implementation. However, unlike the ISDB-T receiver, only preliminary ASIC implementation results are introduced.
This was mainly done in order to obtain an early estimate of the die area, to show that the ASIC project is economically viable, and to catch possible bugs at an early stage. As in the case of the ISDB-T receiver, this receiver relies heavily on the CORDIC algorithm and was prototyped on an FPGA. The methodology used for the second receiver is derived from that used for the ISDB-T receiver, with minor additions given the project's characteristics.
    Rodrigues De Lima, E. (2016). Architecture and algorithms for the implementation of digital wireless receivers in FPGA and ASIC: ISDB-T and DVB-S2 cases [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/61967
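    Since both receivers are described as relying heavily on the CORDIC algorithm, a floating-point reference of rotation-mode CORDIC is sketched below. The fixed-point, shift-and-add hardware stages of the ASIC/FPGA designs are not shown; this is only an algorithmic illustration.

```python
# Illustrative rotation-mode CORDIC computing (cos, sin) of an angle by a series
# of micro-rotations through atan(2^-i); a floating-point reference, whereas a
# hardware implementation would use fixed-point shift-and-add stages.
import math

def cordic_sin_cos(angle, iterations=16):
    """Compute (cos(angle), sin(angle)) for |angle| <= pi/2 using CORDIC."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]  # rotation table
    gain = 1.0
    for a in angles:
        gain *= math.cos(a)        # accumulated CORDIC gain correction factor
    x, y, z = 1.0, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0                       # rotate toward residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * gain, y * gain                             # remove the CORDIC gain

print(cordic_sin_cos(math.pi / 6))   # approximately (0.8660, 0.5000)
```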