    An optimized MAC based architecture for adaptive digital filter

    Filter design plays a vital role in signal processing, particularly in achieving the low power dissipation that portable devices demand. This paper proposes an efficient, flexible FIR filter structure that is adaptive and built around a multiply-accumulate (MAC) core. The most common algorithms for filter coefficient optimization are least mean squares (LMS) and recursive least squares (RLS). Although RLS outperforms LMS, its higher arithmetic complexity has kept it out of favor for real-time applications. The fundamental filter used here is an LMS-based tapped-delay-line filter, a practical choice for adaptive filtering with low computational cost. In the proposed work, an adjustable-coefficient filter using an optimized LMS approach is implemented to identify an unknown system. A 32-tap filter is analyzed, described in a hardware description language (HDL), and synthesized for field-programmable gate array (FPGA) devices. After placement and routing, the design shows good resource utilization: the implemented filter architecture reduces resource usage by 80% and raises the clock frequency by about five times compared with the previously reported architecture.
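
    To make the coefficient-update step concrete, here is a minimal NumPy sketch of LMS system identification with a tapped delay line. It is an illustrative sketch only: the step size, signal lengths, and the unknown system are assumed values, and the paper's fixed-point MAC datapath is not modeled.

        import numpy as np

        def lms_identify(x, d, num_taps=32, mu=0.01):
            """Adapt FIR weights w so the filter output tracks the desired d."""
            w = np.zeros(num_taps)        # adaptive coefficients
            buf = np.zeros(num_taps)      # tapped delay line
            e = np.zeros(len(x))
            for n in range(len(x)):
                buf = np.roll(buf, 1)     # shift the delay line by one sample
                buf[0] = x[n]
                y = w @ buf               # one multiply-accumulate per tap
                e[n] = d[n] - y           # error against the desired response
                w += mu * e[n] * buf      # LMS update: w <- w + mu * e * x
            return w, e

        # Demo: identify a hypothetical unknown 32-tap system from noisy data.
        rng = np.random.default_rng(0)
        h = rng.standard_normal(32)
        x = rng.standard_normal(5000)
        d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
        w, _ = lms_identify(x, d)
        print("relative misalignment:", np.linalg.norm(w - h) / np.linalg.norm(h))

    In hardware, the same loop maps naturally onto a single time-shared MAC unit, which is what makes a MAC-based architecture compact.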

    Real-Time Adaptive Pulse Compression on Reconfigurable, System-on-Chip (SoC) Platforms

    New radar applications need to run complex algorithms and process large quantities of data to generate useful information for users. This situation has motivated the search for better processing solutions, including low-power high-performance processors, efficient algorithms, and high-speed interfaces. This work presents a hardware implementation of adaptive pulse compression algorithms for real-time transceiver optimization, based on a system-on-chip architecture for reconfigurable hardware devices. The study also evaluates dedicated coprocessors as hardware accelerator units that speed up compute-intensive tasks such as matrix multiplication and matrix inversion, which are essential for solving the covariance matrix, and presents the trade-offs between latency and hardware utilization. Moreover, the system architecture takes advantage of the embedded processor, interconnected with the logic resources through high-performance buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested in real-time operation using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.
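
    As a rough illustration of the per-range-cell computation such accelerators target, the following NumPy sketch performs one MMSE-style refinement pass: for each cell it assembles a structured covariance matrix from shifted copies of the waveform, weighted by a matched-filter power estimate, and solves a small linear system. This is a simplified stand-in for the implemented algorithm; the waveform handling, noise level, per-cell scaling, and iteration count are all assumptions.

        import numpy as np

        def apc_pass(y, s, noise_power=1e-3):
            """One MMSE-style pass: per-cell filter w = (C + R)^-1 s."""
            N = len(s)
            L = len(y) - N + 1                  # number of range cells
            # Initial range-profile power estimate from the matched filter.
            x_mf = np.correlate(y, s, mode="valid") / np.vdot(s, s).real
            rho = np.abs(np.concatenate([np.zeros(N - 1), x_mf,
                                         np.zeros(N - 1)])) ** 2
            # Rows of S hold the waveform slid by n cells, n = -(N-1)..N-1.
            S = np.zeros((2 * N - 1, N), dtype=complex)
            for n in range(-(N - 1), N):
                lo, hi = max(0, n), min(N, N + n)
                S[n + N - 1, lo:hi] = s[lo - n:hi - n]
            R = noise_power * np.eye(N)
            x_apc = np.zeros(L, dtype=complex)
            for l in range(L):
                C = (S.T * rho[l:l + 2 * N - 1]) @ S.conj()  # structured covariance
                w = np.linalg.solve(C + R, s)   # matrix inversion: the hot spot
                x_apc[l] = np.vdot(w, y[l:l + N])
            return x_apc

    A real implementation iterates such passes and, as the abstract notes, offloads the per-cell matrix products and inversions to dedicated coprocessors, trading latency against hardware utilization.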

    Flexible Hardware Architectures for Retinal Image Analysis

    Millions of people around the world are affected by diabetes. Diabetes causes several ocular complications, such as diabetic retinopathy, which can lead to irreversible vision loss or even blindness if left untreated. Regular comprehensive eye exams by eye doctors are required to detect such diseases at earlier stages and permit their treatment.
    As a preventive solution, a screening protocol involving the use of digital fundus images was adopted. This allows eye doctors to monitor changes in the retina and detect any presence of eye disease, and it has made regular examinations widely available, even to populations in remote and underserved areas. With the resulting large amount of retinal images, automated techniques to process them are required. Automated eye-disease detection techniques have been widely addressed by the research community and have now reached a high level of maturity, which allows the deployment of telemedicine solutions. In this thesis, we address the problem of processing a high volume of retinal images in a reasonable time; this is mandatory to allow the practical use of the developed techniques in a clinical context. We focus on two steps of the retinal image processing pipeline: retinal image quality assessment and retinal blood vessel segmentation. Evaluating the quality of retinal images after acquisition is essential for the proper functioning of any automated retinal image processing system. The role of this step is to classify the acquired images according to their quality, allowing an automated system to request a new acquisition when an image is of poor quality. Several algorithms to evaluate the quality of retinal images have been proposed in the literature. However, even though accelerating this task is required, especially to allow the creation of mobile systems for capturing retinal images, its acceleration has not yet been addressed in the literature. In this thesis, we target an algorithm that computes image features to allow classification of images as bad, medium, or good quality. We identified the computation of image features as a repetitive task that necessitates acceleration, and we were particularly interested in accelerating the Run-Length Matrix (RLM) algorithm. We first proposed a fully software implementation in the form of an embedded system based on Xilinx's Zynq technology. To accelerate the feature computation, we then designed a co-processor able to compute the features in parallel, implemented on the programmable logic of the Zynq FPGA. We achieved an acceleration of 30.1× over the software implementation for the feature-computation part of the RLM algorithm. Retinal blood vessel segmentation is a key task in the retinal image processing pipeline. Blood vessels and their characteristics are good indicators of retina health, and their segmentation can also help to segment the red lesions that indicate diabetic retinopathy. Several techniques have been proposed in the literature to segment retinal blood vessels, and hardware architectures have been proposed to accelerate some of them, but the existing architectures fall short in performance and programming flexibility, especially for high-resolution images. In this thesis, we targeted two techniques: matched filtering and line operators. The matched filtering technique was targeted mainly because of its popularity. For this technique, we proposed two different architectures: a custom hardware architecture implemented on FPGA, and an Application-Specific Instruction-set Processor (ASIP) based architecture. The custom hardware architecture was optimized for area and throughput to achieve higher performance than existing implementations, and it outperforms all of them in terms of throughput.
    For the ASIP-based architecture, we identified two bottlenecks related to data access and to the computational intensity of the algorithm. We designed two specific instructions added to the processor datapath, making the ASIP 7.7× faster than its base architecture. The second technique for blood vessel segmentation is the Multi-Scale Line Detector (MSLD) algorithm, selected for its performance and its potential to detect small blood vessels. However, the algorithm works at multiple scales, which makes it memory intensive. To solve this problem and accelerate its execution, we proposed a memory-efficient algorithm designed and implemented on FPGA. The proposed architecture drastically reduces the memory requirements of the algorithm through computation reuse and software/hardware co-design. The two hardware architectures proposed for retinal blood vessel segmentation were made flexible enough to process both low- and high-resolution images. This was achieved by developing a specific compiler able to generate low-level HDL descriptions of the algorithm from a set of algorithm parameters; the compiler enabled us to optimize both performance and development time. In this thesis, we introduce two novel architectures which are, to the best of our knowledge, the only ones able to process both low- and high-resolution images.
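
    Returning to the quality-assessment step: a gray-level run-length matrix of the kind the accelerated feature computation produces can be sketched in a few lines of Python. The quantization depth, the single horizontal direction, and the feature shown are illustrative choices, not necessarily those used in the thesis.

        import numpy as np

        def run_length_matrix(img, levels=8):
            """Horizontal gray-level run-length matrix of a quantized image."""
            q = np.clip((img.astype(float) / (img.max() + 1e-9)
                         * levels).astype(int), 0, levels - 1)
            rlm = np.zeros((levels, img.shape[1]), dtype=np.int64)
            for row in q:
                run = 1
                for a, b in zip(row[:-1], row[1:]):
                    if a == b:
                        run += 1              # same gray level: run continues
                    else:
                        rlm[a, run - 1] += 1  # run ends: count it
                        run = 1
                rlm[row[-1], run - 1] += 1    # close the last run of the row
            return rlm

        def short_run_emphasis(rlm):
            """One classical RLM texture feature (SRE)."""
            j = np.arange(1, rlm.shape[1] + 1)
            return float((rlm / j ** 2).sum() / rlm.sum())

        rng = np.random.default_rng(1)
        img = rng.integers(0, 256, size=(64, 64))     # stand-in fundus patch
        print("SRE:", short_run_emphasis(run_length_matrix(img)))

    Each row, and each scan direction, can be processed independently, which is the kind of parallelism a feature co-processor can exploit.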

    Low Power Digital Filter Implementation in FPGA

    Digital filters suitable for hearing-aid applications have been developed from a low-power perspective and implemented in an FPGA in this dissertation. Hearing aids are primarily meant to improve hearing and speech comprehension. Digital hearing aids score over their analog counterparts because they provide flexible gain while also facilitating feedback reduction and noise elimination. Recent advances in DSP and microelectronics have led to the development of superior digital hearing aids. Many researchers have investigated algorithms suitable for hearing-aid applications that demand low noise, feedback cancellation, echo cancellation, and so on; the toughest challenge, however, is the implementation. Power and area are additional constraints: the device must consume as little power as possible to support extended battery life, and it should be as small as possible for portability. In this thesis we investigate digital filter algorithms that are hardware-configurable from a low-power viewpoint. The suitability of a decimation filter for hearing-aid applications is investigated, and the filter is implemented using a distributed-arithmetic approach. While designing this filter, we observed that a comb-half-band FIR-FIR design uses less hardware and consumes less power than a comb-FIR-FIR design. This filter was implemented on a Virtex-II Pro board from Xilinx, with the resource estimator from System Generator used to estimate resources. However, distributed arithmetic is highly serial in nature and has high latency, so the power consumption of this implementation was not very low. We therefore moved to an adaptive hearing aid based on a Booth-Wallace tree multiplier. This design was also implemented in an FPGA, and the power of the whole system was calculated with the Xilinx XPower analyzer. The hearing aid with the Booth-Wallace tree multiplier consumes about 25% less power than one using a plain Booth multiplier, so the Booth-Wallace version is the better choice. The above two approaches are purely algorithmic. We then combined circuit-level VLSI design with the algorithmic approach for further power reduction: a MAC-based FDF-FIR filter (algorithm) using a dual-edge-triggered (DET) latch (circuit) is used for the hearing-aid device. The DET-based MAC FIR filter consumes about 41% less power than the traditional single-edge-triggered (SET) one, and the proposed low-power latch provides power savings of up to 65% in the FIR filter. This technique consumes less power than previous approaches that apply low-power techniques only at the algorithmic abstraction level. The DET-based MAC FIR filter was tested for real-time validation and works correctly for various signals (speech, music, voice with music). The measured maximum gain of the filter is 27 dB, which matches most hearing-aid manufacturers' specifications. Hence the FDF FIR digital filter, in conjunction with the low-power latch, is a strong candidate for hearing-aid applications.
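
    Because distributed arithmetic (DA) is central to the decimation-filter design, here is a minimal Python sketch of a bit-serial DA dot product: the per-tap multiplications are replaced by one lookup per input bit plane into a precomputed table of coefficient sums, followed by shift-and-accumulate. It assumes unsigned samples for brevity (a real two's-complement DA design handles the sign bit with a subtraction), and the tap and sample values are illustrative.

        import numpy as np

        def da_lut(coeffs):
            """Precompute the 2^K-entry table of partial coefficient sums."""
            K = len(coeffs)
            return np.array([sum(c for c, bit in
                                 zip(coeffs, format(a, "0%db" % K)[::-1])
                                 if bit == "1")
                             for a in range(2 ** K)])

        def da_dot(samples, coeffs, bits=8):
            """Bit-serial distributed-arithmetic dot product (unsigned inputs)."""
            lut = da_lut(coeffs)
            acc = 0.0
            for b in range(bits):                     # one lookup per bit plane
                addr = 0
                for k, x in enumerate(samples):
                    addr |= ((int(x) >> b) & 1) << k  # gather bit b of every tap
                acc += lut[addr] * (1 << b)           # shift-accumulate, no multiplier
            return acc

        taps = [0.25, -0.5, 1.0, 0.125]
        window = [17, 3, 250, 96]                     # unsigned 8-bit samples
        print(da_dot(window, taps), "vs", sum(c * x for c, x in zip(taps, window)))

    The bit-serial loop explains both observations in the abstract: DA removes multipliers (small area) but needs one cycle per input bit (high latency).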

    Nascom System Development Plan: System Description, Capabilities and Plans

    The NASA Communications (Nascom) System Development Plan (NSDP), reissued annually, describes the organization of Nascom, how it obtains communication services, its current systems, its relationship with other NASA centers and International Partner Agencies, some major spaceflight projects that generate significant operational communication support requirements, and major Nascom projects in various stages of development or implementation.

    Etude, Modélisation et Amélioration des Performances des Convertisseurs Analogique Numérique Entrelacés dans le Temps

    In a context where communicating systems are flourishing, analog-to-digital converters (ADCs) must keep up with the demands of new telecommunications standards. A single converter cannot combine speed, precision, and low power consumption. In this work, we focus on a promising ADC structure based on the time interleaving of several converters, the time-interleaved ADC (TIADC). The sampling rate increases proportionally with the number of ADCs, but mismatches between the individual ADCs reduce the effective resolution of the TIADC. In this thesis, we contribute to the study of these converters, in particular to the losses caused by mismatches between the individual converters. The TIADC structure was modeled in a hardware-description environment. Several existing calibration solutions were simulated to verify their operation, leading us to propose two correction methods: an offline method aimed at instrumentation, and an online method aimed at telecommunications applications. The first method was verified against experimental data; the second was implemented in an FPGA and verified through tests and measurements.
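
    To illustrate the mismatch problem, a two-channel TIADC with gain and offset mismatch can be simulated in a few lines of NumPy; the mismatch values and tone placement below are illustrative assumptions.

        import numpy as np

        n = 4096
        k0 = 501                              # input tone bin (coherent sampling)
        x = np.sin(2 * np.pi * k0 * np.arange(n) / n)
        gains = np.array([1.0, 1.02])         # 2% gain mismatch between sub-ADCs
        offsets = np.array([0.0, 0.01])       # offset mismatch
        ch = np.arange(n) % 2                 # samples alternate between sub-ADCs
        y = gains[ch] * x + offsets[ch]
        spec = 20 * np.log10(np.abs(np.fft.rfft(y)) / (n / 2) + 1e-12)
        # Gain mismatch produces an image spur at fs/2 - f0, and offset mismatch
        # a tone at fs/2: these are the spurs calibration must estimate and cancel.
        print("signal bin:", round(spec[k0], 1), "dB;",
              "image spur:", round(spec[n // 2 - k0], 1), "dB")

    An offline method can estimate the mismatch from a recorded batch of samples, whereas an online method must track it during operation; the two correction methods described above target these two situations.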

    Exploiting Natural On-chip Redundancy for Energy Efficient Memory and Computing

    Power density is currently the primary design constraint across most computing segments and the main performance-limiting factor. For years, industry kept power density constant while increasing frequency and lowering transistor supply (Vdd) and threshold (Vth) voltages. However, Vth scaling has stopped because leakage current grows exponentially as Vth falls. Transistor count and integration density keep doubling every process generation (Moore's Law), but the power budget caps the amount of hardware that can be active at the same time, leading to dark silicon: with each new generation there are more resources available, yet we cannot fully exploit their performance potential. In recent years, different research trends have explored how to cope with dark silicon and unlock the energy efficiency of chips, including near-threshold voltage computing (NTC) and approximate computing. NTC aggressively lowers Vdd to values near Vth. This allows a substantial reduction in power, as dynamic power scales quadratically with supply voltage, and the resulting power reduction could be used to activate more chip resources and potentially achieve performance improvements. Unfortunately, Vdd scaling is limited by the tight functionality margins of on-chip SRAM transistors: when scaling Vdd down to near-threshold values, manufacture-induced parameter variations affect the functionality of SRAM cells, which eventually become unreliable. Many emerging applications, on the other hand, feature an intrinsic error-resilience property, tolerating a certain amount of noise. In this context, approximate computing takes advantage of this observation and exploits the gap between the level of accuracy required by the application and the level of accuracy delivered by the computation, provided that reducing the accuracy translates into an energy gain. However, deciding which instructions and data, and which techniques, are best suited for approximation still poses a major challenge. This dissertation contributes in these two directions. First, it proposes a new approach to mitigate the impact of SRAM failures due to parameter variation, enabling effective operation at ultra-low voltages. We identify two levels of natural on-chip redundancy: cache level and content level. The first arises from the replication of blocks in multi-level cache hierarchies. We exploit it with a cache management policy that allocates blocks to entries taking into account the nature of the cache entry and the use pattern of the block. This policy obtains performance improvements between 2% and 34% with respect to block disabling, a technique of similar complexity, while incurring no additional storage overhead. The latter (content-level redundancy) arises from the redundancy of data in real-world applications. We exploit it by compressing cache blocks to fit them into partially functional cache entries. At the cost of a slight overhead increase, we obtain performance within 2% of that of a cache built with fault-free cells, even when more than 90% of the cache entries have at least one faulty cell. Then, we analyze how the intrinsic noise tolerance of emerging applications can be exploited to design an approximate Instruction Set Architecture (ISA).
    Exploiting the ISA redundancy, we explore a set of techniques to approximate the execution of instructions across a set of emerging applications, pointing out the potential of reducing the complexity of the ISA and the trade-offs of the approach. In a proof-of-concept implementation, the ISA is shrunk in two dimensions: breadth (simplifying instructions) and depth (dropping instructions). This proof of concept shows that energy can be reduced by 20.6% on average at around 14.9% accuracy loss.
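
    Returning to the content-level result above, a toy Python model conveys the intuition behind it: with some per-byte fault probability at near-threshold Vdd, block disabling discards every entry containing a fault, while compression keeps any entry whose functional bytes can still hold a compressed block. The fault rate and compression ratio below are assumptions for illustration, not the dissertation's measured values.

        import random

        random.seed(1)
        BLOCK = 64                             # cache block size in bytes

        def faulty_bytes(fault_prob=0.02):
            """Number of faulty bytes in one entry at low Vdd (toy model)."""
            return sum(random.random() < fault_prob for _ in range(BLOCK))

        entries = [faulty_bytes() for _ in range(4096)]
        disabling = sum(e == 0 for e in entries)          # only fault-free entries
        compress = sum(BLOCK - e >= BLOCK * 0.5 for e in entries)  # 2:1 block fits
        print("block disabling keeps %.1f%% of entries" % (100 * disabling / 4096))
        print("compression keeps   %.1f%% of entries" % (100 * compress / 4096))

    Under this toy fault rate, disabling loses most of the cache while compression keeps nearly all of it, mirroring the abstract's point that performance stays within 2% of a fault-free cache even when most entries contain faulty cells.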

    Spectrum Sharing, Latency, and Security in 5G Networks with Application to IoT and Smart Grid

    The surge of mobile devices such as smartphones and tablets demands additional capacity. At the same time, the Internet of Things (IoT) and the smart grid, which connect numerous sensors, devices, and machines, require ubiquitous connectivity and data security. Additionally, some use cases, such as automated manufacturing, automated transportation, and the smart grid, require latency as low as 1 ms and reliability as high as 99.99%. To enhance throughput and support massive connectivity, sharing of the unlicensed spectrum (3.5 GHz, 5 GHz, and mmWave) is a potential solution, while addressing latency requires drastic changes in the network architecture. Fifth-generation (5G) cellular networks will embrace spectrum sharing and network architecture modifications to deliver throughput enhancement, massive connectivity, and low latency. To utilize the unlicensed spectrum, we propose a fixed-duty-cycle coexistence of LTE and WiFi, in which the duty cycle of LTE transmission can be adjusted based on the amount of data. In a second approach, a multi-armed-bandit-learning-based coexistence of LTE and WiFi is developed, in which the transmission duty cycle and downlink power are adapted through exploration and exploitation. This approach improves aggregated capacity by 33%, along with cell-edge and energy-efficiency enhancements. We also investigate the performance of LTE and ZigBee coexistence using the smart grid as a scenario. Regarding low latency, we summarize the existing work in the context of 5G networks into three domains: core, radio, and caching networks. We also identify the fundamental constraints on achieving low latency and give a general overview of exemplary 5G networks. Besides that, a loop-free, low-latency, local-decision-based routing protocol is derived in the context of the smart grid; this approach ensures low-latency and reliable data communication for stationary devices. To address data security in wireless communication, we introduce geo-location-based data encryption, along with node authentication by a k-nearest-neighbor algorithm. In a second approach, node authentication by a support vector machine, along with public-private key management, is proposed. Both approaches ensure data security without increasing packet overhead compared to existing approaches.
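
    A minimal epsilon-greedy sketch of the bandit-based duty-cycle adaptation follows; the candidate duty cycles, exploration rate, and reward curve (a stand-in for the measured aggregate LTE-plus-WiFi capacity) are all assumptions for illustration.

        import random

        random.seed(0)
        arms = [0.2, 0.4, 0.6, 0.8]        # candidate LTE duty cycles
        counts = [0] * len(arms)
        values = [0.0] * len(arms)         # running mean reward per arm

        def reward(duty):
            """Hypothetical aggregate-capacity reward, peaking near 0.6."""
            return -(duty - 0.6) ** 2 + random.gauss(0, 0.02)

        for t in range(2000):
            if random.random() < 0.1:      # explore a random duty cycle
                a = random.randrange(len(arms))
            else:                          # exploit the best estimate so far
                a = max(range(len(arms)), key=lambda i: values[i])
            r = reward(arms[a])
            counts[a] += 1
            values[a] += (r - values[a]) / counts[a]   # incremental mean update

        best = max(range(len(arms)), key=lambda i: values[i])
        print("learned duty cycle:", arms[best])

    The same exploration-exploitation loop extends to a second action dimension (downlink power), as in the approach described above.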

    Impacto das comunicações M2M em redes celulares de telecomunicações

    Machine-to-machine (M2M) communications are growing significantly, and some projections indicate that this trend will accelerate dramatically over the coming years. The traffic generated by this type of communication has very different characteristics from the data or voice traffic currently carried by cellular telecommunications networks. It is therefore essential to study the characteristics of the traffic associated with M2M communications in order to understand the effects it can have on cellular networks. This dissertation identifies and studies some of the characteristics of M2M traffic, with particular focus on the signaling generated by M2M services. Its main result is a set of models that enable the development of an analytic tool for service orchestration and network analysis. This tool orchestrates services and models traffic patterns on a UMTS network, with simultaneous analysis of the effects produced on the core segment of that network. Throughout this work, the problems are approached in ways that keep the results valid, or adaptable, beyond the scope of M2M communications alone.
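
    To see why M2M signaling stresses the core network differently from human traffic, consider many devices activating almost simultaneously, for example after a power outage in a smart-grid deployment. The sketch below contrasts synchronized arrivals, drawn from the beta profile that 3GPP TR 37.868 uses for its bursty machine-type-communication model, with uncoordinated uniform arrivals; the device count and bin width are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(42)
        n_devices, horizon = 30000, 10.0
        t_sync = rng.beta(3, 4, n_devices) * horizon     # synchronized burst
        t_unif = rng.uniform(0, horizon, n_devices)      # uncoordinated baseline
        bins = np.arange(0, horizon + 0.1, 0.1)          # 100 ms signaling slots
        peak_sync = np.histogram(t_sync, bins)[0].max()
        peak_unif = np.histogram(t_unif, bins)[0].max()
        print("peak arrivals per 100 ms: synchronized =", peak_sync,
              ", uniform =", peak_unif)

    The roughly twofold-or-higher peak in the synchronized case is the kind of signaling surge the orchestration and analysis tool described above is meant to model.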