18 research outputs found

    Parallel implementations of adaptive filters


    Survey on Different Architectures of DLMS Adaptive Filter

    ABSTRACT: The LMS adaptive filter has a wide range of applications, but it cannot support pipelined operation because of its recursive behaviour. Efficient architectures based on the delayed LMS (DLMS) algorithm have therefore been proposed to support pipelining. This paper briefly discusses the DLMS algorithm and different architectures of the DLMS adaptive filter. These architectures concentrate on increasing the usable clock frequency while minimizing the adaptation delay and area. A comparison of these architectures in terms of critical path and required hardware elements is presented. KEYWORDS: Adaptive Filter, LMS Algorithm, DLMS Algorithm, Critical Path, Pipelining. I. INTRODUCTION An adaptive filter adjusts its transfer function according to an optimizing algorithm, achieving its objective by modifying its own characteristics. Adaptive filters provide flexibility and accuracy in communication and control applications. The LMS (Least Mean Square) algorithm is widely used because of its computational simplicity and good convergence performance. The LMS adaptive filter has a wide range of applications in communication and digital signal processing (DSP), such as prediction, system identification, noise cancellation, and equalization. However, the direct implementation of the LMS algorithm has a long critical path due to the inner-product computation required to obtain the filter output.
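
    To make the pipelining issue concrete, here is a minimal NumPy sketch contrasting the standard LMS update with the delayed LMS update; the function names, the delay handling, and the caller-side buffering are illustrative assumptions, not taken from any of the surveyed architectures.

        import numpy as np

        def lms_step(w, x, d, mu):
            # Standard LMS: the error computed from the current output feeds
            # the same iteration's weight update, so the inner product lies
            # on a recursive critical path that cannot be pipelined.
            e = d - np.dot(w, x)
            return w + mu * e * x

        def dlms_step(w, x_old, e_old, mu):
            # Delayed LMS: the update uses the error (and its regressor) from
            # m samples earlier; breaking the recursion this way lets the
            # multiply-accumulate datapath be pipelined m stages deep.
            return w + mu * e_old * x_old

    In hardware, the adaptation delay m corresponds to an m-stage buffer of past errors and input vectors; for a small step size, DLMS converges to essentially the same solution as LMS, at the cost of somewhat slower adaptation.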

    Bit-level pipelined digit-serial array processors

    A new architecture for a high-performance digit-serial vector inner product (VIP) which can be pipelined to the bit level is introduced. The design of the digit-serial vector inner product is based on a new systematic design methodology using radix-2^n arithmetic. The proposed architecture allows a high level of bit-level pipelining to increase the throughput rate with minimum initial delay and minimum area. This gives designers greater flexibility in finding the best tradeoff between hardware cost and throughput rate. It is shown that a sub-digit pipelined digit-serial structure can achieve a higher throughput rate with much less area than an equivalent bit-parallel structure. A twin-pipe architecture that doubles the throughput rate of digit-serial multipliers, and consequently that of the digit-serial vector inner product, is also presented. The effects of the number of pipelining levels and of the twin-pipe architecture on throughput rate and hardware cost are discussed. A two's-complement digit-serial architecture which can operate on both negative and positive numbers is also presented.
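
    As a purely behavioural illustration of the digit-serial idea (not the paper's array processor), the following Python sketch computes a vector inner product by consuming one radix-2^n digit of each operand at a time; the names are made up, and only unsigned operands are modelled, whereas the paper also covers two's complement.

        def to_digits(value, n, num_digits):
            # Split an unsigned integer into radix-2**n digits, least
            # significant digit first.
            mask = (1 << n) - 1
            return [(value >> (n * i)) & mask for i in range(num_digits)]

        def digit_serial_inner_product(xs, ys, n=4, num_digits=8):
            # Behavioural model of a digit-serial vector inner product (VIP):
            # the datapath only ever multiplies n-bit digits, which is what
            # makes deep bit-level pipelining with small area possible.
            acc = 0
            for x, y in zip(xs, ys):
                xd = to_digits(x, n, num_digits)
                yd = to_digits(y, n, num_digits)
                for i, dx in enumerate(xd):
                    for j, dy in enumerate(yd):
                        acc += (dx * dy) << (n * (i + j))
            return acc

        assert digit_serial_inner_product([3, 5], [7, 2]) == 3 * 7 + 5 * 2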

    REAL-TIME ADAPTIVE PULSE COMPRESSION ON RECONFIGURABLE, SYSTEM-ON-CHIP (SOC) PLATFORMS

    New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for users. This situation has motivated the search for better processing solutions, including low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, a hardware implementation of adaptive pulse compression algorithms for real-time transceiver optimization is presented, based on a System-on-Chip architecture for reconfigurable hardware devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve the computation of computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential for solving the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through high-performance buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operations using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.
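
    As a schematic illustration of where the matrix-multiplication and matrix-inversion accelerators fit, here is a minimal NumPy sketch of a generic MMSE-style adaptive pulse compression filter for one block of received data; the snapshot construction, the diagonal loading factor, and all names are assumptions for illustration, not the thesis's implementation.

        import numpy as np

        def adaptive_pulse_compression(y, s, loading=1e-3):
            # Generic MMSE-style filter: w = (R + delta*I)^-1 s, where R is
            # the sample covariance of received-signal snapshots and s is the
            # transmitted waveform. Building R is the matrix-multiplication
            # task; the regularized solve stands in for matrix inversion.
            N = len(s)
            Y = np.lib.stride_tricks.sliding_window_view(y, N)  # snapshots
            R = (Y.conj().T @ Y) / Y.shape[0]                   # covariance
            w = np.linalg.solve(R + loading * np.eye(N), s)     # "inversion"
            return Y @ w.conj()                                 # range profile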

    On the Development of Distributed Estimation Techniques for Wireless Sensor Networks

    Wireless sensor networks (WSNs) have lately witnessed tremendous demand, as evidenced by the increasing number of day-to-day applications. The sensor nodes aim at estimating the parameters of their corresponding adaptive filters to achieve the desired response for the event of interest. Some of the pressing issues related to linear parameter estimation in WSNs are addressed in this thesis, mainly focusing on the reduction of communication overhead and latency and on robustness to noise. The first issue concerns the high communication overhead and latency of distributed parameter estimation techniques such as the diffusion least mean squares (DLMS) and incremental least mean squares (ILMS) algorithms. Subsequently, the poor performance demonstrated by these distributed techniques in the presence of impulsive noise is dealt with separately. The issue of source localization, i.e. estimation of the source bearing in WSNs, where existing decentralized algorithms fail to perform satisfactorily, is also resolved in this thesis; further, the same issue is treated separately, independent of nodal connectivity in the WSN. This thesis proposes two algorithms, namely the block diffusion least mean squares (BDLMS) and block incremental least mean squares (BILMS) algorithms, for reducing the communication overhead in WSNs. Theoretical and simulation studies demonstrate that the BDLMS and BILMS algorithms provide the same performance as DLMS and ILMS, but with a significant reduction in communication overhead per node. The latency is also reduced by a factor as high as the block size used in the proposed algorithms. With an aim to develop robustness towards impulsive noise, this thesis proposes three robust distributed algorithms, i.e. the saturation nonlinearity incremental LMS (SNILMS), saturation nonlinearity diffusion LMS (SNDLMS), and Wilcoxon norm diffusion LMS (WNDLMS) algorithms. The steady-state analysis of the SNILMS algorithm is carried out based on the spatial-temporal energy conservation principle. The theoretical and simulation results show that these algorithms are robust to impulsive noise, with the SNDLMS algorithm found to provide better performance than the SNILMS and WNDLMS algorithms. In order to develop a distributed source localization technique, a novel diffusion maximum likelihood (ML) bearing estimation algorithm is proposed which needs less communication overhead than centralized algorithms. After forming a random array with its neighbours, each sensor node estimates the source bearing by optimizing the ML function locally using a diffusion particle swarm optimization algorithm. The simulation results show that the proposed algorithm performs better than the centralized multiple signal classification (MUSIC) algorithm in terms of probability of resolution and root mean square error. Further, in order to make the proposed algorithm independent of nodal connectivity, a distributed in-cluster bearing estimation technique is proposed. Each cluster of sensors estimates the source bearing by optimizing the ML function locally in cooperation with other clusters. The simulation results demonstrate improved performance of the proposed method in comparison to the centralized and decentralized MUSIC algorithms and the distributed in-network algorithm.
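
    For context, here is a minimal NumPy sketch of one adapt-then-combine iteration of diffusion LMS, the recursion that the proposed BDLMS extends to blocks of samples; the shapes and names are illustrative assumptions.

        import numpy as np

        def diffusion_lms_step(W, X, D, A, mu):
            # W: K x M per-node weight estimates; X: K x M per-node
            # regressors; D: length-K desired samples; A: K x K
            # left-stochastic combination matrix (columns sum to 1,
            # nonzero only between neighbours).
            E = D - np.sum(W * X, axis=1)   # adapt: per-node output errors
            Psi = W + mu * E[:, None] * X   # adapt: local LMS updates
            return A.T @ Psi                # combine: neighbourhood averaging

    Exchanging the intermediate estimates Psi with neighbours at every sample is the communication cost that the block variants amortize over a whole block of updates.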

    Flexible Hardware Architectures for Retinal Image Analysis

    ABSTRACT: Millions of people all around the world are affected by diabetes. Several ocular complications such as diabetic retinopathy are caused by diabetes, which can lead to irreversible vision loss or even blindness if not treated. Regular comprehensive eye exams by eye doctors are required to detect the diseases at earlier stages and permit their treatment.
    As a preventive solution, a screening protocol involving the use of digital fundus images was adopted. This allows eye doctors to monitor changes in the retina and detect any presence of eye disease. This solution made regular examinations widely available, even to populations in remote and underserved areas. With the resulting large volume of retinal images, automated techniques to process them are required. Automated eye-disease detection techniques have been widely addressed by the research community and have now reached a high level of maturity, which allows the deployment of telemedicine solutions. In this thesis, we address the problem of processing a high volume of retinal images in a reasonable time, which is mandatory for the practical use of the developed techniques in a clinical context. We focus on two steps of the retinal image processing pipeline: retinal image quality assessment and retinal blood vessel segmentation. Evaluating the quality of retinal images after acquisition is a primary task for the proper functioning of any automated retinal image processing system. The role of this step is to classify the acquired images according to their quality, allowing an automated system to request a new acquisition in case of a poor-quality image. Several algorithms to evaluate the quality of retinal images have been proposed in the literature. However, even though the acceleration of this task is required, especially to enable mobile retinal image capture systems, it has not yet been addressed in the literature. In this thesis, we target an algorithm that computes image features to allow their classification as bad, medium, or good quality. We identified the computation of image features as a repetitive task that necessitates acceleration, and we were particularly interested in accelerating the Run-Length Matrix (RLM) algorithm. We proposed a first, fully software implementation in the form of an embedded system based on Xilinx's Zynq technology. To accelerate the feature computation, we designed a co-processor able to compute the features in parallel, implemented on the programmable logic of the Zynq FPGA. We achieved an acceleration of 30.1× over the software implementation for the feature-computation part of the RLM algorithm. Retinal blood vessel segmentation is a key task in the retinal image processing pipeline. Blood vessels and their characteristics are good indicators of retinal health, and their segmentation can also help to segment red lesions, indicators of diabetic retinopathy. Several techniques have been proposed in the literature to segment retinal blood vessels, and hardware architectures have also been proposed to accelerate them. The existing architectures lack performance and programming flexibility, especially for high-resolution images. In this thesis, we targeted two techniques, matched filtering and line operators. The matched filtering technique was targeted mainly because of its popularity. For this technique, we proposed two different architectures: a custom hardware architecture implemented on FPGA, and an Application-Specific Instruction-set Processor (ASIP) based architecture. The custom hardware architecture was optimized for area and timing to achieve higher performance than existing implementations, and it outperforms all of them in terms of throughput.
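
    As a generic illustration of the matched filtering technique that both architectures accelerate (in the style of Gaussian matched filters; this sketch is not the thesis's FPGA or ASIP design), the following NumPy/SciPy code computes the oriented matched filter response of a fundus image; kernel sizes and names are assumptions.

        import numpy as np
        from scipy import ndimage

        def matched_filter_kernel(sigma=2.0, length=9):
            # Zero-mean kernel modelling a vessel cross-section as an
            # inverted Gaussian profile across x, constant along y.
            half = int(3 * sigma)
            x = np.arange(-half, half + 1)
            profile = -np.exp(-x**2 / (2.0 * sigma**2))
            kernel = np.tile(profile, (length, 1))
            return kernel - kernel.mean()

        def matched_filter_response(img, n_angles=12, **kernel_args):
            # Convolve with the kernel rotated over n_angles orientations and
            # keep the maximum response per pixel; thresholding this map
            # yields a binary vessel segmentation.
            base = matched_filter_kernel(**kernel_args)
            angles = np.arange(n_angles) * (180.0 / n_angles)
            responses = [ndimage.convolve(img, ndimage.rotate(base, a))
                         for a in angles]
            return np.max(responses, axis=0)
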
    For the ASIP-based architecture, we identified two bottlenecks related to data access and to the computational intensity of the algorithm, and we designed two specific instructions added to the processor datapath. The ASIP was made 7.7× faster in execution time compared to its base architecture. The second technique for blood vessel segmentation is the Multi-Scale Line Detector (MSLD) algorithm, selected because of its performance and its potential to detect small blood vessels. However, the algorithm works at multiple scales, which makes it memory intensive. To solve this problem and allow the acceleration of its execution, we proposed a memory-efficient algorithm designed and implemented on FPGA. The proposed architecture drastically reduces the memory requirements of the algorithm by reusing computations and through SW/HW co-design. The two hardware architectures proposed for retinal blood vessel segmentation were made flexible enough to process both low- and high-resolution images. This was achieved by developing a specific compiler able to generate low-level HDL descriptions of the algorithm from a set of algorithm parameters. The compiler enabled us to optimize performance and development time. In this thesis, we introduce two novel architectures which are, to the best of our knowledge, the only ones able to process both low- and high-resolution images.
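
    For comparison, here is a compact NumPy/SciPy sketch of the multi-scale line detector idea; it is a software approximation for illustration (rotated averaging kernels, a fixed set of scales), not the memory-efficient FPGA architecture described above.

        import numpy as np
        from scipy import ndimage

        def line_response(img, length, n_angles=12):
            # Mean intensity along the best-aligned line of a given length,
            # minus the mean of the surrounding length x length window.
            window_mean = ndimage.uniform_filter(img, size=length)
            best = np.full(img.shape, -np.inf)
            for a in np.arange(n_angles) * (180.0 / n_angles):
                line = ndimage.rotate(np.ones((1, length)) / length, a)
                best = np.maximum(best, ndimage.convolve(img, line))
            return best - window_mean

        def msld(img, scales=(3, 7, 11, 15)):
            # Multi-scale line detector: combine responses across scales.
            # Each scale needs a full-image buffer, which is exactly the
            # memory pressure the FPGA design relieves by reusing
            # computations.
            return np.mean([line_response(img, s) for s in scales], axis=0)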

    Verification of Control Properties in the Polyhedral Model

    We propose a combination of heuristic methods to prove properties of control signals for regular systems defined by means of affine recurrence equations (AREs). We benefit from the intrinsic regularity of the polyhedral model to handle parameterized systems in a symbolic way. Despite some restrictions on the form of the equations we are able to handle, our techniques apply well to a useful set of properties and have led us to discover errors in actual systems. These techniques have been implemented in the MMAlpha environment.
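
    For illustration only (this toy system is not from the paper), consider a prefix-sum system defined by an affine recurrence equation over the parameterized domain {i | 0 <= i <= N-1}:

        \[
        Y(i) =
        \begin{cases}
        X(i), & i = 0,\\
        Y(i-1) + X(i), & 1 \le i \le N-1.
        \end{cases}
        \]

    A typical control property to verify symbolically, for every value of the size parameter N, is that an initialization control signal is true exactly on the sub-domain {i | i = 0} and false elsewhere.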

    Data bases and data base systems related to NASA's Aerospace Program: A bibliography with indexes

    This bibliography lists 641 reports, articles, and other documents introduced into the NASA scientific and technical information system during the period January 1, 1981 through June 30, 1982. The directory was compiled to assist in the location of numerical and factual data bases and data base handling and management systems.