
    Combinations of adaptive filters

    Adaptive filters are at the core of many signal processing applications, ranging from acoustic noise suppression and echo cancellation [1] to array beamforming [2] and channel equalization [3], and on to more recent sensor network applications in surveillance, target localization, and tracking. A trending approach in this direction is to resort to in-network distributed processing, in which individual nodes implement adaptation rules and diffuse their estimates across the network [4], [5]. The work of Jerónimo Arenas-García and Luis Azpicueta-Ruiz was partially supported by the Spanish Ministry of Economy and Competitiveness (under projects TEC2011-22480 and PRI-PIBIN-2011-1266). The work of Magno M.T. Silva was partially supported by CNPq under Grant 304275/2014-0 and by FAPESP under Grant 2012/24835-1. The work of Vítor H. Nascimento was partially supported by CNPq under Grant 306268/2014-0 and FAPESP under Grant 2014/04256-2. The work of Ali Sayed was supported in part by NSF Grants CCF-1011918 and ECCS-1407712. We are grateful to the colleagues with whom we have shared discussions and coauthorship of papers along this research line, especially Prof. Aníbal R. Figueiras-Vidal.
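
    As a concrete illustration of the schemes surveyed here, the sketch below implements an adaptive convex combination of two LMS filters, one fast and one slow, with the mixing parameter adapted through a sigmoid activation. This is a minimal sketch of the standard scheme from this literature, not the paper's exact algorithm; the function name, step sizes, and filter length are illustrative.

    import numpy as np

    def convex_combination_lms(x, d, L=8, mu_fast=0.1, mu_slow=0.01, mu_a=1.0):
        """Combine a fast and a slow LMS filter of length L on input x and
        desired signal d; the mixing weight is lambda(n) = sigmoid(a(n))."""
        w1, w2 = np.zeros(L), np.zeros(L)      # fast and slow component filters
        a = 0.0                                # mixing activation
        y = np.zeros(len(d))
        for n in range(L, len(d)):
            u = x[n - L:n][::-1]               # regressor, most recent sample first
            y1, y2 = w1 @ u, w2 @ u
            lam = 1.0 / (1.0 + np.exp(-a))     # lambda in (0, 1)
            y[n] = lam * y1 + (1.0 - lam) * y2
            e, e1, e2 = d[n] - y[n], d[n] - y1, d[n] - y2
            w1 += mu_fast * e1 * u             # each component adapts on its own error
            w2 += mu_slow * e2 * u
            # stochastic-gradient step on the mixing activation; clipping keeps
            # lambda away from 0 and 1 so the combination never stops adapting
            a = float(np.clip(a + mu_a * e * (y1 - y2) * lam * (1.0 - lam), -4.0, 4.0))
        return y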

    A BLIND DECISION FEEDBACK EQUALIZER WITH EFFICIENT STRUCTURE-CRITERION SWITCHING CONTROL

    This paper proposes an improved method of structure-criterion switching control for a self-optimized blind decision feedback equalizer (DFE) scheme, which operates by switching between adaptation modes according to the mean square error (MSE) convergence state. The new switching control shortens the blind acquisition period of the DFE and, consequently, speeds up its effective convergence rate. The switching control is based on a variable switching threshold that combines the commonly used MSE estimate of the DFE’s output with the a posteriori error of the all-pole whitener performing front-end amplitude equalization during the blind operation mode. The efficiency of the DFE switching control is verified by simulations of a single-carrier system transmitting QAM signals over multipath channels.
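
    The mechanism at the heart of such switching control can be sketched as follows: an exponentially smoothed estimate of the output MSE drives the switch from a blind criterion (a CMA-style error is used here as a stand-in) to decision-directed adaptation. This is a minimal sketch of the generic fixed-threshold mechanism, not the paper's variable-threshold rule involving the whitener's a posteriori error; all names and constants are illustrative.

    import numpy as np

    def dfe_adaptation_error(y_eq, constellation, cma_radius, mse_est, threshold=0.1):
        """Adaptation error for one equalizer output sample y_eq: CMA-style
        blind error while the smoothed MSE estimate exceeds the threshold,
        decision-directed error afterwards."""
        if mse_est > threshold:                                  # blind mode
            return y_eq * (cma_radius - np.abs(y_eq) ** 2)
        d_hat = constellation[np.argmin(np.abs(constellation - y_eq))]
        return d_hat - y_eq                                      # decision-directed mode

    def update_mse_estimate(mse_est, e, beta=0.99):
        """Exponentially smoothed squared-error estimate that drives the switch."""
        return beta * mse_est + (1.0 - beta) * np.abs(e) ** 2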

    Steady-State Performance of an Adaptive Combined MISO Filter Using the Multichannel Affine Projection Algorithm

    The combination of adaptive filters is an effective approach to improving filtering performance. In this paper, we investigate the performance of an adaptive combination of two adaptive multiple-input single-output (MISO) filters, which can be easily extended to the case of multiple outputs. To generalize the analysis, we consider the multichannel affine projection algorithm (APA) to update the coefficients of the MISO filters, which increases the possibility of exploiting the capabilities of the filtering scheme. Using energy conservation relations, we derive the theoretical steady-state behavior of the proposed adaptive combination scheme. This analysis yields further theoretical insights relative to the single-channel combination scheme. Simulation results prove both the validity of the theoretical steady-state analysis and the effectiveness of the proposed combined scheme. The work of Danilo Comminiello, Michele Scarpiniti, and Aurelio Uncini has been supported by the project “Vehicular Fog energy-efficient QoS mining and dissemination of multimedia Big Data streams (V-FoG and V-Fog2)”, funded by Sapienza University of Rome, Bando 2016 and 2017. The work of Michele Scarpiniti and Aurelio Uncini has also been supported by the project “GAUChO – A Green Adaptive Fog Computing and networking Architectures”, funded by the MIUR Progetti di Ricerca di Rilevante Interesse Nazionale (PRIN) Bando 2015, grant 2015YPXH4W_004. The work of Luis A. Azpicueta-Ruiz is partially supported by the Spanish Ministry of Economy and Competitiveness (under grant DAMA (TIN2015-70308-REDT) and grants TEC2014-52289-R and TEC2017-83838-R) and by the European Union.
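
    For reference, one iteration of the affine projection update used by each component filter can be sketched as below; in the MISO case the regressors of all input channels are stacked into X. This is a minimal single-channel sketch under standard APA notation, with illustrative parameter names.

    import numpy as np

    def apa_update(w, X, d, mu=0.5, delta=1e-6):
        """One APA iteration.
        w : (L,) current coefficients
        X : (L, P) matrix of the P most recent regressors (projection order P)
        d : (P,) corresponding desired samples
        """
        e = d - X.T @ w                        # a priori error vector
        # project the error back through the regressors (regularized inverse)
        w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
        return w, e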

    Color Image Processing based on Graph Theory

    Computer vision is one of the fastest-growing fields at present and, along with other technologies such as Biometrics or Big Data, has become the focus of numerous research projects and is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks, such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays, images are affected by many factors that hinder optimal image quality, making digital image (pre-)processing a fundamental step prior to any other processing task. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition in poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these problems: smoothing is aimed at reducing noise, while sharpening aims at improving or recovering imprecise or damaged information in image details and edges whose insufficient sharpness or blurred content prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image; however, in many cases the filtering process blurs the edges and details of the image. Likewise, there are many sharpening techniques that try to combat this loss of information, but they do not contemplate the existence of noise in the image they process: applied to a noisy image, any sharpening technique will also amplify the noise. Although the intuitive solution would be to filter first and sharpen afterwards, this two-stage approach has proved not to be optimal: the filtering step may remove information that, in turn, cannot be recovered in the later sharpening step. In this PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify that pixel. As we will show, the proposed model is robust and versatile, potentially able to adapt to a wide variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems in image processing: smoothing and sharpening. The model has been studied in depth as a function of its threshold, the key parameter that ensures correct classification of the image pixels. To achieve high-performance image smoothing, we use the proposed model to determine whether a pixel belongs to a flat region or not, taking into account the need to achieve a high-precision classification even in the presence of noise. Thus, we build an adaptive soft-switching filter by employing the pixel classification to combine the outputs of a filter with high smoothing capability and a softer one for edge/detail regions; the result removes Gaussian noise from an image without blurring edges or losing detail information. Furthermore, another application of our model uses the pixel characterization to successfully perform simultaneous smoothing and sharpening of color images, thereby addressing one of the classical challenges in the image processing field: it combines two operations that are opposite by definition and overcomes the drawbacks of the two-stage approach. We compare all the proposed image processing techniques with other state-of-the-art methods and show that they are competitive from both an objective (numerical) and a visual evaluation point of view. Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
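
    The soft-switching smoothing step described in the abstract can be sketched as a per-pixel blend of a strong and a mild smoother. The graph-based classifier itself is specific to the thesis, so the sketch below simply assumes a precomputed per-pixel flatness weight in [0, 1]; the Gaussian filters and their parameters are illustrative stand-ins, not the thesis design.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def soft_switching_smooth(img, flatness, sigma_strong=2.0, sigma_mild=0.5):
        """img: (H, W, 3) float array; flatness: (H, W) array in [0, 1],
        where 1 marks flat regions (smooth hard) and 0 marks edges/details
        (preserve). Returns the per-pixel soft-switched output."""
        strong = np.stack([gaussian_filter(img[..., c], sigma_strong)
                           for c in range(3)], axis=-1)
        mild = np.stack([gaussian_filter(img[..., c], sigma_mild)
                         for c in range(3)], axis=-1)
        w = flatness[..., None]                # broadcast the weight over channels
        return w * strong + (1.0 - w) * mild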

    Adaptive Algorithms for Intelligent Acoustic Interfaces

    Modern speech communications are evolving towards a new direction that involves users in a more perceptive way: the immersive experience, which may be considered the “last-mile” problem of telecommunications. One of the main features of immersive communications is distant talking, i.e., hands-free (in the broad sense) speech communication without body-worn or tethered microphones, taking place in a multisource environment where interfering signals may degrade the communication quality and the intelligibility of the desired speech source. In order to preserve speech quality, intelligent acoustic interfaces may be used. An intelligent acoustic interface may comprise multiple microphones and loudspeakers, and its peculiarity is to model the acoustic channel in order to adapt to user requirements and environmental conditions. This is the reason why intelligent acoustic interfaces are based on adaptive filtering algorithms. Acoustic path modelling entails a set of problems that have to be taken into account in designing an adaptive filtering algorithm. Such problems may be generated by a linear or a nonlinear process and can be tackled, respectively, by linear or nonlinear adaptive algorithms. In this work we consider such modelling problems and propose novel, effective adaptive algorithms that allow acoustic interfaces to be robust against interfering signals, thus preserving the perceived quality of the desired speech signals. As regards linear adaptive algorithms, a class of adaptive filters based on the sparse nature of the acoustic impulse response has recently been proposed. We adopt this class of adaptive filters, named proportionate adaptive filters, and derive a general framework from which any linear adaptive algorithm can be derived. Using this framework, we also propose some efficient proportionate adaptive algorithms, expressly designed to tackle problems of a linear nature. On the other hand, in order to address problems deriving from a nonlinear process, we propose a novel filtering model that performs nonlinear transformations by means of functional links. Using this nonlinear model, we propose functional link adaptive filters, which provide an efficient solution to the modelling of a nonlinear acoustic channel. Finally, we introduce robust filtering architectures based on adaptive combinations of filters that allow acoustic interfaces to adapt more effectively to environmental conditions, thus providing a powerful means for immersive speech communications.
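
    As an illustration of the proportionate family referred to above, a minimal PNLMS-style update is sketched below: each coefficient receives a step size roughly proportional to its magnitude, which speeds up convergence on sparse impulse responses such as acoustic echo paths. This is a generic textbook variant, not the general framework derived in the thesis; the constants are illustrative.

    import numpy as np

    def pnlms_update(w, u, d, mu=0.5, rho=0.01, delta=1e-6):
        """One PNLMS iteration.
        w : (L,) coefficients, u : (L,) regressor, d : scalar desired sample."""
        e = d - w @ u
        # proportionate gains: floor each gain relative to the largest coefficient
        # so that small (or initially zero) taps keep adapting
        g = np.maximum(np.abs(w), rho * max(delta, np.max(np.abs(w))))
        g = g / np.sum(g)
        w = w + mu * e * (g * u) / (u @ (g * u) + delta)
        return w, e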

    Design of large polyphase filters in the Quadratic Residue Number System

    Pilot sequence based IQ imbalance estimation and compensation

    As modern radio access technologies strive to achieve progressively higher data rates and become increasingly reliable, minimizing the effects of hardware imperfections becomes a priority. One of those imperfections is in-phase/quadrature imbalance (IQI), caused by amplitude and phase response differences between the I and Q branches of the IQ demodulation process. IQI has been shown to deteriorate bit error rates and possibly compromise positioning performance, among other effects. Minimizing IQI by tightening hardware manufacturing constraints is not always commercially viable; baseband processing for IQI compensation therefore provides an alternative. The thesis begins with a study of IQI modeling for direct-conversion receivers; we derive a model for general imbalances and show that it reproduces the two most common models in the literature. We proceed by exploring some of the existing IQI compensation techniques and discussing their underlying assumptions, advantages, and possible issues. A novel pilot-sequence-based approach to IQI estimation and compensation is introduced in this thesis. The idea is to minimize the squared Frobenius norm of the error between candidate covariance matrices, which are functions of the candidate IQI parameters, and the sample covariance matrices obtained from measurements. This new method is first presented in a positioning context with flat fading channels, where IQI compensation is used to improve the mean square error of the positioning estimates. The technique is then adapted to orthogonal frequency division multiplexing (OFDM) systems, including a version that exploits the 5G New Radio reference signals to estimate the IQI coefficients. We further generalize the new approach to solve joint transmitter and receiver IQI estimation and discuss the implementation details and suggested optimization techniques. The introduced methods are evaluated numerically in their corresponding chapters under a set of different conditions, such as varying signal-to-noise ratio (SNR), pilot sequence length, channel model, and number of subcarriers. Finally, the proposed compensation approach is compared to other well-established methods by evaluating the bit error rate curves of 5G transmissions. We consistently show that the proposed method is capable of outperforming these other methods if the SNR and pilot sequence length are sufficiently high. In the positioning simulations, the proposed IQI compensation method improved the root mean squared error (RMSE) of the position estimates by approximately 25 cm. In the OFDM scenario, with high SNR and a long pilot sequence, the new method produced estimates with mean squared error (MSE) about a million times smaller than those from a blind estimator. In bit error rate (BER) simulations, the new method was the only compensation technique capable of producing BER curves similar to the curves without IQI in all of the studied scenarios.
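
    The widely-linear receiver IQI model that such work typically starts from, together with its direct inversion once the parameters are estimated, can be sketched as follows. The (K1, K2) gain/phase parameterization is the common one in the literature; the numeric imbalance values are illustrative, and the covariance-matching estimation step described above is omitted.

    import numpy as np

    def apply_rx_iqi(x, g=1.05, phi=np.deg2rad(3.0)):
        """Distort ideal baseband samples x with amplitude imbalance g and
        phase imbalance phi (radians): r = K1*x + K2*conj(x)."""
        k1 = 0.5 * (1.0 + g * np.exp(-1j * phi))
        k2 = 0.5 * (1.0 - g * np.exp(1j * phi))
        return k1 * x + k2 * np.conj(x)

    def compensate_rx_iqi(r, k1, k2):
        """Invert the widely-linear model given estimates of K1 and K2,
        solving r = K1*x + K2*conj(x) for x."""
        return (np.conj(k1) * r - k2 * np.conj(r)) / (np.abs(k1) ** 2 - np.abs(k2) ** 2)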