
    A study on adaptive filtering for noise and echo cancellation.

    The objective of this thesis is to investigate adaptive filtering techniques applied to noise and echo cancellation. As a relatively new area in Digital Signal Processing (DSP), adaptive filters have gained a lot of popularity in the past several decades because they can deal with time-varying digital systems and do not require a priori knowledge of the statistics of the information to be processed. Adaptive filters have been successfully applied in a great many areas such as communications, speech processing, image processing, and noise/echo cancellation. Since Bernard Widrow and his colleagues introduced adaptive filters in the 1960s, many researchers have worked on noise/echo cancellation using adaptive filters with different algorithms. Among these algorithms, the normalized least mean square (NLMS) algorithm provides an efficient and robust approach, in which the model parameters are obtained on the basis of the mean square error (MSE). The choice of a structure for the adaptive filter also plays an important role in the performance of the algorithm as a whole. For this purpose, two filter structures, the finite impulse response (FIR) filter and the infinite impulse response (IIR) filter, have been studied. The adaptive processes with the two filter structures and the aforementioned algorithm have been implemented and simulated using Matlab. Dept. of Electrical and Computer Engineering. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .J53. Source: Masters Abstracts International, Volume: 44-01, page: 0472. Thesis (M.A.Sc.)--University of Windsor (Canada), 2005
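
    As a concrete illustration of the NLMS update described above, the following is a minimal sketch (an assumed toy setup, not the thesis code) of NLMS-based noise cancellation with an FIR structure: a reference noise signal drives an adaptive FIR filter whose output is subtracted from the noisy observation, and the normalized step size keeps the update robust to the reference signal's power.

```python
import numpy as np

def nlms_cancel(d, x, num_taps=32, mu=0.5, eps=1e-8):
    """Normalized LMS noise canceller: d = primary (signal + noise), x = noise reference."""
    w = np.zeros(num_taps)                        # adaptive FIR weights
    e = np.zeros(len(d))                          # error = cleaned signal estimate
    for n in range(num_taps, len(d)):
        u = x[n - num_taps + 1:n + 1][::-1]       # most recent reference samples, newest first
        y = w @ u                                 # filter output = noise estimate
        e[n] = d[n] - y
        w += (mu / (eps + u @ u)) * e[n] * u      # NLMS weight update
    return e, w

# Toy usage: a sinusoid corrupted by a filtered version of the reference noise.
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 0.01 * np.arange(4000))
noise = rng.standard_normal(4000)
d = clean + np.convolve(noise, [0.6, -0.3, 0.1])[:len(noise)]
cleaned, weights = nlms_cancel(d, noise)
```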

    Signal processing with optical delay line filters for high bit rate transmission systems

    Over the course of the past decades, the global communication system has become a central part of people's everyday lives. Optical communication systems are the technological basis for this development. Only fibers can provide the huge bandwidth that is required. While the fiber could be regarded as a flat channel for the first optical transmission systems, wavelength multiplexing and increasing line rates made it necessary to take more and more physical effects into account. When the line rates are increased to 40 Gbit/s and higher, static chromatic dispersion compensation is no longer sufficient.
The modulation format's intrinsic tolerance for dispersion decreases quadratically with the symbol rate. Thus, environmentally induced chromatic dispersion fluctuations may exceed the dispersion tolerance of the modulation formats. This makes adaptive dispersion compensation necessary, which in turn requires a monitoring scheme to steer the adaptive compensator. Legacy links that are CD-compensated by DCFs can be upgraded with residual dispersion compensators to make them ready for high-speed transmission. Optical compensation is independent of the line rate, so increasing the data rate is inherently supported, and optical compensators can be built WDM-ready, compensating multiple channels at once. The book deals with optical delay line filters as one class of optical compensators. The filter synthesis of such delay line filters is addressed, and the connection between optical filters and digital FIR filters with complex coefficients, as used in conjunction with coherent detection, is shown. Iterative and analytical methods that produce the coefficients for dispersion (and also dispersion slope) compensating filters are investigated. As important as the compensation of dispersion is the estimation of the dispersion of a signal. Using delay line filters, the vestigial sidebands of a signal can be used to measure the dispersion. Alternatively, nonlinear detection can be used to estimate the pulse broadening, which is caused mainly by dispersion. With combined dispersion compensation and dispersion monitoring, dispersion compensators can be adapted to the signal's impairment. Special properties of the filter, in conjunction with an analytical description, can be used to provide a fast and reliable control algorithm for setting the filter to a given dispersion and centering it on a signal. Finally, prototypes of such fiber optic chromatic dispersion and dispersion slope compensation filters were manufactured and characterized. The device and system characterization of the prototypes is presented and discussed.
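
As a small illustration of the FIR/coherent-detection connection mentioned above, the following sketch (with assumed fiber parameters and sign convention, not the book's filter designs) derives the complex coefficients of a digital FIR filter that approximates the inverse of the fiber's quadratic dispersion phase by sampling the target frequency response and truncating its impulse response.

```python
import numpy as np

def cd_compensator_taps(beta2_L, sample_rate, num_taps=65, n_fft=1024):
    """Complex FIR taps approximating exp(+j*beta2_L*w^2/2), i.e. the inverse of a
    dispersive fiber modelled as exp(-j*beta2_L*w^2/2) (sign convention assumed)."""
    w = 2 * np.pi * np.fft.fftfreq(n_fft, d=1.0 / sample_rate)   # angular frequency grid
    H_inv = np.exp(1j * beta2_L * (w ** 2) / 2.0)                # inverse dispersion phase
    h = np.fft.fftshift(np.fft.ifft(H_inv))                      # centered impulse response
    mid, half = n_fft // 2, num_taps // 2
    return h[mid - half: mid + half + 1]                         # truncate to an FIR filter

# Assumed example: roughly 40 km of standard fiber (beta2 ~ -21.7 ps^2/km) sampled at 80 GS/s.
taps = cd_compensator_taps(beta2_L=-8.7e-22, sample_rate=80e9)
```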

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of the fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. To study cardiac function, a variety of signals must be collected. In practice, several heart monitoring methods, such as electrocardiography (ECG) and photoplethysmography (PPG), are commonly performed. Although there are several methods for monitoring fetal and maternal health, research is currently underway to enhance the mobility, accuracy, automation, and noise resistance of these methods so that they can be used extensively, even at home. Artificial Intelligence (AI) can help to design a precise and convenient monitoring system. To achieve this goal, the following objectives are defined in this research. The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and extract the desired signal from a noisy one with negative SNR (i.e., the noise power is greater than the signal power). It is worth mentioning that ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. The noise typically arises from power line interference, white noise, electrode contact noise, muscle contraction, baseline wandering, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure. Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a noise or desired-signal model, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the goal of this step is to develop a robust algorithm that can estimate the noise, even when the SNR is negative, using the BSS method and remove it with an adaptive filter. The second objective concerns monitoring maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) for extracting the fetal ECG (FECG). These methods need to be calibrated to generalize well: for each new subject, a calibration against a trusted device is required, which makes them difficult and time-consuming to use, and the calibration itself is susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks, to map MECG to FECG and vice versa. The advantages of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, are that it generalizes well to unseen subjects, does not need calibration, and is not sensitive to the heart rate variability of the mother and fetus; it can also handle low-SNR conditions. Thirdly, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimum electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use.
Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes these issues by using only the PPG signal. Using only PPG for blood pressure measurement is more convenient since it requires only a single sensor on the finger, where acquisition is more resilient to motion-induced error. The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for FECG, where there are few publicly available datasets annotated at the beat level. Therefore, we utilize active learning and transfer learning to train an FECG anomaly detection system with the fewest training samples and high accuracy. In this part, a model is first trained to detect ECG anomalies in adults; this model is then retrained to detect anomalies in FECG. Only the most influential samples from the training set are selected for training, which minimizes the training effort. Because of physician shortages and rural geography, remote monitoring could improve pregnant women's access to prenatal care, especially when that access is limited. Increased compliance with prenatal treatment and linked care amongst various providers are two possible benefits of remote monitoring. If recorded signals are transmitted correctly, maternal and fetal remote monitoring can be effective. Therefore, the last objective is to design a compression algorithm that can compress signals (such as ECG) with a higher ratio than the state of the art and decompress them quickly without distortion. The proposed compression is fast thanks to the time-domain B-spline approach, and compressed data can be used for visualization and monitoring without decompression owing to the B-spline properties. Moreover, the stochastic optimization is designed to retain signal quality and does not distort the signal for diagnostic purposes while achieving a high compression ratio. In summary, the components for creating an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of all the tasks listed above. PPG and ECG recorded from the mother can be denoised using the deconvolution strategy. Then, compression can be employed for transmitting the signals. The trained CycleGAN model can be used for extracting FECG from MECG. Then, the model trained using active transfer learning can detect anomalies in both MECG and FECG. Simultaneously, maternal BP is retrieved from the PPG signal. This information can be used for monitoring the cardiac status of the mother and fetus, and can also be used for filling in reports such as a partogram.
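
As a rough illustration of the B-spline compression idea sketched above (the time-domain fit that allows visualization without full decompression), here is a minimal example with assumed parameters and synthetic data; it is not the thesis' optimized algorithm, which additionally tunes the fit by stochastic optimization.

```python
import numpy as np
from scipy.interpolate import splrep, splev

def bspline_compress(signal, fs, smooth=0.01):
    """Fit a cubic smoothing B-spline and keep only knots, coefficients, and degree."""
    t = np.arange(len(signal)) / fs
    tck = splrep(t, signal, k=3, s=smooth * len(signal))   # smoothing controls the fit error
    knots, coeffs, degree = tck
    ratio = len(signal) / (len(knots) + len(coeffs))        # rough compression ratio
    return tck, ratio

def bspline_decompress(tck, n_samples, fs):
    """Evaluate the stored spline back onto the original sampling grid."""
    t = np.arange(n_samples) / fs
    return splev(t, tck)

# Toy usage on a synthetic ECG-like trace (assumed data, not the thesis dataset).
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 8 * t)
tck, ratio = bspline_compress(ecg_like, fs)
recon = bspline_decompress(tck, len(ecg_like), fs)
```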

    Contribution to Efficient Use of Narrowband Radio Channel

    The industrial narrowband land mobile radio (LMR) devices considered in this dissertation project are subject to the European standard ETSI EN 300 113. The system operates on frequencies between 30 MHz and 1 GHz, with channel separations of up to 25 kHz, and is intended for private, fixed, or mobile radio packet-switching networks. Data telemetry, SCADA, maritime and police radio services, traffic monitoring, and gas, water, and electricity production plants are typical system applications.
Long-distance coverage, high power efficiency, and efficient channel access techniques in half-duplex operation are the primary advantages the system relies on. A very low level of adjacent channel power emissions and robust radio receiver architectures with high dynamic range enable the system to coexist with various communication standards without additional guard band frequency intervals. On the other hand, the strict limitations of the referenced standard, as well as the state of the technology, have hindered the increase in communication efficiency with which the system uses its occupied bandwidth. New modifications and improvements are needed both to the standard itself and to the up-to-date architectures of narrowband LMR devices to make more efficient modes of system operation practically realizable. The main objective of this dissertation thesis is therefore to find a practical way to combine the favorable properties of advanced nonlinear and linear digital modulation techniques in a single digital modem solution, in order to increase the efficiency with which the narrowband radio channel allocated to the new generation of industrial LMR devices is used. The main attention is given to particular areas of digital modem design such as the proposal of a new family of Nyquist filters minimizing adjacent channel interference, and the design and analysis of efficient algorithms for frequency discrimination and fast frame and symbol synchronization. The work also analyses the proposed solution within the overall design of a software-defined radio modem, using simulations to examine the robustness of data transmission over a radio channel with additive white Gaussian noise or with fading due to multipath propagation. The final part of the thesis presents the results of the practical part of the project, in which two prototype radio devices were tested, measured, and analysed; it also gives practical recommendations for broader use of spectrally more efficient communication modes in the next generation of narrowband land mobile radio devices.
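
For illustration, the sketch below generates a standard raised-cosine Nyquist pulse (not the new filter family proposed in the thesis, and with assumed roll-off and oversampling values): Nyquist shaping gives zero inter-symbol interference at the symbol instants, while the roll-off controls how much energy leaks into the adjacent channel.

```python
import numpy as np

def raised_cosine(beta, sps, span):
    """Raised-cosine impulse response: beta = roll-off, sps = samples per symbol,
    span = filter half-length in symbols."""
    t = np.arange(-span * sps, span * sps + 1) / sps          # time axis in symbol periods
    denom = 1.0 - (2.0 * beta * t) ** 2
    h = np.sinc(t) * np.cos(np.pi * beta * t)
    singular = np.isclose(denom, 0.0)                         # points at |t| = 1/(2*beta)
    safe = np.where(singular, 1.0, denom)                     # avoid division by zero
    h = np.where(singular, (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta)), h / safe)
    return h / np.sum(h)                                      # unit DC gain

# Assumed example: modest roll-off and 8x oversampling for a narrowband channel.
taps = raised_cosine(beta=0.25, sps=8, span=6)
```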

    The application of genetic algorithms to the adaptation of IIR filters

    The adaptation of an IIR filter is a very difficult problem due to its non-quadratic performance surface and potential instability. Conventional adaptive IIR algorithms suffer from potential instability and a high cost for stability monitoring, so there is much interest in adaptive IIR filters based on alternative algorithms. Genetic algorithms are a family of search algorithms based on natural selection and genetics, and they have been successfully used in many different areas. This thesis studies genetic algorithms applied to the adaptation of IIR filters and shows that the genetic algorithm approach has a number of advantages over conventional gradient algorithms, particularly for the adaptation of high-order adaptive IIR filters, IIR filters with poles close to the unit circle, and IIR filters with multi-modal error surfaces, all of which conventional gradient algorithms have difficulty handling. Coefficient results are presented for various orders of IIR filters. In the computer simulations presented in this thesis, the direct, cascade, parallel, and lattice form IIR filter structures have been used and compared. The lattice form IIR filter structure shows its superiority over the cascade and parallel form structures in terms of mean square error convergence performance.
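
    A minimal sketch of the idea (an assumed toy system-identification setup, not the thesis' algorithms or filter orders): a population of IIR coefficient sets is evolved by selection, blend crossover, and mutation, and candidates whose poles leave the unit circle are simply penalized rather than requiring explicit stability monitoring as in gradient descent.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
d = lfilter([0.3, 0.2], [1.0, -1.2, 0.72], x)        # "unknown" second-order IIR system

def mse(genome):
    """Fitness of one candidate [b0, b1, a1, a2]; unstable candidates get a large penalty."""
    b0, b1, a1, a2 = genome
    poles = np.roots([1.0, a1, a2])
    if np.any(np.abs(poles) >= 1.0):
        return 1e6
    y = lfilter([b0, b1], [1.0, a1, a2], x)
    return np.mean((d - y) ** 2)

pop = rng.uniform(-2, 2, size=(60, 4))                # initial population of coefficient sets
for gen in range(200):
    fitness = np.array([mse(g) for g in pop])
    parents = pop[np.argsort(fitness)[:20]]           # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        p1, p2 = parents[rng.integers(20)], parents[rng.integers(20)]
        alpha = rng.random(4)
        child = alpha * p1 + (1 - alpha) * p2         # blend crossover
        child += rng.normal(0.0, 0.02, 4)             # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([mse(g) for g in pop])]
```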

    Efficient Adaptive Filter Algorithms Using Variable Tap-length Scheme

    Today the use of digital signal processors has increased, and adaptive filter algorithms are now routinely employed in nearly all contemporary devices such as mobile phones, camcorders, digital cameras, and medical monitoring equipment, to name a few. The filter tap-length, or the number of taps, is a significant structural parameter of adaptive filters that influences both the complexity and the steady-state performance of the filter. Traditional implementations of adaptive filtering algorithms presume some fixed filter length and focus on estimating the filter's tap-weight parameters according to a pre-determined cost function. Although this approach can be adequate in some applications, it is not in more complicated ones, as it does not answer the question of filter size (tap-length). This problem becomes more apparent when the application involves a change in the impulse response, making it hard for the adaptive filter algorithm to achieve its best potential performance. A cost-effective approach is to devise a variable tap-length filtering scheme that can search for the optimal length while the filter is adapting its coefficients. In direct-form filtering, commonly known as the transversal adaptive filter, several schemes have been used to estimate the optimum tap-length. Among existing algorithms, the pseudo fractional tap-length (FT) algorithm is of particular interest because of its fast convergence rate and small steady-state error. Lattice-structured adaptive filters, on the other hand, have attracted attention recently due to a number of desirable properties. The aim of this research is to develop efficient adaptive filter algorithms that fill the gap where optimal filter structures have not been proposed, by incorporating the concept of pseudo fractional tap-length (FT) into adaptive filtering algorithms. The contributions of this research include the development of variable-length adaptive filter schemes, and hence optimal filter structures, for the following applications: (1) lattice prediction; (2) Least-Mean-Squares (LMS) lattice system identification; (3) Recursive Least-Squares (RLS) lattice system identification; (4) Constant Modulus Algorithm (CMA) blind equalization. To demonstrate the capability of the proposed algorithms, simulation examples are implemented under different experimental conditions, and the results show noticeable improvement in terms of mean square error (MSE) and convergence rate over the counterpart adaptive filter algorithms. Simulation results have also shown that, with affordable extra computational complexity, both the adaptive filter coefficients and the filter tap-length can be optimized.
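
    To make the fractional tap-length idea concrete, the following is a simplified transversal (direct-form) LMS sketch in the spirit of the FT algorithm, with assumed step sizes and a toy system-identification setup; the lattice, RLS, and CMA variants developed in this research are not shown.

```python
import numpy as np

def ft_lms(d, x, mu=0.01, alpha=0.01, gamma=0.1, delta=4, L0=8, L_max=64):
    """LMS with a pseudo fractional tap-length l_f: the last `delta` taps are kept only
    if they reduce the squared error enough to offset the leakage term `alpha`."""
    L, l_f = L0, float(L0)
    w = np.zeros(L_max)                                   # weight buffer; first L taps active
    lengths = np.zeros(len(d), dtype=int)
    for n in range(L_max, len(d)):
        u = x[n - L_max + 1:n + 1][::-1]                  # regressor, newest sample first
        e_full = d[n] - w[:L] @ u[:L]                     # error with all L active taps
        e_short = d[n] - w[:L - delta] @ u[:L - delta]    # error with only L - delta taps
        w[:L] += mu * e_full * u[:L]                      # standard LMS update on active taps
        l_f = (l_f - alpha) - gamma * (e_full ** 2 - e_short ** 2)
        l_f = min(max(l_f, delta + 1), L_max)             # keep fractional length in range
        L = int(round(l_f))                               # integer tap-length actually used
        lengths[n] = L
    return w[:L], lengths

# Toy usage: identify an assumed 24-tap FIR system starting from only 8 taps.
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
h_true = rng.standard_normal(24) * np.exp(-0.1 * np.arange(24))
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, lengths = ft_lms(d, x)
```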

    Evolvable hardware platform for fault-tolerant reconfigurable sensor electronics


    Digital Filters and Signal Processing

    Digital filters, together with signal processing, are employed in new technologies and information systems and are implemented in many different areas and applications. They can be realized at very low cost and adapted to different cases with great flexibility and reliability. This book presents advanced developments in digital filters and signal processing methods, covering a range of case studies. The chapters convey the essence of the subject, together with the principal approaches to the most recent mathematical models being employed worldwide.