
    Distributed Signal Processing Algorithms for Wireless Networks

    Distributed signal processing algorithms have become a key approach for statistical inference in wireless networks and applications such as wireless sensor networks and smart grids. Distributed processing techniques deal with the extraction of information from data collected at nodes that are distributed over a geographic area. In this context, for each node, a set of neighboring nodes collect and transmit their local estimates to that node, which then combines the received information with its own local estimate to generate an improved estimate. In this thesis, novel distributed cooperative algorithms for inference in ad hoc networks, wireless sensor networks and smart grids are investigated. Low-complexity and effective algorithms to perform statistical inference in a distributed way are devised. A number of innovative approaches for dealing with node failures, data compression and the exchange of information are proposed and summarized as follows. Firstly, distributed adaptive algorithms based on the conjugate gradient (CG) method for distributed networks are presented; both incremental and diffusion adaptive solutions are considered. Secondly, adaptive link selection algorithms for distributed estimation and their application to wireless sensor networks and smart grids are proposed. Thirdly, a novel distributed compressed estimation scheme is introduced for sparse signals and systems based on compressive sensing techniques. The proposed scheme consists of compression and decompression modules inspired by compressive sensing to perform distributed compressed estimation. A design procedure is also presented and an algorithm is developed to optimize the measurement matrices. Lastly, a novel distributed reduced-rank scheme and adaptive algorithms are proposed for distributed estimation in wireless sensor networks and smart grids. The proposed distributed scheme is based on a transformation that performs dimensionality reduction at each agent of the network followed by a reduced-dimension parameter vector
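
As a point of reference for the diffusion strategies mentioned above, the sketch below illustrates a generic adapt-then-combine diffusion estimator. It uses a plain LMS-type adaptation step rather than the thesis's conjugate gradient algorithms, and the network topology, combination weights, step size and data model are all assumptions made for illustration.

```python
# Minimal "adapt-then-combine" diffusion estimation sketch. Illustrative only:
# it uses an LMS-type adaptation step rather than the thesis's conjugate
# gradient (CG) algorithms, and the topology, combination weights, step size
# and data model are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 10, 5, 2000                  # nodes, parameter length, time steps
w_true = rng.standard_normal(M)        # common parameter vector to estimate

# Random symmetric neighbourhoods with self-loops, uniform combination weights
A = (rng.random((N, N)) < 0.3) | np.eye(N, dtype=bool)
A = A | A.T
C = A / A.sum(axis=1, keepdims=True)   # row-stochastic combination matrix

w = np.zeros((N, M))                   # local estimates, one row per node
mu = 0.01                              # step size

for t in range(T):
    # 1) Adapt: each node updates with its own noisy measurement
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)                     # regressor at node k
        d = u @ w_true + 0.1 * rng.standard_normal()   # noisy local observation
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    # 2) Combine: each node fuses the intermediate estimates of its neighbours
    w = C @ psi

print("mean estimation error:", float(np.linalg.norm(w - w_true, axis=1).mean()))
```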

    Design and Implementation of Belief Propagation Symbol Detectors for Wireless Intersymbol Interference Channels

    In modern wireless communication systems, intersymbol interference (ISI) introduced by frequency-selective fading is one of the major impairments to reliable data communication. In ISI channels, the receiver observes the superposition of multiple delayed reflections of the transmitted signal, which results in errors at the decision device. As the data rate increases, the effect of ISI becomes more severe. To combat ISI, equalization is usually required in symbol detectors. The optimal maximum-likelihood sequence estimation (MLSE) based on the Viterbi algorithm (VA) may be used to estimate the transmitted sequence in the presence of ISI. However, the computational complexity of the MLSE increases exponentially with the length of the channel impulse response (CIR). Even in channels that do not exhibit significant time dispersion, the length of the CIR effectively increases as the sampling rate goes higher. Thus, the optimal MLSE is impractical to implement in the majority of practical wireless applications. This dissertation is devoted to exploring practically implementable symbol detectors with near-optimal performance in wireless ISI channels. In particular, we focus on the design and implementation of an iterative detector based on the belief propagation (BP) algorithm. The advantage of the BP detector is that its complexity depends solely on the number of nonzero coefficients in the CIR, instead of the length of the CIR. We also extend the BP detector design to various wireless applications. Firstly, we present a partial response BP (PRBP) symbol detector with near-optimal performance for channels that have long delay spans but a sparse multipath structure. We implement the architecture by cascading an adaptive linear equalizer (LE) with a BP detector. The channel is first partially equalized by the LE to a target impulse response (TIR) with only a few nonzero coefficients remaining. The residual ISI is then canceled by a more sophisticated BP detector. With the cascaded LE-BP structure, the symbol detector is capable of achieving near-optimal error-rate performance with acceptable implementation complexity. Moreover, we present a pipelined, high-throughput implementation of the detector for a channel of length 30 with quadrature phase-shift keying (QPSK) modulation. The detector can achieve a maximum throughput of 206 Mb/s with an estimated core area of 3.162 mm^{2} using a 90-nm technology node. At a target frequency of 515 MHz, the dynamic power is about 1.096 W. Secondly, we investigate the performance of the aforementioned PRBP detector under a more generic 3G channel rather than a sparse channel. Another suboptimal partial response maximum-likelihood (PRML) detector is considered for comparison. Similar to the PRBP detector, the PRML detector also employs a hybrid two-stage scheme in order to allow a tradeoff between performance and complexity. In simulations, we consider a slow fading environment and use the ITU-R 3G channel models. The numerical results show that in frequency-selective fading wireless channels, the PRBP detector provides superior performance over both the traditional minimum mean squared error linear equalizer (MMSE-LE) and the PRML detector. Due to the effect of colored noise, the PRML detector in fading wireless channels is not as effective as it is in magnetic recording applications. Thirdly, we extend our work to accommodate the application of Advanced Television Systems Committee (ATSC) digital television (DTV) systems.
    In order to reduce the error propagation caused by the traditional decision feedback equalizer (DFE) in DTV receivers, we present an adaptive decision feedback sparsening filter BP (DFSF-BP) detector, which is another form of PRBP detector. Unlike the aforementioned LE-BP structure, in the DFSF-BP scheme the partial response equalizer is a nonlinear filter called the DFSF, which precedes the BP detector. In the first stage, the DFSF employs a modified feedback filter that leaves the strongest post-cursor ISI taps uncorrected. As a result, a long ISI channel is equalized to a sparse channel having only a small number of nonzero taps. In the second stage, the BP detector is applied to mitigate the residual ISI. Since the channel is typically time-varying and suffers from Doppler fading, the DFSF is adapted using the least mean square (LMS) algorithm, such that the amplitudes and locations of the nonzero taps of the equalized sparse channel appear to be fixed. As such, the channel appears static during the second stage of equalization, which consists of the BP detector. Simulation results demonstrate that the proposed scheme outperforms the traditional DFE in symbol error rate under both static channels and dynamic ATSC channels. Finally, we study symbol detector design for cooperative communications, which have recently attracted a lot of attention for their ability to exploit the increased spatial diversity available at distributed antennas on other nodes. A system framework employing non-orthogonal amplify-and-forward half-duplex relays through ISI channels is developed. Based on the system model, we first design and implement an optimal maximum-likelihood detector based on the Viterbi algorithm. As the relay period increases, the effective CIR between the source and the destination becomes long and sparse, which makes the optimal detector impractical to implement. In order to achieve a balance between computational complexity and performance, several sub-optimal detectors are proposed. We first present a multitrellis Viterbi algorithm (MVA) based detector, which decomposes the original trellis into multiple parallel irregular sub-trellises by investigating the dependencies between the received symbols. Although the MVA provides near-optimal performance, it is not straightforward to decompose the trellis for arbitrary ISI channels. Next, a decision feedback sequence estimation (DFSE) based detector and a BP-based detector are proposed for cooperative ISI channels. Traditionally, these two detectors are used with fixed, static channels. In our model, however, the effective channel is periodically time-varying, even when the component channels themselves are static. Consequently, we modify these two detectors to account for cooperative ISI channels. Through simulations in frequency-selective fading channels, we demonstrate the uncoded performance of the DFSE and BP detectors compared to the optimal MLSE detector. In addition to quantifying the performance of these detectors, we also include an analysis of the implementation complexity as well as a discussion of complexity/performance tradeoffs.
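
For concreteness, the snippet below sketches the classical Viterbi-based MLSE that serves as the optimal baseline in this line of work, applied to BPSK over a short ISI channel. The channel taps, BPSK modulation, known preamble and noise level are illustrative assumptions; it is not an implementation of the BP or PRBP detectors described above.

```python
# Minimal MLSE sketch: Viterbi equalization of BPSK over a short ISI channel.
# Illustrative only: the channel taps, BPSK modulation and known preamble are
# assumptions, and this is the classical VA, not the dissertation's BP/PRBP detectors.
import itertools
import numpy as np

h = np.array([0.8, 0.0, 0.5, 0.3])          # example CIR with a zero tap, L = 4
L = len(h)
rng = np.random.default_rng(1)

bits = rng.integers(0, 2, 200)
x = 2 * bits - 1                             # BPSK data symbols in {-1, +1}
x_full = np.concatenate([-np.ones(L - 1), x])    # known all -1 preamble
y = np.convolve(x_full, h)[L - 1: L - 1 + len(x)]
y = y + 0.1 * rng.standard_normal(len(x))    # AWGN

states = list(itertools.product([-1, 1], repeat=L - 1))  # last L-1 symbols
idx = {s: i for i, s in enumerate(states)}
metric = np.full(len(states), np.inf)
metric[idx[(-1,) * (L - 1)]] = 0.0           # start from the preamble state
paths = [[] for _ in states]

for yn in y:
    new_metric = np.full(len(states), np.inf)
    new_paths = [None] * len(states)
    for s_i, s in enumerate(states):
        if not np.isfinite(metric[s_i]):
            continue
        for a in (-1, 1):                    # hypothesised current symbol
            mem = (a,) + s                   # (x[n], x[n-1], ..., x[n-L+1])
            y_hat = float(np.dot(h, mem))
            m = metric[s_i] + (yn - y_hat) ** 2
            ns = idx[(a,) + s[:-1]]          # shift the channel memory
            if m < new_metric[ns]:
                new_metric[ns] = m
                new_paths[ns] = paths[s_i] + [a]
    metric, paths = new_metric, new_paths

detected = np.array(paths[int(np.argmin(metric))])
print("symbol errors:", int(np.sum(detected != x)))
```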

    Adaptive image synthesis for compressive displays

    Recent years have seen proposals for exciting new computational display technologies that are compressive in the sense that they generate high resolution images or light fields with relatively few display parameters. Image synthesis for these types of displays involves two major tasks: sampling and rendering high-dimensional target imagery, such as light fields or time-varying light fields, as well as optimizing the display parameters to provide a good approximation of the target content. In this paper, we introduce an adaptive optimization framework for compressive displays that generates high quality images and light fields using only a fraction of the total plenoptic samples. We demonstrate the framework for a large set of display technologies, including several types of auto-stereoscopic displays, high dynamic range displays, and high-resolution displays. We achieve significant performance gains, and in some cases are able to process data that would be infeasible with existing methods.
    Funding: University of British Columbia (UBC Four Year Doctoral Fellowship); Natural Sciences and Engineering Research Council of Canada (Postdoctoral Fellowship); United States Defense Advanced Research Projects Agency (DARPA SCENICC program); Alfred P. Sloan Foundation (Sloan Research Fellowship); United States Defense Advanced Research Projects Agency (DARPA Young Faculty Award); University of British Columbia (Dolby Research Chair at UBC)
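
To illustrate the general idea of fitting display parameters to a subset of plenoptic samples, the sketch below uses a stand-in linear display model and an ordinary least-squares solve. The matrix P, the random sampling and the solver are assumptions for illustration only and do not reproduce the adaptive framework or the specific display models described in the paper.

```python
# Minimal sketch of "sample a few target rays, then fit display parameters".
# Illustrative only: the linear display model P, the random sampling and the
# least-squares solver are assumptions, not the paper's adaptive framework.
import numpy as np

rng = np.random.default_rng(2)
n_rays, n_params = 5000, 200          # rays in the target light field, display parameters

P = rng.random((n_rays, n_params))    # stand-in linear display model: rays = P @ p
p_true = rng.random(n_params)         # parameters that would reproduce the target
target = P @ p_true                   # full (dense) target light field

# Use only a fraction of the plenoptic samples
frac = 0.1
sel = rng.choice(n_rays, size=int(frac * n_rays), replace=False)

# Fit display parameters to the sampled rays (least squares on the subset)
p_hat, *_ = np.linalg.lstsq(P[sel], target[sel], rcond=None)

full_err = np.linalg.norm(P @ p_hat - target) / np.linalg.norm(target)
print(f"relative error on the full light field: {full_err:.2e}")
```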

    Sensors Fault Diagnosis Trends and Applications

    Fault diagnosis has always been a concern for industry. In general, diagnosis in complex systems requires the acquisition of information from sensors and the processing and extraction of the required features for the classification or identification of faults. Fault diagnosis of sensors is therefore clearly important, as faulty information from a sensor may lead to misleading conclusions about the whole system. As engineering systems grow in size and complexity, it becomes more and more important to diagnose faulty behavior before it can lead to total failure. In light of the above issues, this book is dedicated to trends and applications in modern sensor fault diagnosis.

    Recent Technological Advances in Spatial Active Noise Control Systems

    This article provides a broad overview of recent advances in the field of active noise control techniques for reducing unwanted noise over a certain spatial region of interest. Thanks to commercial and technological advances in local active noise control systems, extending the size of the quiet zone seems to be a crucial step toward developing the next generation of active control systems for more personalized and quieter audio products. In this review article, the advances over the past decade in the design and development of spatial active noise control techniques for enlarging the controlled sound zone are reviewed. The focus is specifically on adaptive control techniques and methods proposed in the frequency domain to control the sound field. The study pays specific attention to the most important performance measures in designing a spatial active noise control system, such as the convergence rate, the stability and robustness of the algorithm, and the size of the quiet zone and how it can be enlarged by configuring the loudspeaker and microphone array geometries. Finally, the authors discuss the current and future challenges that should be overcome to improve the effectiveness of the recently proposed methods to expand the quiet zone.
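
As background for the adaptive control techniques surveyed above, the sketch below implements a single-channel filtered-x LMS (FxLMS) controller, the basic adaptive building block that multichannel spatial ANC systems extend. The primary and secondary path models, the tonal reference signal and the step size are illustrative assumptions rather than values from any system reviewed in the article.

```python
# Minimal single-channel filtered-x LMS (FxLMS) sketch. Illustrative only: the
# primary/secondary path models, the tonal reference and the step size are
# assumptions, not parameters from any system reviewed in the article.
import numpy as np

rng = np.random.default_rng(3)
T = 20000
n = np.arange(T)
x = np.sin(2 * np.pi * 0.01 * n) + 0.1 * rng.standard_normal(T)  # reference noise

p = np.array([0.9, 0.5, 0.2])   # primary path (noise source -> error microphone)
s = np.array([0.7, 0.3])        # secondary path (loudspeaker -> error microphone)
s_hat = s.copy()                # assume a perfect secondary-path estimate

Lw = 16
w = np.zeros(Lw)                # adaptive control filter taps
mu = 0.005                      # step size

xbuf = np.zeros(Lw)             # reference history feeding the control filter
ybuf = np.zeros(len(s))         # control-signal history through the secondary path
fxbuf = np.zeros(Lw)            # filtered-reference ("filtered-x") history
xsbuf = np.zeros(len(s_hat))    # reference history filtered by s_hat
err = np.zeros(T)

for t in range(T):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = x[t]
    y = w @ xbuf                        # anti-noise sample sent to the loudspeaker
    ybuf = np.roll(ybuf, 1)
    ybuf[0] = y
    d = p @ xbuf[:len(p)]               # primary disturbance at the error microphone
    e = d - s @ ybuf                    # residual measured at the error microphone
    err[t] = e
    xsbuf = np.roll(xsbuf, 1)
    xsbuf[0] = x[t]
    fx = s_hat @ xsbuf                  # filtered reference sample
    fxbuf = np.roll(fxbuf, 1)
    fxbuf[0] = fx
    w = w + mu * e * fxbuf              # FxLMS coefficient update

print("mean |e|, first vs last 1000 samples:",
      float(np.mean(np.abs(err[:1000]))), float(np.mean(np.abs(err[-1000:]))))
```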

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of the fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. To study cardiac function, a variety of signals must be collected. In practice, several heart monitoring methods, such as electrocardiography (ECG) and photoplethysmography (PPG), are commonly used. Although there are several methods for monitoring fetal and maternal health, research is currently underway to enhance the mobility, accuracy, automation, and noise resistance of these methods so that they can be used extensively, even at home. Artificial intelligence (AI) can help to design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research. The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and extract the desired signal from a noisy one with negative SNR (i.e., the noise power is greater than the signal power). It is worth mentioning that ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. The noise typically arises from power line interference, white noise, electrode contact noise, muscle contraction, baseline wander, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure. Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a model of the noise or the desired signal, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the final goal of this step is to develop a robust algorithm that can estimate the noise, even when the SNR is negative, using the BSS method and remove it with an adaptive filter. The second objective concerns monitoring maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) for extracting the fetal ECG (FECG). These methods need to be calibrated to generalize well; in other words, for each new subject, a calibration with a trusted device is required, which is difficult and time-consuming, and the calibration itself is susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks, to map MECG to FECG and vice versa. The advantages of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, are that it generalizes well to unseen subjects, does not need calibration, is not sensitive to the heart rate variability of the mother and fetus, and can handle low signal-to-noise ratio (SNR) conditions. Thirdly, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use.
    Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes these issues by using only the PPG signal. Using only PPG for blood pressure estimation is more convenient, since it requires only a single sensor on the finger, and its acquisition is more resilient to errors due to movement. The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for fetal ECG (FECG), where there are few publicly available FECG datasets annotated for each FECG beat. Therefore, we utilize active learning and transfer learning concepts to train an FECG anomaly detection system with the fewest training samples and high accuracy. In this part, a model is first trained to detect ECG anomalies in adults; this model is later adapted to detect anomalies in FECG. Only the more influential samples from the training set are selected for training, which leads to training with the least effort. Because of physician shortages and rural geography, access to prenatal care is often limited, and pregnant women's ability to get prenatal care might be improved through remote monitoring. Increased compliance with prenatal treatment and linked care amongst various providers are two possible benefits of remote monitoring. If the recorded signals are transmitted correctly, maternal and fetal remote monitoring can be effective. Therefore, the last objective is to design a compression algorithm that can compress signals (such as ECG) with a higher ratio than the state of the art and perform fast decompression without distortion. The proposed compression is fast thanks to the time-domain B-spline approach, and the compressed data can be used for visualization and monitoring without decompression owing to the B-spline properties. Moreover, the stochastic optimization is designed to retain signal quality and does not distort the signal for diagnostic purposes while achieving a high compression ratio. In summary, the components for creating an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of all the tasks listed above. PPG and ECG recorded from the mother can be denoised using the deconvolution strategy. Then, compression can be employed for transmitting the signals. The trained CycleGAN model can be used for extracting the FECG from the MECG. Then, the model trained using active transfer learning can detect anomalies in both the MECG and FECG. Simultaneously, the maternal BP is retrieved from the PPG signal. This information can be used for monitoring the cardiac status of the mother and fetus, and also for filling in reports such as the partogram.
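
As a rough illustration of the B-spline compression idea in the last objective, the sketch below fits a least-squares cubic spline with far fewer coefficients than samples and reconstructs the signal from those coefficients. The synthetic ECG-like waveform, sampling rate, knot count and PRD error metric are illustrative assumptions; the thesis's optimized compression algorithm is not reproduced here.

```python
# Minimal B-spline compression sketch: fit a least-squares cubic spline with far
# fewer coefficients than samples and reconstruct from those coefficients.
# Illustrative only: the synthetic ECG-like waveform, sampling rate, knot count
# and PRD metric are assumptions, not the thesis's optimized algorithm.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

fs = 500                                   # assumed sampling rate in Hz
t = np.arange(0, 4, 1 / fs)                # 4 s of signal (2000 samples)
# crude ECG-like waveform: a periodic train of sharp bumps plus baseline wander
sig = np.exp(-((t % (1 / 1.2)) - 0.2) ** 2 / 0.005) + 0.1 * np.sin(2 * np.pi * 0.3 * t)

n_knots = 150                              # interior knots; coefficient count = n_knots + k + 1
knots = np.linspace(t[0], t[-1], n_knots + 2)[1:-1]
spl = LSQUnivariateSpline(t, sig, knots, k=3)

recon = spl(t)                             # reconstruction from the spline coefficients
prd = 100 * np.linalg.norm(sig - recon) / np.linalg.norm(sig)   # percent RMS difference
ratio = len(sig) / len(spl.get_coeffs())
print(f"compression ratio ~{ratio:.1f}:1, PRD {prd:.2f}%")
```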