
    Combined Message Passing Algorithms for Iterative Receiver Design in Wireless Communication Systems


    Hybrid Message Passing Algorithm for Downlink FDD Massive MIMO-OFDM Channel Estimation

    The design of message passing algorithms on factor graphs has proven to be an effective way to implement channel estimation in wireless communication systems. In Bayesian approaches, a prior probability model that accurately matches the channel characteristics can effectively improve estimation performance. In this work, we study the channel estimation problem in a frequency division duplexing (FDD) downlink massive multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) system. As the prior probability, we propose the Markov chain two-state Gaussian mixture with large variance difference (TSGM-LVD) model to exploit the structured sparsity in the angle-frequency domain of the massive MIMO-OFDM channel. In addition, we present a new method to derive the hybrid message passing (HMP) rule, which can calculate messages in models that mix linear and non-linear components. To the best of the authors' knowledge, we are the first to apply the HMP rule to practical communication systems, designing the HMP-TSGM-LVD algorithm under the structured turbo-compressed sensing (STCS) framework. Simulation results demonstrate that the proposed HMP-TSGM-LVD algorithm converges faster and outperforms its counterparts under a wide range of simulation settings.
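A two-state Gaussian mixture prior with a large variance difference admits a closed-form MMSE denoiser under Gaussian noise, which is the kind of non-linear message such turbo-CS schemes iterate. A minimal sketch of that denoiser (function name and parameter values are hypothetical, and the Markov-chain structure coupling neighboring taps in the proposed model is omitted):

```python
import numpy as np

def tsgm_lvd_denoise(y, noise_var, rho=0.1, var_small=1e-4, var_large=1.0):
    """MMSE denoiser under a two-state Gaussian mixture prior with a
    large variance difference (illustrative parameter values).
    Observation model: y = x + n, n ~ N(0, noise_var),
    prior: x ~ (1-rho) N(0, var_small) + rho N(0, var_large)."""
    def gauss(z, v):
        return np.exp(-z**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    # Marginal likelihood of y under each mixture component.
    p0 = (1 - rho) * gauss(y, var_small + noise_var)
    p1 = rho * gauss(y, var_large + noise_var)
    w1 = p1 / (p0 + p1)               # posterior prob. the tap is "active"
    # Component-wise posterior means (standard Gaussian conditioning).
    m0 = y * var_small / (var_small + noise_var)
    m1 = y * var_large / (var_large + noise_var)
    return (1 - w1) * m0 + w1 * m1

y = np.array([0.01, 1.5, -2.0, 0.02])
xhat = tsgm_lvd_denoise(y, noise_var=0.01)
```

Small observations are shrunk almost to zero (the low-variance "inactive" state dominates), while large ones pass through nearly unchanged, which is exactly the structured-sparsity behavior the prior is designed to capture.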

    Unitary Approximate Message Passing for Sparse Bayesian Learning and Bilinear Recovery

    Over the past several years, the approximate message passing (AMP) algorithm has been applied to a broad range of problems, including compressed sensing (CS), robust regression, Bayesian estimation, etc. AMP was originally developed for compressed sensing based on loopy belief propagation (BP). Compared to convex-optimization-based algorithms, AMP has low complexity, and its performance can be rigorously characterized by a scalar state evolution (SE) in the case of a large independent and identically distributed (i.i.d.) (sub-)Gaussian matrix. AMP was then extended to solve general estimation problems with a generalized linear observation model. However, AMP performs poorly on generic matrices, such as non-zero-mean, rank-deficient, correlated, or ill-conditioned ones, resulting in divergence and degraded performance. It was discovered later that applying AMP to a unitary transform of the original model can remarkably enhance its robustness to difficult matrices. This variant is named unitary AMP (UAMP), formerly called UTAMP. In this thesis, leveraging UAMP, we propose UAMP-SBL for sparse signal recovery and Bi-UAMP for bilinear recovery, both of which inherit the low complexity and robustness of UAMP. Sparse Bayesian learning (SBL) is a powerful tool for recovering a sparse signal from noisy measurements, and it finds numerous applications in various areas. Since traditional implementations of SBL, e.g. Tipping's method, involve matrix inversion in each iteration, the computational complexity can be prohibitive for large-scale problems. To circumvent this, AMP and its variants have been used as low-complexity solutions. Unfortunately, they diverge for 'difficult' measurement matrices, as previously mentioned. In this thesis, leveraging UAMP, a novel SBL algorithm called UAMP-SBL is proposed, where UAMP is incorporated into structured variational message passing (SVMP) to handle the most computationally intensive part of the message computations.
It is shown that, compared to state-of-the-art AMP-based SBL algorithms, the proposed UAMP-SBL is more robust and efficient, leading to remarkably better performance. The bilinear recovery problem has many applications such as dictionary learning, self-calibration, compressed sensing with matrix uncertainty, etc. Compared to existing non-message-passing alternatives, several AMP-based algorithms have been developed to solve bilinear problems. By using UAMP, a more robust and faster approximate inference algorithm for bilinear recovery, called Bi-UAMP, is proposed in this thesis. With the lifting approach, the original bilinear problem is reformulated as a linear one. Then, variational inference (VI), expectation propagation (EP) and BP are combined with UAMP to implement the approximate inference algorithm Bi-UAMP, where UAMP is adopted for the most computationally intensive part. It is shown that, compared to state-of-the-art bilinear recovery algorithms, the proposed Bi-UAMP is much more robust and faster, and delivers significantly better performance. Recently, UAMP has also been employed in many other applications, such as inverse synthetic aperture radar (ISAR) imaging, low-complexity direction of arrival (DOA) estimation, iterative detection for orthogonal time frequency space (OTFS) modulation, and channel estimation for RIS-aided MIMO communications. Promising performance was achieved in all of these applications, and more applications of UAMP are expected in the future.
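The unitary preprocessing at the core of UAMP is simple to state: compute an SVD of the measurement matrix and left-multiply the model by U^T, which leaves the noise white while turning the effective matrix into a scaled row-orthogonal one. A minimal sketch of just this transform (the subsequent AMP-style iterations on the transformed model are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 60
A = rng.normal(size=(m, n)) + 1.0       # non-zero-mean "difficult" matrix
x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = rng.normal(size=5)
y = A @ x + 0.01 * rng.normal(size=m)

# Unitary transform: with A = U S V^T, left-multiplying by U^T gives an
# equivalent model r = Phi x + w, where Phi = S V^T and w = U^T n is
# still white Gaussian because U is orthogonal.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = U.T @ y
Phi = np.diag(s) @ Vt
```

Because U is orthogonal, the transformed model is statistically equivalent to the original, yet AMP run on (Phi, r) is far more robust to the structure of A.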

    Multi-frame reconstruction using super-resolution, inpainting, segmentation and codecs

    In this thesis, different aspects of video and light field reconstruction are considered, such as super-resolution, inpainting, segmentation and codecs. Each of these strategies is analyzed with respect to a specific goal and a specific database; accordingly, databases relevant to the film industry, sport videos, light fields and hyperspectral videos are used. This thesis is constructed around six related manuscripts, in which several approaches are proposed for multi-frame reconstruction. First, a novel multi-frame reconstruction strategy is proposed for light field super-resolution, in which graph-based regularization is applied along with edge-preserving filtering to improve the spatio-angular quality of the light field. Second, a novel video reconstruction method is proposed, built on compressive sensing (CS), Gaussian mixture models (GMM) and sparse 3D transform-domain block matching. The motivation for the proposed technique is to improve the visual quality of the video frames and to decrease the reconstruction error compared with former video reconstruction methods. In the next approach, Student-t mixture models and edge-preserving filtering are applied for video super-resolution. The Student-t mixture model has heavy tails, which makes it robust and suitable as a video frame patch prior, and rich in terms of log-likelihood for information retrieval. In another approach, a hyperspectral video database is considered, and a Bayesian dictionary learning process is used for hyperspectral video super-resolution. To that end, a Beta process is used in the Bayesian dictionary learning and a sparse coding is generated for the hyperspectral video super-resolution.
The spatial super-resolution is followed by a spectral video restoration strategy, and the whole process leverages two different learned dictionaries: the first is trained for spatial super-resolution and the second for spectral restoration. Furthermore, in another approach, a novel framework is proposed for automatically replacing advertisement content in soccer videos using deep learning strategies. For this purpose, a U-Net architecture (an image segmentation convolutional neural network) is applied for content segmentation and detection. Subsequently, after reconstructing the segmented content in the video frames (accounting for the apparent loss in detection), the unwanted content is replaced by new content using a homography mapping procedure. In addition, in another research work, a novel video compression framework is presented using autoencoder networks that encode and decode videos using less chroma information than luma information. For this purpose, instead of converting Y'CbCr 4:2:2/4:2:0 videos to and from RGB 4:4:4, the video is kept in Y'CbCr 4:2:2/4:2:0, and the luma and chroma channels are merged after the luma is downsampled to match the chroma size. The decoder performs the inverse operation. The performance of these models is evaluated using the CPSNR, MS-SSIM, and VMAF metrics. The experiments reveal that, compared to video compression involving conversion to and from RGB 4:4:4, the proposed method increases the video quality by about 5.5% for Y'CbCr 4:2:2 and 8.3% for Y'CbCr 4:2:0, while reducing the amount of computation by nearly 37% for Y'CbCr 4:2:2 and 40% for Y'CbCr 4:2:0.
The thread that ties these approaches together is the reconstruction of video and light field frames in the face of different problems: loss of information, blur in the frames, residual noise after reconstruction, unwanted content, excessive data size and high computational overhead. In three of the proposed approaches, we use the Plug-and-Play ADMM model, for the first time in the reconstruction of videos and light fields, to address both information retrieval in the frames and the removal of noise and blur at the same time. In two of the proposed models, we apply sparse dictionary learning to reduce the data dimension and to represent frame patches as efficient linear combinations of basis patches. Two of the proposed approaches were developed in collaboration with industry, using deep learning frameworks to handle large sets of features and to learn high-level features from the data.
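Plug-and-Play ADMM replaces the proximal step of the prior in the ADMM iterations with an off-the-shelf denoiser. A minimal sketch, using a soft-thresholding denoiser on a synthetic sparse recovery problem (all parameter values are illustrative, not those used in the thesis):

```python
import numpy as np

def pnp_admm(y, A, denoiser, rho=1.0, iters=100):
    """Plug-and-Play ADMM sketch for min ||Ax - y||^2 with an implicit
    prior: the prior's proximal step is the plugged-in denoiser."""
    m, n = A.shape
    x = np.zeros(n); v = np.zeros(n); u = np.zeros(n)
    # Precompute the data-fidelity solve (A^T A + rho I)^{-1}.
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Aty = A.T @ y
    for _ in range(iters):
        x = M @ (Aty + rho * (v - u))     # quadratic data-fit step
        v = denoiser(x + u)               # plugged-in denoiser as prior
        u = u + x - v                     # dual update
    return v

# Soft-thresholding denoiser: with this choice, PnP-ADMM reduces to
# ordinary ADMM for the LASSO.
soft = lambda z, t=0.05: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.normal(size=(80, 100)) / np.sqrt(80)
x0 = np.zeros(100); x0[:5] = 3.0
y = A @ x0 + 0.01 * rng.normal(size=80)
xhat = pnp_admm(y, A, soft)
```

Swapping `soft` for a learned or patch-based video denoiser (as in the thesis's video and light field settings) changes only the `denoiser` argument, which is the appeal of the Plug-and-Play framing.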

    Unsupervised learning in high-dimensional space

    Thesis (Ph.D.), Boston University. In machine learning, the problem of unsupervised learning is that of trying to explain key features and find hidden structures in unlabeled data. In this thesis we focus on three unsupervised learning scenarios: graph-based clustering with imbalanced data, point-wise anomaly detection, and anomalous cluster detection on graphs. In the first part we study spectral clustering, a popular graph-based clustering technique. We investigate why spectral clustering performs badly on imbalanced and proximal data. We then propose the partition constrained minimum cut (PCut) framework, based on a novel parametric graph construction method, which is shown to adapt to different degrees of data imbalance. We analyze the limit cut behavior of our approach and demonstrate significant performance improvements through clustering and semi-supervised learning experiments on imbalanced data. [TRUNCATED]
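For context, plain spectral clustering, the baseline whose behavior on imbalanced data the thesis analyzes, can be sketched in a few lines; this is the standard normalized-Laplacian bipartition, not the proposed PCut framework:

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Minimal spectral clustering sketch: build an RBF affinity graph,
    form the symmetric normalized Laplacian, and split the data by the
    sign of the eigenvector of the second-smallest eigenvalue (the
    Fiedler vector)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))          # RBF affinity matrix
    np.fill_diagonal(W, 0.0)
    deg = W.sum(1)
    # L_sym = I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(X)) - W / np.sqrt(deg)[:, None] / np.sqrt(deg)[None, :]
    vals, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (20, 2)),   # two well-separated blobs
               rng.normal(5, 0.3, (20, 2))])
labels = spectral_bipartition(X)
```

On balanced, well-separated clusters like this the sign split is clean; the thesis's point is that with imbalanced or proximal clusters the fixed graph construction fails, motivating the parametric construction in PCut.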

    Critical properties of disordered XY model on sparse random graphs

    This thesis focuses on the XY model, the simplest continuous spin model, used to describe numerous physical systems, from random lasers to superconductors, from synchronization problems to superfluids. It is studied for different sources of quenched disorder: random couplings, random fields, or both.
The belief propagation algorithm and the cavity method are exploited to solve the model on the sparse topology provided by Bethe lattices. It is found that the discretized version of the XY model, the so-called Q-state clock model, provides a reliable and efficient proxy for the continuous model, with an error going to zero exponentially in Q, implying a remarkable speedup in numerical simulations. The low-temperature solution of the spin glass XY model yields interesting and unexpected results, being far more unstable toward the replica symmetry broken phase than discrete models. Moreover, the random field XY model also possesses this replica symmetry broken phase, in contrast to the sparse random field Ising model. Then, the instabilities of the spin glass XY model in a field are characterized, finding different critical lines according to the different symmetries of the external field. Finally, the inherent structures of the energy landscape of the spin glass XY model in a random field are described, finding a connection between the localization of soft modes studied via the Hessian and the replica symmetry breaking on sparse graphs studied via belief propagation.
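The exponential-in-Q accuracy of the clock approximation can be illustrated on a single XY spin in a field: its partition function is an integral of a periodic analytic function, the Q-state clock model replaces it with a Q-point equally spaced sum, and such sums converge exponentially fast in Q. A toy numerical check (parameter values illustrative):

```python
import numpy as np

def clock_partition(beta_h, Q):
    """Single-spin partition function of the XY model in a field,
    Z = (1/2pi) * integral of exp(beta_h * cos(theta)) d(theta),
    approximated by the Q-state clock model (a Q-point sum)."""
    theta = 2 * np.pi * np.arange(Q) / Q
    return np.mean(np.exp(beta_h * np.cos(theta)))

# Use a very fine grid as the "continuum" reference value.
exact = clock_partition(1.0, 4096)
errs = [abs(clock_partition(1.0, Q) - exact) for Q in (4, 8, 16)]
```

Doubling Q here drops the error by many orders of magnitude, which is why the thesis can run the cheap discrete clock model in place of the continuous XY model.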

    Computational methods to improve genome assembly and gene prediction

    DNA sequencing is used to read the nucleotides composing the genetic material of individual organisms. As 2nd-generation sequencing technologies offering high throughput at a feasible cost have matured, sequencing has permeated nearly all areas of biological research. Through a combination of large-scale projects led by consortia and smaller endeavors led by individual labs, the flood of sequencing data will continue, which should provide major insights into how genomes produce physical characteristics, including disease, and evolve. To realize this potential, computer science is required to develop bioinformatics pipelines that efficiently and accurately process and analyze large and noisy datasets. Here, I focus on two crucial bioinformatics applications: the assembly of a genome from sequencing reads, and protein-coding gene prediction. In genome assembly, we form large contiguous genomic sequences from the short sequence fragments generated by current machines. Starting from the raw sequences, we developed software called Quake that corrects sequencing errors more accurately than previous programs by using k-mer coverage and probabilistic modeling of sequencing errors. My experiments show that correcting errors with Quake improves genome assembly and leads to the detection of more polymorphisms in re-sequencing studies. For post-assembly analysis, we designed a method to detect a particular type of mis-assembly in which the two copies of each chromosome in a diploid genome diverge. We found thousands of examples in each of the chimpanzee, cow, and chicken public genome assemblies that created false segmental duplications. Shotgun sequencing of environmental DNA (often called metagenomics) has shown tremendous potential both to discover unknown microbes and to explore complex environments.
We developed software called Scimm that clusters metagenomic sequences based on composition, in an unsupervised fashion, more accurately than previous approaches. Finally, we extended an approach for predicting protein-coding genes on whole genomes to metagenomic sequences by adding new discriminative features and augmenting the task with taxonomic classification and clustering of the sequences. The program, called Glimmer-MG, predicts genes more accurately than all previous methods. By adding a model for sequencing errors, which also allows the program to predict insertions and deletions, accuracy improves significantly on error-prone sequences.
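The k-mer-coverage idea behind Quake can be sketched in a few lines: trusted k-mers appear many times across reads, while k-mers containing a sequencing error are rare. This toy version uses a fixed count cutoff, whereas Quake itself fits a probabilistic model to the k-mer coverage histogram to set the threshold:

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count every length-k substring across all reads."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def likely_error_kmers(counts, cutoff=2):
    # Fixed illustrative cutoff: k-mers seen fewer than `cutoff` times
    # are flagged as likely containing a sequencing error.
    return {km for km, c in counts.items() if c < cutoff}

# Three identical reads plus one read with a final-base error (T -> A).
reads = ["ACGTACGT", "ACGTACGA", "ACGTACGT", "ACGTACGT"]
counts = kmer_counts(reads, k=4)
errors = likely_error_kmers(counts)
```

Only the k-mer covering the erroneous base is low-coverage, so localizing and correcting the error amounts to finding the minimal set of base changes that makes all of a read's k-mers trusted.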