
    Digital Filter Design Using Improved Teaching-Learning-Based Optimization

    Digital filters are an important part of digital signal processing systems. According to the length of their impulse responses, digital filters are divided into finite impulse response (FIR) and infinite impulse response (IIR) filters. An FIR digital filter is easier to implement than an IIR digital filter because of its linear phase and stability properties; for an IIR digital filter, the poles in the denominator are subject to stability constraints. In addition, a digital filter can be categorized as one-dimensional or multi-dimensional according to the dimensions of the signal to be processed. Traditional design methods for IIR digital filters, however, tend to fall into local optima and converge slowly. The Teaching-Learning-Based Optimization (TLBO) algorithm has proven beneficial in a wide range of engineering applications. To this end, this dissertation focuses on using TLBO and its improved variants to design five types of digital filters: linear phase FIR digital filters, multiobjective general FIR digital filters, multiobjective IIR digital filters, two-dimensional (2-D) linear phase FIR digital filters, and 2-D nonlinear phase FIR digital filters. Among them, the linear phase FIR, 2-D linear phase FIR, and 2-D nonlinear phase FIR digital filters are optimized with single-objective TLBO algorithms; the multiobjective general FIR digital filters are optimized with a multiobjective non-dominated TLBO (MOTLBO) algorithm; and the multiobjective IIR digital filters are optimized with MOTLBO using Euclidean distance. The design results for the five filter types are compared to those obtained by other state-of-the-art design methods. Two major improvements are proposed to enhance the performance of the standard TLBO algorithm. The first applies gradient-based learning in place of the TLBO learner phase to reduce the approximation error and CPU time without sacrificing design accuracy for linear phase FIR digital filter design. The second incorporates the Manhattan distance to simplify the procedure of the MOTLBO algorithm for general FIR digital filter design. The design results obtained with the two improvements demonstrate their efficiency and effectiveness.
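
    To make the optimization loop concrete, below is a minimal sketch of the standard TLBO teacher and learner phases applied to a least-squares linear-phase FIR design objective. It is not the dissertation's improved (gradient-based) variant; the filter length, population size, iteration count, and lowpass specification are illustrative assumptions.

```python
import numpy as np

def fir_response(h, w):
    """Magnitude response of a symmetric (linear-phase) FIR filter.
    h holds the first half of the coefficients; the full filter is h mirrored."""
    full = np.concatenate([h, h[-2::-1]])               # odd-length, symmetric
    n = np.arange(len(full))
    return np.abs(np.exp(-1j * np.outer(w, n)) @ full)

def design_error(h, w, desired):
    """Least-squares approximation error against the desired response."""
    return np.sum((fir_response(h, w) - desired) ** 2)

def tlbo_fir(desired, w, half_len=11, pop=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-0.5, 0.5, (pop, half_len))          # learner population
    F = np.array([design_error(x, w, desired) for x in X])
    for _ in range(iters):
        # teacher phase: move learners toward the current best solution
        teacher = X[np.argmin(F)]
        mean = X.mean(axis=0)
        TF = rng.integers(1, 3)                          # teaching factor in {1, 2}
        for i in range(pop):
            cand = X[i] + rng.random(half_len) * (teacher - TF * mean)
            fc = design_error(cand, w, desired)
            if fc < F[i]:
                X[i], F[i] = cand, fc
        # learner phase: pairwise interaction between learners
        for i in range(pop):
            j = rng.integers(pop)
            if j == i:
                continue
            step = (X[j] - X[i]) if F[j] < F[i] else (X[i] - X[j])
            cand = X[i] + rng.random(half_len) * step
            fc = design_error(cand, w, desired)
            if fc < F[i]:
                X[i], F[i] = cand, fc
    best = X[np.argmin(F)]
    return np.concatenate([best, best[-2::-1]])          # full symmetric filter

# Example: length-21 linear-phase lowpass filter with cutoff at 0.4*pi
w = np.linspace(0, np.pi, 128)
desired = (w <= 0.4 * np.pi).astype(float)
h = tlbo_fir(desired, w)
```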

    Model-based Analysis and Processing of Speech and Audio Signals


    Digital Filter Design Using Improved Artificial Bee Colony Algorithms

    Digital filters are often used in digital signal processing applications. The design objective of a digital filter is to find the optimal set of filter coefficients that satisfies the desired specifications of magnitude and group delay responses. Evolutionary algorithms are population-based meta-heuristic algorithms inspired by the biological behaviors of species. Compared to gradient-based optimization algorithms such as steepest descent and Newton-like methods, these bio-inspired algorithms have the advantages of not getting stuck at local optima and of being independent of the starting point in the solution space. Their limitations include the presence of control parameters, problem-specific tuning procedures, premature convergence, and slower convergence rates. The artificial bee colony (ABC) algorithm is a swarm-based meta-heuristic search algorithm inspired by the foraging behavior of honey bee colonies, with the benefit of relatively few control parameters. In its original form, the ABC algorithm has certain limitations, such as a low convergence rate and an insufficient balance between exploration and exploitation in the search equations. In this dissertation, an ABC-AMR algorithm is proposed that incorporates an adaptive modification rate (AMR) into the original ABC algorithm to increase the convergence rate; the AMR adjusts the balance between exploration and exploitation by adaptively determining the number of parameters to be updated in every iteration. A constrained ABC-AMR algorithm is also developed for solving constrained optimization problems. Many real-world problems require the simultaneous optimization of more than one conflicting objective. Multiobjective (MO) optimization produces a set of feasible solutions, called the Pareto front, instead of a single optimum solution. If a decision maker's preferences can be incorporated during the optimization process, the search can be confined to the region of interest instead of covering the entire solution space. In this dissertation, two algorithms are developed for such incorporation. The first is a reference-point-based MOABC algorithm in which a decision maker's preferences are included in the optimization process as a reference point. The second is a physical-programming-based MOABC algorithm in which physical programming is used to set the decision maker's region of interest. The four developed algorithms are applied to digital filter design problems. The ABC-AMR algorithm is used to design Types 3 and 4 linear phase FIR differentiators, and the results are compared to those obtained by the original ABC algorithm, three improved ABC algorithms, and the Parks-McClellan algorithm. The constrained ABC-AMR algorithm is applied to the design of sparse Type 1 linear phase FIR filters of orders 60, 70, and 80, and the results are compared to three state-of-the-art design methods. The reference-point-based multiobjective ABC algorithm is used to design asymmetric lowpass, highpass, bandpass, and bandstop FIR filters, and the results are compared to those obtained by the preference-based multiobjective differential evolution algorithm. The physical-programming-based multiobjective ABC algorithm is used to design IIR lowpass, highpass, and bandpass filters, and the results are compared to three state-of-the-art design methods. Based on the obtained design results, the four algorithms are shown to be competitive with the state-of-the-art design methods.
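
    As an illustration of how an adaptive modification rate can steer the ABC search, the sketch below implements a plain ABC loop in which the fraction of coordinates perturbed per update grows linearly over the run. The linear schedule, colony size, and abandonment limit are assumptions made for illustration; the dissertation's ABC-AMR adapts the modification rate by its own rule.

```python
import numpy as np

def abc_amr(obj, dim, bounds, colony=30, iters=300, limit=50, seed=0):
    """Minimal ABC sketch with an adaptive modification rate (AMR): the
    probability that each coordinate of a food source is perturbed grows
    linearly over the run (illustrative schedule only)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (colony, dim))       # food sources
    F = np.array([obj(x) for x in X])
    trials = np.zeros(colony, dtype=int)

    def neighbour(i, mr):
        mask = rng.random(dim) < mr              # AMR decides how many coords change
        if not mask.any():
            mask[rng.integers(dim)] = True
        k = rng.integers(colony)
        phi = rng.uniform(-1, 1, dim)
        v = X[i].copy()
        v[mask] = X[i, mask] + phi[mask] * (X[i, mask] - X[k, mask])
        return np.clip(v, lo, hi)

    def try_improve(i, mr):
        v = neighbour(i, mr)
        fv = obj(v)
        if fv < F[i]:
            X[i], F[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for t in range(iters):
        mr = 0.1 + 0.8 * t / iters               # adaptive modification rate
        for i in range(colony):                  # employed bee phase
            try_improve(i, mr)
        fit = 1.0 / (1.0 + F - F.min())          # onlooker phase: fitness-proportional
        prob = fit / fit.sum()
        for i in rng.choice(colony, colony, p=prob):
            try_improve(i, mr)
        worn = np.argmax(trials)                 # scout phase: abandon stalled sources
        if trials[worn] > limit:
            X[worn] = rng.uniform(lo, hi, dim)
            F[worn], trials[worn] = obj(X[worn]), 0
    return X[np.argmin(F)], F.min()

# Example: minimise a simple sphere function in 10 dimensions
best, fbest = abc_amr(lambda x: np.sum(x ** 2), dim=10, bounds=(-5.0, 5.0))
```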

    Sparsity in Linear Predictive Coding of Speech


    Novel LDPC coding and decoding strategies: design, analysis, and algorithms

    In this digital era, modern communication systems play an essential part in nearly every aspect of life, with examples ranging from mobile networks and satellite communications to the Internet and data transfer. Unfortunately, all practical communication systems are noisy, so we can either improve the physical characteristics of the channel or adopt a systematic solution, i.e., error control coding. The history of error control coding dates back to 1948, when Claude Shannon published his celebrated work "A Mathematical Theory of Communication", which built a framework for channel coding, source coding, and information theory. For the first time, there was evidence for the existence of channel codes that enable reliable communication as long as the information rate of the code does not surpass the so-called channel capacity. Nevertheless, in the following 60 years no codes were shown to closely approach this theoretical bound until the arrival of turbo codes and the renaissance of LDPC codes. As a strong contender to turbo codes, LDPC codes offer the advantages of parallel implementation of decoding algorithms and, more crucially, graphical construction of codes. However, LDPC codes also have drawbacks, e.g., significant performance degradation due to the presence of short cycles and very high decoding latency. This thesis focuses on the practical realisation of finite-length LDPC codes and devises algorithms to tackle these issues. Firstly, rate-compatible (RC) LDPC codes with short/moderate block lengths are investigated by optimising the graphical structure of the Tanner graph (TG), in order to achieve a variety of code rates (0.1 < R < 0.9) using only a single encoder-decoder pair. As is widely recognised in the literature, the presence of short cycles considerably reduces the overall performance of LDPC codes, which significantly limits their application in communication systems. To reduce the impact of short cycles effectively across different code rates, algorithms for counting short cycles and a graph-related metric called the extrinsic message degree (EMD) are applied in the development of the proposed puncturing and extension techniques. A complete set of simulations demonstrates that the proposed RC designs largely minimise the performance loss caused by puncturing or extension. Secondly, at the decoding end, novel decoding strategies are studied that compensate for the negative effect of short cycles by reweighting part of the extrinsic messages exchanged between the nodes of a TG. The proposed reweighted belief propagation (BP) algorithms aim to achieve efficient decoding, i.e., accurate signal reconstruction and low decoding latency, for LDPC codes via various design methods. A variable factor appearance probability belief propagation (VFAP-BP) algorithm is proposed, along with an improved version called the locally-optimized reweighted (LOW)-BP algorithm; both can be employed to enhance decoding performance significantly for regular and irregular LDPC codes. More importantly, the optimisation of the reweighting parameters takes place only in an offline stage, so no additional computational complexity is incurred during real-time decoding. Lastly, two iterative detection and decoding (IDD) receivers are presented for multiple-input multiple-output (MIMO) systems operating in a spatial multiplexing configuration.
    The QR decomposition (QRD)-type IDD receivers utilise the proposed multiple-feedback (MF)-QRD or variable-M (VM)-QRD detection algorithm with a standard BP decoding algorithm, while the knowledge-aided (KA)-type receivers are equipped with a simple soft parallel interference cancellation (PIC) detector and the proposed reweighted BP decoders. In the uncoded scenario, the proposed MF-QRD and VM-QRD algorithms approach optimal performance at reduced computational complexity. In the LDPC-coded scenario, simulation results show that the proposed QRD-type IDD receivers offer near-optimal performance after a small number of detection/decoding iterations, and the proposed KA-type IDD receivers significantly outperform receivers using alternative decoding algorithms while requiring similar decoding complexity.
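
    The sketch below shows flooding sum-product decoding of a binary LDPC code with a single global factor rho scaling the extrinsic check-to-variable messages at the variable nodes; rho = 1 recovers standard BP. This fixed scaling is only a simplified stand-in for the per-edge, offline-optimised reweighting of the proposed VFAP-BP and LOW-BP algorithms, and the toy parity-check matrix and LLRs are assumptions.

```python
import numpy as np

def reweighted_bp_decode(H, llr, rho=0.8, max_iter=50):
    """Flooding sum-product decoding of a binary LDPC code with parity-check
    matrix H (0/1 array). Check-to-variable messages are scaled by rho at the
    variable nodes; rho = 1 gives standard BP, rho < 1 damps the extrinsic
    information (simplified stand-in for per-edge reweighting)."""
    m, n = H.shape
    check_sets = [np.flatnonzero(H[c]) for c in range(m)]
    msg_vc = np.where(H == 1, llr, 0.0)          # variable-to-check messages
    msg_cv = np.zeros((m, n))                    # check-to-variable messages

    for _ in range(max_iter):
        # check-node update (tanh rule, leave-one-out products)
        for c in range(m):
            vs = check_sets[c]
            t = np.tanh(0.5 * msg_vc[c, vs])
            for idx, v in enumerate(vs):
                prod = np.prod(np.delete(t, idx))
                msg_cv[c, v] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # variable-node update with reweighted extrinsic check messages
        total = llr + rho * msg_cv.sum(axis=0)
        msg_vc = np.where(H == 1, total - rho * msg_cv, 0.0)
        # tentative hard decision and syndrome check
        x_hat = (total < 0).astype(int)
        if not np.any((H @ x_hat) % 2):
            break
    return x_hat

# Toy example: a small parity-check matrix and LLRs for a noisy all-zero codeword
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
llr = np.array([2.1, -0.3, 1.7, 1.2, 0.8, 1.5])  # positive LLR favours bit 0
print(reweighted_bp_decode(H, llr))
```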

    Image Deblurring Using Kernel-Guided Nonlocal Patches and Low-Rank Images

    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2016 (유석인). Blind image deblurring aims to restore a high-quality image from a blurry image. It has gained considerable attention in recent years because it involves many challenges in problem formulation, regularization, and optimization. From an optimization perspective, blind image deblurring is a severely ill-posed inverse problem; therefore, effective regularizations are required in order to obtain a high-quality latent image from a single blurred one. In this dissertation, we propose nonlocal regularizations to improve blind image deblurring. First, we propose to use nonlocal patches selected by similarity weighted by the kernel for the next blur-kernel estimation. Using these kernel-guided nonlocal patches, we impose a regularization that nonlocal patches should produce similar values after convolution, which improves the kernel estimation. Second, we propose to use a nonlocal low-rank image obtained from the composition of nonlocal similar patches. Using this nonlocal low-rank image, we impose a regularization that the latent image should be similar to it. A nonlocal low-rank image intrinsically contains less noise, so imposing this regularization improves the estimation of the latent image with less noise. We evaluated our method quantitatively and qualitatively by comparing it with several conventional blind deblurring methods. For the quantitative evaluation, we computed the sum of squared errors, peak signal-to-noise ratio, and structural similarity index. For blurry images without noise, our method was generally superior to the other methods; in particular, its results were sharper on structures and smoother in flat regions. For blurry and noisy images, our method substantially outperformed the conventional methods: most of them could not successfully estimate the blur-kernel, so the image blur was not removed, whereas our method overcomes the noise to estimate the blur-kernel successfully and restores a high-quality deblurred image with less noise.
    Table of contents: Chapter 1 Introduction (formulation of blind image deblurring; the use of kernel-guided nonlocal patches; the use of nonlocal low-rank images; overview). Chapter 2 Related Works (natural image priors: scale mixture of Gaussians, hyper-Laplacian distribution; avoiding the no-blur solution: marginalization over possible images, normalization of l1 by l2, alternating I and k approach; sparse representation; using sharp edges; handling noise). Chapter 3 Preliminary: Optimization (iterative reweighted least squares (IRLS): least-squares and weighted least-squares approximation, lp-norm approximation of overdetermined and underdetermined systems; optimization using conjugacy: the conjugate direction and conjugate gradient methods; the singular value thresholding algorithm). Chapter 4 Extracting Salient Structures (structure-texture decomposition with uniform and adaptive edge maps; enhancing structures and producing salient edges; analysis of the salient-edge extraction method). Chapter 5 Blind Image Deblurring using Nonlocal Patches (blur-kernel estimation using kernel-guided nonlocal patches with sparse, continuous, and nonlocal priors; interim image estimation using the nonlocal low-rank prior; multiscale implementation; latent image estimation). Chapter 6 Experimental Results (images with and without ground truth; analysis of preprocessing using denoising; analysis of the size of nonlocal patches; time performance). Chapter 7 Conclusion. Bibliography. Abstract in Korean.
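
    The nonlocal low-rank prior rests on singular value thresholding (SVT) of a matrix whose columns are similar patches. The sketch below shows a generic version of that construction: gather the patches most similar to a reference patch, stack them, soft-threshold the singular values, and take the low-rank estimate of the reference patch. Patch size, search radius, number of similar patches, and the threshold are illustrative assumptions, not the dissertation's settings.

```python
import numpy as np

def singular_value_threshold(M, tau):
    """Soft-threshold the singular values of M (the SVT operator)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def nonlocal_lowrank_patch(img, y, x, patch=8, search=10, n_similar=20, tau=5.0):
    """Estimate the patch at (y, x) by stacking its most similar neighbours
    (found in a local search window) as columns of a matrix and applying SVT."""
    H, W = img.shape
    ref = img[y:y + patch, x:x + patch].ravel()
    cands, dists = [], []
    for yy in range(max(0, y - search), min(H - patch, y + search) + 1):
        for xx in range(max(0, x - search), min(W - patch, x + search) + 1):
            p = img[yy:yy + patch, xx:xx + patch].ravel()
            cands.append(p)
            dists.append(np.sum((p - ref) ** 2))
    order = np.argsort(dists)[:n_similar]
    group = np.stack([cands[i] for i in order], axis=1)   # columns = similar patches
    group = singular_value_threshold(group, tau)
    return group[:, 0].reshape(patch, patch)              # low-rank estimate of the reference

# Example: clean up an 8x8 patch of a noisy synthetic gradient image
rng = np.random.default_rng(0)
img = np.tile(np.linspace(0, 255, 64), (64, 1)) + 10 * rng.standard_normal((64, 64))
restored = nonlocal_lowrank_patch(img, y=20, x=20)
```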

    Joint Channel Estimation and Detection for Multi-Carrier MIMO Communications

    In MIMO-OFDM systems, channel estimation and detection are crucial. Pilot-based channel estimation using basis expansion models (BEMs) is widely used to approximate the time-frequency variations of doubly-selective channels; BEMs can provide high estimation performance with a low computational load. Data-aided channel estimation outperforms pilot-based estimation: it iteratively improves the estimates using tentative data symbols and corresponding adaptive weights (reweighted channel estimation). These weights are computed assuming Gaussian data errors, an assumption that does not strictly hold for OFDM; in this thesis, however, it is shown to improve the channel estimation performance, and the reweighted estimation is shown to significantly outperform unweighted estimation. The commonly used mismatched receivers assume perfect channel estimates when detecting data symbols. However, due to the limited number of pilot symbols and to data errors, the channel estimates are imperfect, resulting in degraded detection performance. The optimal receiver, which requires no explicit channel estimation, significantly outperforms mismatched receivers, but its complexity is high. To reduce the complexity, a receiver that combines mismatched and optimal detection is proposed: the optimal detection is applied only to data symbols unreliably detected by the mismatched detector, identified using the weights computed in the reweighted estimator. The channel estimator and the optimal receiver require knowledge of the channel statistics, which are unavailable and difficult to acquire. To overcome this, an adaptive regularization based on the cross-validation criterion is introduced, which finds the regularization matrix providing the best channel estimates. The proposed receiver has a lower complexity than the optimal receiver and provides close-to-optimal detection performance without knowledge of the channel power delay profile (PDP). The adaptive regularization is also extended to joint estimation of the Doppler-delay spread and the channel: the Doppler and delay spreads corresponding to the optimal regularization are selected as their estimates. This approach outperforms other known techniques and provides channel estimation performance close to that obtained with perfect channel statistics.
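
    The sketch below illustrates BEM-based channel estimation in its simplest form: a complex-exponential (CE) BEM and a regularized least-squares fit of the BEM coefficients from a known training block over a time-varying multipath channel. The CE basis, the fixed diagonal regularizer, and the synthetic channel are assumptions made for illustration; the thesis's data-aided reweighting and cross-validation-based adaptive regularization are not reproduced here.

```python
import numpy as np

def ce_bem(N, Q):
    """Complex-exponential BEM basis: N time samples, Q basis functions
    centred around DC (a common choice for doubly-selective channels)."""
    n = np.arange(N)[:, None]
    q = np.arange(Q)[None, :] - (Q - 1) / 2
    return np.exp(2j * np.pi * n * q / N)            # shape (N, Q)

def bem_ls_estimate(y, x_train, L, Q, reg=1e-3):
    """Least-squares fit of BEM coefficients c[l, q] for a time-varying channel
    h_l[n] = sum_q c[l, q] * b_q[n], observed through known training symbols:
        y[n] = sum_l h_l[n] * x_train[n - l] + noise.
    The small diagonal regularizer stands in for the adaptive regularization
    discussed in the thesis (illustrative only)."""
    N = len(y)
    B = ce_bem(N, Q)
    cols = []
    for l in range(L):
        x_shift = np.concatenate([np.zeros(l, dtype=complex), x_train[:N - l]])
        for q in range(Q):
            cols.append(B[:, q] * x_shift)
    A = np.stack(cols, axis=1)                       # (N, L*Q) design matrix
    c = np.linalg.solve(A.conj().T @ A + reg * np.eye(L * Q), A.conj().T @ y)
    c = c.reshape(L, Q)
    h = (B @ c.T).T                                  # reconstructed taps, shape (L, N)
    return c, h

# Example: fit a 3-tap, Q = 5 BEM model to a synthetic training block
rng = np.random.default_rng(1)
N, L, Q = 128, 3, 5
x_train = np.sign(rng.standard_normal(N)) + 0j       # BPSK training symbols
true_h = rng.standard_normal((L, N)) * 0.1 + np.array([[1.0], [0.5], [0.2]])
y = sum(true_h[l] * np.concatenate([np.zeros(l), x_train[:N - l]]) for l in range(L))
c_hat, h_hat = bem_ls_estimate(y, x_train, L, Q)
```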