    A Novel Family of Adaptive Filtering Algorithms Based on The Logarithmic Cost

    We introduce a novel family of adaptive filtering algorithms based on a relative logarithmic cost. The new family intrinsically combines higher- and lower-order measures of the error into a single continuous update based on the error magnitude. We introduce important members of this family, such as the least mean logarithmic square (LMLS) and least logarithmic absolute difference (LLAD) algorithms, which improve the convergence performance of the conventional algorithms. However, our approach and analysis are generic and cover other well-known cost functions, as described in the paper. The LMLS algorithm achieves convergence performance comparable to the least mean fourth (LMF) algorithm and extends the stability bound on the step size. The LLAD and least mean square (LMS) algorithms demonstrate similar convergence performance in impulse-free noise environments, while the LLAD algorithm is robust against impulsive interference and outperforms the sign algorithm (SA). We analyze the transient, steady-state and tracking performance of the introduced algorithms and demonstrate that the theoretical analyses match the simulation results. We show the extended stability bound of the LMLS algorithm and analyze the robustness of the LLAD algorithm against impulsive interference. Finally, we demonstrate the performance of our algorithms in different scenarios through numerical examples.
    Comment: Submitted to IEEE Transactions on Signal Processing
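
    The abstract does not reproduce the cost function or the update equations; the following is a minimal sketch of how such updates can look, assuming the relative logarithmic cost J(e) = F(e) - ln(1 + F(e)) with its design parameter set to 1. The recursions are my reconstruction, not the paper's verbatim algorithms.

```python
import numpy as np

# For J(e) = F(e) - ln(1 + F(e)), the stochastic gradient scales the usual
# error nonlinearity F'(e) by F(e) / (1 + F(e)), which is what blends the
# lower- and higher-order behavior into a single continuous update:
#   LMLS (F(e) = e^2): small |e| -> LMF-like (e^3), large |e| -> LMS-like (e)
#   LLAD (F(e) = |e|): small |e| -> LMS-like (e),  large |e| -> SA-like (sign e)

def lmls_step(w, x, d, mu=0.01):
    """One least mean logarithmic square (LMLS) update."""
    e = d - x @ w
    return w + mu * x * e**3 / (1.0 + e**2)

def llad_step(w, x, d, mu=0.01):
    """One least logarithmic absolute difference (LLAD) update."""
    e = d - x @ w
    return w + mu * x * e / (1.0 + abs(e))

# Toy system-identification run.
rng = np.random.default_rng(0)
w_true = rng.standard_normal(5)
w = np.zeros(5)
for _ in range(5000):
    x = rng.standard_normal(5)
    d = x @ w_true + 0.01 * rng.standard_normal()
    w = lmls_step(w, x, d)
print(np.linalg.norm(w - w_true))
```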

    Transient Analysis of Adaptive Filters Using a General Framework

    Employing a recently introduced framework in which a large number of adaptive filter algorithms can be viewed as special cases, we present a generalized transient analysis. An important implication is that, while the theoretical analysis is performed for a generic filter coefficient update equation, the results are directly applicable to a large range of adaptive filter algorithms simply by specifying some parameters of this generic update equation. In particular, we point out that theoretical learning curves for the Least Mean Square (LMS), Normalized Least Mean Square (NLMS), the Affine Projection Algorithm (APA) and its relatives, as well as the Recursive Least Squares (RLS) algorithm, are obtained as special cases of a general result. Subsequently, the recently introduced Fast Euclidean Direction Search (FEDS) algorithms as well as the Pradhan-Reddy subband adaptive filter (PRSAF) are used as non-trivial examples when we demonstrate, through an experimental evaluation, the usefulness and versatility of the proposed approach to adaptive filter transient analysis.
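
    To make the role of the generic filter coefficient update equation concrete, here is a minimal sketch in my own notation (the framework's exact parameterization may differ): the algorithm choice reduces to the choice of a single matrix in the update.

```python
import numpy as np

# Many of the listed algorithms fit the form
#     w_{n+1} = w_n + mu * A_n @ x_n * e_n,   e_n = d_n - x_n' w_n,
# and the algorithm is selected purely through the matrix A_n.

def generic_step(w, x, d, mu, A):
    e = d - x @ w
    return w + mu * (A @ x) * e

def lms_A(x, state=None):                 # A_n = I
    return np.eye(x.size), state

def nlms_A(x, state=None, eps=1e-8):      # A_n = I / (||x||^2 + eps)
    return np.eye(x.size) / (x @ x + eps), state

def rls_A(x, P, lam=0.99):                # A_n = running inverse covariance
    Px = P @ x
    P = (P - np.outer(Px, Px) / (lam + x @ Px)) / lam
    return P, P

# With A_n from rls_A and mu = 1 this is exactly RLS; APA corresponds to a
# block version using the last K regressors.  A transient analysis carried
# out once on generic_step therefore covers all of these algorithms.
rng = np.random.default_rng(0)
w_true = rng.standard_normal(4)
w, P = np.zeros(4), np.eye(4)
for _ in range(1000):
    x = rng.standard_normal(4)
    d = x @ w_true + 0.01 * rng.standard_normal()
    A, P = rls_A(x, P)                    # swap in lms_A / nlms_A (small mu)
    w = generic_step(w, x, d, mu=1.0, A=A)
print(np.linalg.norm(w - w_true))
```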

    Compressive Diffusion Strategies Over Distributed Networks for Reduced Communication Load

    We study compressive diffusion strategies over distributed networks based on the diffusion implementation and adaptive extraction of information from the compressed diffusion data. We demonstrate that one can achieve performance comparable to full information exchange configurations even if the diffused information is compressed into a scalar or a single bit. To this end, we provide a complete performance analysis for the compressive diffusion strategies. We analyze the transient, steady-state and tracking performance of the configurations in which the diffused data is compressed into a scalar or a single bit. We propose a new adaptive combination method that further improves the convergence performance of the compressive diffusion strategies. In the new method, we introduce one more degree of freedom in the combination matrix and adapt it using the conventional mixture approach in order to enhance the convergence performance for any combination rule used in the full diffusion configuration. We demonstrate that our theoretical analysis closely follows the ensemble-averaged results in our simulations. We provide numerical examples showing the improved convergence performance of the new adaptive combination method.
    Comment: Submitted to IEEE Transactions on Signal Processing
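
    As a rough, hypothetical illustration of why a single scalar per iteration can carry enough information, consider the following sketch; the recursions and names are my reconstruction of the mechanism described above, not the paper's exact algorithm.

```python
import numpy as np

# The sender never transmits its M-dimensional intermediate estimate psi;
# it sends one scalar projection per iteration, and the receiver adaptively
# rebuilds psi from the stream of projections.

M = 8
rng = np.random.default_rng(1)
psi = rng.standard_normal(M)       # sender's intermediate estimate
psi_hat = np.zeros(M)              # receiver's running reconstruction
eta = 0.5

for t in range(2000):
    h = rng.standard_normal(M)     # projection direction known to both sides
    z = h @ psi                    # the ONLY transmitted quantity (a scalar)
    # Receiver: NLMS-type step pulling h @ psi_hat toward the received z.
    psi_hat += eta * h * (z - h @ psi_hat) / (h @ h)
    # Single-bit variant: transmit sign(z - h @ psi_hat) instead of z and use
    # that sign (times a step size) as the correction term.

print(np.linalg.norm(psi - psi_hat))   # reconstruction error becomes small
```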

    Diffusion Adaptation over Networks under Imperfect Information Exchange and Non-stationary Data

    Adaptive networks rely on in-network and collaborative processing among distributed agents to deliver enhanced performance in estimation and inference tasks. Information is exchanged among the nodes, usually over noisy links. The combination weights that the nodes use to fuse information from their neighbors play a critical role in influencing the adaptation and tracking abilities of the network. This paper first investigates the mean-square performance of general adaptive diffusion algorithms in the presence of various sources of imperfect information exchange, quantization errors, and model non-stationarities. Among other results, the analysis reveals that link noise over the regression data modifies the dynamics of the network evolution in a distinct way and leads to biased estimates in steady-state. The analysis also reveals how the network mean-square performance depends on the combination weights. We use these observations to show how the combination weights can be optimized and adapted. Simulation results illustrate and match the theoretical findings well.
    Comment: 36 pages, 7 figures, to appear in IEEE Transactions on Signal Processing, June 2012
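
    A minimal sketch of the setting, assuming an adapt-then-combine (ATC) diffusion LMS and zero-mean noise on the exchanged estimates (all network values are illustrative):

```python
import numpy as np

# ATC diffusion LMS with noisy exchange links.  Here the noise sits on the
# exchanged weight estimates; the analysis above also covers noise on the
# exchanged regression data, which is what produces the steady-state bias.

rng = np.random.default_rng(2)
N, M = 10, 4                             # nodes, filter length
w_true = rng.standard_normal(M)
A = np.full((N, N), 1.0 / N)             # combination weights (uniform)
W = np.zeros((N, M))                     # one row per node
mu, link_noise = 0.02, 0.05

for t in range(3000):
    Psi = np.empty_like(W)
    for k in range(N):                   # adapt: local LMS step per node
        x = rng.standard_normal(M)
        d = x @ w_true + 0.05 * rng.standard_normal()
        Psi[k] = W[k] + mu * x * (d - x @ W[k])
    noisy = Psi + link_noise * rng.standard_normal(Psi.shape)
    W = A @ noisy                        # combine noisy neighbor estimates

print(np.linalg.norm(W - w_true, axis=1).mean())   # network MSD proxy
```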

    On the Influence of Informed Agents on Learning and Adaptation over Networks

    Adaptive networks consist of a collection of agents with adaptation and learning abilities. The agents interact with each other on a local level and diffuse information across the network through their collaborations. In this work, we consider two types of agents: informed agents and uninformed agents. The former receive new data regularly and perform consultation and in-network tasks, while the latter do not collect data and only participate in the consultation tasks. We examine the performance of adaptive networks as a function of the proportion of informed agents and their distribution in space. The results reveal some interesting and surprising trade-offs between convergence rate and mean-square performance. In particular, among other results, it is shown that the performance of adaptive networks does not necessarily improve with a larger proportion of informed agents. Instead, it is established that the larger the proportion of informed agents, the faster the convergence rate of the network, albeit at the expense of some deterioration in mean-square performance. The results further establish that uninformed agents play an important role in determining the steady-state performance of the network, and that it is preferable to keep some of the highly connected agents uninformed. The arguments reveal an important interplay among three factors: the number and distribution of informed agents in the network, the convergence rate of the learning process, and the estimation accuracy in steady-state. Expressions that quantify these relations are derived, and simulations are included to support the theoretical findings. We further apply the results to two models that are widely used to represent behavior over complex networks, namely, the Erdős-Rényi and scale-free models.
    Comment: 35 pages, 8 figures
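
    A minimal sketch of the informed/uninformed split described above, with assumed topology, step size, and combination weights (not the paper's experimental setup):

```python
import numpy as np

# Informed agents adapt on fresh data and combine; uninformed agents skip
# adaptation and only take part in the combination (consultation) step.

rng = np.random.default_rng(3)
N, M, mu = 20, 4, 0.05
informed = rng.random(N) < 0.3           # ~30% of the agents receive data
w_true = rng.standard_normal(M)
A = np.full((N, N), 1.0 / N)             # uniform combination weights
W = np.zeros((N, M))

for t in range(2000):
    Psi = W.copy()
    for k in np.flatnonzero(informed):   # only informed agents adapt
        x = rng.standard_normal(M)
        d = x @ w_true + 0.1 * rng.standard_normal()
        Psi[k] = W[k] + mu * x * (d - x @ W[k])
    W = A @ Psi                          # every agent combines

# Sweeping the informed fraction exposes the rate-vs-MSD trade-off above.
print(np.linalg.norm(W - w_true, axis=1).mean())
```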

    Diffusion Strategies Outperform Consensus Strategies for Distributed Estimation over Adaptive Networks

    Adaptive networks consist of a collection of nodes with adaptation and learning abilities. The nodes interact with each other on a local level and diffuse information across the network to solve estimation and inference tasks in a distributed manner. In this work, we compare the mean-square performance of two main strategies for distributed estimation over networks: consensus strategies and diffusion strategies. The analysis in the paper confirms that under constant step-sizes, diffusion strategies allow information to diffuse more thoroughly through the network, and this property has a favorable effect on the evolution of the network: diffusion networks are shown to converge faster and reach lower mean-square deviation than consensus networks, and their mean-square stability is insensitive to the choice of the combination weights. In contrast, and surprisingly, it is shown that consensus networks can become unstable even if all the individual nodes are stable and able to solve the estimation task on their own. When this occurs, cooperation over the network leads to a catastrophic failure of the estimation task. This phenomenon does not occur for diffusion networks: we show that stability of the individual nodes always ensures stability of the diffusion network, irrespective of the combination topology. Simulation results support the theoretical findings.
    Comment: 37 pages, 7 figures, to appear in IEEE Transactions on Signal Processing, 2012
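
    A minimal sketch of the structural difference between the two strategies, using the ATC form of diffusion and illustrative network values:

```python
import numpy as np

# Consensus combines the PREVIOUS iterates and adds the gradient correction
# to that combination; diffusion (ATC form) adapts first and then combines
# the freshly adapted iterates, letting information diffuse more thoroughly
# per iteration.

def network_step(W, A, mu, X, D, strategy):
    E = D - np.einsum('km,km->k', X, W)        # per-node a-priori errors
    G = mu * X * E[:, None]                    # per-node gradient corrections
    if strategy == 'diffusion':                # adapt-then-combine
        return A @ (W + G)
    return A @ W + G                           # consensus

rng = np.random.default_rng(4)
N, M, mu = 10, 4, 0.05
w_true = rng.standard_normal(M)
A = np.full((N, N), 1.0 / N)                   # uniform combination weights
Wd, Wc = np.zeros((N, M)), np.zeros((N, M))

for t in range(2000):
    X = rng.standard_normal((N, M))
    D = X @ w_true + 0.1 * rng.standard_normal(N)
    Wd = network_step(Wd, A, mu, X, D, 'diffusion')
    Wc = network_step(Wc, A, mu, X, D, 'consensus')

print('diffusion MSD proxy:', np.linalg.norm(Wd - w_true, axis=1).mean())
print('consensus MSD proxy:', np.linalg.norm(Wc - w_true, axis=1).mean())
```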