Generalized Minimum Error with Fiducial Points Criterion for Robust Learning
The conventional minimum error entropy (MEE) criterion has known limitations: it is insensitive to the error mean and leaves the location of the error probability density function undetermined. To overcome this, the MEE with fiducial points criterion (MEEF) was proposed. However, the efficacy of the MEEF is inconsistent because it relies on a fixed Gaussian kernel. In this paper, a generalized minimum error with fiducial points criterion (GMEEF) is presented by adopting the generalized Gaussian density (GGD) function as the kernel. The GGD extends the Gaussian distribution with a shape parameter that provides finer control over tail behavior and peakedness. In addition, because of the high computational complexity of the GMEEF criterion, quantization is introduced to notably lower the computational load of the GMEEF-type algorithms. Finally, the proposed criteria are applied to adaptive filtering, kernel recursive algorithms, and multilayer perceptrons. Several numerical simulations, covering system identification, acoustic echo cancellation, time series prediction, and supervised classification, indicate that the novel algorithms perform excellently.
Comment: 12 pages, 9 figures
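The GGD kernel at the heart of this criterion is easy to state. The following is a minimal sketch of the standard GGD form, p(e) = α / (2β Γ(1/α)) · exp(−|e/β|^α); the exact normalization and parameter names used in the paper may differ.

```python
import math

def ggd_kernel(e, alpha=2.0, beta=1.0):
    """Generalized Gaussian density kernel.

    alpha is the shape parameter controlling tail behavior and
    peakedness (alpha = 2 recovers the Gaussian, smaller alpha gives
    heavier tails); beta is a scale parameter.
    """
    coeff = alpha / (2.0 * beta * math.gamma(1.0 / alpha))
    return coeff * math.exp(-abs(e / beta) ** alpha)

# With alpha = 2 and beta = sqrt(2) * sigma, this reduces to the
# Gaussian kernel used by the conventional MEE/MEEF criteria.
```

Setting alpha = 1 yields a Laplacian kernel, which illustrates how the extra shape parameter trades peakedness against tail weight.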
Generalized correntropy induced metric based total least squares for sparse system identification
The total least squares (TLS) method has been successfully applied to system identification in the errors-in-variables (EIV) model, which efficiently describes systems whose input–output pairs are contaminated by noise. In this paper, we propose a new gradient-descent TLS filtering algorithm based on the generalized correntropy induced metric (GCIM), called GCIM-TLS, for sparse system identification. By introducing GCIM as a penalty term in the TLS problem, we achieve improved accuracy of sparse system identification. We also characterize the convergence behaviour of GCIM-TLS analytically. To reduce computational complexity, we use a first-order Taylor series expansion and derive a simplified version of GCIM-TLS. Simulation results verify the effectiveness of our proposed algorithms in sparse system identification.
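To make the role of the GCIM penalty concrete, here is an illustrative sketch: a GCIM-style sparsity penalty of the form Σ_i (1 − exp(−|w_i/β|^α)), which approaches the l0 norm as β shrinks, attached to a simple gradient-descent identification loop. This is an LMS-style least-squares surrogate for clarity, not the paper's exact TLS/EIV formulation, and the function and parameter names are our own assumptions.

```python
import numpy as np

def gcim_penalty_grad(w, alpha=2.0, beta=0.05):
    """Gradient of the illustrative GCIM-style penalty
    sum_i (1 - exp(-|w_i / beta| ** alpha)).

    For small beta the penalty approximates the l0 norm, so this
    gradient attracts small coefficients toward zero while leaving
    large (active) taps essentially untouched.
    """
    a = np.abs(w) / beta
    return (alpha / beta) * np.sign(w) * a ** (alpha - 1.0) * np.exp(-a ** alpha)

def gcim_lms_identify(x, d, order=8, mu=0.01, rho=1e-4):
    """Toy gradient-descent sparse system identification with a
    GCIM penalty term (an LMS-style surrogate of the paper's
    TLS-based cost)."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, most recent first
        e = d[n] - u @ w                   # a-priori error
        w += mu * e * u - rho * gcim_penalty_grad(w)
    return w
```

On a noise-free sparse plant the zero taps are pulled cleanly to zero while the active taps, lying far outside the attraction zone, converge without bias.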
Wind Power Forecasting Methods Based on Deep Learning: A Survey
Accurate wind power forecasting for wind farms can effectively reduce the severe impact on grid operation safety when high-penetration intermittent power sources are connected to the power grid. Aiming to provide reference strategies for researchers as well as practical applications, this paper surveys the literature and analyzes methods of deep learning, reinforcement learning, and transfer learning for wind speed and wind power forecasting. Usually, wind speed and wind power forecasting around a wind farm require predicting the next state from the current one, which is typically achieved from the state of the atmosphere, encompassing nearby atmospheric pressure, temperature, surface roughness, and obstacles. As an effective method of high-dimensional feature extraction, a deep neural network can in theory realize arbitrary nonlinear transformations through proper structural design, such as adding noise to outputs, using evolutionary learning to optimize hidden-layer weights, or shaping the objective function so as to retain information that improves output accuracy while filtering out information that is irrelevant or has little effect on the forecast. The establishment of high-precision wind speed and wind power forecasting models remains a challenge due to the randomness, instantaneity, and seasonal characteristics of wind.
Study of L0-norm constraint normalized subband adaptive filtering algorithm
Limited by a fixed step size and sparsity penalty factor, the conventional sparsity-aware normalized subband adaptive filtering (NSAF)-type algorithms suffer from a trade-off between high filtering accuracy and fast convergence. To deal with this problem, this paper proposes variable step-size L0-norm constraint NSAF algorithms (VSS-L0-NSAFs) for sparse system identification. We first analyze the mean-square-deviation (MSD) behavior of the L0-NSAF algorithm based on a novel recursion form, and derive corresponding expressions for the cases where the background noise variance is available and unavailable, where the correlation degree of the system input is indicated by a scaling parameter r. Based on these derivations, we develop an effective variable step-size scheme by minimizing upper bounds on the MSD under some reasonable assumptions and a lemma. To further improve performance, an effective reset strategy is incorporated into the presented algorithms to handle non-stationary situations. Finally, numerical simulations corroborate that the proposed algorithms achieve better estimation accuracy and tracking capability than existing related algorithms in sparse system identification and adaptive echo cancellation scenarios.
Comment: 15 pages, 15 figures
Mathematics and Digital Signal Processing
Modern computer technology has opened up new opportunities for the development of digital signal processing methods. The applications of digital signal processing have expanded significantly and today include audio and speech processing, sonar, radar, and other sensor array processing, spectral density estimation, statistical signal processing, digital image processing, signal processing for telecommunications, control systems, biomedical engineering, and seismology, among others. This Special Issue is aimed at wide coverage of the problems of digital signal processing, from mathematical modeling to the implementation of problem-oriented systems. The basis of digital signal processing is digital filtering. Wavelet analysis implements multiscale signal processing and is used to solve applied problems of de-noising and compression. Processing of visual information, including image and video processing and pattern recognition, is actively used today in robotic systems and industrial process control. Improving digital signal processing circuits and developing new signal processing systems can improve the technical characteristics of many digital devices. The development of new methods of artificial intelligence, including artificial neural networks and brain-computer interfaces, opens up new prospects for the creation of smart technology. This Special Issue contains the latest technological developments in mathematics and digital signal processing. The stated results are of interest to researchers in the field of applied mathematics and to developers of modern digital signal processing systems.
A New Class of Efficient Adaptive Filters for Online Nonlinear Modeling
Nonlinear models are known to provide excellent performance in real-world applications that often operate in nonideal conditions. However, such applications often require online processing to be performed with limited computational resources. To address this problem, we propose a new class of efficient nonlinear models for online applications. The proposed algorithms are based on linear-in-the-parameters (LIP) nonlinear filters using functional link expansions. In order to make this class of functional link adaptive filters (FLAFs) efficient, we propose low-complexity expansions and frequency-domain adaptation of the parameters. Among this family of algorithms, we also define the partitioned-block frequency-domain FLAF (FD-FLAF), whose implementation is particularly suitable for online nonlinear modeling problems. We assess and compare FD-FLAFs with different expansions, providing the best possible tradeoff between performance and computational complexity. Experimental results prove that the proposed algorithms can be considered an efficient and effective solution for online applications, such as acoustic echo cancellation, even in the presence of adverse nonlinear conditions and with limited availability of computational resources.
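The core idea of a functional link expansion is to map each input sample through a fixed set of nonlinear basis functions and then run an ordinary linear adaptive filter on the expanded vector. The sketch below uses the classic trigonometric expansion with a time-domain NLMS update for clarity; the paper's low-complexity expansions and frequency-domain (FD-FLAF) adaptation are not reproduced here, and the expansion order P and inclusion of the linear term are illustrative choices.

```python
import numpy as np

def trig_flaf_expand(u, P=2):
    """Trigonometric functional link expansion of an input buffer u:
    the linear term plus {sin(p*pi*x), cos(p*pi*x)} for p = 1..P,
    giving a linear-in-the-parameters nonlinear model."""
    feats = [u]
    for p in range(1, P + 1):
        feats.append(np.sin(p * np.pi * u))
        feats.append(np.cos(p * np.pi * u))
    return np.concatenate(feats)

def flaf_nlms(x, d, order=4, P=2, mu=0.5, eps=1e-8):
    """NLMS adaptation on the functionally expanded regressor
    (a time-domain stand-in for the frequency-domain FD-FLAF)."""
    w = np.zeros(order * (2 * P + 1))
    for n in range(order - 1, len(x)):
        g = trig_flaf_expand(x[n - order + 1:n + 1][::-1], P)
        e = d[n] - g @ w
        w += mu * e * g / (g @ g + eps)
    return w
```

Since the model stays linear in its parameters, any linear adaptive algorithm (LMS, NLMS, frequency-domain variants) applies unchanged to the expanded regressor, which is what makes the FLAF family attractive for online use.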
Wind generation forecasting methods and proliferation of artificial neural network:A review of five years research trend
Sustaining a clean environment by reducing fossil-fuel-based energy and increasing the integration of renewable energy sources, i.e., wind and solar power, has become national policy in many countries. The increasing demand for renewable energy sources, such as wind, has created interest in the economic and technical issues related to their integration into power grids. Because wind is intermittent by nature, wind generation forecasting is a crucial aspect of ensuring optimum grid control and design in power plants. Accurate forecasting provides essential information that empowers grid operators and system designers to plan an optimal wind power plant and to balance power supply and demand. In this paper, we present an extensive review of wind forecasting methods and of the prolific role of artificial neural networks (ANN) in this regard. The instruments and methods used for wind measurement and assimilation are analyzed and discussed across studies published from May 1st, 2014 to May 1st, 2018. The results of the review demonstrate the increased application of ANN to wind power generation forecasting. Considering the component limitations of other systems, the trend of deploying the ANN and its hybrid systems is more attractive than other individual methods. The review further reveals that high forecasting accuracy can be achieved through proper handling and calibration of the wind-forecasting instruments and methods.