
    Self-organizing nonlinear output (SONO): A neural network suitable for cloud patch-based rainfall estimation at small scales

    Accurate measurement of rainfall distribution at various spatial and temporal scales is crucial for hydrological modeling and water resources management. In the satellite rainfall estimation literature, many efforts have been made to calibrate a statistical relationship (threshold, linear, or nonlinear) between cloud infrared (IR) brightness temperatures and surface rain rates (RR). In this study, an automated neural network for cloud patch-based rainfall estimation, named the self-organizing nonlinear output (SONO) model, is developed to account for the high variability of cloud-rainfall processes at geostationary scales (i.e., 4 km and every 30 min). Instead of calibrating a single IR-RR function for all clouds, SONO classifies cloud patches into different clusters and then searches for a nonlinear IR-RR mapping function for each cluster. This feature enables SONO to generate different rain rates at a given brightness temperature and variable rain/no-rain IR thresholds for different cloud types, overcoming the one-to-one mapping limitation of a single statistical IR-RR function for the full spectrum of cloud-rainfall conditions. In addition, the computational and modeling strengths of neural networks enable SONO to cope with the nonlinearity of cloud-rainfall relationships by fusing multisource data sets. Evaluated at various temporal and spatial scales, SONO improves estimation accuracy, both in rain intensity and in detection of rain/no-rain pixels. Further examination of SONO's adaptability demonstrates its potential as an operational satellite rainfall estimation system that uses passive microwave rainfall observations from low-orbiting satellites to adjust IR-based rainfall estimates at the resolution of geostationary satellites. Copyright 2005 by the American Geophysical Union.
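    The cluster-then-map idea behind this abstract can be sketched in a few lines. This is a minimal illustration, not the actual SONO model: a plain k-means loop and a per-cluster quadratic fit stand in for SONO's self-organizing map and neural output layer, and the function names, feature layout, and synthetic data are assumptions for the sketch.

```python
import numpy as np

def fit_patch_models(features, tb, rr, n_clusters=3, n_iter=20):
    """Cluster cloud-patch feature vectors, then fit one nonlinear
    Tb -> rain-rate curve per cluster (k-means + quadratic fit as a
    stand-in for SONO's SOM + neural output layer)."""
    # Deterministic init: spread the initial centers across the data.
    idx = np.linspace(0, len(features) - 1, n_clusters).astype(int)
    centers = features[idx].astype(float)
    for _ in range(n_iter):  # plain Lloyd iterations
        d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = features[labels == c].mean(axis=0)
    # One IR-RR curve per cluster: the same brightness temperature can
    # map to different rain rates in different cloud regimes.
    coeffs = {c: np.polyfit(tb[labels == c], rr[labels == c], deg=2)
              for c in range(n_clusters) if (labels == c).sum() >= 3}
    return centers, coeffs

def predict_rr(tb_value, patch_feature, centers, coeffs):
    """Route a patch to its nearest cluster, then evaluate that
    cluster's IR-RR curve; rain rate is clipped at zero."""
    c = int(((patch_feature - centers) ** 2).sum(-1).argmin())
    return max(0.0, float(np.polyval(coeffs[c], tb_value)))
```

    The point of the structure is visible in use: two patches with the same brightness temperature but different cluster assignments receive different rain-rate estimates, which a single global IR-RR function cannot do.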

    Recurrence-based time series analysis by means of complex network methods

    Complex networks are an important paradigm of modern complex systems science that allows the structural properties of systems composed of interacting entities to be assessed quantitatively. In recent years, intensive effort has gone into applying network-based concepts to the analysis of dynamically relevant higher-order statistical properties of time series. Notably, many of the corresponding approaches are closely related to the concept of recurrence in phase space. In this paper, we review recent methodological advances in time series analysis based on complex networks, with special emphasis on methods founded on recurrence plots. The potentials and limitations of the individual methods are discussed and illustrated for paradigmatic examples of dynamical systems as well as for real-world time series. Complex network measures are shown to provide information about structural features of dynamical systems that is complementary to that obtained by other methods of time series analysis and hence substantially enriches the knowledge gathered from existing (linear as well as nonlinear) approaches. Comment: To be published in International Journal of Bifurcation and Chaos (2011).
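    The common core of recurrence-based network methods is simple to sketch: delay-embed the series, threshold pairwise phase-space distances into a binary recurrence matrix, and read that matrix as the adjacency matrix of a network whose measures (degree, clustering, etc.) can then be computed. The function name and parameter choices below are illustrative, not taken from the paper.

```python
import numpy as np

def recurrence_network(series, dim=2, delay=1, eps=0.5):
    """Turn a scalar time series into an epsilon-recurrence network.

    Each delay-embedded state vector becomes a node; two nodes are
    linked whenever their phase-space distance falls below eps, i.e.
    the adjacency matrix is the recurrence plot minus its diagonal.
    """
    n = len(series) - (dim - 1) * delay
    # Time-delay embedding: row i is (x_i, x_{i+delay}, ..., x_{i+(dim-1)*delay}).
    states = np.column_stack([series[k * delay : k * delay + n]
                              for k in range(dim)])
    # Pairwise distances -> binary recurrence matrix.
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    A = (dists < eps).astype(int)
    np.fill_diagonal(A, 0)  # the diagonal of a recurrence plot is trivial
    return A

# A periodic signal yields a recurrence network whose nodes revisit
# the same phase-space neighborhoods.
x = np.sin(np.linspace(0.0, 8.0 * np.pi, 200))
A = recurrence_network(x, dim=2, delay=4, eps=0.3)
degrees = A.sum(axis=1)  # degree: the simplest complex-network measure
```

    From the same adjacency matrix one can go on to compute clustering coefficients, path lengths, or centralities, which is where the complementary structural information discussed in the paper comes from.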

    Dynamic selection and estimation of the digital predistorter parameters for power amplifier linearization

    © 2020 IEEE. This paper presents a new technique that dynamically estimates and updates the coefficients of a digital predistorter (DPD) for power amplifier (PA) linearization. The proposed technique is dynamic in the sense that, at every iteration of the coefficient update, it estimates only the minimum number of parameters needed according to a criterion based on the residual estimation error. In the first step, the original basis functions defining the DPD in the forward path are orthonormalized for DPD adaptation in the feedback path by means of a precalculated principal component analysis (PCA) transformation. The robustness and reliability of this precalculated PCA transformation (i.e., a PCA transformation matrix obtained offline and only once) are tested and verified. In the second step, a suitably modified partial least squares (PLS) method, named dynamic partial least squares (DPLS), is applied to obtain the minimum and most relevant transformed components required for updating the coefficients of the DPD linearizer. Combining the PCA transformation with the DPLS extraction of components is equivalent to a canonical correlation analysis (CCA) updating solution, which is optimal in the sense of generating components with maximum correlation (rather than maximum covariance, as with DPLS extraction alone). The proposed dynamic extraction technique is evaluated and compared, in terms of computational cost and performance, with the commonly used QR decomposition approach for solving the least squares (LS) problem.
    Experimental results show that the proposed method (i.e., combining PCA with DPLS) drastically reduces the number of DPD coefficients to be estimated while maintaining the same linearization performance.
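    The two-step structure (offline orthonormalizing transform, then estimation of only as many components as the residual error requires) can be sketched as follows. This is a greedy stand-in for the paper's DPLS, not its actual algorithm: the SVD right singular vectors play the role of the precalculated PCA matrix (centering is skipped so the transformed columns stay exactly orthogonal), and the basis functions and names are assumptions for the sketch.

```python
import numpy as np

def pca_transform(U):
    """Offline step: precompute an orthonormalizing transform for the
    DPD basis matrix U (columns = basis functions evaluated on the
    signal). The columns of U @ V are mutually orthogonal."""
    _, _, Vt = np.linalg.svd(U, full_matrices=False)
    return Vt.T

def dynamic_ls_update(U, e, V, tol=1e-3):
    """Online step: estimate DPD coefficients using only as many
    transformed components as needed to push the relative residual
    estimation error below tol."""
    Z = U @ V                        # orthogonal transformed basis
    w = np.zeros(Z.shape[1])
    r = e.astype(float).copy()
    used = 0
    for k in range(Z.shape[1]):
        zk = Z[:, k]
        w[k] = zk @ r / (zk @ zk)    # one-component least-squares step
        r = r - w[k] * zk            # deflate the residual
        used = k + 1
        if np.linalg.norm(r) < tol * np.linalg.norm(e):
            break                    # residual criterion met: stop early
    # Map the reduced solution back to the original forward-path basis.
    return V[:, :used] @ w[:used], used
```

    Because the transformed columns are orthogonal, each one-component step is an exact least-squares update on that component, so stopping early simply truncates the expansion; this is the sense in which the number of estimated parameters adapts to the residual error.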

    Stochastic Optimization for Deep CCA via Nonlinear Orthogonal Iterations

    Deep CCA is a recently proposed deep neural network extension of traditional canonical correlation analysis (CCA) and has been successful for multi-view representation learning in several domains. However, stochastic optimization of the deep CCA objective is not straightforward, because the objective does not decouple over training examples. Previous optimizers for deep CCA are either batch algorithms or stochastic optimizers that use large minibatches, which can have high memory consumption. In this paper, we tackle stochastic optimization of deep CCA with small minibatches, based on an iterative solution to the CCA objective, and show that we can match the performance of previous optimizers while alleviating the memory requirement. Comment: In the 2015 Annual Allerton Conference on Communication, Control, and Computing.
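    For reference, the objective being extended is classical linear CCA, which has a closed form: whiten each view's covariance and take the SVD of the cross-covariance, whose singular values are the canonical correlations. Deep CCA replaces the identity feature maps below with neural networks but maximizes the same quantity, and it is this whitening coupling across examples that makes minibatch optimization non-trivial. The function name and the small ridge term are assumptions of this sketch.

```python
import numpy as np

def linear_cca(X, Y, reg=1e-4):
    """Canonical correlations of two views via whitened
    cross-covariance SVD. A small ridge (reg) keeps the Cholesky
    factorizations well conditioned."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Lx = np.linalg.cholesky(Cxx)   # Cxx = Lx Lx^T
    Ly = np.linalg.cholesky(Cyy)
    # Whitened cross-covariance T = Lx^{-1} Cxy Ly^{-T}; its singular
    # values are the canonical correlations.
    T = np.linalg.solve(Lx, np.linalg.solve(Ly, Cxy.T).T)
    return np.linalg.svd(T, compute_uv=False)
```

    Note that the covariance matrices, and hence the whitening, depend on the whole dataset; a small minibatch gives a biased estimate of the objective, which is the difficulty the paper's nonlinear orthogonal iterations address.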
