
    Optimal Controller and Filter Realisations using Finite-precision, Floating-point Arithmetic

    The problem of reducing the fragility of digital controllers and filters implemented using finite-precision, floating-point arithmetic is considered. Floating-point arithmetic parameter uncertainty is multiplicative, unlike parameter uncertainty resulting from fixed-point arithmetic. Based on first-order eigenvalue sensitivity analysis, an upper bound on the eigenvalue perturbations is derived. Consequently, open-loop and closed-loop eigenvalue sensitivity measures are proposed. These measures are dependent upon the filter/controller realization. Problems of obtaining the optimal realization with respect to both the open-loop and the closed-loop eigenvalue sensitivity measures are posed. The problem for the open-loop case is completely solved. Solutions for the closed-loop case are obtained using non-linear programming. The problems are illustrated with a numerical example.
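    The first-order sensitivity analysis referred to above can be illustrated with a short numerical sketch. The measure below is a hypothetical stand-in (a Frobenius norm of the eigenvalue gradients weighted by the entries of A, reflecting multiplicative floating-point uncertainty), not the paper's exact open-loop or closed-loop measure; the function name is made up for illustration.

```python
# A minimal sketch (not the paper's exact measure) of first-order eigenvalue
# sensitivity of a state-space matrix A under multiplicative (floating-point)
# parameter perturbations a_ij -> a_ij * (1 + delta_ij).
import numpy as np

def eigenvalue_sensitivity_measure(A):
    """Per-eigenvalue first-order sensitivities to relative perturbations of A.

    For eigenvalue lambda_k with right/left eigenvectors x_k, y_k,
    d(lambda_k)/d(a_ij) = conj(y_k[i]) * x_k[j] / (y_k^H x_k).
    A multiplicative perturbation contributes a_ij * delta_ij, so each
    gradient entry is weighted by the corresponding a_ij.
    """
    eigvals, X = np.linalg.eig(A)      # right eigenvectors are the columns of X
    Y = np.linalg.inv(X).conj().T      # left eigenvectors (assumes A diagonalizable)
    sensitivities = []
    for k in range(len(eigvals)):
        x, y = X[:, k], Y[:, k]
        grad = np.outer(y.conj(), x) / (y.conj() @ x)   # d(lambda_k)/dA
        sensitivities.append(np.linalg.norm(grad * A))  # weight by a_ij
    return eigvals, np.array(sensitivities)

# Two similar realizations A and T^{-1} A T generally give different
# sensitivity values, which is what the optimal-realization problem exploits.
```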

    Convergence Theory of Learning Over-parameterized ResNet: A Full Characterization

    The ResNet structure has achieved great empirical success since its debut. Recent work established the convergence of learning an over-parameterized ResNet with a scaling factor $\tau = 1/L$ on the residual branch, where $L$ is the network depth. However, it is not clear how learning behaves for other values of $\tau$. In this paper, we fully characterize the convergence theory of gradient descent for learning an over-parameterized ResNet with different values of $\tau$. Specifically, hiding logarithmic factors and constant coefficients, we show that for $\tau \le 1/\sqrt{L}$ gradient descent is guaranteed to converge to the global minimum, and in particular when $\tau \le 1/L$ the convergence is independent of the network depth. Conversely, we show that for $\tau > L^{-1/2+c}$ the forward output grows at least at rate $L^c$ in expectation, and learning then fails because of gradient explosion for large $L$. This means the bound $\tau \le 1/\sqrt{L}$ is sharp for learning ResNet with arbitrary depth. To the best of our knowledge, this is the first work that studies learning ResNet over the full range of $\tau$. Comment: 31 pages
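    The role of the scaling factor can be illustrated with a toy experiment. The block below is a simplified sketch (one linear layer per residual branch, random initialization), not the paper's architecture or analysis; the names ScaledResBlock and output_norm are made up for illustration.

```python
# A toy illustration of how the residual scaling factor tau controls
# forward-output growth with depth L.
import torch
import torch.nn as nn

class ScaledResBlock(nn.Module):
    def __init__(self, width, tau):
        super().__init__()
        self.tau = tau
        self.fc = nn.Linear(width, width)

    def forward(self, x):
        # x_{l+1} = x_l + tau * relu(W_l x_l)
        return x + self.tau * torch.relu(self.fc(x))

def output_norm(depth, tau, width=64):
    torch.manual_seed(0)
    blocks = nn.Sequential(*[ScaledResBlock(width, tau) for _ in range(depth)])
    x = torch.randn(1, width)
    with torch.no_grad():
        return blocks(x).norm().item()

# With tau = 1/sqrt(L) (or smaller) the output norm stays bounded as depth
# grows, while tau = L^{-1/2 + c} with c > 0 lets it grow roughly like L^c.
for L in (16, 64, 256):
    print(L, output_norm(L, tau=L ** -0.5), output_norm(L, tau=L ** -0.25))
```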

    Assessment of density functional methods with correct asymptotic behavior

    Long-range corrected (LC) hybrid functionals and asymptotically corrected (AC) model potentials are two distinct density functional methods with correct asymptotic behavior. They are known to be accurate for properties that are sensitive to the asymptote of the exchange-correlation potential, such as the highest occupied molecular orbital energies and Rydberg excitation energies of molecules. To provide a comprehensive comparison, we investigate the performance of the two schemes and others on a very wide range of applications, including the asymptote problems, self-interaction-error problems, energy-gap problems, charge-transfer problems, and many others. The LC hybrid scheme is shown to consistently outperform the AC model potential scheme. In addition, to be consistent with the molecules collected in the IP131 database [Y.-S. Lin, C.-W. Tsai, G.-D. Li, and J.-D. Chai, J. Chem. Phys., 2012, 136, 154109], we expand the EA115 and FG115 databases to include, respectively, the vertical electron affinities and fundamental gaps of the additional 16 molecules, and develop a new database AE113 (113 atomization energies), consisting of accurate reference values for the atomization energies of the 113 molecules in IP131. These databases will be useful for assessing the accuracy of density functional methods. Comment: accepted for publication in Phys. Chem. Chem. Phys., 46 pages, 4 figures, supplementary material included
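    For reference, the quantities collected in these benchmark databases are simple differences of total energies. The sketch below gives the standard definitions only; all energy values are assumed to come from an external electronic-structure calculation, and the function names are illustrative.

```python
# Standard definitions of the benchmark quantities as total-energy differences.

def vertical_ip(E_neutral, E_cation):
    # IP = E(N-1 electrons) - E(N electrons), both at the neutral geometry
    return E_cation - E_neutral

def vertical_ea(E_neutral, E_anion):
    # EA = E(N electrons) - E(N+1 electrons), both at the neutral geometry
    return E_neutral - E_anion

def fundamental_gap(E_neutral, E_cation, E_anion):
    # Fundamental gap Eg = IP - EA
    return vertical_ip(E_neutral, E_cation) - vertical_ea(E_neutral, E_anion)

def atomization_energy(E_molecule, atom_energies):
    # AE = sum of isolated-atom energies minus the molecular energy
    return sum(atom_energies) - E_molecule
```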