5,901 research outputs found

    On local stabilities of p-Kähler structures

    By use of a natural extension map and a power series method, we obtain a local stability theorem for $p$-Kähler structures with the $(p,p+1)$-th mild $\partial\bar\partial$-lemma under small differentiable deformations. Comment: Several typos have been fixed. Final version to appear in Compositio Mathematica. arXiv admin note: text overlap with arXiv:1609.0563
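    For context, the classical $\partial\bar\partial$-lemma on a compact Kähler manifold, of which the "$(p,p+1)$-th mild" hypothesis above is a weakened, degree-restricted variant, reads as follows (background only, not a result of the paper):

        \[
          \alpha \ \text{$d$-closed and $\partial$-exact}
          \;\Longrightarrow\;
          \alpha = \partial\bar\partial\beta \ \text{for some form } \beta .
        \]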

    Geometry of logarithmic forms and deformations of complex structures

    We present a new method to solve certain $\bar{\partial}$-equations for logarithmic differential forms by using harmonic integral theory for currents on Kähler manifolds. The result can be considered as a $\bar{\partial}$-lemma for logarithmic forms. As applications, we generalize Deligne's result on the closedness of logarithmic forms, and give geometric and simpler proofs of Deligne's degeneracy theorem for the logarithmic Hodge-to-de Rham spectral sequence at the $E_1$-level, as well as of a certain injectivity theorem on compact Kähler manifolds. Furthermore, for a family of logarithmic deformations of complex structures on Kähler manifolds, we construct an extension of any logarithmic $(n,q)$-form on the central fiber and thus deduce the local stability of log Calabi-Yau structures by extending an iteration method to logarithmic forms. Finally, we prove the unobstructedness of the deformations of a log Calabi-Yau pair and of a pair on a Calabi-Yau manifold by a differential-geometric method. Comment: Several typos have been fixed. Final version to appear in Journal of Algebraic Geometry
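    Deligne's degeneracy theorem cited above is the standard statement that, for a compact Kähler (originally smooth projective) manifold $X$ and a simple normal crossing divisor $D$, the logarithmic Hodge-to-de Rham spectral sequence

        \[
          E_1^{p,q} = H^q\!\bigl(X, \Omega_X^p(\log D)\bigr)
          \;\Longrightarrow\;
          H^{p+q}(X \setminus D, \mathbb{C})
        \]

    degenerates at the $E_1$-page; it is recalled here only as background for the abstract's terminology.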

    Parameter incremental learning algorithm for neural networks

    In this dissertation, a novel training algorithm for neural networks, named Parameter Incremental Learning (PIL), is proposed, developed, analyzed, and numerically validated.

    The main idea of the PIL algorithm is based on the essence of incremental supervised learning: the learning algorithm, i.e., the update law for the network parameters, should not only adapt to the newly presented input-output training pattern but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly derived, using a first-order approximation technique, with appropriate measures of the performance of preservation and adaptation. The PIL algorithms for the Multi-Layer Perceptron (MLP) are subsequently derived by applying the general PIL algorithm, augmented with an extra fictitious input to each neuron. The critical point in obtaining an analytical solution of the PIL algorithm for the MLP is to apply the general PIL algorithm at the neuron level rather than at the global network level. The PIL algorithm is fundamentally a stochastic, or on-line, learning algorithm, since it adapts the network weights each time a new training pattern is presented.

    An extensive numerical study of the newly developed PIL algorithm for the MLP is conducted, mainly by comparing it with the standard on-line Back-Propagation (BP) algorithm. The benchmark problems include function approximation, classification, dynamic system modeling, and neural control. To further evaluate the performance of the proposed PIL algorithm, a comparison with another well-known simplified high-order algorithm, the Stochastic Diagonal Levenberg-Marquardt (SDLM) algorithm, is also conducted.

    In all the numerical studies, the new algorithm proves markedly superior to the standard on-line BP algorithm and to the SDLM algorithm in terms of (1) convergence speed, (2) the chance of escaping plateau regions, a frequently encountered problem with the standard BP algorithm, and (3) the chance of finding a better solution.

    Unlike other advanced or high-order learning algorithms, the PIL algorithm is computationally as simple as the standard on-line BP algorithm. It is also simple to use since, like the standard BP algorithm, only a single parameter, the learning rate, needs to be tuned. In fact, the PIL algorithm amounts to a minor modification of the standard on-line BP algorithm, so it can be applied in any situation where the standard on-line BP algorithm is applicable. It can also replace a standard on-line BP algorithm already in use to obtain better performance, even without re-tuning the learning rate. The PIL algorithm thus has the potential to replace the standard BP algorithm and may become another standard stochastic (on-line) learning algorithm for the MLP, owing to its distinctive features.
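    The abstract does not give the PIL update law itself, but the per-pattern ("on-line") training structure it builds on is standard. The following is a minimal sketch of the baseline on-line BP update for a one-hidden-layer MLP; the layer sizes, learning rate, and XOR task are illustrative assumptions, and PIL would keep this per-pattern loop while replacing the weight-update law with one balancing adaptation to the new pattern against preservation of prior results. Bias terms are omitted; the abstract's "extra fictitious input to the neuron" plays that role.

        import numpy as np

        # Baseline on-line (stochastic) back-propagation for a one-hidden-layer
        # MLP -- the standard algorithm PIL is compared against. PIL keeps this
        # per-pattern structure; its modified update law is not specified in the
        # abstract, so only the baseline is sketched here.

        rng = np.random.default_rng(0)
        n_in, n_hid, n_out = 2, 8, 1  # illustrative sizes, not from the source
        W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
        W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
        lr = 0.1  # the single tunable parameter, as in standard on-line BP

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def train_pattern(x, y):
            """One on-line step: adapt the weights to a single (x, y) pattern."""
            global W1, W2
            h = sigmoid(W1 @ x)                      # hidden activations
            out = sigmoid(W2 @ h)                    # network output
            delta2 = (out - y) * out * (1 - out)     # output-layer delta
            delta1 = (W2.T @ delta2) * h * (1 - h)   # back-propagated delta
            W2 -= lr * np.outer(delta2, h)           # per-pattern weight update
            W1 -= lr * np.outer(delta1, x)

        # Toy usage: present XOR patterns one at a time, updating after each.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        Y = np.array([[0], [1], [1], [0]], dtype=float)
        for epoch in range(5000):
            for x, y in zip(X, Y):
                train_pattern(x, y)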

    Direct writing of 40-nm features inside fused silica glass with oscillator ultrafast lasers

    Using ultrafast oscillator lasers (less than 1 nJ/pulse, 80 MHz repetition rate), we propose fabricating features smaller than 40 nm inside UV-transparent materials such as fused silica and quartz. The low damage threshold demonstrated here could lower the cost of the lasers required and improve the throughput of laser machining, owing to the quasi-CW nature of the laser used. Our initial results show that damage is observed at pulse energies as low as 1 nJ measured before the UV objective, with feature sizes currently below 1 micron.