
    Extensions by Antiderivatives, Exponentials of Integrals and by Iterated Logarithms

    Let F be a characteristic zero differential field with an algebraically closed field of constants, let E be a no-new-constants extension of F by antiderivatives of F, and let y_1, ..., y_n be antiderivatives of E. The antiderivatives y_1, ..., y_n of E are called J-I-E antiderivatives if the derivatives of y_i in E satisfy certain conditions. We will discuss a new proof of the Kolchin-Ostrowski theorem, generalize this theorem to a tower of extensions by J-I-E antiderivatives, and use this generalized version of the theorem to classify the finitely differentially generated subfields of this tower. In the process, we will show that the J-I-E antiderivatives are algebraically independent over the ground differential field. An example of a J-I-E tower is the tower of extensions by iterated logarithms. We will discuss the normality of extensions by iterated logarithms and produce an algorithm to compute their finitely differentially generated subfields.
    Comment: 66 pages, 1 figure
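    To make the iterated-logarithm example concrete, here is a short sketch of that tower over F = C(x) with derivation d/dx (the notation l_k is ours, not the paper's):

    ```latex
    % Iterated logarithms as a tower of antiderivative extensions of F = C(x).
    % Set l_1 = \log x and l_{k+1} = \log l_k; then
    \[
      l_1' = \frac{1}{x}, \qquad
      l_{k+1}' = \frac{l_k'}{l_k} = \frac{1}{x\, l_1 l_2 \cdots l_k},
    \]
    % so each l_{k+1} is an antiderivative of an element of F(l_1, \dots, l_k),
    % giving the tower F \subseteq F(l_1) \subseteq F(l_1, l_2) \subseteq \cdots
    ```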

    Iterated Antiderivative Extensions

    Let F be a characteristic zero differential field with an algebraically closed field of constants and let E be a no-new-constants extension of F. We say that E is an iterated antiderivative extension of F if E is a Liouvillian extension of F obtained by adjoining antiderivatives alone. In this article, we will show that if E is an iterated antiderivative extension of F and K is a differential subfield of E that contains F, then K is an iterated antiderivative extension of F.
    Comment: 15 pages, 0 figures
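    A small illustrative instance of the theorem (our choice of generators, not an example taken from the paper):

    ```latex
    % Over F = \mathbb{C}(x) with derivation ' = d/dx, adjoin two antiderivatives:
    \[
      E = F(y_1, y_2), \qquad y_1' = \frac{1}{x}, \quad y_2' = \frac{1}{x - 1},
    \]
    % i.e. E = \mathbb{C}(x, \log x, \log(x - 1)). The theorem asserts that every
    % differential subfield K with F \subseteq K \subseteq E is itself generated
    % over F by antiderivatives, e.g. K = F(y_1 + y_2), since
    % (y_1 + y_2)' = \frac{1}{x} + \frac{1}{x - 1} \in F.
    ```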

    Light Field Blind Motion Deblurring

    We study the problem of deblurring light fields of general 3D scenes captured under 3D camera motion and present both theoretical and practical contributions. By analyzing the motion-blurred light field in the primal and Fourier domains, we develop intuition into the effects of camera motion on the light field, show the advantages of capturing a 4D light field instead of a conventional 2D image for motion deblurring, and derive simple methods of motion deblurring in certain cases. We then present an algorithm to blindly deblur light fields of general scenes without any estimation of scene geometry, and demonstrate that we can recover both the sharp light field and the 3D camera motion path of real and synthetically blurred light fields.
    Comment: To be presented at CVPR 2017
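    As a rough illustration of the forward model, here is a minimal sketch that simulates motion blur as an average over a camera path, using a single 2D view as a stand-in for the paper's full 4D light-field formulation (the function names, path, and this simplification are ours):

    ```python
    # Minimal sketch: motion blur as the average of translated copies of a
    # sharp view along a camera path (2D simplification, not the 4D model).
    import numpy as np
    from scipy.ndimage import shift

    def motion_blur(sharp, path):
        """Average translated copies of `sharp` over the camera `path`.

        sharp: 2D array (a single light-field sub-aperture view).
        path:  iterable of (dy, dx) camera offsets in pixels.
        """
        acc = np.zeros_like(sharp, dtype=float)
        for dy, dx in path:
            acc += shift(sharp, (dy, dx), order=1, mode="nearest")
        return acc / len(path)

    # Example: linear in-plane camera motion sampled at 8 time instants.
    rng = np.random.default_rng(0)
    sharp = rng.random((64, 64))
    path = [(0.0, t) for t in np.linspace(0.0, 5.0, 8)]
    blurred = motion_blur(sharp, path)
    ```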

    On the Analysis of Trajectories of Gradient Descent in the Optimization of Deep Neural Networks

    Theoretical analysis of the error landscape of deep neural networks has garnered significant interest in recent years. In this work, we theoretically study the importance of noise in the trajectories of gradient descent towards optimal solutions in multi-layer neural networks. We show that adding noise (in different ways) to a neural network while training increases the rank of the product of the weight matrices of a multi-layer linear neural network. We then study how adding noise can help reach a global optimum when the product matrix is full-rank (under certain conditions). We establish theoretical connections between the noise introduced into the neural network (to the gradient, to the architecture, or to the input/output) and the rank of the product of its weight matrices. We corroborate our theoretical findings with empirical results.
    Comment: 4 pages + 1 figure (main, excluding references), 5 pages + 4 figures (appendix)
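    A minimal numerical illustration of the rank claim, assuming a deep linear network whose end-to-end map is the product of its weight matrices (the toy matrices and noise scale below are ours, not the paper's):

    ```python
    # Sketch: perturbing the weights of a deep *linear* network raises the
    # rank of the end-to-end product matrix (illustrative, not the proofs).
    import numpy as np

    rng = np.random.default_rng(0)

    def product_rank(weights):
        prod = weights[0]
        for W in weights[1:]:
            prod = W @ prod
        return np.linalg.matrix_rank(prod)

    # A 3-layer linear network with a rank-1 bottleneck in the middle layer.
    W1 = rng.standard_normal((8, 8))
    W2 = np.outer(rng.standard_normal(8), rng.standard_normal(8))  # rank 1
    W3 = rng.standard_normal((8, 8))
    print(product_rank([W1, W2, W3]))   # 1: the bottleneck caps the rank

    noisy = [W + 0.01 * rng.standard_normal(W.shape) for W in (W1, W2, W3)]
    print(product_rank(noisy))          # 8 almost surely: noise restores full rank
    ```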

    ADINE: An Adaptive Momentum Method for Stochastic Gradient Descent

    Two major momentum-based techniques that have achieved tremendous success in optimization are Polyak's heavy ball method and Nesterov's accelerated gradient. A crucial step in all momentum-based methods is the choice of the momentum parameter m, which is almost always suggested to be set to less than 1. Although the choice of m < 1 is justified only under very strong theoretical assumptions, it works well in practice even when the assumptions do not necessarily hold. In this paper, we propose a new momentum-based method, ADINE, which relaxes the constraint m < 1 and allows the learning algorithm to use adaptive higher momentum. We motivate our hypothesis on m by experimentally verifying that a higher momentum (≥ 1) can help escape saddles much faster. Using this motivation, we propose our method ADINE, which weighs the previous updates more (by setting the momentum parameter > 1). We evaluate our proposed algorithm on deep neural networks and show that ADINE helps the learning algorithm converge much faster without compromising the generalization error.
    Comment: 8 + 1 pages, 12 figures, accepted at CoDS-COMAD 2018
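    A minimal sketch of a heavy-ball update in which the momentum parameter is allowed to reach or exceed 1; the fixed value used here is a stand-in, not ADINE's actual adaptive rule:

    ```python
    # Heavy-ball SGD with a momentum parameter m that may be set >= 1
    # (sketch only; ADINE adapts m during training, which we do not do here).
    import numpy as np

    def sgd_heavy_ball(grad, w0, lr=0.01, m=1.05, steps=100):
        """Heavy-ball update: v <- m*v - lr*grad(w); w <- w + v."""
        w = np.asarray(w0, dtype=float)
        v = np.zeros_like(w)
        for _ in range(steps):
            v = m * v - lr * grad(w)
            w = w + v
        return w

    # Toy quadratic f(w) = 0.5 * ||w||^2, so grad(w) = w. Note that m >= 1
    # can diverge on such convex problems; the paper's motivation is faster
    # escape from saddle points, not convex minimization.
    w_final = sgd_heavy_ball(lambda w: w, w0=[5.0, -3.0], m=0.9)
    ```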

    Primary osteogenic sarcoma of the breast: A case report
