10,745 research outputs found

    A Cost-based Optimizer for Gradient Descent Optimization

    As the use of machine learning (ML) permeates diverse application domains, there is an urgent need to support a declarative framework for ML. Ideally, a user specifies an ML task in a high-level, easy-to-use language and the framework invokes the appropriate algorithms and system configurations to execute it. An important observation towards designing such a framework is that many ML tasks can be expressed as mathematical optimization problems that take a specific form. Furthermore, these optimization problems can be efficiently solved using variations of the gradient descent (GD) algorithm. Thus, to decouple a user's specification of an ML task from its execution, a key component is a GD optimizer. We propose a cost-based GD optimizer that selects the best GD plan for a given ML task. To build our optimizer, we introduce a set of abstract operators for expressing GD algorithms and propose a novel approach to estimate the number of iterations a GD algorithm requires to converge. Extensive experiments on real and synthetic datasets show that our optimizer not only chooses the best GD plan but also enables optimizations that achieve orders-of-magnitude performance speed-ups. Comment: Accepted at SIGMOD 2017.
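
    As a concrete illustration of the idea, the sketch below estimates the total cost of each candidate GD plan as (estimated iterations to converge) x (cost per iteration) and picks the cheapest. The plan definitions, cost model, and all numbers are hypothetical placeholders, not the paper's actual operators or estimates.

        # Minimal sketch of cost-based GD plan selection: estimate
        # total cost = (estimated iterations) x (cost per iteration)
        # per candidate plan, then pick the cheapest. All plans and
        # numbers here are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class GDPlan:
            name: str
            rows_per_iter: int   # tuples read per iteration
            est_iterations: int  # estimated iterations to converge

        def est_cost(plan: GDPlan, cost_per_row: float = 1.0) -> float:
            """Total estimated cost of running the plan to convergence."""
            return plan.est_iterations * plan.rows_per_iter * cost_per_row

        def choose_plan(plans):
            """Return the plan with the lowest estimated total cost."""
            return min(plans, key=est_cost)

        if __name__ == "__main__":
            n = 1_000_000  # dataset size (hypothetical)
            candidates = [
                GDPlan("batch GD",      rows_per_iter=n,     est_iterations=100),
                GDPlan("stochastic GD", rows_per_iter=1,     est_iterations=500_000),
                GDPlan("mini-batch GD", rows_per_iter=1_000, est_iterations=5_000),
            ]
            best = choose_plan(candidates)
            print(f"chosen plan: {best.name}, est. cost = {est_cost(best):,.0f}")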

    Neo: A Learned Query Optimizer

    Query optimization is one of the most challenging problems in database systems. Despite the progress made over the past decades, query optimizers remain extremely complex components that require a great deal of hand-tuning for specific workloads and datasets. Motivated by this shortcoming and inspired by recent advances in applying machine learning to data management challenges, we introduce Neo (Neural Optimizer), a novel learning-based query optimizer that relies on deep neural networks to generate query execution plans. Neo bootstraps its query optimization model from existing optimizers and continues to learn from incoming queries, building upon its successes and learning from its failures. Furthermore, Neo naturally adapts to underlying data patterns and is robust to estimation errors. Experimental results demonstrate that Neo, even when bootstrapped from a simple optimizer like PostgreSQL, can learn a model that offers performance similar to state-of-the-art commercial optimizers and in some cases even surpasses them.
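
    To give a rough feel for the learned-optimizer approach, here is a hedged sketch of value-guided plan search: a model scores partial join orders and a greedy search extends the cheapest-looking prefix. The stub "value model" and the toy cost table below are simplifying assumptions, not Neo's actual architecture, featurization, or training procedure.

        # Sketch of learned plan search: a value model scores partial
        # join orders and greedy search picks the next table minimizing
        # the predicted cost. The value model is a stub standing in for
        # a trained neural network (an assumption, not Neo's model).
        JOIN_COST = {("A", "B"): 1.0, ("B", "C"): 2.0, ("A", "C"): 8.0}

        def value_model(partial_plan, remaining):
            """Stub for a learned value network: predicts the best final
            cost reachable from this partial plan. Toy proxy: sum of the
            pairwise join costs already incurred."""
            cost = 0.0
            for a, b in zip(partial_plan, partial_plan[1:]):
                cost += JOIN_COST.get((a, b), JOIN_COST.get((b, a), 10.0))
            return cost

        def greedy_search(tables):
            """Extend the plan one table at a time, guided by the model."""
            plan, remaining = [], set(tables)
            while remaining:
                best = min(sorted(remaining),
                           key=lambda t: value_model(plan + [t], remaining - {t}))
                plan.append(best)
                remaining.remove(best)
            return plan

        if __name__ == "__main__":
            print("join order:", greedy_search(["A", "B", "C"]))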

    Blow-up profile of rotating 2D focusing Bose gases

    We consider the Gross-Pitaevskii equation describing an attractive Bose gas trapped to a quasi-2D layer by means of a purely harmonic potential, and which rotates at a fixed speed of rotation $\Omega$. First we study the behavior of the ground state when the coupling constant approaches $a_*$, the critical strength of the cubic nonlinearity for the focusing nonlinear Schrödinger equation. We prove that blow-up always happens at the center of the trap, with the blow-up profile given by the Gagliardo-Nirenberg solution. In particular, the blow-up scenario is independent of $\Omega$, to leading order. This generalizes results obtained by Guo and Seiringer (Lett. Math. Phys., 2014, vol. 104, pp. 141-156) in the non-rotating case. In a second part we consider the many-particle Hamiltonian for $N$ bosons, interacting with a potential rescaled in the mean-field manner $-a_N N^{2\beta-1} w(N^{\beta} x)$, with $w$ a positive function such that $\int_{\mathbb{R}^2} w(x)\,dx = 1$. Assuming that $\beta < 1/2$ and that $a_N \to a_*$ sufficiently slowly, we prove that the many-body system is fully condensed on the Gross-Pitaevskii ground state in the limit $N \to \infty$.
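
    For reference, the critical strength $a_*$ admits a standard characterization via the Gagliardo-Nirenberg inequality; the lines below state it in the usual convention for the 2D focusing cubic NLS (the normalization is an assumption here, as the abstract does not fix one).

        % Standard characterization of the critical coupling a_*
        % (usual 2D cubic NLS convention; normalization assumed,
        % not taken from the paper). Q > 0 is the unique positive
        % radial solution of
        \[
          -\Delta Q + Q - Q^3 = 0 \quad \text{in } \mathbb{R}^2,
        \]
        % and a_* equals its squared L^2 mass:
        \[
          a_* = \|Q\|_{L^2(\mathbb{R}^2)}^2 .
        \]
        % Equivalently, Q optimizes the Gagliardo--Nirenberg inequality
        \[
          \int_{\mathbb{R}^2} |u|^4 \,dx
          \le \frac{2}{\|Q\|_{L^2}^2}
              \Big( \int_{\mathbb{R}^2} |\nabla u|^2 \,dx \Big)
              \Big( \int_{\mathbb{R}^2} |u|^2 \,dx \Big).
        \]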

    A Multi Hidden Recurrent Neural Network with a Modified Grey Wolf Optimizer

    Identifying university students' weaknesses results in better learning and can function as an early warning system that enables students to improve. However, the prediction accuracy of existing systems remains unsatisfactory, so new, dynamic hybrid systems are needed for this task. A hybrid system (a modified Recurrent Neural Network with an adapted Grey Wolf Optimizer) is used to forecast students' outcomes. The proposed system would improve instruction by the faculty and enhance the students' learning experiences. The results show that the modified recurrent neural network with an adapted Grey Wolf Optimizer achieves the best accuracy when compared with other models. Comment: 34 pages, published in PLoS ONE.
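
    For orientation, the sketch below shows the standard (unmodified) Grey Wolf Optimizer tuning a weight vector against a black-box loss, which is how such a hybrid can train a network without gradients. The paper's specific GWO modifications and RNN architecture are not reproduced, and the toy fitness function is an assumption.

        # Standard Grey Wolf Optimizer (not the paper's modified variant)
        # minimizing a black-box loss over a weight vector.
        import numpy as np

        def gwo(loss, dim, n_wolves=20, iters=200, lb=-1.0, ub=1.0, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(lb, ub, size=(n_wolves, dim))  # wolf positions
            for t in range(iters):
                fitness = np.array([loss(x) for x in X])
                order = np.argsort(fitness)
                # alpha, beta, delta = the three best wolves this round
                alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
                a = 2.0 * (1 - t / iters)  # control parameter: 2 -> 0
                for i in range(n_wolves):
                    X_new = np.zeros(dim)
                    for leader in (alpha, beta, delta):
                        r1, r2 = rng.random(dim), rng.random(dim)
                        A = 2 * a * r1 - a
                        C = 2 * r2
                        D = np.abs(C * leader - X[i])
                        X_new += leader - A * D
                    # new position: average of the three leader pulls
                    X[i] = np.clip(X_new / 3.0, lb, ub)
            fitness = np.array([loss(x) for x in X])
            return X[np.argmin(fitness)]

        if __name__ == "__main__":
            # toy fitness: squared error of a linear model on fake data
            rng = np.random.default_rng(1)
            A_mat, y = rng.normal(size=(50, 5)), rng.normal(size=50)
            w = gwo(lambda x: np.mean((A_mat @ x - y) ** 2), dim=5)
            print("best loss:", np.mean((A_mat @ w - y) ** 2))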

    Systems and controls laboratory

    Advanced aerospace systems and controls, including thrust modulation, optimizer research, fluidic devices, hydraulic jet valves, and related research.