
    Approximation of functions of finite variation by superpositions of a Sigmoidal function

    The aim of this note is to generalize a result of Barron [1] concerning the approximation of functions, which can be expressed in terms of the Fourier transform, by superpositions of a fixed sigmoidal function. In particular, we consider functions of the type h(x) = ∫_{ℝ^d} ƒ(〈t, x〉) dμ(t), where μ is a finite Radon measure on ℝ^d and ƒ : ℝ → ℂ is a continuous function of bounded variation on ℝ. We show (Theorem 2.6) that these functions can be approximated in the L² norm by elements of the set G_n = {Σ_{i=0}^{n} c_i g(〈a_i, x〉 + b_i) : a_i ∈ ℝ^d, b_i, c_i ∈ ℝ}, where g is a fixed sigmoidal function, with the error estimated by C/n^{1/2}, where C is a positive constant depending only on ƒ. The same result holds true (Theorem 2.9) for ƒ : ℝ → ℂ satisfying a Lipschitz condition, under the additional assumption that ∫_{ℝ^d} ‖t‖ d|μ|(t) < ∞.
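    As a purely illustrative aside (not part of the paper), the following Python sketch approximates a simple bounded-variation target on [-1, 1] by elements of G_n, drawing the inner weights a_i, b_i at random and fitting the outer coefficients c_i by least squares; the reported discrete L² errors can be loosely compared against the O(n^{-1/2}) rate. The target function, weight ranges, and grid are all arbitrary assumptions, not the paper's construction.

    # Illustrative sketch only: approximate a 1-D target by
    # sum_i c_i * g(a_i * x + b_i), with g the logistic sigmoid.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(1)
    x = np.linspace(-1.0, 1.0, 2000)
    target = np.abs(x)                       # a simple function of bounded variation

    for n in (4, 16, 64, 256):
        a = rng.uniform(-10, 10, n)          # random inner weights a_i
        b = rng.uniform(-10, 10, n)          # random shifts b_i
        features = sigmoid(np.outer(x, a) + b)                  # shape (len(x), n)
        c, *_ = np.linalg.lstsq(features, target, rcond=None)   # fit coefficients c_i
        err = np.sqrt(np.mean((features @ c - target) ** 2))    # discrete L2 error
        print(f"n = {n:4d}   L2 error = {err:.4f}")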

    Prediction of unsupported excavations behaviour with machine learning techniques

    Artificial intelligence and machine learning algorithms have attracted increasing interest from the research community, triggering new applications and services in many domains. In geotechnical engineering, for instance, neural networks have been used to benefit from information gained at a given site in order to extract relevant constitutive soil information from field measurements [1]. The goal of this work is to use supervised machine learning techniques to predict the behaviour of a sheet pile wall excavation, i.e. to learn a mapping from the input (excavation depth, soil characteristics, wall stiffness) to a predicted output (wall deflection, soil settlement, wall bending moment) by minimizing a loss function. Neural networks are used for this supervised learning. A neural network is composed of neurons, which apply a mathematical function to their input (see Figure 1, left), and synapses, which feed the output of one neuron into the input of another. For our purpose, neural networks can be understood as a set of nonlinear functions that can be fitted to data by changing their parameters. In this work, a simple class of neural networks, called Multi-Layer Perceptrons (MLPs), is used. They are composed of an input layer of neurons, an output layer, and one or several intermediate layers (hidden layers) (see Figure 1, right). A neural network learns by adjusting its weights and biases so as to minimize a chosen loss function (for instance, the mean squared error) between the desired and the predicted output. Stochastic gradient descent, or one of its variants, is used to adjust the parameters, and the gradients are obtained through backpropagation (an efficient application of the chain rule). The interest in neural networks comes from the fact that they are universal function approximators, in the sense that they can approximate any continuous function to any precision given enough neurons. However, this can lead to over-fitting, where the network learns the noise in the data or, worse, memorizes each sample by rote [2].
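    As a rough illustration of the kind of regression MLP described above (not the authors' actual model, data, or architecture), the following Python sketch trains a small scikit-learn MLP on synthetic stand-in data; the feature names, the data-generating relation, and the network size are all hypothetical assumptions.

    # Illustrative sketch only: a small MLP regressor mapping hypothetical
    # excavation features (depth, soil stiffness, wall stiffness) to a
    # predicted wall deflection, trained by minimizing mean squared error
    # with gradient-based updates obtained via backpropagation.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: 200 samples, 3 input features
    # (excavation depth [m], soil stiffness, wall bending stiffness).
    X = rng.uniform(low=[2.0, 5e3, 1e4], high=[10.0, 5e4, 1e6], size=(200, 3))
    depth, soil_k, wall_ei = X[:, 0], X[:, 1], X[:, 2]
    # Hypothetical relation producing a "wall deflection" target [mm], with noise.
    y = 1e3 * depth**2 / (soil_k**0.5 * wall_ei**0.25) + rng.normal(0, 0.5, 200)

    # One hidden layer of 32 neurons: input layer -> hidden layer -> output layer.
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32,), activation="relu",
                     solver="adam", max_iter=5000, random_state=0),
    )
    model.fit(X, y)
    print("predicted deflection [mm]:", model.predict([[6.0, 2e4, 3e5]]))

    In practice the inputs would come from field or numerical-model data rather than a synthetic formula, and the loss, architecture, and optimizer would be chosen and validated against held-out samples to guard against the over-fitting mentioned above.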

    Approximation paper, part 1

    In this paper we discuss approximations between neural nets, fuzzy expert systems, fuzzy controllers, and continuous processes.
