    Analytical Formulation of the Jacobian Matrix for Non-linear Calculation of the Forced Response of Turbine Blade Assemblies with Wedge Friction Dampers

    A fundamental issue in turbomachinery design is the dynamic stress assessment of turbine blades. In order to reduce stress peaks in the turbine blades at engine orders corresponding to blade natural frequencies, friction dampers are employed. Blade response calculation requires the solution of a set of non-linear equations originated by the introduction of friction damping. Such a set of non-linear equations is solved using the iterative Newton-Raphson method. However, calculating the Jacobian matrix of the system with classical numerical finite-difference schemes makes the frequency-domain solver prohibitively expensive for structures with many contact points. The large computation time results from the evaluation of partial derivatives of the non-linear equations with respect to the displacements. In this work a methodology to compute efficiently the Jacobian matrix of a dynamic system having wedge dampers is presented. It is exact and completely analytical. The proposed method has been successfully applied to a real intermediate pressure turbine (IPT) blade under cyclic symmetry boundary conditions with underplatform wedge dampers. Its implementation proved very effective and achieved significant time savings without loss of precision.
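The cost contrast the abstract describes can be illustrated with a toy Newton-Raphson solver: a finite-difference Jacobian needs one extra residual evaluation per unknown, while an analytical Jacobian is computed directly. This is a minimal sketch with a hypothetical two-unknown residual, not the paper's blade/damper model.

```python
import numpy as np

def residual(x):
    # Toy 2-DOF non-linear balance equations (hypothetical stand-in
    # for the blade/damper contact equations).
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def jacobian_analytical(x):
    # Exact partial derivatives of the residual: no extra
    # residual evaluations needed.
    return np.array([[2.0 * x[0], 1.0],
                     [1.0,        2.0 * x[1]]])

def jacobian_fd(x, h=1e-7):
    # Classical forward finite differences: one additional residual
    # evaluation per unknown, which is what becomes prohibitive for
    # systems with many contact points.
    n = x.size
    r0 = residual(x)
    J = np.empty((n, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (residual(xp) - r0) / h
    return J

def newton(x0, jac, tol=1e-10, max_iter=50):
    # Standard Newton-Raphson iteration x <- x - J(x)^-1 r(x).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(jac(x), r)
    return x

x_exact = newton([1.0, 1.0], jacobian_analytical)
x_fd = newton([1.0, 1.0], jacobian_fd)
```

Both variants converge to the same root; the difference is purely in how many residual evaluations each Jacobian costs per iteration.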

    Numerical Integral Transform Methods for Random Hyperbolic Models with a Finite Degree of Randomness

    This paper deals with the construction of numerical solutions of random hyperbolic models with a finite degree of randomness that make manageable the computation of their expectation and variance. The approach is based on the combination of random Fourier transforms, random Gaussian quadratures and the Monte Carlo method. The recovery of the solution of the original random partial differential problem through the inverse integral transform allows its numerical approximation using Gaussian quadratures involving the evaluation of the solution of the random ordinary differential problem at certain concrete values, which are approximated using the Monte Carlo method. Numerical experiments illustrating the numerical convergence of the method are included.

    This work was partially supported by the Ministerio de Ciencia, Innovación y Universidades Spanish grant MTM2017-89664-P.

    Casabán, M.; Company Rossi, R.; Jódar Sánchez, L. A. (2019). Numerical Integral Transform Methods for Random Hyperbolic Models with a Finite Degree of Randomness. Mathematics, 7(9):1-21. https://doi.org/10.3390/math7090853
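The two approximation ingredients the abstract combines, Gaussian quadrature and Monte Carlo sampling, can be sketched on a toy random ODE x'(t) = -a x(t) with a uniformly distributed coefficient, where the expectation E[x(t)] has a closed form to check against. This is an illustrative assumption, not the paper's hyperbolic model.

```python
import numpy as np

rng = np.random.default_rng(0)

def solution_sample(a, t, x0=1.0):
    # Exact solution of the toy random ODE x'(t) = -a x(t), x(0) = x0,
    # evaluated for realisations of the random coefficient a.
    return x0 * np.exp(-a * t)

def mc_expectation(t, n_samples=200_000):
    # Monte Carlo approximation of E[x(t)] with a ~ Uniform(1, 2).
    a = rng.uniform(1.0, 2.0, size=n_samples)
    return solution_sample(a, t).mean()

def quad_expectation(t, n_nodes=8):
    # Gauss-Legendre quadrature for the same expectation:
    # E[x(t)] = ∫_1^2 e^{-a t} da, nodes mapped from [-1, 1] to [1, 2].
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    a = 1.5 + 0.5 * nodes              # affine map to [1, 2]
    return 0.5 * np.sum(weights * solution_sample(a, t))

t = 1.0
estimate_mc = mc_expectation(t)
estimate_quad = quad_expectation(t)
exact = (np.exp(-t) - np.exp(-2.0 * t)) / t   # closed-form E[e^{-a t}]
```

For this smooth integrand the quadrature converges far faster than Monte Carlo, which is why the paper reserves sampling for the quantities that quadrature cannot reach directly.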

    Symbolic computation for evaluation of measurement uncertainty

    In recent years, with the rapid development of symbolic computation, the integration of symbolic and numeric methods has been increasingly applied in various applications. This paper proposes the use of symbolic computation for the evaluation of measurement uncertainty. The general method and procedure are discussed, and its great potential and powerful features for measurement uncertainty evaluation have been demonstrated through examples.
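A minimal sketch of what such a symbolic-numeric workflow can look like, using SymPy to differentiate a measurement model exactly and then evaluate the GUM-style combined standard uncertainty numerically. The Ohm's-law model and the input values are illustrative assumptions, not the paper's examples; correlations between inputs are ignored.

```python
import sympy as sp

# Measurement model: resistance from Ohm's law, R = V / I.
V, I = sp.symbols('V I', positive=True)
R = V / I

# Symbolic sensitivity coefficients dR/dV and dR/dI via exact
# differentiation (no finite-difference error).
sens = {x: sp.diff(R, x) for x in (V, I)}

# GUM law of propagation for uncorrelated inputs:
# u_R^2 = (dR/dV)^2 u_V^2 + (dR/dI)^2 u_I^2.
uV, uI = sp.symbols('u_V u_I', positive=True)
u_R = sp.sqrt((sens[V] * uV)**2 + (sens[I] * uI)**2)

# Numeric evaluation at V = 10 V ± 0.05 V, I = 2 A ± 0.01 A.
value = float(R.subs({V: 10, I: 2}))
uncert = float(u_R.subs({V: 10, I: 2, uV: 0.05, uI: 0.01}))
```

Because the sensitivity coefficients are kept symbolic until the final substitution, the same expression can be re-evaluated for any input values or uncertainty budget without re-deriving anything.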

    Automatic differentiation in machine learning: a survey

    Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings. Comment: 43 pages, 5 figures
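The core idea the survey formalises, evaluating a program and its exact derivative together rather than symbolically or by finite differences, can be shown with a minimal forward-mode AD implementation using dual numbers. This is a pedagogical sketch, not any particular AD library's implementation.

```python
import math

class Dual:
    # Minimal forward-mode AD: each Dual carries a value and the
    # derivative of that value with respect to one chosen input.
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        # Product rule applied automatically at each operation.
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def sin(x):
    # Chain rule for an elementary function: d/dx sin(u) = cos(u) u'.
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    # f(x) = x*sin(x) + x; AD yields f and f' in one pass, exactly,
    # with neither symbolic expression swell nor truncation error.
    return x * sin(x) + x

x = Dual(2.0, 1.0)   # seed dx/dx = 1
y = f(x)             # y.val = f(2), y.dot = f'(2) = sin(2) + 2*cos(2) + 1
```

Reverse-mode AD (the generalisation of backpropagation discussed in the survey) records the same elementary operations but propagates derivatives from outputs back to inputs, which is cheaper when there are many inputs and few outputs.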