
    High Accuracy Combination Method for Solving the Systems of Nonlinear Volterra Integral and Integro-Differential Equations with Weakly Singular Kernels of the Second Kind

    This paper presents a high accuracy combination algorithm for solving systems of nonlinear Volterra integral and integro-differential equations with weakly singular kernels of the second kind. Two quadrature algorithms for solving such systems are discussed; both possess a high accuracy order and admit asymptotic expansions of their errors. By means of the combination algorithm, a numerical solution with a higher accuracy order than that of the two original quadrature algorithms can be obtained. Moreover, an a posteriori error estimate for the algorithm is derived. Both the theory and the numerical examples show that the algorithm is effective and saves storage and computational cost.
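    The combination step is, in spirit, an extrapolation: two solutions with known asymptotic error expansions are blended so that the leading error term cancels. The Python sketch below illustrates that idea on a deliberately simple toy problem, a scalar second-kind Volterra equation with a smooth kernel solved by the trapezoidal rule at two step sizes; the paper's actual algorithms treat systems with weakly singular kernels and different quadratures, so every name and parameter here is purely illustrative.

        # Toy illustration of the combination idea (not the paper's algorithm):
        # solve u(t) = f(t) + int_0^t K(t, s) u(s) ds with the composite
        # trapezoidal rule at step sizes h and h/2, then combine the two
        # solutions to cancel the leading O(h^2) term of the error expansion.
        import numpy as np

        def solve_trapezoid(f, K, T, n):
            """Trapezoidal-rule solution of the second-kind Volterra equation on [0, T]."""
            h = T / n
            t = np.linspace(0.0, T, n + 1)
            u = np.empty(n + 1)
            u[0] = f(t[0])
            for i in range(1, n + 1):
                s = 0.5 * h * K(t[i], t[0]) * u[0] + h * np.sum(K(t[i], t[1:i]) * u[1:i])
                # implicit diagonal term: u_i = f_i + s + (h/2) K(t_i, t_i) u_i
                u[i] = (f(t[i]) + s) / (1.0 - 0.5 * h * K(t[i], t[i]))
            return u

        # u(t) = 1 + int_0^t u(s) ds has the exact solution u(t) = exp(t).
        f = lambda t: 1.0
        K = lambda t, s: np.ones_like(s, dtype=float)
        T, exact = 1.0, np.exp(1.0)
        u_h = solve_trapezoid(f, K, T, 32)          # step h
        u_h2 = solve_trapezoid(f, K, T, 64)         # step h/2
        u_comb = (4.0 * u_h2[::2] - u_h) / 3.0      # the combination step
        print(abs(u_h[-1] - exact), abs(u_h2[-1] - exact), abs(u_comb[-1] - exact))

    The last error printed is markedly smaller than the first two, mirroring the higher accuracy order that the paper obtains from its combination of two quadrature schemes.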

    An inversion method for cometary atmospheres

    Remote observation of cometary atmospheres produces a measurement of the cometary emissions integrated along the line of sight. This integral is the so-called Abel transform of the local emission rate. The observation is generally interpreted under the hypothesis of spherical symmetry of the coma, under which the Abel transform can be inverted. We derive a numerical inversion method adapted to cometary atmospheres using both analytical results and least squares fitting techniques. This method, derived under the usual hypothesis of spherical symmetry, allows us to retrieve the radial distribution of the emission rate of any unabsorbed emission, which is the fundamental, physically meaningful quantity governing the observation. A Tikhonov regularization technique is also applied to reduce the possibly deleterious effects of noise in the observation and to guarantee that the problem remains well posed. Standard error propagation techniques are included in order to estimate the uncertainties affecting the retrieved emission rate. Several theoretical tests of the inversion technique are carried out to show its validity and robustness. In particular, we show that the Abel inversion of real data is only weakly sensitive to an offset applied to the input flux, which implies that the method, applied to the study of a cometary atmosphere, is only weakly dependent on uncertainties in the sky background that has to be subtracted from the raw observations of the coma. We apply the method to observations of three comets obtained with the TRAPPIST telescope: 103P/Hartley 2, F6/Lemmon and A1/Siding Spring. We show that the method retrieves realistic emission rates, and that characteristic lengths and production rates can be derived from the emission rate for both the CN and C2 molecules. We show that the retrieved characteristic lengths can differ from those obtained by a direct least squares fit to the observed flux of radiation, and that the discrepancies can be reconciled by correcting this flux by an offset (to which the inverse Abel transform is nearly insensitive). The A1/Siding Spring observations were obtained very shortly after the comet produced an outburst, and we show that the emission rates derived from the observed flux of the CN emission at 387 nm and of the C2 emission at 514.1 nm both present an easily identifiable shoulder that corresponds to the separation between pre- and post-outburst gas. As a general result, we show that diagnosing properties and features of the coma using the emission rate is easier than directly using the observed flux, because the Abel transform produces a smoothing that blurs the signatures left by features present in the coma. We also determine the parameters of a Haser model fitting the inverted data and fitting the line-of-sight integrated observation, for which we provide the exact analytical expression of the line-of-sight integration of the Haser model.
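    As a rough illustration of the numerical core of such an inversion (a sketch, not the authors' code), the following Python fragment discretizes the Abel transform of a spherically symmetric emission rate on radial shells and inverts a noisy synthetic profile with a Tikhonov-regularized least-squares solve; the shell grid, the noise level and the regularization parameter lam are arbitrary choices made for the example.

        # Spherically symmetric coma: the observed brightness at projected
        # distance p is B(p) = 2 * int_p^R eps(r) * r / sqrt(r^2 - p^2) dr,
        # discretized with eps piecewise constant on radial shells.
        import numpy as np

        def abel_matrix(r_edges, p):
            """Forward Abel operator: B = A @ eps for shell-wise constant eps."""
            A = np.zeros((p.size, r_edges.size - 1))
            for j in range(r_edges.size - 1):
                r_in, r_out = r_edges[j], r_edges[j + 1]
                mask = p < r_out                     # a shell contributes only if the ray crosses it
                lo = np.maximum(r_in, p[mask])
                A[mask, j] = 2.0 * (np.sqrt(r_out**2 - p[mask]**2) - np.sqrt(lo**2 - p[mask]**2))
            return A

        def tikhonov_inverse(A, b, lam):
            """Ridge-regularized least squares: argmin ||A x - b||^2 + lam^2 ||x||^2."""
            return np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)

        # Synthetic test: a 1/r^2 emission rate observed through a noisy projected profile.
        r_edges = np.linspace(1.0, 50.0, 101)        # radial shell edges (arbitrary units)
        r_mid = 0.5 * (r_edges[:-1] + r_edges[1:])
        p = np.linspace(1.0, 45.0, 80)               # projected distances of the measurements
        eps_true = 1.0 / r_mid**2
        A = abel_matrix(r_edges, p)
        b = A @ eps_true + 1e-3 * np.random.default_rng(0).normal(size=p.size)
        eps_rec = tikhonov_inverse(A, b, lam=1e-2)   # recovered radial emission rate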

    Sum-of-Squares approach to feedback control of laminar wake flows

    A novel nonlinear feedback control design methodology for incompressible fluid flows, aimed at the optimisation of long-time averages of flow quantities, is presented. It applies to reduced-order finite-dimensional models of fluid flows, expressed as a set of first-order nonlinear ordinary differential equations whose right-hand side is polynomial in the state variables and in the controls. The key idea, first discussed in Chernyshenko et al. 2014, Philos. T. Roy. Soc. 372(2020), is that the difficulties of treating and optimising long-time averages of a cost are relaxed by using upper/lower bounds of such averages as the objective function. In this setting, control design reduces to finding a feedback controller that optimises the bound, subject to a polynomial inequality constraint involving the cost function, the nonlinear system, the controller itself and a tunable polynomial function. A numerically tractable approach to the solution of such optimisation problems, based on Sum-of-Squares techniques and semidefinite programming, is proposed. To showcase the methodology, the mitigation of the fluctuation kinetic energy in the unsteady wake behind a circular cylinder in the laminar regime at Re=100, via controlled angular motions of the surface, is numerically investigated. A compact reduced-order model that resolves the long-term behaviour of the fluid flow and the effects of actuation is derived using Proper Orthogonal Decomposition and Galerkin projection. In a full-information setting, feedback controllers are then designed to reduce the long-time average of the kinetic energy associated with the limit cycle. These controllers are then implemented in direct numerical simulations of the actuated flow. Control performance, energy efficiency and the physical control mechanisms identified are analysed. Key elements, implications and future work are discussed.
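    The bounding idea can be reproduced on a toy problem with a few lines of semidefinite programming. The sketch below (assuming the cvxpy package; the scalar system, the cost and the auxiliary polynomial are illustrative choices, not the paper's reduced-order wake model) computes the smallest C such that C - Phi(x) - V'(x) f(x) is a sum of squares for f(x) = x - x^3, Phi(x) = x^2 and V(x) = a x^2 + b x^4. Along any bounded trajectory the long-time average of dV/dt vanishes, so such a C upper-bounds the long-time average of Phi.

        # SOS upper bound on a long-time average for the toy system xdot = x - x^3.
        # q(x) = C - x^2 - V'(x)(x - x^3) must be a sum of squares, i.e.
        # q = z^T Q z with z = [1, x, x^2, x^3] and Q positive semidefinite.
        import cvxpy as cp

        C = cp.Variable()                  # bound on the long-time average of x^2
        a = cp.Variable()                  # V(x) = a*x^2 + b*x^4
        b = cp.Variable()
        Q = cp.Variable((4, 4), PSD=True)  # Gram matrix of the SOS certificate

        # q(x) expands to C - (1 + 2a) x^2 + (2a - 4b) x^4 + 4b x^6; match coefficients.
        constraints = [
            Q[0, 0] == C,                            # x^0
            2 * Q[0, 1] == 0,                        # x^1
            2 * Q[0, 2] + Q[1, 1] == -(1 + 2 * a),   # x^2
            2 * Q[0, 3] + 2 * Q[1, 2] == 0,          # x^3
            2 * Q[1, 3] + Q[2, 2] == 2 * a - 4 * b,  # x^4
            2 * Q[2, 3] == 0,                        # x^5
            Q[3, 3] == 4 * b,                        # x^6
        ]
        cp.Problem(cp.Minimize(C), constraints).solve()
        print("upper bound on the long-time average of x^2:", C.value)  # ~1 (attractors at x = +-1)

    In the paper's setting the right-hand side also contains the polynomial feedback, whose coefficients enter the same inequality as additional unknowns alongside the tunable polynomial.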

    Novel Numerical Approaches for the Resolution of Direct and Inverse Heat Transfer Problems

    This dissertation describes an innovative and robust global time approach that has been developed for the resolution of direct and inverse problems, specifically in the disciplines of radiation and conduction heat transfer. Direct problems are generally well posed and readily lend themselves to standard, well-defined mathematical solution techniques. Inverse problems differ in that they tend to be ill-posed in the sense of Hadamard, i.e., small perturbations in the input data can produce large variations and instabilities in the output. The stability problem is exacerbated by the use of discrete experimental data which may be subject to substantial measurement error. This tendency towards ill-posedness is the main difficulty in developing a suitable prediction algorithm for most inverse problems. Previous attempts to overcome the inherent instability have involved smoothing techniques such as Tikhonov regularization and sequential function estimation (Beck's future information method). As alternatives to the existing methodologies, two novel mathematical schemes are proposed: the Global Time Method (GTM) and the Function Decomposition Method (FDM). Both schemes render time and space in a global fashion, resolving the temporal and spatial domains simultaneously. This process effectively treats time elliptically, or as a fourth spatial dimension. A Weighted Residuals Method (WRM) is utilized in the mathematical formulation, wherein the unknown function is approximated in terms of a finite series expansion. Regularization of the solution is achieved through the retention of expansion terms, as opposed to smoothing in the classical Tikhonov sense. In order to demonstrate the merit and flexibility of these approaches, the GTM and FDM have been applied to representative direct and inverse heat transfer problems: a direct problem of radiative transport, a parameter estimation problem found in Differential Scanning Calorimetry (DSC) and an inverse heat conduction problem (IHCP). The IHCP is resolved for the cases of diagnostic deduction (discrete temperature data at the boundary) and thermal design (prescribed functional data at the boundary). Both methods are shown to provide excellent results for the conditions under which they were tested. Finally, a number of suggestions for future work are offered.
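    The contrast between penalty-based and truncation-based regularization is easy to see on a small example. The Python sketch below illustrates the general principle only, not the GTM or FDM formulations themselves: noisy samples of a smooth function are fitted with Chebyshev expansions of increasing length, and because too few terms cannot represent the function while too many reproduce the noise, the number of retained terms plays the role of the regularization parameter.

        # Regularization by truncation of a series expansion: compare Chebyshev
        # least-squares fits of different lengths to noisy samples of a smooth function.
        import numpy as np
        from numpy.polynomial import chebyshev as cheb

        rng = np.random.default_rng(0)
        x = np.linspace(-1.0, 1.0, 201)
        exact = np.exp(-x**2) * np.sin(3 * x)            # stand-in for the unknown function
        data = exact + 0.05 * rng.normal(size=x.size)    # "measured" data with noise

        for n_terms in (6, 12, 40):
            coeffs = cheb.chebfit(x, data, deg=n_terms - 1)   # expansion coefficients
            fit = cheb.chebval(x, coeffs)
            print(n_terms, "terms -> RMS error vs exact:", np.sqrt(np.mean((fit - exact)**2)))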

    Wavelet transforms and their applications to MHD and plasma turbulence: a review

    Wavelet analysis and compression tools are reviewed and different applications to study MHD and plasma turbulence are presented. We introduce the continuous and the orthogonal wavelet transform and detail several statistical diagnostics based on the wavelet coefficients. We then show how to extract coherent structures out of fully developed turbulent flows using wavelet-based denoising. Finally some multiscale numerical simulation schemes using wavelets are described. Several examples for analyzing, compressing and computing one-, two- and three-dimensional turbulent MHD or plasma flows are presented. Comment: Journal of Plasma Physics, 201
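    As a concrete, minimal example of the denoising step reviewed here (assuming the PyWavelets package; the signal, the wavelet and the threshold rule are illustrative choices), the sketch below splits a noisy one-dimensional signal into a coherent part, reconstructed from the few wavelet coefficients above a universal threshold, and an incoherent remainder. The review applies the same idea, iterated and in two and three dimensions, to MHD and plasma fields.

        # Wavelet-based extraction of coherent structures from a noisy 1-D signal.
        import numpy as np
        import pywt

        rng = np.random.default_rng(1)
        n = 1024
        t = np.linspace(0.0, 1.0, n)
        signal = np.sign(np.sin(6 * np.pi * t)) * np.exp(-3 * t)   # "coherent" part
        noisy = signal + 0.2 * rng.normal(size=n)                  # plus Gaussian noise

        coeffs = pywt.wavedec(noisy, "db4", level=6)               # orthogonal wavelet transform
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745             # noise estimate from the finest scale
        thr = sigma * np.sqrt(2.0 * np.log(n))                     # universal (Donoho-Johnstone) threshold
        kept = [coeffs[0]] + [pywt.threshold(c, thr, mode="hard") for c in coeffs[1:]]
        coherent = pywt.waverec(kept, "db4")[:n]                   # coherent-structure reconstruction
        incoherent = noisy - coherent                              # remainder, close to Gaussian noise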

    EEG data analysis and development of data partitions for machine learning algorithms

    The electronic version of this thesis does not include the publications. The thesis develops a more efficient data handling method for machine learning. In classical statistics, models are simple enough that, together with some assumptions about the data, it is possible to say whether a given result is statistically significant, i.e. whether the data contain any signal distinct from noise. Machine learning algorithms, on the other hand, can have hundreds of millions of model weights. Such models can explain any data with 100% accuracy, which changes the rules of the game: in machine learning terms, they overfit. This issue is addressed by evaluating the models on a separate test set. Some data points are not used in the model fitting phase; once the best model has been found, its quality is evaluated on that test set. This method works well, but some of the precious data is spent on testing the model rather than on training it. Researchers have come up with many ways to improve the efficiency of data usage. One of the main methods, nested cross-validation, uses data very efficiently but makes it very difficult to interpret model parameters. In this thesis, we invented a novel approach for data partitioning that we term "cross-validation and cross-testing". First, cross-validation is used on part of the data to determine and lock the model. Testing of the model on a separate test set is then performed in a novel way such that in each testing cycle, part of the test data is also used in a model training phase. This gives an improved system for using machine learning algorithms in cases where we need to interpret the model parameters but not the model weights: for example, it makes it possible to state that the data follow a linear relationship rather than a quadratic one, or that the best neural network has five hidden layers. To our knowledge, the new method is currently the most efficient approach available for this situation.
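    A minimal sketch of how such a partitioning scheme can be wired together (an interpretation of the description above using scikit-learn on synthetic data; the estimator, the parameter grid and the fold counts are arbitrary): cross-validation on the training part locks the hyperparameters, and the held-out part is then consumed fold by fold, each fold being scored by a model retrained with the locked settings on all remaining data.

        # "Cross-validation and cross-testing": no held-out sample is wasted for training,
        # while the selected hyperparameters stay fixed and interpretable.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import GridSearchCV, KFold, train_test_split

        X, y = make_classification(n_samples=600, n_features=20, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        # Step 1: ordinary cross-validation on the training part fixes ("locks") the model,
        # here only the regularization strength of a logistic regression.
        search = GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
        search.fit(X_train, y_train)
        locked = search.best_params_

        # Step 2: "cross-testing" -- split the held-out set into folds; each fold is scored
        # by a model with the locked hyperparameters retrained on all remaining data.
        scores = []
        for rest_idx, score_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X_test):
            X_fit = np.vstack([X_train, X_test[rest_idx]])
            y_fit = np.concatenate([y_train, y_test[rest_idx]])
            model = LogisticRegression(max_iter=1000, **locked).fit(X_fit, y_fit)
            scores.append(model.score(X_test[score_idx], y_test[score_idx]))
        print("locked hyperparameters:", locked, "cross-tested accuracy:", np.mean(scores))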

    Color postprocessing for 3-dimensional finite element mesh quality evaluation and evolving graphical workstation

    Get PDF
    Three general tasks on general-purpose, interactive color graphics postprocessing for three-dimensional computational mechanics were accomplished. First, the existing program (POSTPRO3D) was ported to a high-resolution device; in the course of this transfer, numerous enhancements were implemented in the program. The performance of the hardware was evaluated from the point of view of engineering postprocessing, and the characteristics of future hardware were discussed. Second, interactive graphical tools were implemented to facilitate qualitative mesh evaluation from a single analysis. The literature was surveyed and a bibliography compiled. Qualitative mesh sensors were examined, and the use of two-dimensional plots of unaveraged responses on the surface of three-dimensional continua was emphasized in an interactive color raster graphics environment. Finally, a postprocessing environment was designed for state-of-the-art workstation technology. Modularity, personalization of the environment, integration of the engineering design processes, and the development and use of high-level graphics tools are some of the features of the intended environment.

    Cartesian grid FEM (cgFEM): High performance h-adaptive FE analysis with efficient error control. Application to structural shape optimization

    More and more challenging designs are required every day in today's industries. The traditional trial and error procedure commonly used for mechanical part design is no longer adequate, since it slows down the design process and yields suboptimal designs. For structural components, one alternative consists in using shape optimization processes, which provide optimal solutions. However, these techniques require a high computational effort and extremely efficient and robust Finite Element (FE) programs. FE software companies are aware that their current commercial products must improve in this sense and devote considerable resources to improving their codes. In this work we propose to use the Cartesian Grid Finite Element Method, cgFEM, as a tool for efficient and robust numerical analysis. The cgFEM methodology developed in this thesis uses the synergy of a variety of techniques to achieve this purpose, but the two main ingredients are the use of Cartesian FE grids independent of the geometry of the component to be analyzed and an efficient hierarchical data structure. These two features give the cgFEM technology the necessary ingredients to increase its efficiency with respect to commercial FE codes. As indicated in [1, 2], in order to guarantee the convergence of a structural shape optimization process the error of each geometry analyzed must be controlled. The cgFEM code therefore also incorporates appropriate error estimators, specifically adapted to the cgFEM framework to further increase its efficiency. This work introduces a solution recovery technique, denoted SPR-CD, which in combination with the Zienkiewicz and Zhu error estimator [3] provides very accurate error measures of the FE solution. Additionally, error estimators and numerical bounds in Quantities of Interest based on the SPR-CD technique have been developed to allow for an efficient control of the quality of the numerical solution. Regarding error estimation, we also present three new upper error bounding techniques for the error in the energy norm of the FE solution, based on recovery processes. Furthermore, this work presents an error estimation procedure to control the quality of the recovered stress field provided by the SPR-CD technique. Since the recovered stress field is commonly more accurate and has a higher convergence rate than the FE solution, we propose to substitute the recovered solution for the raw FE solution to decrease the computational cost of the numerical analysis. All these improvements are reflected in the numerical examples of structural shape optimization problems presented in this thesis. These numerical analyses clearly show the improved behavior of the cgFEM technology over the classical FE implementations commonly used in industry.
    Nadal Soriano, E. (2014). Cartesian grid FEM (cgFEM): High performance h-adaptive FE analysis with efficient error control. Application to structural shape optimization [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/35620
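    The recovery-based error estimation at the heart of this approach can be illustrated in one dimension with a few lines of code. The sketch below is a didactic stand-in, not the cgFEM/SPR-CD implementation: simple nodal averaging replaces SPR-CD and the model problem is the 1-D Poisson equation. It computes a linear finite element solution, recovers a smoothed gradient at the nodes, and uses the element-wise difference between the recovered and the raw FE gradients as a Zienkiewicz-Zhu-type estimate of the energy-norm error.

        # 1-D Zienkiewicz-Zhu-type error estimate for -u'' = f on (0,1), u(0) = u(1) = 0.
        import numpy as np

        def fem_1d(n, f):
            """Linear finite element solution on a uniform mesh with n elements."""
            x = np.linspace(0.0, 1.0, n + 1)
            h = x[1] - x[0]
            K = np.zeros((n + 1, n + 1))
            F = np.zeros(n + 1)
            for e in range(n):
                K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
                F[e:e + 2] += 0.5 * h * f(0.5 * (x[e] + x[e + 1]))   # midpoint-rule load
            u = np.zeros(n + 1)
            u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])        # homogeneous Dirichlet BCs
            return x, u

        x, u = fem_1d(16, lambda s: np.pi**2 * np.sin(np.pi * s))    # exact solution: sin(pi x)
        h = x[1] - x[0]
        grad = np.diff(u) / h                                        # element-wise constant FE gradient
        # Recovered gradient: nodal values by averaging the two adjacent element gradients.
        grad_nodal = np.concatenate(([grad[0]], 0.5 * (grad[:-1] + grad[1:]), [grad[-1]]))
        da, db = grad_nodal[:-1] - grad, grad_nodal[1:] - grad
        eta2 = h * (da**2 + da * db + db**2) / 3.0                   # element L2 norm of (recovered - FE)
        print("estimated energy-norm error:", np.sqrt(eta2.sum()))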