A multiscale strategy for fouling prediction and mitigation in gas turbines
Gas turbines are one of the primary sources of power for both aerospace and land-based applications. Precisely for this reason, they are often forced to operate in harsh environmental conditions, which involve the ingestion of particles by the engine. The main implications of this problem are often underestimated. The particulate in the airflow ingested by the machine can deposit on or erode its internal surfaces, altering their aerodynamic geometry and entailing performance degradation and, possibly, a reduction in engine life. This issue affects both the compressor and the turbine section and can occur in either land-based or aeronautical turbines. For the former, the problem can be mitigated (but not eliminated) by installing filtration systems. In the aerospace field, by contrast, filtration systems cannot be used. Volcanic eruptions and sand or dust storms can send particulate up to aircraft cruising altitudes, and aircraft operating in remote locations or at low altitudes can also be subject to particle ingestion, especially in desert environments. The aim of this work is to propose different methodologies capable of mitigating the effects of fouling or of predicting the performance degradation that it generates. For this purpose, both the hot and cold engine sections are considered. Concerning the turbine section, new design guidelines are presented. This is because, for this specific component, the time scales of failure events due to hot deposition can be of the order of minutes, which makes any predictive model inapplicable. In this respect, design optimization techniques were applied to find the HPT vane geometry least sensitive to fouling phenomena. After that, machine learning methods were adopted to obtain a design map that can be useful in the first steps of the design phase. Moreover, after a numerical uncertainty quantification
analysis, it was demonstrated that a deterministic optimization is not sufficient to face highly aleatory phenomena such as fouling. This suggests the use of robust or aggressive design techniques to confront this issue. On the other hand, with respect to the compressor section, the research was mainly focused on building a predictive maintenance tool. This is because the time scales of failure events due to cold deposition are longer than those for the hot section; hence the main challenge for this component is the optimization of the washing schedule. As reported in the previous sections, there are several studies in the literature focused on this issue, but almost all of them are data-based rather than physics-based. The innovative strategy proposed here is a combination of physics-based and data-based methodologies. In particular, a reduced-order model has been developed to predict the behaviour of the whole engine as the degradation proceeds. For this purpose, a gas path code that uses the components' characteristic maps has been created to simulate the gas turbine. A map variation technique has been used to take into account the fouling effects on each engine component. In particular, fouling coefficients have been derived as functions of the engine architecture, its operating conditions, and the contaminant characteristics. For this purpose, both experimental and computational results have been used; for the latter in particular, efforts have been made to develop a new numerical deposition/detachment model.
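As a rough illustration of the map-variation idea described in this abstract, the sketch below scales a clean compressor-map operating point by fouling coefficients. The linear scaling law, the coefficient values, and the function name are illustrative assumptions, not the calibrated model developed in the thesis.

```python
# Illustrative sketch of a map-variation technique: a clean compressor-map
# operating point (mass flow, efficiency at a given speed) is scaled by
# fouling coefficients that grow with ingested contaminant mass. The linear
# scaling law and all numeric values are assumptions for illustration only.

def fouled_map_point(m_dot_clean, eta_clean, ingested_mass_kg,
                     k_flow=0.004, k_eta=0.002):
    """Scale a clean-map operating point for a given ingested mass.

    k_flow, k_eta: hypothetical fouling sensitivities (per kg of deposit).
    """
    flow_coeff = max(0.0, 1.0 - k_flow * ingested_mass_kg)
    eta_coeff = max(0.0, 1.0 - k_eta * ingested_mass_kg)
    return m_dot_clean * flow_coeff, eta_clean * eta_coeff

# Example: degradation after 10 kg of ingested contaminant
m_dot, eta = fouled_map_point(100.0, 0.85, 10.0)
print(m_dot, eta)  # m_dot ≈ 96.0 kg/s, eta ≈ 0.833
```

In a real gas-path code these coefficients would be tabulated against engine architecture, operating condition, and contaminant properties rather than fixed constants.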
Roadmap for optical tweezers
[Article written by a very large number of authors; only the first-listed author, the name of the collaboration group (if any), and the authors affiliated with the UAM are referenced.]
Optical tweezers are tools made of light that enable contactless pushing, trapping, and manipulation of objects, ranging from atoms to space light sails. Since the pioneering work by Arthur Ashkin in the 1970s, optical tweezers have evolved into sophisticated instruments and have been employed in a broad range of applications in the life sciences, physics, and engineering. These include accurate force and torque measurement at the femtonewton level, microrheology of complex fluids, single micro- and nano-particle spectroscopy, single-cell analysis, and statistical-physics experiments. This roadmap provides insights into current investigations involving optical forces and optical tweezers from their theoretical foundations to designs and setups. It also offers perspectives for applications to a wide range of research fields, from biophysics to space exploration.
European Commission (Horizon 2020, Project No. 812780)
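As a side note on the force-measurement application mentioned in the abstract, optical-trap stiffness is commonly calibrated via the equipartition theorem: for a bead in a harmonic trap at temperature T, the position variance satisfies ⟨x²⟩ = k_B T / k_trap. The sketch below applies this to synthetic position data; all numbers are illustrative.

```python
import math
import random

# Equipartition calibration of an optical trap: the trap stiffness follows
# from the variance of the measured bead position, k = k_B * T / <x^2>.
# The synthetic Gaussian data below stand in for a real position time series.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(positions_m, temperature_K=300.0):
    """Estimate trap stiffness (N/m) from bead positions via equipartition."""
    n = len(positions_m)
    mean = sum(positions_m) / n
    var = sum((x - mean) ** 2 for x in positions_m) / n
    return K_B * temperature_K / var

# Synthetic positions with a 10 nm standard deviation (illustrative)
rng = random.Random(0)
xs = [rng.gauss(0.0, 10e-9) for _ in range(100_000)]
k = trap_stiffness(xs)
print(f"{k:.2e} N/m")  # roughly k_B*300 / 1e-16 ≈ 4.1e-5 N/m
```

With the stiffness in hand, a displacement of a few nanometres translates directly into a force F = k·x in the femtonewton-to-piconewton range quoted above.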
Boundary integral equation methods for superhydrophobic flow and integrated photonics
This dissertation presents fast integral equation methods (FIEMs) for solving two important problems encountered in practical engineering applications.
The first problem involves the mixed boundary value problem in two-dimensional Stokes flow, which appears commonly in computational fluid mechanics. This problem is particularly relevant to the design of microfluidic devices, especially those involving superhydrophobic (SH) flows over surfaces made of composite solid materials with alternating solid portions, grooves, or air pockets, leading to enhanced slip.
The second problem addresses waveguide devices in two dimensions, governed by the Helmholtz equation with Dirichlet conditions imposed on the boundary. This problem serves as a model for photonic devices, and the systematic investigation focuses on the scattering matrix formulation, in both analysis and numerical algorithms. This research represents an important step towards achieving efficient and accurate simulations of more complex photonic devices with straight waveguides as input and output channels, and Maxwell's equations in three dimensions as the governing equations.
Numerically, both problems pose significant challenges due to the following reasons. First, the problems are typically defined in infinite domains, necessitating the use of artificial boundary conditions when employing volumetric methods such as finite difference or finite element methods. Second, the solutions often exhibit singular behavior, characterized by corner singularities in the geometry or abrupt changes in boundary conditions, even when the underlying geometry is smooth. Analyzing the exact nature of these singularities at corners or transition points is extremely difficult. Existing methods often resort to adaptive refinement, resulting in large linear systems, numerical instability, low accuracy, and extensive computational costs.
Under the hood, fast integral equation methods serve as the common engine for solving both problems. First, by utilizing the constant-coefficient nature of the governing partial differential equations (PDEs) in both problems and the availability of free-space Green's functions, the solutions are represented via proper combinations of layer potentials. By construction, the representation satisfies the governing PDEs within the volumetric domain and appropriate conditions at infinity. The combination of boundary conditions and jump relations of the layer potentials then leads to boundary integral equations (BIEs) with unknowns defined only on the boundary. This reduces the dimensionality of the problem by one in the solve phase. Second, the kernels of the layer potentials often contain logarithmic, singular, and hypersingular terms. High-order kernel-split quadratures are employed to handle these weakly singular, singular, and hypersingular integrals for self-interactions, as well as nearly weakly singular, nearly singular, and nearly hypersingular integrals for near-interactions and close evaluations. Third, the recursively compressed inverse preconditioning (RCIP) method is applied to treat the unknown singularity in the density around corners and transition points. Finally, the celebrated fast multipole method (FMM) is applied to accelerate the scheme in both the solve and evaluation phases. In summary, high-order numerical schemes of linear complexity have been developed to solve both problems, often with ten digits of accuracy, as illustrated by extensive numerical examples.
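The layer-potential-to-BIE pipeline summarized above can be illustrated on the simplest possible case: a Nyström discretization of the interior Laplace Dirichlet problem on the unit circle via a double-layer potential. This toy sketch omits the kernel-split quadrature, RCIP, and FMM machinery the abstract describes, and is not code from the dissertation.

```python
import math

# Toy BIE solve: interior Laplace Dirichlet problem on the unit circle via a
# double-layer potential. The jump relation gives -sigma/2 + K[sigma] = f on
# the boundary; trapezoid-rule Nystrom is spectrally accurate on this smooth
# geometry. Boundary data f = x (trace of the harmonic function u(x, y) = x).

N = 64
h = 2 * math.pi / N
nodes = [(math.cos(i * h), math.sin(i * h)) for i in range(N)]

def dlp_kernel(x, y, ny):
    """Double-layer kernel for x != y: ((x - y) . n_y) / (2*pi*|x - y|^2)."""
    dx, dy = x[0] - y[0], x[1] - y[1]
    return (dx * ny[0] + dy * ny[1]) / (2 * math.pi * (dx * dx + dy * dy))

# Build the Nystrom matrix; on the unit circle the kernel's diagonal limit
# is -1/(4*pi), and the outward normal at a node equals the node itself.
A = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(N):
        k = -1 / (4 * math.pi) if i == j else dlp_kernel(nodes[i], nodes[j], nodes[j])
        A[i][j] = h * k          # trapezoid weight (arclength element ds = h)
    A[i][i] -= 0.5               # jump-relation term: -sigma/2

f = [x for (x, y) in nodes]

# Solve A sigma = f by Gaussian elimination with partial pivoting.
M = [row[:] + [fi] for row, fi in zip(A, f)]
for c in range(N):
    p = max(range(c, N), key=lambda r: abs(M[r][c]))
    M[c], M[p] = M[p], M[c]
    for r in range(c + 1, N):
        m = M[r][c] / M[c][c]
        for q in range(c, N + 1):
            M[r][q] -= m * M[c][q]
sigma = [0.0] * N
for r in range(N - 1, -1, -1):
    s = M[r][N] - sum(M[r][c] * sigma[c] for c in range(r + 1, N))
    sigma[r] = s / M[r][r]

# Evaluate the potential at an interior point and compare to the exact u = x.
z = (0.3, 0.2)
u = h * sum(dlp_kernel(z, nodes[j], nodes[j]) * sigma[j] for j in range(N))
print(abs(u - z[0]))  # error near machine precision for this smooth problem
```

The dissertation's contribution lies precisely where this sketch breaks down: corners, transition points in the boundary conditions, and close evaluation, which require the kernel-split quadrature and RCIP treatments named above.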
Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics
Three recent breakthroughs due to AI in arts and science serve as motivation:
an award-winning digital image, protein folding, fast matrix multiplication.
Many recent developments in artificial neural networks, particularly deep
learning (DL), applied and relevant to computational mechanics (solid, fluids,
finite-element technology) are reviewed in detail. Both hybrid and pure machine
learning (ML) methods are discussed. Hybrid methods combine traditional PDE
discretizations with ML methods either (1) to help model complex nonlinear
constitutive relations, (2) to nonlinearly reduce the model order for efficient
simulation (turbulence), or (3) to accelerate the simulation by predicting
certain components in the traditional integration methods. Here, methods (1)
and (2) relied on Long Short-Term Memory (LSTM) architecture, with method (3)
relying on convolutional neural networks. Pure ML methods to solve (nonlinear)
PDEs are represented by Physics-Informed Neural Network (PINN) methods, which
could be combined with an attention mechanism to address discontinuous solutions.
Both LSTM and attention architectures, together with modern and generalized
classic optimizers to include stochasticity for DL networks, are extensively
reviewed. Kernel machines, including Gaussian processes, are provided to
sufficient depth for more advanced works such as shallow networks with infinite
width. The review addresses not only experts: readers are assumed to be
familiar with computational mechanics, but not with DL, whose concepts and
applications are built up from the basics, aiming to bring first-time learners
quickly to the forefront of research. The history and limitations of AI are
recounted and discussed, with particular attention to pointing out
misstatements and misconceptions of the classics, even in well-known
references. Positioning and pointing control of a large-deformable beam is
given as an example.
Comment: 275 pages, 158 figures. Appeared online on 2023.03.01 at CMES-Computer Modeling in Engineering & Science
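For readers unfamiliar with the LSTM architecture the review relies on for its hybrid methods (1) and (2), the standard cell equations can be sketched as follows; the weights are arbitrary illustrative values, not anything taken from the review.

```python
import math

# Standard LSTM cell forward pass (single unit, scalar input) making the
# architecture concrete. Weights are arbitrary illustrative values.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step. W maps gate name -> (w_x, w_h, b)."""
    def gate(name, squash):
        w_x, w_h, b = W[name]
        return squash(w_x * x + w_h * h_prev + b)
    f = gate("forget", sigmoid)       # how much old cell state to keep
    i = gate("input", sigmoid)        # how much new candidate to write
    g = gate("candidate", math.tanh)  # new candidate cell content
    o = gate("output", sigmoid)       # how much cell state to expose
    c = f * c_prev + i * g            # cell state update
    h = o * math.tanh(c)              # hidden state (the cell's output)
    return h, c

W = {"forget": (0.5, 0.1, 0.0), "input": (0.3, -0.2, 0.0),
     "candidate": (1.0, 0.4, 0.0), "output": (0.7, 0.0, 0.0)}

h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:           # a short input sequence
    h, c = lstm_step(x, h, c, W)
print(h, c)
```

The additive cell-state update c = f·c_prev + i·g is what lets gradients flow across many time steps, which is why LSTMs suit the history-dependent constitutive relations and reduced-order models discussed in the review.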
The Fifteenth Marcel Grossmann Meeting
The three volumes of the proceedings of MG15 give a broad view of all aspects of gravitational physics and astrophysics, from mathematical issues to recent observations and experiments. The scientific program of the meeting included 40 morning plenary talks over 6 days, 5 evening popular talks and nearly 100 parallel sessions on 71 topics spread over 4 afternoons. These proceedings are a representative sample of the very many oral and poster presentations made at the meeting. Part A contains plenary and review articles and the contributions from some parallel sessions, while Parts B and C consist of those from the remaining parallel sessions. The contents range from the mathematical foundations of classical and quantum gravitational theories including recent developments in string theory, to precision tests of general relativity including progress towards the detection of gravitational waves, and from supernova cosmology to relativistic astrophysics, including topics such as gamma ray bursts, black hole physics both in our galaxy and in active galactic nuclei in other galaxies, and neutron star, pulsar and white dwarf astrophysics.
Parallel sessions touch on dark matter, neutrinos, X-ray sources, astrophysical black holes, neutron stars, white dwarfs, binary systems, radiative transfer, accretion disks, quasars, gamma ray bursts, supernovas, alternative gravitational theories, perturbations of collapsed objects, analog models, black hole thermodynamics, numerical relativity, gravitational lensing, large scale structure, observational cosmology, early universe models and cosmic microwave background anisotropies, inhomogeneous cosmology, inflation, global structure, singularities, chaos, Einstein-Maxwell systems, wormholes, exact solutions of Einstein's equations, gravitational waves, gravitational wave detectors and data analysis, precision gravitational measurements, quantum gravity and loop quantum gravity, quantum cosmology, strings and branes, self-gravitating systems, gamma ray astronomy, cosmic rays and the history of general relativity
Mathematical Modeling of Biological Systems
Mathematical modeling is a powerful approach for investigating open problems in the natural sciences, in particular physics, biology, and medicine. Applied mathematics makes it possible to translate the available information about real-world phenomena into mathematical objects and concepts. Mathematical models are useful descriptive tools that capture the salient aspects of complex biological systems along with their fundamental governing laws, elucidating the system's behavior in time and space and evidencing symmetry, or symmetry breaking, in geometry and morphology. Additionally, mathematical models are useful predictive tools able to reliably forecast the future evolution of a system or its response to specific inputs. More importantly, for biomedical systems such models can even become prescriptive tools, allowing effective, sometimes optimal, intervention strategies for the treatment and control of pathological states to be planned. The application of mathematical physics, nonlinear analysis, and systems and control theory to the study of biological and medical systems leads to the formulation of new challenging problems for the scientific community. This Special Issue includes innovative contributions by experienced researchers in the field of mathematical modelling applied to biology and medicine.
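As a minimal concrete instance of the descriptive and predictive models discussed above, the sketch below integrates the classic logistic growth equation dN/dt = rN(1 − N/K) with the explicit Euler method; the parameter values are illustrative.

```python
# Logistic population growth dN/dt = r*N*(1 - N/K), a canonical descriptive
# model of a biological system, integrated with explicit Euler time stepping.
# All parameter values are illustrative.

def simulate_logistic(n0, r, K, dt, steps):
    """Euler integration of logistic growth; returns the full trajectory."""
    traj = [n0]
    n = n0
    for _ in range(steps):
        n += dt * r * n * (1 - n / K)
        traj.append(n)
    return traj

# Population starting at 10, growth rate 0.5/day, carrying capacity 1000,
# integrated for 200 days with a 0.1-day time step
traj = simulate_logistic(10.0, 0.5, 1000.0, 0.1, 2000)
print(round(traj[-1], 1))  # prints 1000.0: the population saturates at K
```

The same model, fitted to data and extended with a treatment term, is the kind of object that becomes a prescriptive tool in the biomedical setting described above.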