The anisotropic grain size effect on the mechanical response of polycrystals: The role of columnar grain morphology in additively manufactured metals
Additively manufactured (AM) metals exhibit highly complex microstructures,
particularly with respect to grain morphology which typically features
heterogeneous grain size distribution, anomalous and anisotropic grain shapes,
and the so-called columnar grains. In general, conventional morphological descriptors are not suitable for representing the complex and anisotropic grain morphology of AM microstructures. The principal aspect of microstructural grain morphology is the grain boundary spacing, or grain size, whose effect on the mechanical response is known to be crucial. In this paper, we formally
introduce the notions of axial grain size and grain size anisotropy as robust
morphological descriptors which can concisely represent highly complex grain
morphologies. We instantiated a discrete sample of polycrystalline aggregate as
a representative volume element (RVE) which has random crystallographic
orientation and misorientation distributions. However, the instantiated RVE
incorporates the typical morphological features of AM microstructures including
distinctive grain size heterogeneity and anisotropic grain size owing to its
pronounced columnar grain morphology. We ensured that any anisotropy arising in
the macroscopic mechanical response of the instantiated sample is mainly
associated with its underlying anisotropic grain size. The RVE was then used
for meso-scale full-field crystal plasticity simulations corresponding to
uniaxial tensile deformation along different axes via a spectral solver and a
physics-based crystal plasticity constitutive model. Through the numerical
analyses, we were able to isolate the contribution of anisotropic grain size to
the anisotropy in the mechanical response of polycrystalline aggregates,
particularly those with the characteristic complex grain morphology of AM
metals. Such a contribution can be described by an inverse square relation.
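As an illustrative sketch only (the symbols here are assumed, not taken from the paper): writing d_i for the axial grain size along loading axis i, an inverse-square dependence means that the grain-size contribution to the flow stress along that axis scales as
\[ \Delta\sigma_i \propto \frac{1}{d_i^{2}} , \]
so the anisotropy between two loading axes is governed by the difference of the inverse squares of their axial grain sizes; the precise quantity and prefactor are those defined in the paper.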
Revisiting Vainshtein Screening for fast N-body simulations
We revisit a method to incorporate the Vainshtein screening mechanism in
N-body simulations proposed by R. Scoccimarro in~\cite{Scoccimarro:2009eu}. We
further extend this method to cover a subset of Horndeski theories that evade
the bound on the speed of gravitational waves set by the binary neutron star
merger GW170817. The procedure consists of the computation of an effective
gravitational coupling $G_{\rm eff}(a,k)$ that is time and scale dependent, where the scale dependence incorporates the screening of the fifth force. This is a fast procedure that, compared with the alternative of solving the full equation of motion for the scalar field inside N-body codes, considerably reduces the computational time and complexity
required to run simulations. To test the validity of this approach in the
non-linear regime, we have implemented it in a COmoving Lagrangian
Approximation (COLA) N-body code and run simulations for two gravity models
that have full N-body simulation outputs available in the literature, nDGP and
Cubic Galileon. We validate the combination of the COLA method with this
implementation of the Vainshtein mechanism against full N-body simulations for
predicting the boost function: the ratio between the modified gravity
non-linear matter power spectrum and its General Relativity counterpart. This
quantity is of great importance for building emulators in beyond-$\Lambda$CDM
models, and we find that the method described in this work has an agreement of
below for scales down to Mpc with respect to full N-body
simulations. Comment: 33 pages, 13 figures and 9 tables. JCAP accepted version
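As a schematic illustration with assumed notation (not quoted from the paper): in such implementations the screened fifth force enters through a modified Poisson equation of the form
\[ -k^{2}\Phi = 4\pi G_{\rm eff}(a,k)\, a^{2}\, \bar{\rho}\, \delta , \]
and the quantity validated against full N-body simulations is the boost function
\[ B(k,z) = \frac{P_{\rm MG}(k,z)}{P_{\rm GR}(k,z)} , \]
the ratio of the non-linear matter power spectrum in the modified-gravity model to its General Relativity counterpart.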
Deciphering Radio Emission from Solar Coronal Mass Ejections using High-fidelity Spectropolarimetric Radio Imaging
Coronal mass ejections (CMEs) are large-scale expulsions of plasma and
magnetic fields from the Sun into the heliosphere and are the most important
driver of space weather. The geo-effectiveness of a CME is primarily determined
by its magnetic field strength and topology. Measurement of CME magnetic
fields, both in the corona and heliosphere, is essential for improving space
weather forecasting. Observations at radio wavelengths can provide several
remote measurement tools for estimating both strength and topology of the CME
magnetic fields. Among them, gyrosynchrotron (GS) emission produced by mildly relativistic electrons trapped in CME magnetic fields is a promising method for estimating the magnetic field strength of CMEs at low and
middle coronal heights. However, GS emissions from some parts of the CME are
much fainter than the quiet Sun emission and require high dynamic range (DR)
imaging for their detection. This thesis presents a state-of-the-art
calibration and imaging algorithm capable of routinely producing high DR
spectropolarimetric snapshot solar radio images using data from a new
technology radio telescope, the Murchison Widefield Array. This allows us to
detect much fainter GS emissions from CME plasma at much higher coronal
heights. For the first time, robust circular polarization measurements have been used jointly with total intensity measurements to constrain the GS model parameters, which significantly improves the robustness of the estimated parameters. Observational evidence is also found that the routinely used homogeneous and isotropic GS models may not always be sufficient
to model the observations. In the future, with upcoming sensitive telescopes
and physics-based forward models, it should be possible to relax some of these
assumptions and make this method more robust for estimating CME plasma
parameters at coronal heights. Comment: 297 pages, 100 figures, 9 tables. Submitted to Tata Institute of Fundamental Research, Mumbai, India; Ph.D. thesis
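For context, using the standard definition rather than one quoted from the thesis, the imaging dynamic range is
\[ \mathrm{DR} \simeq \frac{S_{\rm peak}}{\sigma_{\rm rms}} , \]
the ratio of the peak flux density in the image to the off-source RMS noise; detecting GS emission that is far fainter than the quiet Sun therefore requires images with a very high DR.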
A kinetic Fokker-Planck algorithm for simulating multiscale gas flows
Numerical aerodynamic analysis of spacecraft requires the modeling of rarefied hypersonic flows. Such flow regimes are usually dominated by broad shock waves and strong expansion flows. In these regions of the flow, the gas is far from its equilibrium state, and therefore conventional modeling approaches such as the Euler or Navier-Stokes equations cannot be used. Instead, non-equilibrium modeling approaches must be applied. While most non-equilibrium flow solvers are computationally expensive, a recently introduced kinetic Fokker-Planck (FP) method shows the potential to describe non-equilibrium flows with satisfactory accuracy while significantly reducing computational costs. However, the application of kinetic FP solvers has so far been limited to simple, single-species gases.
The aim of this study is to extend the capabilities of the kinetic FP approach for describing complex gas flows. Particular attention is paid to the modeling of non-equilibrium aerodynamics, as it is relevant for describing spacecraft related gas flows.
Methods for describing polyatomic species as well as gas mixtures within the kinetic FP framework are constructed. All models are thoroughly validated by comparison with established numerical methods as well as with experimental studies.
Excited energy states are modeled by a stochastic jump process described by a master equation. This approach allows the description of both continuous and discrete energy levels. Gas mixtures are modeled based on the hard-sphere and variable hard-sphere collision potentials. For both cases, FP models are constructed for an arbitrary number of species. The efficiency of the described models is investigated and different strategies are proposed to use kinetic FP methods efficiently.
The expansion of synthetic air from an axially symmetric orifice is numerically reproduced using the developed models and the results are compared with experimental measurements. Although the numerical simulations span several orders of magnitude in Knudsen number, from the continuum flow in the reservoir up to the free-molecular far field, good agreement between simulation and experiment is observed.
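As a hedged sketch of the general framework (the specific drift and diffusion closures for polyatomic species and mixtures are the subject of this work): a kinetic Fokker-Planck model evolves the one-particle velocity distribution f(x, v, t) according to
\[ \frac{\partial f}{\partial t} + v_{i}\,\frac{\partial f}{\partial x_{i}} = -\frac{\partial}{\partial v_{i}}\bigl(A_{i} f\bigr) + \frac{1}{2}\,\frac{\partial^{2}}{\partial v_{i}\,\partial v_{j}}\bigl(D_{ij} f\bigr) , \]
where the drift vector A_i and diffusion tensor D_ij are chosen so that the moments of f relax toward equilibrium at the correct rates. Particles then follow continuous stochastic trajectories rather than undergoing individually resolved collisions, which is the source of the method's computational efficiency.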
Runway Safety Improvements Through a Data Driven Approach for Risk Flight Prediction and Simulation
Runway overrun is one of the most frequently occurring flight accident types threatening the safety of aviation. Sensors have been improved with recent technological advancements and allow data collection during flights. The recorded data helps to better identify the characteristics of runway overruns. The improved technological capabilities and the growing air traffic led to increased momentum for reducing flight risk using artificial intelligence. Discussions on incorporating artificial intelligence to enhance flight safety are timely and critical. Using artificial intelligence, we may be able to develop the tools we need to better identify runway overrun risk and increase awareness of runway overruns. This work seeks to increase attitude, skill, and knowledge (ASK) of runway overrun risks by predicting the flight states near touchdown and simulating the flight exposed to runway overrun precursors.
To achieve this, the methodology develops a prediction model and a simulation model. During flight training, the prediction model is used in flight to identify potential risks, and the simulation model is used post-flight to review the flight behavior. The prediction model identifies potential risks by predicting the flight parameters that best characterize the landing performance during the final approach phase. The predicted flight parameters are used to alert the pilots to any runway overrun precursors that may pose a threat. The predictions and alerts are made when thresholds of various flight parameters are exceeded. The flight simulation model simulates the final approach trajectory with an emphasis on capturing the effect wind has on the aircraft. The focus is on wind because wind is a relatively significant factor during the final approach, when the aircraft is typically already stabilized. The flight simulation is used to quickly assess the differences between flight patterns that have triggered overrun precursors and normal flights with no abnormalities. These differences are crucial in learning how to mitigate adverse flight conditions. Both models are built with neural networks. The main challenges of developing a neural network model are that the model design space is unique to each problem, so it cannot accommodate multiple problems, and that it can be very large depending on the depth of the model. Therefore, a hyperparameter optimization algorithm is investigated and used to design the data and model structures that best characterize the aircraft behavior during the final approach.
A series of experiments is performed to observe how the model accuracy changes with different data pre-processing methods for the prediction model and with different neural network models for the simulation model. The data pre-processing methods include indexing the data at different frequencies, using different window sizes, and clustering the data. The neural network models include simple Recurrent Neural Networks, Gated Recurrent Units, Long Short-Term Memory networks, and Neural Network Autoregressive with Exogenous Input models. Another series of experiments is performed to evaluate the robustness of these models to adverse wind and flare, because different wind conditions and flares represent controls that the models need to map to the predicted flight states. The most robust models are then used to identify significant features for the prediction model and the feasible control space for the simulation model. The outcomes of the most robust models are also mapped to the required landing distance metric so that the results of the prediction and simulation are easily interpreted. Then, the methodology is demonstrated with a sample flight exposed to an overrun precursor, high approach speed, to show how the models can potentially increase attitude, skill, and knowledge of runway overrun risk.
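As a purely illustrative sketch of the kind of recurrent predictor discussed above (hypothetical feature count, window length, and layer sizes, with PyTorch assumed as the framework; the actual data and model structures are those selected by the hyperparameter optimization described in this work):

    import torch
    import torch.nn as nn

    class TouchdownPredictor(nn.Module):
        # Maps a final-approach window of flight parameters to three
        # touchdown-state targets: airspeed, vertical speed, pitch angle.
        def __init__(self, n_features, hidden_size=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden_size, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden_size, 3)

        def forward(self, x):              # x: (batch, time, n_features)
            out, _ = self.lstm(x)          # out: (batch, time, hidden_size)
            return self.head(out[:, -1])   # predict from the last time step

    # Hypothetical usage: 16 flights, 60 time samples, 8 FOQA-style features.
    model = TouchdownPredictor(n_features=8)
    window = torch.randn(16, 60, 8)
    prediction = model(window)             # shape: (16, 3)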
The main contribution of this work is evaluating the accuracy and robustness of prediction and simulation models trained using Flight Operational Quality Assurance (FOQA) data. Unlike many studies that focused on optimizing only the model structures, this work optimized both the data and the model structures to ensure that the data capture the dynamics of the aircraft they represent. To achieve this, this work introduced a hybrid genetic algorithm that combines the benefits of conventional and quantum-inspired genetic algorithms to converge quickly to an optimal configuration while exploring the design space. With the optimized model, this work identified the data features from the final approach that contribute most to predicting airspeed, vertical speed, and pitch angle near touchdown. The top contributing features are altitude, angle of attack, core rpm, and airspeeds. For both the prediction and the simulation models, this study examines the impact of various data preprocessing methods on the accuracy of the two models. The results may help future studies identify the right data preprocessing methods for their work. Another contribution of this work is evaluating how flight control and wind affect both the prediction and the simulation models. This is achieved by mapping the model accuracy at various levels of control surface deflection, wind speed, and wind direction change. The results showed fairly consistent prediction and simulation accuracy at different levels of control surface deflection and wind conditions, indicating that neural network-based models are effective for building robust prediction and simulation models of aircraft during the final approach. The results also showed that data frequency has a significant impact on the prediction and simulation accuracy, so it is important to have sufficient data to train the models under the conditions in which they will be used. The final contribution of this work is demonstrating how the prediction and simulation models can be used to increase awareness of runway overrun. Ph.D
Neutrinos from horizon to sub-galactic scales
A first determination of the mass scale set by the lightest neutrino remains a crucial outstanding challenge for cosmology and particle physics, with profound implications for the history of the Universe and physics beyond the Standard Model. In this thesis, we present the results from three methodological papers and two applications that contribute to our understanding of the cosmic neutrino background.
First, we introduce a new method for the noise-suppressed evaluation of neutrino phase-space statistics. Its primary application is in cosmological N-body simulations, where it reduces the computational cost of simulating neutrinos by orders of magnitude without neglecting their nonlinear evolution. Second, using a recursive formulation of Lagrangian perturbation theory, we derive higher-order neutrino corrections and show that these can be used for the accurate and consistent initialisation of cosmological neutrino simulations. Third, we present a new code for the initialisation of neutrino particles, accounting both for relativistic effects and the full Boltzmann hierarchy. Taken together, these papers demonstrate that, with the combination of the methods described therein, we can accurately simulate the evolution of the neutrino background over 13.8 Gyr, from the linear and ultra-relativistic regime at early times down to the non-relativistic yet nonlinear regime today. Moreover, they show that the accuracy of large-scale structure predictions can be controlled at the sub-percent level needed for a neutrino mass determination.
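One way such noise suppression can be realised, sketched here under the assumption of a delta-f style estimator rather than as the papers' exact formulation: each simulation particle carries the full phase-space density f_i along its trajectory (conserved by Liouville's theorem), but statistics are accumulated only for the perturbation about the homogeneous Fermi-Dirac background \bar{f}, for example via particle weights
\[ w_{i} = 1 - \frac{\bar{f}(p_{i})}{f_{i}} . \]
Because the nearly uniform background is then handled analytically, its shot noise never enters the estimated neutrino density field.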
In a first application of these methods, we present a forecast for direct detection of the neutrino background, taking into account the gravitational enhancement (or indeed suppression) of the local density due to the Milky Way and the observed large-scale structure within 200 Mpc/h. We determine that the large-scale structure is more important than the Milky Way for neutrino masses below 0.1 eV, predict the orientation of the neutrino dipole, and study small-scale anisotropies. We predict that the angular distribution of neutrinos is anti-correlated with the projected matter density, due to the capture or deflection of neutrinos by massive objects along the line of sight.
Finally, we present the first results from a new suite of hydrodynamical simulations, which includes the largest ever simulation with neutrinos and galaxies. We study the extent to which variations in neutrino mass can be treated independently of astrophysical processes, such as feedback from supernovae and black holes. Our findings show that baryonic feedback is weakly dependent on neutrino mass, with feedback being stronger for models with larger neutrino masses. By studying individual dark matter halos, we attribute this effect to the increased baryon density relative to cold dark matter and a reduction in the binding energies of halos. We show that percent-level accurate modelling of the matter power spectrum in a cosmologically interesting parameter range is only possible if the cosmology-dependence of feedback is taken into account
Effect of dynamical screening in the Bethe-Salpeter framework: Excitons in crystalline naphthalene
Solving the Bethe-Salpeter equation (BSE) for the optical polarization functions is a first-principles means of modeling the optical properties of materials, including excitonic effects. One almost ubiquitously used approximation
neglects the frequency dependence of the screened electron-hole interaction.
This is commonly justified by the large difference in magnitude of electronic
plasma frequency and exciton binding energy. We incorporated dynamical effects
into the screening of the electron-hole interaction in the BSE using two
different approximations as well as exact diagonalization of the exciton
Hamiltonian. We compare these approaches for a naphthalene organic crystal, for which the exciton binding energy and the plasma frequency differ by only about a factor of ten. Our results show that, in this case, corrections due
to dynamical screening are about 15\,\% of the exciton binding energy. We
analyze the effect of screening dynamics on optical absorption across the
visible spectral range and use our data to establish an \emph{effective}
screening model as a computationally efficient approach to approximate
dynamical effects in complex materials in the future. Comment: 11 pages main text, 5 figures main text, 9 pages supplemental, 6 figures supplemental
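For context, in standard notation rather than quoting the paper: the usual static approximation replaces the dynamically screened electron-hole interaction by its zero-frequency value,
\[ W(\mathbf{r},\mathbf{r}',\omega) \approx W(\mathbf{r},\mathbf{r}',\omega = 0) , \]
which is justified when the exciton binding energy is small compared with the electronic plasma frequency, i.e. $E_b/\hbar\omega_{\rm pl} \ll 1$. For naphthalene this ratio is only about one tenth, which is why the dynamical corrections reported here reach roughly 15\,\% of the binding energy.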
Diagnostic Methods for the Characterization of a Helicon Plasma Thruster
Doctoral Programme in Fluid Mechanics of Universidad Carlos III de Madrid; Universidad de Jaén; Universidad de Zaragoza; Universidad Nacional de Educación a Distancia; Universidad Politécnica de Madrid; and Universidad Rovira i Virgili. Committee: President: José Javier Honrubia Checa; Secretary: José Miguel Reynolds Barredo; Member: Eduardo de la Ca