3,107 research outputs found
Utilizing Near-Field Measurements to Characterize Far-Field Radar Signatures
The increased need for stealth aircraft calls for an on-site Far-Field (FF) Radar Cross-Section (RCS) measurement process. Conducting these measurements in on-site Near-Field (NF) monostatic facilities results in significant savings for manufacturers and acquisition programs. However, NF measurements do not directly yield a FF RCS; therefore, a large-target Near-Field to Far-Field Transformation (NFFFT) is needed for RCS measurements. One approach requires an Inverse Synthetic Aperture Radar (ISAR) process to create accurate scattering maps. The focus of this work is the development of accurate NF scattering maps generated by a monostatic ISAR process. As a first look, the process is isolated to a simulated environment to avoid the uncontrollable effects of real measurement environments. The simulation begins with a NF Synthetic Target Generator (STG), which models a target as a set of scattering centers illuminated by spherical electromagnetic waves to approximate NF scattering. The resulting NF In-phase and Quadrature (IQ) data are used in a Trapezoidal ISAR process to create spatially distorted images that are then corrected, within the ISAR process resolution, using a newly developed NF correction. The resulting spatially accurate ISAR images do not complete the NFFFT; however, accurate scattering maps are essential for process development.
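The far-field limit of the image-formation step can be sketched in a few lines: under a plane-wave (FF) assumption, a frequency-by-aspect IQ matrix from point scatterers maps to a range/cross-range image via a 2D inverse FFT. This is only a toy stand-in for the pipeline above — the STG's spherical-wave NF illumination and the newly developed NF correction are exactly what it omits, and all names and parameter values here are illustrative assumptions:

```python
import numpy as np

def isar_image(iq):
    """Form a small-angle far-field ISAR image from a frequency x aspect IQ matrix.

    A Hanning window suppresses sidelobes; the 2D inverse FFT maps the
    (frequency, angle) phase history to (range, cross-range)."""
    win = np.hanning(iq.shape[0])[:, None] * np.hanning(iq.shape[1])[None, :]
    return np.fft.fftshift(np.abs(np.fft.ifft2(iq * win)))

c = 3e8
freqs = np.linspace(8e9, 12e9, 64)            # X-band frequency sweep
angles = np.deg2rad(np.linspace(-5, 5, 64))   # small aspect-angle sweep
scatterers = [(0.5, 0.2, 1.0), (-0.3, -0.4, 0.8)]  # (x, y, amplitude), metres

# Far-field analogue of the Synthetic Target Generator: each scatterer
# contributes a round-trip phase exp(-j*2k*r) under the plane-wave approximation.
k = 2 * np.pi * freqs / c
iq = np.zeros((len(freqs), len(angles)), dtype=complex)
for x, y, a in scatterers:
    r = x * np.cos(angles)[None, :] + y * np.sin(angles)[None, :]
    iq += a * np.exp(-1j * 2 * k[:, None] * r)

img = isar_image(iq)   # peaks near the scatterer positions (up to scaling)
```

In the near field the spherical wavefront makes the phase history range-dependent, which is what spatially distorts the image and motivates the NF correction developed in the work.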
On Moving Least Squares Based Flow Visualization
Modern simulation and measurement methods tend to produce meshfree data sets when modeling of processes or objects with free surfaces or boundaries is desired. In Computational Fluid Dynamics (CFD), such data sets are described by particle-based vector fields. This paper presents a summary of a selection of methods for the extraction of geometric features of such point-based vector fields while pointing out their challenges, limitations, and applications.
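As a concrete illustration of the moving least squares building block such methods rest on: an MLS fit with a linear basis and a Gaussian weight reconstructs a vector field at an arbitrary query point from scattered samples. This is a minimal hypothetical sketch (the function name, weight choice, and support scale h are assumptions, not from the paper):

```python
import numpy as np

def mls_vector(query, pts, vecs, h):
    """Moving least squares reconstruction of a 2D vector field at `query`
    from scattered samples `pts` (N, 2) with values `vecs` (N, 2).

    Linear basis [1, x, y]; Gaussian weight with support scale h."""
    d2 = np.sum((pts - query) ** 2, axis=1)
    w = np.exp(-d2 / h ** 2)                       # distance-based weights
    P = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    A = P.T @ (w[:, None] * P)                     # weighted moment matrix
    out = np.empty(2)
    for c in range(2):                             # fit each component separately
        coef = np.linalg.solve(A, P.T @ (w * vecs[:, c]))
        out[c] = coef @ [1.0, query[0], query[1]]
    return out

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (200, 2))
vecs = np.column_stack([-pts[:, 1], pts[:, 0]])    # rigid rotation field v = (-y, x)
v = mls_vector(np.array([0.3, 0.1]), pts, vecs, h=0.4)
```

Because the basis spans all linear fields, the rigid rotation above is reproduced exactly, which makes a convenient correctness check before applying the fit to feature extraction.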
Modelling the joint distribution of competing risks survival times using copula functions
The problem of modelling the joint distribution of survival times in a competing risks model using copula functions is considered. In order to evaluate this joint distribution and the related overall survival function, a system of non-linear differential equations is solved, which relates the crude and net survival functions of the modelled competing risks through the copula. A similar approach to modelling dependent multiple decrements was applied by Carriere (1994), who used a Gaussian copula applied to an incomplete double decrement model, which makes it difficult to calculate any actuarial functions and draw relevant conclusions. Here, we extend this methodology by studying the effect of complete and partial elimination of up to four competing risks on the overall survival function, the life expectancy, and life annuity values. We further investigate how different choices of the copula function affect the resulting joint distribution of survival times and, in particular, the actuarial functions which are of importance in pricing life insurance and annuity products. For illustrative purposes, we have used a real data set and applied extrapolation to prepare a complete multiple decrement model up to age 120. Extensive numerical results illustrate the sensitivity of the model with respect to the choice of copula and its parameter(s).
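The copula step can be illustrated directly: given net (marginal) survival functions S_1 and S_2, a copula C couples them into a joint survival probability. The sketch below uses a Clayton copula with exponential net survival functions purely for illustration; the paper's actual model solves a system of non-linear ODEs linking crude and net survival, which this omits:

```python
import math

def clayton(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

# Illustrative net survival functions: constant-hazard (exponential) competing risks
lam1, lam2 = 0.02, 0.05
S1 = lambda t: math.exp(-lam1 * t)
S2 = lambda t: math.exp(-lam2 * t)

t = 10.0
S_indep = S1(t) * S2(t)                    # joint survival under independence
S_dep = clayton(S1(t), S2(t), theta=2.0)   # joint survival under positive dependence
```

Positive dependence (theta > 0) raises the joint survival probability above the independence value S_1(t)·S_2(t); this is exactly the kind of sensitivity to the copula choice and its parameter(s) that the paper quantifies for life expectancy and annuity values.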
High Redshift Supernovae in the Hubble Deep Field
Two supernovae detected in the Hubble Deep Field using the original December 1995 epoch and data from a shorter (63,000 s in F814W) December 1997 visit with HST are discussed. The supernovae (SNe) are both associated with distinct galaxies at redshifts of 0.95 (spectroscopic) from Cohen et al. (1996) and 1.32 (photometric) from the work of Fernandez-Soto, Lanzetta, and Yahil (1998). These redshifts are near (in the case of 0.95) and well beyond (in the case of 1.32) the greatest distance reported previously for SNe. We show that our observations are sensitive to SNe to z < 1.8 in either epoch for an event near peak brightness. Detailed simulations are discussed that quantify the level at which false events from our search phase would start to arise, and the completeness of our search as a function of both SN brightness and host galaxy redshift. The number of Type Ia and Type II SNe expected as a function of redshift in the two HDF epochs is discussed in relation to several published predictions and our own detailed calculations. A mean detection frequency of one SN per epoch for the small HDF area is consistent with expectations from current theory. Comment: 62 pages, 17 figures, ApJ 1999 in press
Reconstruction of 3D Image for Particles By the Method of Angular Correlations from XFEL Data
The world’s first X-ray Free Electron Laser (XFEL), the Linac Coherent Light Source (LCLS) at the Stanford Linear Accelerator Center (SLAC), is now generating X-ray pulses of unprecedented brilliance (one billion times brighter than the most powerful existing sources) and with durations of only a few femtoseconds. The first such experiments are being performed on relatively large objects such as viruses, which produce low-resolution, low-noise diffraction patterns on the basis of the so-called “diffraction before destruction” principle. Despite the promise of using XFEL for the determination of the structures of viruses, the experimental data so far present difficulties for reconstructing 3D images of the viruses by our method. One of the rare instances in which images have been reconstructed from experimental data is the Mimivirus work of Hajdu et al. In the present paper, we examine the capabilities of a method based on the angular momentum decomposition of scattered intensities, which enables us to overcome common problems such as missing or imperfect data that are inevitable in experiments. This angular momentum decomposition method helps to avoid the effects of a finite beam size and existing gap size. In addition to the problem caused by the finite panels of the detectors used when the data are collected, the effects of noise, the curved Ewald sphere, shot-to-shot variations of incident X-ray pulse intensities, and shots hitting multiple nanoparticles are also studied.
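One ingredient of such correlation-based methods can be shown in isolation: the shot-averaged angular autocorrelation on a single detector ring, which is invariant under the random particle orientation from shot to shot. The toy below (a 2-fold symmetric ring intensity with random in-plane rotations; no Ewald-curvature, detector-gap, or noise handling) is an assumption-laden stand-in, not the paper's full angular momentum decomposition:

```python
import numpy as np

def ring_autocorrelation(shots):
    """Shot-averaged circular autocorrelation C(dphi) of ring intensities.

    Each row of `shots` is one shot's intensity I(phi) on a fixed ring; the
    circular autocorrelation is computed via the FFT correlation theorem."""
    acc = np.zeros(shots.shape[1])
    for I in shots:
        I = I - I.mean()                          # remove the isotropic part
        F = np.fft.fft(I)
        acc += np.fft.ifft(F * np.conj(F)).real / len(I)
    return acc / len(shots)

N = 360
phi = np.linspace(0, 2 * np.pi, N, endpoint=False)
rng = np.random.default_rng(0)
# 2-fold symmetric pattern with a random in-plane orientation per shot
shots = np.array([np.cos(2 * (phi - rng.uniform(0, 2 * np.pi)))
                  for _ in range(50)])
corr = ring_autocorrelation(shots)  # ~0.5*cos(2*dphi), independent of orientation
```

The orientation angle cancels out of the correlation, which is why such quantities survive averaging over many shots of randomly oriented particles.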
Cloning Dropouts: Implications for Galaxy Evolution at High Redshift
The evolution of high redshift galaxies in the two Hubble Deep Fields, HDF-N and HDF-S, is investigated using a cloning technique that replicates z ~ 2-3 U dropouts to higher redshifts, allowing a comparison with the observed B and V dropouts at higher redshifts (z ~ 4-5). We treat each galaxy selected for replication as a set of pixels that are k-corrected to higher redshift, accounting for resampling, shot noise, surface-brightness dimming, and the cosmological model. We find evidence for size evolution (a 1.7x increase) from z ~ 5 to z ~ 2.7 for flat geometries (Omega_M + Omega_LAMBDA = 1.0). Simple scaling laws for this cosmology predict that size evolution goes as (1+z)^{-1}, consistent with our result. The UV luminosity density shows a similar increase (1.85x) from z ~ 5 to z ~ 2.7, with minimal evolution in the distribution of intrinsic colors for the dropout population. In general, these results indicate less evolution than was previously reported, and therefore a higher luminosity density at z ~ 4-5 (~50% higher) than other estimates. We argue that the present technique is the preferred way to understand evolution across samples with differing selection functions, the most relevant differences here being the color cuts and surface brightness thresholds (e.g., due to the (1+z)^4 cosmic surface brightness dimming effect). Comment: 56 pages, 22 figures, accepted for publication in Ap
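The quoted scaling-law consistency is easy to verify numerically: under size ∝ (1+z)^{-1}, the predicted growth from z ~ 5 to z ~ 2.7 is about 1.6x, close to the reported 1.7x, while the relative (1+z)^4 surface-brightness dimming over the same interval is almost a factor of 7:

```python
z_lo, z_hi = 2.7, 5.0

# size ∝ (1+z)^-1: predicted growth factor from z_hi down to z_lo
size_growth = (1 + z_hi) / (1 + z_lo)           # ~1.62, vs the reported 1.7x

# relative cosmic surface-brightness dimming between the two redshifts
dimming_ratio = ((1 + z_hi) / (1 + z_lo)) ** 4  # ~6.9
```

The steep dimming ratio is why the surface-brightness thresholds of the selection function matter so much when comparing dropout samples across these redshifts.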
Radiative transfer with scattering for domain-decomposed 3D MHD simulations of cool stellar atmospheres
We present the implementation of a radiative transfer solver with coherent scattering in the new BIFROST code for radiative magneto-hydrodynamical (MHD) simulations of stellar surface convection. The code is fully parallelized using MPI domain decomposition, which allows for large grid sizes and improved resolution of hydrodynamical structures. We apply the code to simulate the surface granulation in a solar-type star, ignoring magnetic fields, and investigate the importance of coherent scattering for the atmospheric structure. A scattering term is added to the radiative transfer equation, requiring an iterative computation of the radiation field. We use a short-characteristics-based Gauss-Seidel acceleration scheme to compute radiative flux divergences for the energy equation. The effects of coherent scattering are tested by comparing the temperature stratification of three 3D time-dependent hydrodynamical atmosphere models of a solar-type star: without scattering, with continuum scattering only, and with both continuum and line scattering. We show that continuum scattering does not have a significant impact on the photospheric temperature structure for a star like the Sun. Including scattering in line blanketing, however, leads to a decrease in temperature of about 350 K at log tau < -4. The effect is opposite to that of 1D hydrostatic models in radiative equilibrium, where scattering reduces the cooling effect of strong LTE lines in the higher layers of the photosphere. Coherent line scattering also changes the temperature distribution in the high atmosphere, where we observe stronger fluctuations compared to a treatment of lines as true absorbers. Comment: A&A, in press
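The iterative computation referred to above can be reduced to its fixed-point core: with a coherent-scattering source function S = eps·B + (1 - eps)·J and a mean radiation field J = Λ[S], the solution follows from iterating on S. The toy below uses an arbitrary row-stochastic smoothing matrix as the Λ operator and plain Λ-iteration, not BIFROST's short-characteristics Gauss-Seidel acceleration; every quantity is illustrative:

```python
import numpy as np

n, eps = 50, 0.1                       # grid size, photon destruction probability
B = np.linspace(1.0, 2.0, n)           # Planck source term (arbitrary units)

# Toy 'Lambda operator': a row-normalized exponential smoothing kernel standing
# in for the formal solution J = Lambda[S] along short characteristics.
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Lam = np.exp(-d / 5.0)
Lam /= Lam.sum(axis=1, keepdims=True)

# Plain Lambda iteration on S = eps*B + (1 - eps)*Lambda[S]
S = B.copy()
for _ in range(1000):
    S_new = eps * B + (1 - eps) * (Lam @ S)
    if np.max(np.abs(S_new - S)) < 1e-12:
        S = S_new
        break
    S = S_new

residual = np.max(np.abs(S - (eps * B + (1 - eps) * (Lam @ S))))
```

Plain Λ-iteration converges ever more slowly as eps → 0 (strong scattering), which is why production solvers use acceleration schemes such as the Gauss-Seidel approach described in the abstract.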
Development and applications of the Finite Point Method to compressible aerodynamics problems
This work deals with the development and application of the Finite Point Method (FPM) to compressible aerodynamics problems. The research focuses mainly on investigating the capabilities of the meshless technique to address practical problems, one of the most outstanding issues in meshless methods.
The FPM spatial approximation is studied firstly, with emphasis on aspects of the methodology that can be improved to increase its robustness and accuracy. Suitable ranges for setting the relevant approximation parameters and the performance likely to be attained in practice are determined. An automatic procedure to adjust the approximation parameters is also proposed to simplify the application of the method, reducing problem- and user-dependence without affecting the flexibility of the meshless technique.
The discretization of the flow equations is carried out following well-established approaches, but drawing on the meshless character of the
methodology. In order to meet the requirements of practical applications, the procedures are designed and implemented placing emphasis on robustness and efficiency (a simplification of the basic FPM technique is proposed to this end). The flow solver is based on an upwind spatial discretization of the convective fluxes (using the approximate Riemann solver of Roe) and an explicit time integration scheme. Two additional artificial diffusion schemes are also proposed to suit those cases of study in which computational cost is a major concern. The performance of the flow solver is evaluated in order to determine the potential of the meshless approach. The accuracy, computational cost and parallel scalability of the method are studied in comparison with a conventional FEM-based technique.
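The upwind flavor of such a solver is easiest to see in a scalar stand-in: for linear advection the Roe flux reduces to simple upwinding, paired here with an explicit (forward Euler) time step on a periodic 1D grid. This is a finite-volume toy, not the meshless FPM discretization or the full Euler-equation Roe solver of the thesis:

```python
import numpy as np

def roe_flux_scalar(uL, uR, a):
    """Roe-type numerical flux for linear advection f(u) = a*u:
    central flux plus upwind dissipation 0.5*|a|*(uR - uL)."""
    return 0.5 * a * (uL + uR) - 0.5 * abs(a) * (uR - uL)

n, a, cfl = 200, 1.0, 0.5
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / abs(a)                 # explicit scheme: CFL-limited time step

u = np.exp(-200.0 * (x - 0.3) ** 2)    # initial Gaussian pulse
mass0 = u.sum() * dx                   # conserved quantity on a periodic grid

for _ in range(int(round(0.2 / dt))):  # advance to t = 0.2
    F = roe_flux_scalar(u, np.roll(u, -1), a)   # flux at each cell's right face
    u -= dt / dx * (F - np.roll(F, 1))          # conservative explicit update
```

After t = 0.2 the pulse has advected from x = 0.3 to x ≈ 0.5, somewhat broadened by the first-order upwind dissipation; the conservative update keeps the total mass unchanged to round-off.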
Finally, practical applications and extensions of the flow solution scheme are presented. The examples provided are intended not only to show the
capabilities of the FPM, but also to exploit meshless advantages. Automatic h-adaptive procedures, moving domain and fluid-structure interaction problems, as well as a preliminary approach to solving high-Reynolds-number viscous flows, are a sample of the topics explored.
All in all, the results obtained are satisfactorily accurate and competitive in terms of computational cost compared with a similar mesh-based implementation. This indicates that meshless advantages can be exploited with efficiency and constitutes a good starting point towards more challenging applications.
Use of Machine Learning for Automated Convergence of Numerical Iterative Schemes
Convergence of a numerical solution scheme occurs when a sequence of increasingly refined iterative solutions approaches a value consistent with the modeled phenomenon. Approximations using iterative schemes need to satisfy convergence criteria, such as reaching a specific error tolerance or number of iterations. The schemes often bypass the criteria or prematurely converge because of oscillations that may be inherent to the solution. Using a Support Vector Machine (SVM) machine-learning approach, an algorithm is designed to use the source data to train a model to predict convergence in the solution process and stop unnecessary iterations. The discretization of the Navier-Stokes (NS) equations for a transient local hemodynamics case requires determining a pressure correction term from a Poisson-like equation at every time step. The pressure correction solution must fully converge to avoid introducing a mass imbalance. Considering time, frequency, and time-frequency domain features of its residual’s behavior, the algorithm trains an SVM model to predict the convergence of the Poisson equation iterative solver so that the time-marching process can move forward efficiently and effectively. The fluid flow model integrates peripheral circulation using a lumped-parameter model (LPM) to capture the field pressures and flows across various circulatory compartments. Machine learning opens the door to an intelligent approach for iterative solutions by replacing prescribed criteria with an algorithm that uses the data set itself to predict convergence.
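A minimal version of the idea — classify a residual history as converging or stalled/oscillating from hand-crafted features — can be sketched with a tiny linear SVM trained by subgradient descent on the hinge loss. The two features (log-residual slope and a sign-change rate) and the synthetic residual histories are assumptions for illustration only; the paper uses richer time-, frequency-, and time-frequency-domain features:

```python
import numpy as np

def features(residuals):
    """Two time-domain features of a residual history: log10-residual slope
    and an oscillation score (sign-change rate of successive differences)."""
    r = np.asarray(residuals, dtype=float)
    slope = np.polyfit(np.arange(len(r)), np.log10(r + 1e-300), 1)[0]
    d = np.diff(r)
    osc = np.mean(np.signbit(d[1:]) != np.signbit(d[:-1]))
    return np.array([slope, osc])

def train_linear_svm(X, y, lr=0.05, lam=1e-3, epochs=2000):
    """Linear SVM via subgradient descent on the regularized hinge loss; y in {-1, +1}."""
    Xb = np.column_stack([X, np.ones(len(X))])   # append a bias feature
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        viol = y * (Xb @ w) < 1                  # margin violations
        grad = lam * w
        if viol.any():
            grad -= (y[viol, None] * Xb[viol]).sum(axis=0) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
X, y = [], []
k = np.arange(60)
for _ in range(40):
    conv = 0.8 ** k * np.abs(1 + 0.05 * rng.standard_normal(60))    # decaying residual
    stall = 0.1 * np.abs(1 + 0.4 * np.sin(0.9 * k)
                         + 0.05 * rng.standard_normal(60))          # oscillating residual
    X += [features(conv), features(stall)]
    y += [1, -1]
X, y = np.array(X), np.array(y)
X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize features

w = train_linear_svm(X, y)
pred = np.sign(np.column_stack([X, np.ones(len(X))]) @ w)
accuracy = (pred == y).mean()
```

On this cleanly separable synthetic set the classifier distinguishes the two residual behaviors; in the paper's setting the same prediction is used to stop the Poisson-solver iterations early without sacrificing mass balance.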