Passive radar parallel processing using General-Purpose computing on Graphics Processing Units
This paper presents an implementation of the signal-processing chain of a passive radar. The passive radar, developed at the Warsaw University of Technology, uses FM radio and DVB-T television transmitters as "illuminators of opportunity". Because the computational load of passive radar processing is very high, NVIDIA CUDA technology has been employed for an efficient parallel implementation. The paper describes the implementation of the algorithms and analyses the performance results.
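The abstract does not spell out the processing chain, but the computational core of FM/DVB-T passive radar is usually the cross-ambiguity function between the reference and surveillance channels, evaluated over many delay and Doppler cells. A minimal CPU sketch in NumPy (illustrative only, not the paper's code; a CUDA implementation would parallelise the Doppler loop and batch the FFTs):

```python
import numpy as np

def cross_ambiguity(ref, surv, max_delay, doppler_bins):
    """Magnitude of the cross-ambiguity surface between a reference
    channel `ref` and a surveillance channel `surv`.

    `doppler_bins` are Doppler shifts in cycles per record; the result
    has shape (len(doppler_bins), max_delay).  CPU sketch only -- the
    per-bin FFT correlations are what a GPU would run in parallel.
    """
    n = len(ref)
    t = np.arange(n)
    ref_f = np.fft.fft(ref).conj()            # hoisted out of the Doppler loop
    caf = np.empty((len(doppler_bins), max_delay))
    for i, f in enumerate(doppler_bins):
        # remove the trial Doppler shift, then correlate over delay via FFT
        shifted = surv * np.exp(-2j * np.pi * f * t / n)
        corr = np.fft.ifft(ref_f * np.fft.fft(shifted))
        caf[i] = np.abs(corr[:max_delay])
    return caf
```

Each Doppler bin costs one FFT and one inverse FFT of the full record, which is why the workload maps so naturally onto a GPU.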
WavePacket: A Matlab package for numerical quantum dynamics. II: Open quantum systems, optimal control, and model reduction
WavePacket is an open-source program package for numerical simulations in
quantum dynamics. It can solve time-independent or time-dependent linear
Schr\"odinger and Liouville-von Neumann equations in one or more dimensions.
Coupled equations can also be treated, which allows one, e.g., to simulate
molecular quantum dynamics beyond the Born-Oppenheimer approximation.
Optionally accounting for the interaction with external electric fields within
the semi-classical dipole approximation, WavePacket can be used to simulate
experiments involving tailored light pulses in photo-induced physics or
chemistry. Being highly versatile and offering visualization of quantum
dynamics 'on the fly', WavePacket is well suited for teaching or research
projects in atomic, molecular and optical physics as well as in physical or
theoretical chemistry. Building on the previous Part I which dealt with closed
quantum systems and discrete variable representations, the present Part II
focuses on the dynamics of open quantum systems, with Lindblad operators
modeling dissipation and dephasing. This part also describes the WavePacket
function for optimal control of quantum dynamics, building on rapid
monotonically convergent iteration methods. Furthermore, two different
approaches to dimension reduction implemented in WavePacket are documented
here. In the first one, a balancing transformation based on the concepts of
controllability and observability Gramians is used to identify states that are
neither well controllable nor well observable. Those states are either
truncated or averaged out. In the other approach, the H2-error for a given
reduced dimensionality is minimized by H2 optimal model reduction techniques,
utilizing a bilinear iterative rational Krylov algorithm.
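The Lindblad dynamics treated in Part II can be illustrated independently of WavePacket's MATLAB interface. A generic Python sketch (not the WavePacket API) integrates the master equation drho/dt = -i[H, rho] + sum_k g_k (L_k rho L_k^dag - 1/2 {L_k^dag L_k, rho}) with a Runge-Kutta step:

```python
import numpy as np

def lindblad_rhs(rho, H, Ls, gammas):
    """Right-hand side of the Lindblad master equation for density
    matrix rho, Hamiltonian H, Lindblad operators Ls with rates gammas."""
    out = -1j * (H @ rho - rho @ H)
    for L, g in zip(Ls, gammas):
        LdL = L.conj().T @ L
        out += g * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return out

def evolve(rho, H, Ls, gammas, dt, steps):
    """Fourth-order Runge-Kutta integration of the master equation."""
    for _ in range(steps):
        k1 = lindblad_rhs(rho, H, Ls, gammas)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, Ls, gammas)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, Ls, gammas)
        k4 = lindblad_rhs(rho + dt * k3, H, Ls, gammas)
        rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho
```

For a two-level system with H = 0 and a single lowering operator, the excited-state population decays as exp(-gamma t), which makes a convenient correctness check.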
Modeling and Simulation of Random Processes and Fields in Civil Engineering and Engineering Mechanics
This thesis covers several topics within computational modeling and simulation of problems arising in Civil Engineering and Applied Mechanics. There are two distinct parts. Part 1 covers work in modeling and analyzing heterogeneous materials using the eXtended Finite Element Method (XFEM) with arbitrarily shaped inclusions. A novel enrichment function, which can model arbitrarily shaped inclusions within the framework of XFEM, is proposed. The internal boundary of an arbitrarily shaped inclusion is first discretized, and a numerical enrichment function is constructed "on the fly" using spline interpolation. This thesis considers a piecewise cubic spline which is constructed from seven localized discrete boundary points. The enrichment function is then obtained by numerically solving a nonlinear equation for the distance from any point to the spline curve. Parametric convergence studies are carried out to show the accuracy of this approach, compared to pointwise and linear segmentation of points, for the construction of the enrichment function in the case of simple inclusions and arbitrarily shaped inclusions in linear elasticity.
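The distance-to-spline computation described above reduces to a one-dimensional nonlinear minimisation along the curve parameter. A SciPy sketch (illustrative only; the thesis constructs its spline from seven localized boundary points and solves its own nonlinear equation):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def distance_to_spline(point, boundary_pts):
    """Distance from `point` to a cubic spline interpolating the
    discretized inclusion boundary (open curve, uniform parameterization
    for simplicity)."""
    s = np.linspace(0.0, 1.0, len(boundary_pts))
    spline = CubicSpline(s, boundary_pts, axis=0)
    # the nonlinear problem: minimize squared distance along the parameter
    res = minimize_scalar(lambda u: float(np.sum((spline(u) - point) ** 2)),
                          bounds=(0.0, 1.0), method="bounded")
    return float(np.sqrt(res.fun))
```

An enrichment function can then be built from this distance, evaluated "on the fly" at quadrature points near the inclusion boundary.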
Moreover, the viability of this approach is illustrated on a Neo-Hookean hyperelastic material with a hole undergoing large deformation. In this case, the enrichment is able to adapt to the deformation and effectively capture the correct response without remeshing. Part 2 then moves on to research work in simulation of random processes and fields. Novel algorithms for simulating random processes and fields such as earthquakes, wind fields, and properties of functionally graded materials are discussed. Specifically, a methodology is presented to determine the Evolutionary Spectrum (ES) for non-stationary processes from a prescribed or measured non-stationary Auto-Correlation Function (ACF). Previously, it was not even known whether such an inversion exists, let alone how to compute or estimate it. The classic integral expression suggested by Priestley, providing the ACF from the ES, cannot be inverted uniquely to determine the ES from a given ACF. However, the benefits of an efficient inversion from ACF to ES are vast. Consider for example various problems involving simulation of non-stationary processes or non-homogeneous fields, including non-stationary seismic ground motions as well as non-homogeneous material properties such as those of functionally graded materials.
In such cases, it is sometimes more convenient to estimate the ACF from measured data, rather than the ES. However, efficient simulation depends on knowing the ES. Even more importantly, simulation of non-Gaussian and non-stationary processes depends on this inversion, when following a spectral representation based approach. This work first examines the existence and uniqueness of such an inversion from the ACF to the ES under a set of special conditions and assumptions (since such an inversion is clearly not unique in the most general form). It then moves on to efficient methodologies of computing the inverse, including some established optimization techniques, as well as proposing a novel methodology. Its application within the framework of translation models for simulation of non-Gaussian, non-stationary processes is developed and discussed. Numerical examples are provided demonstrating the capabilities of the methodology.
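In the stationary special case, the inversion discussed above is classical: by the Wiener-Khinchin theorem, the ACF and the power spectrum form a Fourier pair, so each determines the other uniquely. A small NumPy illustration (the thesis's contribution is the far harder non-stationary ACF-to-ES case, where Priestley's integral has no unique inverse without extra assumptions):

```python
import numpy as np

tau = np.arange(-64, 65)                       # lags
acf = np.exp(-0.1 * np.abs(tau))               # prescribed stationary ACF
# Wiener-Khinchin: the (discrete) power spectrum is the Fourier transform
# of the ACF, and the ACF is recovered exactly by the inverse transform.
spec = np.fft.fft(np.fft.ifftshift(acf)).real  # real-valued for an even ACF
acf_back = np.fft.fftshift(np.fft.ifft(spec).real)
```

No such round trip exists for non-stationary processes, which is precisely the gap the Evolutionary Spectrum inversion addresses.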
Additionally in Part 2, a methodology is presented for efficient and accurate simulation of wind velocities along long span structures at a virtually infinite number of points. Currently, the standard approach is to model wind velocities as a multivariate stochastic process, characterized by a Cross-Spectral Density Matrix (CSDM). In other words, the wind velocities are modeled as discrete components of a vector process. To simulate sample functions of the vector process, the Spectral Representation Method (SRM) is used. The SRM involves a Cholesky decomposition of the CSDM. However, it is a well-known issue that as the length of the structure, and consequently the size of the vector process, increases, this Cholesky decomposition breaks down (from the numerical point of view). To avoid this issue, current research efforts in the literature center on approximate techniques to simplify the decomposition.
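The Spectral Representation Method itself is simple in the univariate case; the Cholesky machinery (and its breakdown for long structures) enters only in the multivariate extension. A univariate sketch (illustrative; for an n-variate process, each frequency additionally carries a Cholesky factor of the n-by-n CSDM):

```python
import numpy as np

def srm_sample(S, omega, domega, t, rng):
    """One realization of a zero-mean stationary Gaussian process with
    two-sided power spectrum S(omega), sampled on the positive-frequency
    grid `omega` with spacing `domega`, by the Spectral Representation
    Method: X(t) = sum_k sqrt(2) * sqrt(2 S(w_k) dw) * cos(w_k t + phi_k)."""
    phi = rng.uniform(0.0, 2.0 * np.pi, size=len(omega))   # random phases
    amps = 2.0 * np.sqrt(S * domega)                       # sqrt(2)*sqrt(2 S dw)
    return (amps[:, None] * np.cos(np.outer(omega, t) + phi[:, None])).sum(axis=0)
```

The variance of the simulated process equals the integral of the two-sided spectrum, which provides a direct sanity check on any implementation.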
Alternatively, this thesis proposes the use of the frequency-wavenumber (F-K) spectrum to model the wind velocities as a stochastic "wave," continuous in both space and time. This allows the wind velocities to be modeled at a virtually infinite number of points along the length of the structure. In this work, the relationship between the CSDM and the F-K spectrum is first examined, as well as simulation techniques for both. The F-K spectrum for wind velocities is then derived. Numerical examples are then carried out demonstrating that the simulated wave samples exhibit the desired spectral and coherence characteristics. The efficiency of this method, specifically through the use of the Fast Fourier Transform, is demonstrated.
Focal-plane wavefront sensing with high-order adaptive optics systems
We investigate methods to calibrate the non-common-path aberrations in an
adaptive optics system whose wavefront-correcting device works at an
extremely high resolution (larger than 150x150). We use focal-plane images
collected successively, the corresponding phase-diversity information and
numerically efficient algorithms to calculate the required wavefront updates.
The wavefront correction is applied iteratively until the algorithms converge.
Different approaches are studied. In addition to the standard Gerchberg-Saxton
algorithm, we test an extension of the Fast & Furious algorithm that uses
three images and creates an estimate of the pupil amplitudes. We also test
recently proposed phase-retrieval methods based on convex optimisation. The
results indicate that in the framework we consider, the calibration task is
easiest with algorithms similar to the Fast & Furious.
Comment: 11 pages, 7 figures, published in SPIE proceedings
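The standard Gerchberg-Saxton algorithm mentioned above alternates between the pupil and focal planes, imposing the measured amplitude in each plane while keeping the current phase estimate. A minimal NumPy sketch (monochromatic, with Fraunhofer propagation modeled as a plain FFT; not the authors' code):

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, iters=100, seed=0):
    """Phase retrieval from known pupil-plane and focal-plane amplitudes.
    Each iteration propagates pupil -> focal (FFT), imposes the measured
    focal amplitude, propagates back (IFFT), and imposes the pupil
    amplitude; returns the estimated pupil-plane phase."""
    rng = np.random.default_rng(seed)
    field = pupil_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, pupil_amp.shape))
    for _ in range(iters):
        focal = np.fft.fft2(field)
        focal = focal_amp * np.exp(1j * np.angle(focal))  # keep phase, fix amplitude
        back = np.fft.ifft2(focal)
        field = pupil_amp * np.exp(1j * np.angle(back))   # same in the pupil plane
    return np.angle(field)
```

The iteration is error-reducing in the focal-plane amplitude mismatch, which is why it serves as the baseline against which the Fast & Furious extension and the convex phase-retrieval methods are compared.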
Compression and Conditional Emulation of Climate Model Output
Numerical climate model simulations run at high spatial and temporal
resolutions generate massive quantities of data. As our computing capabilities
continue to increase, storing all of the data is not sustainable, and thus it
is important to develop methods for representing the full datasets by smaller
compressed versions. We propose a statistical compression and decompression
algorithm based on storing a set of summary statistics as well as a statistical
model describing the conditional distribution of the full dataset given the
summary statistics. The statistical model can be used to generate realizations
representing the full dataset, along with characterizations of the
uncertainties in the generated data. Thus, the methods are capable of both
compression and conditional emulation of the climate models. Considerable
attention is paid to accurately modeling the original dataset--one year of
daily mean temperature data--particularly with regard to the inherent spatial
nonstationarity in global fields, and to determining the statistics to be
stored, so that the variation in the original data can be closely captured,
while allowing for fast decompression and conditional emulation on modest
computers.
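Under a joint Gaussian model, the "decompression" step described above is conditional simulation: with summaries stored as linear functionals s = Mx of a field x ~ N(0, Sigma), realizations are drawn from the conditional law of x given s. A dense, small-scale sketch (the paper's actual statistical model, summary statistics, and scale differ; all names here are illustrative):

```python
import numpy as np

def conditional_emulate(Sigma, M, s, n_draws, rng):
    """Sample the field x ~ N(0, Sigma) conditional on stored summary
    statistics s = M x -- Gaussian conditioning as a sketch of
    'decompression with uncertainty', not the paper's production code."""
    K = M @ Sigma @ M.T                       # covariance of the summaries
    gain = np.linalg.solve(K, M @ Sigma).T    # Sigma M^T K^{-1}
    mean = gain @ s                           # conditional mean
    cov = Sigma - gain @ M @ Sigma            # conditional covariance
    cov = 0.5 * (cov + cov.T)                 # symmetrize numerically
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(cov)))  # small jitter
    return mean[None, :] + rng.standard_normal((n_draws, len(cov))) @ L.T
```

Every emulated draw reproduces the stored summaries (up to the jitter) while varying wherever the data were not summarized, which is exactly the "characterization of uncertainty" the abstract describes.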