
    Passive radar parallel processing using General-Purpose computing on Graphics Processing Units

    In this paper, an implementation of the signal processing chain for a passive radar is presented. The passive radar, developed at the Warsaw University of Technology, uses FM radio and DVB-T television transmitters as "illuminators of opportunity". As the computational load associated with passive radar processing is very high, NVIDIA CUDA technology has been employed for an efficient parallel implementation. The paper describes the implementation of the algorithms and analyses the performance results.
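The abstract does not detail the processing chain, but the computational core of FM/DVB-T passive radar is typically the cross-ambiguity function between the reference channel (direct signal from the transmitter) and the surveillance channel, evaluated over many delay-Doppler cells; that embarrassingly parallel structure is what makes GPU implementation attractive. A minimal NumPy sketch under that assumption (function and parameter names are illustrative, not taken from the paper; a CUDA version would map the delay/Doppler loops onto GPU threads):

```python
import numpy as np

def cross_ambiguity(ref, surv, max_delay, doppler_bins, fs):
    """Magnitude of the cross-ambiguity surface |CAF(delay, Doppler)|
    between the reference and surveillance channels, sampled at fs.
    Each (Doppler, delay) cell is independent, so a GPU version can
    assign one thread (or thread block) per cell."""
    n = len(ref)
    t = np.arange(n) / fs
    caf = np.zeros((len(doppler_bins), max_delay), dtype=complex)
    for i, fd in enumerate(doppler_bins):
        # Remove the candidate Doppler shift, then correlate over delay.
        shifted = surv * np.exp(-2j * np.pi * fd * t)
        for d in range(max_delay):
            caf[i, d] = np.vdot(ref[:n - d], shifted[d:])
    return np.abs(caf)
```

A target echo then shows up as a peak at its bistatic delay and Doppler shift; in practice the delay correlations are computed with FFT-based convolution rather than the explicit loop shown here.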

    WavePacket: A Matlab package for numerical quantum dynamics. II: Open quantum systems, optimal control, and model reduction

    WavePacket is an open-source program package for numerical simulations in quantum dynamics. It can solve time-independent or time-dependent linear Schrödinger and Liouville-von Neumann equations in one or more dimensions. Coupled equations can also be treated, which allows, e.g., simulation of molecular quantum dynamics beyond the Born-Oppenheimer approximation. Optionally accounting for the interaction with external electric fields within the semi-classical dipole approximation, WavePacket can be used to simulate experiments involving tailored light pulses in photo-induced physics or chemistry. Being highly versatile and offering visualization of quantum dynamics 'on the fly', WavePacket is well suited for teaching or research projects in atomic, molecular and optical physics as well as in physical or theoretical chemistry. Building on the previous Part I, which dealt with closed quantum systems and discrete variable representations, the present Part II focuses on the dynamics of open quantum systems, with Lindblad operators modeling dissipation and dephasing. This part also describes the WavePacket function for optimal control of quantum dynamics, building on rapid monotonically convergent iteration methods. Furthermore, two different approaches to dimension reduction implemented in WavePacket are documented here. In the first, a balancing transformation based on the concepts of controllability and observability Gramians is used to identify states that are neither well controllable nor well observable; those states are either truncated or averaged out. In the other approach, the H2 error for a given reduced dimensionality is minimized by H2-optimal model reduction techniques, utilizing a bilinear iterative rational Krylov algorithm.
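The Lindblad form of dissipation and dephasing mentioned in the abstract is standard: drho/dt = -i[H, rho] + sum_k (L_k rho L_k^+ - {L_k^+ L_k, rho}/2). As a rough illustration (a Python sketch, not the WavePacket/Matlab API), here is that right-hand side applied to a two-level system with pure dephasing:

```python
import numpy as np

def lindblad_rhs(rho, H, lindblad_ops):
    """Right-hand side of the Lindblad master equation
    drho/dt = -i[H, rho] + sum_k (L rho L^+ - {L^+ L, rho}/2)."""
    drho = -1j * (H @ rho - rho @ H)
    for L in lindblad_ops:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return drho

# Two-level system with pure dephasing: H = sigma_z / 2, L = sqrt(gamma) sigma_z.
# The coherence rho_01 decays as exp(-2 * gamma * t); the trace is preserved.
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
H = 0.5 * sigma_z
L = np.sqrt(0.1) * sigma_z                       # gamma = 0.1
rho = 0.5 * np.ones((2, 2), dtype=complex)       # pure state |+><+|
dt = 0.001
for _ in range(5000):                            # plain Euler stepping; a real
    rho = rho + dt * lindblad_rhs(rho, H, [L])   # code would use an ODE solver
```

For this choice of L the populations and the trace stay fixed while the off-diagonal coherence decays at rate 2*gamma, which is exactly the dissipation/dephasing behaviour the package models with Lindblad operators.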

    Focal-plane wavefront sensing with high-order adaptive optics systems

    We investigate methods to calibrate the non-common path aberrations of an adaptive optics system having a wavefront-correcting device working at an extremely high resolution (larger than 150x150). We use focal-plane images collected successively, the corresponding phase-diversity information, and numerically efficient algorithms to calculate the required wavefront updates. The wavefront correction is applied iteratively until the algorithms converge. Different approaches are studied. In addition to the standard Gerchberg-Saxton algorithm, we test the extension of the Fast & Furious algorithm that uses three images and creates an estimate of the pupil amplitudes. We also test recently proposed phase-retrieval methods based on convex optimisation. The results indicate that, in the framework we consider, the calibration task is easiest with algorithms similar to the Fast & Furious. Comment: 11 pages, 7 figures, published in SPIE proceedings.
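The Gerchberg-Saxton baseline named in the abstract alternates between the pupil and focal planes, imposing the known amplitude in each plane while keeping the current phase estimate. A minimal sketch of that classic algorithm (illustrative only; the paper's high-order AO setting adds phase diversity and the Fast & Furious extensions on top of this):

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, n_iter=200, seed=0):
    """Classic Gerchberg-Saxton phase retrieval: alternate between the
    pupil plane and the focal plane (related by a 2-D Fourier transform),
    replacing the amplitude in each plane with the measured one while
    keeping the phase of the current estimate."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)
    field = pupil_amp * np.exp(1j * phase)
    for _ in range(n_iter):
        focal = np.fft.fft2(field)
        focal = focal_amp * np.exp(1j * np.angle(focal))   # impose focal amplitude
        field = np.fft.ifft2(focal)
        field = pupil_amp * np.exp(1j * np.angle(field))   # impose pupil amplitude
    return np.angle(field)
```

The focal-plane amplitude residual is non-increasing from one iteration to the next, but the iteration can stagnate and the recovered phase carries the usual twin-image ambiguity, which is one reason the paper compares it against phase-diversity-based alternatives.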

    Compression and Conditional Emulation of Climate Model Output

    Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus it is important to develop methods for representing the full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. The statistical model can be used to generate realizations representing the full dataset, along with characterizations of the uncertainties in the generated data. Thus, the methods are capable of both compression and conditional emulation of the climate models. Considerable attention is paid to accurately modeling the original dataset (one year of daily mean temperature data), particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured while allowing for fast decompression and conditional emulation on modest computers.
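The stored-summaries-plus-conditional-model idea can be illustrated with a toy Gaussian example: keep only the summaries s = A x (e.g. coarse block averages of the field) and emulate the full field by drawing from the conditional distribution of x given s. A hedged sketch, assuming a simple known covariance K rather than the paper's nonstationary spatial model (all names are illustrative):

```python
import numpy as np

def conditional_emulator(K, A, s, rng):
    """Draw one realization of x ~ N(0, K) conditioned on stored
    summary statistics s = A @ x (a toy Gaussian stand-in for the
    paper's conditional-emulation idea)."""
    S = A @ K @ A.T                       # covariance of the summaries
    W = np.linalg.solve(S, A @ K).T       # gain K A^T S^{-1}
    mean = W @ s                          # conditional mean given s
    cov = K - W @ A @ K                   # conditional covariance
    cov = 0.5 * (cov + cov.T)             # symmetrize for numerical stability
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(K)))
    return mean + L @ rng.standard_normal(len(K))
```

Only s and the model parameters need to be stored; every conditional draw reproduces the stored summaries exactly (A @ x_draw = s) while the spread of repeated draws characterizes the uncertainty introduced by the compression.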