648 research outputs found

    Institute for Space Studies of Catalonia


    Optical calibration of large format adaptive mirrors

    Adaptive (or deformable) mirrors are widely used as wavefront correctors in adaptive optics systems. The optical calibration of an adaptive mirror is a fundamental step in its life-cycle: the process is in fact required to compute a set of known commands to operate the adaptive optics system, to compensate alignment and non-common-path aberrations, and to run chopped or field-stabilized acquisitions. In this work we present the sequence of operations for the optical calibration of adaptive mirrors, with a specific focus on large-aperture systems such as adaptive secondaries. Such systems will be one of the core components of the extremely large telescopes. Beyond presenting the optical procedures, we discuss in detail the actors, their functional requirements and their mutual interactions. A specific emphasis is put on automation, through a clear identification of inputs, outputs and quality indicators for each step: given the high degrees-of-freedom count (thousands of actuators), an automated approach is preferable to constrain the cost and schedule. In the end we present some algorithms for the evaluation of the measurement noise; this point is particularly important since the calibration setup is typically a large facility in an industrial environment, where the noise level may be a major show-stopper. Comment: 50 pages. Final report released for the project "Development and test of a new CGH-based technique with automated calibration for future large format Adaptive-Optics Mirrors", funded under the INAF - TecnoPRIN 2010. Published by INAF - Osservatorio Astrofisico di Arcetri. ISBN: 978-88-908876-1-
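The core calibration step described above, turning measured actuator responses into a set of known commands, can be sketched with a pseudo-inverse reconstructor. This is a minimal illustration, not the report's procedure: the dimensions and the random influence matrix standing in for real interferometric measurements are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 actuators, 32 wavefront samples per measurement.
n_act, n_meas = 8, 32

# Poke each actuator with a unit command and stack the measured wavefront
# responses as columns of the interaction matrix D (simulated here).
D = rng.standard_normal((n_meas, n_act))

# The command (reconstructor) matrix is the pseudo-inverse of D; with noisy
# data one would truncate the SVD to suppress poorly sensed modes.
R = np.linalg.pinv(D)

# Sanity check: commands are recovered from the wavefront they produce.
cmd_true = rng.standard_normal(n_act)
cmd_est = R @ (D @ cmd_true)
```

With thousands of actuators, as in adaptive secondaries, the same computation is dominated by the SVD cost, which is one reason the measurement sequence benefits from automation.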

    Domain decomposition methods for the parallel computation of reacting flows

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors, at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and preconditioned iterative methods of conjugate gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate approximately 10-fold speedup for it on 16 processors.
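The solver the abstract singles out, GMRES with an incomplete-LU preconditioner, can be sketched as follows. This is an illustrative stand-in under stated assumptions: a small 2-D Laplacian replaces the reacting-flow Jacobian, and SciPy's scalar ILU replaces a true block-ILU.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small 2-D Laplacian standing in for the sparse Jacobian at a Newton step.
n = 20
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# Incomplete-LU factorization wrapped as a preconditioning operator.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

# Preconditioned GMRES; info == 0 signals convergence.
x, info = spla.gmres(A, b, M=M)
```

In the parallel setting of the paper, the inner products inside GMRES are exactly the global reductions whose count the comparison tracks.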

    Space Time MUSIC: Consistent Signal Subspace Estimation for Wide-band Sensor Arrays

    Wide-band Direction of Arrival (DOA) estimation with sensor arrays is an essential task in sonar, radar, acoustics, biomedical and multimedia applications. Many state-of-the-art wide-band DOA estimators coherently process frequency-binned array outputs by approximate Maximum Likelihood, Weighted Subspace Fitting or focusing techniques. This paper shows that bin signals obtained by filter-bank approaches do not obey the finite-rank narrow-band array model, because spectral leakage and the change of the array response with frequency within the bin create \emph{ghost sources} dependent on the particular realization of the source process. Therefore, existing DOA estimators based on binning cannot claim consistency even with perfect knowledge of the array response. In this work, a more realistic array model with a finite length of the sensor impulse responses is assumed, which still has finite rank under a space-time formulation. It is shown that signal subspaces at arbitrary frequencies can be consistently recovered under mild conditions by applying MUSIC-type (ST-MUSIC) estimators to the dominant eigenvectors of the wide-band space-time sensor cross-correlation matrix. A novel Maximum Likelihood based ST-MUSIC subspace estimate is developed in order to recover consistency. The number of sources active at each frequency is estimated by Information Theoretic Criteria. The sample ST-MUSIC subspaces can be fed to any subspace fitting DOA estimator at single or multiple frequencies. Simulations confirm that the new technique clearly outperforms binning approaches at sufficiently high signal-to-noise ratio, when model mismatches exceed the noise floor. Comment: 15 pages, 10 figures. Accepted in a revised form by the IEEE Trans. on Signal Processing on 12 February 2018. @IEEE201
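The subspace idea underlying ST-MUSIC can be illustrated in its classical narrow-band form: scan a grid of steering vectors against the noise subspace of the array covariance and look for nulls. Everything below (array size, source angles, noise level) is a made-up example, not the paper's space-time construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Half-wavelength uniform linear array; two hypothetical far-field sources.
m, snapshots = 8, 200
doas_deg = np.array([-20.0, 25.0])

def steering(theta_rad):
    # m x len(theta) matrix of narrow-band array responses.
    return np.exp(1j * np.pi * np.arange(m)[:, None] * np.sin(theta_rad))

A = steering(np.deg2rad(doas_deg))
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.05 * (rng.standard_normal((m, snapshots))
                + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + noise

# Sample covariance and its noise subspace (source count assumed known).
R = X @ X.conj().T / snapshots
_, V = np.linalg.eigh(R)          # eigenvalues in ascending order
En = V[:, : m - 2]

# MUSIC pseudo-spectrum: peaks where steering vectors are orthogonal to En.
grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
spec = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2

# Pick the two highest local maxima as DOA estimates.
is_peak = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
peaks = np.where(is_peak)[0] + 1
top2 = peaks[np.argsort(spec[peaks])[-2:]]
doa_est = np.sort(np.rad2deg(grid[top2]))
```

ST-MUSIC applies the same null-search, but to the dominant eigenvectors of the space-time cross-correlation matrix, which restores a finite-rank model for wide-band signals.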

    Statistical and Graph-Based Signal Processing: Fundamental Results and Application to Cardiac Electrophysiology

    The goal of cardiac electrophysiology is to obtain information about the mechanism, function, and performance of the electrical activities of the heart, to identify deviations from the normal pattern, and to design treatments. By offering better insight into the comprehension and management of cardiac arrhythmias, signal processing can help the physician enhance treatment strategies, in particular in the case of atrial fibrillation (AF), a very common atrial arrhythmia associated with significant morbidities, such as increased risk of mortality, heart failure, and thromboembolic events. Catheter ablation of AF is a therapeutic technique which uses radiofrequency energy to destroy the atrial tissue involved in sustaining the arrhythmia, typically aiming at the electrical disconnection of the pulmonary vein triggers. However, the recurrence rate is still very high, showing that the very complex and heterogeneous nature of AF still represents a challenging problem. Leveraging the tools of non-stationary and statistical signal processing, the first part of our work has a twofold focus: firstly, we compare the performance of two different ablation technologies, based on contact-force sensing or remote magnetic control, using signal-based criteria as surrogates for lesion assessment. Furthermore, we investigate the role of ablation parameters in lesion formation using late-gadolinium-enhanced magnetic resonance imaging. Secondly, we hypothesize that in human atria the frequency content of the bipolar signal is directly related to the local conduction velocity (CV), a key parameter characterizing substrate abnormality and influencing atrial arrhythmias. Comparing the degree of spectral compression among signals recorded at different points of the endocardial surface in response to decreasing pacing rate, our experimental data demonstrate a significant correlation between CV and the corresponding spectral centroids.
However, the complex spatio-temporal propagation patterns characterizing AF spurred the need for new signal acquisition and processing methods. Multi-electrode catheters allow whole-chamber panoramic mapping of electrical activity but produce a volume of data which needs to be preprocessed and analyzed to provide clinically relevant support to the physician. Graph signal processing (GSP) has shown its potential in a variety of applications involving high-dimensional data on irregular domains and complex networks. Nevertheless, though state-of-the-art graph-based methods have been successful for many tasks, so far they predominantly ignore the time dimension of data. To address this shortcoming, in the second part of this dissertation we put forth a Time-Vertex Signal Processing Framework, as a particular case of multi-dimensional graph signal processing. Linking time-domain signal processing techniques with the tools of GSP, Time-Vertex Signal Processing facilitates the analysis of graph-structured data which also evolve in time. We motivate our framework by leveraging the notion of partial differential equations on graphs. We introduce joint operators, such as time-vertex localization, and we present a novel approach to significantly improve the accuracy of fast joint filtering. We also illustrate how to build time-vertex dictionaries, providing conditions for efficient invertibility and examples of constructions. The experimental results on a variety of datasets suggest that the proposed tools can bring significant benefits in various signal processing and learning tasks involving time series on graphs. We close the gap between the two parts by illustrating the application of graph and time-vertex signal processing to the challenging case of multi-channel intracardiac signals.
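A minimal sketch of the joint time-vertex idea described above: transform a vertex-time signal with the graph Fourier basis along vertices and the DFT along time, then apply a separable low-pass joint filter. The ring graph, cutoffs and random signal are illustrative assumptions, not the dissertation's constructions.

```python
import numpy as np

# Ring-graph Laplacian: the graph Fourier basis is its eigenvector matrix.
N, T = 16, 32
A = np.roll(np.eye(N), 1, axis=0) + np.roll(np.eye(N), -1, axis=0)
L = 2.0 * np.eye(N) - A
lam, U = np.linalg.eigh(L)

# Random time-vertex signal: one value per vertex per time step.
rng = np.random.default_rng(2)
X = rng.standard_normal((N, T))

# Joint Fourier transform: GFT across vertices, DFT across time.
Xh = np.fft.fft(U.T @ X, axis=1)

# Separable low-pass joint filter: keep smooth graph modes AND slow dynamics.
h_graph = (lam < 1.0).astype(float)[:, None]
h_time = (np.abs(np.fft.fftfreq(T)) < 0.25).astype(float)[None, :]
Yh = Xh * h_graph * h_time

# Back to the vertex-time domain (result is real up to round-off).
Y = U @ np.real(np.fft.ifft(Yh, axis=1))
```

For intracardiac mapping data, the vertices would be electrode sites and the filter a denoiser respecting both the catheter geometry and the signal dynamics.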

    A room acoustics measurement system using non-invasive microphone arrays

    This thesis summarises research into adaptive room correction for small rooms and pre-recorded material, for example music or films. A measurement system to predict the sound at a remote location within a room, without a microphone at that location, was investigated. This would allow the sound within a room to be adaptively manipulated to ensure that all listeners received optimum sound, thereby increasing their enjoyment. The solution presented used small microphone arrays mounted on the room's walls. A unique geometry and processing system was designed, incorporating three processing stages: temporal, spatial and spectral. The temporal processing identifies individual reflection arrival times from the recorded data. Spatial processing estimates the angles of arrival of the reflections so that the three-dimensional coordinates of each reflection's origin can be calculated. The spectral processing then estimates the frequency response of the reflection. These estimates allow a mathematical model of the room to be calculated, based on the acoustic measurements made in the actual room. The model can then be used to predict the sound at different locations within the room. A simulated model of a room was produced to allow fast development of algorithms. Measurements in real rooms were then conducted and analysed to verify the theoretical models developed and to aid further development of the system. Results from these measurements and simulations are presented for each processing stage.
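The temporal stage, identifying reflection arrival times, can be sketched with a matched-filter (cross-correlation) peak search. The sample rate, delays and reflection gain below are invented for illustration; the thesis's geometry and processing chain are considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Known wide-band excitation played into the room (white-noise burst here).
fs = 48_000
excitation = rng.standard_normal(1024)

# Synthetic microphone signal: a direct path plus one weaker reflection.
delay_direct, delay_refl = 100, 340            # in samples (illustrative)
mic = np.zeros(4096)
mic[delay_direct:delay_direct + 1024] += excitation
mic[delay_refl:delay_refl + 1024] += 0.4 * excitation

# Cross-correlate with the known excitation; strong peaks mark arrivals.
corr = np.correlate(mic, excitation, mode="valid")
arrival_samples = np.sort(np.argsort(corr)[-2:])
arrival_times = arrival_samples / fs           # in seconds
```

The recovered arrival-time differences, combined across the array's microphones, are what feed the spatial stage's angle-of-arrival estimates.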

    State-Space Approaches to Ultra-Wideband Doppler Processing

    National security needs dictate the development of new radar systems capable of identifying and tracking exoatmospheric threats to aid our defense. These new radar systems feature reduced noise floors, electronic beam steering, and ultra-wide bandwidths, all of which facilitate threat discrimination. However, in order to identify missile attributes such as RF reflectivity, distance, and velocity, many existing processing algorithms rely upon narrow-bandwidth assumptions that break down with increased signal bandwidth. We present a fresh investigation into these algorithms for removing bandwidth limitations and propose novel state-space and direct-data factoring formulations such as:

    * the multidimensional extension to the Eigensystem Realization Algorithm,
    * employing state-space models in place of interpolation to obtain a form which admits a separation and isolation of solution components,
    * and side-stepping the joint diagonalization of state transition matrices, which commonly plagues methods like multidimensional ESPRIT.

    We then benchmark our approaches and relate the outcomes to the Cramér-Rao bound for the case of one and two adjacent reflectors to validate their conceptual design and identify those techniques that compare favorably to or improve upon existing practices.
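The Eigensystem Realization Algorithm named above can be illustrated in its scalar, one-dimensional form: recover a system pole from impulse-response samples via an SVD of time-shifted Hankel matrices. The signal, a single noiseless damped complex exponential, is an illustrative assumption; the dissertation's multidimensional extension is far richer.

```python
import numpy as np

# Impulse response of a single damped discrete-time mode (illustrative pole).
n = 64
z_true = 0.95 * np.exp(1j * 0.3)
h = z_true ** np.arange(n)

# Two Hankel matrices built from the samples, offset by one time step.
rows = n // 2
H0 = np.array([h[i:i + rows] for i in range(rows)])
H1 = np.array([h[i + 1:i + 1 + rows] for i in range(rows)])

# Rank-1 truncated SVD of H0 yields a minimal realization; the eigenvalue of
# the 1x1 state-transition matrix is the identified pole.
U, s, Vt = np.linalg.svd(H0)
A = (U[:, :1].conj().T @ H1 @ Vt[:1].conj().T) / s[0]
z_est = A[0, 0]
```

With noise or multiple reflectors, the SVD is truncated at the estimated model order, and the state matrix becomes a small dense matrix whose eigenvalues give the poles.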

    Large Deviations and Importance Sampling for Systems of Slow-Fast Motion

    In this paper we develop the large deviations principle and a rigorous mathematical framework for asymptotically efficient importance sampling schemes for general, fully dependent systems of stochastic differential equations of slow and fast motion with small noise in the slow component. We assume periodicity with respect to the fast component. Depending on the interaction of the fast scale with the smallness of the noise, we get different behavior. We examine how one range of interaction differs from the other, both for the large deviations and for the importance sampling. We use the large deviations results to identify asymptotically optimal importance sampling schemes in each case. Standard Monte Carlo schemes perform poorly in the small noise limit. In the presence of multiscale aspects one faces additional difficulties, and a straightforward adaptation of importance sampling schemes for standard small-noise diffusions will not produce efficient schemes. It turns out that one has to consider the so-called cell problem from the homogenization theory for Hamilton-Jacobi-Bellman equations in order to guarantee asymptotic optimality. We use stochastic control arguments. Comment: More detailed proofs. Differences from the published version are editorial and typographica
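The basic mechanics of importance sampling for a small-noise diffusion can be sketched on a toy single-scale example: simulate dX = sqrt(eps) dW under an added constant drift and reweight each path with its Girsanov likelihood ratio. All parameters are illustrative assumptions; the paper's schemes handle fully coupled slow-fast systems via the cell problem, which this sketch omits entirely.

```python
import math
import numpy as np

rng = np.random.default_rng(4)

# Toy problem: estimate the rare probability P(X_1 > a) for X_0 = 0,
# dX = sqrt(eps) dW, by simulating under the constant drift u = a (the
# large-deviations minimizer for this toy case) and reweighting.
eps, a, T, steps, paths = 0.05, 1.0, 1.0, 200, 20_000
dt, u = T / steps, 1.0

# Euler increments of the driving Brownian motion under the tilted measure.
dw = math.sqrt(dt) * rng.standard_normal((paths, steps))
x_final = u * T + math.sqrt(eps) * dw.sum(axis=1)

# Girsanov log-likelihood ratio dP/dQ for the drift change u / sqrt(eps).
log_lr = (-(u / math.sqrt(eps)) * dw.sum(axis=1)
          - 0.5 * (u ** 2 / eps) * T)

p_hat = np.mean(np.exp(log_lr) * (x_final > a))
```

A plain Monte Carlo estimate with the same budget would see essentially no hits of the event, which is the poor small-noise behavior the paper's asymptotically optimal schemes are designed to avoid.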