187 research outputs found

    Advanced interferometric techniques for high resolution bathymetry

    Current high-resolution sidescan and multibeam sonars produce very large data sets. However, conventional interferometry-based bathymetry algorithms underestimate the potential information of such soundings, generally because they use small baselines to avoid phase ambiguity. Moreover, these algorithms limit the triangulation capabilities of multibeam echosounders to the detection of one sample per beam, i.e., the zero-phase instant. In this paper we argue that the correlation between signals plays a very important role in the exploration of a remotely observed scene. In the case of multibeam sonars, capabilities can be improved by treating the interferometric signal as a continuous quantity. This allows many more useful soundings per beam to be considered and enriches understanding of the environment. To this end, continuous interferometric detection is compared here, from a statistical perspective, first with conventional interferometry-based algorithms and then with high-resolution methods such as the Multiple Signal Classification (MUSIC) algorithm. We demonstrate that a well-designed interferometry algorithm based on a coherence error model and an optimal array configuration permits a reduction in the number of beamforming operations (and therefore the computational cost) and an improvement in target detection (such as ship mooring cables or masts). A possible interferometry processing algorithm based on the complex correlation between received signals is tested on both sidescan sonars and multibeam echosounders and shows promising results for the detection of small in-water targets.
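    The core idea behind this kind of processing, recovering an arrival angle from the phase of the complex correlation between two receivers, can be illustrated with a toy simulation. This is a minimal sketch, not the paper's algorithm; the baseline, frequency, and noise level are illustrative assumptions.

```python
import numpy as np

# Toy two-receiver interferometer: the argument of the complex correlation
# between the receivers gives the interferometric phase, from which the
# arrival angle follows. All parameters below are illustrative.
rng = np.random.default_rng(0)

c = 1500.0                     # sound speed in water (m/s)
f = 100e3                      # sonar carrier frequency (Hz)
lam = c / f                    # wavelength (m)
d = 0.4 * lam                  # short baseline, inside the unambiguous range

theta_true = np.deg2rad(20.0)  # true arrival angle
n = 256                        # samples in the correlation window

# Narrowband returns: a common random envelope, with a geometric phase
# shift on the second receiver, plus independent receiver noise.
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phase = 2 * np.pi * d * np.sin(theta_true) / lam
r1 = s + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
r2 = s * np.exp(1j * phase) \
     + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Normalized complex correlation: its argument is the interferometric
# phase; its magnitude (the coherence) indicates how reliable it is.
corr = np.vdot(r1, r2) / np.sqrt(np.vdot(r1, r1).real * np.vdot(r2, r2).real)
theta_est = np.arcsin(np.angle(corr) / (2 * np.pi * d / lam))
coherence = np.abs(corr)
```

    The coherence magnitude plays the role the paper assigns to its error model: low coherence flags phase estimates, and hence soundings, that should be down-weighted or rejected.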

    Methods of Error Estimation for Delay Power Spectra in 21 cm Cosmology

    Precise measurements of the 21 cm power spectrum are crucial for understanding the physical processes of hydrogen reionization. Currently, this probe is being pursued by low-frequency radio interferometer arrays. As these experiments come closer to making a first detection of the signal, error estimation will play an increasingly important role in setting robust measurements. Using the delay power spectrum approach, we have produced a critical examination of different ways that one can estimate error bars on the power spectrum. We do this through a synthesis of analytic work, simulations of toy models, and tests on small amounts of real data. We find that, although computed independently, the different error bar methodologies are in good agreement with each other in the noise-dominated regime of the power spectrum. For our preferred methodology, the predicted probability distribution function is consistent with the empirical noise power distributions from both simulated and real data. This diagnosis is mainly in support of the forthcoming HERA upper limit, and is also expected to be more generally applicable.
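    The noise-dominated regime the abstract refers to can be illustrated with a toy model: for pure complex Gaussian noise, the power in each delay mode is exponentially distributed, so its standard deviation equals its mean, and the 1-sigma error bar is simply the mean noise power. This sketch is not the HERA analysis code; all parameters are illustrative.

```python
import numpy as np

# Toy check that, for noise-only data, the delay power spectrum in each
# mode is exponentially distributed (std == mean). Parameters illustrative.
rng = np.random.default_rng(1)
n_freq, n_real = 64, 20000
sigma = 1.0

# Complex Gaussian "visibility" noise; one delay transform per realization.
noise = (rng.standard_normal((n_real, n_freq)) +
         1j * rng.standard_normal((n_real, n_freq))) * sigma / np.sqrt(2)
delay_spec = np.fft.fft(noise, axis=1) / np.sqrt(n_freq)
power = np.abs(delay_spec) ** 2      # delay "power spectrum" of pure noise

p_mean = power.mean(axis=0)          # analytic expectation: sigma**2 per mode
p_std = power.std(axis=0)            # exponential distribution: std == mean
```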

    New Developments in Covariance Modeling and Coregionalization for the Study and Simulation of Natural Phenomena

    Geostatistics focuses on modeling natural phenomena with univariate or multivariate spatial random fields. Most applications rely on a stationary model to represent the studied phenomenon. It is now acknowledged that this model is not flexible enough to adequately represent a natural phenomenon whose behavior varies substantially in space (a simple example of such heterogeneity is the problem of estimating overburden thickness in the presence of outcrops). For the univariate case, a few non-stationary models have been developed recently. However, these models do not have compact support, which limits their range of application in practice. There is a definite need to enlarge the class of univariate non-stationary models, a first goal pursued by this thesis.
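    A minimal sketch of the kind of compactly supported stationary building block the thesis seeks non-stationary analogues of is the classical spherical covariance model, which vanishes exactly beyond its range. The grid and parameters below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def spherical_cov(h, sill=1.0, a=10.0):
    """Spherical covariance: sill*(1 - 1.5 h/a + 0.5 (h/a)^3) for h < a, else 0."""
    h = np.asarray(h, dtype=float)
    c = sill * (1 - 1.5 * h / a + 0.5 * (h / a) ** 3)
    return np.where(h < a, c, 0.0)

# Covariance matrix on a 1-D grid of sample locations.
x = np.linspace(0, 50, 101)
H = np.abs(x[:, None] - x[None, :])   # pairwise distances
C = spherical_cov(H)

# A valid covariance matrix must be symmetric positive semidefinite;
# the smallest eigenvalue should be (numerically) non-negative.
eigmin = np.linalg.eigvalsh(C).min()
```

    Compact support is what makes the covariance matrix sparse for well-separated data, which is the practical advantage the thesis wants to carry over to the non-stationary case.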

    Blind image deconvolution: nonstationary Bayesian approaches to restoring blurred photos

    High-quality digital images have become pervasive in modern scientific and everyday life, in areas from photography to astronomy, CCTV, microscopy, and medical imaging. However, there are always limits to the quality of these images due to uncertainty and imprecision in the measurement systems. Modern signal processing methods offer the promise of overcoming some of these problems by postprocessing blurred and noisy images. In this thesis, novel methods using nonstationary statistical models are developed for the removal of blurs from out-of-focus and other types of degraded photographic images. The work tackles the fundamental problem of blind image deconvolution (BID); its goal is to restore a sharp image from a blurred observation when the blur itself is completely unknown. This is a "doubly ill-posed" problem: an extreme lack of information must be countered by strong prior constraints on sensible types of solution. In this work, the hierarchical Bayesian methodology is used as a robust and versatile framework to impart the required prior knowledge. The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along with techniques and models for its solution. Observation models are developed, with an emphasis on photographic restoration, concluding with a discussion of how these are reduced to the common linear spatially-invariant (LSI) convolutional model. Classical methods for the solution of ill-posed problems are summarised to provide a foundation for the main theoretical ideas used under the Bayesian framework. This is followed by an in-depth review and discussion of the various prior image and blur models appearing in the literature, and then their application to solving the problem with both Bayesian and non-Bayesian techniques.
    The second part covers novel restoration methods, making use of the theory presented in Part I. Firstly, two new nonstationary image models are presented. The first models local variance in the image; the second extends this with locally adaptive noncausal autoregressive (AR) texture estimation and local mean components. These models allow for the recovery of image details, including edges and texture, whilst preserving smooth regions. Most existing methods do not model the boundary conditions correctly for deblurring of natural photographs, and a chapter is devoted to exploring Bayesian solutions to this topic. Due to the complexity of the models used and of the problem itself, many challenges must be overcome for tractable inference. Using the new models, three different inference strategies are investigated: first, the Bayesian maximum marginalised a posteriori (MMAP) method with deterministic optimisation; then variational Bayesian (VB) distribution approximation; and finally stochastic simulation of the posterior distribution using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective way to deal with a variety of different types of unknown blurs. Along the way, details are given of the numerical strategies developed to give accurate results and to accelerate performance. Finally, the thesis demonstrates state-of-the-art results in blind restoration of synthetic and real degraded images, such as recovering details in out-of-focus photographs.
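    The LSI observation model the thesis builds on, y = h * x + n, can be sketched in one dimension, together with a simple Wiener-filter restoration as a non-blind baseline. This is not the thesis's Gibbs-sampler method; the blur, noise level, and regularisation are illustrative assumptions.

```python
import numpy as np

# 1-D LSI observation model with circular boundary conditions:
# y = h * x + n, implemented via the FFT. All parameters illustrative.
rng = np.random.default_rng(2)
n = 128
x = np.zeros(n); x[40:60] = 1.0             # piecewise-constant "image"
h = np.zeros(n); h[:5] = 1.0 / 5.0          # out-of-focus-style box blur
sigma = 0.01

X, Hf = np.fft.fft(x), np.fft.fft(h)
y = np.fft.ifft(Hf * X).real + sigma * rng.standard_normal(n)

# Wiener restoration: invert the blur where it is strong, regularise
# where it is weak -- the ill-posedness the thesis addresses with priors.
snr = x.var() / sigma ** 2
Xhat = np.conj(Hf) * np.fft.fft(y) / (np.abs(Hf) ** 2 + 1.0 / snr)
x_hat = np.fft.ifft(Xhat).real
```

    In the blind setting the thesis treats, h itself is unknown, which is precisely why strong hierarchical priors and sampling-based inference are needed instead of this one-line inverse.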

    Study of communications data compression methods

    A simple monochrome conditional replenishment system was extended to higher compression and higher motion levels by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image that changes, since only the changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two-mode, variable-rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. The variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.
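    The basic conditional replenishment budget logic can be sketched in a few lines: only blocks whose change since the previous field exceeds a threshold are retransmitted, and if the changed fraction exceeds the channel budget the system is overloaded (forcing a higher threshold or a frame repeat). This toy is not the study's simulator; the block counts, threshold, and change pattern are illustrative.

```python
import numpy as np

# Toy conditional replenishment: send only blocks that changed more than
# a threshold; check whether the per-field budget would be exceeded.
rng = np.random.default_rng(3)
blocks = 256                  # blocks per field
budget = blocks // 4          # channel carries 1/4 of the image per field

prev = rng.random(blocks)
curr = prev.copy()
moving = rng.choice(blocks, size=40, replace=False)  # 40 blocks change
curr[moving] += 0.5

threshold = 0.1
changed = np.abs(curr - prev) > threshold
n_sent = int(changed.sum())
overloaded = n_sent > budget  # would force frame repeat / higher threshold
```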

    Development of a Group Dynamic Functional Connectivity Pipeline for Magnetoencephalography Data and its Application to the Human Face Processing Network

    Since its inception, functional neuroimaging has focused on identifying sources of neural activity. Recently, interest has turned to the analysis of connectivity between neural sources in dynamic brain networks. This new interest calls for the development of appropriate investigative techniques. A problem occurs in connectivity studies when the differing networks of individually analyzed subjects must be reconciled. One solution, the estimation of group models, has become common in fMRI, but is largely untried with electromagnetic data. Additionally, the assumption of stationarity has crept into the field, precluding the analysis of dynamic systems. Group extensions are applied to the sparse irMxNE localizer of MNE-Python. Spectral estimation requires individual source trials, and a multivariate multiple regression procedure is established to recover these from the irMxNE output. A program based on the Fieldtrip software is created to estimate conditional Granger causality spectra in the time-frequency domain from these trials. End-to-end simulations support the correctness of the pipeline with single and multiple subjects. Group-irMxNE makes no attempt to generalize a solution between subjects with clearly distinct patterns of source connectivity, but shows signs of doing so when subjects' patterns of activity are similar. The pipeline is applied to MEG data from the facial emotion protocol in an attempt to validate the Adolphs model. Both irMxNE and Group-irMxNE place numerous sources during post-stimulus periods of high evoked power but neglect those of low power. This identifies a conflict between power-based localizations and information-centric processing models. It is also noted that neural processing is more diffuse than the neatly specified Adolphs model indicates. Individual and group results generally support early processing in the occipital, parietal, and temporal regions, but later-stage frontal localizations are missing.
    The morphing of individual subjects' brain topology to a common source space is currently inoperable in MNE, so the MEG data are instead co-registered directly onto an average brain, at some loss of accuracy. For this reason, as well as for reasons related to uneven power and computational limitations, the early stages of the Adolphs model are only broadly validated. However, encouraging results indicate that genuinely non-stationary group connectivity estimates are produced.
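    The idea behind Granger causality, on which the pipeline's spectral estimates are built, can be shown with a minimal time-domain toy: a signal x "Granger-causes" y if the past of x improves the prediction of y beyond y's own past. The pipeline itself estimates conditional spectral Granger causality with Fieldtrip; this two-variable, time-domain version is only an illustrative sketch with made-up coefficients.

```python
import numpy as np

# Toy bivariate system: y is driven by the past of x, so x should
# "Granger-cause" y. Coefficients and noise levels are illustrative.
rng = np.random.default_rng(4)
n = 5000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def resid_var(target, regressors):
    """Least-squares residual variance of target on the given regressors."""
    A = np.column_stack(regressors)
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.var(target - A @ beta)

# Restricted model: y predicted from its own past only.
v_restricted = resid_var(y[1:], [y[:-1]])
# Full model: y predicted from its own past and the past of x.
v_full = resid_var(y[1:], [y[:-1], x[:-1]])

gc_x_to_y = np.log(v_restricted / v_full)  # > 0 means x helps predict y
```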

    Hierarchical Bayesian Fuzzy Clustering Approach for High Dimensional Linear Time-Series

    This paper develops a computational approach to improve fuzzy clustering and forecasting performance when dealing with endogeneity issues and misspecified dynamics in high-dimensional dynamic data. Hierarchical Bayesian methods are used to structure linear time variations, reduce dimensionality, and compute a distance function capturing the most probable set of clusters among univariate and multivariate time series. Nonlinearities involved in the procedure behave like permanent shifts and are replaced by coefficient changes. Monte Carlo implementations are also used to compute exact posterior probabilities for each chosen cluster and thereby minimize the growing probability of outliers that plagues traditional time-series clustering techniques. An empirical example highlights the strengths and limitations of the estimating procedure. Comparisons with related work are also discussed.
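    The fuzzy-clustering step itself can be sketched with plain fuzzy c-means on a simple per-series feature; the paper's hierarchical Bayesian distance function and posterior computations are not reproduced here, and the data, feature choice, and initialisation are illustrative assumptions.

```python
import numpy as np

# Toy fuzzy c-means on time-series features (here, just the series means).
rng = np.random.default_rng(5)

# Two groups of univariate time series with different levels.
series = np.vstack([rng.standard_normal((10, 50)) + 0.0,
                    rng.standard_normal((10, 50)) + 5.0])
feats = series.mean(axis=1, keepdims=True)   # one feature per series

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    """Standard fuzzy c-means; returns memberships U (n x c) and centers."""
    # Simple deterministic init: centers spread between the data extremes.
    centers = np.linspace(X.min(axis=0), X.max(axis=0), c)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1)) *
                   (1.0 / d ** (2 / (m - 1))).sum(axis=1, keepdims=True))
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centers

U, centers = fuzzy_cmeans(feats)
labels = U.argmax(axis=1)
```

    Unlike hard clustering, each series keeps a full membership vector, which is what lets posterior probabilities per cluster be meaningful in the paper's Bayesian extension.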

    Interferometric SAR imaging of ocean surface currents and wavefields

    This publication is a work of the U.S. Government as defined in Title 17, United States Code, Section 101. As such, it is in the public domain, and under the provisions of Title 17, United States Code, Section 105, may not be copyrighted.
    The potential of a method to remotely measure near-surface currents and dominant wave spectra using Interferometric Synthetic Aperture Radar (INSAR) is demonstrated. INSAR consists of a single conventional SAR augmented by an additional receiving antenna. The phase difference between corresponding SAR image scenes observed by the two antennas provides an interferogram directly proportional to the ocean surface velocity field. This direct motion detection by INSAR represents a significant advance over conventional SAR, where the response to the moving ocean surface is related only indirectly to complex modulation of the surface reflectivity by longer waves and currents. An experiment was conducted in Monterey Bay using an airborne INSAR to measure ocean surface currents and wave fields, compared with simultaneous ground-truth measurements from Lagrangian drifters and a wave array. INSAR-measured mean current magnitudes agree to within 10% of the conventional measurements. The INSAR image wavenumber spectrum is consistent with the in situ directional spectrum and with the output of a numerical refraction model. The wavelength of the observed swell is in better agreement (correlation above 0.9) than the wave direction. An attempt was made to estimate the scene coherence time for L-band SAR by taking advantage of the almost simultaneously acquired SAR and INSAR images. The obtained mean scene coherence time (100 msec) is consistent with the sparse estimates reported in the literature.
    http://archive.org/details/interferometrics1094535036
    Commander, Israeli Navy
    Approved for public release; distribution is unlimited
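    The along-track interferometry principle described here, interferogram phase proportional to radial surface velocity, can be sketched with a toy pair of co-registered complex image patches. The wavelength, time lag, noise level, and velocity below are illustrative assumptions, not values from the experiment.

```python
import numpy as np

# Toy along-track INSAR: the phase of the interferogram between the two
# antenna images is (4*pi/lambda) * v * tau for radial velocity v and
# effective antenna time lag tau. All parameters illustrative.
rng = np.random.default_rng(6)
lam = 0.24      # L-band wavelength (m)
tau = 0.01      # effective time lag between antenna observations (s)
v_true = 0.5    # radial surface velocity (m/s)

# Two co-registered complex patches: same speckle pattern, the second
# phase-shifted by the surface motion over the lag tau, plus noise.
n = 512
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = 4 * np.pi * v_true * tau / lam
img1 = s + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
img2 = s * np.exp(1j * phi) \
       + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Interferogram phase -> velocity estimate.
interf = np.vdot(img1, img2)               # sum of conj(img1) * img2
v_est = np.angle(interf) * lam / (4 * np.pi * tau)
```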