64 research outputs found

    Mapping and modelling the spatial variation in strain accumulation along the North Anatolian Fault

    Since 1900, earthquakes worldwide have been responsible for over 2 million fatalities and caused nearly $2 trillion in economic damage. Accurate assessment of earthquake hazard is therefore critical for nations in seismically active regions. For a complete understanding of seismic hazard, the temporal pattern of strain accumulation, which will eventually be released in earthquakes, needs to be understood. But earthquakes typically occur every few hundred to few thousand years on any individual fault, and our observations of deformation usually cover time periods of only a decade or less. For this reason, our knowledge of the temporal variation in strain accumulation rate is limited to insights gleaned from kinematic models of the earthquake cycle that use measurements of present-day strain to infer behaviour on long time scales. Previous studies have attempted to address this issue by combining data from multiple faults with geological estimates of long-term strain rates. In this thesis I propose a different approach, which is to observe deformation at multiple stages of the earthquake cycle for a single fault with segments that have failed at different times. In the last century the North Anatolian Fault (NAF) in Turkey has accommodated 12 large earthquakes (Mw > 6.5) with a dominant westward progression in seismicity. If we assume that each of these fault segments is at a different stage of the earthquake cycle, this provides a unique opportunity to study the variation in along-strike surface deformation, which can be equated to the variation of deformation in time. In this thesis I use Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite System (GNSS) observations to examine the spatial distribution of strain along the NAF. InSAR is an attractive technique because it measures surface displacements at much higher spatial resolution (a measurement every 30 m) than established GNSS networks, whose station separations in Turkey range from 10 km to 100 km. I specifically address a key technical challenge that limits the wide uptake of InSAR: phase unwrapping, the process of recovering continuous phase values from phase data that are measured modulo 2π radians. I develop a new unwrapping procedure for small baseline InSAR measurements that unwraps the InSAR phase iteratively. In each iteration, the method identifies pixels that were unwrapped correctly in the previous iteration and applies a high cost to changing the phase difference between these pixels in the next iteration. In this way, the iterative unwrapping method uses the error-free pixels as a guide to unwrap the regions that contained unwrapping errors in previous iterations. I combine measurements of InSAR line-of-sight displacements with published GNSS velocities to show that an ∼80 km section of the NAF that ruptured in the 1999 Izmit earthquake (Mw 7.4) is creeping at a steady rate of ∼5 mm/yr, with a maximum rate of 11 ± 2 mm/yr near the city of Izmit, within the observation period 2002-2010. I show that in terms of the moment budget and seismic hazard, the effect of the shallow, aseismic slip in the past decade is small compared to that from plate loading. Projecting the shallow creep displacement rates late into the earthquake cycle does not produce enough slip to account for the 2-3 m shallow coseismic slip deficit observed in the Izmit earthquake.
Therefore, distributed inelastic deformation in the uppermost few kilometers of the crust or slip transients during the interseismic period are likely to be important mechanisms for generating the shallow slip deficit. I use similar techniques to confirm that a ∼130 km section of the central NAF near the town of Ismetpasa is also undergoing aseismic creep at a steady rate of 8 ± 2 mm/yr. Using simple elastic dislocation models to fit fault-perpendicular velocities, I show that the fault slip rate in this region decreases eastward from ∼32 mm/yr to ∼21 mm/yr over a distance of about 200 km. The cause of this decrease remains unclear, but it could be due to postseismic effects from the 1999 Izmit and Duzce earthquakes and/or long-term influence from the 1943 (Mw 7.4) and 1944 (Mw 7.5) earthquakes. Finally, I combine line-of-sight displacements from 23 InSAR tracks to produce the first high-resolution horizontal velocity field for the entire continental expression of the NAF (∼1000 km). I show that the strain rate does not vary significantly along the fault, and since each segment of the NAF is at a different stage of the earthquake cycle, the strain rate is invariant with respect to the time since the last earthquake. This observation is inconsistent with viscoelastic coupling models of the earthquake cycle, which predict a decreasing strain rate with time after an earthquake. My observations imply that strain accumulation reaches a steady state fairly rapidly after an earthquake (<7-10 years), after which strain is localised on a narrow shear zone centred on the fault and does not vary with time. A time-invariant strain rate is consistent with a strong lower crust in the region away from the fault, with a viscosity ≥ 10²⁰ Pa s. My results imply that short-term snapshots of the present-day strain accumulation (as long as they are taken after the postseismic period) are representative of the entire earthquake cycle; geodetic estimates of the strain rate can therefore be used to estimate the total strain accumulation since the last earthquake on a fault, and serve as a proxy for future seismic hazard assessment. The techniques I developed to explore the spatial and temporal pattern of aseismic fault creep and long-term strain accumulation along the NAF are general and can be applied to strike-slip faults globally. The archived ERS-1/2 and Envisat satellite data are an extremely valuable resource that can and should be used to extend InSAR time series measurements back to the early 1990s. Together with the new Sentinel-1 data sets, this provides an unprecedented opportunity to explore tectonic deformation over several decades and on continental scales. Despite the availability of numerous correction techniques (in this thesis I use global weather models to calculate the atmospheric contribution), atmospheric delays remain the major challenge to exploiting Sentinel-1 data for global strain mapping; mitigating these delays is an important goal for the InSAR community.
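To make the phase-unwrapping problem concrete, the sketch below shows the basic 1-D version of the task the abstract describes: recovering a continuous phase from values measured modulo 2π by restoring the missing multiples of 2π. This is only the textbook building block (essentially what numpy.unwrap does), not the thesis's iterative 2-D small-baseline procedure.

```python
import numpy as np

def unwrap_1d(wrapped):
    """Add the multiple of 2*pi to each sample that keeps successive
    phase differences within (-pi, pi]."""
    jumps = np.diff(wrapped)
    corrections = -2 * np.pi * np.round(jumps / (2 * np.pi))
    return wrapped + np.concatenate([[0.0], np.cumsum(corrections)])

true_phase = np.linspace(0.0, 20.0, 200)            # smooth, spans > 2*pi
wrapped = np.angle(np.exp(1j * true_phase))         # measured modulo 2*pi
print(np.allclose(unwrap_1d(wrapped), true_phase))  # True
```

The "simple elastic dislocation models" used to fit fault-perpendicular velocities can likewise be illustrated with the classic screw-dislocation (arctangent) profile for interseismic deformation. The sketch below fits that profile to a synthetic velocity transect; the data, noise level and parameter values are arbitrary stand-ins, not the thesis's actual inversion.

```python
import numpy as np
from scipy.optimize import curve_fit

def interseismic_velocity(x_km, slip_rate, locking_depth):
    """Arctangent model: fault-parallel velocity at fault-perpendicular
    distance x_km for a fault slipping at slip_rate (mm/yr) below a
    locking depth D (km): v(x) = (s/pi) * arctan(x/D)."""
    return (slip_rate / np.pi) * np.arctan(x_km / locking_depth)

# Synthetic "observed" profile: 24 mm/yr slip rate, 15 km locking depth,
# plus 0.5 mm/yr Gaussian noise (all values assumed for illustration)
rng = np.random.default_rng(0)
x_obs = np.linspace(-100.0, 100.0, 41)   # km from the fault trace
v_obs = interseismic_velocity(x_obs, 24.0, 15.0) + rng.normal(0.0, 0.5, x_obs.size)

# Invert the noisy profile for slip rate and locking depth
popt, _ = curve_fit(interseismic_velocity, x_obs, v_obs, p0=[20.0, 10.0])
print(f"slip rate = {popt[0]:.1f} mm/yr, locking depth = {popt[1]:.1f} km")
```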

    Dynamic aspects of DNA


    Phase extraction of non-stationary signals produced in dynamic interferometry involving speckle waves

    It is now widely acknowledged, among communities of researchers and engineers from very different backgrounds, that speckle interferometry (SI) offers powerful techniques for characterizing rough mechanical surfaces with submicron accuracy in the static or quasi-static regime, when small displacements are involved (typically several microns or tens of microns). The issue of dynamic regimes with possibly large deformations (typically several hundreds of microns) is still topical and prevents an even more widespread use of speckle techniques. This is essentially due to the lack of efficient processing schemes able to cope with non-stationary AM-FM interferometric signals. In addition, decorrelation-induced phase errors are a hindrance to accurate measurement when such large displacements and classical fringe analysis techniques are considered. This work is an attempt to address those issues and to make the most of speckle interferometry signals. Our answers to those problems operate on two different levels. First of all, we adopt the temporal analysis approach, i.e. the analysis of the temporal signal of each pixel of the sensor area used to record the interferograms. A return to the basics of phase extraction is carried out to properly identify the conditions under which the computed phase is meaningful and thus gives some insight into the physical phenomenon under analysis. Because of their intrinsically non-stationary nature, a preprocessing tool is needed to put the SI temporal signals into a shape that ensures an accurate phase computation, whichever technique is chosen. This is where Empirical Mode Decomposition (EMD) intervenes. This technique, in some respects equivalent to adaptive filtering, has been studied and tailored to fit our expectations. The EMD has shown a great ability to efficiently remove the randomly fluctuating background intensity and to evaluate the modulation intensity. The Hilbert transform (HT) is the natural quadrature operator. Its use to build an analytic signal from the detrended SI signal, for subsequent phase computation, has been studied and assessed. Other phase extraction techniques have been considered as well for comparison purposes. Finally, our answer to the decorrelation-induced phase error relies on the well-known result that the higher the pixel modulation intensity, the lower the random phase error. We took advantage of this result, which is linked not only to basic SNR considerations but more specifically to the intrinsic phase structure of speckle fields, with a novel approach. The regions within the pixel signal history classified as unreliable because they are under-modulated are simply discarded. An interpolation step based on Delaunay triangulation is then carried out on the resulting non-uniformly sampled phase maps to recover a smooth phase that relies on the most reliable available data. Our schemes have been tested and discussed with simulated and experimental SI signals. We have ultimately developed a versatile, accurate and efficient phase extraction procedure, able to tackle the challenge of characterizing dynamic behaviour, even for displacements and/or deformations beyond the classical limit of the correlation dimensions.
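    As a rough illustration of this temporal-analysis pipeline, the sketch below detrends a simulated single-pixel SI signal and extracts its phase via the analytic signal built with the Hilbert transform. It is a simplified stand-in: the signal model is synthetic, and a moving-average filter replaces the adaptive EMD detrending step used in the thesis.

```python
import numpy as np
from scipy.signal import hilbert

# Simulated single-pixel SI signal: slowly drifting background plus an
# amplitude-modulated fringe with non-stationary phase (values arbitrary)
fs = 1000.0                                               # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
phase_true = 2 * np.pi * 25.0 * t + 40.0 * np.pi * t**2   # chirping phase
background = 2.0 + 0.3 * np.sin(2 * np.pi * 0.7 * t)      # fluctuating background
signal = background + 1.5 * np.cos(phase_true)

# Stand-in for the EMD detrending step: subtract a moving-average estimate
# of the background (the thesis uses EMD, which adapts to the signal)
kernel = np.ones(101) / 101.0
detrended = signal - np.convolve(signal, kernel, mode="same")

# Analytic signal via the Hilbert transform, then the unwrapped phase
phase = np.unwrap(np.angle(hilbert(detrended)))
print(f"recovered phase excursion: {phase[-1] - phase[0]:.1f} rad "
      f"(true: {phase_true[-1] - phase_true[0]:.1f} rad)")
```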

    Multichannel source separation and tracking with phase differences by random sample consensus

    Blind audio source separation (BASS) is a fascinating problem that has been tackled from many different angles. The use case of interest in this thesis is that of multiple moving and simultaneously active speakers in a reverberant room. This is a common situation, for example, in social gatherings. We human beings have the remarkable ability to focus attention on a particular speaker while effectively ignoring the rest. This is referred to as the "cocktail party effect" and has been the holy grail of source separation for many decades. Replicating this feat in real time with a machine is the goal of BASS. Single-channel methods attempt to identify the individual speakers from a single recording. However, with the advent of hand-held consumer electronics, techniques based on microphone array processing are becoming increasingly popular. Multichannel methods record a sound field from various locations to incorporate spatial information. If the speakers move over time, we need an algorithm capable of tracking their positions in the room. For compact arrays with 1-10 cm of separation between the microphones, this can be accomplished by applying a temporal filter to estimates of the directions-of-arrival (DOA) of the speakers. In this thesis, we review recent work on BASS with inter-channel phase difference (IPD) features and provide extensions to the case of moving speakers. It is shown that IPD features compose a noisy circular-linear dataset. This data is clustered with the RANdom SAmple Consensus (RANSAC) algorithm in the presence of strong reverberation to simultaneously localize and separate speakers. The remarkable performance of RANSAC is due to its natural tendency to reject outliers. To handle the case of non-stationary speakers, a factorial wrapped Kalman filter (FWKF) and a factorial von Mises-Fisher particle filter (FvMFPF) are proposed that track source DOAs directly on the unit circle and unit sphere, respectively. These algorithms combine directional statistics, Bayesian filtering theory, and probabilistic data association techniques to track the speakers with mixtures of directional distributions.
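    The sketch below illustrates the IPD-plus-RANSAC idea in a deliberately simplified setting: one static source, a pure one-sample inter-channel delay, and no reverberation. Under these assumptions the IPD grows linearly with frequency, so RANSAC can estimate the slope (the inter-channel delay) while rejecting outlier bins; the thesis's full circular-linear clustering and DOA tracking are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

rng = np.random.default_rng(0)
fs, n = 16000, 4096
true_delay = 1.0 / fs                 # channel 2 lags channel 1 by one sample

# Two-channel mixture: a shared source with a pure inter-channel delay plus noise
s = rng.normal(size=n)
ch1 = s + 0.05 * rng.normal(size=n)
ch2 = np.roll(s, 1) + 0.05 * rng.normal(size=n)

# IPD per frequency bin from the cross-spectrum; for a pure delay tau,
# angle(X1 * conj(X2)) ~= 2*pi*f*tau
X1, X2 = np.fft.rfft(ch1), np.fft.rfft(ch2)
freqs = np.fft.rfftfreq(n, 1.0 / fs)
ipd = np.angle(X1 * np.conj(X2))

# Keep bins where |2*pi*f*tau| < pi, so the wrapped IPD is already unwrapped
band = freqs < 0.8 * fs / 2.0

# RANSAC fits the line ipd = (2*pi*f) * tau while rejecting outlier bins
ransac = RANSACRegressor(LinearRegression(fit_intercept=False))
ransac.fit((2 * np.pi * freqs[band]).reshape(-1, 1), ipd[band])
tau = ransac.estimator_.coef_[0]
print(f"estimated delay: {tau * 1e6:.1f} us (true: {true_delay * 1e6:.1f} us)")
```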

    Irish Machine Vision and Image Processing Conference Proceedings 2017


    Diversity and non-linear processing for modern receivers

    Since my doctorate, the research work I have contributed to has focused mainly on problems of estimating a signal of interest buried in noise. The application domains targeted are mostly radar, but also GNSS and ultrasound imaging. Although different, these domains are subject to similar trends that characterize, or will certainly characterize, modern receivers. Indeed, application needs constantly push the performance limits of the processing: the radar engineer seeks to detect small targets in increasingly difficult environments; in GNSS, high-precision positioning solutions are sought in highly constrained settings such as urban canyons; in medical imaging, higher image quality is sought to improve diagnoses, to cite only a few examples. Among the trends that will push the performance of modern receivers further, two are particularly present in the work conducted so far: signal diversity and non-linear processing. The document illustrates this by focusing on two of the research themes pursued to date, namely "signal processing for detection radars with wide instantaneous bandwidth" and "robust phase tracking of multifrequency GNSS signals". To conclude, research perspectives are discussed from both a methodological and an applicative point of view.

    3D photogrammetric data modeling and optimization for multipurpose analysis and representation of Cultural Heritage assets

    This research deals with the issues concerning the processing, management and representation, for further dissemination, of the large amount of 3D data that can today be acquired and stored with modern geomatic techniques for 3D metric survey. In particular, this thesis focuses on the optimization process applied to 3D photogrammetric data of Cultural Heritage assets. Modern geomatic techniques enable the acquisition and storage of large amounts of data, with high metric and radiometric accuracy and precision, including in the very close-range field, and the processing of very detailed 3D textured models. Nowadays, the photogrammetric pipeline has well-established potentialities and is considered one of the principal techniques for producing detailed 3D textured models at low cost. The potential offered by high-resolution textured 3D models is today well known, and such representations are a powerful tool for many multidisciplinary purposes, at different scales and resolutions, from documentation, conservation and restoration to visualization and education. For example, their sub-millimetric precision makes them suitable for scientific studies of geometry and materials (e.g. for structural and static tests, for planning restoration activities or for historical sources); their high fidelity to the real object and their navigability make them optimal for web-based visualization and dissemination applications. Thanks to improvements in new visualization standards, they can easily be used as a visualization interface linking different kinds of information in a highly intuitive way. Furthermore, many museums today look for more interactive exhibitions that can heighten visitors' emotional engagement, and many recent applications make use of 3D contents (e.g. in virtual or augmented reality applications and through virtual museums). What all of these applications have to deal with is the difficulty of managing the large amount of data that has to be represented and navigated. Indeed, reality-based models have very heavy file sizes (up to tens of GB), which makes them difficult to handle on common and portable devices, publish on the internet or manage in real-time applications. Even though recent advances produce more and more sophisticated and capable hardware and internet standards, empowering the ability to easily handle, visualize and share such contents, other research aims to define a common pipeline for the generation and optimization of 3D models with a reduced number of polygons that are nevertheless able to satisfy detailed radiometric and geometric requirements. This thesis is set in this scenario and focuses on the 3D modeling process of photogrammetric data aimed at easy sharing and visualization. In particular, this research tested a 3D model optimization process which aims at the generation of Low Poly models, with very small file sizes, derived from the data of High Poly ones, that nevertheless offer a level of detail comparable to the original models. To do this, several tools borrowed from the game industry and game engines have been used. For this test, three case studies were chosen: a modern sculpture by a contemporary Italian artist, a Roman marble statue preserved in the Civic Archaeological Museum of Torino, and the frieze of the Augustus arch preserved in the city of Susa (Piedmont, Italy).
All the test cases were surveyed by means of close-range photogrammetric acquisition, and three highly detailed 3D models were generated using a Structure from Motion and image matching pipeline. Different optimization and decimation tools were then tested on the final High Poly models, with the aim of evaluating the quality of the information that can be extracted from the optimized models in comparison with that of the original High Poly ones. This study showed how tools borrowed from computer graphics offer great potential in the Cultural Heritage field as well. This application, in fact, may meet the needs of multipurpose and multiscale studies, using different levels of optimization, and the procedure could be applied to different kinds of objects, with a variety of sizes and shapes, and to multiscale and multisensor data, such as buildings, architectural complexes, data from UAV surveys and so on.
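As a hint of what such a decimation step can look like in practice, the sketch below uses Open3D's quadric-error decimation as one of many tools that could stand in for the game-industry software used in the thesis; the file names and the target triangle count are hypothetical.

```python
import open3d as o3d  # any mesh library with quadric decimation would do

# Hypothetical input: a High Poly photogrammetric mesh exported from the
# Structure from Motion pipeline
mesh = o3d.io.read_triangle_mesh("high_poly_statue.ply")
print(f"original: {len(mesh.triangles)} triangles")

# Quadric-error decimation to roughly 1% of the original triangle count:
# one level of the High Poly -> Low Poly optimization described above
target = max(len(mesh.triangles) // 100, 1000)
low_poly = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
low_poly.compute_vertex_normals()  # recompute normals for correct shading

print(f"optimized: {len(low_poly.triangles)} triangles")
o3d.io.write_triangle_mesh("low_poly_statue.ply", low_poly)
```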

    A Parametric Sound Object Model for Sound Texture Synthesis

    This thesis deals with the analysis and synthesis of sound textures based on parametric sound objects. An overview is provided of the acoustic and perceptual principles of textural acoustic scenes, and technical challenges for analysis and synthesis are considered. Four essential processing steps for sound texture analysis are identified, and existing sound texture systems are reviewed, using the four-step model as a guideline. A theoretical framework for analysis and synthesis is proposed. A parametric sound object synthesis (PSOS) model is introduced, which is able to describe individual recorded sounds through a fixed set of parameters. The model, which applies to harmonic and noisy sounds, is an extension of spectral modeling and uses spline curves to approximate spectral envelopes, as well as the evolution of parameters over time. In contrast to standard spectral modeling techniques, this representation uses the concept of objects instead of concatenated frames, and it provides a direct mapping between sounds of different length. Methods for automatic and manual conversion are shown. An evaluation is presented in which the ability of the model to encode a wide range of different sounds has been examined. Although there are aspects of sounds that the model cannot accurately capture, such as polyphony and certain types of fast modulation, the results indicate that high-quality synthesis can be achieved for many different acoustic phenomena, including instruments and animal vocalizations. In contrast to many other forms of sound encoding, the parametric model facilitates various techniques of machine learning and intelligent processing, including sound clustering and principal component analysis. Strengths and weaknesses of the proposed method are reviewed, and possibilities for future development are discussed.
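    As a toy illustration of the spline-envelope idea, the sketch below fits a least-squares cubic spline with a fixed interior-knot grid to the log-magnitude spectrum of a synthetic harmonic tone. The knot count and the signal are arbitrary assumptions for the example; this is not the PSOS model itself, only the notion that a small, fixed set of spline coefficients can approximate a spectral envelope.

```python
import numpy as np
from scipy.interpolate import splev, splrep

# Toy sound: harmonic tone with decaying partials
fs, n = 44100, 4096
t = np.arange(n) / fs
tone = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 8))
mag_db = 20 * np.log10(np.abs(np.fft.rfft(tone * np.hanning(n))) + 1e-9)
freqs = np.fft.rfftfreq(n, 1 / fs)

# Least-squares cubic spline on a fixed grid of 8 interior knots: the
# small, fixed coefficient count echoes the "fixed set of parameters" idea
interior_knots = np.linspace(freqs[5], freqs[-5], 8)
tck = splrep(freqs, mag_db, t=interior_knots, k=3)

envelope_db = splev(freqs, tck)  # smooth envelope model over the whole band
print(f"modelled level at 1 kHz: {float(splev(1000.0, tck)):.1f} dB")
```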

    Resolving Measurement Errors Inherent with Time-of-Flight Range Imaging Cameras

    Range imaging cameras measure the distance to objects in the field-of-view (FoV) of the camera; these cameras enable new machine vision applications in robotics, manufacturing, and human-computer interaction. Time-of-flight (ToF) range cameras operate by illuminating the scene with amplitude modulated continuous wave (AMCW) light and measuring the phase difference between the emitted and reflected modulation envelope. Currently ToF range cameras suffer from measurement errors that are highly scene dependent, and these errors limit the accuracy of the depth measurement. The major cause of measurement errors is multiple propagation paths from the light source to the pixel, known as multi-path interference. Multi-path interference typically arises from inter-reflections, lens flare, subsurface scattering, volumetric scattering, and translucent objects. This thesis contributes three novel methods for resolving multi-path interference: coding in time, coding in frequency, and coding in space. Time coding is implemented by replacing the single-frequency amplitude modulation with a binary sequence. Fundamental to ToF range cameras is the cross-correlation between the reflected light and a reference signal. The measured cross-correlation depends on the selection of the binary sequence. With an appropriate binary sequence and sparse deconvolution applied to the measured cross-correlation, the multiple return path lengths and their amplitudes can be recovered. However, the minimal resolvable path length depends on the highest frequency in the binary sequence. Frequency coding is implemented by taking multiple measurements at different modulation frequencies. A subset of frequency coding is operating the camera in a mode analogous to stepped frequency continuous wave (SFCW). Frequency coding uses techniques from radar to resolve multiple propagation paths. The minimal resolvable path length depends on the camera's modulation bandwidth and the spectrum estimation technique used to recover distance; it is shown that SFCW can be used to measure the depth of objects behind a translucent sheet, while AMCW measurements cannot. Path lengths below a quarter of a wavelength of the highest modulation frequency are difficult to resolve. Spatial coding is used to resolve diffuse multi-path interference. The original technique comes from direct and global separation in computer graphics, and it is modified to operate on the complex data produced by a ToF range camera. By illuminating the scene with a pattern, the illuminated areas contain the direct return and the scattering (global) return, while the non-illuminated regions contain only the scattering return, assuming the global component is spatially smooth. Direct and global separation with sinusoidal patterns is combined with the sinusoidal modulation signal of ToF range cameras for a closed-form solution to multi-path interference in nine frames. With nine raw frames it is possible to implement direct and global separation at video frame rates. The RMSE of a corner is reduced from 0.0952 m to 0.0112 m. Direct and global separation correctly measures the depth of a diffuse corner and resolves subsurface scattering, but fails to resolve specular reflections. Finally, direct and global separation is combined with replacing the illumination and reference signals with a binary sequence.
The combination allows diffuse multi-path interference present in a corner to be resolved along with the sparse multi-path interference caused by mixed pixels between the foreground and background. The corner is correctly measured and the number of mixed pixels is reduced by 90%. With the development of new methods to resolve multi-path interference, ToF range cameras can measure scenes with more confidence. ToF range cameras can be built into small form factors as they require a small number of parts: a pixel array, a light source and a lens. The small form factor coupled with accurate range measurements allows ToF range cameras to be embedded in cellphones and consumer electronic devices, enabling wider adoption and giving them advantages over competing range imaging technologies.
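For context, the sketch below shows the standard four-bucket AMCW phase-to-depth computation that the coding schemes above build on. The modulation frequency is an assumed example value, and a single ideal return is simulated, which is precisely the assumption that multi-path interference violates.

```python
import numpy as np

C = 299_792_458.0   # speed of light (m/s)
F_MOD = 30e6        # modulation frequency (Hz), an assumed example value

def amcw_depth(a0, a1, a2, a3):
    """Depth from the standard four-bucket AMCW measurement: correlation
    samples taken at 0, 90, 180 and 270 degrees of reference offset.
    A single ideal return is assumed; multi-path interference biases this
    estimate, which is what the coding schemes above aim to resolve."""
    phase = np.mod(np.arctan2(a1 - a3, a0 - a2), 2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)  # 4*pi accounts for the round trip

# One simulated pixel: object at 2.4 m, inside the ~5 m ambiguity range
d_true = 2.4
phi = 4 * np.pi * F_MOD * d_true / C
a = [np.cos(phi - k * np.pi / 2) for k in range(4)]
print(f"recovered depth: {amcw_depth(*a):.3f} m")
```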