
    Exploiting nonlinear propagation in echo sounders and sonar

    The 10th European Conference on Underwater Acoustics (ECUA), 2010, Istanbul, Turkey. Mainstream sonars transmit and receive signals at the same frequency. Because water is a nonlinear medium, a propagating signal generates harmonics at multiples of the transmitted frequency. For sonar applications, the energy transferred to higher harmonics is usually seen as a disturbance: to satisfy the requirements for calibration of echo sounders in fishery research, the input power has to be limited to avoid energy loss to harmonic generation. Can these harmonics be used in sonar imaging? The frequency dependence of target echoes and the different spatial distribution of the higher harmonics can contribute additional information on detected targets in fish classification, ocean bathymetry, or bottom classification. Our starting point was the sonar equation adapted for the second harmonic. We simulated nonlinear propagation of sound in water and obtained estimates of the received pressure levels of the harmonics for a calibration sphere or a fish as reflector. These pressure profiles were used in the sonar equation to compare the signal budget of the harmonics to that of the fundamental. Our results show that a thermal-noise-limited 200 kHz echo sounder with a range of 800 m will reach around 300 m with the second harmonic, which means the second harmonic is useful in many applications.
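    To get a rough feel for the link-budget comparison described above, the sketch below plugs the fundamental and the second harmonic into the active sonar equation EL = SL - 2*TL + TS with TL = 20*log10(r) + alpha*r. Treating the second harmonic as if it were radiated with a fixed, lower effective source level glosses over the cumulative harmonic generation along the path, and all numerical values (source levels, absorption coefficients, target strength) are illustrative assumptions rather than numbers from the paper.

```python
import numpy as np

# Active sonar equation, echo level EL = SL - 2*TL + TS,
# with one-way transmission loss TL(r) = 20*log10(r) + alpha*r
# (spherical spreading plus absorption, alpha in dB/m).
def echo_level(sl_db, r_m, alpha_db_per_m, ts_db):
    tl = 20.0 * np.log10(r_m) + alpha_db_per_m * r_m
    return sl_db - 2.0 * tl + ts_db

r = np.linspace(50.0, 1000.0, 2000)   # range axis [m]

# Illustrative assumptions (NOT values from the paper):
sl_fund = 220.0           # source level of the 200 kHz fundamental [dB re 1 uPa @ 1 m]
sl_harm = sl_fund - 20.0  # assumed effective source level of the 400 kHz second harmonic
alpha_200 = 0.05          # assumed absorption at 200 kHz [dB/m]
alpha_400 = 0.11          # assumed absorption at 400 kHz [dB/m]
ts = -40.0                # assumed target strength, taken equal at both frequencies [dB]

# Use the fundamental's echo level at its 800 m maximum range as the detection
# threshold, then see how far the second harmonic stays above that same threshold.
threshold = echo_level(sl_fund, 800.0, alpha_200, ts)
el_harm = echo_level(sl_harm, r, alpha_400, ts)
max_range_harm = r[el_harm >= threshold].max()
print(f"second harmonic usable out to roughly {max_range_harm:.0f} m")
```

    With these placeholder values the second harmonic drops below the threshold in the few-hundred-metre range, which is qualitatively in line with the 800 m versus 300 m comparison quoted in the abstract.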

    Stochastic modeling of stratospheric temperature

    This study suggests a stochastic model for time series of daily zonal (circumpolar) mean stratospheric temperature at a given pressure level. It can be seen as an extension of previous studies which have developed stochastic models for surface temperatures. The proposed model is the sum of a deterministic seasonality function and a Lévy-driven multidimensional Ornstein-Uhlenbeck process, which is a mean-reverting stochastic process. More specifically, the deseasonalized temperature model is an order-4 continuous-time autoregressive model, meaning that the stratospheric temperature is modeled as directly dependent on the temperature over the four preceding days, while the model's longer-range memory stems from its recursive nature. The study is based on temperature data from the European Centre for Medium-Range Weather Forecasts ERA-Interim reanalysis product. The residuals of the autoregressive model are well represented by normal inverse Gaussian distributed random variables scaled with a time-dependent volatility function. A monthly variability in the speed of mean reversion of stratospheric temperature is found, suggesting a generalization of the order-4 continuous-time autoregressive model. A stochastic stratospheric temperature model, as proposed in this paper, can be used in geophysical analyses to improve the understanding of stratospheric dynamics. In particular, such characterizations of stratospheric temperature may be a step towards greater insight into the modeling and prediction of large-scale middle-atmospheric events, such as sudden stratospheric warmings. Through stratosphere-troposphere coupling, the stratosphere is thus a source of extended tropospheric predictability at weekly to monthly timescales, which is of great importance in several societal and industry sectors. Comment: 23 pages, 9 figures.
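    As a toy illustration of the model structure (a deterministic seasonality plus a mean-reverting autoregression driven by heavy-tailed noise), the sketch below simulates a discrete AR(4) stand-in for the continuous-time model with normal inverse Gaussian shocks. The coefficients, the sinusoidal seasonality, and the constant volatility are placeholder choices, not the parameters estimated from the ERA-Interim data.

```python
import numpy as np
from scipy.stats import norminvgauss

rng = np.random.default_rng(0)
n_days = 4 * 365
t = np.arange(n_days)

# Deterministic seasonality: mean level plus one annual harmonic (placeholder values)
seasonality = 220.0 + 15.0 * np.cos(2.0 * np.pi * t / 365.25)

# Discrete AR(4) stand-in for the CAR(4) deseasonalized temperature; the coefficient
# sum is kept below 1 so the process is stationary (mean-reverting).
phi = np.array([0.6, 0.15, 0.1, 0.05])   # placeholder autoregressive coefficients
sigma = 1.5                              # placeholder (here constant) volatility [K]
shocks = norminvgauss.rvs(2.0, -0.5, size=n_days, random_state=rng)  # skewed NIG noise

x = np.zeros(n_days)
for k in range(4, n_days):
    x[k] = phi @ x[k - 4:k][::-1] + sigma * shocks[k]

temperature = seasonality + x   # simulated daily zonal-mean stratospheric temperature [K]
print(temperature[:5])
```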

    Adding a low frequency limit to fractional wave propagation models

    Power-law attenuation in elastic wave propagation of both compressional and shear waves can be described with multiple relaxation processes. It may be less physical to describe it with fractional-calculus medium models, but this approach is useful for simulation and for parameterization when the underlying relaxation structure is very complex. It is easy to enforce a low-frequency limit on a relaxation distribution, and this gives frequency-squared characteristics at low frequencies, which seems to fit some media in practice. Here the goal is to change the low-frequency behavior of the fractional models as well. This is done by tempering the relaxation moduli of the fractional Kelvin-Voigt and diffusion models with an exponential function, and the effect is that the low-frequency attenuation will increase with frequency squared and with the square root of frequency, respectively. The time-space wave equations for the tempered models have also been found, and for this purpose the concept of the fractional pseudo-differential operator, borrowed from the field of Cole-Davidson dielectrics, is useful. The tempering does not remove the singularity in the relaxation moduli of the models, but this has only a minor effect on the solutions.
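    To see how exponential tempering changes the low-frequency slope of the attenuation, one can compare the dispersion relation k(w) = w*sqrt(rho/E(w)) for an untempered and a tempered fractional Kelvin-Voigt modulus. The tempered loss term below uses the symbol (Omega + i*w)^alpha - Omega^alpha of an exponentially tempered fractional derivative; this particular form, and all parameter values, are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

rho = 1000.0   # density [kg/m^3] (placeholder)
E0 = 2.25e9    # low-frequency modulus [Pa] (placeholder, water-like)
tau = 1e-6     # relaxation time scale [s] (placeholder)
alpha = 0.5    # fractional order (placeholder)
Omega = 1e3    # tempering rate [1/s]; sets the low-frequency crossover (placeholder)

w = np.logspace(0, 8, 400)   # angular frequency [rad/s]

# Untempered fractional Kelvin-Voigt: E(w) = E0 * (1 + (i*w*tau)^alpha)
E_frac = E0 * (1.0 + (1j * w * tau) ** alpha)
# Tempered variant: replace (i*w)^alpha by (Omega + i*w)^alpha - Omega^alpha
E_temp = E0 * (1.0 + ((Omega + 1j * w) ** alpha - Omega ** alpha) * tau ** alpha)

def attenuation(E):
    k = w * np.sqrt(rho / E)   # dispersion relation k(w) = w * sqrt(rho / E(w))
    return -k.imag             # attenuation [Np/m] for the exp(i(wt - kx)) convention

a_frac, a_temp = attenuation(E_frac), attenuation(E_temp)

# Fit the low-frequency power-law exponent d(log a)/d(log w) over the lowest decades.
lo = slice(0, 60)
slope = lambda a: np.polyfit(np.log(w[lo]), np.log(a[lo]), 1)[0]
print(f"low-frequency slope, untempered: {slope(a_frac):.2f}")   # about 1 + alpha
print(f"low-frequency slope, tempered:   {slope(a_temp):.2f}")   # about 2
```

    For frequencies well below the tempering rate Omega the loss term becomes proportional to i*w, so the fitted slope moves from roughly 1 + alpha up to 2, matching the frequency-squared low-frequency behavior described in the abstract for the Kelvin-Voigt case.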

    Tracking aftershock sequences using empirical matched field processing

    Extensive aftershock sequences present a significant problem to seismological data centres attempting to produce near-real-time comprehensive seismic event bulletins. An elevated number of events to process and poorer performance of automatic phase-association algorithms can lead to large delays in processing and a greatly increased human workload. Global monitoring is often performed using seismic array stations at considerable distances from the events involved. Empirical matched field processing (EMFP) is a narrow-frequency-band array signal processing technique that recognizes the inter-sensor phase and amplitude relations associated with wavefronts approaching a sensor array from a given direction. We demonstrate that EMFP, using a template obtained from the first P arrival from the main shock alone, can efficiently detect and identify P arrivals on that array from subsequent events in the aftershock zone with exceptionally few false alarms (signals from other sources). The empirical wavefield template encodes all the narrow-band phase and amplitude relations observed for the main shock signal. These relations are often robust and repeatable characteristics of signals from nearby events. The EMFP detection statistic compares the phase and amplitude relations at a given time in the incoming data stream with those of the template and is sensitive to very short-duration signals with the required characteristics. Significant deviations from the plane-wavefront model that typically degrade the performance of standard beamforming techniques can enhance signal characterization using EMFP. Waveform correlation techniques typically perform poorly for aftershocks of large earthquakes because of the distances between hypocentres and the wide range of event magnitudes and source mechanisms. EMFP on remote seismic arrays mitigates these difficulties; the narrow-band nature of the procedure makes arrival identification less sensitive to the signals' temporal form and spectral content. The empirical steering vectors derived for the main-shock P arrival can reduce the frequency dependence of the slowness-vector estimates. This property helps us to automatically screen out arrivals from outside the aftershock zone. Standard array processing pipelines could be enhanced by including both plane-wave and empirical matched field steering vectors. This would maintain present capability for the plane-wave steering vectors and provide increased sensitivity and resolution for those sources for which we have empirical calibrations.
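    The detection statistic can be illustrated with a toy narrow-band matched field correlator: form empirical steering vectors from the main-shock array recording (one normalized complex weight vector per narrow frequency band) and correlate them with the corresponding bands of the incoming data stream. The sketch below is a schematic single-template version running on synthetic data; the band set, normalization, and band averaging are illustrative choices, not the processing settings used in the paper.

```python
import numpy as np

def empirical_steering_vectors(template, fs, bands, nfft=256):
    """One normalized complex weight vector per narrow band, from the main-shock template.

    template: array of shape (n_sensors, n_samples) holding the main-shock P arrival.
    bands:    list of (f_lo, f_hi) frequency bands in Hz.
    """
    spec = np.fft.rfft(template, n=nfft, axis=1)       # (n_sensors, n_freqs)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    steering = []
    for f_lo, f_hi in bands:
        idx = np.where((freqs >= f_lo) & (freqs < f_hi))[0]
        w = spec[:, idx].mean(axis=1)                  # averaged phase/amplitude relation
        steering.append(w / np.linalg.norm(w))
    return steering

def emfp_statistic(window, fs, steering, bands, nfft=256):
    """Band-averaged match between a data window and the empirical steering vectors."""
    spec = np.fft.rfft(window, n=nfft, axis=1)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    stats = []
    for w, (f_lo, f_hi) in zip(steering, bands):
        idx = np.where((freqs >= f_lo) & (freqs < f_hi))[0]
        d = spec[:, idx].mean(axis=1)
        d = d / (np.linalg.norm(d) + 1e-12)
        stats.append(np.abs(np.vdot(w, d)) ** 2)       # Bartlett-type narrow-band match
    return float(np.mean(stats))

# Synthetic demo: a window with the template's wavefront relations scores near 1,
# an unrelated noise window scores much lower.
rng = np.random.default_rng(1)
fs, n_sensors, n_samples = 40.0, 9, 256
bands = [(1.0, 2.0), (2.0, 3.0), (3.0, 4.0)]
template = rng.standard_normal((n_sensors, n_samples))
steering = empirical_steering_vectors(template, fs, bands)
print(emfp_statistic(template, fs, steering, bands))                                      # close to 1.0
print(emfp_statistic(rng.standard_normal((n_sensors, n_samples)), fs, steering, bands))   # much smaller
```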

    Worst-case analysis of array beampatterns using interval arithmetic

    Over the past decade, interval arithmetic (IA) has been utilized to determine tolerance bounds on phased-array beampatterns. IA only requires that the errors of the array elements are bounded, and it can provide reliable beampattern bounds even when a statistical model is missing. However, previous research has not explored the use of IA to find the error realizations responsible for achieving specific bounds. In this study, the capabilities of IA are extended by introducing the concept of "backtracking", which provides a direct way of addressing how specific bounds can be attained. Backtracking allows for the recovery of both the specific error realization and the corresponding beampattern, enabling the study and verification of which errors result in the worst-case array performance in terms of the peak sidelobe level. Moreover, IA is made applicable to a wider range of arrays by adding support for arbitrary array geometries with directive elements and mutual coupling, in addition to element amplitude, phase, and positioning errors. Lastly, a simple formula for approximate bounds under uniformly bounded errors is derived and numerically verified. This formula gives insight into how array size and apodization cannot reduce the worst-case peak sidelobe level beyond a certain limit. Comment: This article may be downloaded for personal use only. Any other use requires author and AIP Publishing prior permission. This article appears in The Journal of the Acoustical Society of America and may be found at https://doi.org/10.1121/10.0019715. The current e-print was typeset by the authors and can differ in, e.g., pagination, reference numbering, and typographic detail.
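    A minimal version of the bounding idea: with element amplitude errors bounded by +/-da and phase errors by +/-dp, each element's contribution to the array factor at a given angle lies inside a small region of the complex plane, and summing per-element bounds on the real and imaginary parts gives a guaranteed (if conservative) interval for the beampattern magnitude. The brute-force sketch below does this for a broadside uniform linear array and cross-checks the enclosure against random error realizations; it is not the IA formulation or the backtracking procedure of the paper.

```python
import numpy as np

def cos_bounds(lo, hi):
    """Tight bounds of cos(x) for x in [lo, hi]."""
    cands = [np.cos(lo), np.cos(hi)]
    for k in np.arange(np.ceil(lo / np.pi), np.floor(hi / np.pi) + 1):
        cands.append(np.cos(k * np.pi))   # interior extrema at multiples of pi
    return min(cands), max(cands)

def interval_mul(a_lo, a_hi, b_lo, b_hi):
    p = [a_lo * b_lo, a_lo * b_hi, a_hi * b_lo, a_hi * b_hi]
    return min(p), max(p)

def beampattern_bounds(n_elem, d_lambda, theta, amp_err, phase_err):
    """Guaranteed |array factor| bounds for a broadside ULA with bounded element errors."""
    psi = 2 * np.pi * d_lambda * np.arange(n_elem) * np.sin(theta)   # nominal element phases
    re_lo = re_hi = im_lo = im_hi = 0.0
    for p in psi:
        a_lo, a_hi = 1.0 - amp_err, 1.0 + amp_err                    # amplitude interval
        c_lo, c_hi = cos_bounds(p - phase_err, p + phase_err)
        s_lo, s_hi = cos_bounds(p - phase_err - np.pi / 2, p + phase_err - np.pi / 2)  # sin
        r_lo, r_hi = interval_mul(a_lo, a_hi, c_lo, c_hi)
        i_lo, i_hi = interval_mul(a_lo, a_hi, s_lo, s_hi)
        re_lo += r_lo; re_hi += r_hi; im_lo += i_lo; im_hi += i_hi
    upper = np.hypot(max(abs(re_lo), abs(re_hi)), max(abs(im_lo), abs(im_hi)))
    lo_re = 0.0 if re_lo <= 0.0 <= re_hi else min(abs(re_lo), abs(re_hi))
    lo_im = 0.0 if im_lo <= 0.0 <= im_hi else min(abs(im_lo), abs(im_hi))
    return np.hypot(lo_re, lo_im), upper

# Cross-check: every random error realization must stay inside the enclosure.
rng = np.random.default_rng(2)
n, d, theta, da, dp = 16, 0.5, np.deg2rad(20.0), 0.1, np.deg2rad(5.0)
lo, hi = beampattern_bounds(n, d, theta, da, dp)
psi = 2 * np.pi * d * np.arange(n) * np.sin(theta)
for _ in range(1000):
    amp = 1.0 + rng.uniform(-da, da, n)
    ph = psi + rng.uniform(-dp, dp, n)
    val = abs(np.sum(amp * np.exp(1j * ph)))
    assert lo - 1e-9 <= val <= hi + 1e-9
print(f"|AF| bounded in [{lo:.3f}, {hi:.3f}] at 20 degrees off broadside")
```

    Summing per-element boxes ignores dependencies between elements, so the enclosure is conservative; the point here is only that it is guaranteed to contain every admissible error realization.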