158 research outputs found
Spin-Glass Model Governs Laser Multiple Filamentation
We show that multiple filamentation patterns in high-power laser beams can be described by means of two statistical-physics concepts, namely self-similarity of the patterns over two nested scales and nearest-neighbor interactions of classical rotators. The resulting lattice spin model perfectly reproduces the evolution of intense laser pulses as simulated by the Non-Linear Schrödinger Equation, shedding new light on multiple filamentation. As a side benefit, this approach reduces the computing time by two orders of magnitude compared with standard simulation methods of laser filamentation.
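The nearest-neighbor lattice of classical rotators described above is essentially an XY-type spin model. As a rough illustration of the idea (not the authors' code; the lattice size, coupling J, inverse temperature beta, and proposal step are all illustrative assumptions), a Metropolis sampler for such a model can be sketched as:

```python
import numpy as np

# Illustrative sketch only: a small 2-D lattice of classical rotators
# (an XY-type model) with nearest-neighbour coupling J, sampled with
# Metropolis updates. Lattice size, J, beta and the proposal step are
# assumptions, not values from the paper.

rng = np.random.default_rng(0)

def xy_energy(theta, J=1.0):
    """Nearest-neighbour energy E = -J * sum_<ij> cos(theta_i - theta_j)."""
    return -J * (np.cos(theta - np.roll(theta, 1, axis=0)).sum()
                 + np.cos(theta - np.roll(theta, 1, axis=1)).sum())

def metropolis_sweep(theta, beta=2.0, J=1.0, step=0.5):
    """One sweep of single-rotator Metropolis updates (in place)."""
    n = theta.shape[0]
    for _ in range(theta.size):
        i, j = rng.integers(0, n, size=2)
        old_angle = theta[i, j]
        e_old = xy_energy(theta, J)
        theta[i, j] = old_angle + rng.uniform(-step, step)
        d_e = xy_energy(theta, J) - e_old
        if rng.random() >= np.exp(-beta * d_e):
            theta[i, j] = old_angle      # reject the move
    return theta

theta = rng.uniform(0.0, 2.0 * np.pi, size=(8, 8))
e_before = xy_energy(theta)
theta = metropolis_sweep(theta)
e_after = xy_energy(theta)
```

Recomputing the full energy at every update keeps the sketch short; a real implementation would evaluate only the four bonds touching the updated rotator.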
Comparison of CELP speech coder with a wavelet method
This thesis compares the speech quality of the Code-Excited Linear Prediction (CELP, Federal Standard 1016) speech coder with that of a new wavelet method for compressing speech. The performance of both is compared through subjective listening tests. The test signals used are clean signals (i.e. with no background noise), speech signals with room noise, and speech signals with artificial noise added. Results indicate that for clean signals and signals with predominantly voiced components the CELP standard performs better than the wavelet method, but for signals with room noise the wavelet method performs much better than CELP. For signals with artificial noise added, the results are mixed, depending on the noise level: CELP performs better for low added noise and the wavelet method for higher noise levels.
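The abstract does not specify the wavelet scheme used; as a purely illustrative sketch of the general idea behind wavelet speech compression (a one-level Haar transform with detail-coefficient thresholding; all parameters here are hypothetical, not the thesis' method):

```python
import numpy as np

# Purely illustrative sketch of wavelet-based speech compression
# (NOT the method evaluated in the thesis): a one-level Haar
# transform, keeping only the largest detail coefficients.

def haar_fwd(x):
    """One-level orthonormal Haar transform -> (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_inv(a, d):
    """Exact inverse of haar_fwd."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def compress(x, keep=0.1):
    """Zero all but the largest `keep` fraction of detail coefficients."""
    a, d = haar_fwd(x)
    threshold = np.quantile(np.abs(d), 1.0 - keep)
    d = np.where(np.abs(d) >= threshold, d, 0.0)
    return haar_inv(a, d)

# Stand-in "speech" frame: a voiced-like tone plus artificial noise.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
frame = np.sin(2.0 * np.pi * 8.0 * t)
noisy = frame + 0.1 * np.random.default_rng(1).normal(size=t.size)
reconstructed = compress(noisy, keep=0.1)
```

Discarding small detail coefficients is what gives wavelet coders their advantage on noisy signals: noise spreads across many small coefficients, while voiced structure concentrates in a few large ones.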
Lossless and low-cost integer-based lifting wavelet transform
The discrete wavelet transform (DWT) is a powerful tool for analyzing real-time signals, including aperiodic, irregular, noisy, and transient data, because of its capability to explore signals in both the frequency and time domains at different resolutions. For this reason, it is used extensively in a wide range of image and signal processing applications. Despite this wide usage, implementations of the wavelet transform are usually lossy or computationally complex, and they require expensive hardware. However, in many applications, such as medical diagnosis, reversible data hiding, and critical satellite data, a lossless implementation of the wavelet transform is desirable. It is also important to have more hardware-friendly implementations, given the transform's recent inclusion in signal processing modules in systems-on-chip (SoCs).
To address this need, this research work provides a generalized implementation of the wavelet transform using an integer-based lifting method, producing a lossless and low-cost architecture while keeping performance close to that of the original wavelets. To achieve a general implementation method for all orthogonal and biorthogonal wavelets, the Daubechies wavelet family has been utilized first, since it is one of the most widely used families and is based on a systematic method for constructing compactly supported orthogonal wavelets. Though the first two phases of this work deal with Daubechies wavelets, they can be generalized to apply to other wavelets as well. Subsequently, some techniques used in the earlier phases have been adopted, and the critical issues for achieving a general lossless implementation have been solved, leading to a general lossless method.
The research work presented here can be divided into several phases. In the first phase, low-cost architectures of the Daubechies-4 (D4) and Daubechies-6 (D6) wavelets have been derived by applying integer-polynomial mapping (IPM). A lifting architecture has been used, which halves the cost compared to the conventional convolution-based approach. The application of IPM to the floating-point polynomial filter coefficients further decreases the complexity and reduces the loss in signal reconstruction. Also, “resource sharing” between lifting steps results in a further reduction in implementation cost and near-lossless data reconstruction.
In the second phase, a completely lossless, error-free architecture has been proposed for the Daubechies-8 (D8) wavelet. Several lifting variants have been derived for the same wavelet, integer mapping has been applied, and the best variant has been determined in terms of performance, using entropy and the transform coding gain (GT). A theory has then been derived regarding the impact of scaling steps on the transform coding gain. To the best of our knowledge, this approach results in the lowest-cost lossless architecture for D8 in the literature. The proposed approach may be applied to other orthogonal wavelets, as well as biorthogonal ones, to achieve higher performance.
In the final phase, a general algorithm has been proposed to implement the original filter coefficients, expressed as a polyphase matrix, in a more efficient lifting structure. This is done by using a modified factorization, so that the factorized polyphase matrix does not include the lossy scaling step of the conventional lifting method. This general technique has been applied to several widely used orthogonal and biorthogonal wavelets, and its advantages have been discussed.
Since the discrete wavelet transform is used in a vast number of applications, the proposed algorithms can be utilized in those cases to achieve lossless, low-cost, and hardware-friendly architectures.
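The integer-based lifting idea at the heart of this work can be illustrated with the simplest case, the integer Haar (S-) transform: a predict step followed by an update step, with the rounding folded into the lifting so the transform maps integers to integers and is exactly invertible. This sketch shows only the underlying principle, not the thesis' D4/D6/D8 architectures:

```python
import numpy as np

# Principle sketch only: the integer Haar (S-) transform. A "predict"
# step forms the detail d from the even/odd split, and an "update"
# step forms the approximation a; the floor built into the lifting
# makes the map integer-to-integer and exactly invertible, i.e.
# lossless. This is NOT the thesis' D4/D6/D8 architecture.

def int_lift_fwd(x):
    """Forward integer lifting: x (even length, ints) -> (a, d)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - even               # predict: detail coefficients
    a = even + (d >> 1)          # update: approximation, floor(d/2)
    return a, d

def int_lift_inv(a, d):
    """Inverse lifting: undo the steps in reverse order."""
    even = a - (d >> 1)
    odd = d + even
    x = np.empty(2 * len(a), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step is inverted exactly by subtracting what was added, reconstruction is bit-exact even though the update uses a floor operation that looks lossy.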
Modelling of spectroscopic and structural properties using molecular dynamics
The work described here was carried out at the European Laboratory for Non-Linear Spectroscopy (LENS) to achieve a better understanding of molecular vibrations by means of computer simulations. Hydrogen bonds are the main intermolecular interactions affecting vibrational spectra, and it is shown here how they usually induce a (red or blue) shift in the vibrational frequencies of the groups engaged in them, and how this shift correlates well with structural properties. H-bonds can also be present in a bifurcated arrangement. In systems such as confined water, this bifurcated configuration has long lifetimes, allowing it to be studied by both spectroscopic and computational means. The computational protocols implemented and adopted here allow for a direct comparison between structural features and vibrational spectra.
The sporadic nature of meridional heat transport in the atmosphere
The present study analyses meridional atmospheric heat transport, due to transient eddies, in the European Centre for Medium-Range Weather Forecasts ERA-Interim reanalysis data. Probability density functions of the transport highlight the dominant role played by extreme events. In both hemispheres, events in the top 5 percentiles typically account for over half of the net poleward transport. As a result of this sensitivity to extremes, a large fraction of the heat transport by transient eddies, at a given location and season, is realised through randomly spaced bursts (a few per season), rather than through a continuum of events.
Fast-growing atmospheric modes are associated with a large heat transport, suggesting a link between these bursts and growing baroclinic systems (defined here as motions in the 2.5–6 day band). However, wavelet power spectra of the transport extremes suggest that they are driven by very precise phase and coherence relationships between meridional velocity and moist static energy anomalies, acting over a broad range of frequencies (2–32 days). Motions with periods beyond 6 days play a key role in this framework. Moreover, these longer periods are found to be mainly driven by planetary-scale motions. Notwithstanding this, the heat transport bursts can be matched to specific synoptic-scale patterns. The bursts are therefore interpreted as the signatures of travelling synoptic systems superimposed on larger-scale motions.
The dominant role of extreme events can be reproduced in highly idealised simulations. Both a statistical model, where atmospheric motions are assumed to be linear superpositions of sinusoidal curves, and a two-layer model, representing heat transport as a quantised process effected by point vorticity anomalies, are successful in simulating the transport bursts. The fact that two very different idealised models both reproduce the transport's sporadic nature suggests that this must be an intrinsic property of waves in the atmosphere.
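The first idealised model above, a linear superposition of sinusoids, is easy to mimic. The following toy sketch (all parameters are assumptions, not the paper's model) builds random-phase superpositions for the meridional velocity and moist static energy anomalies and measures what fraction of the net transport comes from the top 5% of events:

```python
import numpy as np

# Toy version of the first idealised model (all parameters assumed):
# meridional velocity v' and moist static energy anomaly h' as random
# superpositions of sinusoids in the 2-32 "day" band, with a weak
# imposed coherence between them.

rng = np.random.default_rng(0)
t = np.arange(10_000)
periods = np.arange(2, 33)

def random_superposition(rng):
    """Sum of sinusoids with random Gaussian amplitudes and phases."""
    amps = rng.normal(size=len(periods))
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(periods))
    return sum(a * np.sin(2.0 * np.pi * t / p + ph)
               for a, p, ph in zip(amps, periods, phases))

v = random_superposition(rng)
h = random_superposition(rng) + 0.3 * v   # assumed partial coherence

flux = v * h                               # instantaneous transport
top5 = flux >= np.quantile(flux, 0.95)
fraction = flux[top5].sum() / flux.sum()   # share carried by extremes
```

Even this crude construction concentrates a disproportionate share of the net transport into the rare events above the 95th percentile, consistent with the sporadic character described above.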
LES validation of urban flow, part II: eddy statistics and flow structures
Time-dependent three-dimensional numerical simulations such as large-eddy simulation (LES) play an important role in fundamental research and in practical applications in meteorology and wind engineering. Whether these simulations provide a sufficiently accurate picture of the time-dependent structure of the flow, however, is often not determined in enough detail. We propose an application-specific validation procedure for LES that focuses on the time-dependent nature of mechanically induced shear-layer turbulence in order to derive information about the strengths and limitations of the model. The validation procedure is tested for LES of turbulent flow in a complex city, for which reference data from wind-tunnel experiments are available. An initial comparison of mean flow statistics and frequency distributions was presented in part I. Part II focuses on comparing eddy statistics and flow structures. Analyses of integral time scales and auto-spectral energy densities show that the tested LES reproduces the temporal characteristics of energy-dominant and flux-carrying eddies accurately. Quadrant analysis of the vertical turbulent momentum flux reveals strong similarities between instantaneous ejection-sweep patterns in the LES and the laboratory flow, with comparable occurrence statistics of rare but strong flux events. A further comparison of wavelet-coefficient frequency distributions and associated high-order statistics reveals a strong agreement of location-dependent intermittency patterns induced by resolved eddies in the energy-production range. The validation concept enables more wide-ranging conclusions to be drawn about the skill of turbulence-resolving simulations than the traditional approach of comparing only mean flow and turbulence statistics. Based on the accuracy levels determined, the tested LES is sufficiently accurate for its purpose of generating realistic urban wind fields that can be used to drive simpler dispersion models.
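Quadrant analysis, as used in part II, partitions samples of the velocity fluctuations (u', w') by sign into outward interactions, ejections, inward interactions, and sweeps, and asks how much of the momentum flux each quadrant carries. A minimal sketch on synthetic data (the anticorrelation and sample size are assumptions, not the LES or wind-tunnel series):

```python
import numpy as np

# Minimal quadrant analysis on synthetic data (the anticorrelation and
# sample size are assumptions, not the LES or wind-tunnel series).

rng = np.random.default_rng(2)
n = 50_000
u = rng.normal(size=n)                         # streamwise velocity
w = -0.4 * u + rng.normal(scale=0.9, size=n)   # vertical velocity

up, wp = u - u.mean(), w - w.mean()            # fluctuations
flux = up * wp                                 # instantaneous u'w'

quadrants = {
    "Q1 outward interaction": (up > 0) & (wp > 0),
    "Q2 ejection":            (up < 0) & (wp > 0),
    "Q3 inward interaction":  (up < 0) & (wp < 0),
    "Q4 sweep":               (up > 0) & (wp < 0),
}
for name, mask in quadrants.items():
    share = flux[mask].sum() / flux.sum()
    print(f"{name}: {mask.mean():5.1%} of samples, flux share {share:+.2f}")
```

With a downward net momentum flux, ejections (Q2) and sweeps (Q4) dominate the flux, which is the ejection-sweep pattern whose occurrence statistics the paper compares between LES and laboratory flow.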
Investigating the relationships between precipitable water vapor estimations and heavy rainfall over the Eastern Pacific Ocean and Ecuadorian regions
Among weather phenomena, rainfall is difficult to forecast. Beyond the theoretical and technical challenges inherent in its prediction, its impact on economic and everyday activities clearly justifies its study. Numerical weather prediction models such as the Weather Research & Forecasting Model (WRF) are widely used to predict rainfall; however, they underperform when set to predict intense events and when working with complex, steep topographies. Recent studies have proposed the estimation of precipitable water vapor (PWV) as a tool that can help predict and understand the mechanisms that trigger intense rainfall. PWV is mainly sourced from satellite products and from indirect measurements that derive it quite accurately from the delay of Global Navigation Satellite System (GNSS) signals. The present work therefore studies the relationship between intense rain and satellite-sourced PWV over the ocean, the relationship of PWV-GNSS over the Coast, Sierra, and Amazon regions of Ecuador, and the comparison of PWV-GNSS with data modelled in WRF over equatorial Andean zones. As main results, we present an empirical model relating satellite PWV to maximum rainfall values over the ocean; we identify PWV-GNSS loading and unloading periods related to the diurnal cycle of rainfall over land, as well as relationships with intense rain events; and finally, we describe the main discrepancies between the observed PWV-GNSS and rainfall data and the WRF-modelled data over areas of the Equatorial Andes.
Non-linear structures as probes of the cosmological standard model
Non-linear structures provide an important test of the cosmological standard model. In this thesis, we investigate both analytic approaches to describing statistical properties of cosmic non-linear structures and a comparison of observational with simulated data.
In the first part, we focus on analytic derivations in the framework of kinetic field theory (KFT), a novel theory to cosmic structure formation based on statistical field theory of classical particles. We investigate ways to derive the probability density function (PDF) of the cosmic density field within this framework. For this purpose, we introduce different models and explore approaches to derive the density PDF from the generating functional of KFT directly.
We then use parts of these results in order to obtain an analytic derivation of the halo mass function. Unlike the standard approach, we derive the halo mass function from the present day non-linear density field directly. We use two models of the density PDF for this purpose, the lognormal and the generalised normal distribution, and fix their parameters by the predictions of KFT. We then derive the halo mass function using excursion set theory with correlated random walks. We obtain a closed form expression for the halo mass function, with only one free parameter, i.e. the halo overdensity Delta. For a choice of Delta = 2.9, our results agree well with those of simulations.
In the last part, we investigate a concrete example of non-linear structure, namely the substructure distribution in the massive galaxy cluster Abell 2744. We compare it to that of haloes of the Millennium XXL (MXXL) simulation in order to test its compatibility with the cosmological standard model LambdaCDM. We identify structures in both the mass map of Abell 2744 and comparable mass maps of the MXXL haloes with a method based on the wavelet transform. This allows us to find three haloes in the MXXL simulation with a substructure distribution similar to that of Abell 2744, thus corroborating its concordance with LambdaCDM. We add a thorough discussion of our results and put them into context with the findings of other recent works.
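The excursion-set step of the halo mass function derivation can be illustrated with a toy Monte Carlo. Note the deliberate simplifications versus the thesis: the sketch uses uncorrelated (Markovian) steps and the standard spherical-collapse barrier rather than correlated walks with KFT-calibrated density PDFs, and all numbers are illustrative:

```python
import numpy as np

# Toy excursion-set Monte Carlo. Deliberate simplifications versus the
# thesis: steps are uncorrelated (Markovian) rather than correlated,
# and the barrier is the standard spherical-collapse delta_c rather
# than a KFT-calibrated one; all numbers are illustrative.

rng = np.random.default_rng(3)
n_walks, n_steps, dS = 20_000, 400, 0.02
delta_c = 1.686                    # spherical-collapse barrier

# delta(S): random walks in the smoothed density contrast as the
# variance S of the smoothed field grows.
steps = rng.normal(scale=np.sqrt(dS), size=(n_walks, n_steps))
walks = steps.cumsum(axis=1)

crossed = walks > delta_c
ever = crossed.any(axis=1)
first = np.where(ever, crossed.argmax(axis=1), n_steps)
S_first = (first + 1) * dS         # variance at first barrier crossing

# The first-crossing distribution maps onto the halo mass function
# (small S corresponds to massive haloes).
hist, edges = np.histogram(S_first[ever], bins=40, density=True)
```

In the thesis, the correlations between steps and the KFT-predicted density PDF replace the Gaussian Markovian assumptions used here; the first-crossing bookkeeping is the part this sketch illustrates.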
Optophone design: optical-to-auditory vision substitution for the blind
An optophone is a device that turns light into sound for the benefit of blind people. The present project is intended to produce a general-purpose optophone to be worn on the head, about the house and in the street, to give the wearer a detailed description in sound of the scene he is facing. The device will therefore consist of an electronic camera, some signal-processing electronics, earphones, and a battery. The two major problems are the derivation of (a) the most suitable mapping from images to sounds, and (b) an algorithm to perform the mapping in real time on existing electronic components. This thesis concerns problem (a). Chapter 2 goes into the general scene-to-sound mapping problem in some detail and presents the work of earlier investigators. Chapter 3 discusses the design of tests to evaluate the performance of candidate mappings; a theoretical performance test (TPT) is derived. Chapter 4 applies the TPT to the most obvious mapping, the Cartesian piano transform. Chapter 5 applies the TPT to a mapping based on the cosine transform. Chapter 6 attempts to derive a mapping by principal component analysis, using the inaccuracies of human sight and hearing and the statistical properties of real scenes and sounds. Chapter 7 presents a complete scheme, implemented in software, for representing digitised colour scenes by audible digitised stereo sound. Chapter 8 tries to decide how many numbers are required to specify a steady spectrum with no noticeable degradation. Chapter 9 looks at a scheme designed to produce more natural-sounding sounds related to more meaningful portions of the scene. This scheme maps windows in the scene to steady spectral patterns of short duration, the location of the window being conveyed by simulated free-field listening. Chapter 10 gives detailed recommendations as to further work.
Intelligent numerical software for MIMD computers
For most scientific and engineering problems simulated on computers, the solution of problems of computational mathematics with approximately given initial data constitutes an intermediate or final stage. Basic problems of computational mathematics include the investigation and solution of linear algebraic systems, the evaluation of eigenvalues and eigenvectors of matrices, the solution of systems of non-linear equations, and the numerical integration of initial-value problems for systems of ordinary differential equations.