Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion
We introduce a technique to compute exact anelastic sensitivity kernels in
the time domain using parsimonious disk storage. The method is based on a
reordering of the time loop of time-domain forward/adjoint wave propagation
solvers combined with the use of a memory buffer. It avoids instabilities that
occur when time-reversing dissipative wave propagation simulations. The total
number of required time steps is unchanged compared to usual acoustic or
elastic approaches. The cost is reduced by a factor of 4/3 compared to the case
in which anelasticity is partially accounted for by accommodating the effects
of physical dispersion. We validate our technique by performing a test in which
we compare the sensitivity kernel to the exact kernel obtained by
saving the entire forward calculation. This benchmark confirms that our
approach is also exact. We illustrate the importance of including full
attenuation in the calculation of sensitivity kernels by showing significant
differences with physical-dispersion-only kernels.
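To make the storage idea concrete, here is a toy Python sketch (not the authors' code: the damped 1D wave update, grid sizes, checkpoint spacing and source positions are all assumptions). The paper's exact loop reordering keeps the total number of time steps unchanged; this simpler checkpoint-and-recompute variant illustrates the same principle, namely that the adjoint field marches backward in time while the forward field is recomputed forward into a memory buffer, so the dissipative solver is never integrated backward and the instability is avoided.

```python
import numpy as np

# Toy sketch of kernel computation with parsimonious storage (illustrative).
nt, nx, checkpoint_every = 1000, 200, 100

def step_forward(u, u_prev, damping=0.01, c2=0.25):
    """One explicit time step of a toy damped 1D wave equation."""
    lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
    u_next = (2.0 * u - (1.0 - damping) * u_prev + c2 * lap) / (1.0 + damping)
    return u_next, u

# Pass 1: forward simulation, saving only sparse checkpoints to "disk".
checkpoints = {}
u, u_prev = np.zeros(nx), np.zeros(nx)
u[nx // 2] = 1.0  # impulsive source
for it in range(nt):
    if it % checkpoint_every == 0:
        checkpoints[it] = (u.copy(), u_prev.copy())
    u, u_prev = step_forward(u, u_prev)

# Pass 2: the adjoint field runs backward in physical time while the forward
# field is recomputed forward from the nearest checkpoint into a memory
# buffer; the dissipative equations are never time-reversed.
kernel = np.zeros(nx)
adj, adj_prev = np.zeros(nx), np.zeros(nx)
adj[3 * nx // 4] = 1.0  # adjoint source injected at a "receiver"
for block in range(nt - checkpoint_every, -1, -checkpoint_every):
    u, u_prev = (x.copy() for x in checkpoints[block])
    buffer = []
    for it in range(block, block + checkpoint_every):
        buffer.append(u.copy())
        u, u_prev = step_forward(u, u_prev)
    for fwd in reversed(buffer):          # consume the buffer backwards
        adj, adj_prev = step_forward(adj, adj_prev)
        kernel += fwd * adj               # zero-lag correlation of fields
```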
Stochastic dynamics and wavelets techniques for system response analysis and diagnostics: Diverse applications in structural and biomedical engineering
In the first part of the dissertation, a novel stochastic averaging technique based on a Hilbert transform definition of the oscillator response displacement amplitude is developed. In comparison to standard stochastic averaging, the requirement of “a priori” determination of an equivalent natural frequency is bypassed, yielding flexibility in the ensuing analysis and potentially higher accuracy. Further, the herein proposed Hilbert transform based stochastic averaging is adapted for determining the time-dependent survival probability and first-passage time probability density function of stochastically excited nonlinear oscillators, even when endowed with fractional derivative terms. To this end, a Galerkin scheme is utilized to solve approximately the backward Kolmogorov partial differential equation governing the survival probability of the oscillator response. Next, the potential of the stochastic averaging technique to be used in conjunction with performance-based engineering design applications is demonstrated by proposing a stochastic version of the widely used incremental dynamic analysis (IDA). Specifically, modeling the excitation as a non-stationary stochastic process possessing an evolutionary power spectrum (EPS), an approximate closed-form expression is derived for the parameterized oscillator response amplitude probability density function (PDF). In this regard, IDA surfaces are determined providing the conditional PDF of the engineering demand parameter (EDP) for a given intensity measure (IM) value. In contrast to the computationally expensive Monte Carlo simulation, the methodology developed herein determines the IDA surfaces at minimal computational cost.
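For readers unfamiliar with the central quantity, a minimal Python sketch of the Hilbert-transform response amplitude (a generic illustration with an assumed decaying oscillation, not the dissertation's oscillator models):

```python
import numpy as np
from scipy.signal import hilbert

# Response displacement amplitude A(t) via the analytic signal (toy example).
t = np.linspace(0.0, 10.0, 2000)
x = np.exp(-0.2 * t) * np.cos(2.0 * np.pi * 1.5 * t)  # decaying response
amplitude = np.abs(hilbert(x))  # envelope; no equivalent natural frequency
                                # needs to be fixed "a priori"
```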
In the second part of the dissertation, a novel multiple-input/single-output (MISO) system identification technique is developed for parameter identification of nonlinear and time-variant oscillators with fractional derivative terms using incomplete non-stationary data. The technique utilizes a representation of the nonlinear restoring forces as a set of parallel linear sub-systems. Next, a recently developed L1-norm minimization procedure based on compressive sensing theory is applied for determining the wavelet coefficients of the available incomplete non-stationary input-output (excitation-response) data. Several numerical examples are considered for assessing the reliability of the technique, even in the presence of incomplete and corrupted data. These include a 2-DOF time-variant Duffing oscillator endowed with fractional derivative terms, as well as a 2-DOF system subject to flow-induced forces where the non-stationary sea state possesses a recently proposed evolutionary version of the JONSWAP spectrum.
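As a rough illustration of the compressive-sensing step, the following Python sketch recovers sparse coefficients from incomplete measurements with a generic iterative soft-thresholding (ISTA) loop. The dissertation applies a specific L1-norm minimization procedure to wavelet coefficients; the random measurement matrix, sparsity level and all parameter values below are assumptions:

```python
import numpy as np

# Toy ISTA sketch: recover a sparse coefficient vector c from incomplete
# measurements y = Phi @ c by L1-regularized least squares.
rng = np.random.default_rng(0)
n, m, k = 256, 80, 5
c_true = np.zeros(n)
c_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ c_true                             # incomplete data (m << n)

lam = 0.01
L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of gradient
c = np.zeros(n)
for _ in range(500):
    grad = Phi.T @ (Phi @ c - y)
    z = c - grad / L
    c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
# c now approximates c_true despite having far fewer measurements than unknowns
```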
In the third part of this dissertation, a joint time-frequency analysis technique based on generalized harmonic wavelets (GHWs) is developed for dynamic cerebral autoregulation (DCA) performance quantification. DCA is the continuous counter-regulation of the cerebral blood flow by the active response of cerebral blood vessels to spontaneous or induced blood pressure fluctuations. Specifically, various metrics of the phase shift and magnitude of appropriately defined GHW-based transfer functions are determined based on data points over the joint time-frequency domain. The potential of these metrics to be used as a diagnostics tool for indicating healthy versus impaired DCA function is assessed by considering both healthy individuals and patients with unilateral carotid artery stenosis. Next, another application in biomedical engineering is pursued related to the Pulse Wave Imaging (PWI) technique. This relies on ultrasonic signals for capturing the propagation of pressure pulses along the carotid artery, and eventually for prognosis of focal vascular diseases (e.g., atherosclerosis and abdominal aortic aneurysm). However, to obtain a high spatio-temporal resolution, the data are acquired at a high rate, on the order of kilohertz, yielding large datasets. To address this challenge, an efficient data compression technique is developed based on the multiresolution wavelet decomposition scheme, which exploits the high correlation of adjacent RF-frames generated by the PWI technique. Further, a sparse matrix decomposition is proposed as an efficient way to identify the boundaries of the arterial wall in the PWI technique.
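The data-compression idea in the PWI part can be sketched generically in Python with PyWavelets: decompose a frame, discard small detail coefficients, and keep a sparse representation. The threshold, wavelet and decomposition level are illustrative, and the dissertation's actual scheme, which also exploits correlation across adjacent RF-frames, is not reproduced here:

```python
import numpy as np
import pywt  # PyWavelets

# Toy multiresolution wavelet compression of a single stand-in "RF frame".
frame = np.random.default_rng(1).normal(size=(256, 256))
coeffs = pywt.wavedec2(frame, wavelet="db4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)
keep = np.abs(arr) >= 0.1 * np.abs(arr).max()  # retain large coefficients only
sparse_arr = np.where(keep, arr, 0.0)
reconstructed = pywt.waverec2(
    pywt.array_to_coeffs(sparse_arr, slices, output_format="wavedec2"), "db4"
)
compression_ratio = arr.size / max(int(keep.sum()), 1)
```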
Planetary Interiors
This report identifies two main themes to guide planetary science in the next two decades: understanding planetary origins, and understanding the constitution and fundamental processes of the planets themselves. Within the latter theme, four specific goals relate to interior measurements: (1) understand the internal structure and dynamics of at least one solid body, other than the Earth or Moon, that is actively convecting; (2) determine the characteristics of the magnetic fields of Mercury and the outer planets to provide insight into the generation of planetary magnetic fields; (3) specify the nature and sources of stress that are responsible for the global tectonics of Mars, Venus, and several icy satellites of the outer planets; and (4) advance significantly our understanding of crust-mantle structure for all the solid planets. These goals can be addressed almost exclusively by measurements made on the surfaces of planetary bodies.
An overview on structural health monitoring: From the current state-of-the-art to new bio-inspired sensing paradigms
In recent decades, the field of structural health monitoring (SHM) has grown exponentially. Yet several technical constraints persist that prevent full realization of its potential. To upgrade current state-of-the-art technologies, researchers have started to look at nature's creations, giving rise to a new field called 'biomimetics', which operates across the border between living and non-living systems. The highly optimised and time-tested performance of biological assemblies keeps inspiring the development of bio-inspired artificial counterparts that can potentially outperform conventional systems. After a critical appraisal of the current status of SHM, this paper presents a review of selected works related to neural, cochlea and immune-inspired algorithms implemented in the field of SHM, including a brief survey of the advancements of bio-inspired sensor technology for the purpose of SHM. In parallel to this engineering progress, a more in-depth understanding of the most suitable biological patterns to be transferred into multimodal SHM systems is fundamental to foster new scientific breakthroughs. Hence, grounded in the dissection of three selected human biological systems, a framework for new bio-inspired sensing paradigms aimed at guiding the identification of tailored attributes to transplant from nature to SHM is outlined.
Compression Methods for Structured Floating-Point Data and their Application in Climate Research
The use of new technologies, such as GPU boosters, has led to a dramatic increase in the computing power of High-Performance Computing (HPC) centres. This development, coupled with new climate models that can better utilise this computing power thanks to software development and internal design, has moved the bottleneck from solving the differential equations describing Earth's atmospheric interactions to actually storing the variables. The current approach to solving the storage problem is inadequate: either the number of variables to be stored is limited or the temporal resolution of the output is reduced. If it is subsequently determined that another variable is required which has not been saved, the simulation must be run again. This thesis deals with the development of novel compression algorithms for structured floating-point data such as climate data so that they can be stored in full resolution.
Compression is performed by decorrelation and subsequent coding of the data. The decorrelation step eliminates redundant information in the data. During coding, the actual compression takes place and the data are written to disk. A lossy compression algorithm additionally has an approximation step to unify the data for better coding. The approximation step reduces the complexity of the data for the subsequent coding, e.g. by using quantization. This work makes a new scientific contribution to each of the three steps described above.
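To make the three steps tangible, here is a minimal Python sketch (illustrative only, not the thesis's implementation; the 0.01 quantization step and the zlib coder are assumptions): approximation by quantization, decorrelation by first differences, then coding:

```python
import zlib
import numpy as np

# Approximation -> decorrelation -> coding, on a smooth synthetic series.
data = np.cumsum(np.random.default_rng(0).normal(size=10_000))  # smooth series
q = np.round(data / 0.01).astype(np.int64)      # approximation: quantize to 0.01
resid = np.diff(q, prepend=q[:1])               # decorrelation: small residuals
payload = zlib.compress(resid.astype(np.int32).tobytes())  # coding step
ratio = data.nbytes / len(payload)              # achieved compression factor
```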
This thesis presents a novel lossy compression method for time-series data using an Auto Regressive Integrated Moving Average (ARIMA) model to decorrelate the data. In addition, the concept of information spaces and contexts is presented to use information across dimensions for decorrelation. Furthermore, a new coding scheme is described which reduces the weaknesses of the eXclusive-OR (XOR) difference calculation and achieves a better compression factor than current lossless compression methods for floating-point numbers. Finally, a modular framework is introduced that allows the creation of user-defined compression algorithms.
The experiments presented in this thesis show that it is possible to increase the information content of lossily compressed time-series data by applying an adaptive compression technique which preserves selected data with higher precision. Lossless compression of these time series proved unsuccessful. However, the lossy ARIMA compression model proposed here is able to capture all relevant information. The reconstructed data can reproduce the time series to such an extent that statistically relevant information for the description of climate dynamics is preserved.
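A generic Python illustration of ARIMA-based decorrelation (not the thesis's lossy scheme; the model order and synthetic series are assumptions) shows the principle that near-white residuals are cheaper to code than the raw values:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Fit an ARIMA model and keep its residuals as the decorrelated signal.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=2000)) + 10.0   # random-walk-like series
fit = ARIMA(series, order=(2, 1, 2)).fit()          # order chosen for the sketch
residuals = fit.resid                               # near-white: easier to code
# Decompression would rerun the model forward, adding stored residuals back.
```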
Experiments indicate that there is a significant dependence of the compression factor on the selected traversal sequence and the underlying data model. The influence of these structural dependencies on prediction-based compression methods is investigated in this thesis. For this purpose, the concept of Information Spaces (IS) is introduced. IS improves the predictions of the individual predictors by nearly 10% on average. Perhaps more importantly, the standard deviation of compression results is on average 20% lower. Using IS provides better predictions and consistent compression results.
Furthermore, it is shown that shifting the prediction and true value leads to a better compression factor with minimal additional computational costs. This allows the use of more resource-efficient prediction algorithms to achieve the same or better compression factor, or higher throughput during compression or decompression. The coding scheme proposed here achieves a better compression factor than current state-of-the-art methods.
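For context, the baseline XOR-difference idea for floats looks roughly like the Python sketch below (the values are illustrative; the thesis's coder, which addresses the weaknesses of this plain scheme, and its prediction-shifting refinement are not reproduced here):

```python
import numpy as np

# XOR of consecutive float64 bit patterns: similar values share long runs of
# leading zero bits, which a coder can exploit.
values = np.array([1.0, 1.0000001, 1.0000002, 1.00000025])
bits = values.view(np.uint64)
xored = bits ^ np.roll(bits, 1)
xored[0] = bits[0]                      # first value is stored verbatim
leading_zeros = [64 - int(x).bit_length() for x in xored]
# A good prediction makes the XOR result small (many leading zeros); poor
# alignment between prediction and truth wastes bits, which is the weakness
# the thesis's coding scheme targets.
```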
Finally, this thesis presents a modular framework for the development of compression algorithms. The framework supports the creation of user-defined predictors and offers functionalities such as the execution of benchmarks, the random subdivision of n-dimensional data, the quality evaluation of predictors, the creation of ensemble predictors and the execution of validity tests for sequential and parallel compression algorithms.
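In the spirit of such a framework, a minimal Python sketch of a pluggable predictor interface (the names and structure are assumptions, not the thesis's actual API):

```python
from typing import Protocol
import numpy as np

class Predictor(Protocol):
    """Interface a user-defined predictor must satisfy."""
    def predict(self, history: np.ndarray) -> float: ...

class LastValue:
    """Simplest user-defined predictor: repeat the previous sample."""
    def predict(self, history: np.ndarray) -> float:
        return float(history[-1]) if history.size else 0.0

def residuals(data: np.ndarray, predictor: Predictor) -> np.ndarray:
    """Prediction-based decorrelation: these residuals go to the coder."""
    out = np.empty_like(data)
    for i in range(data.size):
        out[i] = data[i] - predictor.predict(data[:i])
    return out

# Usage: r = residuals(series, LastValue()) yields small, codeable residuals
# for any smooth float series.
```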
This research was initiated because of the needs of climate science, but the application of its contributions is not limited to that field. The results of this thesis are of major benefit for developing and improving any compression algorithm for structured floating-point data.
Secure and efficient storage of multimedia content in public cloud environments using joint compression and encryption
Cloud computing is a paradigm that still has many unexplored areas, ranging from the technological component to the definition of new business models, but it is revolutionizing the way we design, implement and manage the entire information technology infrastructure.
Infrastructure as a Service is the delivery of computing infrastructure, typically a virtual data center, along with a set of APIs that allow applications to control, in an automatic way, the resources they wish to use. The choice of the service provider, and how that provider applies its business model, may lead to higher or lower costs in the operation and maintenance of the applications hosted with it.
In this sense, this work set out to carry out a literature review on the topics of Cloud Computing and the secure storage and transmission of multimedia content, using lossless compression, in public cloud environments, and to implement such a system by building an application that manages data in public cloud environments (dropbox and meocloud).
An application was built during this dissertation that meets the objectives set. This system provides the user with a wide range of data management functions for public cloud environments; the user only has to log in to the system with his/her credentials. After login, an access token is generated through the OAuth 1.0 protocol (an authorization protocol). This token is generated only with the consent of the user and allows the application to access the user's data/files without having to use the credentials. With this token the application can operate and unlock the full potential of its functions. The application also offers the user compression and encryption functions so that he/she can make the most of his/her cloud storage system securely. The compression function uses the LZMA compression algorithm; the user only needs to choose the files to be compressed.
For encryption, the AES (Advanced Encryption Standard) algorithm is used, operating with a 128-bit symmetric key defined by the user.
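A minimal Python sketch of the joint compression-and-encryption path (illustrative, not the dissertation's code: CTR mode, the `cryptography` library and raw key input are assumptions; the dissertation only specifies LZMA and AES with a 128-bit user-defined key, and a real system would derive the key from a passphrase, e.g. via PBKDF2):

```python
import lzma
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def compress_and_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Compress with LZMA, then encrypt with AES-128 (CTR mode assumed)."""
    assert len(key) == 16                      # 128-bit symmetric key
    compressed = lzma.compress(plaintext)
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce + enc.update(compressed) + enc.finalize()

def decrypt_and_decompress(blob: bytes, key: bytes) -> bytes:
    """Reverse the pipeline after fetching the blob from cloud storage."""
    nonce, body = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return lzma.decompress(dec.update(body) + dec.finalize())
```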
We built the research in two distinct and complementary parts: the first part consists of the theoretical foundation, and the second part is the development of the computer application in which the data are managed, compressed, stored, and transmitted in various cloud computing environments. The theoretical framework is organized into two chapters: chapter 2 - Background on Cloud Storage, and chapter 3 - Data Compression.
Through the theoretical foundation we sought to demonstrate the relevance of the research, convey some of the pertinent theories and refer, whenever possible, to existing research in the area. The second part of the work was devoted to the development of the application in a cloud environment.
We showed how we built the application and presented its features, advantages, and safety standards for the data. Finally, we reflect on the results according to the theoretical framework made in the first part and the platform development.
We think that the work obtained is positive and that it fits the goals we set ourselves to achieve. This research has some limitations: we believe that the time for its completion was scarce, and the implementation of the platform could benefit from the implementation of other features. In future research it would be appropriate to continue the project by expanding the capabilities of the application, testing its operation with other users and performing comparative tests.
Fundação para a Ciência e a Tecnologia (FCT)
Digital Image Processing
Newspapers and the popular scientific press today publish many examples of highly impressive images. These images range, for example, from those showing regions of star birth in the distant Universe to the extent of the stratospheric ozone depletion over Antarctica in springtime, and to those regions of the human brain affected by Alzheimer’s disease. Processed digitally to generate spectacular images, often in false colour, they all make an immediate and deep impact on the viewer’s imagination and understanding.
Professor Jonathan Blackledge’s erudite but very useful new treatise Digital Image Processing: Mathematical and Computational Methods explains both the underlying theory and the techniques used to produce such images in considerable detail. It also provides many valuable example problems - and their solutions - so that the reader can test his/her grasp of the physical, mathematical and numerical aspects of the particular topics and methods discussed. As such, this magnum opus complements the author’s earlier work Digital Signal Processing. Both books are a wonderful resource for students who wish to make their careers in this fascinating and rapidly developing field which has an ever increasing number of areas of application.
The strengths of this large book lie in:
• excellent explanatory introduction to the subject;
• thorough treatment of the theoretical foundations, dealing with both electromagnetic and acoustic wave scattering and allied techniques;
• comprehensive discussion of all the basic principles, the mathematical transforms (e.g. the Fourier and Radon transforms), their interrelationships and, in particular, Born scattering theory and its application to imaging systems modelling;
• discussion in detail - including the assumptions and limitations - of optical imaging, seismic imaging, medical imaging (using ultrasound), X-ray computer aided tomography, tomography when the wavelength of the probing radiation is of the same order as the dimensions of the scatterer, Synthetic Aperture Radar (airborne or spaceborne), digital watermarking and holography;
• detail devoted to the methods of implementation of the analytical schemes in various case studies and also as numerical packages (especially in C/C++);
• coverage of deconvolution, de-blurring (or sharpening) an image, maximum entropy techniques, Bayesian estimators, techniques for enhancing the dynamic range of an image, methods of filtering images and techniques for noise reduction;
• discussion of thresholding, techniques for detecting edges in an image and for contrast stretching, stochastic scattering (random walk models) and models for characterizing an image statistically;
• investigation of fractal images, fractal dimension segmentation, image texture, the coding and storing of large quantities of data, and image compression such as JPEG;
• valuable summary of the important results obtained in each Chapter given at its end;
• suggestions for further reading at the end of each Chapter.
I warmly commend this text to all readers, and trust that they will find it to be invaluable.
Professor Michael J Rycroft, Visiting Professor at the International Space University, Strasbourg, France, and at Cranfield University, England
Active sampling, scaling and dataset merging for large-scale image quality assessment
The field of subjective assessment is concerned with eliciting human judgements about a set of stimuli. Collecting such data is costly and time-consuming, especially when the subjective study is to be conducted in a controlled environment using specialized equipment. Thus, data from these studies are usually scarce. One of the areas for which obtaining subjective measurements is difficult is image quality assessment. The results from these studies are used to develop and train automated or objective image quality metrics, which, with the advent of deep learning, require large amounts of versatile and heterogeneous data.
I present three main contributions in this dissertation. First, I propose a new active sampling method for efficient collection of pairwise comparisons in subjective assessment experiments. In these experiments, observers are asked to express a preference between two conditions. However, many pairwise comparison protocols require a large number of comparisons to infer accurate scores, which may be unfeasible when each comparison is time-consuming (e.g. videos) or expensive (e.g. medical imaging). This motivates the use of an active sampling algorithm that chooses only the most informative pairs for comparison. I demonstrate, with real and synthetic data, that my algorithm offers the highest accuracy of inferred scores given a fixed number of measurements compared to the existing methods. Second, I propose a probabilistic framework to fuse the outcomes of different psychophysical experimental protocols, namely rating and pairwise comparison experiments. Such a method can be used for merging existing datasets of subjective nature and for experiments in which both measurements are collected. Third, with a new dataset merging technique and by collecting additional cross-dataset quality comparisons, I create a Unified Photometric Image Quality (UPIQ) dataset with over 4,000 images by realigning and merging existing high-dynamic-range (HDR) and standard-dynamic-range (SDR) datasets. The realigned quality scores share the same unified quality scale across all datasets. I then use the new dataset to retrain existing HDR metrics and show that the dataset is sufficiently large for training deep architectures. I show the utility of the dataset and metrics in an application to image compression that accounts for viewing conditions, including screen brightness and the viewing distance.
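To give a flavour of active pair selection, here is a toy Python sketch that repeatedly picks the most uncertain pair under a Thurstone Case V model. This is a generic uncertainty heuristic with made-up scores, not the dissertation's specific algorithm:

```python
import numpy as np
from scipy.stats import norm

# Toy active sampling for pairwise comparisons: always compare the pair
# whose outcome is least predictable under the current score estimates.
rng = np.random.default_rng(0)
n = 6
true_q = rng.normal(size=n)            # hidden "true" quality scores
wins = np.zeros((n, n))                # wins[i, j]: times i beat j

def estimated_scores(wins):
    """Crude score estimate: mean win rate per condition."""
    played = wins + wins.T
    with np.errstate(invalid="ignore"):
        rate = np.where(played > 0, wins / played, 0.5)
    return rate.sum(axis=1)

for trial in range(100):
    q = estimated_scores(wins)
    p = norm.cdf(q[:, None] - q[None, :])   # P(i beats j), Thurstone Case V
    uncertainty = p * (1.0 - p)             # maximal when p is near 0.5
    np.fill_diagonal(uncertainty, -1.0)
    i, j = np.unravel_index(np.argmax(uncertainty), uncertainty.shape)
    # Simulate an observer's judgement from the hidden true scores.
    if rng.normal(true_q[i], 1.0) > rng.normal(true_q[j], 1.0):
        wins[i, j] += 1
    else:
        wins[j, i] += 1
```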