Combined Industry, Space and Earth Science Data Compression Workshop
The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah, in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements and the constraints imposed by the data collection, transmission, distribution, and archival systems.
Wavelet Theory
The wavelet is a powerful mathematical tool that plays an important role in science and technology. This book looks at some of the most creative and popular applications of wavelets, including biomedical signal processing, image processing, communication signal processing, the Internet of Things (IoT), acoustical signal processing, financial market data analysis, energy and power management, and COVID-19 pandemic measurements and calculations. The editor's personal interests lie in applying the wavelet transform to identify time-domain changes in signals and their corresponding frequency components, and in improving power amplifier behavior.
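As an illustration of the transform these applications build on, a one-level Haar decomposition, the simplest wavelet, splits a signal into a low-frequency approximation and a high-frequency detail. A minimal sketch, using the averaging normalization rather than the orthonormal 1/√2 convention:

```python
def haar_dwt(signal):
    """One level of the Haar wavelet transform (averaging normalization).

    Returns (approximation, detail): pairwise means capture the smooth,
    low-frequency content; pairwise half-differences capture edges and
    other high-frequency content."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse: each pair is reconstructed as (a + d, a - d)."""
    out = []
    for a, d in zip(approx, detail):
        out.extend((a + d, a - d))
    return out

a, d = haar_dwt([4, 6, 10, 12])
print(a, d)             # [5.0, 11.0] [-1.0, -1.0]
print(haar_idwt(a, d))  # [4.0, 6.0, 10.0, 12.0]
```

The detail coefficients are large exactly where the signal changes quickly, which is what makes wavelets useful for detecting time-domain changes and their frequency content.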
Realtime image noise reduction FPGA implementation with edge detection
The purpose of this dissertation was to develop and implement, in a Field
Programmable Gate Array (FPGA), a noise reduction algorithm for real-time
sensor-acquired images. A Moving Average filter was chosen for its low
computational cost, speed, good precision, and low-to-medium hardware
resource utilization. The technique is simple to implement; however, if all
pixels are indiscriminately filtered, the result is an undesirably blurred image.
Since the human eye is more sensitive to contrast, a technique was
introduced to preserve sharp contour transitions, which, in the author's opinion,
is the dissertation's contribution. Both synthetic and real images were tested.
The synthetic images, composed of both sharp and soft tone transitions, were
generated with a purpose-built algorithm, while the real images were captured
with a high-resolution sensor of 8192 (8k) shades, scaled up to 10 × 10³ shades.
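The edge-preserving idea described above can be sketched in a few lines: smooth each pixel with a moving average unless the local contrast inside the window suggests a contour. This is a hypothetical 1-D illustration of the general technique, not the dissertation's FPGA implementation; the window size and `edge_threshold` values are made up for the example.

```python
def edge_preserving_average(pixels, window=3, edge_threshold=50):
    """Moving-average smoothing that skips likely edges.

    A pixel is left untouched when the spread of values in its window
    exceeds edge_threshold, preserving sharp contour transitions.
    Integer arithmetic mirrors an FPGA-style fixed-point design."""
    half = window // 2
    out = list(pixels)
    for i in range(half, len(pixels) - half):
        neighborhood = pixels[i - half:i + half + 1]
        if max(neighborhood) - min(neighborhood) > edge_threshold:
            continue  # strong local contrast: keep the original pixel
        out[i] = sum(neighborhood) // window
    return out

row = [10, 12, 11, 13, 200, 202, 201, 199]  # soft region, then a sharp step
print(edge_preserving_average(row))  # [10, 11, 12, 13, 200, 201, 200, 199]
```

Note how the step between 13 and 200 survives unfiltered, while the gentle fluctuations on either side are smoothed.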
A least-squares polynomial data-smoothing filter, Savitzky-Golay, was
used for comparison. It can be adjusted through three degrees of freedom: the
window frame length, which sets the size of the pixel neighborhood involved in
the filtering; the derivative order, which varies the curviness; and the
polynomial coefficients, which change how closely the curve adapts to the data.
The Moving Average filter permits only one degree of freedom, the window
frame length. Tests revealed promising results with 2nd- and 4th-order
polynomials. Higher qualitative results were achieved with Savitzky-Golay,
owing to its better preservation of signal characteristics, especially at high
frequencies.
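For the common window-5, order-2 configuration, Savitzky-Golay smoothing reduces to a fixed convolution whose weights (-3, 12, 17, 12, -3)/35 are the classical closed-form least-squares coefficients. A small sketch, not the dissertation's implementation:

```python
def savgol_5_2(samples):
    """Savitzky-Golay smoothing, window length 5, polynomial order 2.

    Each interior sample is replaced by the value at the center of the
    best-fit quadratic over its 5-point neighborhood; the closed-form
    weights for that fit are (-3, 12, 17, 12, -3)/35. Border samples
    are left unchanged in this sketch."""
    weights = (-3, 12, 17, 12, -3)
    out = list(samples)
    for i in range(2, len(samples) - 2):
        out[i] = sum(w * samples[i + k] for w, k in zip(weights, range(-2, 3))) / 35
    return out

# An order-2 filter reproduces a pure quadratic exactly,
# illustrating its superior preservation of signal shape:
quad = [x * x for x in range(8)]
print(savgol_5_2(quad))  # [0, 1, 4.0, 9.0, 16.0, 25.0, 36, 49]
```

A plain moving average over the same window would flatten the curvature; the polynomial fit does not, which is the signal-preservation advantage the tests observed.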
The FPGA algorithms were implemented with 64-bit integer registers, serving
two purposes: to increase precision, thereby reducing the error relative to a
floating-point implementation, and to accommodate the register growth caused
by cumulative multiplications. Results were then compared with MATLAB's
double-precision 64-bit floating-point computations to verify the error difference
between both. The comparison metrics used were the Mean Squared Error, the Signal-to-Noise Ratio, and a similarity coefficient.
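The two scalar error metrics named above can be computed directly; a minimal sketch follows (the similarity coefficient is omitted, since the abstract does not specify which one was used):

```python
import math

def mse(reference, estimate):
    """Mean Squared Error between two equal-length signals."""
    return sum((r - e) ** 2 for r, e in zip(reference, estimate)) / len(reference)

def snr_db(reference, estimate):
    """Signal-to-Noise Ratio in dB: signal power over error power."""
    p_signal = sum(r * r for r in reference)
    p_error = sum((r - e) ** 2 for r, e in zip(reference, estimate))
    return 10 * math.log10(p_signal / p_error)

ref, est = [1, 2, 3], [1, 2, 4]
print(round(mse(ref, est), 4))     # 0.3333
print(round(snr_db(ref, est), 2))  # 11.46
```

In the dissertation's setting, `reference` would be the MATLAB double-precision result and `estimate` the FPGA integer-register result.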
Laterally constrained low-rank seismic data completion via cyclic-shear transform
A crucial step in seismic data processing consists in reconstructing the
wavefields at spatial locations where faulty or absent sources and/or receivers
result in missing data. Several developments in seismic acquisition and
interpolation strive to restore signals fragmented by sampling limitations;
still, seismic data frequently remain poorly sampled in the source, receiver,
or both coordinates. An intrinsic limitation of real-life dense acquisition
systems, which are often exceedingly expensive, is that they remain unable to
circumvent various physical and environmental obstacles, ultimately hindering a
proper recording scheme. In many situations, when the preferred reconstruction
method fails to render the actual continuous signals, subsequent imaging
studies are negatively affected by sampling artefacts. A recent alternative
builds on low-rank completion techniques to deliver superior restoration
results on seismic data, paving the way for data kernel compression that can
potentially unlock multiple modern processing methods so far prohibited in 3D
field scenarios. In this work, we propose a novel transform domain that reveals
the low-rank character of seismic data while preventing the inherent matrix
enlargement introduced when the data are sorted in the midpoint-offset domain,
and we develop a robust extension of the current matrix completion framework
that accounts for lateral physical constraints ensuring a degree of proximity
similarity among neighbouring points. Our strategy successfully interpolates
missing sources and receivers simultaneously in synthetic and field data.
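To make the idea of low-rank completion concrete: when the missing entries belong to a matrix that is (approximately) low rank, alternating least squares over the observed entries can recover them. This rank-1, pure-Python toy is illustrative only and is unrelated to the paper's actual cyclic-shear transform or solver.

```python
def rank1_complete(M, observed, iters=50):
    """Fill missing entries of a rank-1 matrix by alternating least squares.

    M: 2-D list of values (missing entries may hold anything).
    observed: same-shape 2-D list of booleans, True where M is known.
    Models M[i][j] ~ u[i] * v[j] and fits u, v only on observed entries."""
    m, n = len(M), len(M[0])
    u, v = [1.0] * m, [1.0] * n
    for _ in range(iters):
        for i in range(m):  # least-squares update of u with v fixed
            num = sum(M[i][j] * v[j] for j in range(n) if observed[i][j])
            den = sum(v[j] ** 2 for j in range(n) if observed[i][j])
            if den:
                u[i] = num / den
        for j in range(n):  # least-squares update of v with u fixed
            num = sum(M[i][j] * u[i] for i in range(m) if observed[i][j])
            den = sum(u[i] ** 2 for i in range(m) if observed[i][j])
            if den:
                v[j] = num / den
    return [[u[i] * v[j] for j in range(n)] for i in range(m)]

# Rank-1 data with the center entry missing (true value: 10).
M = [[4, 5, 6], [8, 0, 12], [12, 15, 18]]
mask = [[True, True, True], [True, False, True], [True, True, True]]
X = rank1_complete(M, mask)
print(round(X[1][1], 6))  # 10.0
```

The role of a good transform domain, as proposed in the paper, is precisely to make the data matrix low rank so that this kind of completion applies.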
Information Extraction and Modeling from Remote Sensing Images: Application to the Enhancement of Digital Elevation Models
To deal with highly complex data, such as remote sensing images with metric resolution over large areas, an innovative, fast, and robust image processing system is presented.
The modeling of increasing levels of information is used to extract, represent, and link image features to semantic content.
The potential of the proposed techniques is demonstrated with an application that enhances and regularizes digital elevation models based on information collected from remote sensing images.
Hyperspectral unmixing: a theoretical aspect and applications to CRISM data processing
Hyperspectral imaging has been deployed in earth and planetary remote sensing, and has contributed to the development of new methods for monitoring the earth environment and to new discoveries in planetary science. It has given scientists and engineers a new way to observe the surface of the earth and planetary bodies by measuring the spectroscopic spectrum at a pixel scale.
Hyperspectral images require complex processing before practical use. One of the important goals of hyperspectral imaging is to obtain images of the reflectance spectrum. A raw image obtained by hyperspectral remote sensing usually undergoes conversion to a physical quantity representing the intensity of light energy, called radiance. In order to obtain the reflectance spectrum of the surface, the contribution of the atmosphere must be removed and the result divided by a "white reference" spectrum. Furthermore, the obtained reflectance spectra of image pixels are likely to be mixtures of multiple species due to the limited spatial resolution achievable from orbits around planets.
Hyperspectral unmixing is an attempt to unmix those pixels: to identify their substantial components and estimate the components' fractional abundances. Hyperspectral unmixing has been widely explored in the literature, but there are still many aspects yet to be studied. The majority of research focuses on the development of methods to retrieve the correct substantial components and accurate fractional abundances; their theoretical aspects are rarely investigated. Chapter 2 pursues a theoretical aspect of sparse unmixing, one of the hyperspectral unmixing problems, and derives theoretical conditions that guarantee the correct identification of substantial components.
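Under the standard linear mixing model, each pixel spectrum is a weighted sum of endmember spectra, and unmixing estimates the weights. A toy two-endmember sketch using unconstrained least squares (real unmixing methods add nonnegativity and sum-to-one constraints and handle many endmembers):

```python
def unmix_two_endmembers(pixel, e1, e2):
    """Estimate abundances (a1, a2) in the linear mixing model
    pixel ≈ a1 * e1 + a2 * e2 by solving the 2x2 normal equations.

    pixel, e1, e2: equal-length lists of reflectance values per band."""
    g11 = sum(x * x for x in e1)
    g12 = sum(x * y for x, y in zip(e1, e2))
    g22 = sum(y * y for y in e2)
    b1 = sum(p * x for p, x in zip(pixel, e1))
    b2 = sum(p * y for p, y in zip(pixel, e2))
    det = g11 * g22 - g12 * g12
    return (g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det

# A pixel mixed as 30% of endmember 1 and 70% of endmember 2.
e1, e2 = [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]
mixed = [0.3 * a + 0.7 * b for a, b in zip(e1, e2)]
a1, a2 = unmix_two_endmembers(mixed, e1, e2)
print(round(a1, 6), round(a2, 6))  # 0.3 0.7
```

Sparse unmixing, the subject of Chapter 2, generalizes this to a large spectral library and seeks the few endmembers that actually contribute.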
Hyperspectral unmixing can also be used in other stages of hyperspectral data processing. Chapter 3 explores the application of hyperspectral unmixing to the processing of hyperspectral images acquired by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) onboard the Mars Reconnaissance Orbiter (MRO). In particular, new atmospheric correction and de-noising methods for the CRISM data that use hyperspectral unmixing to model surface spectra are introduced. The new methods remove most of the problematic systematic artifacts present in CRISM images and significantly improve signal quality.
Chapter 4 investigates how hyperspectral images acquired from orbit can be combined with ground exploration. With the recent surge of Martian ground rover missions, it is important to effectively integrate the knowledge obtained by hyperspectral remote sensing from orbit into ground exploration. Specifically, this dissertation solves the problem of matching hyperspectral image pixels obtained by CRISM with ground mega-pixel images acquired by the Mast Camera (Mastcam) installed on the Curiosity rover on Mars. A new systematic methodology to map the CRISM and Mastcam images onto high-resolution surface topography is developed.
Context-aware Facial Inpainting with GANs
Facial inpainting is a difficult problem due to the complex structural patterns of a face image. Using irregular hole masks to generate contextualised features in a face image is becoming increasingly important in image inpainting. Existing methods generate images using deep learning models, but aberrations persist. The reason is that key operations required for feature information dissemination, such as feature extraction mechanisms, feature propagation, and feature regularizers, are frequently overlooked or ignored during the design stage. A comprehensive review is conducted to examine existing methods and identify the research gaps that serve as the foundation for this thesis.
The aim of this thesis is to develop novel facial inpainting algorithms capable of extracting contextualised features. First, a Symmetric Skip Connection Wasserstein GAN (SWGAN) is proposed to inpaint high-resolution face images that are perceptually consistent with the rest of the image. Second, a perceptual adversarial network (RMNet) is proposed that includes feature extraction and feature propagation mechanisms targeting missing regions while preserving visible ones. Third, a foreground-guided facial inpainting method is proposed with occlusion reasoning capability, which guides the model toward learning contextualised feature extraction and propagation while maintaining fidelity. Fourth, V-LinkNet is proposed, which takes into account the critical operations for information dissemination. Additionally, a standard protocol is introduced to prevent potential biases in the performance evaluation of facial inpainting algorithms.
The experimental results show that V-LinkNet achieved the best results, with an SSIM of 0.96 on the standard protocol. In conclusion, generating facial images with contextualised features is important for achieving realistic results in inpainted regions. Additionally, it is critical to follow the standard protocol when comparing different approaches. Finally, this thesis outlines new insights and future directions for image inpainting.
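Since SSIM is the headline metric here, a minimal sketch of its global (single-window) form may help; practical evaluations normally compute SSIM over sliding local windows rather than this whole-image version.

```python
def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global (single-window) SSIM between two equal-size 8-bit grayscale
    images given as flat lists of intensities. c1 and c2 are the usual
    stabilizing constants for a dynamic range of 255."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                       # luminance means
    vx = sum((a - mx) ** 2 for a in x) / n                # variances
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

img = [10, 20, 30, 40, 50, 60]
print(ssim_global(img, img))                             # 1.0
print(ssim_global(img, [12, 18, 33, 37, 55, 58]) < 1.0)  # True
```

Identical images score exactly 1.0, and any difference in luminance, contrast, or structure pulls the score below 1, which is why an SSIM of 0.96 indicates close structural agreement with the ground truth.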
Courbure discrète : théorie et applications (Discrete Curvature: Theory and Applications)
The present volume contains the proceedings of the 2013 Meeting on Discrete Curvature, held at CIRM, Luminy, France. The aim of this meeting was to bring together researchers from various backgrounds, ranging from mathematics to computer science, with a focus on both theory and applications. With 27 invited talks and 8 posters, the conference attracted 70 researchers from all over the world. The challenge of finding common ground on the topic of discrete curvature was met with success, and these proceedings are a testimony to this work.