4D Seismic History Matching Incorporating Unsupervised Learning
The work discussed and presented in this paper focuses on the history
matching of reservoirs by integrating 4D seismic data into the inversion
process using machine learning techniques. A new integrated scheme for the
reconstruction of petrophysical properties with a modified Ensemble Smoother
with Multiple Data Assimilation (ES-MDA) in a synthetic reservoir is proposed.
The permeability field inside the reservoir is parametrised with an
unsupervised learning approach, namely K-means with Singular Value
Decomposition (K-SVD). This is combined with the Orthogonal Matching Pursuit
(OMP) technique, a standard choice in sparsity-promoting regularisation
schemes. Moreover, seismic attributes, in particular acoustic impedance, are
parametrised with the Discrete Cosine Transform (DCT). This novel combination
of techniques from machine learning, sparsity regularisation, seismic imaging
and history matching aims to address the ill-posedness of the inversion of
historical production data efficiently using ES-MDA. In the numerical
experiments provided, I demonstrate that these sparse representations of the
petrophysical properties and the seismic attributes enable closer matches to
the true production data and a more accurate quantification of the
propagating waterfront than more traditional methods that do not use
comparable parametrisation techniques.
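To give a flavour of the DCT parametrisation described above, the sketch below compresses a smooth synthetic 2-D field into a handful of DCT coefficients. This is an illustration only, not the paper's code: the field, grid size and number of retained coefficients are all assumed.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustrative sketch: parametrise a smooth 2-D "impedance" field by
# keeping only its largest-magnitude DCT coefficients.
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
field = np.sin(2 * np.pi * x) * np.cos(np.pi * y)  # smooth synthetic field

coeffs = dctn(field, norm="ortho")
k = 50  # number of retained coefficients (assumed, for illustration)
thresh = np.sort(np.abs(coeffs).ravel())[-k]
sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)  # zero the rest
recon = idctn(sparse, norm="ortho")

rel_err = np.linalg.norm(recon - field) / np.linalg.norm(field)
print(f"kept {k}/{field.size} coefficients, relative error {rel_err:.3e}")
```

Because the DCT concentrates the energy of smooth fields in a few low-frequency coefficients, the reconstruction error stays small even though fewer than 2% of the coefficients are kept; it is this low-dimensional representation that the ensemble smoother then updates.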
Comparison of DCT, SVD and BFOA based multimodal biometric watermarking systems
Abstract: Digital image watermarking is a major approach to hiding biometric information: the watermark data are concealed inside a host image while imposing only imperceptible changes in the picture. With advances in digital image watermarking, most research aims at reliable improvements in robustness against attacks. Here, a reversible invisible watermarking scheme is used for a fingerprint-and-iris multimodal biometric system. A novel approach fuses the different biometric modalities: unique features of the fingerprint and iris biometrics are extracted and fused using several fusion techniques. The performance of these fusion techniques is evaluated, and Discrete Wavelet Transform fusion is identified as the best. The best fused biometric template is then watermarked into a cover image. Watermarking techniques including the Discrete Cosine Transform (DCT), Singular Value Decomposition (SVD) and the Bacterial Foraging Optimization Algorithm (BFOA) are applied to the fused biometric feature image, and the watermarking systems are compared using different metrics. The watermarked images are found to be robust against different attacks, and the BFOA watermarking technique is able to recover the biometric template.
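As a minimal sketch of the SVD watermarking family mentioned in this abstract (this is the textbook singular-value embedding idea, not the paper's BFOA-optimised scheme; image sizes and the embedding strength are assumed):

```python
import numpy as np

# Classic SVD embedding sketch: add a scaled watermark to the host's
# singular values, then invert the steps to extract it.
rng = np.random.default_rng(1)
host = rng.random((64, 64))        # stand-in for the cover image
watermark = rng.random((64, 64))   # stand-in for the fused biometric template
alpha = 0.05                       # embedding strength (assumed)

U, S, Vt = np.linalg.svd(host)
Uw, Sw, Vtw = np.linalg.svd(np.diag(S) + alpha * watermark)
watermarked = U @ np.diag(Sw) @ Vt

# Extraction reverses the embedding using the stored side matrices Uw, Vtw.
S2 = np.linalg.svd(watermarked, compute_uv=False)
recovered = (Uw @ np.diag(S2) @ Vtw - np.diag(S)) / alpha
err = np.abs(recovered - watermark).max()
print(f"max extraction error: {err:.2e}")
```

In the noiseless case the singular values survive the round trip exactly, so the template is recovered to machine precision; robustness studies such as the one above then measure how far this degrades under compression, noise and other attacks.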
Multi-modal dictionary learning for image separation with application in art investigation
In support of art investigation, we propose a new source separation method
that unmixes a single X-ray scan acquired from double-sided paintings. In this
problem, the X-ray signals to be separated have similar morphological
characteristics, which brings previous source separation methods to their
limits. Our solution is to use photographs taken from the front and back-side
of the panel to drive the separation process. The crux of our approach relies
on the coupling of the two imaging modalities (photographs and X-rays) using a
novel coupled dictionary learning framework able to capture both common and
disparate features across the modalities using parsimonious representations;
the common component models features shared by the multi-modal images, whereas
the innovation component captures modality-specific information. As such, our
model enables the formulation of appropriately regularized convex optimization
procedures that lead to the accurate separation of the X-rays. Our dictionary
learning framework can be tailored both to a single- and a multi-scale
framework, with the latter leading to a significant performance improvement.
Moreover, to improve further on the visual quality of the separated images, we
propose to train coupled dictionaries that ignore certain parts of the painting
corresponding to craquelure. Experimentation on synthetic and real data - taken
from digital acquisition of the Ghent Altarpiece (1432) - confirms the
superiority of our method against the state-of-the-art morphological component
analysis technique that uses either fixed or trained dictionaries to perform
image separation.
Comment: submitted to IEEE Transactions on Image Processing
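The coupled dictionary learning at the heart of this method builds on single-modality sparse dictionary learning. The toy sketch below (an assumption-laden illustration using scikit-learn, not the authors' coupled multi-scale framework) learns a dictionary from synthetic patches and reconstructs them from sparse codes:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Toy single-modality dictionary learning: synthesise patches as sparse
# combinations of unknown atoms, then learn a dictionary and sparse codes.
rng = np.random.default_rng(2)
true_atoms = rng.standard_normal((20, 64))
codes = rng.standard_normal((500, 20)) * (rng.random((500, 20)) < 0.1)
patches = codes @ true_atoms + 0.01 * rng.standard_normal((500, 64))

dl = MiniBatchDictionaryLearning(n_components=20, alpha=0.5,
                                 transform_algorithm="omp",
                                 transform_n_nonzero_coefs=3,
                                 random_state=0)
sparse_codes = dl.fit_transform(patches)        # parsimonious representation
recon = sparse_codes @ dl.components_
rel_err = np.linalg.norm(recon - patches) / np.linalg.norm(patches)
print(f"relative reconstruction error: {rel_err:.3f}")
```

The paper's contribution is to couple two such dictionaries (photographs and X-rays) so that a shared "common" code and modality-specific "innovation" codes are learned jointly; the sketch only shows the sparse-representation building block.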
Information recovery from rank-order encoded images
The time to detection of a visual stimulus by the primate eye is recorded at
100–150 ms. This near instantaneous recognition is in spite of the considerable
processing required by the several stages of the visual pathway to recognise and
react to a visual scene. How this is achieved is still a matter of speculation.
Rank-order codes have been proposed as a means of encoding by the primate
eye in the rapid transmission of the initial burst of information from the sensory
neurons to the brain. We study the efficiency of rank-order codes in encoding
perceptually-important information in an image. VanRullen and Thorpe built a
model of the ganglion cell layers of the retina to simulate and study the viability
of rank-order as a means of encoding by retinal neurons. We validate their model
and quantify the information retrieved from rank-order encoded images in terms
of the visually-important information recovered. Towards this goal, we apply
the āperceptual information preservation algorithmā, proposed by Petrovic and
Xydeas after slight modification. We observe a low information recovery due
to losses suffered during the rank-order encoding and decoding processes. We
propose to minimise these losses to recover maximum information in minimum
time from rank-order encoded images. We first maximise information recovery by
using the pseudo-inverse of the filter-bank matrix to minimise losses during
rank-order decoding. We then apply the biological principle of lateral
inhibition to minimise losses during rank-order encoding. In doing so, we
propose the Filter-overlap Correction algorithm. To test the performance of
rank-order codes in
a biologically realistic model, we design and simulate a model of the foveal-pit
ganglion cells of the retina keeping close to biological parameters. We use this
as a rank-order encoder and analyse its performance relative to VanRullen and
Thorpe's retinal model.