10 research outputs found
Learning-Based Dequantization For Image Restoration Against Extremely Poor Illumination
All existing image enhancement methods, such as HDR tone mapping, cannot
recover A/D quantization losses caused by insufficient or excessive lighting
(the underflow and overflow problems). The loss of image detail due to A/D
quantization is complete and cannot be recovered by traditional image
processing methods, but the modern data-driven machine learning approach offers
a much-needed cure to the problem. In this work we propose a novel approach to
restore and enhance images acquired under low and uneven lighting. First, the poor
illumination is algorithmically compensated by emulating the effects of
artificial supplementary lighting. Then a DCNN trained using only synthetic
data recovers the missing detail caused by quantization.
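The underflow/overflow loss described above can be illustrated with a short sketch (hypothetical code, not from the paper): when a smooth scene signal is digitized under severe under-exposure, the 8-bit A/D step collapses it to a handful of grey levels, and no pointwise transform applied afterwards can recover the lost distinctions.

```python
import numpy as np

def quantize_8bit(scene, exposure):
    """Simulate a sensor's A/D conversion: scale by exposure, then clip to
    [0, 255] and round to integer codes. Detail lost in the clipped or
    coarsely quantized regions is irreversible."""
    return np.clip(np.round(scene * exposure), 0, 255).astype(np.uint8)

# a smooth gradient "scene" captured under severe under-exposure
scene = np.linspace(0.0, 255.0, 1000)
dark = quantize_8bit(scene, exposure=0.01)   # signal squeezed into [0, 2.55]
levels = np.unique(dark).size                # only a few distinct codes survive
```

Here `quantize_8bit` and the `exposure` parameter are illustrative names; the point is that `levels` is tiny compared with the 1000 distinct input values, which is exactly the information loss a learned prior must fill in.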
Data comparison schemes for Pattern Recognition in Digital Images using Fractals
Pattern recognition in digital images is a common problem with application in
remote sensing, electron microscopy, medical imaging, seismic imaging and
astrophysics, for example. Although this subject has been researched for over
twenty years, there is still no general solution comparable to the
human cognitive system, in which a pattern can be recognised subject to
arbitrary orientation and scale.
The application of Artificial Neural Networks can in principle provide a very
general solution, provided suitable training schemes are implemented.
However, this approach raises some major issues in practice. First, the CPU
time required to train an ANN for a grey-level or colour image can be very
large, especially if the object has a complex structure with no clear geometrical
features, such as those that arise in remote sensing applications. Secondly,
the core and file-space memory required to represent large images and
their associated data leads to a number of problems in which the use of
virtual memory is paramount.
The primary goal of this research has been to assess methods of image data
compression for pattern recognition using a range of different compression
methods. In particular, this research has resulted in the design and
implementation of a new algorithm for general pattern recognition based on
the use of fractal image compression.
This approach has, for the first time, allowed the pattern recognition problem to
be solved in a way that is invariant to rotation and scale. It allows both ANNs
and correlation to be used, subject to appropriate pre- and post-processing
techniques for digital image processing, an aspect for which a dedicated
programmer's workbench has been developed using X-Designer.
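To illustrate why fractal descriptors lend themselves to orientation- and scale-independent recognition, the sketch below estimates a classic fractal feature, the box-counting dimension, which is unchanged by rotating or uniformly rescaling a shape. This is a hypothetical illustration of fractal features in general, not the fractal-image-compression algorithm developed in the thesis.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary image.

    Cover the image with grids of boxes of decreasing side length s and
    count the boxes containing foreground; the slope of log N(s) against
    log(1/s) approximates the dimension, since N(s) ~ s^(-D)."""
    counts = []
    h, w = img.shape
    for s in sizes:
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if img[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# a filled square is an ordinary 2-D set, so its dimension is close to 2
square = np.ones((64, 64), dtype=bool)
d = box_counting_dimension(square)
```

Because the estimate depends only on how foreground pixels fill space at different scales, it yields (approximately) the same value for rotated or rescaled copies of a shape, which is the property a pattern-recognition front end needs.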
Scalable exploration of highly detailed and annotated 3D models
With the widespread availability of mobile graphics terminals and WebGL-enabled browsers, 3D
graphics over the Internet is thriving. Thanks to recent advances in 3D acquisition and modeling
systems, high-quality 3D models are becoming increasingly common, and are now potentially
available for ubiquitous exploration.
In current 3D repositories, such as Blend Swap, 3D Café or Archive3D, 3D models available for
download are mostly presented through a few user-selected static images. Online exploration is
limited to simple orbiting and/or low-fidelity explorations of simplified models, since photorealistic
rendering quality of complex synthetic environments is still hardly achievable within the
real-time constraints of interactive applications, especially on low-powered mobile devices or
script-based Internet browsers.
Moreover, navigating inside 3D environments, especially on the now pervasive touch devices,
is a non-trivial task, and usability is consistently improved by employing assisted navigation
controls. In addition, 3D annotations are often used in order to integrate and enhance the visual
information by providing spatially coherent contextual information, typically at the expense of
introducing visual clutter.
In this thesis, we focus on efficient representations for interactive exploration and understanding
of highly detailed 3D meshes on common 3D platforms. For this purpose, we present several
approaches exploiting constraints on the data representation for improving the streaming and
rendering performance, and camera movement constraints in order to provide scalable navigation
methods for interactive exploration of complex 3D environments.
Furthermore, we study visualization and interaction techniques to improve the exploration
and understanding of complex 3D models by exploiting guided motion control techniques to aid
the user in discovering contextual information while avoiding cluttering the visualization.
We demonstrate the effectiveness and scalability of our approaches both in large screen museum
installations and in mobile devices, by performing interactive exploration of models ranging
from 9M triangles to 940M triangles.
Recent Advances in Signal Processing
The signal processing task is a critical issue in the majority of new technological inventions and challenges in a variety of applications in both science and engineering fields. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward both students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand. These five categories are ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.
A Study of the Structural Similarity Image Quality Measure with Applications to Image Processing
Since its introduction in 2004, the Structural Similarity (SSIM) index has gained widespread popularity as an image quality assessment measure. SSIM is currently recognized to be one of the most powerful methods of assessing the visual closeness of images. That being said, the Mean Squared Error (MSE), which performs very poorly from a perceptual point of view, still remains the most common optimization criterion in image processing applications because of its relative simplicity along with a number of other properties that are deemed important. In this thesis, some necessary tools to assist in the design of SSIM-optimal algorithms are developed. This work combines theoretical developments with experimental research and practical algorithms.
The description of the mathematical properties of the SSIM index represents the principal theoretical achievement in this thesis. Indeed, it is demonstrated how the SSIM index can be transformed into a distance metric. Local convexity, quasi-convexity, symmetries and invariance properties are also proved. The study of the SSIM index is also generalized to a family of metrics called normalized (or M-relative) metrics.
Various analytical techniques for different kinds of SSIM-based optimization are then devised. For example, the best approximation according to the SSIM is described for orthogonal and redundant basis sets. SSIM-geodesic paths with arclength parameterization are also traced between images. Finally, formulas for SSIM-optimal point estimators are obtained.
On the experimental side of the research, the structural self-similarity of images is studied. This leads to the confirmation of the hypothesis that the main source of self-similarity of images lies in their regions of low variance.
On the practical side, an implementation of local statistical tests on the image residual is proposed for the assessment of denoised images. Also, heuristic estimations of the SSIM index and the MSE are developed.
The research performed in this thesis should lead to the development of state-of-the-art image denoising algorithms. A better comprehension of the mathematical properties of the SSIM index represents another step toward the replacement of the MSE with SSIM in image processing applications.
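For reference, the SSIM index discussed above can be computed in its simplest global (single-window) form from the means, variances, and covariance of the two images, following Wang et al.'s 2004 definition. The sketch below is a minimal illustration of that formula, not the optimization machinery developed in the thesis.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, K1=0.01, K2=0.03):
    """Global SSIM between two equally sized images.

    Uses the standard stabilizing constants C1 = (K1*L)^2, C2 = (K2*L)^2,
    where L is the dynamic range of the pixel values."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1 = (K1 * data_range) ** 2
    C2 = (K2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return num / den

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
score_same = ssim_global(img, img)                      # identical images: 1.0
noisy = np.clip(img + rng.normal(0, 25, img.shape), 0, 255)
score_noisy = ssim_global(img, noisy)                   # degraded: below 1.0
```

An identical pair scores exactly 1, and any distortion lowers the score, which is the perceptual-closeness behavior that motivates replacing MSE with SSIM as an optimization criterion.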
Technology 2004, Vol. 2
Proceedings from symposia of the Technology 2004 Conference, November 8-10, 1994, Washington, DC. Volume 2 features papers on computers and software, virtual reality simulation, environmental technology, video and imaging, medical technology and life sciences, robotics and artificial intelligence, and electronics.