Data-Driven Image Restoration
Every day, many images are taken by digital cameras, and people
demand visually accurate and pleasing results. Noise and
blur degrade images captured by modern cameras, while high-level
vision tasks (such as segmentation, recognition, and tracking)
require high-quality images. Therefore, image restoration, and
specifically image deblurring and image denoising, is a critical
preprocessing step.
A fundamental problem in image deblurring is to recover reliably
distinct spatial frequencies that have been suppressed by the
blur kernel. Existing image deblurring techniques often rely on
generic image priors that help recover only part of the frequency
spectrum, such as frequencies near the high end. Motivated by this
limitation, we pose the following specific questions: (i) Does class-specific
information offer an advantage over existing generic priors for
image quality restoration? (ii) If a class-specific prior exists,
how should it be encoded into a deblurring framework to recover
attenuated image frequencies? In this work, we devise a
class-specific prior based on the band-pass filter responses and
incorporate it into a deblurring strategy. Specifically, we show
that the subspace of band-pass filtered images and their
intensity distributions serve as useful priors for recovering
image frequencies.
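As a rough illustration of why some frequency bands need recovering at all, the sketch below (all parameters are illustrative assumptions, not the thesis's actual pipeline) measures how a Gaussian blur kernel attenuates equal-width frequency bands:

```python
import numpy as np

def gaussian_kernel(size=15, sigma=2.0):
    """Normalised 1-D Gaussian blur kernel."""
    x = np.arange(size) - size // 2
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def band_attenuation(kernel, n_bands=4, n_fft=256):
    """Mean magnitude response of the kernel in equal-width
    frequency bands (1.0 means the band passes untouched)."""
    mag = np.abs(np.fft.rfft(kernel, n=n_fft))
    return [float(b.mean()) for b in np.array_split(mag, n_bands)]

bands = band_attenuation(gaussian_kernel())
# The lowest band is largely preserved while the highest band is
# almost entirely suppressed: those suppressed bands are exactly
# the frequencies a deblurring prior must help recover.
```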
Next, we present a novel image denoising algorithm that uses an
external, category-specific image database. In contrast to
existing noisy-image restoration algorithms, our method selects
clean image "support patches" similar to the noisy patch from
an external database. We employ a content-adaptive distribution
model for each patch, deriving the parameters of the
distribution from the support patches. Our objective function is
composed of a Gaussian fidelity term that imposes category-specific
information, and a low-rank term that encourages
similarity between the noisy and the support patches in a robust
manner.
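A toy sketch of the low-rank idea: stack the noisy patch with similar clean support patches and shrink the stack's singular values. The patch data, noise levels, and threshold below are made-up assumptions, and singular value soft-thresholding stands in for the thesis's full objective, whose Gaussian fidelity term is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 16)                       # flattened 4x4 patch
supports = np.stack([clean + 0.01 * rng.standard_normal(16)
                     for _ in range(7)])                # external database hits
noisy = clean + 0.3 * rng.standard_normal(16)

stack = np.vstack([noisy, supports])                    # noisy patch + supports
u, s, vt = np.linalg.svd(stack, full_matrices=False)
s_shrunk = np.maximum(s - 1.0, 0.0)                     # soft-threshold spectrum
estimate = ((u * s_shrunk) @ vt)[0]                     # denoised first row

err_before = float(np.linalg.norm(noisy - clean))
err_after = float(np.linalg.norm(estimate - clean))
```

Because the support patches pin down a near rank-one subspace, the thresholding step suppresses the noise directions of the noisy row while retaining its shared structure.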
Finally, we propose to learn a fully-convolutional network model
that consists of a Chain of Identity Mapping Modules (CIMM) for
image denoising. The CIMM structure possesses two distinctive
features that are important for the noise removal task. Firstly,
each residual unit employs identity mappings as the skip
connections and receives pre-activated input to preserve the
gradient magnitude propagated in both the forward and backward
directions. Secondly, by utilizing dilated kernels for the
convolution layers in the residual branch, each neuron in the
last convolution layer of each module can observe the full
receptive field of the first layer.
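The receptive-field growth from dilated kernels follows the standard stride-1 formula, where each layer adds (kernel size − 1) × dilation. The dilation rates below are illustrative assumptions, not the paper's exact configuration:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 conv layers:
    each layer widens the field by (k - 1) * d pixels."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Four 3x3 layers without dilation vs. with rates 1, 2, 3, 4
# (hypothetical rates for illustration only).
plain = receptive_field([3, 3, 3, 3], [1, 1, 1, 1])    # 9
dilated = receptive_field([3, 3, 3, 3], [1, 2, 3, 4])  # 21
```

The dilated stack more than doubles the field of view at identical parameter count, which is why the last layer of a module can cover the first layer's full receptive field.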
Low-rank and sparse recovery of human gait data
Due to occlusion or detached markers, information can often be lost while capturing human motion with optical tracking systems. Based on three natural properties of human gait movement, this study presents two different approaches to recover corrupted motion data. These properties are used to define a reconstruction model combining low-rank matrix completion of the measured data with a group-sparsity prior on the marker trajectories mapped into the frequency domain. Unlike most existing approaches, the proposed methodology is fully unsupervised and does not need training data or kinematic information about the user. We evaluated our methods on four different gait datasets with various gap lengths and compared their performance with a state-of-the-art approach based on principal component analysis (PCA). Our results show that the proposed methods recover missing data more precisely, with a reduction of at least 2 mm in mean reconstruction error compared to the literature method. When only a small number of marker trajectories is available, our methods reduce the mean reconstruction error by more than 14 mm compared to the literature approach.
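The study combines low-rank completion with a frequency-domain group-sparsity prior; the sketch below shows only the low-rank half, via the simple "hard-impute" iteration on a toy rank-1 "trajectory" matrix with one dropped entry (all values are made up, and this is not the study's actual solver):

```python
import numpy as np

M = np.outer([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # true rank-1 data
observed = np.ones_like(M, dtype=bool)
observed[2, 2] = False                           # simulated marker dropout

X = np.where(observed, M, 0.0)                   # initialise the gap with 0
for _ in range(100):
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    X = s[0] * np.outer(u[:, 0], vt[0])          # best rank-1 approximation
    X[observed] = M[observed]                    # keep the measured entries

# X[2, 2] now approximates the true missing value 9.0
```

Alternating between a low-rank projection and re-imposing the measurements converges to the true entry here because the data is exactly rank-1; the group-sparsity prior in the paper further constrains realistic gait trajectories.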
Underwater image restoration: super-resolution and deblurring via sparse representation and denoising by means of marine snow removal
Underwater imaging has been widely used as a tool in many fields; however, a major issue is the quality of the resulting images/videos. Due to light's interaction with water and its constituents, acquired underwater images/videos often suffer from a significant amount of scatter (blur, haze) and noise. In light of these issues, this thesis considers the problems of low-resolution, blurred, and noisy underwater images and proposes several approaches to improve the quality of such images/video frames.
Quantitative and qualitative experiments validate the success of the proposed algorithms.
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 has gathered about
70 international participants and has featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problem; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Single atom imaging with time-resolved electron microscopy
Developments in scanning transmission electron microscopy (STEM) have opened
up new possibilities for time-resolved imaging at the atomic scale. However, rapid
imaging of single atom dynamics brings with it a new set of challenges, particularly
regarding noise and the interaction between the electron beam and the specimen. This
thesis develops a set of analytical tools for capturing atomic motion and analyzing the
dynamic behaviour of materials at the atomic scale.
Machine learning is increasingly playing an important role in the analysis of electron
microscopy data. In this light, new unsupervised learning tools are developed here for
noise removal under low-dose imaging conditions and for identifying the motion of
surface atoms. The scope for real-time processing and analysis is also explored, which is
of rising importance as electron microscopy datasets grow in size and complexity.
These advances in image processing and analysis are combined with computational
modelling to uncover new chemical and physical insights into the motion of atoms
adsorbed onto surfaces. Of particular interest are systems for heterogeneous catalysis,
where the catalytic activity can depend intimately on the atomic environment. The
study of Cu atoms on a graphene oxide support reveals that the atoms undergo
anomalous diffusion as a result of spatial and energetic disorder present in the substrate.
The investigation is extended to examine the structure and stability of small Cu clusters
on graphene oxide, with atomistic modelling used to understand the significant role
played by the substrate. Finally, the analytical methods are used to study the surface
reconstruction of silicon alongside the electron beam-induced motion of adatoms on
the surface.
Taken together, these studies demonstrate the materials insights that can be obtained
with time-resolved STEM imaging, and highlight the importance of combining
state-of-the-art imaging with computational analysis and atomistic modelling to
quantitatively characterize the behaviour of materials with atomic resolution.
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement 291522-3DIMAGE, as well as from the European Union Seventh Framework Programme under Grant Agreement 312483-ESTEEM2 (Integrated Infrastructure Initiative - I3).
Joint optimization of manifold learning and sparse representations for face and gesture analysis
Face and gesture understanding algorithms are powerful enablers in intelligent vision systems for surveillance, security, entertainment, and smart spaces. In the future, complex networks of sensors and cameras may dispense directions to lost tourists, perform directory lookups in the office lobby, or contact the proper authorities in case of an emergency. To be effective, these systems will need to embrace human subtleties while interacting with people in their natural conditions. Computer vision and machine learning techniques have recently become adept at solving face and gesture tasks using posed datasets in controlled conditions. However, spontaneous human behavior under unconstrained conditions, or in the wild, is more complex and is subject to considerable variability from one person to the next. Uncontrolled conditions such as lighting, resolution, noise, occlusions, pose, and temporal variations complicate the matter further. This thesis advances the field of face and gesture analysis by introducing a new machine learning framework based upon dimensionality reduction and sparse representations that is shown to be robust in posed as well as natural conditions. Dimensionality reduction methods take complex objects, such as facial images, and attempt to learn lower-dimensional representations embedded in the higher-dimensional data. These alternate feature spaces are computationally more efficient and often more discriminative. The performance of various dimensionality reduction methods on geometric and appearance-based facial attributes is studied, leading to robust facial pose and expression recognition models. The parsimonious nature of sparse representations (SR) has successfully been exploited for the development of highly accurate classifiers for various applications. Despite the successes of SR techniques, large dictionaries and high-dimensional data can make these classifiers computationally demanding.
Further, sparse classifiers are subject to the adverse effects of a phenomenon known as coefficient contamination, where, for example, variations in pose may affect identity and expression recognition. This thesis analyzes the interaction between dimensionality reduction and sparse representations to present a unified sparse representation classification framework that addresses both issues of computational complexity and coefficient contamination. Semi-supervised dimensionality reduction is shown to mitigate the coefficient contamination problems associated with SR classifiers. The combination of semi-supervised dimensionality reduction with SR systems forms the cornerstone of a new face and gesture framework called Manifold-based Sparse Representations (MSR). MSR is shown to deliver state-of-the-art facial understanding capabilities. To demonstrate the applicability of MSR to new domains, MSR is expanded to include temporal dynamics. The joint optimization of dimensionality reduction and SRs for classification purposes is a relatively new field. The combination of both concepts into a single objective function produces a relation that is neither convex nor directly solvable. This thesis studies this problem to introduce a new jointly optimized framework. This framework, termed LGE-KSVD, utilizes variants of the Linear extension of Graph Embedding (LGE) along with modified K-SVD dictionary learning to jointly learn the dimensionality reduction matrix, sparse representation dictionary, sparse coefficients, and sparsity-based classifier. By injecting LGE concepts directly into the K-SVD learning procedure, this research removes the support constraints K-SVD imparts on dictionary element discovery. Results are shown for facial recognition, facial expression recognition, and human activity analysis, and, with the addition of a concept called active difference signatures, the framework delivers robust gesture recognition from Kinect or similar depth cameras.
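The core sparse-representation-classification idea can be sketched with made-up data: represent a test sample with each class's dictionary and assign the class with the smallest reconstruction residual. Real SRC solves an l1-regularised code over the concatenated dictionary; ordinary least squares per class is substituted here only to keep the sketch short, and all dictionaries below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
D0 = rng.standard_normal((20, 5))   # class-0 dictionary (atoms as columns)
D1 = rng.standard_normal((20, 5))   # class-1 dictionary

# A sample constructed to lie in class 1's subspace.
y = D1 @ np.array([1.0, -0.5, 0.0, 2.0, 0.3])

def residual(D, y):
    """Reconstruction error of y using dictionary D (least squares)."""
    code, *_ = np.linalg.lstsq(D, y, rcond=None)
    return float(np.linalg.norm(y - D @ code))

predicted = 0 if residual(D0, y) < residual(D1, y) else 1
```

The class whose subspace explains the sample best wins; coefficient contamination arises when nuisance variations (e.g. pose) leak into the codes, which is what the semi-supervised dimensionality reduction step is shown to mitigate.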