A Novel Multi-Focus Image Fusion Method Based on Stochastic Coordinate Coding and Local Density Peaks Clustering
abstract: Multi-focus image fusion is used in image processing to generate an all-in-focus image with a large depth of field (DOF) from a set of source images focused at different depths. Different approaches fuse multi-focus images in the spatial and transform domains. Dictionary-learning-based sparse representation, one of the most popular image processing techniques, achieves strong performance in multi-focus image fusion. However, most existing dictionary-learning-based fusion methods use the whole source images directly for dictionary learning, which incurs a high error rate and a high computational cost. This paper proposes a novel stochastic coordinate coding-based image fusion framework integrated with local density peaks clustering. The proposed multi-focus image fusion method consists of three steps. First, the source images are split into small image patches, and the patches are classified into a few groups by local density peaks clustering. Next, the grouped patches are used for sub-dictionary learning by stochastic coordinate coding, and the trained sub-dictionaries are combined into a single dictionary for sparse representation. Finally, the simultaneous orthogonal matching pursuit (SOMP) algorithm carries out sparse representation; the obtained sparse coefficients are fused following the max-L1-norm rule, and the fused coefficients are inversely transformed into an image using the learned dictionary. The results and analyses of comparison experiments demonstrate that fused images produced by the proposed method have higher quality than those of existing state-of-the-art methods.
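The final fusion step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the sparse coefficients have already been computed (e.g., by SOMP against the learned dictionary), and the function names `fuse_max_l1` and `reconstruct` are my own.

```python
import numpy as np

def fuse_max_l1(coeffs_a, coeffs_b):
    """Fuse two sparse-coefficient matrices (atoms x patches) by the
    max-L1-norm rule: for each patch column, keep the coefficient
    vector whose L1 norm is larger (a proxy for focus/activity)."""
    l1_a = np.abs(coeffs_a).sum(axis=0)
    l1_b = np.abs(coeffs_b).sum(axis=0)
    return np.where(l1_a >= l1_b, coeffs_a, coeffs_b)

def reconstruct(dictionary, coeffs):
    """Inverse transform: patch matrix = D @ fused coefficients."""
    return dictionary @ coeffs

# Toy example: trivial dictionary, two patches per source image.
D = np.eye(4)
A = np.array([[2., 0.], [0., 0.], [0., 1.], [0., 0.]])  # coeffs, image A
B = np.array([[1., 0.], [0., 3.], [0., 0.], [0., 0.]])  # coeffs, image B
F = fuse_max_l1(A, B)
# Patch 0: ||A||_1 = 2 >= ||B||_1 = 1, so A's column is kept;
# patch 1: ||A||_1 = 1 <  ||B||_1 = 3, so B's column is kept.
patches = reconstruct(D, F)
```

The per-patch comparison (rather than per-coefficient) follows the abstract's description of fusing whole coefficient vectors under the max-L1-norm rule.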
Depth-adaptive methodologies for 3D image categorization
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Image classification is an active topic of computer vision research, dealing with the learning of patterns to allow efficient classification of visual information. However, most research efforts have focused on 2D image classification. In recent years, advances in 3D imaging have enabled new applications and opened new research directions. In this thesis, we present methodologies and techniques for image classification using 3D image data. Our research focuses on the attributes and limitations of depth information with respect to its possible uses, and it led to the development of depth feature extraction methodologies that contribute to the representation of images and so enhance recognition efficiency. We propose a new classification algorithm that adapts to the needs of image representations by implementing a scale-based decision that exploits the discriminant parts of those representations. Building on the design of existing image representation methods, we introduce our own, which describes each image by its depicted content and thus provides a more discriminative image representation. We also propose a dictionary learning method that exploits the relations among training features by assessing the similarity of features originating from similar context regions. Finally, we present our research on deep learning algorithms combined with data and techniques used in 3D imaging. Our novel methods provide state-of-the-art results, thus contributing to the research of 3D image classification.
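To make the idea of a depth feature concrete: the following is a generic, illustrative descriptor (not the thesis's own method) that summarizes an RGB-D patch by a normalized histogram of its depth values. The function name, bin count, and depth range are all assumptions for the sketch.

```python
import numpy as np

def depth_histogram_feature(depth_patch, n_bins=8, d_max=10.0):
    """Illustrative depth descriptor: a normalized histogram of the
    depth values in [0, d_max]. Zero depths are treated as invalid
    and ignored, as is common with consumer depth sensors."""
    valid = depth_patch[depth_patch > 0]
    hist, _ = np.histogram(valid, bins=n_bins, range=(0.0, d_max))
    total = hist.sum()
    return hist / total if total > 0 else np.zeros(n_bins)

# Toy patch: two pixels near the camera, one far, one invalid (0).
patch = np.array([[1.0, 1.0],
                  [9.0, 0.0]])
feat = depth_histogram_feature(patch, n_bins=2, d_max=10.0)
# Two valid values fall in [0, 5), one in [5, 10].
```

A descriptor like this could feed any of the classification pipelines the abstract mentions; its value here is only to show what "depth feature extraction" might look like at the simplest level.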