Image = Structure + Few Colors
Topology plays an important role in computer vision by capturing
the structure of the objects. Nevertheless, its potential applications
have not been sufficiently developed yet. In this paper, we combine the
topological properties of an image with hierarchical approaches to build a
topology preserving irregular image pyramid (TIIP). The TIIP algorithm
uses combinatorial maps as data structure which implicitly capture the
structure of the image in terms of the critical points. Thus, we can achieve
a compact representation of an image, preserving the structure and topology
of its critical points (maxima, minima, and saddles). The parallel
algorithmic complexity of building the pyramid is O(log d) where d is
the diameter of the largest object. We achieve promising results for image
reconstruction using only a few color values and the structure of the image,
while preserving fine details, including its texture.
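The abstract above rests on classifying pixels as maxima, minima, and saddles. A minimal sketch of such a classification, assuming a plain 4-neighbour grid comparison rather than the paper's combinatorial-map machinery (the function name and thresholds are illustrative, not the TIIP algorithm itself):

```python
import numpy as np

def classify_critical_points(img):
    """Classify interior pixels of a 2D array as maxima, minima, or saddles.

    Illustrative simplification: a pixel is a maximum (minimum) if it is
    strictly above (below) all 4 neighbours; a saddle if the signs of the
    neighbour differences alternate around the pixel.
    """
    maxima, minima, saddles = [], [], []
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            # Differences to the 4-neighbourhood, in cyclic order.
            diffs = [img[i - 1, j] - c, img[i, j + 1] - c,
                     img[i + 1, j] - c, img[i, j - 1] - c]
            if all(d < 0 for d in diffs):
                maxima.append((i, j))
            elif all(d > 0 for d in diffs):
                minima.append((i, j))
            else:
                # Four sign changes around the cycle indicate a saddle.
                signs = [d > 0 for d in diffs]
                changes = sum(signs[k] != signs[(k + 1) % 4]
                              for k in range(4))
                if changes == 4:
                    saddles.append((i, j))
    return maxima, minima, saddles
```

A combinatorial-map implementation would additionally record how these points are connected, which is what lets the pyramid preserve topology during contraction.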
Convolutional neural network based on sparse graph attention mechanism for MRI super-resolution
Magnetic resonance imaging (MRI) is a valuable clinical tool for displaying
anatomical structures and aiding in accurate diagnosis. Medical image
super-resolution (SR) reconstruction using deep learning techniques can enhance
lesion analysis and assist doctors in improving diagnostic efficiency and
accuracy. However, existing deep learning-based SR methods predominantly rely
on convolutional neural networks (CNNs), which inherently limit the expressive
capabilities of these models and therefore make it challenging to discover
potential relationships between different image features. To overcome this
limitation, we propose an A-network that uses multiple convolution operator
feature extraction (MCO) modules to extract image features with several
convolution operators. These extracted features are passed through multiple
sets of cross-feature extraction modules (MSC) to highlight key features
through inter-channel feature interactions, enabling subsequent feature
learning. An attention-based sparse graph neural network module is incorporated
to establish relationships between pixel features, learning which adjacent
pixels have the greatest impact on determining the features to be filled. To
evaluate our model's effectiveness, we conducted experiments using different
models on data generated from multiple datasets with different degradation
multiples, and the experimental results show that our method is a significant
improvement over the current state-of-the-art methods.
Comment: 12 pages, 6 figures
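The sparse graph attention idea described above — scoring a pixel's neighbours, keeping only the strongest links, and aggregating their features — can be sketched as follows. This is a hypothetical simplification (dot-product scoring, top-k sparsification in numpy), not the paper's actual module:

```python
import numpy as np

def sparse_pixel_attention(feats, neighbors, top_k=2):
    """Aggregate each pixel's features from its most relevant neighbours.

    feats     : (N, C) array of per-pixel feature vectors.
    neighbors : neighbors[i] lists candidate neighbour indices of pixel i.
    top_k     : number of links kept per pixel (the sparsity constraint).
    """
    out = np.zeros_like(feats)
    for i, nbrs in enumerate(neighbors):
        # Score neighbours by feature similarity (dot product).
        scores = np.array([feats[i] @ feats[j] for j in nbrs])
        # Sparsify: keep only the top_k strongest links.
        keep = np.argsort(scores)[-top_k:]
        # Softmax over the kept scores (stabilised by subtracting the max).
        w = np.exp(scores[keep] - scores[keep].max())
        w /= w.sum()
        out[i] = sum(wk * feats[nbrs[k]] for wk, k in zip(w, keep))
    return out
```

In the paper's setting the scores would be learned attention weights over a pixel graph; the point of the sparsification is that only the adjacent pixels with the greatest impact contribute to the reconstructed feature.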
Memory- and time-efficient dense network for single-image super-resolution
Abstract Dense connections in convolutional neural networks (CNNs), which connect each layer to every other layer, can compensate for mid/high-frequency information loss and further enhance high-frequency signals. However, dense CNNs suffer from high memory usage due to the accumulation of concatenating feature-maps stored in memory. To overcome this problem, a two-step approach is proposed that learns the representative concatenating feature-maps. Specifically, a convolutional layer with many more filters is used before concatenating layers to learn richer feature-maps. Therefore, the irrelevant and redundant feature-maps are discarded in the concatenating layers. The proposed method results in 24% and 6% less memory usage and test time, respectively, in comparison to single-image super-resolution (SISR) with the basic dense block. It also improves the peak signal-to-noise ratio by 0.24 dB. Moreover, the proposed method, while producing competitive results, decreases the number of filters in concatenating layers by at least a factor of 2 and reduces the memory consumption and test time by 40% and 12%, respectively. These results suggest that the proposed approach is a more practical method for SISR.
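The memory problem described above comes from channel counts growing linearly with depth as feature-maps are concatenated. A small sketch of that accumulation, and of how keeping only a reduced number of representative channels per layer shrinks the stack (the function and parameter values are illustrative, not the paper's configuration):

```python
def dense_block_channels(num_layers, in_ch, growth, kept=None):
    """Count channels accumulated by concatenation in a dense block.

    Each layer appends `growth` new channels to the running stack. If
    `kept` is set, only that many representative channels per layer are
    concatenated (the redundant ones are discarded), which is the source
    of the memory saving.
    """
    per_layer_kept = kept if kept is not None else growth
    total = in_ch
    per_layer = []
    for _ in range(num_layers):
        total += per_layer_kept
        per_layer.append(total)
    return per_layer
```

For example, four layers on a 64-channel input with growth 32 accumulate 192 channels, while keeping only 16 representative channels per layer ends at 128 — and since activation memory scales with channel count, the concatenation cost drops proportionally.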