214 research outputs found
Advanced Visual Computing for Image Saliency Detection
Saliency detection is a category of computer vision algorithms that aim to identify the most salient object in a given image. Existing saliency detection methods can generally be categorized as bottom-up or top-down, and the now-prevalent deep neural network (DNN) has begun to show its applications in saliency detection in recent years. However, challenges in existing methods, such as problematic pre-assumptions, inefficient feature integration and the absence of high-level feature learning, prevent them from achieving superior performance. In this thesis, to address the limitations above, we have proposed multiple novel models with favorable performance. Specifically, we first systematically reviewed the development of saliency detection and its related works, and then proposed four new methods, two based on low-level image features and two based on DNNs. The regularized random walks ranking method (RR) and its reversion-correction-improved version (RCRR) are based on conventional low-level image features, and exhibit higher accuracy and robustness in extracting image-boundary-based foreground/background queries; the background search and foreground estimation (BSFE) and dense and sparse labeling (DSL) methods are based on DNNs, which have shown dominant advantages in high-level image feature extraction, as well as the combined strength of multi-dimensional features. Each of the proposed methods is evaluated by extensive experiments, and all of them perform favorably against the state of the art, especially the DSL method, which achieves remarkably higher performance than sixteen state-of-the-art methods (including ten conventional methods and six learning-based methods) on six well-recognized public datasets. The successes of our proposed methods reveal more potential and meaningful applications of saliency detection in real-life computer vision tasks
Robust saliency detection via regularized random walks ranking
In the field of saliency detection, many graph-based algorithms heavily depend on the accuracy of the pre-processed superpixel segmentation, which leads to a significant loss of detail from the input image. In this paper, we propose a novel bottom-up saliency detection approach that takes advantage of both region-based features and image details. To provide more accurate saliency estimations, we first optimize the image boundary selection by the proposed erroneous-boundary removal. By taking the image details and region-based estimations into account, we then propose regularized random walks ranking to formulate pixel-wise saliency maps from the superpixel-based background and foreground saliency estimations. Experimental results on two public datasets indicate the significantly improved accuracy and robustness of the proposed algorithm in comparison with 12 state-of-the-art saliency detection approaches
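The regularized ranking formulation above is specific to the paper, but the underlying machinery is a random walk over a superpixel affinity graph seeded with background/foreground queries. As a rough, hypothetical sketch of that underlying idea (plain random walk with restart on a toy graph, not the paper's regularized formulation; `rwr_ranking` and the toy affinity matrix are illustrative assumptions):

```python
import numpy as np

def rwr_ranking(W, seed_idx, restart=0.15):
    """Rank graph nodes by random walk with restart from seed nodes.

    W        : (n, n) symmetric non-negative affinity matrix over superpixels
    seed_idx : indices of query (seed) nodes, e.g. image-boundary superpixels
    restart  : probability of jumping back to the seed distribution
    """
    n = W.shape[0]
    d = W.sum(axis=1)
    P = W / d[:, None]                   # row-stochastic transition matrix
    e = np.zeros(n)
    e[seed_idx] = 1.0 / len(seed_idx)    # uniform restart over the seeds
    # closed-form stationary distribution of the restarted walk:
    # r = restart * (I - (1 - restart) * P^T)^{-1} e
    r = restart * np.linalg.solve(np.eye(n) - (1 - restart) * P.T, e)
    return r

# toy 4-node chain graph 0-1-2-3, seeded at node 0
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = rwr_ranking(W, seed_idx=[0])
# nodes closer to the seed receive higher relevance scores
assert scores[1] > scores[3]
```

In a saliency pipeline of this family, nodes ranked high from boundary (background) seeds are suppressed, and the complement serves as the foreground estimate.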
A graph-based mathematical morphology reader
This survey paper aims at providing a "literary" anthology of mathematical
morphology on graphs. It describes in the English language many ideas stemming
from a large number of different papers, hence providing a unified view of an
active and diverse field of research
3D time series analysis of cell shape using Laplacian approaches
Background:
Fundamental cellular processes such as cell movement, division or food uptake critically depend on cells being able to change shape. Fast acquisition of three-dimensional image time series has now become possible, but we lack efficient tools for analysing shape deformations in order to understand the real three-dimensional nature of shape changes.
Results:
We present a framework for 3D+time cell shape analysis. The main contribution is three-fold: First, we develop a fast, automatic random walker method for cell segmentation. Second, a novel topology fixing method is proposed to fix segmented binary volumes without spherical topology. Third, we show that algorithms used for each individual step of the analysis pipeline (cell segmentation, topology fixing, spherical parameterization, and shape representation) are closely related to the Laplacian operator. The framework is applied to the shape analysis of neutrophil cells.
Conclusions:
The method we propose for cell segmentation is faster than the traditional random walker method or the level set method, and performs better on 3D time-series of neutrophil cells, which are comparatively noisy as stacks have to be acquired fast enough to account for cell motion. Our method for topology fixing outperforms the tools provided by SPHARM-MAT and SPHARM-PDM in terms of their successful fixing rates. The different tasks in the presented pipeline for 3D+time shape analysis of cells can be solved using Laplacian approaches, opening the possibility of eventually combining individual steps in order to speed up computations
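The Laplacian connection noted in the conclusions can be illustrated with the classic random walker segmentation system (Grady's combinatorial Dirichlet formulation), which segmentation methods of this kind build on. This is a generic sketch on a toy chain graph, not the authors' faster variant; `random_walker_prob` and the toy weights are illustrative assumptions:

```python
import numpy as np

def random_walker_prob(W, fg_seeds, bg_seeds):
    """Probability that a walker from each node first reaches a foreground seed.

    Solves the combinatorial Dirichlet problem L_u x_u = -B x_s, the core
    linear system behind Laplacian-based random walker segmentation.
    W : (n, n) symmetric affinity matrix; seeds are node index lists.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
    seeds = list(fg_seeds) + list(bg_seeds)
    free = [i for i in range(n) if i not in seeds]
    xs = np.array([1.0] * len(fg_seeds) + [0.0] * len(bg_seeds))
    # partition L into free/seeded blocks and solve for the free nodes
    Lu = L[np.ix_(free, free)]
    B = L[np.ix_(free, seeds)]
    xu = np.linalg.solve(Lu, -B @ xs)
    x = np.zeros(n)
    x[fg_seeds] = 1.0
    x[free] = xu
    return x

# chain of 5 nodes with unit weights; node 0 foreground seed, node 4 background
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
p = random_walker_prob(W, fg_seeds=[0], bg_seeds=[4])
# on a uniform chain the harmonic solution decays linearly: 1, .75, .5, .25, 0
```

Thresholding `p` at 0.5 yields the segmentation; on image grids `W` is sparse and the same system is solved with sparse solvers.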
Deep Networks Based Energy Models for Object Recognition from Multimodality Images
Object recognition has been extensively investigated in the computer vision area, since it is a fundamental and essential technique in many important applications, such as robotics, autonomous driving, automated manufacturing, and security surveillance. According to the selection criteria, object recognition mechanisms can be broadly categorized into object proposal and classification, eye fixation prediction, and salient object detection. Object proposal aims to capture all potential objects from natural images and then classify them into predefined groups for image description and interpretation. For a given natural image, human perception is normally attracted to the most visually important regions/objects. Therefore, eye fixation prediction attempts to localize interesting points or small regions according to the human visual system (HVS). Based on these interesting points and small regions, salient object detection algorithms propagate the extracted information to achieve a refined segmentation of the whole salient objects. In addition to natural images, object recognition also plays a critical role in clinical practice. The informative insights into the anatomy and function of the human body obtained from multimodality biomedical images such as magnetic resonance imaging (MRI), transrectal ultrasound (TRUS), computed tomography (CT) and positron emission tomography (PET) facilitate precision medicine. Automated object recognition from biomedical images enables non-invasive diagnosis and treatment via automated tissue segmentation, tumor detection and cancer staging. Conventional recognition methods normally utilize handcrafted features (such as oriented gradients, curvature, Haar features, Haralick texture features, Laws energy features, etc.) depending on the image modalities and object characteristics, so it is challenging to build a general model for object recognition.
In contrast to handcrafted features, deep neural networks (DNNs) can extract self-adaptive features tailored to a specific task, and hence can be employed in general object recognition models. These DNN features are adjusted semantically and cognitively by tens of millions of parameters, loosely corresponding to the mechanism of the human brain, and therefore lead to more accurate and robust results. Motivated by this, in this thesis we proposed DNN-based energy models to recognize objects in multimodality images. Toward object recognition, the major contributions of this thesis can be summarized as follows: 1. We first proposed a new comprehensive autoencoder model to recognize the position and shape of the prostate in magnetic resonance images. Different from most autoencoder-based methods, we focused on positive samples to train the model, so that the extracted features all come from the prostate. After that, an image energy minimization scheme was applied to further improve the recognition accuracy. The proposed model was compared with three classic classifiers (i.e. support vector machine with radial basis function kernel, random forest, and naive Bayes), and demonstrated significant superiority for prostate recognition on magnetic resonance images. We further extended the proposed autoencoder model to salient object detection on natural images, and the experimental validation proved the accurate and robust salient object detection results of our model. 2. A general multi-context combined deep neural network (MCDN) model was then proposed for object recognition from natural images and biomedical images. Under one uniform framework, our model operates in a multi-scale manner. Our model was applied to salient object detection from natural images as well as prostate recognition from magnetic resonance images. Our experimental validation demonstrated that the proposed model was competitive with current state-of-the-art methods. 3. We designed a novel saliency image energy to finely segment salient objects on the basis of our MCDN model. Region priors were taken into account in the energy function to avoid trivial errors. Our method outperformed state-of-the-art algorithms on five benchmark datasets. In the experiments, we also demonstrated that our proposed saliency image energy can boost the results of other conventional saliency detection methods
A Study on Seed Information Expansion Techniques for Personalized Interactive Image Segmentation
Ph.D. dissertation -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, College of Engineering, February 2021. Advisor: Kyoung Mu Lee.
Segmentation of an area corresponding to a desired object in an image is essential
to computer vision problems. This is because most algorithms are performed in
semantic units when interpreting or analyzing images. However, segmenting the
desired object from a given image is an ambiguous issue. The target object varies
depending on user and purpose. To solve this problem, an interactive segmentation
technique has been proposed. In this approach, segmentation is performed in the
desired direction according to the user's interaction. In this case, seed information
provided by the user plays an important role. If the seed provided by a user contains
abundant information, the accuracy of segmentation increases. However, providing
rich seed information places much burden on the users. Therefore, the main goal of
the present study was to obtain satisfactory segmentation results using simple seed
information.
We primarily focused on converting the provided sparse seed information to a rich
state so that accurate segmentation results can be derived. To this end, a minimal
user input was taken and enriched through various seed enrichment techniques.
A total of three interactive segmentation techniques were proposed, based on: (1)
Seed Expansion, (2) Seed Generation, and (3) Seed Attention. The seed enrichment types
comprise expansion of the area around a seed, generation of a new seed at a new
position, and attention to semantic information.
First, in seed expansion, we expanded the scope of the seed. We integrated reliable
pixels around the initial seed into the seed set through an expansion step
composed of two stages. Through the extended seed covering a wider area than the
initial seed, the seed's scarcity and imbalance problems were resolved. Next, in seed
generation, we created a seed at a new point rather than around the existing seed. We trained
the system by imitating the user behavior through providing a new seed point in the
erroneous region. By learning the user's intention, our model could efficiently create
a new seed point. The generated seed helped segmentation and could be used as additional
information for weakly supervised learning. Finally, through seed attention,
we injected semantic information into the seed. Unlike the previous models, we integrated
both the segmentation process and seed enrichment process. We reinforced the seed
information by adding semantic information to the seed instead of spatial expansion.
The seed information was enriched through mutual attention with feature maps
generated during the segmentation process.
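The seed expansion idea above (absorbing reliable pixels around the initial seed) can be sketched with a simple single-stage region-growing pass. This is only an illustrative stand-in: the thesis's actual two-stage expansion builds on pyramidal RWR, and the function name, `tol` parameter, and intensity-difference reliability test here are all assumptions:

```python
import numpy as np
from collections import deque

def expand_seeds(image, seeds, tol=0.1):
    """Grow a sparse seed set by absorbing reliable neighbouring pixels.

    A pixel joins the seed set when its intensity differs from its seeded
    4-neighbour by at most `tol` -- a stand-in reliability test for the
    two-stage expansion described in the thesis.
    """
    h, w = image.shape
    grown = set(seeds)
    queue = deque(seeds)
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in grown:
                if abs(image[ny, nx] - image[y, x]) <= tol:
                    grown.add((ny, nx))
                    queue.append((ny, nx))
    return grown

# toy image: bright 3x3 object on a dark background, one user click at (2, 2)
img = np.zeros((5, 5))
img[1:4, 1:4] = 1.0
seed_set = expand_seeds(img, seeds=[(2, 2)])
# the single click expands to cover the whole bright region
assert seed_set == {(y, x) for y in range(1, 4) for x in range(1, 4)}
```

The enlarged seed set then feeds the segmentation stage, mitigating the scarcity and imbalance of a single click.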
The proposed models show superiority over existing techniques
through various experiments. Notably, even with sparse seed information, our proposed
seed enrichment techniques gave far more accurate segmentation results
than the other existing methods.
1 Introduction 1
1.1 Previous Works 2
1.2 Proposed Methods 4
2 Interactive Segmentation with Seed Expansion 9
2.1 Introduction 9
2.2 Proposed Method 12
2.2.1 Background 13
2.2.2 Pyramidal RWR 16
2.2.3 Seed Expansion 19
2.2.4 Refinement with Global Information 24
2.3 Experiments 27
2.3.1 Dataset 27
2.3.2 Implementation Details 28
2.3.3 Performance 29
2.3.4 Contribution of Each Part 30
2.3.5 Seed Consistency 31
2.3.6 Running Time 33
2.4 Summary 34
3 Interactive Segmentation with Seed Generation 37
3.1 Introduction 37
3.2 Related Works 40
3.3 Proposed Method 41
3.3.1 System Overview 41
3.3.2 Markov Decision Process 42
3.3.3 Deep Q-Network 46
3.3.4 Model Architecture 47
3.4 Experiments 48
3.4.1 Implementation Details 48
3.4.2 Performance 49
3.4.3 Ablation Study 53
3.4.4 Other Datasets 55
3.5 Summary 58
4 Interactive Segmentation with Seed Attention 61
4.1 Introduction 61
4.2 Related Works 64
4.3 Proposed Method 65
4.3.1 Interactive Segmentation Network 65
4.3.2 Bi-directional Seed Attention Module 67
4.4 Experiments 70
4.4.1 Datasets 70
4.4.2 Metrics 70
4.4.3 Implementation Details 71
4.4.4 Performance 71
4.4.5 Ablation Study 76
4.4.6 Seed enrichment methods 79
4.5 Summary 82
5 Conclusions 87
5.1 Summary 89
Bibliography 90
Abstract (in Korean) 103
- β¦