2,016 research outputs found
A Study on Seed Information Enrichment Techniques for Interactive Image Segmentation
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Dept. of Electrical and Computer Engineering, February 2021. Advisor: Kyoung Mu Lee.
Segmentation of an area corresponding to a desired object in an image is essential
to computer vision problems. This is because most algorithms are performed in
semantic units when interpreting or analyzing images. However, segmenting the
desired object from a given image is an ambiguous issue. The target object varies
depending on user and purpose. To solve this problem, an interactive segmentation
technique has been proposed. In this approach, segmentation is performed in the
desired direction according to interaction with the user. Here, the seed information
provided by the user plays an important role. If the seed provided by a user contains
abundant information, the accuracy of segmentation increases. However, providing
rich seed information places a heavy burden on users. Therefore, the main goal of
the present study was to obtain satisfactory segmentation results using simple seed
information.
We primarily focused on converting the provided sparse seed information to a rich
state so that accurate segmentation results can be derived. To this end, minimal
user input was taken and enriched through various seed enrichment techniques.
A total of three interactive segmentation techniques were proposed, based on (1)
Seed Expansion, (2) Seed Generation, and (3) Seed Attention. Our seed enrichment
types comprised expansion of the area around a seed, generation of a new seed at a
new position, and attention to semantic information.
First, in seed expansion, we expanded the scope of the seed. We integrated reliable
pixels around the initial seed into the seed set through an expansion step
composed of two stages. Through the extended seed covering a wider area than the
initial seed, the seed's scarcity and imbalance problems were resolved. Next, in seed
generation, we created a seed at a new point rather than around the existing seed. We
trained the system to imitate the user's behavior of providing a new seed point in the
erroneous region. By learning the user's intention, our model could efficiently create
a new seed point. The generated seed helped segmentation and could be used as additional
information for weakly supervised learning. Finally, through seed attention,
we put semantic information in the seed. Unlike the previous models, we integrated
both the segmentation process and seed enrichment process. We reinforced the seed
information by adding semantic information to the seed instead of spatial expansion.
The seed information was enriched through mutual attention with feature maps
generated during the segmentation process.
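The seed-expansion idea in particular can be sketched with a much simpler stand-in: plain color-similarity region growing from the user's clicks. The thesis's actual two-stage expansion builds on pyramidal random walk with restart; the function below, its 4-connectivity, and its tolerance parameter are illustrative assumptions only.

```python
from collections import deque

import numpy as np

def expand_seed(image, seed_pixels, tol=10.0):
    """Grow a sparse seed set by absorbing 4-connected neighbours whose
    colour is close to the mean colour of the initial seed.  Illustrative
    only: the thesis's two-stage expansion is based on pyramidal RWR."""
    h, w = image.shape[:2]
    mean = np.mean([image[y, x] for y, x in seed_pixels], axis=0)
    expanded = set(seed_pixels)
    queue = deque(seed_pixels)
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in expanded
                    and np.linalg.norm(image[ny, nx] - mean) <= tol):
                expanded.add((ny, nx))
                queue.append((ny, nx))
    return expanded
```

Even this toy version shows the intended effect: a single click is converted into a connected region of "reliable" pixels, relieving the scarcity and imbalance of the raw seed.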
The proposed models showed superiority over existing techniques
in various experiments. Notably, even with sparse seed information, our proposed
seed enrichment techniques gave far more accurate segmentation results
than the other existing methods.

1 Introduction
1.1 Previous Works
1.2 Proposed Methods
2 Interactive Segmentation with Seed Expansion
2.1 Introduction
2.2 Proposed Method
2.2.1 Background
2.2.2 Pyramidal RWR
2.2.3 Seed Expansion
2.2.4 Refinement with Global Information
2.3 Experiments
2.3.1 Dataset
2.3.2 Implementation Details
2.3.3 Performance
2.3.4 Contribution of Each Part
2.3.5 Seed Consistency
2.3.6 Running Time
2.4 Summary
3 Interactive Segmentation with Seed Generation
3.1 Introduction
3.2 Related Works
3.3 Proposed Method
3.3.1 System Overview
3.3.2 Markov Decision Process
3.3.3 Deep Q-Network
3.3.4 Model Architecture
3.4 Experiments
3.4.1 Implementation Details
3.4.2 Performance
3.4.3 Ablation Study
3.4.4 Other Datasets
3.5 Summary
4 Interactive Segmentation with Seed Attention
4.1 Introduction
4.2 Related Works
4.3 Proposed Method
4.3.1 Interactive Segmentation Network
4.3.2 Bi-directional Seed Attention Module
4.4 Experiments
4.4.1 Datasets
4.4.2 Metrics
4.4.3 Implementation Details
4.4.4 Performance
4.4.5 Ablation Study
4.4.6 Seed Enrichment Methods
4.5 Summary
5 Conclusions
5.1 Summary
Bibliography
Abstract (Korean)
Application of Fast Deviation Correction Algorithm Based on Shape Matching Algorithm in Component Placement
To address the trade-off between accuracy and speed in PC-based template matching, this paper combines the advantages of high-speed parallel computing on FPGAs and presents an FPGA-based rapid-correction shape matching algorithm. On the FPGA, shape matching and the least-squares method are used to calculate the angular deviation of chip components, and a single-instruction-stream scheme accelerates the algorithm. Experimental results show that, compared with traditional PC template matching algorithms, this algorithm further improves correction accuracy while greatly reducing correction time, so that stable and efficient machine-vision correction can be obtained in SMT placement machines.
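The least-squares step can be illustrated in a few lines: given points sampled along one edge of a placed component, a least-squares line fit yields the angular deviation to correct. This is a host-side sketch only; the paper implements the computation on an FPGA, and the function below is an assumption for illustration, not the paper's code.

```python
import numpy as np

def angular_deviation(edge_points):
    """Estimate the rotation of a chip component from points along one
    of its edges by a least-squares line fit (illustrative of the
    least-squares step described in the abstract)."""
    pts = np.asarray(edge_points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve y = m*x + b in the least-squares sense.
    A = np.stack([x, np.ones_like(x)], axis=1)
    (m, _b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.degrees(np.arctan(m))  # deviation from horizontal, degrees
```

The returned angle (and a translation from the fitted line's offset) would then drive the placement head's correction.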
Robust surface modelling of visual hull from multiple silhouettes
Reconstructing depth information from images is one of the actively researched themes
in computer vision and its application involves most vision research areas from object
recognition to realistic visualisation. Amongst other useful vision-based reconstruction
techniques, this thesis extensively investigates the visual hull (VH) concept for volume
approximation and its robust surface modelling when various views of an object are
available. Assuming that multiple images are captured from a circular motion, projection
matrices are generally parameterised in terms of a rotation angle from a reference position
in order to facilitate the multi-camera calibration. However, this assumption is often
violated in practice, i.e., a pure rotation in a planar motion with an accurate rotation angle
is hardly realisable. To address this problem, at first, this thesis proposes a calibration
method associated with the approximate circular motion.
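Parameterising the projection matrices by a single rotation angle can be sketched as follows, assuming an idealised camera on a horizontal circle looking at the world origin. The intrinsic matrix K and the circular geometry are illustrative assumptions; the thesis's calibration method corrects for deviations from exactly this model.

```python
import numpy as np

def projection_at_angle(K, theta, radius):
    """P(theta) = K [R | t] for a camera on a horizontal circle of the
    given radius, looking at the world origin; theta is the rotation
    angle from the reference position (illustrative geometry)."""
    c, s = np.cos(theta), np.sin(theta)
    # Rows of R are the camera x/y/z axes expressed in world coordinates.
    R = np.array([[  c, 0.0,  -s],   # camera x: tangent to the circle
                  [0.0, -1.0, 0.0],  # camera y: image y points down
                  [ -s, 0.0,  -c]])  # camera z: from camera towards origin
    C = np.array([radius * s, 0.0, radius * c])  # camera centre
    t = -R @ C
    return K @ np.hstack([R, t[:, None]])
```

With this parameterisation, calibrating a new view reduces to estimating one scalar angle (plus small correction terms) instead of a full projection matrix.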
With these modified projection matrices, a resulting VH is represented by a hierarchical
tree structure of voxels from which surfaces are extracted by the Marching
cubes (MC) algorithm. However, the surfaces may have unexpected artefacts caused by
a coarse volume reconstruction, the topological ambiguity of the MC algorithm, and
imperfect image processing or calibration results. To avoid this sensitivity, this thesis
proposes a robust surface construction algorithm which initially classifies local convex
regions from imperfect MC vertices and then aggregates local surfaces constructed by the
3D convex hull algorithm. Furthermore, this thesis also explores the use of wide baseline
images to refine a coarse VH using an affine invariant region descriptor. This improves
the quality of VH when a small number of initial views is given.
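The underlying VH construction can be sketched as simple space carving over a flat list of voxel centres: a voxel survives only if it projects inside every silhouette. The thesis instead maintains a hierarchical voxel tree and extracts surfaces afterwards, but the membership test is the same in spirit; the flat-grid function below is a simplification.

```python
import numpy as np

def visual_hull(silhouettes, projections, grid):
    """Keep a voxel centre iff it projects inside every silhouette:
    the basic space-carving view of the visual hull.  A flat Nx3 grid
    is used here for brevity instead of a hierarchical voxel tree."""
    homog = np.hstack([grid, np.ones((len(grid), 1))])      # N x 4
    keep = np.ones(len(grid), dtype=bool)
    for sil, P in zip(silhouettes, projections):
        h, w = sil.shape
        proj = homog @ P.T                                   # N x 3
        u = np.round(proj[:, 0] / proj[:, 2]).astype(int)    # pixel column
        v = np.round(proj[:, 1] / proj[:, 2]).astype(int)    # pixel row
        ok = np.zeros(len(grid), dtype=bool)
        inb = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ok[inb] = sil[v[inb], u[inb]] > 0
        keep &= ok
    return grid[keep]
```

The surviving voxels bound the object from outside; surfaces extracted from them (e.g., by MC) then need exactly the kind of robust post-processing the thesis proposes.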
In conclusion, the proposed methods achieve a 3D model with enhanced accuracy.
Also, robust surface modelling is retained when silhouette images are degraded by
practical noise.
A novel approach to rainfall measuring: methodology, field test and business opportunity
Being able to measure rainfall is crucial in everyday life. The more accurate, spatially distributed and temporally detailed rainfall measurements are, the more accurate forecast models - be they meteorological or hydrological - can be. Safety on travel networks could be increased by informing users about nearby road conditions in real time. In the agricultural sector, detailed knowledge of rainfall would allow for optimal management of irrigation, nutrients and phytosanitary treatments. In the sport sector, better measurement of rainfall at outdoor events (e.g., motor, motorcycle or bike races) would increase athletes' safety.
Rain gauges are the most common and widely used tools for rainfall measurement. However, existing monitoring networks still fail to provide accurate spatial representations of localized precipitation events due to their sparseness. This effect is magnified by the intrinsic nature of intense precipitation events, as they are naturally characterized by great spatial and temporal variability.
Potentially, coupling at-ground measures (i.e., from pluviometric and disdrometric networks) with remote measurements (e.g., from radars or meteorological satellites) could make it possible to describe rainfall phenomena in a more continuous and spatially detailed way. However, this kind of approach requires at-ground measurements to calibrate the remote sensors' relationships, which leads back to the limited diffusion of ground networks. Hence the need to increase the presence of ground measures, in order to gain a better description of the events and to make more productive use of remote sensing technologies.
The ambitious aim of the methodology developed in this thesis is to repurpose other sensors already available at ground level (e.g., surveillance cameras, webcams, smartphones, cars, etc.) into new sources of rain rate measurements widely distributed over space and time.
The technology, developed to function in daylight conditions, requires that the pictures collected during rainfall events be analyzed to identify and characterize each raindrop. The process leads to an instant measurement of the rain rate associated with the captured image. To improve the robustness of the measurement, we propose to process a larger number of images within a predefined time span (i.e., 12 or more pictures per minute) and to provide an averaged measure over the observed time interval.
A schematic summary of how the method works for each acquired image follows:
1. background removal;
2. identification of the rain drops;
3. positioning of each drop in the control volume, by using the blur effect;
4. estimation of drops' diameters, under the hypothesis that each drop falls at its terminal velocity;
5. rain rate estimation, as the sum of the contributions of each drop.
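Steps 4-5 above can be sketched numerically: with an empirical terminal-velocity law (an Atlas-type exponential fit is assumed here), the rain rate follows as the summed volume flux of the detected drops through the control volume. All function names, constants and the control-volume formulation below are illustrative assumptions, not the thesis's implementation.

```python
import math

def terminal_velocity(d_mm):
    """Empirical terminal fall speed (m/s) of a raindrop of diameter
    d_mm (mm); an Atlas-type fit, assumed here for illustration."""
    return max(0.0, 9.65 - 10.3 * math.exp(-0.6 * d_mm))

def rain_rate(diameters_mm, control_volume_m3):
    """Rain rate (mm/h) as the sum of the contributions of the drops
    seen in one image inside the control volume, assuming each drop
    falls at its terminal velocity (steps 4-5 of the summary)."""
    flux_m_per_s = sum(
        (math.pi / 6.0) * (d * 1e-3) ** 3 * terminal_velocity(d)
        for d in diameters_mm) / control_volume_m3
    return flux_m_per_s * 1000.0 * 3600.0  # m/s -> mm/h
```

Averaging this per-image estimate over 12 or more images per minute, as proposed, smooths out detection noise in individual frames.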
Different techniques for background recognition, drop detection and selection, and noise reduction were investigated. Each solution was applied to the same sample of images in order to identify the combination producing the most accurate rainfall estimate. The best-performing procedure was then validated by applying it to a wider sample of images, acquired by an experimental station installed on the roof of the Laboratory of Hydraulics of the Politecnico di Torino. The sample includes rainfall events which took place between May 15th, 2016 and February 15th, 2017. Seasonal variability allowed the recording of events characterized by different intensities in varied light conditions.
Moreover, the technology developed during this program of research was patented (2015) and represents the heart of WaterView, a spinoff of the Politecnico di Torino founded in February 2015, which is currently in charge of the further development of this technology, its dissemination, and its commercial exploitation.
- …