2,859 research outputs found
Efficient Image and Video Segmentation Models via Skipping Redundant Computation
Thesis (Ph.D.) -- Graduate School of Convergence Science and Technology (Intelligent Convergence Systems Major), Seoul National University, August 2021. Advisor: Nojun Kwak.
Segmentation has seen a remarkable performance advance by using deep convolutional neural networks, like other fields of computer vision.
This technology is essential because it allows us to understand surrounding scenes and recognize object shapes for various visual applications such as AR/VR, autonomous driving, surveillance systems, etc.
However, most previous methods cannot be directly applied to real-world systems due to their tremendous computational cost.
To reduce model complexity, this dissertation focuses on image semantic segmentation and semi-supervised video object segmentation among the various sub-fields of segmentation.
We point out redundant operations from conventional frameworks and propose solutions from three different perspectives.
First, we discuss the spatial redundancy issue in a decoder.
The decoder performs upsampling to recover small-resolution feature maps to the original input resolution, generating a sharp mask, and classifies each pixel to find its semantic category.
However, neighboring pixels share information and are likely to receive the same semantic category, so independent pixel-wise computation in the decoder is unnecessary.
To resolve this problem, we propose a superpixel-based sampling architecture that eliminates the decoder process by reducing spatial redundancy.
The proposed network is trained and tested with only 0.37% of the total pixels, using a learning-rate re-adjustment scheme based on statistical process control (SPC) of the gradients in each layer.
We show that our network achieves better or comparable accuracy with far less computation than various conventional methods on the Pascal Context and SUN-RGBD datasets.
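The core idea of the superpixel-based sampling above can be sketched in a few lines: classify only one representative pixel per superpixel, then broadcast each prediction back to the whole superpixel. This is a minimal numpy illustration with a generic `classifier` stand-in, not the dissertation's actual architecture.

```python
import numpy as np

def superpixel_sample_and_remap(features, superpixels, classifier):
    """Classify one representative feature per superpixel, then broadcast
    the predicted class back to every pixel of that superpixel.

    features:    (H, W, C) feature map
    superpixels: (H, W) integer superpixel id per pixel
    classifier:  maps an (N, C) array of features to (N,) class ids
    """
    ids = np.unique(superpixels)
    # One representative feature (here simply the first pixel) per superpixel.
    reps = np.stack([features[superpixels == i][0] for i in ids])
    pred = classifier(reps)                  # (num_superpixels,) class ids
    # Remap: every pixel inherits its superpixel's predicted class.
    lut = np.zeros(ids.max() + 1, dtype=pred.dtype)
    lut[ids] = pred
    return lut[superpixels]                  # (H, W) dense class map
```

With, say, 200 superpixels on a 512x512 image, only 200 / 262144 (about 0.08%) of positions are actually classified, which is the source of the computation savings the abstract describes.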
Second, we examine dilated convolution, which is widely used in encoders to obtain the advantage of a large receptive field and improve performance.
One practical choice for reducing computation on mobile devices is to apply a depth-wise separable convolution strategy to dilated convolution.
However, the simple combination of these two methods incurs severe performance degradation due to the loss of information in the feature map from an over-simplified operation.
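To make the combination concrete, here is a minimal numpy sketch of the naive depth-wise separable dilated convolution discussed above (simplified to 'valid' padding, no bias or nonlinearity). The comments mark where the information loss arises.

```python
import numpy as np

def depthwise_dilated_conv(x, dw_kernel, pw_kernel, dilation=2):
    """Naive depth-wise separable dilated convolution ('valid' padding).

    x:         (H, W, C) input feature map
    dw_kernel: (k, k, C) one spatial filter per input channel
    pw_kernel: (C, C_out) 1x1 point-wise mixing weights
    """
    H, W, C = x.shape
    k = dw_kernel.shape[0]
    span = dilation * (k - 1)               # spatial extent of the dilated filter
    out_h, out_w = H - span, W - span
    dw = np.zeros((out_h, out_w, C))
    # Depth-wise step: each channel is filtered independently, so no
    # cross-channel mixing happens here ...
    for i in range(k):
        for j in range(k):
            patch = x[i * dilation:i * dilation + out_h,
                      j * dilation:j * dilation + out_w]
            dw += patch * dw_kernel[i, j]
    # ... and the dilated taps skip (dilation - 1) pixels between samples,
    # so the in-between pixels never contribute: combining both effects is
    # the information loss the C3-block is designed to compensate for.
    return dw @ pw_kernel                   # point-wise 1x1 convolution
```

This sketch only illustrates the baseline operation; the C3-block itself restructures it and is not reproduced here.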
To resolve this problem, we propose a new convolutional block, called Concentrated-Comprehensive Convolution (C3), that compensates for the information loss.
We apply the C3-block to various segmentation frameworks (DRN, ERFnet, Enet, and Deeplab V3) to prove our proposed method's beneficial properties on Cityscapes and Pascal VOC datasets.
Another issue with dilated convolution is that its latency varies depending on the dilation rate.
In theory, dilated convolution should have similar latency regardless of the dilation rate, but we observe that on real devices the latency differs by up to two times.
To mitigate this issue, we devise another convolutional block called the spatial squeeze (S2) block.
The S2-block squeezes spatial information via average pooling in order to capture long-range information while greatly reducing computation.
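The squeeze idea can be sketched as follows: average-pool the feature map so that any convolution applied afterwards runs on far fewer positions, each of which already summarizes a neighborhood. This toy numpy version shows only the squeeze/expand pair; the actual S2-block also contains convolutions and feature fusion not reproduced here.

```python
import numpy as np

def s2_squeeze(x, r=2):
    """Average-pool an (H, W, C) map by a factor r in each dimension.
    A convolution applied to the result touches r*r fewer positions,
    while each position summarizes an r x r input area."""
    H, W, C = x.shape
    x = x[:H - H % r, :W - W % r]           # crop to a multiple of r
    return x.reshape(H // r, r, W // r, r, C).mean(axis=(1, 3))

def s2_unsqueeze(pooled, r=2):
    """Nearest-neighbour expansion back toward the original resolution."""
    return pooled.repeat(r, axis=0).repeat(r, axis=1)
```

Pooling by r=2 cuts the spatial positions (and hence the convolution cost at that stage) by a factor of four, which is where the computation savings come from.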
We provide qualitative and quantitative analyses of the proposed S2-block-based network against other lightweight segmentation models, and compare its performance with the C3-block on the Cityscapes dataset.
We also demonstrate that our model runs successfully on a real mobile device.
Third, we tackle the temporal redundancy problem in video segmentation.
One of the critical techniques in computer vision is how to handle video data efficiently.
Semi-supervised Video Object Segmentation (semi-VOS) propagates information from previous frames to generate a segmentation mask for the current frame.
However, previous works treat every frame with the same importance and use a full-network path.
While this yields high-quality segmentation across challenging scenarios such as shape changes and occlusion, it also leads to unnecessary computation for stationary or slow-moving objects whose change across frames is small.
In this work, we exploit this observation by using temporal information to quickly identify frames with little change and skip the heavyweight mask generation step.
To realize this efficiency, we propose a novel dynamic network that estimates change across frames and decides which path -- computing a full network or reusing the previous frame's feature -- to choose depending on the expected similarity.
Experimental results show that our approach significantly improves inference speed without much accuracy degradation on challenging semi-VOS datasets -- DAVIS 16, DAVIS 17, and YouTube-VOS.
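The per-frame path decision described above can be sketched as a simple gate: run the heavy network only when the current frame differs enough from the last fully processed one, otherwise reuse that result. Both `mean_pixel_similarity` and the `full_network` argument are illustrative stand-ins, not the template-matching estimator or segmentation model of the actual framework.

```python
import numpy as np

def mean_pixel_similarity(a, b):
    """Toy similarity score in [0, 1] between two frames; stands in for
    the framework's learned change estimation."""
    return 1.0 - np.abs(a - b).mean()

def segment_video(frames, full_network, similarity, threshold=0.9):
    """Dynamic inference: choose the full-network path or the reuse path
    per frame, depending on similarity to the last fully processed frame."""
    masks = []
    ref_frame = ref_mask = None
    for frame in frames:
        if ref_frame is None or similarity(ref_frame, frame) < threshold:
            ref_mask = full_network(frame)   # heavyweight full-network path
            ref_frame = frame
        # otherwise: skip path -- reuse the previous frame's result as-is
        masks.append(ref_mask)
    return masks
```

On a mostly static clip, most frames take the reuse branch, so the expensive network runs only a handful of times, which is the source of the reported speedup.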
Furthermore, our approach can be applied to multiple semi-VOS methods, demonstrating its generality.
1 Introduction
1.1 Challenging Problem
1.1.1 Semantic Segmentation
1.1.2 Semi-supervised Video Object Segmentation
1.2 Contribution
1.2.1 Reducing Spatial Redundancy in Decoder
1.2.2 Beyond Dilated Convolution
1.2.3 Reducing Temporal Redundancy in Semi-supervised Video Object Segmentation
1.3 Outline
2 Related Work
2.1 Decoder for Segmentation
2.2 Feature Extraction for Segmentation Encoder
2.3 Tracking Target for Video Object Segmentation
2.3.1 Mask Propagation
2.3.2 Online-learning
2.3.3 Template Matching
2.4 Reducing Computation for Deep Learning Networks
2.4.1 Convolution Factorization
2.4.2 Dynamic Network
2.5 Datasets and Measurements
2.5.1 Image Semantic Segmentation
2.5.2 Video Object Segmentation
2.5.3 Measurement
3 Reducing Spatial Redundancy in Decoder via Sampling based on Superpixel
3.1 Related Work
3.2 Sampling Method Based on Superpixel for Train and Test
3.3 Details of Remapping Feature Map
3.4 Re-adjusting Learning Rates
3.5 Experiments
3.5.1 Implementation Details
3.5.2 Pascal Context Benchmark Experiments
3.5.3 Analysis of the Number of Superpixels
3.5.4 SUN-RGBD Benchmark Experiments
4 Beyond Dilated Convolution for Better Lightweight Encoder
4.1 Related Work
4.2 Rethinking about Property of Dilated Convolutions
4.3 Concentrated-Comprehensive Convolution
4.4 Experiments of C3
4.4.1 Ablation Study on C3 based on ESPNet
4.4.2 Evaluation on Cityscapes with Other Models
4.4.3 Evaluation on PASCAL VOC with Other Models
4.5 Rethinking about Speed of Dilated Convolutions and Multi-branch Structures
4.6 Spatial Squeeze Block
4.6.1 Overall Structure
4.7 Experiments of S2
4.7.1 Evaluation Results on the EG1800 Dataset
4.7.2 Ablation Study
4.8 Comparison between C3 and S2
4.8.1 Evaluation Results on the Cityscapes Dataset
5 Reducing Temporal Redundancy in Semi-supervised Video Object Segmentation via Dynamic Inference Framework
5.1 Related Work
5.2 Online-learning for Semi-supervised Video Object Segmentation
5.2.1 Brief Explanation of Baseline Architecture
5.2.2 Our Dynamic Inference Framework
5.3 Quantifying Movement for Recognizing Temporal Redundancy
5.3.1 Details of Template Matching
5.4 Reusing Previous Feature Map
5.5 Extend to General Semi-supervised Video Object Segmentation
5.6 Gate Probability Loss
5.7 Experiment
5.7.1 DAVIS Benchmark Result
5.7.2 Ablation Study
5.7.3 YouTube-VOS Result
5.7.4 Qualitative Examples
6 Conclusion
6.1 Summary
6.2 Limitations
6.3 Future Works
Abstract (In Korean)
Acknowledgements (In Korean)
Adaptive Temporal Encoding Network for Video Instance-level Human Parsing
Beyond the existing single-person and multiple-person human parsing tasks in
static images, this paper makes the first attempt to investigate a more
realistic video instance-level human parsing that simultaneously segments out
each person instance and parses each instance into more fine-grained parts
(e.g., head, leg, dress). We introduce a novel Adaptive Temporal Encoding
Network (ATEN) that alternately performs temporal encoding among key frames
and flow-guided feature propagation from other consecutive frames between two
key frames. Specifically, ATEN first incorporates a Parsing-RCNN to produce the
instance-level parsing result for each key frame, which integrates both the
global human parsing and instance-level human segmentation into a unified
model. To balance between accuracy and efficiency, the flow-guided feature
propagation is used to directly parse consecutive frames according to their
identified temporal consistency with key frames. On the other hand, ATEN
leverages the convolution gated recurrent units (convGRU) to exploit temporal
changes over a series of key frames, which are further used to facilitate the
frame-level instance-level parsing. By alternately performing direct feature
propagation between consistent frames and temporal encoding network among key
frames, our ATEN achieves a good balance between frame-level accuracy and time
efficiency, which is a common crucial problem in video object segmentation
research. To demonstrate the superiority of our ATEN, extensive experiments are
conducted on the most popular video segmentation benchmark (DAVIS) and a newly
collected Video Instance-level Parsing (VIP) dataset, which is the first video
instance-level human parsing dataset comprised of 404 sequences and over 20k
frames with instance-level and pixel-wise annotations.
Comment: To appear in ACM MM 2018. Code link: https://github.com/HCPLab-SYSU/ATEN. Dataset link: http://sysu-hcp.net/li
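ATEN's alternation between heavy key-frame parsing and cheap flow-guided propagation can be sketched as a fixed-interval schedule. The `parse_keyframe` and `propagate` arguments below are hypothetical stand-ins for the Parsing-RCNN and the flow-guided propagation module; the real system also applies convGRU temporal encoding across key frames, which is omitted here.

```python
def aten_schedule(frames, parse_keyframe, propagate, interval=5):
    """Alternate between full parsing on key frames and flow-guided
    propagation for the in-between frames (fixed key-frame interval)."""
    results = []
    key = None
    for t, frame in enumerate(frames):
        if t % interval == 0:
            key = parse_keyframe(frame)            # heavy key-frame path
            results.append(key)
        else:
            results.append(propagate(key, frame))  # cheap propagation path
    return results
```

With an interval of 5, only one frame in five pays the full parsing cost, which is the accuracy/efficiency balance the abstract refers to.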
A Survey on Deep Learning Technique for Video Segmentation
Video segmentation -- partitioning video frames into multiple segments or
objects -- plays a critical role in a broad range of practical applications,
from enhancing visual effects in movies, to understanding scenes in autonomous
driving, to creating virtual background in video conferencing. Recently, with
the renaissance of connectionism in computer vision, there has been an influx
of deep learning based approaches for video segmentation that have delivered
compelling performance. In this survey, we comprehensively review two basic
lines of research -- generic object segmentation (of unknown categories) in
videos, and video semantic segmentation -- by introducing their respective task
settings, background concepts, perceived need, development history, and main
challenges. We also offer a detailed overview of representative literature on
both methods and datasets. We further benchmark the reviewed methods on several
well-known datasets. Finally, we point out open issues in this field, and
suggest opportunities for further research. We also provide a public website to
continuously track developments in this fast advancing field:
https://github.com/tfzhou/VS-Survey.
Comment: Accepted by TPAMI. Website: https://github.com/tfzhou/VS-Surve
MOSE: A New Dataset for Video Object Segmentation in Complex Scenes
Video object segmentation (VOS) aims at segmenting a particular object
throughout the entire video clip sequence. The state-of-the-art VOS methods
have achieved excellent performance (e.g., 90+% J&F) on existing datasets.
However, since the target objects in these existing datasets are usually
relatively salient, dominant, and isolated, VOS under complex scenes has rarely
been studied. To revisit VOS and make it more applicable in the real world, we
collect a new VOS dataset called coMplex video Object SEgmentation (MOSE) to
study the tracking and segmenting objects in complex environments. MOSE
contains 2,149 video clips and 5,200 objects from 36 categories, with 431,725
high-quality object segmentation masks. The most notable feature of MOSE
dataset is complex scenes with crowded and occluded objects. The target objects
in the videos are commonly occluded by others and disappear in some frames. To
analyze the proposed MOSE dataset, we benchmark 18 existing VOS methods under 4
different settings on the proposed MOSE dataset and conduct comprehensive
comparisons. The experiments show that current VOS algorithms cannot well
perceive objects in complex scenes. For example, under the semi-supervised VOS
setting, the highest J&F by existing state-of-the-art VOS methods is only 59.4%
on MOSE, much lower than their ~90% J&F performance on DAVIS. The results
reveal that although excellent performance has been achieved on existing
benchmarks, there are unresolved challenges under complex scenes and more
efforts are desired to explore these challenges in the future. The proposed
MOSE dataset has been released at https://henghuiding.github.io/MOSE.
Comment: MOSE Dataset Repor
- โฆ