72 research outputs found

    Understanding Kernel Size in Blind Deconvolution

    Most blind deconvolution methods pre-define a large kernel size to guarantee that the support domain is covered. An oversized kernel, however, is likely to introduce estimation error, yielding severe artifacts in the deblurring results. In this paper, we first analyze, theoretically and experimentally, the mechanism by which oversized kernels introduce estimation error, and show that the effect holds even on blurry images without noise. To suppress this adverse effect, we propose a low-rank regularization on the blur kernel that exploits the structural information in degraded kernels, by which the oversized-kernel effect can be effectively suppressed, together with an efficient optimization algorithm to solve the resulting model. Experimental results on benchmark datasets show that the proposed method is comparable with state-of-the-art methods when the kernel size is properly set, and performs much better, both quantitatively and qualitatively, when handling larger kernel sizes. Deblurring results on real-world blurry images further validate the effectiveness of the proposed method.
    Comment: Accepted by WACV 201
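    The abstract does not spell out the regularizer, but a common way to impose low rank on a 2-D blur kernel is singular-value thresholding, the proximal operator of the nuclear norm. The sketch below illustrates that generic idea only; the paper's exact formulation and optimizer may differ, and the threshold `tau` and the toy kernel are hypothetical choices for illustration.

```python
# Minimal sketch: low-rank regularization of a blur kernel via
# singular-value soft-thresholding (prox of the nuclear norm).
# Assumption: this is a generic stand-in, not the paper's algorithm.
import numpy as np

def lowrank_kernel_prox(kernel: np.ndarray, tau: float = 0.01) -> np.ndarray:
    """Soft-threshold the singular values of a 2-D kernel, then
    renormalize so the kernel still sums to one."""
    u, s, vt = np.linalg.svd(kernel, full_matrices=False)
    s = np.maximum(s - tau, 0.0)      # shrink small singular values to zero
    k = (u * s) @ vt                  # low-rank reconstruction
    k = np.clip(k, 0.0, None)         # blur kernels are non-negative
    total = k.sum()
    return k / total if total > 0 else kernel

# Example: an oversized 31x31 kernel estimate whose spurious components
# (estimation noise outside the true support) get suppressed.
rng = np.random.default_rng(0)
k_est = np.abs(rng.normal(size=(31, 31))) * 1e-3
k_est[14:17, 10:22] += 1.0            # dominant motion-blur streak
k_est /= k_est.sum()
k_reg = lowrank_kernel_prox(k_est, tau=0.02)
print(np.linalg.matrix_rank(np.round(k_reg, 6)))  # far below full rank
```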

    Aggregating Long-term Sharp Features via Hybrid Transformers for Video Deblurring

    Video deblurring methods aim to recover consecutive sharp frames from a given blurry video, and usually assume that every frame of the input video is blurry. However, in real-world blurry videos taken by modern imaging devices, sharp frames usually do appear, making long-term sharp features available to facilitate the restoration of a blurry frame. In this work, we propose a video deblurring method that leverages both neighboring frames and the sharp frames present in the video, using hybrid Transformers for feature aggregation. Specifically, we first train a blur-aware detector to distinguish between sharp and blurry frames. Then, a window-based local Transformer exploits features from neighboring frames, where cross attention aggregates those features without explicit spatial alignment (a minimal sketch follows this abstract). To aggregate long-term sharp features from the detected sharp frames, we employ a global Transformer with multi-scale matching capability. Moreover, our method can easily be extended to event-driven video deblurring by incorporating an event fusion module into the global Transformer. Extensive experiments on benchmark datasets demonstrate that our proposed method outperforms state-of-the-art video deblurring methods, as well as event-driven ones, in terms of both quantitative metrics and visual quality. The source code and trained models are available at https://github.com/shangwei5/STGTN.
    Comment: 13 pages, 11 figures, and the code is available at https://github.com/shangwei5/STGT
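    To make the cross-attention idea concrete, here is a minimal single-head sketch in which queries come from the blurry target frame and keys/values from neighboring frames, so features are pooled without explicit spatial alignment. All shapes and names are illustrative assumptions; the actual STGTN architecture (window partitioning, multi-head attention, the event fusion module) is more elaborate than this.

```python
# Minimal sketch: cross-attention aggregation from neighboring frames.
# Assumption: single head, no window partitioning; not the STGTN code.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(target_feat, neighbor_feats):
    """target_feat: (N, C) tokens of the blurry frame;
    neighbor_feats: (T, N, C) tokens of T neighboring frames.
    Returns (N, C) features aggregated from the neighbors."""
    T, N, C = neighbor_feats.shape
    kv = neighbor_feats.reshape(T * N, C)            # pool all neighbor tokens
    attn = softmax(target_feat @ kv.T / np.sqrt(C))  # (N, T*N) attention map
    return attn @ kv                                 # weighted sum of values

neighbors = np.random.default_rng(1).normal(size=(2, 64, 32))  # 2 neighbors
target = np.random.default_rng(2).normal(size=(64, 32))
print(cross_attend(target, neighbors).shape)  # (64, 32)
```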

    Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression

    Bounding box regression is a crucial step in object detection. In existing methods, the ℓ_n-norm loss is widely adopted for bounding box regression, but it is not tailored to the evaluation metric, i.e., Intersection over Union (IoU). Recently, IoU loss and generalized IoU (GIoU) loss have been proposed to optimize the IoU metric directly, but they still suffer from slow convergence and inaccurate regression. In this paper, we propose a Distance-IoU (DIoU) loss that incorporates the normalized distance between the predicted box and the target box, and converges much faster in training than the IoU and GIoU losses. Furthermore, we summarize three geometric factors in bounding box regression, i.e., overlap area, central point distance, and aspect ratio, based on which a Complete IoU (CIoU) loss is proposed, leading to even faster convergence and better performance. By incorporating the DIoU and CIoU losses into state-of-the-art object detectors, e.g., YOLOv3, SSD, and Faster R-CNN, we achieve notable gains in terms of not only the IoU metric but also the GIoU metric. Moreover, DIoU can easily be adopted as the criterion in non-maximum suppression (NMS), further boosting performance. The source code and trained models are available at https://github.com/Zzh-tju/DIoU.
    Comment: Accepted to AAAI 2020. The source code and trained models are available at https://github.com/Zzh-tju/DIo
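    The DIoU loss described above is L_DIoU = 1 - IoU + ρ²(b, b_gt)/c², where ρ is the Euclidean distance between the two box centers and c is the diagonal length of the smallest box enclosing both. A minimal sketch follows; the (x1, y1, x2, y2) box format and the epsilon terms are assumptions for illustration, and the authors' repository holds the reference implementation.

```python
# Minimal sketch of the Distance-IoU loss: 1 - IoU + rho^2 / c^2.
# Assumption: boxes are (x1, y1, x2, y2); not the authors' exact code.
import numpy as np

def diou_loss(pred, target):
    """pred, target: arrays of shape (..., 4) in (x1, y1, x2, y2) format."""
    # Intersection over Union of the two boxes
    iw = np.clip(np.minimum(pred[..., 2], target[..., 2])
                 - np.maximum(pred[..., 0], target[..., 0]), 0, None)
    ih = np.clip(np.minimum(pred[..., 3], target[..., 3])
                 - np.maximum(pred[..., 1], target[..., 1]), 0, None)
    inter = iw * ih
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + 1e-9)
    # Squared distance rho^2 between the box centers
    cp = (pred[..., :2] + pred[..., 2:]) / 2
    ct = (target[..., :2] + target[..., 2:]) / 2
    rho2 = ((cp - ct) ** 2).sum(-1)
    # Squared diagonal c^2 of the smallest enclosing box
    ew = np.maximum(pred[..., 2], target[..., 2]) - np.minimum(pred[..., 0], target[..., 0])
    eh = np.maximum(pred[..., 3], target[..., 3]) - np.minimum(pred[..., 1], target[..., 1])
    c2 = ew ** 2 + eh ** 2 + 1e-9
    return 1 - iou + rho2 / c2

# Example: partially overlapping boxes; the distance term penalizes
# the center offset even while the boxes overlap.
print(diou_loss(np.array([0., 0., 2., 2.]), np.array([1., 1., 3., 3.])))
```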