
    MTRNet++: One-stage Mask-based Scene Text Eraser

    A precise, controllable, interpretable, and easily trainable text-removal approach is needed for both user-specific and large-scale text removal applications. To achieve this, we propose a one-stage mask-based text inpainting network, MTRNet++. Its novel architecture comprises mask-refine, coarse-inpainting, and fine-inpainting branches, together with attention blocks. With this architecture, MTRNet++ can remove text either with or without an external mask. It achieves state-of-the-art results on both the Oxford and SCUT datasets without using external ground-truth masks. Ablation studies demonstrate that the proposed multi-branch architecture with attention blocks is effective and essential, and that the model is controllable and interpretable. Comment: This paper is under CVIU review (after major revision).
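    The described pipeline, refining the text mask, inpainting coarsely, then inpainting finely with attention blocks in between, can be sketched in a few lines of PyTorch. The module layout, channel counts, and attention design below are illustrative assumptions, not the paper's actual MTRNet++ specification.

```python
# Hypothetical sketch of the three-branch layout described above (mask-refine,
# coarse-inpainting, fine-inpainting). Block depths, channel counts, and the
# attention design are illustrative guesses, not the paper's exact spec.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Small conv unit shared by all branches.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class AttentionBlock(nn.Module):
    # Simple spatial attention: predict a per-pixel gate and reweight features.
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)

class MTRNetLikeEraser(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        # Input: RGB image concatenated with a (possibly empty) text mask.
        self.encoder = conv_block(4, feat)
        self.mask_refine = nn.Sequential(conv_block(feat, feat),
                                         nn.Conv2d(feat, 1, 1), nn.Sigmoid())
        self.coarse = nn.Sequential(conv_block(feat + 1, feat),
                                    AttentionBlock(feat),
                                    nn.Conv2d(feat, 3, 1))
        self.fine = nn.Sequential(conv_block(feat + 4, feat),
                                  AttentionBlock(feat),
                                  nn.Conv2d(feat, 3, 1))

    def forward(self, img, mask=None):
        if mask is None:
            # "Without an external mask": start from an all-zero mask and let
            # the mask-refine branch localize the text on its own.
            mask = torch.zeros_like(img[:, :1])
        feats = self.encoder(torch.cat([img, mask], dim=1))
        refined_mask = self.mask_refine(feats)                                 # branch 1
        coarse_out = self.coarse(torch.cat([feats, refined_mask], dim=1))      # branch 2
        fine_out = self.fine(torch.cat([feats, coarse_out, refined_mask], 1))  # branch 3
        return fine_out, coarse_out, refined_mask

# Smoke test on a dummy batch.
net = MTRNetLikeEraser()
out, _, m = net(torch.randn(1, 3, 64, 64))
print(out.shape, m.shape)  # torch.Size([1, 3, 64, 64]) torch.Size([1, 1, 64, 64])
```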

    Influence of Rain on Vision-Based Algorithms in the Automotive Domain

    The automotive domain is highly regulated, with stringent requirements that characterize the performance and safety of automotive systems. Automotive applications are required to operate under all driving conditions and to meet high safety standards. Vision-based systems in the automotive domain must accordingly operate in all weather conditions, favorable or adverse. Rain is one of the most common adverse weather conditions that degrade the quality of the images used by vision-based algorithms. Rain appears in an image in two forms: falling rain streaks or adherent raindrops. Both forms corrupt the input images and degrade the performance of vision-based algorithms. This dissertation describes our work studying the effect of rain on image quality and on the target vision systems that use those images as their main input. To study falling rain, we developed a framework for simulating falling rain streaks. We also developed a de-raining algorithm that detects and removes rain streaks from images. We studied the relation between image degradation due to adherent raindrops and the performance of the target vision algorithm, and provided quantitative metrics to describe that relation. We developed an adherent raindrop simulator that generates synthetic rained images by adding generated raindrops to rain-free images. We used this simulator to generate rained image datasets, which we then used to train vision algorithms and to evaluate the feasibility of using transfer learning to improve the performance of DNN-based vision algorithms under rainy conditions.
    Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/170924/1/Yazan Hamzeh final dissertation.pdf
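    The compositing step such a simulator performs, alpha-blending synthetic drops onto rain-free frames, might look roughly like the following numpy sketch. The drop shape, transparency, and color model here are placeholder assumptions, not the dissertation's actual raindrop appearance model.

```python
# A minimal numpy sketch of the compositing idea behind a raindrop simulator:
# synthesize soft, semi-transparent circular drops and alpha-blend them onto a
# rain-free image. Uniform circles with a fixed opacity are a stand-in for a
# real adherent-drop appearance model.
import numpy as np

def add_synthetic_raindrops(image, n_drops=30, rng=None):
    """image: HxWx3 float array in [0, 1]; returns a 'rained' copy."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    out = image.copy()
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_drops):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        r = rng.integers(3, 12)
        # Soft circular alpha mask: 1 at the drop center, fading to 0 at radius r.
        dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
        alpha = np.clip(1.0 - dist / r, 0.0, 1.0)[..., None]
        # An adherent drop acts roughly like a defocused lens: approximate its
        # appearance with the local mean color, slightly brightened.
        patch = out[max(0, cy - r):cy + r, max(0, cx - r):cx + r]
        drop_color = np.clip(patch.reshape(-1, 3).mean(axis=0) * 1.1, 0, 1)
        out = (1 - 0.7 * alpha) * out + 0.7 * alpha * drop_color
    return np.clip(out, 0.0, 1.0)

clean = np.random.rand(120, 160, 3)   # stand-in for a rain-free frame
rained = add_synthetic_raindrops(clean)
print(rained.shape)                   # (120, 160, 3)
```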

    Adherent raindrop detection and removal in video

    Abstract: Raindrops adhering to a windscreen or window glass can significantly degrade the visibility of a scene. Detecting and removing raindrops therefore benefits many computer vision applications, particularly outdoor surveillance systems and intelligent vehicle systems. In this paper, a method that automatically detects and removes adherent raindrops is introduced. The core idea is to exploit the local spatiotemporal derivatives of raindrops. First, the method detects raindrops based on the motion and the temporal intensity derivatives of the input video. Second, based on the observation that some areas of a raindrop completely occlude the scene while the remaining areas occlude it only partially, the method removes the two types of areas separately. Partially occluding areas are restored by retrieving as much scene information as possible, namely by solving a blending function on the detected areas using the temporal intensity change. Completely occluding areas are recovered using a video completion technique. Experimental results on various real videos show the effectiveness of the proposed method.
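    A toy version of the detect-then-restore idea (skipping the video-completion step for completely occluding areas) could look like the sketch below. The threshold, the constant-alpha blending model, and both function names are illustrative assumptions, not the paper's formulation.

```python
# Rough sketch of the two-stage idea: flag pixels whose intensity barely
# changes over time while the scene moves (candidate adherent drops), then
# restore partially occluding pixels by inverting a simple alpha-blend model.
import numpy as np

def detect_static_drops(frames, motion_thresh=0.02):
    """frames: TxHxW grayscale video in [0, 1]. Returns an HxW boolean mask.

    Adherent drops stay fixed on the glass, so their temporal intensity
    derivative stays small even while the background scene is moving."""
    temporal_std = frames.std(axis=0)
    return temporal_std < motion_thresh

def restore_partial(frame, mask, alpha=0.6, drop_color=0.8):
    """Invert I = alpha*drop + (1-alpha)*scene on partially occluding pixels."""
    restored = frame.copy()
    restored[mask] = np.clip((frame[mask] - alpha * drop_color) / (1 - alpha), 0, 1)
    return restored

frames = np.random.rand(10, 90, 120) * 0.5 + 0.25   # stand-in video clip
mask = detect_static_drops(frames)
out = restore_partial(frames[-1], mask)
print(mask.sum(), out.shape)                         # e.g. 0 (90, 120)
```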

    Real-time precipitation suppression in video streams

    In surveillance cameras, rain and snow can introduce unwelcome noise into the video stream; rain appears in the video frames as bright streaks. These streaks can disturb both human viewers and image-processing algorithms. Rain streaks can be hard to detect and remove because they are a highly dynamic phenomenon that depends on camera settings and weather conditions. This thesis surveys existing rain-removal algorithms and compares and evaluates them. Because surveillance cameras supply video in real time, it is not possible to access the whole video or to perform heavy computations that rely on information from future frames. Qualities such as the level of streak suppression and the time required for the necessary calculations are weighed against each other.
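    One classic approach in this family, which respects the real-time constraint by using only past frames, is a causal per-pixel temporal median. The sketch below is a generic baseline of that kind, not one of the specific algorithms evaluated in the thesis, and the window size k is an arbitrary choice.

```python
# Minimal causal baseline for streak suppression: a per-pixel median over the
# last k frames. Rain streaks are brief, so a short temporal median removes
# them; only past frames are used, matching the real-time constraint.
import collections
import numpy as np

class CausalMedianDerainer:
    def __init__(self, k=5):
        self.buffer = collections.deque(maxlen=k)  # holds the last k frames

    def process(self, frame):
        """frame: HxW (or HxWx3) float array. Returns the derained frame."""
        self.buffer.append(frame)
        # Median over whatever history exists so far -- no future frames needed.
        return np.median(np.stack(self.buffer), axis=0)

derainer = CausalMedianDerainer(k=5)
stream = np.random.rand(20, 60, 80)   # stand-in for a live video stream
for f in stream:
    clean = derainer.process(f)
print(clean.shape)                    # (60, 80)
```

    The trade-off named in the abstract shows up directly here: a larger k suppresses streaks more strongly but increases memory, latency, and ghosting on moving objects.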