
    Rain Removal in Traffic Surveillance: Does it Matter?

    Varying weather conditions, including rainfall and snowfall, are generally regarded as a challenge for computer vision algorithms. One proposed solution to the challenges induced by rain and snowfall is to artificially remove the rain from images or video using rain removal algorithms. The promise of these algorithms is that the de-rained image frames will improve the performance of subsequent segmentation and tracking algorithms. However, rain removal algorithms are typically evaluated on their ability to remove synthetic rain from a small subset of images, and their behavior on real-world video, when integrated into a typical computer vision pipeline, is currently unknown. In this paper, we review the existing rain removal algorithms and propose a new dataset of 22 traffic surveillance sequences captured under a broad variety of weather conditions, all of which include either rain or snowfall. We propose a new evaluation protocol that assesses rain removal algorithms on their ability to improve the performance of subsequent segmentation, instance segmentation, and feature tracking algorithms under rain and snow. If successful, the de-rained frames produced by a rain removal algorithm should improve segmentation performance and increase the number of accurately tracked features. The results show that a recent single-frame-based rain removal algorithm increases segmentation performance by 19.7% on our proposed dataset, but it decreases feature tracking performance and shows mixed results with recent instance segmentation methods. However, the best video-based rain removal algorithm improves feature tracking accuracy by 7.72%.

    Comment: Published in IEEE Transactions on Intelligent Transportation Systems
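    To make the described evaluation protocol concrete, the sketch below shows the general shape of a task-driven comparison: score a downstream segmentation algorithm on the original rainy frames and again on the de-rained frames, then report the difference. The function names (derain, segment, score) are hypothetical placeholders for whichever rain removal algorithm, segmentation model, and metric are under test; this is not the paper's actual code.

```python
# Hypothetical sketch of a task-driven evaluation: the rain removal algorithm is
# judged by how much it changes a downstream metric, not by pixel fidelity to a
# synthetic-rain ground truth. `derain`, `segment`, and `score` are placeholders.

def downstream_gain(frames, annotations, derain, segment, score):
    """Return the mean change in a downstream score when frames are de-rained first."""
    baseline = [score(segment(frame), gt) for frame, gt in zip(frames, annotations)]
    derained = [score(segment(derain(frame)), gt) for frame, gt in zip(frames, annotations)]
    mean = lambda values: sum(values) / len(values)
    return mean(derained) - mean(baseline)  # positive => de-raining helped the task
```

    The same comparison applies to instance segmentation and feature tracking by swapping in the corresponding task and metric.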

    Processing Color in Astronomical Imagery

    Every year, hundreds of images from telescopes on the ground and in space are released to the public, making their way into popular culture through everything from computer screens to postage stamps. These images span the entire electromagnetic spectrum, from radio waves to infrared light to X-rays and gamma rays, the majority of which is undetectable to the human eye without technology. Once these data are collected, one or more specialists must process them to create an image. The creation of astronomical imagery therefore involves a series of choices. How do these choices affect the comprehension of the science behind the images? What is the best way to represent data to a non-expert? Should these choices be based on aesthetics, on scientific veracity, or is it possible to satisfy both? This paper reviews just one choice out of the many made by astronomical image processors: color. The choice of color is one of the most fundamental decisions in creating an image from a modern telescope. We briefly explore the concept of the image as translation, particularly in the case of astronomical images from invisible portions of the electromagnetic spectrum. After placing modern astronomical imagery, and photography in general, in the context of its historical beginnings, we review the standards (or lack thereof) guiding the basic choice of color. We discuss the possible implications of selecting one color palette over another in the context of the appropriateness of using these images as science communication products, with a specific focus on how the non-expert perceives these images and how that affects their trust in science. Finally, we share new data sets that begin to examine these issues in scholarly research and discuss the need for a more robust examination of this and related topics in the future to better understand the implications for science communication.

    Comment: 10 pages, 6 figures, published in Studies in Media and Communication
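    As a small illustration of the "image as translation" point, the sketch below maps a single band of intensity data, which carries no inherent color, onto two different arbitrary palettes, producing two visually distinct renderings of the same measurements. The gradient endpoints and the random input are invented for illustration; real pipelines typically combine several bands and use carefully designed colormaps rather than a simple two-color gradient.

```python
# Hypothetical illustration: data from an invisible band (e.g. X-ray counts) has
# no inherent color, so the processor must choose a palette. Here one intensity
# array is rendered with two different (arbitrary) two-color gradients.
import numpy as np

def colorize(band, low=(0.0, 0.0, 0.2), high=(1.0, 0.6, 0.0)):
    """Map a 2-D intensity array to RGB by linear interpolation between two colors."""
    lo, hi = band.min(), band.max()
    t = (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)
    low, high = np.asarray(low), np.asarray(high)
    return t[..., None] * high + (1.0 - t[..., None]) * low  # shape (H, W, 3)

# The same fake "observation" rendered in two palettes yields two different-looking
# images of identical underlying measurements.
data = np.random.rand(64, 64) ** 2
warm = colorize(data)                        # dark blue -> orange
cool = colorize(data, high=(0.2, 0.9, 1.0))  # dark blue -> cyan
```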