
    UG^2: a Video Benchmark for Assessing the Impact of Image Restoration and Enhancement on Automatic Visual Recognition

    Advances in image restoration and enhancement techniques have led to discussion about how such algorithms can be applied as a pre-processing step to improve automatic visual recognition. In principle, techniques like deblurring and super-resolution should yield improvements by de-emphasizing noise and increasing signal in an input image. But the historically divergent goals of the computational photography and visual recognition communities have created a significant need for more work in this direction. To facilitate new research, we introduce a new benchmark dataset called UG^2, which contains three difficult real-world scenarios: uncontrolled videos taken by UAVs and manned gliders, as well as controlled videos taken on the ground. Over 160,000 annotated frames for hundreds of ImageNet classes are available, which are used for baseline experiments that assess the impact of known and unknown image artifacts and other conditions on common deep learning-based object classification approaches. Further, current image restoration and enhancement techniques are evaluated by determining whether or not they improve baseline classification performance. Results show that there is plenty of room for algorithmic innovation, making this dataset a useful tool going forward. Comment: Supplemental material: https://goo.gl/vVM1xe, Dataset: https://goo.gl/AjA6En, CVPR 2018 Prize Challenge: ug2challenge.or
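    As a rough illustration of the kind of baseline evaluation described above, the sketch below compares the top-1 accuracy of a pretrained ImageNet classifier on raw frames versus frames passed through an enhancement step. The names `restore`, `frames`, and `labels` are placeholders; this is an assumed setup, not the UG^2 evaluation code.

```python
# Hypothetical sketch: does an enhancement step improve top-1 accuracy?
import torch
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def top1_accuracy(frames, labels, enhance=None):
    """Classify PIL frames, optionally running an enhancement step first."""
    correct = 0
    with torch.no_grad():
        for img, label in zip(frames, labels):
            if enhance is not None:
                img = enhance(img)  # e.g. deblurring or super-resolution
            logits = model(preprocess(img).unsqueeze(0))
            correct += int(logits.argmax(1).item() == label)
    return correct / len(labels)

# baseline = top1_accuracy(frames, labels)
# restored = top1_accuracy(frames, labels, enhance=restore)
```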

    Converting Optical Videos to Infrared Videos Using Attention GAN and Its Impact on Target Detection and Classification Performance

    To apply powerful deep-learning-based algorithms for object detection and classification in infrared videos, more training data are needed to build high-performance models. However, in many surveillance applications, far more optical videos are available than infrared videos. This lack of IR video data can be mitigated if optical-to-infrared video conversion is possible. In this paper, we present a new approach for converting optical videos to infrared videos using deep learning. The basic idea is to focus on target areas using an attention generative adversarial network (attention GAN), which preserves the fidelity of target areas. The approach does not require paired images. The performance of the proposed attention GAN has been demonstrated using objective and subjective evaluations. Most importantly, the impact of attention GAN has been demonstrated through improved target detection and classification performance on real infrared videos.
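    A minimal sketch of the attention idea, assuming a generator that predicts both a translated IR-like image and an attention mask and then composites them so that target regions receive most of the translation. This is an illustrative PyTorch module, not the authors' architecture, and the unpaired (adversarial and cycle-consistency) losses are omitted.

```python
import torch
import torch.nn as nn

class AttentionGenerator(nn.Module):
    """Illustrative optical-to-IR generator with a learned attention mask."""
    def __init__(self, channels=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_ir = nn.Conv2d(channels, 1, 3, padding=1)    # IR-like output
        self.to_attn = nn.Conv2d(channels, 1, 3, padding=1)  # attention mask

    def forward(self, optical):
        feats = self.backbone(optical)
        ir = torch.tanh(self.to_ir(feats))
        attn = torch.sigmoid(self.to_attn(feats))
        # Blend: translate attended (target) regions, keep the background
        # close to the input luminance so target fidelity is preserved.
        gray = optical.mean(dim=1, keepdim=True)
        return attn * ir + (1.0 - attn) * gray, attn

gen = AttentionGenerator()
fake_ir, mask = gen(torch.randn(1, 3, 128, 128))
```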

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
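    To make the output format concrete: each event is a tuple (t, x, y, polarity), and a common first step when reusing frame-based algorithms is to accumulate events from a short time window into a signed 2D histogram ("event frame"). The sketch below is illustrative; the variable names and window bounds are assumptions, not part of the survey.

```python
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate signed events falling in [t_start, t_end) into an image."""
    frame = np.zeros((height, width), dtype=np.float32)
    for t, x, y, polarity in events:
        if t_start <= t < t_end:
            frame[y, x] += 1.0 if polarity > 0 else -1.0
    return frame

# events = [(0.0012, 10, 20, +1), (0.0015, 11, 20, -1), ...]
# frame = events_to_frame(events, height=180, width=240,
#                         t_start=0.0, t_end=0.005)
```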

    Irish Machine Vision and Image Processing Conference Proceedings 2017


    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows us to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene integrating the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, material classification, etc. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
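    As a concrete example of post-capture refocusing, the sketch below performs a simple shift-and-sum over the angular dimensions of a 4D light field stored as a (U, V, H, W) array of sub-aperture views. The array layout and the slope parameter are assumptions made for illustration, not a specific method from the survey.

```python
import numpy as np

def refocus(light_field, slope):
    """Synthesize an image focused at the depth implied by `slope`.

    light_field: array of shape (U, V, H, W) holding sub-aperture views.
    slope: per-view shift in pixels per unit of angular offset.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - U // 2)))
            dx = int(round(slope * (v - V // 2)))
            # Shift each view according to its angular offset, then average.
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```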

    Visual Content Privacy Protection: A Survey

    Vision is the most important sense for people, and it is also one of the main ways of cognition. As a result, people tend to utilize visual content to capture and share their life experiences, which greatly facilitates the transfer of information. Meanwhile, it also increases the risk of privacy violations, e.g., an image or video can reveal different kinds of privacy-sensitive information. Researchers have been working continuously to develop targeted privacy protection solutions, and there are several surveys that summarize them from certain perspectives. However, these surveys are either problem-driven, scenario-specific, or technology-specific, making it difficult for them to summarize the existing solutions in a macroscopic way. In this survey, a framework that encompasses various concerns and solutions for visual privacy is proposed, which allows for a macro understanding of privacy concerns at a comprehensive level. It is based on the fact that privacy concerns have corresponding adversaries, and divides privacy protection into three categories: protection against a computer vision (CV) adversary, against a human vision (HV) adversary, and against a combined CV & HV adversary. For each category, we analyze the characteristics of the main approaches to privacy protection, and then systematically review representative solutions. Open challenges and future directions for visual privacy protection are also discussed. Comment: 24 pages, 13 figures
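    As a toy example of a defence aimed at the human vision (HV) adversary in the taxonomy above, the sketch below pixelates a sensitive image region with OpenCV. The region coordinates are placeholders; real systems would locate sensitive content automatically.

```python
import cv2

def pixelate_region(image, x, y, w, h, block=16):
    """Downsample and re-upsample a region so it is unreadable to a viewer."""
    roi = image[y:y + h, x:x + w]
    small = cv2.resize(roi, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    image[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                         interpolation=cv2.INTER_NEAREST)
    return image

# img = cv2.imread("frame.jpg")
# img = pixelate_region(img, x=120, y=80, w=64, h=64)
```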

    Neuron-level dynamics of oscillatory network structure and markerless tracking of kinematics during grasping

    Oscillatory synchrony is proposed to play an important role in flexible sensory-motor transformations. It is assumed that changes in the oscillatory network structure at the level of single neurons lead to flexible information processing. Yet, how the oscillatory network structure at the neuron level changes with different behavior remains elusive. To address this gap, we examined changes in the fronto-parietal oscillatory network structure at the neuron level while monkeys performed a flexible sensory-motor grasping task. We found that neurons formed separate subnetworks in the low-frequency and beta bands. The beta subnetwork was active during steady states and the low-frequency subnetwork during active states of the task, suggesting that the two frequencies are mutually exclusive at the neuron level. Furthermore, both frequency subnetworks reconfigured at the neuron level for different grip and context conditions, an effect that was mostly lost at any scale larger than single neurons in the network. Our results therefore suggest that the oscillatory network structure at the neuron level meets the necessary requirements for the coordination of flexible sensory-motor transformations. In addition, tracking hand kinematics is a crucial experimental requirement for analyzing neuronal control of grasp movements. To this end, a 3D markerless, gloveless hand tracking system was developed using computer vision and deep learning techniques.
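    A hedged sketch of the 3D reconstruction step typical of such markerless pipelines: 2D keypoints detected in two calibrated camera views (for example by a deep keypoint detector) are triangulated into 3D with OpenCV. The projection matrices and keypoint arrays below are placeholders, and this is not the authors' implementation.

```python
import cv2
import numpy as np

def triangulate_keypoints(P1, P2, pts_cam1, pts_cam2):
    """P1, P2: 3x4 camera projection matrices; pts_*: Nx2 pixel coordinates."""
    pts1 = np.asarray(pts_cam1, dtype=np.float64).T    # 2xN
    pts2 = np.asarray(pts_cam2, dtype=np.float64).T    # 2xN
    homog = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous points
    return (homog[:3] / homog[3]).T                    # Nx3 points in 3D

# keypoints_3d = triangulate_keypoints(P_cam1, P_cam2,
#                                      detections_cam1, detections_cam2)
```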