    Full Reference Objective Quality Assessment for Reconstructed Background Images

    With an increased interest in applications that require a clean background image, such as video surveillance, object tracking, street view imaging and location-based services on web-based maps, multiple algorithms have been developed to reconstruct a background image from cluttered scenes. Traditionally, statistical measures and existing image quality techniques have been applied to evaluate the quality of reconstructed background images. Though these quality assessment methods have been widely used in the past, their performance in evaluating the perceived quality of reconstructed background images has not been verified. In this work, we discuss the shortcomings of existing metrics and propose a full-reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales using a probability summation model to predict the perceived quality of a reconstructed background image given a reference image. To compare the performance of the proposed quality index with existing image quality assessment measures, we construct two datasets consisting of reconstructed background images and corresponding subjective scores. The quality assessment measures are evaluated by correlating their objective scores with human subjective ratings. The correlation results show that the proposed RBQI outperforms all existing approaches. Additionally, the constructed datasets and the corresponding subjective scores provide a benchmark for evaluating future metrics developed to assess the perceived quality of reconstructed background images.
    Comment: Associated source code: https://github.com/ashrotre/RBQI; associated database: https://drive.google.com/drive/folders/1bg8YRPIBcxpKIF9BIPisULPBPcA5x-Bk?usp=sharing (email for permissions: ashrotre@asu.edu)
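    The probability summation pooling step can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation (which is linked above); the Weibull parameters alpha and beta are placeholder values, not the paper's.

```python
import numpy as np

def probability_summation(distortions, alpha=1.0, beta=3.5):
    """Pool local distortion values into one detection probability.

    Each local distortion d is mapped to a probability of being noticed
    via a Weibull psychometric function, and the independent detectors
    are combined by probability summation: P = 1 - prod_i (1 - p_i).
    """
    d = np.asarray(distortions, dtype=float)
    p_local = 1.0 - np.exp(-(d / alpha) ** beta)   # per-detector probability
    p_local = np.clip(p_local, 0.0, 1.0 - 1e-12)
    log_miss = np.sum(np.log1p(-p_local))          # log prob. that nothing is noticed
    return 1.0 - np.exp(log_miss)
```

    Under this model a single large local distortion dominates the pooled score, which is why probability summation is often preferred over plain averaging in perceptual quality indices.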

    An automatic technique for visual quality classification for MPEG-1 video

    The Centre for Digital Video Processing at Dublin City University developed Fischlar [1], a web-based system for recording, analysis, browsing and playback of digitally captured television programs. One major issue for Fischlar is the automatic evaluation of video quality in order to avoid processing and storing corrupted data. In this paper we propose an automatic classification technique that detects the quality of video content in order to provide a decision criterion for the processing and storage stages.
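    The paper's exact features are not reproduced here, but a minimal sketch of such a decision criterion, assuming a crude blockiness measure on the 8x8 coding grid of MPEG-1 frames and a hypothetical threshold, might look like:

```python
import numpy as np

def blockiness(frame, block=8):
    """Crude no-reference blockiness score for an 8x8-coded frame.

    Compares the mean absolute luminance jump across block boundaries
    with the mean jump inside blocks; heavily corrupted or strongly
    blocky frames score high. Illustrative only.
    """
    f = frame.astype(float)
    col_diff = np.abs(np.diff(f, axis=1))
    boundary = col_diff[:, block - 1::block].mean()              # across block edges
    interior = np.delete(col_diff, np.s_[block - 1::block], axis=1).mean()
    return boundary / (interior + 1e-9)

def is_corrupted(frame, threshold=2.0):
    # threshold is a hypothetical tuning parameter
    return blockiness(frame) > threshold
```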

    A bottom–up model of spatial attention predicts human error patterns in rapid scene recognition

    Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom–up visual processing (attentional selection and/or recognition) or top–down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of “surprise” in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those that all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.
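    The "surprise" measure referred to above is, in Itti and Baldi's formulation, the KL divergence between an observer's posterior and prior beliefs after seeing new data. A toy sketch over a discrete hypothesis set (an assumption about the form of the measure, not this study's implementation):

```python
import numpy as np

def bayesian_surprise(prior, likelihood):
    """Surprise elicited by new data: KL(posterior || prior).

    prior and likelihood are arrays over the same discrete hypothesis
    set; the prior is assumed normalized. Returns the divergence in nats.
    """
    prior = np.asarray(prior, dtype=float)
    posterior = prior * np.asarray(likelihood, dtype=float)
    posterior /= posterior.sum()                      # Bayes update
    mask = posterior > 0
    return np.sum(posterior[mask] * np.log(posterior[mask] / prior[mask]))
```

    Reordering a sequence changes each frame's likelihood under the running prior, and hence the surprise it elicits, without altering the frames themselves, which is what the second experiment exploits.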

    A perceptual hash function to store and retrieve large scale DNA sequences

    This paper proposes a novel approach for storing and retrieving massive DNA sequences. The method is based on a perceptual hash function, commonly used to determine the similarity between digital images, that we adapted for DNA sequences. The perceptual hash function presented here is based on a sign-only Discrete Cosine Transform (DCT-SO). Each nucleotide is encoded as a pixel with a fixed gray-level intensity, and the hash is calculated from the sequence's significant frequency characteristics. This results in a drastic data reduction between the sequence and its perceptual hash. Unlike cryptographic hash functions, perceptual hashes are not affected by the "avalanche effect" and can therefore be compared. The similarity between two hashes is estimated with the Hamming distance, which is then used to retrieve DNA sequences. Our experiments show that this approach is well suited to storing and retrieving massive DNA sequences.
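    The pipeline described above is concrete enough to sketch. In the following, the nucleotide gray levels, the image side and the number of retained low-frequency coefficients are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy.fft import dctn

LEVELS = {"A": 0, "C": 85, "G": 170, "T": 255}  # illustrative gray levels

def dna_phash(seq, side=64, keep=8):
    """Sign-only DCT perceptual hash of a DNA sequence (DCT-SO sketch)."""
    pixels = np.array([LEVELS.get(b, 0) for b in seq.upper()], dtype=float)
    pixels = np.resize(pixels, side * side).reshape(side, side)  # tile/truncate to a square image
    coeffs = dctn(pixels, norm="ortho")[:keep, :keep]            # keep low frequencies
    return (coeffs.ravel() >= 0).astype(np.uint8)                # signs only

def hamming(h1, h2):
    """Similarity distance between two hashes."""
    return int(np.count_nonzero(h1 != h2))
```

    Because only the signs of low-frequency coefficients are kept, similar sequences land at small Hamming distance, which is what makes retrieval by nearest hash possible.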

    Cellular neural networks for motion estimation and obstacle detection

    Obstacle detection is an important part of video processing because it is indispensable for preventing collisions of autonomously navigating moving objects. For example, vehicles driving without human guidance need a robust prediction of potential obstacles, such as other vehicles or pedestrians. Most common approaches to obstacle detection so far use analytical and statistical methods such as motion estimation or the generation of maps. In the first part of this contribution, a statistical algorithm for obstacle detection in monocular video sequences is presented. The proposed procedure is based on motion estimation and a planar world model appropriate to traffic scenes. The processing steps of the statistical procedure are feature extraction, a subsequent displacement vector estimation and a robust estimation of the motion parameters. Since the proposed procedure is composed of several processing steps, error propagation across the successive steps often leads to inaccurate results. In the second part of this contribution, it is demonstrated that these problems can be efficiently overcome by using Cellular Neural Networks (CNN). It is shown that a direct obstacle detection algorithm can easily be performed based only on CNN processing of the input images. Besides the enormous computing power of programmable CNN-based devices, the proposed method is also very robust in comparison to the statistical method, because it is much less sensitive to noisy inputs. Using the proposed approach to obstacle detection in planar worlds, real-time processing of large input images becomes possible.
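    The standard cellular-neural-network cell dynamics that the second part relies on can be sketched as a discrete-time update; the template values below are placeholders, not a tuned obstacle-detection template set:

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_step(x, u, A, B, z, dt=0.1):
    """One Euler step of the standard CNN dynamics
        x' = -x + A*y + B*u + z,   y = 0.5 * (|x + 1| - |x - 1|),
    where A (feedback) and B (control) are 3x3 templates convolved
    over the cell grid, u is the input image and z a bias."""
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))        # piecewise-linear output
    dx = (-x
          + convolve2d(y, A, mode="same", boundary="symm")
          + convolve2d(u, B, mode="same", boundary="symm")
          + z)
    return x + dt * dx

# Example: an edge-detection-like template pair (illustrative values only)
A = np.zeros((3, 3)); A[1, 1] = 2.0
B = -np.ones((3, 3)); B[1, 1] = 8.0
```

    Because every cell applies the same local template in parallel, a programmable CNN chip evaluates such an update over the whole frame at once, which is the source of the computing power the abstract mentions.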

    Real Time Turbulent Video Perfecting by Image Stabilization and Super-Resolution

    Image and video quality in Long Range Observation Systems (LOROS) suffers from atmospheric turbulence, which causes small neighbourhoods in image frames to move chaotically in different directions and substantially hampers visual analysis of such image and video sequences. The paper presents a real-time algorithm for perfecting turbulence-degraded videos by means of stabilization and resolution enhancement, the latter achieved by exploiting the turbulent motion itself. The algorithm involves generation of a reference frame; estimation, for each incoming video frame, of a local image displacement map with respect to the reference frame; segmentation of the displacement map into two classes, stationary and moving objects; and resolution enhancement of stationary objects while preserving real motion. Experiments with synthetic and real-life sequences have shown that the enhanced videos, generated in real time, exhibit substantially better resolution and complete stabilization of stationary objects while retaining real motion.
    Comment: Submitted to the Seventh IASTED International Conference on Visualization, Imaging, and Image Processing (VIIP 2007), August 2007, Palma de Mallorca, Spain.
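    Two steps of this pipeline lend themselves to a compact sketch: building the reference frame (here by a temporal median, one plausible choice; the paper's exact method may differ) and segmenting the displacement map into stationary and moving classes (the pixel threshold is an illustrative tuning parameter):

```python
import numpy as np

def reference_frame(frames):
    """Temporal median across frames: turbulent jitter is roughly
    zero-mean, so the median approximates the stable background."""
    return np.median(np.stack(frames), axis=0)

def segment_motion(displacement, threshold=2.0):
    """Split a per-pixel displacement map (last axis = (dy, dx)) into
    the two classes the algorithm uses: small chaotic turbulent motion
    vs. large coherent displacement of really moving objects."""
    magnitude = np.hypot(displacement[..., 0], displacement[..., 1])
    return magnitude > threshold   # True = moving object, False = stationary
```

    Stationary pixels, whose sub-pixel turbulent displacements are known, can then be fused across frames for super-resolution, while pixels flagged as really moving are passed through unchanged to preserve genuine motion.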