
    Spatiotemporal Video Quality Assessment Method via Multiple Feature Mappings

    Advanced video quality assessment (VQA) methods aim to evaluate the perceptual quality of videos in many applications, but they often incur high computational complexity. The difficulty stems from the complexity of distorted videos, which are of significant concern in the communication industry, and from the two-fold (spatial and temporal) nature of video distortion. The findings of this study indicate that the information in spatiotemporal slice (STS) images is useful for measuring video distortion. This paper focuses on developing a full-reference VQA algorithm that integrates several features of the spatiotemporal slices of frames into a high-performance video quality estimator. The method is evaluated on several VQA databases and proceeds in the following steps: (1) we first rearrange the reference and test video sequences into a spatiotemporal slice representation, compute a collection of spatiotemporal feature maps on each reference-test pair, and process these features with the Structural Similarity (SSIM) index to form a local frame quality measure; (2) to further enhance the assessment, we combine the spatial feature maps with the spatiotemporal feature maps and propose a VQA model named multiple map similarity feature deviation (MMSFD-STS); (3) we apply a sequential pooling strategy to assemble the per-frame quality indices into a video quality score; (4) extensive evaluations on video quality databases show that the proposed VQA algorithm achieves better or competitive performance compared with other state-of-the-art methods.
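    To make steps (1) and (3) concrete, here is a minimal sketch assuming grayscale videos stored as numpy arrays of shape (T, H, W); the helper names (extract_sts, video_quality), the slice sampling, and the equal spatial/temporal weighting are illustrative assumptions, not the paper's MMSFD-STS model, and the SSIM comes from scikit-image:

```python
# Minimal sketch of spatiotemporal-slice (STS) quality estimation.
# Assumes reference and distorted videos are numpy arrays of shape (T, H, W).
import numpy as np
from skimage.metrics import structural_similarity as ssim

def extract_sts(video, row):
    """Horizontal spatiotemporal slice: fix one image row and stack it
    over time. Result has shape (T, W): time on one axis, space on the other."""
    return video[:, row, :]

def sts_quality(ref, dist, n_slices=16):
    """Compare reference/distorted STS images with SSIM and average."""
    T, H, W = ref.shape
    rows = np.linspace(0, H - 1, n_slices, dtype=int)
    scores = [ssim(extract_sts(ref, r), extract_sts(dist, r), data_range=255)
              for r in rows]
    return float(np.mean(scores))

def video_quality(ref, dist):
    """Blend per-frame spatial SSIM with STS similarity.
    The 0.5/0.5 weighting is an illustrative choice, not the paper's."""
    spatial = np.mean([ssim(ref[t], dist[t], data_range=255)
                       for t in range(ref.shape[0])])
    return 0.5 * spatial + 0.5 * sts_quality(ref, dist)
```

    The full model additionally derives deviation features from multiple spatiotemporal feature maps before pooling; the sketch only shows the slice representation and SSIM comparison it builds on.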

    Automated Complexity-Sensitive Image Fusion

    To construct a complete representation of a scene despite environmental obstacles such as fog, smoke, darkness, or textural homogeneity, multisensor video streams captured in different modalities are considered. A computational method for automatically fusing multimodal image streams into a single, highly informative stream is proposed. The method consists of the following steps: (1) image registration is performed to align video frames in the visible band over time, adapting to the nonplanarity of the scene by automatically subdividing the image domain into regions approximating planar patches; (2) wavelet coefficients are computed for each input frame in each modality; (3) corresponding regions and points are compared using spatial and temporal information across various scales; (4) decision rules based on the results of multimodal image analysis are used to combine the wavelet coefficients from the different modalities; (5) the combined wavelet coefficients are inverted to produce an output frame containing useful information gathered from the available modalities. Experiments show that the proposed system produces fused output that retains the characteristics of color visible-spectrum imagery while adding information exclusive to infrared imagery, with attractive visual and informational properties.
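    As a rough illustration of steps (2), (4), and (5), the following sketch fuses two already-registered single-band frames in the wavelet domain using PyWavelets; the max-absolute-coefficient rule is a simple stand-in for the paper's multimodal decision rules, and the function name fuse_frames is hypothetical:

```python
# Wavelet-domain fusion of two registered, single-band frames
# (e.g., visible and infrared), sketched with PyWavelets.
import numpy as np
import pywt

def fuse_frames(frame_a, frame_b, wavelet="db2", levels=3):
    ca = pywt.wavedec2(frame_a, wavelet, level=levels)
    cb = pywt.wavedec2(frame_b, wavelet, level=levels)
    fused = [0.5 * (ca[0] + cb[0])]        # average the coarse approximation band
    for da, db in zip(ca[1:], cb[1:]):     # per-level detail tuples (H, V, D)
        # Keep whichever modality has the stronger detail response,
        # standing in for the paper's learned decision rules.
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)   # invert to get the fused frame
```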

    Quality Assessment of In-the-Wild Videos

    Quality assessment of in-the-wild videos is a challenging problem because of the absence of reference videos and the presence of shooting distortions. Knowledge of the human visual system can help establish objective quality assessment methods for such videos. In this work, we show that two eminent effects of the human visual system, namely content dependency and temporal-memory effects, can be used for this purpose. We propose an objective no-reference video quality assessment method that integrates both effects into a deep neural network. For content dependency, we extract features from a pre-trained image classification network for its inherent content-aware property. For temporal-memory effects, long-term dependencies, especially temporal hysteresis, are integrated into the network with a gated recurrent unit and a subjectively-inspired temporal pooling layer. To validate the performance of our method, experiments are conducted on three publicly available in-the-wild video quality assessment databases: KoNViD-1k, CVD2014, and LIVE-Qualcomm. Experimental results demonstrate that our method outperforms five state-of-the-art methods by a large margin, specifically 12.39%, 15.71%, 15.45%, and 18.09% overall performance improvements over the second-best method VBLIINDS in terms of SROCC, KROCC, PLCC, and RMSE, respectively. Moreover, an ablation study verifies the crucial role of both the content-aware features and the modeling of temporal-memory effects. The PyTorch implementation of our method is released at https://github.com/lidq92/VSFA.
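    The following PyTorch sketch shows the shape of such an architecture: frozen content-aware CNN features, a GRU over frames, and a hysteresis-inspired pooling in which quality drops dominate. The hidden size, pooling window, and score blend here are illustrative assumptions, not the released VSFA model:

```python
# Sketch: content-aware features -> GRU -> per-frame score -> temporal pooling
# that mimics hysteresis (viewers penalize quality drops more than they
# reward recoveries). See the authors' released code for the actual model.
import torch
import torch.nn as nn
import torchvision.models as models

class NRVQA(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # 2048-d
        for p in self.features.parameters():
            p.requires_grad = False        # content-aware features stay frozen
        self.gru = nn.GRU(2048, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frames):             # frames: (T, 3, H, W)
        f = self.features(frames).flatten(1).unsqueeze(0)  # (1, T, 2048)
        h, _ = self.gru(f)
        q = self.head(h).squeeze()         # per-frame quality, shape (T,)
        # Hysteresis-inspired pooling: blend each score with the worst
        # score in a short memory window, then average over time.
        window = 12                        # illustrative memory length
        memory = torch.stack([q[max(0, t - window):t + 1].min()
                              for t in range(q.numel())])
        return (0.5 * q + 0.5 * memory).mean()
```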

    Higher level techniques for the artistic rendering of images and video

    EThOS - Electronic Theses Online Service, United Kingdom

    A framework based on Gaussian mixture models and Kalman filters for the segmentation and tracking of anomalous events in shipboard video

    Anomalous indications in monitoring equipment on board U.S. Navy vessels must be handled in a timely manner to prevent catastrophic system failure. The development of sensor data analysis techniques to assist a ship's crew in monitoring machinery and summoning required ship-to-shore assistance is of considerable benefit to the Navy. In addition, the Navy has a large interest in the development of distance support technology in its ongoing efforts to reduce manning on ships. In this thesis, algorithms have been developed for the detection of anomalous events that can be identified from the analysis of monochromatic stationary ship surveillance video streams. The specific anomalies that we have focused on are the presence and growth of smoke and fire events inside the frames of the video stream. The algorithm consists of the following steps. First, a foreground segmentation algorithm based on adaptive Gaussian mixture models is employed to detect the presence of motion in a scene; the algorithm is adapted to emphasize gray-level characteristics related to smoke and fire events in the frame. Next, shape-discriminant features in the foreground are enhanced using morphological operations. Following this step, the anomalous indication is tracked between frames using Kalman filtering. Finally, gray-level shape and motion features corresponding to the anomaly are subjected to principal component analysis and classified using a multilayer perceptron neural network. The algorithm is exercised on 68 video streams that include the presence of anomalous events (such as fire and smoke) and benign/nuisance events (such as humans walking through the field of view). Initial results show that the algorithm is successful in detecting anomalies in video streams and is suitable for application in shipboard environments.
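    A rough sketch of the detection-and-tracking front end follows, using common OpenCV stand-ins: MOG2 for the adaptive Gaussian mixture background model, morphological opening for shape cleanup, and a constant-velocity Kalman filter on the largest blob's centroid. The thresholds, noise covariances, and input file name are illustrative assumptions:

```python
# Foreground segmentation + morphology + Kalman tracking, sketched with OpenCV.
import cv2
import numpy as np

backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

kf = cv2.KalmanFilter(4, 2)            # state: (x, y, vx, vy); measurement: (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

cap = cv2.VideoCapture("shipboard.avi")      # hypothetical surveillance stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = backsub.apply(gray)                           # step 1: motion mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)  # step 2: clean the shape
    pred = kf.predict()                                # step 3: predicted centroid
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(c)
        kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
    # step 4 (not shown): PCA on shape/motion features + MLP classification
```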