Intelligent image cropping and scaling
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 2011.

Nowadays, there exists a huge number of end devices with different screen properties for
watching television content, which is either broadcast or transmitted over the internet.
To allow the best viewing conditions on each of these devices, different image formats have
to be provided by the broadcaster. Producing content for every single format is,
however, not feasible for the broadcaster, as it is much too laborious and costly.
The most obvious solution for providing multiple image formats is to produce one high-resolution format and derive formats of lower resolution from it. One possibility is to simply scale video images to the resolution of the target image format. Two significant drawbacks are the loss of image detail through downscaling and possibly unused image areas due to letter- or pillarboxes. A preferable solution is to first find the contextually most important region in the high-resolution format and then crop this area with the aspect ratio of the target image format. However, defining
the contextually most important region manually is very time-consuming, and applying this to live productions would be nearly impossible. Therefore, some approaches exist that automatically define cropping areas. To do so, they extract visual features, such as moving areas in a video, and define regions of interest
(ROIs) based on those. ROIs are finally used to define an enclosing cropping area. The
extraction of features is done without any knowledge about the type of content. Hence,
these approaches are not able to distinguish between features that might be important in
a given context and those that are not.
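The final cropping step described above — enclosing the extracted ROIs in a crop window with the target aspect ratio — can be sketched as follows. This is a minimal illustration under assumed conventions (ROIs as `(x, y, w, h)` boxes, aspect ratio as width/height), not the thesis's actual implementation:

```python
def crop_window(rois, frame_w, frame_h, target_aspect):
    """Compute a crop rectangle that encloses all ROIs and has the
    target aspect ratio (width / height), clamped to the frame."""
    # Bounding box enclosing all ROIs (each ROI is (x, y, w, h)).
    x0 = min(x for x, y, w, h in rois)
    y0 = min(y for x, y, w, h in rois)
    x1 = max(x + w for x, y, w, h in rois)
    y1 = max(y + h for x, y, w, h in rois)
    bw, bh = x1 - x0, y1 - y0

    # Expand the shorter side so the box matches the target aspect ratio.
    if bw / bh < target_aspect:
        bw = bh * target_aspect
    else:
        bh = bw / target_aspect

    # Never exceed the frame; re-derive the other side if clamped.
    bw = min(bw, frame_w)
    bh = min(bh, frame_h)
    if bw / bh > target_aspect:
        bw = bh * target_aspect
    else:
        bh = bw / target_aspect

    # Centre the window on the ROI bounding box, keeping it inside the frame.
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    x = min(max(cx - bw / 2, 0), frame_w - bw)
    y = min(max(cy - bh / 2, 0), frame_h - bh)
    return x, y, bw, bh
```

For example, two ROIs in a 720×576 SDTV frame cropped for a 4:3 target yield a 4:3 window that covers both boxes.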
The work presented within this thesis tackles the problem of extracting visual features based on prior knowledge about the content. Such knowledge is fed into the system in the form of metadata that is available from TV production environments. Based on the
extracted features, ROIs are then defined and filtered depending on the analysed
content. As a proof of concept, the application finally adapts SDTV (Standard Definition Television) sports productions automatically to image formats with lower resolution through intelligent cropping and scaling. If no content information is available, the system can still be applied to any type of content through a default mode. The presented approach is based on the principle of a plug-in system. Each plug-in
represents a method for analysing video content information, either on a low level by
extracting image features or on a higher level by processing extracted ROIs. The
combination of plug-ins is determined by the incoming descriptive production metadata
and hence can be adapted to each type of sport individually. The application has been comprehensively evaluated by comparing the results of the system against alternative cropping methods. This evaluation utilised videos which were manually cropped by a professional video editor, statically cropped videos, and simply scaled, non-cropped videos. In addition to purely subjective evaluations,
the gaze positions of subjects watching sports videos were measured and compared
to the region-of-interest positions extracted by the system.
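The metadata-driven plug-in system described above can be sketched as a registry that maps incoming production metadata (e.g. the type of sport) to a chain of analysis plug-ins, falling back to a default chain when no content information is available. The names, metadata keys, and stand-in plug-ins below are illustrative, not the thesis's actual API:

```python
# Registry mapping a content type to its chain of analysis plug-ins.
PLUGIN_CHAINS = {}

def register_chain(content_type, *plugins):
    PLUGIN_CHAINS[content_type] = list(plugins)

def analyse(frame, metadata):
    """Run the plug-in chain selected by the production metadata.
    Each plug-in takes (frame, rois) and returns an updated ROI list."""
    chain = PLUGIN_CHAINS.get(metadata.get("sport"), PLUGIN_CHAINS["default"])
    rois = []
    for plugin in chain:
        rois = plugin(frame, rois)
    return rois

# Illustrative plug-ins: a low-level feature extractor and a
# higher-level ROI filter, as in the thesis's two plug-in levels.
def motion_rois(frame, rois):
    return rois + [(10, 10, 40, 40)]   # stand-in for real motion analysis

def keep_largest(frame, rois):
    return sorted(rois, key=lambda r: r[2] * r[3])[-1:] if rois else rois

register_chain("default", motion_rois)
register_chain("soccer", motion_rois, keep_largest)
```

An unknown sport in the metadata simply selects the `"default"` chain, mirroring the default mode mentioned in the abstract.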
Quality-Oriented Perceptual HEVC Based on the Spatiotemporal Saliency Detection Model
Perceptual video coding (PVC) can provide a lower bitrate with the same visual quality compared with traditional H.265/high efficiency video coding (HEVC). In this work, a novel H.265/HEVC-compliant PVC framework is proposed based on a video saliency model. Firstly, an effective and efficient spatiotemporal saliency model is used to generate a video saliency map. Secondly, a perceptual coding scheme is developed based on the saliency map: a saliency-based quantization control algorithm is proposed to reduce the bitrate. Finally, the simulation results demonstrate that the proposed perceptual coding scheme shows its superiority in objective and subjective tests, achieving up to a 9.46% bitrate reduction with negligible subjective and objective quality loss. The advantage of the proposed method is its high quality, well suited to high-definition video applications.
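Saliency-based quantization control of the kind described above is commonly realised by offsetting the quantization parameter (QP) per coding block according to its saliency: salient blocks get a lower QP (finer quantization), non-salient blocks a higher one. A minimal sketch under that assumption — the linear mapping and offset range are illustrative, not the paper's exact scheme:

```python
def saliency_qp(base_qp, saliency, max_offset=4):
    """Map a per-block saliency value in [0, 1] to a QP.

    saliency = 1 -> base_qp - max_offset (finest quantization)
    saliency = 0 -> base_qp + max_offset (coarsest quantization)
    The result is clamped to the H.265/HEVC QP range [0, 51].
    """
    offset = round(max_offset * (1.0 - 2.0 * saliency))
    return max(0, min(51, base_qp + offset))
```

Bits saved on low-saliency blocks offset the extra bits spent on salient ones, which is how such schemes lower the overall bitrate without visible quality loss.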
Video Saliency Detection by using an Enhance Methodology Involving a Combination of 3DCNN with Histograms
When watching pictures or videos, the human visual system has the potential to concentrate on important locations. Saliency detection is a tool for detecting the abnormality and randomness of images or videos by replicating the human visual system. Video saliency detection has received a lot of attention in recent decades, but due to challenging temporal abstraction and fusion for spatial saliency, computational modelling of spatial perception for video sequences is still limited. Unlike methods for the detection of salient objects in still images, one of the most difficult aspects of video saliency detection is figuring out how to isolate and integrate spatial and temporal features. Saliency detection, which is basically a tool to recognise areas in images and videos that catch the attention of the human visual system, may benefit multimedia applications such as video or image retrieval, copy detection, and so on. The two crucial steps in trajectory-based video classification methods are feature point identification and local feature extraction. In this paper, we suggest a new spatio-temporal saliency detection method using an enhanced 3D convolutional neural network combined with histograms of optical flow and oriented gradients.
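The oriented-gradient histogram component mentioned above can be illustrated with a minimal NumPy sketch. This is a generic HOG-style descriptor for a single grayscale patch, assumed as an analogue of the paper's histogram features rather than its exact pipeline:

```python
import numpy as np

def gradient_orientation_histogram(patch, n_bins=9):
    """Magnitude-weighted histogram of gradient orientations for one
    grayscale patch, with unsigned orientations binned over [0, 180)."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in degrees, folded into [0, 180).
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(orientation, bins=n_bins,
                           range=(0.0, 180.0), weights=magnitude)
    # Normalise so the descriptor is invariant to overall contrast.
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

A patch whose intensity ramps horizontally has gradients at 0°, so all of the histogram mass falls into the first bin.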
Semantic-Constraint Matching Transformer for Weakly Supervised Object Localization
Weakly supervised object localization (WSOL) strives to learn to localize
objects with only image-level supervision. Due to the local receptive fields
generated by convolution operations, previous CNN-based methods suffer from
partial activation issues, concentrating on the object's discriminative part
instead of the entire entity scope. Benefiting from the capability of the
self-attention mechanism to acquire long-range feature dependencies, Vision
Transformer has been recently applied to alleviate the local activation
drawbacks. However, since the transformer lacks the inductive localization bias
that is inherent in CNNs, it may cause a divergent activation problem,
resulting in an uncertain distinction between foreground and background. In
this work, we propose a novel Semantic-Constraint Matching Network (SCMN) via
a transformer to converge on the divergent activation. Specifically, we first
propose a local patch shuffle strategy to construct the image pairs, disrupting
local patches while guaranteeing global consistency. The paired images, which
contain a common object in space, are then fed into the Siamese network
encoder. We further design a semantic-constraint matching module, which aims to
mine the co-object part by matching the coarse class activation maps (CAMs)
extracted from the pair images, thus implicitly guiding and calibrating the
transformer network to alleviate the divergent activation. Extensive
experimental results conducted on two challenging benchmarks, the CUB-200-2011
and ILSVRC datasets, show that our method achieves new state-of-the-art
performance and outperforms the previous methods by a large margin.
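The local patch shuffle strategy described above — permuting local patches of an image while its global content stays the same — can be sketched as follows. This is a generic illustration assuming an image that divides evenly into a square grid of patches, not the authors' exact augmentation:

```python
import numpy as np

def local_patch_shuffle(image, grid=4, rng=None):
    """Split an HxW(xC) image into a grid x grid set of patches and
    randomly permute them, keeping each patch's content intact."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    assert h % grid == 0 and w % grid == 0, "image must divide evenly"
    # Cut the image into patches in row-major order.
    patches = [image[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
               for i in range(grid) for j in range(grid)]
    order = rng.permutation(len(patches))
    # Reassemble the shuffled patches into a new image.
    out = np.empty_like(image)
    for k, idx in enumerate(order):
        i, j = divmod(k, grid)
        out[i*ph:(i+1)*ph, j*pw:(j+1)*pw] = patches[idx]
    return out
```

The shuffled image keeps exactly the same pixel content as the original (global consistency) while its local arrangement is disrupted, which is the property the matching module exploits.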