Low complexity video compression using moving edge detection based on DCT coefficients
In this paper, we propose a new low complexity video compression method based on detecting blocks containing moving edges using only DCT coefficients. The detection, whilst being very efficient, also enables efficient motion estimation by constraining the search process to moving macroblocks only. The encoder's PSNR is degraded by 2 dB compared to H.264/AVC inter coding for such scenarios, whilst requiring only 5% of the execution time. The computational complexity of our approach is comparable to that of the DISCOVER codec, the state of the art in low complexity distributed video coding. The proposed method finds blocks containing moving edges and processes only those blocks. The approach is particularly suited to surveillance-type scenarios with a static camera.
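For intuition, here is a minimal Python sketch of this kind of DCT-domain detection: a block is flagged as a moving-edge block when its AC energy (an edge-strength proxy) is high and its DCT coefficients change notably between frames. The thresholds t_edge and t_motion and the specific energy measures are illustrative assumptions, not the paper's exact criteria.

```python
# Minimal sketch of moving-edge block detection from DCT coefficients.
# Thresholds and energy measures are illustrative assumptions.
import numpy as np
from scipy.fftpack import dct

def block_dct(block):
    """2-D type-II DCT of an 8x8 block."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def moving_edge_blocks(prev, curr, bs=8, t_edge=50.0, t_motion=25.0):
    """Flag blocks that both contain an edge (high AC energy) and
    whose DCT coefficients changed notably between frames."""
    h, w = curr.shape
    flags = np.zeros((h // bs, w // bs), dtype=bool)
    for by in range(h // bs):
        for bx in range(w // bs):
            ys, xs = by * bs, bx * bs
            c = block_dct(curr[ys:ys+bs, xs:xs+bs].astype(np.float64))
            p = block_dct(prev[ys:ys+bs, xs:xs+bs].astype(np.float64))
            ac_energy = np.abs(c).sum() - np.abs(c[0, 0])  # edge-strength proxy
            change = np.abs(c - p).sum()                    # temporal activity
            flags[by, bx] = ac_energy > t_edge and change > t_motion
    return flags
```

Motion estimation would then be run only on the flagged blocks, which is where the reported execution-time savings come from.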
FOVQA: Blind Foveated Video Quality Assessment
Previous blind or No Reference (NR) video quality assessment (VQA) models largely rely on features drawn from natural scene statistics (NSS), but under the assumption that the image statistics are stationary in the spatial domain. Several of these models are quite successful on standard pictures. However, in Virtual Reality (VR) applications, foveated video compression is regaining attention, and the concept of space-variant quality assessment is of interest, given the availability of increasingly high spatial and temporal resolution contents and practical ways of measuring gaze direction. Distortions from foveated video compression increase with increased eccentricity, implying that the natural scene statistics are space-variant. Towards advancing the development of foveated compression / streaming algorithms, we have devised a no-reference (NR) foveated video quality assessment model, called FOVQA, which is based on new models of space-variant natural scene statistics (NSS) and natural video statistics (NVS). Specifically, we deploy a space-variant generalized Gaussian distribution (SV-GGD) model and a space-variant asynchronous generalized Gaussian distribution (SV-AGGD) model of mean subtracted contrast normalized (MSCN) coefficients and products of neighboring MSCN coefficients, respectively. We devise a foveated video quality predictor that extracts radial basis features, as well as other features that capture perceptually annoying rapid quality fall-offs. We find that FOVQA achieves state-of-the-art (SOTA) performance on the new 2D LIVE-FBT-FCVR database, as compared with other leading FIQA / VQA models. We have made our implementation of FOVQA available at: http://live.ece.utexas.edu/research/Quality/FOVQA.zip
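For reference, the MSCN coefficients that the SV-GGD and SV-AGGD models are fitted to can be computed as in this minimal Python sketch; the Gaussian window parameters are illustrative choices, and FOVQA's exact settings may differ.

```python
# Minimal sketch of MSCN coefficient computation, the statistic that
# GGD/AGGD models are fitted to. Window parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalized coefficients of a grayscale image."""
    img = image.astype(np.float64)
    mu = gaussian_filter(img, sigma)                  # local mean
    var = gaussian_filter(img * img, sigma) - mu**2   # local variance
    sd = np.sqrt(np.maximum(var, 0))                  # local contrast
    return (img - mu) / (sd + c)

# Products of horizontally neighboring MSCN coefficients, the quantity
# modeled by the AGGD: m = mscn(img); pairs = m[:, :-1] * m[:, 1:]
```

The space-variant models then fit these distributions within annular regions of increasing eccentricity around the measured gaze point, rather than globally.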
Foveated Video Streaming for Cloud Gaming
Video gaming is generally a computationally intensive application, and to provide a pleasant user experience, specialized hardware such as Graphics Processing Units may be required. Computational resources and power consumption are constraints which limit visually complex gaming on, for example, laptops, tablets, and smartphones. Cloud gaming may be a possible approach towards providing a pleasant gaming experience on thin clients which have limited computational and energy resources. In a cloud gaming architecture, the gameplay video is rendered and encoded in the cloud and streamed to a client, where it is displayed. User inputs are captured at the client and streamed back to the server, where they are relayed to the game. High quality of experience requires the streamed video to be of high visual quality, which translates to substantial downstream bandwidth requirements. The visual perception of the human eye is non-uniform, being maximal along the optical axis of the eye and dropping off rapidly away from it. This phenomenon, called foveation, makes the practice of encoding all areas of a video frame at the same resolution wasteful.
In this thesis, foveated video streaming from a cloud gaming server to a cloud gaming client is investigated. A prototype cloud gaming system with foveated video streaming is implemented. The cloud gaming server of the prototype is configured to encode gameplay video in a foveated fashion based on gaze location data provided by the cloud gaming client. The effect of foveated encoding on the output bitrate of the streamed video is investigated. Measurements are performed using games from various genres and with different player points of view to explore changes in video bitrate with different parameters of foveation. Latencies involved in foveated video streaming for cloud gaming, including the latency of the eye tracker used in the thesis, are also briefly discussed.
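A gaze-driven quantization map of the kind such a prototype needs might look like the following Python sketch; the radial falloff, block size, and parameter values are illustrative assumptions rather than the thesis's actual configuration.

```python
# Minimal sketch of a gaze-driven quantization map for foveated encoding.
# Real encoders take per-block QP offsets (e.g., via ROI or adaptive-
# quantization interfaces); the radial falloff below is an assumed model.
import numpy as np

def qp_offset_map(width, height, gaze_x, gaze_y, bs=16,
                  fovea_radius=96.0, max_offset=10.0):
    """Per-macroblock QP offsets: 0 near the gaze point, rising with
    eccentricity up to max_offset (coarser quantization in the periphery)."""
    cols, rows = width // bs, height // bs
    xs = (np.arange(cols) + 0.5) * bs   # block-center x coordinates
    ys = (np.arange(rows) + 0.5) * bs   # block-center y coordinates
    dx = xs[None, :] - gaze_x
    dy = ys[:, None] - gaze_y
    ecc = np.sqrt(dx**2 + dy**2)        # distance from gaze point (pixels)
    offsets = max_offset * np.clip((ecc - fovea_radius) / fovea_radius, 0, 1)
    return np.round(offsets).astype(int)  # rows x cols grid of QP deltas
```

Larger QP offsets in the periphery are what reduce the output bitrate, which is the effect the thesis measures across genres and foveation parameters.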
Neural Representations for Sensory-Motor Control, II: Learning a Head-Centered Visuomotor Representation of 3-D Target Position
A neural network model is described for how an invariant head-centered representation of 3-D target position can be autonomously learned by the brain in real time. Once learned, such a target representation may be used to control both eye and limb movements. The target representation is derived from the positions of both eyes in the head, and the locations which the target activates on the retinas of both eyes. A Vector Associative Map, or VAM, learns the many-to-one transformation from multiple combinations of eye and retinal position to invariant 3-D target position. Eye position is derived from outflow movement signals to the eye muscles. Two successive stages of opponent processing convert these corollary discharges into a head-centered representation that closely approximates the azimuth, elevation, and vergence of the eyes' gaze position with respect to a cyclopean origin located between the eyes. VAM learning combines this cyclopean representation of present gaze position with binocular retinal information about target position into an invariant representation of 3-D target position with respect to the head. VAM learning can use a teaching vector that is externally derived from the positions of the eyes when they foveate the target. A VAM can also autonomously discover and learn the invariant representation, without an explicit teacher, by generating internal error signals from environmental fluctuations in which these invariant properties are implicit. VAM error signals are computed by Difference Vectors, or DVs, that are zeroed by the VAM learning process. VAMs may be organized into VAM Cascades for learning and performing both sensory-to-spatial maps and spatial-to-motor maps. These multiple uses clarify why DV-type properties are computed by cells in the parietal, frontal, and motor cortices of many mammals. VAMs are modulated by gating signals that express different aspects of the will-to-act. These signals transform a single invariant representation into movements of different speed (GO signal) and size (GRO signal), and thereby enable VAM controllers to match a planned action sequence to variable environmental conditions. National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309)
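For intuition, a VAM-style learning step can be sketched as follows in Python: a Difference Vector between the teaching signal and the map's current output drives a weight update that gradually zeroes the DV. The linear map and learning rate are illustrative simplifications of the model described above, not its actual network equations.

```python
# Minimal sketch of a VAM-style learning step: the Difference Vector (DV)
# between the teaching vector and the map's output drives a weight update
# that zeroes the DV over time. Linear map and learning rate are assumed
# simplifications of the model in the paper.
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((3, 6))          # maps eye + retinal positions -> 3-D target position

def vam_step(x, target, lr=0.1):
    """One online update: compute the DV and nudge weights to reduce it."""
    global W
    dv = target - W @ x        # Difference Vector (error signal)
    W += lr * np.outer(dv, x)  # Hebbian-style correction that zeroes the DV
    return dv

# Example: random eye/retina inputs paired with consistent 3-D targets.
M = rng.normal(size=(3, 6))    # stand-in for the true many-to-one mapping
for _ in range(2000):
    x = rng.normal(size=6)
    vam_step(x, M @ x)         # DV shrinks toward zero as W approaches M
```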
Perceptually-aware bilateral filtering for quality improvement in low bit rate video coding
Proceedings of: Picture Coding Symposium (PCS 2012), Krakow, Poland, May 7-9, 2012. Perceptual coding has become of great interest in modern video coding due to the need for higher compression rates. Many previous works have been carried out to incorporate perceptual information into hybrid video encoders, either modifying the quantization parameter according to a certain perceptual resource allocation map or preprocessing video sequences to remove information that is not perceptually relevant. The first strategy is limited by the presence of blocking artifacts, and the second lacks adaptation to video content. In this paper, a novel and simple approach is proposed, which performs a smart filtering prior to the encoding process, preserving both the structural and motion information. The experiments show that the proposed method, implemented on an H.264 encoder, significantly improves perceptual quality at low bit rates.
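One plausible reading of such a pre-encoding filter, sketched below in Python, blends a bilateral-filtered frame with the original under a per-pixel saliency map, so that structurally and perceptually important regions keep their detail; the saliency source and blend rule are assumptions, not the paper's exact method.

```python
# Minimal sketch of perceptual pre-filtering before encoding: a bilateral
# filter smooths perceptually less relevant regions while a saliency map
# preserves structure and motion areas. Saliency source and blend rule
# are illustrative assumptions.
import cv2
import numpy as np

def perceptual_prefilter(frame, saliency, d=9, sigma_color=40, sigma_space=7):
    """Blend bilateral-filtered and original pixels of a BGR frame by a
    per-pixel saliency map in [0, 1]: salient regions keep detail, the
    rest is smoothed (and thus cheaper to encode)."""
    smoothed = cv2.bilateralFilter(frame, d, sigma_color, sigma_space)
    s = np.clip(saliency, 0.0, 1.0)[..., None].astype(np.float32)
    out = s * frame.astype(np.float32) + (1 - s) * smoothed.astype(np.float32)
    return out.astype(frame.dtype)
```

Because the filtering happens before the encoder, it avoids the blocking artifacts of QP-map approaches while still adapting to content through the saliency map.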
Foveated object recognition by corner search
Here we describe a gray scale object recognition system based on foveated corner finding, the computation of sequential fixation points, and elements of Lowe's SIFT transform. The system achieves rotational, translational, and limited scale invariant object recognition, producing recognition decisions using data extracted from sequential fixation points. It is broken into two logical steps. The first is to develop principles of foveated visual search and automated fixation selection to accomplish corner search. The result is a new algorithm for finding corners which is also a corner-based algorithm for aiming computed foveated visual fixations. In the algorithm, long saccades move the fovea to previously unexplored areas of the image, while short saccades improve the accuracy of putative corner locations. The system is tested on two natural scenes. As an interesting comparison study, we compare fixations generated by the algorithm with those of subjects viewing the same images, whose eye movements are recorded by an eye tracker. The comparison of fixation patterns is made using an information-theoretic measure. Results show that the algorithm is a good locator of corners, but does not correlate particularly well with human visual fixations. The second step is to use the corners located, which meet certain goodness criteria, as keypoints in a modified version of the SIFT algorithm. Two scales are implemented. This implementation creates a database of SIFT features of known objects. To recognize an unknown object, a corner is located and a feature vector created. The feature vector is compared with those in the database of known objects. The process is continued for each corner in the unknown object until enough information has been accumulated to reach a decision. The system was tested on 78 gray scale objects, hand tools and airplanes, and shown to perform well.
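The saccadic corner-search loop described above can be sketched as follows in Python, with a Harris response standing in for the system's corner detector and with illustrative window sizes and thresholds.

```python
# Minimal sketch of the saccadic corner-search loop: long saccades jump
# to unexplored regions, short saccades home in on a putative corner
# inside the foveal window. Harris response is a stand-in detector;
# window sizes and thresholds are illustrative.
import numpy as np
import cv2

def corner_search(gray, n_fixations=50, fovea=32):
    h, w = gray.shape
    visited = np.zeros((h, w), dtype=bool)
    harris = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)
    hmax = harris.max()
    fx, fy = w // 2, h // 2                  # start fixation at image center
    corners = []
    for _ in range(n_fixations):
        y0, y1 = max(fy - fovea, 0), min(fy + fovea, h)
        x0, x1 = max(fx - fovea, 0), min(fx + fovea, w)
        visited[y0:y1, x0:x1] = True
        patch = harris[y0:y1, x0:x1]
        py, px = np.unravel_index(np.argmax(patch), patch.shape)
        if patch[py, px] > 0.01 * hmax:
            fx, fy = x0 + px, y0 + py        # short saccade: refine corner
            corners.append((fx, fy))
            harris[max(fy-4, 0):fy+4, max(fx-4, 0):fx+4] = 0  # suppress it
        else:
            unexplored = np.argwhere(~visited)
            if unexplored.size == 0:
                break
            fy, fx = unexplored[len(unexplored) // 2]  # long saccade
    return corners
```

Corners surviving the goodness criteria would then seed SIFT descriptors for matching against the database of known objects.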