Hand gesture recognition, prediction, and coding using hidden Markov models
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (leaves 56-57). By Katerina H. Nguyen, M.Eng.
Perceptual video quality assessment: the journey continues!
Perceptual Video Quality Assessment (VQA) is one of the most fundamental and challenging problems in the field of Video Engineering. Along with video compression, it has become one of the two dominant theoretical and algorithmic technologies in television streaming and social media. Over the past two decades, the volume of video traffic over the internet has grown exponentially, powered by rapid advancements in cloud services, faster video compression technologies, and increased access to high-speed, low-latency wireless internet connectivity. This has given rise to issues related to delivering extraordinary volumes of picture and video data to an increasingly sophisticated and demanding global audience. Consequently, developing algorithms that measure the quality of pictures and videos as perceived by humans has become increasingly critical, since these algorithms can be used to perceptually optimize trade-offs between quality and bandwidth consumption. VQA models have evolved from algorithms developed for generic 2D videos to specialized algorithms explicitly designed for on-demand video streaming, user-generated content (UGC), virtual and augmented reality (VR and AR), cloud gaming, high dynamic range (HDR), and high frame rate (HFR) scenarios. Along the way, we describe advances in algorithm design, beginning with traditional hand-crafted, feature-based methods and finishing with the deep-learning models that power today's most accurate VQA algorithms. We also discuss the evolution of subjective video quality databases containing videos and human-annotated quality scores, which are the necessary tools to create, test, compare, and benchmark VQA algorithms. Finally, we discuss emerging trends in VQA algorithm design and general perspectives on the evolution of Video Quality Assessment in the foreseeable future.
Image quality assessment for fake biometric detection: Application to Iris, fingerprint, and face recognition
Ensuring the actual presence of a real, legitimate trait, in contrast to a fake, self-manufactured synthetic or reconstructed sample, is a significant problem in biometric authentication that requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same one acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches, and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits. This work has been partially supported by projects Contexts (S2009/TIC-1485) from CAM, Bio-Shield (TEC2012-34881) from Spanish MECD, TABULA RASA (FP7-ICT-257289) and BEAT (FP7-SEC-284989) from EU, and Cátedra UAM-Telefónica.
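The feature-then-classify pipeline described in this abstract can be sketched as follows. The three no-reference features below (gradient energy, Laplacian variance, histogram entropy) are illustrative stand-ins for the paper's 25 image-quality measures, and the function name is hypothetical; this is a minimal sketch of the idea, not the published method.

```python
import numpy as np

def quality_features(img):
    """Extract a tiny subset of no-reference quality features from a
    grayscale image (float array with values in [0, 1]). Real vs. fake
    samples would then be separated by a standard classifier trained on
    such feature vectors."""
    # Gradient energy: a simple sharpness proxy.
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    grad_energy = float(np.mean(gx**2) + np.mean(gy**2))
    # Laplacian variance: a common focus/blur measure.
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
           + img[1:-1, 2:] - 4 * img[1:-1, 1:-1])
    lap_var = float(np.var(lap))
    # Histogram entropy: overall information content of the intensities.
    hist, _ = np.histogram(img, bins=64, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-np.sum(p * np.log2(p)))
    return np.array([grad_energy, lap_var, entropy])
```

The intuition matches the abstract: fake or reconstructed samples tend to exhibit degraded sharpness, focus, and information content relative to real captures, so even simple quality statistics carry discriminative signal.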
Subjective and objective quality assessment for advanced videos
The surge of video streaming services, particularly for high-motion content such as sporting events, necessitates advanced techniques to maintain video quality in the face of challenges such as capture artifacts and distortions introduced during coding and transmission. The advent of High Dynamic Range (HDR) content, offering a broader and more accurate representation of brightness and color, poses additional complexities due to increased data volume. The critical need for robust Video Quality Assessment (VQA) models arises from these challenges. To meet this need, we conducted three substantial subjective quality studies and constructed corresponding databases. The Laboratory for Image and Video Engineering (LIVE) Livestream Database comprises 315 videos of 45 source sequences from 33 original contents, impaired by six types of distortions. This database facilitated the gathering of over 12,000 human opinions from 40 subjects. The LIVE HDR Database, the first of its kind dedicated to HDR10 videos, includes 310 videos from 31 distinct source sequences, processed with ten different compression and resolution combinations. This resource was instrumental in amassing over 20,000 human quality judgments under two different illumination conditions. An additional LIVE HDR AQ Database was developed with 400 videos from 40 unique source sequences. These videos were processed using varied compression and resolution combinations and AQ-mode settings, to study the effects of adaptive quantization (AQ) and rate-distortion optimization techniques on HDR video perceptual quality. Building on these invaluable databases, we developed two innovative objective quality models: HDRMAX and HDRGREED. HDRMAX, a pioneering framework designed to create HDR quality-sensitive features, augments the widely deployed Video Multimethod Assessment Fusion (VMAF) model, yielding significantly improved performance on both HDR and SDR videos.
HDRGREED, a novel model leveraging localized histogram equalization and Difference of Gaussians filters, employs the Generalized Gaussian Distribution to model the bandpass responses and measure the entropy variations between reference and distorted videos. This model is particularly sensitive to banding and blocking artifacts introduced by inappropriate AQ settings. In conclusion, the comprehensive subjective quality studies and databases, along with the state-of-the-art objective quality models HDRMAX and HDRGREED, significantly contribute to the advancement of future VQA models. These tools cater specifically to challenges posed by live streaming and HDR content, providing critical resources for the development, testing, and comparison of future VQA models. These databases, publicly available for research purposes, and the innovative models offer valuable insights to improve and control the perceptual quality of streamed videos. Electrical and Computer Engineering.
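The GREED-style entropy feature described above can be sketched in a few steps: a Difference-of-Gaussians bandpass, a moment-matched Generalized Gaussian fit to the bandpass coefficients, and the entropy gap between reference and distorted frames. This is a minimal single-scale, single-frame illustration under simplifying assumptions, not the published HDRGREED implementation; all function names are hypothetical.

```python
import math
import numpy as np

def dog_bandpass(frame, sigma=1.0, k=1.6):
    """Difference-of-Gaussians bandpass response via separable filtering."""
    def gauss1d(s):
        radius = int(3 * s)
        x = np.arange(-radius, radius + 1)
        g = np.exp(-x**2 / (2 * s * s))
        return g / g.sum()
    def smooth(img, s):
        g = gauss1d(s)
        out = np.apply_along_axis(lambda r: np.convolve(r, g, mode='same'), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, g, mode='same'), 0, out)
    return smooth(frame, sigma) - smooth(frame, k * sigma)

def ggd_entropy(x):
    """Fit a zero-mean Generalized Gaussian Distribution to the bandpass
    coefficients by moment matching and return its differential entropy
    (in nats)."""
    x = x.ravel()
    m1 = np.mean(np.abs(x))
    m2 = np.mean(x**2)
    rho = m2 / (m1**2 + 1e-12)
    # Invert rho(b) = Gamma(1/b) Gamma(3/b) / Gamma(2/b)^2 by grid search.
    betas = np.linspace(0.2, 4.0, 400)
    rhos = np.array([math.gamma(1/b) * math.gamma(3/b) / math.gamma(2/b)**2
                     for b in betas])
    beta = betas[np.argmin(np.abs(rhos - rho))]
    alpha = math.sqrt(m2 * math.gamma(1/beta) / math.gamma(3/beta))
    # Differential entropy of a GGD with shape beta and scale alpha.
    return 1/beta - math.log(beta / (2 * alpha * math.gamma(1/beta) + 1e-12))

def entropy_difference(ref_frame, dis_frame):
    """GREED-style quality feature: entropy gap between the reference and
    distorted bandpass responses (a larger gap suggests more visible
    distortion, e.g. banding from coarse quantization)."""
    return abs(ggd_entropy(dog_bandpass(ref_frame)) -
               ggd_entropy(dog_bandpass(dis_frame)))
```

The design choice mirrors the abstract: banding and blocking change the statistics of bandpass coefficients, so the entropy of the fitted GGD shifts between reference and distorted frames even when pixel-wise differences are small.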