GraFIX: a semiautomatic approach for parsing low- and high-quality eye-tracking data
Fixation durations (FD) have been widely used as a measure of information processing and attention. However, issues such as data quality can seriously affect the accuracy of fixation detection methods and, thus, the validity of our results (Holmqvist, Nyström, & Mulvey, 2012). This is crucial when studying special populations such as infants, where common testing issues (e.g., a high degree of movement, unreliable eye detection, low spatial precision) result in highly variable data quality and render existing FD detection approaches either highly time-consuming (hand-coding) or imprecise (automatic detection). To address this problem, we present GraFIX, a novel semiautomatic method consisting of a two-step process in which eye-tracking data are first parsed by velocity-based algorithms whose input parameters are adapted by the user, and then manipulated through a graphical interface that allows accurate and rapid adjustment of the algorithms' output. The present algorithms (1) smooth the raw data, (2) interpolate missing data points, and (3) apply a number of criteria to automatically evaluate and remove artifactual fixations. The input parameters (e.g., velocity threshold, interpolation latency) can easily be adapted manually to fit each participant. Furthermore, the application includes visualization tools that facilitate the manual coding of fixations. We assessed this method by performing an intercoder reliability analysis in two groups of infants presenting low- and high-quality data and compared it with previous methods. Results revealed that our two-step approach with adaptable FD detection criteria yields more reliable and stable measures for both low- and high-quality data.
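The first, velocity-based parsing step described above can be illustrated with a minimal sketch of a classic velocity-threshold (I-VT-style) fixation parser. The threshold, sampling rate, and minimum-duration values here are illustrative assumptions, not GraFIX's actual defaults, and the smoothing and gap-interpolation steps the abstract mentions are omitted for brevity.

```python
def parse_fixations(x, y, hz=60.0, vel_thresh=35.0, min_dur=0.1):
    """Return (start, end) sample-index pairs of candidate fixations.

    x, y: gaze coordinates in degrees of visual angle; None marks a
    missing sample. hz: sampling rate (Hz). vel_thresh: velocity
    threshold (deg/s). min_dur: minimum fixation duration (s).
    """
    dt = 1.0 / hz
    fixations, start = [], None
    for i in range(1, len(x)):
        if x[i] is None or x[i - 1] is None:
            slow = False  # missing data breaks the fixation
        else:
            vel = ((x[i] - x[i - 1]) ** 2
                   + (y[i] - y[i - 1]) ** 2) ** 0.5 / dt
            slow = vel < vel_thresh
        if slow and start is None:
            start = i - 1                 # fixation candidate begins
        elif not slow and start is not None:
            if (i - start) * dt >= min_dur:
                fixations.append((start, i))
            start = None                  # saccade or gap ends it
    if start is not None and (len(x) - start) * dt >= min_dur:
        fixations.append((start, len(x)))
    return fixations
```

In a GraFIX-like workflow the user would then inspect and hand-correct these candidate fixations in the graphical interface, tuning `vel_thresh` per participant.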
A study of data coding technology developments in the 1980-1985 time frame, volume 2
The source parameters of digitized analog data are discussed. Different data compression schemes are outlined, and analyses of their implementation are presented. Finally, bandwidth compression techniques are given for video signals.
Learned Quality Enhancement via Multi-Frame Priors for HEVC Compliant Low-Delay Applications
Networked video applications, e.g., video conferencing, often suffer from
poor visual quality due to unexpected network fluctuation and limited
bandwidth. In this paper, we have developed a Quality Enhancement Network
(QENet) to reduce the video compression artifacts, leveraging the spatial and
temporal priors generated by respective multi-scale convolutions spatially and
warped temporal predictions in a recurrent fashion temporally. We have
integrated this QENet as a standard-alone post-processing subsystem to the High
Efficiency Video Coding (HEVC) compliant decoder. Experimental results show
that our QENet demonstrates the state-of-the-art performance against default
in-loop filters in HEVC and other deep learning based methods with noticeable
objective gains in Peak-Signal-to-Noise Ratio (PSNR) and subjective gains
visually
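The objective gains above are reported in PSNR; as a reminder, PSNR is the standard log-scale ratio of peak signal power to mean squared error. The sketch below is the textbook definition for 8-bit samples, not code from the paper, and treats a frame as a flat pixel sequence for simplicity.

```python
import math

def psnr(ref, deg, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between two equal-length
    pixel sequences, e.g. a reference frame and a decoded frame."""
    if len(ref) != len(deg):
        raise ValueError("frames must have the same number of pixels")
    mse = sum((a - b) ** 2 for a, b in zip(ref, deg)) / len(ref)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(max_val ** 2 / mse)
```

A post-processing network such as QENet is judged by the PSNR of its output against the uncompressed reference, relative to the decoder's default in-loop-filtered output.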
Mitigation of H.264 and H.265 Video Compression for Reliable PRNU Estimation
The photo-response non-uniformity (PRNU) is a distinctive image sensor
characteristic, and an imaging device inadvertently introduces its sensor's
PRNU into all media it captures. The PRNU can therefore be regarded as a
camera fingerprint and used for source attribution. The imaging pipeline in
a camera, however, involves various processing steps that are detrimental
to PRNU estimation. In the context of photographic images, these challenges
have been successfully addressed, and the method for estimating a sensor's
PRNU pattern is well established. However, various additional challenges
related to the generation of videos remain largely untackled. With this
perspective, this work introduces methods to mitigate the disruptive
effects of the widely deployed H.264 and H.265 video compression standards
on PRNU estimation. Our approach involves an intervention in the decoding
process to eliminate a filtering procedure applied at the decoder to reduce
blockiness. It also utilizes decoding parameters to develop a weighting
scheme that adjusts the contribution of video frames to the PRNU estimation
process at the macroblock level. Results obtained on videos captured by 28
cameras show that our approach increases the PRNU matching metric to more
than five times that of the conventional estimation method tailored for
photos.
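The weighting scheme described above can be pictured against the standard maximum-likelihood PRNU estimator, which averages per-frame noise residuals scaled by frame content. The sketch below is a simplified 1-D version with per-frame weights; the weight values and the 1-D layout are illustrative assumptions (the paper applies weights at the macroblock level), not the authors' implementation.

```python
def weighted_prnu(frames, residuals, weights):
    """Weighted PRNU estimate over a sequence of frames (1-D sketch).

    frames: list of pixel sequences I_i; residuals: corresponding noise
    residuals W_i (ideally W_i ~ K * I_i, K being the PRNU factor);
    weights: per-frame reliability weights, e.g. derived from decoding
    parameters. Returns the estimated PRNU factor per pixel position:
        K[k] = sum_i(w_i * W_i[k] * I_i[k]) / sum_i(w_i * I_i[k]^2)
    """
    n = len(frames[0])
    num = [0.0] * n
    den = [0.0] * n
    for I, W, w in zip(frames, residuals, weights):
        for k in range(n):
            num[k] += w * W[k] * I[k]
            den[k] += w * I[k] * I[k]
    return [num[k] / den[k] if den[k] else 0.0 for k in range(n)]
```

Setting a weight near zero effectively discards heavily quantized regions, which is the intuition behind down-weighting unreliable macroblocks.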
- …