20 research outputs found
Motion-Compensated Coding and Frame-Rate Up-Conversion: Models and Analysis
Block-based motion estimation (ME) and compensation (MC) techniques are
widely used in modern video processing algorithms and compression systems. The
great variety of video applications and devices results in numerous compression
specifications. Specifically, there is a diversity of frame-rates and
bit-rates. In this paper, we study the effect of frame-rate and compression
bit-rate on block-based ME and MC as commonly utilized in inter-frame coding
and frame-rate up-conversion (FRUC). This joint examination yields a
comprehensive foundation for comparing MC procedures in coding and FRUC. First,
the video signal is modeled as a noisy translational motion of an image. Then,
we theoretically model the motion-compensated prediction of available and
absent frames, as in coding and FRUC applications, respectively. The theoretical
MC-prediction error is further analyzed and its autocorrelation function is
calculated for coding and FRUC applications. We show a linear relation between
the variance of the MC-prediction error and the temporal distance. While the
relevant distance in MC-coding is between the predicted and reference frames,
MC-FRUC is affected by the distance between the available frames used for the
interpolation. Moreover, this dependency on temporal distance implies an inverse
effect of the frame-rate. The FRUC performance analysis considers the prediction
error variance, since it equals the mean-squared error of the interpolation.
However, MC-coding analysis requires the entire autocorrelation function of the
error; hence, analytic simplicity is beneficial. Therefore, we propose two
constructions of a separable autocorrelation function for prediction error in
MC-coding. We conclude by comparing our estimates with experimental results.
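The block-based ME and MC procedure underlying this analysis can be illustrated with a minimal full-search sketch in plain NumPy. The block size, search range, and SAD matching criterion below are illustrative choices for a generic block matcher, not the paper's specific model:

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Full-search block-based motion estimation with a SAD criterion.

    For each block of `cur`, find the displacement (dy, dx) within
    +/- `search` pixels that minimizes the sum of absolute differences
    against `ref`. Returns the motion field and the motion-compensated
    prediction of `cur` from `ref`.
    """
    H, W = cur.shape
    mvs = np.zeros((H // block, W // block, 2), dtype=int)
    pred = np.zeros_like(cur)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            blk = cur[by:by + block, bx:bx + block]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue  # candidate block falls outside the frame
                    cand = ref[y:y + block, x:x + block]
                    sad = np.abs(blk - cand).sum()
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            dy, dx = best_mv
            mvs[by // block, bx // block] = best_mv
            pred[by:by + block, bx:bx + block] = ref[by + dy:by + dy + block,
                                                     bx + dx:bx + dx + block]
    return mvs, pred
```

In this sketch, the residual `cur - pred` plays the role of the MC-prediction error whose variance and autocorrelation the abstract analyzes.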
Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks
We study the transfer learning process between two linear regression
problems. An important and timely special case is when the regressors are
overparameterized and perfectly interpolate their training data. We examine a
parameter transfer mechanism whereby a subset of the parameters of the target
task solution is constrained to the values learned for a related source task.
We analytically characterize the generalization error of the target task in
terms of the salient factors in the transfer learning architecture, i.e., the
number of examples available, the number of (free) parameters in each of the
tasks, the number of parameters transferred from the source to target task, and
the correlation between the two tasks. Our non-asymptotic analysis shows that
the generalization error of the target task follows a two-dimensional double
descent trend (with respect to the number of free parameters in each of the
tasks) that is controlled by the transfer learning factors. Our analysis points
to specific cases where the transfer of parameters is beneficial. Specifically,
we show that transferring a specific set of parameters that generalizes well on
the respective part of the source task can relax the level of task correlation
required for successful transfer learning. Moreover,
we show that the usefulness of a transfer learning setting is fragile and
depends on a delicate interplay among the set of transferred parameters, the
relation between the tasks, and the true solution.
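The parameter-transfer mechanism described above can be sketched for a pair of linear regression tasks using minimum-norm least squares. This is a hypothetical setup: the `transfer_fit` helper and the choice of freezing the first k coordinates are illustrative, not the paper's exact construction:

```python
import numpy as np

def min_norm_fit(A, y):
    # Minimum-L2-norm least-squares solution; it interpolates the
    # training data whenever the system is underdetermined, i.e. in
    # the overparameterized regime.
    return np.linalg.pinv(A) @ y

def transfer_fit(X_src, y_src, X_tgt, y_tgt, k):
    """Solve the target task with its first k parameters frozen to the
    values learned on the source task; the remaining parameters stay
    free and are fit by minimum-norm least squares."""
    beta_src = min_norm_fit(X_src, y_src)
    frozen = beta_src[:k]
    # The free parameters only need to explain what the transferred
    # (frozen) part leaves unexplained on the target training data.
    residual = y_tgt - X_tgt[:, :k] @ frozen
    free = min_norm_fit(X_tgt[:, k:], residual)
    return np.concatenate([frozen, free])
```

Sweeping k and the number of free parameters while measuring test error on fresh target data is the kind of experiment that traces the two-dimensional trend the abstract studies; comparing k = 0 (no transfer) against k > 0 for tasks with correlated true parameter vectors probes when transfer helps.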