DynaVSR: Dynamic Adaptive Blind Video Super-Resolution
Most conventional supervised super-resolution (SR) algorithms assume that
low-resolution (LR) data is obtained by downscaling high-resolution (HR) data
with a fixed known kernel, but such an assumption often does not hold in real
scenarios. Some recent blind SR algorithms have been proposed to estimate
different downscaling kernels for each input LR image. However, they suffer
from heavy computational overhead, making them infeasible for direct
application to videos. In this work, we present DynaVSR, a novel
meta-learning-based framework for real-world video SR that enables efficient
downscaling model estimation and adaptation to the current input. Specifically,
we train a multi-frame downscaling module with various types of synthetic blur
kernels, which is seamlessly combined with a video SR network for input-aware
adaptation. Experimental results show that DynaVSR consistently improves the
performance of state-of-the-art video SR models by a large margin, with an
order-of-magnitude faster inference time than existing blind SR approaches.
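The multi-frame downscaling idea above can be illustrated with a toy sketch: generate LR training frames from HR frames using randomly drawn synthetic blur kernels, so the unknown real-world kernel is covered by the training distribution. All function names here are hypothetical illustrations, not the paper's actual code.

```python
import numpy as np

def gaussian_kernel(size=13, sigma=1.6):
    """Isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def downscale(hr, kernel, scale=4):
    """Blur an HR frame with `kernel`, then subsample by `scale`."""
    pad = kernel.shape[0] // 2
    padded = np.pad(hr, pad, mode="edge")
    blurred = np.zeros_like(hr)
    h, w = hr.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = (padded[i:i + 2 * pad + 1,
                                    j:j + 2 * pad + 1] * kernel).sum()
    return blurred[::scale, ::scale]

rng = np.random.default_rng(0)
hr_frame = rng.random((64, 64))
# Each training sample can use a different synthetic sigma,
# mimicking the unknown real-world downscaling kernel.
lr_frame = downscale(hr_frame, gaussian_kernel(sigma=rng.uniform(0.8, 3.2)))
print(lr_frame.shape)  # (16, 16)
```

A learned downscaling module would replace the fixed Gaussian here; the point is only that varying the kernel per sample is what makes later input-aware adaptation possible.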
Scene-Adaptive Video Frame Interpolation via Meta-Learning
Video frame interpolation is a challenging problem because there are
different scenarios for each video depending on the variety of foreground and
background motion, frame rate, and occlusion. It is therefore difficult for a
single network with fixed parameters to generalize across different videos.
Ideally, one could have a different network for each scenario, but this is
computationally infeasible for practical applications. In this work, we propose
to adapt the model to each video by making use of additional information that
is readily available at test time and yet has not been exploited in previous
works. We first show the benefits of test-time adaptation through simple
fine-tuning of a network, then we greatly improve its efficiency by
incorporating meta-learning. We obtain significant performance gains with only
a single gradient update without any additional parameters. Finally, we show
that our meta-learning framework can be easily applied to any video frame
interpolation network and consistently improves its performance on multiple
benchmark datasets.
Comment: CVPR 2020
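The single-gradient-update adaptation described above can be sketched on a toy model: a meta-learned initialization is fine-tuned at test time on a self-supervised pair built from the input itself (for interpolation, predicting a frame that is already available from its neighbours). The linear "interpolator" and all names below are hypothetical stand-ins for a real network.

```python
import numpy as np

def loss_and_grad(w, x, y):
    """Squared error of a toy linear 'interpolator' y_hat = w * x,
    with its gradient in w."""
    err = w * x - y
    return np.mean(err**2), np.mean(2 * err * x)

rng = np.random.default_rng(0)
w = 0.1        # meta-learned initialization (assumed given)
alpha = 0.05   # inner-loop step size

# Test-time adaptation: the input frames themselves provide a
# self-supervised pair; here the true blending weight is 0.5.
x_support = rng.random(8)
y_support = 0.5 * x_support
loss_before, g = loss_and_grad(w, x_support, y_support)
w_adapted = w - alpha * g          # a single gradient update
loss_after, _ = loss_and_grad(w_adapted, x_support, y_support)
print(loss_after < loss_before)    # True: one step already helps
```

Meta-learning chooses the initialization `w` precisely so that this one inner step yields a large improvement, which is why no extra parameters are needed at test time.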
The value of political connections : evidence from Korean chaebols
This thesis examines the value of political connections for business groups by constructing a unique dataset that allows us to identify the form and extent of the connections. Results show that membership in family-controlled business groups (South Korean chaebols) plays a key role in determining the value of political connections. Politically connected chaebol firms experience substantially larger price increases following the establishment of the connection than other firms, but the reverse is found for other (non-family-controlled) connected business groups.
Propping and pyramids in family business groups: Evidence from Korean chaebols
Using a sample of Korean family business groups (chaebols) during the 2006-2011 period, I study the mechanism of propping through related-party transactions following the 2008 financial crisis, and its effects on firm performance and investment. I find that chaebols use intra-group transactions to mitigate the negative effects of the crisis. Using a discrete classification of firms into four pyramidal layers, I find that chaebol families use related-party sales to prop up firms in the third layer following the crisis, perhaps at the expense of central firms. In doing so, controlling chaebol families transfer the cost of propping to outside minority shareholders.
Channel Attention Is All You Need for Video Frame Interpolation
Prevailing video frame interpolation techniques rely heavily on optical flow estimation, which requires additional model complexity and computational cost and is susceptible to error propagation in challenging scenarios with large motion and heavy occlusion. To alleviate these limitations, we propose a simple but effective deep neural network for video frame interpolation that is end-to-end trainable and free of a motion estimation component. Our algorithm employs a feature reshaping operation, referred to as PixelShuffle, together with channel attention, which replaces the optical flow computation module. The main idea behind the design is to distribute the information in a feature map across multiple channels and extract motion information by attending to the channels for pixel-level frame synthesis. The model given by this principle turns out to be effective in the presence of challenging motion and occlusion. We construct a comprehensive evaluation benchmark and demonstrate that the proposed approach achieves outstanding performance compared to existing models that include an optical flow component.
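The two operations named above can be sketched in NumPy: a pixel-shuffle (space-to-depth) step that folds spatial blocks into channels, followed by a channel gate based on a global pooled statistic. The gating here is a minimal fixed stand-in for a learned attention module; all names are illustrative, not the paper's code.

```python
import numpy as np

def pixel_shuffle_down(x, r=2):
    """Space-to-depth: fold r x r spatial blocks into channels.
    x: (C, H, W) -> (C*r*r, H//r, W//r)."""
    c, h, w = x.shape
    x = x.reshape(c, h // r, r, w // r, r)
    return x.transpose(0, 2, 4, 1, 3).reshape(c * r * r, h // r, w // r)

def channel_attention(x):
    """Gate each channel by a sigmoid of its global average
    (a stand-in for a learned squeeze-and-excite style module)."""
    pooled = x.mean(axis=(1, 2))              # (C,)
    gate = 1.0 / (1.0 + np.exp(-pooled))      # sigmoid in (0, 1)
    return x * gate[:, None, None]

rng = np.random.default_rng(0)
feat = rng.random((3, 8, 8))           # a toy (C, H, W) feature map
folded = pixel_shuffle_down(feat, r=2)
attended = channel_attention(folded)
print(folded.shape, attended.shape)    # (12, 4, 4) (12, 4, 4)
```

After folding, motion cues that were spread over space live in separate channels, so re-weighting channels is a cheap proxy for motion-aware selection without computing optical flow.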
AIM 2019 Challenge on Video Temporal Super-Resolution: Methods and Results
Videos contain various types and strengths of motion that may look unnaturally discontinuous in time when the recorded frame rate is low. This paper reviews the first AIM challenge on video temporal super-resolution (frame interpolation), with a focus on the proposed solutions and results. From low-frame-rate (15 fps) video sequences, challenge participants were asked to submit higher-frame-rate (60 fps) video sequences by estimating temporally intermediate frames. We employ the REDS VTSR dataset, derived from diverse videos captured with a hand-held camera, for training and evaluation. The competition had 62 registered participants, and a total of 8 teams competed in the final testing phase. The winning methods achieve the state of the art in video temporal super-resolution.
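The 15 fps to 60 fps task above amounts to synthesizing three intermediate frames between each consecutive pair. A naive linear-blending baseline (not a challenge solution, just the frame-count arithmetic) can be sketched as:

```python
import numpy as np

def temporal_upsample(frames, factor=4):
    """Insert factor-1 linearly blended frames between each pair.
    frames: (N, H, W) at 15 fps -> (factor*(N-1)+1, H, W) at 60 fps.
    Linear blending is a naive baseline; challenge entries use
    learned interpolation instead."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        for k in range(factor):
            t = k / factor
            out.append((1 - t) * a + t * b)
    out.append(frames[-1])
    return np.stack(out)

rng = np.random.default_rng(0)
clip_15fps = rng.random((15, 4, 4))         # one second at 15 fps
clip_60fps = temporal_upsample(clip_15fps)  # 4 * 14 + 1 = 57 frames
print(clip_60fps.shape)  # (57, 4, 4)
```

No frames are extrapolated past the last input, so one second of 15 fps video yields 57 rather than 60 output frames; evaluation then compares the estimated intermediate frames against the held-out ground truth.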