9 research outputs found

    Learning to Forget for Meta-Learning

    Full text link
    Few-shot learning is a challenging problem where the goal is to achieve generalization from only a few examples. Model-agnostic meta-learning (MAML) tackles the problem by formulating prior knowledge as a common initialization across tasks, which is then used to quickly adapt to unseen tasks. However, forcibly sharing an initialization can lead to conflicts among tasks and a compromised (undesired by individual tasks) location on the optimization landscape, thereby hindering task adaptation. Further, we observe that the degree of conflict differs not only among tasks but also among layers of a neural network. Thus, we propose task-and-layer-wise attenuation of the compromised initialization to reduce its influence. As the attenuation dynamically controls (or selectively forgets) the influence of prior knowledge for a given task and each layer, we name our method L2F (Learn to Forget). The experimental results demonstrate that the proposed method provides faster adaptation and greatly improves performance. Furthermore, L2F can easily be applied to other state-of-the-art MAML-based frameworks and improves them, illustrating its simplicity and generalizability. Comment: CVPR 2020. Code at https://github.com/baiksung/L2
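    The attenuation idea can be illustrated with a short sketch. Below is a minimal, assumed PyTorch rendering of task-and-layer-wise attenuation before a MAML inner loop: a small attenuator network (its architecture, the per-layer gradient statistic, and the functional `forward_fn` interface are assumptions of this sketch, not the authors' released implementation at the repository above) maps task-gradient statistics to a per-layer factor gamma in (0, 1) that scales the shared initialization.

```python
import torch
import torch.nn as nn

def l2f_inner_adapt(model_params, attenuator, loss_fn, support_x, support_y,
                    forward_fn, inner_lr=0.01, inner_steps=1):
    """Sketch: task-and-layer-wise attenuation before MAML inner-loop adaptation.

    model_params : list of shared-initialization tensors (one per layer, requires_grad=True).
    attenuator   : small network mapping per-layer gradient statistics to per-layer
                   attenuation logits (hypothetical design for this sketch).
    forward_fn   : functional forward pass taking (params, x) -> predictions.
    """
    # 1. Probe the task: gradient of the support loss w.r.t. the shared initialization.
    loss = loss_fn(forward_fn(model_params, support_x), support_y)
    grads = torch.autograd.grad(loss, model_params, create_graph=True)

    # 2. Layer-wise attenuation: compress each layer's gradient to a scalar statistic
    #    (here its mean absolute value) and let the attenuator decide how much of the
    #    shared initialization to "forget" for this task and layer.
    grad_stats = torch.stack([g.abs().mean() for g in grads])   # [num_layers]
    gamma = torch.sigmoid(attenuator(grad_stats))                # [num_layers], in (0, 1)
    params = [gamma[i] * p for i, p in enumerate(model_params)]

    # 3. Standard MAML inner loop on the attenuated initialization.
    for _ in range(inner_steps):
        loss = loss_fn(forward_fn(params, support_x), support_y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]
    return params

# The attenuator itself can be as small as a two-layer MLP over per-layer gradient
# statistics (again an assumption of this sketch, not the paper's exact design).
num_layers = 4
attenuator = nn.Sequential(nn.Linear(num_layers, 32), nn.ReLU(), nn.Linear(32, num_layers))
```

    Because gamma is produced per task and per layer, the shared initialization is damped more strongly for layers where the current task conflicts with the prior knowledge.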

    Scene-Adaptive Video Frame Interpolation via Meta-Learning

    Full text link
    Video frame interpolation is a challenging problem because each video presents a different scenario, depending on the variety of foreground and background motion, the frame rate, and occlusion. It is therefore difficult for a single network with fixed parameters to generalize across different videos. Ideally, one would have a different network for each scenario, but this is computationally infeasible for practical applications. In this work, we propose to adapt the model to each video by making use of additional information that is readily available at test time and yet has not been exploited in previous works. We first show the benefits of `test-time adaptation' through simple fine-tuning of a network, and then we greatly improve its efficiency by incorporating meta-learning. We obtain significant performance gains with only a single gradient update and without any additional parameters. Finally, we show that our meta-learning framework can easily be applied to any video frame interpolation network and consistently improves its performance on multiple benchmark datasets. Comment: CVPR 202
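    To make the test-time adaptation step concrete, here is a minimal PyTorch sketch under assumed interfaces (a `model(f0, f1)` that predicts the middle frame, and an L1 reconstruction loss). It is not the paper's training code; it only shows the single-gradient-update idea applied at inference, using a frame triplet that the test video already provides as self-supervision.

```python
import copy
import torch
import torch.nn.functional as F

def scene_adaptive_interpolate(model, frame_prev, frame_mid, frame_next,
                               frame_a, frame_b, inner_lr=1e-4):
    """Sketch of test-time adaptation for video frame interpolation.

    (frame_prev, frame_mid, frame_next): a triplet already available in the input
    video, so frame_mid can serve as ground truth at test time.
    (frame_a, frame_b): the pair we actually want to interpolate between.
    `model(f0, f1)` is assumed to return the estimated middle frame.
    """
    adapted = copy.deepcopy(model)            # keep the meta-learned weights intact
    optimizer = torch.optim.Adam(adapted.parameters(), lr=inner_lr)

    # One gradient update on frames the test video already provides.
    pred_mid = adapted(frame_prev, frame_next)
    loss = F.l1_loss(pred_mid, frame_mid)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Interpolate the unseen in-between frame with the scene-adapted weights.
    with torch.no_grad():
        return adapted(frame_a, frame_b)
```

    Meta-learning makes this single update effective: the meta-trained initialization is chosen so that one step on the available triplet already specializes the network to the scene.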

    Batch Normalization Tells You Which Filter is Important

    Full text link
    The goal of filter pruning is to search for unimportant filters to remove in order to make convolutional neural networks (CNNs) efficient without sacrificing performance in the process. The challenge lies in finding information that can help determine how important or relevant each filter is with respect to the final output of the network. In this work, we share our observation that the batch normalization (BN) parameters of pre-trained CNNs can be used to estimate the feature distribution of activation outputs without processing the training data. Based on this observation, we propose a simple yet effective filter pruning method that evaluates the importance of each filter from the BN parameters of pre-trained CNNs. The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method achieves outstanding performance, with and without fine-tuning, in terms of the trade-off between the accuracy drop and the reduction in computational complexity and the number of parameters of the pruned networks.
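    As an illustration of how BN parameters alone can score filters, the sketch below uses the fact that a filter's post-BN activation is approximately Gaussian with mean beta and standard deviation |gamma|. The expected-ReLU-response score is a proxy criterion chosen for this sketch; the paper's exact scoring function may differ.

```python
import torch
import torch.nn as nn

def bn_filter_importance(bn: nn.BatchNorm2d) -> torch.Tensor:
    """Score each filter from its BN parameters alone (no training data needed).

    The post-BN activation of filter k is approximately N(beta_k, gamma_k^2), so
    gamma and beta describe the feature distribution each filter feeds forward.
    As a simple proxy score (an assumption of this sketch) we use the expected
    magnitude of that Gaussian after ReLU: filters whose post-BN distribution sits
    almost entirely below zero contribute little to the next layer.
    """
    gamma = bn.weight.detach()
    beta = bn.bias.detach()
    normal = torch.distributions.Normal(0.0, 1.0)
    z = beta / (gamma.abs() + 1e-12)
    # E[ReLU(X)] for X ~ N(beta, gamma^2) = beta * Phi(z) + |gamma| * phi(z)
    return beta * normal.cdf(z) + gamma.abs() * torch.exp(normal.log_prob(z))

def prune_mask(bn: nn.BatchNorm2d, keep_ratio: float = 0.5) -> torch.Tensor:
    """Boolean mask keeping the top `keep_ratio` filters by the BN-based score."""
    scores = bn_filter_importance(bn)
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = torch.topk(scores, k).values.min()
    return scores >= threshold
```

    The appeal of such a criterion is that it is entirely data-free: only the stored BN statistics of the pre-trained network are consulted, so pruning decisions can be made without a forward pass over training data.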

    DAQ: Channel-Wise Distribution-Aware Quantization for Deep Image Super-Resolution Networks

    Full text link
    Quantizing deep convolutional neural networks for image super-resolution substantially reduces their computational costs. However, existing works either suffer from a severe performance drop at ultra-low precision of 4 or fewer bits, or require a heavy fine-tuning process to recover the performance. We attribute this vulnerability to low precision to two statistical observations about feature map values. First, the distribution of feature map values varies significantly per channel and per input image. Second, feature maps contain outliers that can dominate the quantization error. Based on these observations, we propose a novel distribution-aware quantization scheme (DAQ) that facilitates accurate, training-free quantization at ultra-low precision. A simple function of DAQ determines the dynamic range of feature maps and weights with low computational burden. Furthermore, our method enables mixed-precision quantization by calculating the relative sensitivity of each channel, without any training process involved. Quantization-aware training is nonetheless also applicable for an auxiliary performance gain. Our new method outperforms recent training-free and even training-based quantization methods applied to state-of-the-art image super-resolution networks at ultra-low precision. Comment: WACV 202
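    A rough, training-free rendering of channel-wise distribution-aware quantization is sketched below. The mean ± k·std range function and the constant k are assumptions made for illustration rather than the published DAQ formulation; the point is that each channel gets its own clipping range, so per-channel distribution differences are respected and outliers do not dominate the quantization error.

```python
import torch

def channelwise_distribution_aware_quantize(x: torch.Tensor, bits: int = 4, k: float = 3.0):
    """Channel-wise, distribution-aware uniform quantization of a feature map.

    x: activations of shape [N, C, H, W]. Minimal sketch: each channel's dynamic
    range is set from its own mean and standard deviation (mean +/- k*std); the
    exact range function and k are assumptions of this sketch, not DAQ's
    published formulation.
    """
    n, c, h, w = x.shape
    per_channel = x.permute(1, 0, 2, 3).reshape(c, -1)        # [C, N*H*W]
    mean = per_channel.mean(dim=1, keepdim=True)
    std = per_channel.std(dim=1, keepdim=True)

    # Clip outliers outside mean +/- k*std so they do not dominate the error.
    lo, hi = mean - k * std, mean + k * std
    levels = 2 ** bits - 1
    scale = (hi - lo).clamp(min=1e-8) / levels

    q = ((per_channel.clamp(lo, hi) - lo) / scale).round()    # integer levels
    deq = q * scale + lo                                       # dequantized values
    return deq.reshape(c, n, h, w).permute(1, 0, 2, 3)
```

    Because the range is computed from the current feature map itself, the scheme adapts per input image as well as per channel, which is what makes a training-free setup viable at very low bit-widths.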

    A Polysomnography Study of Snoring and Obstructive Sleep Apnea in Relation to Chronic Bronchitis

    No full text
    Background and Objective: Snoring has been reported to be associated with chronic bronchitis, but few epidemiologic studies have examined this association, and further studies including polysomnographic evaluation are warranted. In a polysomnography study, we evaluated the associations of snoring, obstructive sleep apnea, and systemic inflammation with chronic bronchitis among 442 participants from a population-based cohort. Methods: At baseline, we assessed participants’ serum levels of C-reactive protein, a biomarker of systemic inflammation. Over a 5-year period, we conducted overnight polysomnography and identified any new cases of chronic bronchitis. Results: After taking into account age, smoking, and other potential risk factors, the multivariate odds ratio for chronic bronchitis was 2.9 (95% CI, 1.3–6.4) for snorers with a cumulative duration of snoring episodes ≥ 1 hour compared with those snoring < 1 hour. This association did not change after further adjustment for the presence of apnea. Obstructive sleep apnea had no association with chronic bronchitis. A higher level of serum C-reactive protein was associated with chronic bronchitis (p for trend < 0.05). In a joint analysis of snoring and C-reactive protein, a longer cumulative duration of snoring episodes accompanied by systemic inflammation was associated with a 10-fold (95% CI, 2.9–37.4) increase in the multivariate odds of chronic bronchitis. Conclusions: This polysomnography study provides additional data supporting the hypothesis that snoring is associated with chronic bronchitis, implying that snoring-related local and systemic inflammation may play a role in the development of chronic bronchitis.

    NTIRE 2019 Challenge on Video Deblurring: Methods and Results

    No full text
    This paper reviews the first NTIRE challenge on video deblurring (restoration of rich details and high-frequency components from blurred video frames) with a focus on the proposed solutions and results. A new REalistic and Diverse Scenes dataset (REDS) was employed. The challenge was divided into 2 tracks: Track 1 employed dynamic motion blurs, while Track 2 had additional MPEG video compression artifacts. The two tracks had 109 and 93 registered participants, respectively, and a total of 13 teams competed in the final testing phase. Their results gauge the state of the art in the video deblurring problem.

    NTIRE 2019 Challenge on Video Super-Resolution: Methods and Results

    No full text
    This paper reviews the first NTIRE challenge on video super-resolution (restoration of rich details in low-resolution video frames) with a focus on the proposed solutions and results. A new REalistic and Diverse Scenes dataset (REDS) was employed. The challenge was divided into 2 tracks: Track 1 employed the standard bicubic downscaling setup, while Track 2 had realistic dynamic motion blurs. The two tracks had 124 and 104 registered participants, respectively, and a total of 14 teams competed in the final testing phase. Their results gauge the state of the art in video super-resolution.