339 research outputs found

    A Convex Model for Edge-Histogram Specification with Applications to Edge-preserving Smoothing

    The goal of edge-histogram specification is to find an image whose edge image has a histogram that matches a given edge-histogram as closely as possible. Mignotte has proposed a non-convex model for the problem [M. Mignotte. An energy-based model for the image edge-histogram specification problem. IEEE Transactions on Image Processing, 21(1):379--386, 2012]. In his work, edge magnitudes of an input image are first modified by histogram specification to match the given edge-histogram. Then, a non-convex model is minimized to find an output image whose edge-histogram matches the modified edge-histogram. The non-convexity of the model hinders computation and the inclusion of useful constraints such as the dynamic range constraint. In this paper, instead of considering edge magnitudes, we directly consider the image gradients and propose a convex model based on them. Furthermore, we include additional constraints in our model based on different applications. The convexity of our model allows us to compute the output image efficiently using either the Alternating Direction Method of Multipliers or the Fast Iterative Shrinkage-Thresholding Algorithm. We consider several applications in edge-preserving smoothing, including image abstraction, edge extraction, details exaggeration, and document scan-through removal. Numerical results are given to illustrate that our method successfully produces decent results efficiently.
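The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) mentioned above can be illustrated on a toy convex problem. The sketch below solves min_x 0.5*||x - b||^2 + lam*||x||_1, a much simpler objective than the paper's gradient-histogram model; the function names and problem are purely illustrative, not the authors' formulation.

```python
# Minimal FISTA sketch on a toy l1-regularized least-squares problem
# (illustrative only; the paper's actual model involves image gradients).
def soft_threshold(v, t):
    # proximal operator of t*||.||_1, applied elementwise
    return [max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

def fista(b, lam, steps=100):
    x = [0.0] * len(b)
    y = list(x)
    t = 1.0
    for _ in range(steps):
        # gradient of the smooth part at y is (y - b); step size 1 (L = 1)
        grad = [yi - bi for yi, bi in zip(y, b)]
        x_new = soft_threshold([yi - gi for yi, gi in zip(y, grad)], lam)
        # Nesterov momentum update on the extrapolation point y
        t_new = (1 + (1 + 4 * t * t) ** 0.5) / 2
        y = [xn + ((t - 1) / t_new) * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x

print(fista([3.0, -0.5, 1.2], lam=1.0))
```

For this separable toy objective the minimizer is simply the soft-thresholding of b, which the iteration reaches immediately; on the paper's model the smooth term couples pixels through the gradient operator, so the iterations do real work.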

    Towards Robust Blind Face Restoration with Codebook Lookup Transformer

    Blind face restoration is a highly ill-posed problem that often requires auxiliary guidance to 1) improve the mapping from degraded inputs to desired outputs, or 2) complement high-quality details lost in the inputs. In this paper, we demonstrate that a learned discrete codebook prior in a small proxy space largely reduces the uncertainty and ambiguity of the restoration mapping by casting blind face restoration as a code prediction task, while providing rich visual atoms for generating high-quality faces. Under this paradigm, we propose a Transformer-based prediction network, named CodeFormer, to model the global composition and context of the low-quality faces for code prediction, enabling the discovery of natural faces that closely approximate the target faces even when the inputs are severely degraded. To enhance adaptiveness to different degradations, we also propose a controllable feature transformation module that allows a flexible trade-off between fidelity and quality. Thanks to the expressive codebook prior and global modeling, CodeFormer outperforms the state of the art in both quality and fidelity, showing superior robustness to degradation. Extensive experimental results on synthetic and real-world datasets verify the effectiveness of our method.
    Comment: Accepted by NeurIPS 2022. Code: https://github.com/sczhou/CodeForme
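The discrete codebook prior above turns restoration into predicting indices into a finite set of learned feature vectors. A minimal sketch of that lookup step, assuming a tiny hand-written codebook (the entries and dimensions are illustrative, not CodeFormer's learned codebook):

```python
# Toy nearest-neighbour codebook lookup: map a continuous feature vector
# to the index of its closest code vector (squared Euclidean distance).
def quantize(feature, codebook):
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(codebook)), key=lambda k: d2(feature, codebook[k]))

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
print(quantize([0.9, 0.1], codebook))  # → 1
```

Because every output face is assembled from a small finite vocabulary of such codes, a severely degraded input can only map to one of a few plausible "visual atoms", which is the sense in which the codebook reduces the ambiguity of the restoration mapping.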

    Understanding Deformable Alignment in Video Super-Resolution

    Deformable convolution, originally proposed to adapt to geometric variations of objects, has recently shown compelling performance in aligning multiple frames and is increasingly adopted for video super-resolution. Despite its remarkable performance, its underlying mechanism for alignment remains unclear. In this study, we carefully investigate the relation between deformable alignment and the classic flow-based alignment. We show that deformable convolution can be decomposed into a combination of spatial warping and convolution. This decomposition reveals the commonality of deformable alignment and flow-based alignment in formulation, but with a key difference in their offset diversity. We further demonstrate through experiments that the increased diversity in deformable alignment yields better-aligned features, and hence significantly improves the quality of video super-resolution output. Based on our observations, we propose an offset-fidelity loss that guides the offset learning with optical flow. Experiments show that our loss successfully avoids the overflow of offsets and alleviates the instability problem of deformable alignment. Aside from the contributions to deformable alignment, our formulation inspires a more flexible approach to introduce offset diversity to flow-based alignment, improving its performance.
    Comment: Tech report, 15 pages, 19 figures
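The decomposition claimed above (deformable convolution = spatial warping followed by convolution) can be sketched in 1-D: each kernel tap samples the input at a fractionally offset position via interpolation, then the taps are combined with fixed weights. This is a hand-rolled illustration, not the paper's 2-D implementation.

```python
# 1-D deformable convolution sketch: per-tap offset sampling (warping)
# with linear interpolation, followed by a weighted sum (convolution).
def linear_sample(signal, pos):
    # linearly interpolate signal at fractional position pos (clamped to range)
    pos = min(max(pos, 0.0), len(signal) - 1.0)
    i = int(pos)
    frac = pos - i
    j = min(i + 1, len(signal) - 1)
    return signal[i] * (1 - frac) + signal[j] * frac

def deformable_conv1d(signal, kernel, offsets):
    # offsets[p][k] shifts the k-th kernel tap at output position p;
    # all-zero offsets reduce this to an ordinary convolution.
    out = []
    r = len(kernel) // 2
    for p in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            acc += w * linear_sample(signal, p + k - r + offsets[p][k])
        out.append(acc)
    return out

sig = [0.0, 1.0, 2.0, 3.0]
kernel = [0.0, 1.0, 0.0]            # identity tap
zero_off = [[0.0] * 3 for _ in sig]
print(deformable_conv1d(sig, kernel, zero_off))  # → [0.0, 1.0, 2.0, 3.0]
```

With a single shared offset per position this collapses to flow-based warping; allowing each tap its own offset is the "offset diversity" the abstract identifies as the key difference.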

    Dual Associated Encoder for Face Restoration

    Restoring facial details from low-quality (LQ) images has remained a challenging problem due to its ill-posedness induced by various degradations in the wild. The existing codebook prior mitigates the ill-posedness by leveraging an autoencoder and a learned codebook of high-quality (HQ) features, achieving remarkable quality. However, existing approaches in this paradigm frequently depend on a single encoder pre-trained on HQ data for restoring HQ images, disregarding the domain gap between LQ and HQ images. As a result, the encoding of LQ inputs may be insufficient, resulting in suboptimal performance. To tackle this problem, we propose a novel dual-branch framework named DAEFR. Our method introduces an auxiliary LQ branch that extracts crucial information from the LQ inputs. Additionally, we incorporate association training to promote effective synergy between the two branches, enhancing code prediction and output quality. We evaluate the effectiveness of DAEFR on both synthetic and real-world datasets, demonstrating its superior performance in restoring facial details.
    Comment: Technical Report

    Impact of smoking on health system costs among cancer patients in a retrospective cohort study in Ontario, Canada

    Objective Smoking is the main modifiable cancer risk factor. The objective of this study was to examine the impact of smoking on health system costs among newly diagnosed adult patients with cancer. Specifically, costs of patients with cancer who were current smokers were compared with those of non-smokers from a publicly funded health system perspective. Methods This population-based cohort study of patients with cancer used administrative databases to identify smokers and non-smokers (1 April 2014-31 March 2016) and their healthcare costs in the 12-24 months following a cancer diagnosis. The health services included were hospitalisations, emergency room visits, drugs, home care services and physician services (from the time of diagnosis onwards). The difference in cost (ie, incremental cost) between patients with cancer who were smokers and those who were non-smokers was estimated using a generalised linear model (with log link and gamma distribution), and adjusted for age, sex, neighbourhood income, rurality, cancer site, cancer stage, geographical region and comorbidities. Results This study identified 3606 smokers and 14 911 non-smokers. Smokers were significantly younger (61 vs 65 years), more likely to be male (53%), lived in poorer neighbourhoods, had more advanced cancer stage, and were more likely to die within 1 year of diagnosis, compared with non-smokers. The regression model revealed that, on average, smokers had significantly higher monthly healthcare costs ($5091) than non-smokers ($4847), p<0.05. Conclusions Smoking status has a significant impact on healthcare costs among patients with cancer. On average, smokers incurred higher healthcare costs than non-smokers. These findings provide a further rationale for efforts to introduce evidence-based smoking cessation programmes as a standard of care for patients with cancer, as they have the potential not only to improve patients' outcomes but also to reduce the economic burden of smoking on the healthcare system.
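In a log-link gamma GLM like the one used above, coefficients act multiplicatively on the expected cost, so the adjusted incremental cost is read off by exponentiating. The sketch below only illustrates that arithmetic using the two reported monthly means; the beta values are back-solved for illustration and are not the study's fitted coefficients.

```python
import math

# Illustrative reading of a log-link gamma GLM: E[cost] = exp(b0 + b1*smoker).
# Coefficients here are back-solved from the reported means, not fitted values.
beta0 = math.log(4847.0)                  # baseline (non-smoker) monthly cost
beta_smoker = math.log(5091.0 / 4847.0)   # smoking effect on the log scale

expected_nonsmoker = math.exp(beta0)
expected_smoker = math.exp(beta0 + beta_smoker)
print(round(expected_smoker - expected_nonsmoker))  # incremental monthly cost → 244
```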