
    Multiple Linear Regression Haze-removal Model Based on Dark Channel Prior

    Dark Channel Prior (DCP) is a widely recognized traditional dehazing algorithm. However, it may fail in bright regions, and the restored image is often darker than the hazy input. In this paper, we propose an effective method to optimize DCP. We build a multiple linear regression haze-removal model based on the DCP atmospheric scattering model and train it on the RESIDE dataset, aiming to reduce the errors caused by the rough estimations of the transmission map t(x) and the atmospheric light A. The RESIDE dataset provides enough synthetic hazy images and their corresponding ground-truth images for training and testing. We compare the performance of different dehazing algorithms in terms of two important full-reference metrics: the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). The experimental results show that our model achieves the highest SSIM value, and its PSNR value is also higher than those of most state-of-the-art dehazing algorithms. Our results also overcome the weakness of DCP on real-world hazy images.
    Comment: IEEE CPS (CSCI 2018 Int'l Conference)
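    The classic DCP pipeline the abstract builds on can be sketched as follows. This is a minimal NumPy implementation of He et al.'s original method, not the paper's regression model: it inverts the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)) using the rough estimates of t(x) and A whose errors the paper aims to reduce. Patch size, omega, and t0 values are conventional defaults, not taken from the paper.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB channels, then a local minimum filter."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Classic DCP dehazing: invert I(x) = J(x)t(x) + A(1 - t(x))."""
    dc = dark_channel(img, patch)
    # Rough estimate of atmospheric light A: mean colour of the
    # brightest 0.1% of pixels in the dark channel.
    n = max(1, int(dc.size * 0.001))
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[idx].mean(axis=0)
    # Rough transmission estimate t(x) = 1 - omega * dark_channel(I / A);
    # these two coarse estimates are what the paper's regression refines.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)  # floor t to avoid amplifying noise
    # Recover the scene radiance J = (I - A) / t + A.
    J = (img - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0), t
```

    The t0 floor is what causes DCP's known failure mode in bright regions: where the dark channel is not actually dark (sky, white objects), t is underestimated and the output darkens, which is the weakness the abstract targets.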

    English Out-of-Vocabulary Lexical Evaluation Task

    Unlike previous unknown-noun tagging tasks, this is the first attempt to focus on out-of-vocabulary (OOV) lexical evaluation tasks that do not require any prior knowledge. OOV words are words that appear only in the test samples. The goal of the tasks is to provide solutions for OOV lexical classification and prediction. The tasks require annotators to infer the attributes of the OOV words from their related contexts. Then, we utilize unsupervised word embedding methods such as Word2Vec and Word2GM to perform baseline experiments on the categorical classification task and the OOV word attribute prediction task.
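    The core idea of inferring an OOV word's attributes from its context can be sketched with a toy nearest-centroid classifier: represent the unseen word by the average embedding of its in-vocabulary context words, then assign the attribute class whose centroid is most similar. The embeddings, class names, and context below are hypothetical stand-ins, not the paper's Word2Vec/Word2GM setup or its data.

```python
import numpy as np

# Toy in-vocabulary embeddings (hypothetical stand-ins for trained vectors).
emb = {
    "eat":   np.array([1.0, 0.1]),
    "drink": np.array([0.9, 0.2]),
    "run":   np.array([0.1, 1.0]),
    "jump":  np.array([0.2, 0.9]),
}

def context_vector(context, emb):
    """Represent an OOV word by averaging its in-vocabulary context embeddings."""
    vecs = [emb[w] for w in context if w in emb]
    return np.mean(vecs, axis=0)

def classify(vec, centroids):
    """Pick the attribute class whose centroid has the highest cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(centroids, key=lambda c: cos(vec, centroids[c]))

# Hypothetical attribute classes, each summarized by a centroid.
centroids = {
    "food-related":   (emb["eat"] + emb["drink"]) / 2,
    "motion-related": (emb["run"] + emb["jump"]) / 2,
}

v = context_vector(["eat", "drink"], emb)  # context of some unseen word
print(classify(v, centroids))  # → food-related
```

    A real baseline would swap the toy dictionary for vectors trained with Word2Vec (or Word2GM's Gaussian mixtures) on a large corpus; the context-averaging step is what lets a model score words it never saw during training.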