
    BSDEs driven by a multi-dimensional martingale and their applications to market models with funding costs

    We establish some well-posedness and comparison results for BSDEs driven by one- and multi-dimensional martingales. On the one hand, our approach is largely motivated by results and methods developed in Carbone et al. (2008) and El Karoui and Huang (1997). On the other hand, our results are also motivated by recent developments in arbitrage pricing theory under funding costs and collateralization. A new version of the comparison theorem for BSDEs driven by a multi-dimensional martingale is established and applied to the pricing and hedging BSDEs studied in Bielecki and Rutkowski (2014) and Nie and Rutkowski (2014). This allows us to obtain existence and uniqueness results for unilateral prices and to demonstrate the existence of no-arbitrage bounds for a collateralized contract when both agents have non-negative initial endowments.
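    For orientation, a BSDE driven by a multi-dimensional martingale in the spirit of El Karoui and Huang (1997) typically takes the form sketched below. The notation (driver f, increasing process C, orthogonal martingale N) is a generic illustration and not necessarily the exact formulation or assumptions used in the paper.

```latex
% Generic sketch of a BSDE driven by a d-dimensional martingale M:
% find (Y, Z, N) such that, for t in [0, T],
\[
  Y_t \;=\; \xi \;+\; \int_t^T f(s, Y_s, Z_s)\, dC_s
        \;-\; \int_t^T Z_s^{\top}\, dM_s \;-\; (N_T - N_t),
\]
% where \xi is the terminal condition, C is an increasing process,
% Z is an R^d-valued integrand, and N is a martingale orthogonal to M.
```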

    Dilated Deep Residual Network for Image Denoising

    Variations of deep neural networks such as the convolutional neural network (CNN) have been successfully applied to image denoising. The goal is to automatically learn a mapping from a noisy image to a clean image, given training data consisting of pairs of noisy and clean images. Most existing CNN models for image denoising have many layers; in such cases, the models involve a large number of parameters and are computationally expensive to train. In this paper, we develop a dilated residual CNN for Gaussian image denoising. Compared with the recently proposed residual denoiser, our method achieves comparable performance at a lower computational cost. Specifically, we enlarge the receptive field by adopting dilated convolution in the residual network, with the dilation factor set to a fixed value, and we use appropriate zero padding so that the output has the same dimension as the input. It has been shown that enlarging the receptive field can boost CNN performance in image classification, and we further demonstrate that it also leads to competitive performance on the denoising problem. Moreover, we present a formula for calculating the receptive field size when dilated convolution is incorporated, so the change of receptive field can be interpreted mathematically. To validate the efficacy of our approach, we conduct extensive experiments on both grayscale and color image denoising with specific or randomized noise levels. Both the quantitative measurements and the visual denoising results are promising compared with state-of-the-art baselines.
    Comment: camera ready, 8 pages, accepted to IEEE ICTAI 201
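    To make the receptive-field claim concrete, here is a minimal Python sketch of how the receptive field of a stack of stride-1 dilated convolutions grows, together with the zero padding that keeps the output size equal to the input size. The layer counts and dilation factors below are illustrative assumptions, not the paper's exact architecture.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 (possibly dilated) convolutions.

    Each stride-1 layer with kernel size k and dilation d enlarges the
    receptive field by (k - 1) * d pixels.
    """
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf


def same_padding(kernel_size, dilation):
    """Zero padding that keeps the output size equal to the input size
    for a stride-1 convolution with an odd kernel size."""
    return dilation * (kernel_size - 1) // 2


# Illustrative values only: seven 3x3 layers, all with dilation factor 2.
print(receptive_field([3] * 7, [2] * 7))  # -> 29
print(same_padding(3, 2))                 # -> 2
```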

    Instance-based Deep Transfer Learning

    Deep transfer learning has recently attracted significant research interest. It makes use of pre-trained models learned from a source domain and applies them to tasks in a target domain. Model-based deep transfer learning is probably the most frequently used approach; however, very little work has been devoted to enhancing deep transfer learning by focusing on the influence of the data. In this paper, we propose an instance-based approach to improve deep transfer learning in a target domain. Specifically, we choose a pre-trained model from a source domain and use it to estimate the influence of the training samples in the target domain. We then optimize the target-domain training data by removing the training samples that would lower the performance of the pre-trained model. We subsequently either fine-tune the pre-trained model with the optimized training data, or build a new model that is initialized partially from the pre-trained model and fine-tune it with the optimized training data. Using this approach, transfer learning helps deep learning models capture more useful features. Extensive experiments demonstrate the effectiveness of our approach in boosting the quality of deep learning models for common computer vision tasks such as image classification.
    Comment: Accepted to WACV 2019. This is a preprint version.
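    As a rough illustration of the data-selection step, the sketch below filters a target-domain training set using a pre-trained model's per-sample loss as a stand-in for the influence estimate described above. The function names, the loss-based proxy, and the drop fraction are assumptions made for this example, not the paper's actual procedure.

```python
import numpy as np


def select_training_data(per_sample_loss, X, y, drop_fraction=0.1):
    """Keep the target-domain samples on which a pre-trained source model
    does best, dropping the rest before fine-tuning.

    per_sample_loss(x, t) -> float is assumed to evaluate the pre-trained
    model's loss on one (input, label) pair; using it as a proxy for
    sample influence is a simplification of the method in the abstract.
    """
    losses = np.array([per_sample_loss(x, t) for x, t in zip(X, y)])
    keep = np.argsort(losses)[: int(len(X) * (1.0 - drop_fraction))]
    return X[keep], y[keep]


# Toy usage with a dummy loss; in practice per_sample_loss would wrap the
# pre-trained model chosen from the source domain.
X = np.random.rand(100, 8)
y = np.random.randint(0, 2, size=100)
X_opt, y_opt = select_training_data(lambda x, t: float(abs(x.sum() - t)), X, y)
print(X_opt.shape, y_opt.shape)  # (90, 8) (90,)
```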