
    Manifold regularization for structured outputs via the joint kernel

    By utilizing the label dependencies among both the labeled and the unlabeled data, semi-supervised learning often achieves better generalization than supervised learning. In this paper, we extend a popular graph-based semi-supervised learning method, manifold regularization, to structured outputs. This is performed directly via the joint kernel and yields a unified manifold regularization framework for both unstructured and structured data. Experimental results on various data sets with inter-dependent outputs demonstrate the usefulness of manifold information in improving prediction performance.
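    As a rough sketch of the method being generalized (standard Belkin-Niyogi-Sindhwani notation, not necessarily the paper's), the manifold regularization objective over l labeled and u unlabeled examples is

        \[
        f^{*} = \arg\min_{f \in \mathcal{H}_K}
            \frac{1}{l} \sum_{i=1}^{l} V(x_i, y_i, f)
            + \gamma_A \|f\|_K^2
            + \gamma_I \, \mathbf{f}^{\top} L \, \mathbf{f},
        \]

    where V is a loss function, \|f\|_K is the RKHS norm, L is the graph Laplacian built from all l + u examples, and \mathbf{f} = (f(x_1), \ldots, f(x_{l+u}))^{\top}. The structured-output extension described above replaces the input kernel K(x, x') with a joint kernel J((x, y), (x', y')) over input-output pairs.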

    Illumination Controllable Dehazing Network based on Unsupervised Retinex Embedding

    On the one hand, dehazing is an ill-posed problem: no unique solution exists. On the other hand, dehazing should account for subjective preference, offering the user a choice of dehazed images rather than a single result. This paper therefore proposes IC-Dehazing, a multi-output dehazing network with illumination controllability. IC-Dehazing can change the illumination intensity by adjusting a factor in its illumination controllable module, which is built on the interpretable Retinex theory. The backbone dehazing network consists of a Transformer with double decoders for high-quality image restoration. Further, a prior-based loss function and an unsupervised training strategy enable IC-Dehazing to learn its parameters without paired data. To demonstrate its effectiveness, quantitative and qualitative experiments are conducted on image dehazing, semantic segmentation, and object detection tasks. Code is available at https://github.com/Xiaofeng-life/ICDehazing.
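    The Retinex model behind the illumination controllable module factors an image into reflectance and illumination, I = R ⊙ L; rescaling the estimated illumination then changes brightness without touching scene content. Below is a minimal single-scale NumPy sketch of that idea only; the Gaussian-blur illumination estimate and the gamma factor are illustrative assumptions, not IC-Dehazing's learned module.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def retinex_adjust(image, gamma=0.7):
            """Toy single-scale Retinex illumination control (not IC-Dehazing).
            image: float array in (0, 1], shape (H, W) or (H, W, 3)."""
            eps = 1e-6
            sigma = (15, 15, 0) if image.ndim == 3 else 15   # don't blur across channels
            illumination = gaussian_filter(image, sigma=sigma) + eps
            reflectance = image / illumination               # Retinex: I = R * L
            adjusted = illumination ** gamma                 # gamma < 1 brightens, > 1 darkens
            return np.clip(reflectance * adjusted, 0.0, 1.0)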

    No Place to Hide: Dual Deep Interaction Channel Network for Fake News Detection based on Data Augmentation

    Online social networks (OSNs) have become a hotbed of fake news because of the low cost of information dissemination. Although existing methods have explored news content and propagation structure, fake news detection still faces two challenges: how to mine unique key features and evolution patterns, and how to build a high-performance model from small samples. Unlike popular methods that rely chiefly on the propagation topology, this paper proposes a novel framework for fake news detection from the perspectives of semantics, emotion, and data enhancement. The framework excavates the emotional evolution patterns of news participants during propagation, and a dual deep interaction channel network of semantics and emotion is designed to obtain a more comprehensive, fine-grained news representation that takes comments into account. The framework also introduces a confidence-based data enhancement module that obtains more high-quality labeled data, further improving the classification model. Experiments show that the proposed approach outperforms state-of-the-art methods.
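    As one plausible (hypothetical) reading of the dual-channel design, the sketch below fuses a semantic token sequence and an emotion sequence with cross-attention before classifying; all module names and dimensions are assumptions, not the paper's architecture. The confidence-based data-enhancement step would then pseudo-label unlabeled posts whose predicted probability exceeds a threshold and add them to the training set.

        import torch
        import torch.nn as nn

        class DualChannelDetector(nn.Module):
            """Hypothetical dual-channel (semantic + emotion) classifier sketch."""
            def __init__(self, sem_dim=768, emo_dim=64, hidden=256):
                super().__init__()
                self.sem_proj = nn.Linear(sem_dim, hidden)
                self.emo_proj = nn.Linear(emo_dim, hidden)
                self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
                self.classifier = nn.Linear(2 * hidden, 2)   # real vs. fake

            def forward(self, sem_seq, emo_seq):
                s = self.sem_proj(sem_seq)        # (B, T, hidden) semantic tokens
                e = self.emo_proj(emo_seq)        # (B, T, hidden) emotion features
                s2e, _ = self.attn(s, e, e)       # semantics attend to emotion
                e2s, _ = self.attn(e, s, s)       # emotion attends to semantics
                fused = torch.cat([s2e.mean(1), e2s.mean(1)], dim=-1)
                return self.classifier(fused)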

    Learning the Relation between Similarity Loss and Clustering Loss in Self-Supervised Learning

    Self-supervised learning enables networks to learn discriminative features from massive unlabeled data. Most state-of-the-art methods maximize the similarity between two augmentations of one image based on contrastive learning; by exploiting the consistency of the two augmentations, the burden of manual annotation is removed. Contrastive learning exploits instance-level information to learn robust features, but the learned information may be confined to different views of the same instance. In this paper, we attempt to leverage the similarity between two distinct images to boost representation quality in self-supervised learning; in contrast to instance-level information, the similarity between distinct images may provide more useful information. We also analyze the relation between similarity loss and feature-level cross-entropy loss. These two losses are essential for most deep learning methods, yet the relation between them is not clear: similarity loss helps obtain instance-level representations, while feature-level cross-entropy loss helps mine the similarity between two distinct images. We provide theoretical analyses and experiments showing that a suitable combination of the two losses yields state-of-the-art results. Code is available at https://github.com/guijiejie/ICCL. (Comment: this paper is accepted by IEEE Transactions on Image Processing.)
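    One way to make the two losses concrete (a sketch under assumed definitions, not necessarily the paper's exact formulation): the similarity loss pulls together the embeddings of two augmentations of one image, while the feature-level cross-entropy compares the two views' per-dimension distributions over the batch.

        import torch
        import torch.nn.functional as F

        def combined_loss(z1, z2, alpha=0.5, temperature=0.1):
            """z1, z2: (B, D) embeddings of two augmented views.
            alpha and temperature are illustrative hyper-parameters."""
            # Instance-level similarity loss (negative cosine similarity).
            sim_loss = -F.cosine_similarity(z1, z2, dim=-1).mean()
            # Feature-level cross-entropy: each feature dimension defines a
            # distribution over the batch; match the two views' distributions.
            p1 = F.softmax(z1.t() / temperature, dim=-1)           # (D, B)
            log_p2 = F.log_softmax(z2.t() / temperature, dim=-1)   # (D, B)
            feat_ce = -(p1 * log_p2).sum(dim=-1).mean()
            return alpha * sim_loss + (1.0 - alpha) * feat_ce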

    Moderating the Outputs of Support Vector Machine Classifiers

    In this paper, we extend the use of moderated outputs to the support vector machine (SVM) by making use of a relationship between the SVM and the evidence framework. The moderated output is more in line with the Bayesian idea that the posterior weight distribution should be taken into account upon prediction, and it also alleviates the usual tendency of assigning overly high confidence to the estimated class memberships of the test patterns. Moreover, the moderated output derived here can be taken as an approximation to the posterior class probability. Hence, meaningful rejection thresholds can be assigned and outputs from several networks can be directly compared. Experimental results on both artificial and real-world data are also discussed.

    1 Introduction

    In recent years, there has been a lot of interest in studying the support vector machine (SVM) [8, 9]. To date, the SVM has been applied successfully to a wide range of problems, such as classification, regression, time series prediction…
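    In the evidence framework, a moderated output of this kind typically takes the MacKay form below (notation assumed; the SVM-specific variance follows the paper's derivation and is not reproduced here):

        \[
        P(y = 1 \mid x, \mathcal{D}) \approx
            \sigma\!\left( \frac{f_{\mathrm{MP}}(x)}{\sqrt{1 + \pi s^2(x) / 8}} \right),
        \]

    where \sigma is the logistic sigmoid, f_{\mathrm{MP}}(x) is the trained (most-probable) output, and s^2(x) is the predictive variance under the posterior weight distribution. As s^2(x) grows, the probability is pulled toward 1/2, which is what tempers the over-confident class memberships mentioned above.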

    Automated Text Categorization Using Support Vector Machine

    In this paper, we study the use of the support vector machine (SVM) in text categorization. Unlike other machine learning techniques, it allows easy incorporation of new documents into an existing trained system. Moreover, dimension reduction, which is usually imperative, now becomes optional. Thus, the SVM adapts efficiently in dynamic environments that require frequent additions to the document collection. Empirical results on the Reuters-22173 collection are also discussed.

    1. Introduction

    The increasingly widespread use of information services made possible by the Internet and World Wide Web (WWW) has led to the so-called information-overloading problem. Today, millions of online documents on every topic are easily accessible via the Internet. As the available information increases, the inability of people to assimilate and profitably utilize such large amounts of information becomes more and more apparent. Developing user-friendly, automatic tools for efficient as well as effective retrieval…
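    A minimal modern sketch of the pipeline (a scikit-learn stand-in, not the paper's original setup; the toy corpus below merely stands in for Reuters-22173):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Toy corpus standing in for a collection such as Reuters-22173.
        train_texts = ["oil prices rise on supply fears",
                       "central bank cuts interest rates"]
        train_labels = ["energy", "finance"]

        model = make_pipeline(TfidfVectorizer(), LinearSVC())
        model.fit(train_texts, train_labels)
        print(model.predict(["rates held steady by the bank"]))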

    Integrating the Evidence Framework and the Support Vector Machine

    In this paper, we show that training of the support vector machine (SVM) can be interpreted as performing the level 1 inference of MacKay's evidence framework. We further show that levels 2 and 3 can also be applied to the SVM. This allows automatic adjustment of the regularization parameter and the kernel parameter. More importantly, it opens up a wealth of Bayesian tools for use with the SVM. Performance is evaluated on both synthetic and real-world data sets.

    1. Introduction

    Recently, there has been a lot of interest in studying the support vector machine (SVM) [1, 4, 5]. The SVM is based on the idea of structural risk minimization, which shows that the generalization error is bounded by the sum of the training-set error and a term depending on the Vapnik-Chervonenkis dimension of the learner. By minimizing this bound, high generalization performance can be achieved. Moreover, unlike other machine learning methods, the SVM's generalization error is not related to the problem's input dimension…
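    For reference, MacKay's three levels of inference (standard statement, notation assumed) are:

        \[
        \text{Level 1:}\quad
        P(\mathbf{w} \mid \mathcal{D}, \alpha, \mathcal{H}) =
            \frac{P(\mathcal{D} \mid \mathbf{w}, \mathcal{H})\,
                  P(\mathbf{w} \mid \alpha, \mathcal{H})}
                 {P(\mathcal{D} \mid \alpha, \mathcal{H})}
        \]

        \[
        \text{Level 2:}\quad
        \alpha^{*} = \arg\max_{\alpha} P(\mathcal{D} \mid \alpha, \mathcal{H}),
        \qquad
        \text{Level 3:}\quad
        P(\mathcal{H} \mid \mathcal{D}) \propto P(\mathcal{D} \mid \mathcal{H})\, P(\mathcal{H}).
        \]

    Level 1 corresponds to SVM training (MAP estimation of the weights), level 2 tunes the regularization parameter by maximizing the evidence, and level 3 compares kernels/models.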

    Simple randomized algorithms for online learning with kernels

    In online learning with kernels, it is vital to control the size (budget) of the support set because of the curse of kernelization. In this paper, we propose two simple and effective stochastic strategies for controlling the budget. Both algorithms have an expected regret that is sublinear in the horizon. Experimental results on a number of benchmark data sets demonstrate encouraging performance in terms of both efficacy and efficiency.
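    One simple stochastic budget strategy in this spirit (a sketch, not necessarily either of the paper's two algorithms): run a kernel perceptron and, whenever the support set exceeds the budget, discard a stored example uniformly at random.

        import random
        import numpy as np

        def rbf(x, y, gamma=1.0):
            return np.exp(-gamma * np.sum((x - y) ** 2))

        class BudgetedKernelPerceptron:
            """Mistake-driven online kernel learner with random removal."""
            def __init__(self, budget=50, gamma=1.0):
                self.budget, self.gamma = budget, gamma
                self.support = []                     # list of (x, y, alpha)

            def predict(self, x):
                s = sum(a * y * rbf(sx, x, self.gamma)
                        for sx, y, a in self.support)
                return 1 if s >= 0 else -1

            def update(self, x, y):                   # y in {-1, +1}
                if self.predict(x) != y:
                    self.support.append((x, y, 1.0))
                    if len(self.support) > self.budget:
                        self.support.pop(random.randrange(len(self.support)))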

    Accurate integration of aerosol predictions by smoothing on a manifold

    Accurately measuring the aerosol optical depth (AOD) is essential for our understanding of the climate. Currently, AOD can be measured by (i) satellite instruments, which operate on a global scale but have limited accuracies; and (ii) ground-based instruments, which are more accurate but not widely available. Recent approaches focus on integrating measurements from these two sources to complement each other. In this paper, we further improve the prediction accuracy by using the observation that the AOD varies slowly in the spatial domain. Using a probabilistic approach, we impose this smoothness constraint by a Gaussian random field on the Earth's surface, which can be considered as a two-dimensional manifold. The proposed integration approach is computationally simple, and experimental results on both synthetic and real-world data sets show that it significantly outperforms the state-of-the-art.
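    A minimal sketch of the smoothing idea (an assumed formulation: Gaussian observation noise plus a Gaussian-random-field prior on a 4-neighbor grid; the weights lam and tau are illustrative, not the paper's):

        import numpy as np

        def smooth_aod(sat, ground, ground_mask, lam=1.0, tau=10.0):
            """Fuse dense-but-noisy satellite AOD with sparse-but-accurate
            ground AOD by solving (P + lam * L) z = P y, where P is a diagonal
            precision (tau where ground data exist, 1 elsewhere) and L is the
            grid-graph Laplacian. Dense solve for clarity; small grids only."""
            H, W = sat.shape
            n = H * W
            y = np.where(ground_mask, ground, sat).ravel()
            prec = np.where(ground_mask, tau, 1.0).ravel()   # trust ground more
            Lap = np.zeros((n, n))
            for i in range(H):
                for j in range(W):
                    u = i * W + j
                    for di, dj in ((0, 1), (1, 0)):          # right/down neighbors
                        ii, jj = i + di, j + dj
                        if ii < H and jj < W:
                            v = ii * W + jj
                            Lap[u, u] += 1; Lap[v, v] += 1
                            Lap[u, v] -= 1; Lap[v, u] -= 1
            z = np.linalg.solve(np.diag(prec) + lam * Lap, prec * y)
            return z.reshape(H, W)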