    Fast multi-output relevance vector regression

    This paper applies a matrix Gaussian distribution as the likelihood of the complete data set to reduce the time complexity of multi-output relevance vector regression from O(VM^3) to O(V^3 + M^3), where V and M are the numbers of output dimensions and basis functions, respectively, and V < M. Our experimental results demonstrate that the proposed method is competitive with, and faster than, existing methods such as that of Thayananthan et al. (2008). Its computational efficiency and accuracy can be attributed to a different specification of the likelihood of the data: the existing method expresses the likelihood of the training data as a product of Gaussian distributions, whereas the proposed method expresses it as a single matrix Gaussian distribution.
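The complexity claim rests on a standard property of the matrix Gaussian density: its log-density only requires inverting the V x V row covariance and the M x M column covariance separately, never the VM x VM covariance of the equivalent vectorised Gaussian. A minimal sketch (function and variable names are our own, not the paper's):

```python
import numpy as np

def matrix_gaussian_logpdf(X, Mu, U, S):
    """Log-density of MN(X | Mu, U, S) for a V x M matrix X,
    with V x V row covariance U and M x M column covariance S.
    Only U and S are inverted: O(V^3 + M^3), not O((VM)^3)."""
    V, M = X.shape
    D = X - Mu
    Ui = np.linalg.inv(U)
    Si = np.linalg.inv(S)
    _, logdet_U = np.linalg.slogdet(U)
    _, logdet_S = np.linalg.slogdet(S)
    quad = np.trace(Si @ D.T @ Ui @ D)
    return -0.5 * (V * M * np.log(2 * np.pi)
                   + M * logdet_U + V * logdet_S + quad)
```

This agrees with the vectorised Gaussian N(vec(X) | vec(Mu), S kron U), but sidesteps ever forming the Kronecker covariance.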

    High-Dimensional Feature Selection by Feature-Wise Kernelized Lasso

    The goal of supervised feature selection is to find a subset of input features that are responsible for predicting output values. The least absolute shrinkage and selection operator (Lasso) allows computationally efficient feature selection based on linear dependency between input features and output values. In this paper, we consider a feature-wise kernelized Lasso for capturing non-linear input-output dependency. We first show that, with particular choices of kernel functions, non-redundant features with strong statistical dependence on output values can be found in terms of kernel-based independence measures. We then show that the globally optimal solution can be efficiently computed; this makes the approach scalable to high-dimensional problems. The effectiveness of the proposed method is demonstrated through feature selection experiments with thousands of features.
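The feature-wise construction can be sketched as follows: each feature gets its own centred Gram matrix, and a non-negative Lasso over the vectorised Gram matrices then selects features whose kernel-based dependence on the output is strong and non-redundant. This is a minimal illustration in the spirit of the paper, not its actual implementation; the kernel width, regulariser, and solver below are our own simplifications.

```python
import numpy as np

def centred_gaussian_gram(v, sigma=1.0):
    """Centred Gaussian Gram matrix of one feature (or output) vector."""
    d = v[:, None] - v[None, :]
    K = np.exp(-d ** 2 / (2 * sigma ** 2))
    n = len(v)
    H = np.eye(n) - np.ones((n, n)) / n     # centring matrix
    return H @ K @ H

def feature_wise_kernel_lasso(X, y, lam=0.1, n_iter=200):
    """Non-negative Lasso over per-feature centred Gram matrices,
    solved by cyclic coordinate descent."""
    n, p = X.shape
    Ks = np.stack([centred_gaussian_gram(X[:, k]).ravel() for k in range(p)])
    L = centred_gaussian_gram(y).ravel()
    alpha = np.zeros(p)
    norms = np.einsum("kd,kd->k", Ks, Ks)
    for _ in range(n_iter):
        for k in range(p):
            # partial residual with feature k left out
            r = L - Ks.T @ alpha + Ks[k] * alpha[k]
            alpha[k] = max(0.0, (Ks[k] @ r - lam) / norms[k])
    return alpha    # non-zero entries mark selected features
```

The non-negativity constraint is what makes the problem convex with an efficiently computable global optimum, matching the scalability claim in the abstract.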

    Multi-score Learning for Affect Recognition: the Case of Body Postures

    An important challenge in building automatic affective state recognition systems is establishing the ground truth. When the ground truth is not available, observers are often used to label training and testing sets. Unfortunately, inter-rater reliability between observers tends to vary from fair to moderate when dealing with naturalistic expressions. Nevertheless, the most common approach is to label each expression with the most frequent label assigned by the observers to that expression. In this paper, we propose a general pattern recognition framework that takes the variability between observers into account for automatic affect recognition. This leads to what we term a multi-score learning problem, in which a single expression is associated with multiple values representing the scores of each available emotion label. We also propose several performance measures and pattern recognition methods for this framework, and report the experimental results obtained when testing and comparing these methods on two affective posture datasets.
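The core representational shift can be illustrated with a small hypothetical example: instead of collapsing observer labels into the single majority label, each expression keeps one score per emotion label (here, simply the fraction of observers who chose it). The label set and data below are invented for illustration.

```python
from collections import Counter

LABELS = ["angry", "happy", "sad"]   # hypothetical label vocabulary

def multi_score(observer_labels):
    """Score vector for one expression: fraction of observers per label."""
    counts = Counter(observer_labels)
    n = len(observer_labels)
    return [counts[label] / n for label in LABELS]

# Four observers rate one posture; majority labelling would keep only
# "happy" and discard the disagreement that the score vector retains.
print(multi_score(["happy", "happy", "angry", "sad"]))
# → [0.25, 0.5, 0.25]
```

A model trained on such score vectors sees the full distribution of observer judgements rather than a single, possibly unreliable, hard label.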