194 research outputs found

    Maximum weighted likelihood via rival penalized EM for density mixture clustering with automatic model selection

    Reversible Watermarking by Modulation and Security Enhancement

    Learning Multi-Boosted HMMs for Lip-Password Based Speaker Verification

    Label-Noise Learning with Intrinsically Long-Tailed Data

    Label noise is one of the key factors that lead to the poor generalization of deep learning models. Existing label-noise learning methods usually assume that the ground-truth classes of the training data are balanced. However, real-world data are often imbalanced, so the observed class distribution under label noise is inconsistent with the intrinsic class distribution. In this case, it is hard to distinguish clean samples from noisy samples on the intrinsic tail classes when the intrinsic class distribution is unknown. In this paper, we propose a learning framework for label-noise learning with intrinsically long-tailed data. Specifically, we propose two-stage bi-dimensional sample selection (TABASCO) to better separate clean samples from noisy samples, especially for the tail classes. TABASCO consists of two new separation metrics that complement each other to compensate for the limitation of using a single metric in sample separation. Extensive experiments on benchmarks demonstrate the effectiveness of our method. Our code is available at https://github.com/Wakings/TABASCO. Comment: Accepted by ICCV 2023.
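
    The sketch below illustrates one way a bi-dimensional sample selection could work: two complementary per-sample metrics are each split into a "clean" and a "noisy" mode, and a sample is kept only if both metrics agree. The specific metrics, the 2-component Gaussian mixture split, and the thresholds are assumptions for illustration, not the authors' TABASCO implementation.

```python
# Illustrative sketch of bi-dimensional sample selection (not the official TABASCO code).
import numpy as np
from sklearn.mixture import GaussianMixture

def split_by_metric(scores: np.ndarray) -> np.ndarray:
    """Return a boolean mask of samples assigned to the low-score (likely clean) mode."""
    gmm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))
    clean_component = np.argmin(gmm.means_.ravel())  # lower mean => cleaner
    return gmm.predict(scores.reshape(-1, 1)) == clean_component

def select_clean(per_sample_loss: np.ndarray, second_metric: np.ndarray) -> np.ndarray:
    """Bi-dimensional selection: a sample counts as clean only if both metrics agree."""
    return split_by_metric(per_sample_loss) & split_by_metric(second_metric)

# Example with synthetic scores: low values indicate likely-clean samples.
rng = np.random.default_rng(0)
loss = np.concatenate([rng.normal(0.2, 0.05, 900), rng.normal(1.0, 0.2, 100)])
dist = np.concatenate([rng.normal(0.3, 0.05, 900), rng.normal(0.9, 0.2, 100)])
clean_mask = select_clean(loss, dist)
print(clean_mask.sum(), "samples selected as clean")
```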

    A Batch Rival Penalized Expectation-Maximization Algorithm for Gaussian Mixture Clustering with Automatic Model Selection

    Within the maximum weighted likelihood (MWL) learning framework proposed by Cheung (2004, 2005), this paper develops a batch Rival Penalized Expectation-Maximization (RPEM) algorithm for density mixture clustering, provided that all observations are available before the learning process. Compared to the adaptive RPEM algorithm of Cheung (2004, 2005), the batch RPEM does not need a learning rate, analogous to the Expectation-Maximization (EM) algorithm (Dempster et al., 1977), yet it still preserves the capability of automatic model selection. Furthermore, the batch RPEM generally converges faster than both the EM and the adaptive RPEM. Experiments show the superior performance of the proposed algorithm on synthetic data and color image segmentation.
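
    As a rough illustration of the automatic model selection behaviour described above (start with more components than needed and let redundant ones die out), the toy sketch below fits a Gaussian mixture with standard batch EM and prunes components whose mixing proportion collapses. It does not reproduce the MWL rival-penalization weights of Cheung (2004, 2005); the pruning rule is a stand-in assumption.

```python
# Toy sketch: batch EM for a Gaussian mixture with pruning of weak components,
# mimicking automatic model selection (NOT the batch RPEM weighting itself).
import numpy as np
from scipy.stats import multivariate_normal

def fit_gmm_with_pruning(X, k_init=10, n_iter=100, prune_tol=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, k_init, replace=False)]
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d)] * k_init)
    pis = np.full(k_init, 1.0 / k_init)
    for _ in range(n_iter):
        # E-step: responsibilities of each surviving component for each sample.
        dens = np.stack([pi * multivariate_normal.pdf(X, m, c)
                         for pi, m, c in zip(pis, means, covs)], axis=1)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: standard batch updates of proportions, means, covariances.
        nk = resp.sum(axis=0)
        pis = nk / n
        means = (resp.T @ X) / nk[:, None]
        covs = np.array([((resp[:, j, None] * (X - means[j])).T @ (X - means[j])) / nk[j]
                         + 1e-6 * np.eye(d) for j in range(len(pis))])
        # Prune components whose mixing proportion collapses (model selection).
        keep = pis > prune_tol
        pis, means, covs = pis[keep] / pis[keep].sum(), means[keep], covs[keep]
    return pis, means, covs

# Example: three well-separated 2-D clusters; surplus components should be pruned.
X = np.vstack([np.random.default_rng(s).normal(loc=m, scale=0.3, size=(200, 2))
               for s, m in enumerate([(0, 0), (3, 0), (0, 3)])])
pis, means, covs = fit_gmm_with_pruning(X)
print(len(pis), "components survived")
```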

    An efficient and scalable algorithm for clustering XML documents by structure

    Modeling Spatial Relations of Human Body Parts for Indexing and Retrieving Close Character Interactions

    Retrieving pre-captured human motion for analyzing and synthesizing virtual character movement has been widely used in Virtual Reality (VR) and interactive computer graphics applications. In this paper, we propose a new human pose representation, called Spatial Relations of Human Body Parts (SRBP), to represent spatial relations between body parts of the subject(s), which intuitively describes how much the body parts interact with each other. Since SRBP is computed from the local structure of the pose (i.e. multiple body parts in proximity) rather than from individual or pairwise joints as in previous approaches, the new representation is robust to minor variations in individual joint locations. Experimental results show that SRBP outperforms existing skeleton-based motion retrieval and classification approaches on benchmark databases.
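
    To make the idea of part-level spatial relations concrete, the sketch below encodes one simple variant: pairwise distances between body-part centroids, which depend on groups of joints rather than single joints and are therefore tolerant of per-joint noise. The joint grouping and the distance-only features are assumptions for illustration; the SRBP descriptor in the paper captures richer local structure.

```python
# Illustrative part-level pose features (not the SRBP descriptor itself).
import numpy as np

# Hypothetical joint indices for a 15-joint skeleton, grouped into body parts.
BODY_PARTS = {
    "head":      [0, 1],
    "torso":     [2, 3, 4],
    "left_arm":  [5, 6, 7],
    "right_arm": [8, 9, 10],
    "left_leg":  [11, 12],
    "right_leg": [13, 14],
}

def part_relation_features(joints: np.ndarray) -> np.ndarray:
    """joints: (num_joints, 3) array of 3-D joint positions for one pose.
    Returns the upper-triangular vector of centroid-to-centroid distances,
    which is translation-invariant and averages out per-joint noise."""
    centroids = np.stack([joints[idx].mean(axis=0) for idx in BODY_PARTS.values()])
    diff = centroids[:, None, :] - centroids[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(BODY_PARTS), k=1)
    return dist[iu]

# Example: compare two poses by the distance between their feature vectors.
pose_a = np.random.default_rng(0).normal(size=(15, 3))
pose_b = np.random.default_rng(1).normal(size=(15, 3))
print(np.linalg.norm(part_relation_features(pose_a) - part_relation_features(pose_b)))
```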

    Enhancing the Performance of Neural Networks Through Causal Discovery and Integration of Domain Knowledge

    In this paper, we develop a generic methodology to encode the hierarchical causality structure among observed variables into a neural network in order to improve its predictive performance. The proposed methodology, called causality-informed neural network (CINN), leverages three coherent steps to systematically map structural causal knowledge into the layer-to-layer design of a neural network while strictly preserving the orientation of every causal relationship. In the first step, CINN discovers causal relationships from observational data via directed acyclic graph (DAG) learning, where causal discovery is recast as a continuous optimization problem to avoid a combinatorial search. In the second step, the discovered hierarchical causality structure among observed variables is systematically encoded into the neural network through a dedicated architecture and a customized loss function. By categorizing variables in the causal DAG as root, intermediate, and leaf nodes, the hierarchical causal DAG is translated into a CINN with a one-to-one correspondence between nodes in the causal DAG and units in the CINN, while maintaining the relative order among these nodes. Regarding the loss function, both intermediate and leaf nodes in the DAG are treated as target outputs during CINN training so as to drive co-learning of the causal relationships among different types of nodes. As multiple loss components emerge in the CINN, we leverage the projection of conflicting gradients to mitigate gradient interference among the multiple learning tasks. Computational experiments across a broad spectrum of UCI data sets demonstrate substantial advantages of CINN in predictive performance over other state-of-the-art methods. In addition, an ablation study underscores the value of integrating structural and quantitative causal knowledge in incrementally enhancing the neural network's predictive performance.
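
    The abstract's last technical ingredient, projecting conflicting gradients so that the per-node loss terms do not interfere, can be illustrated generically as below. This is a minimal PCGrad-style sketch on flattened gradient vectors, not the CINN training code; the task gradients and their shapes are assumptions for the example.

```python
# Minimal sketch of projecting conflicting gradients between multiple loss terms.
import numpy as np

def project_conflicting(grads: list[np.ndarray], seed: int = 0) -> np.ndarray:
    """Each grads[i] is the flattened gradient of one task's loss w.r.t. shared
    parameters. If two gradients conflict (negative dot product), the conflicting
    component is projected out before the gradients are summed."""
    rng = np.random.default_rng(seed)
    projected = [g.copy() for g in grads]
    for i, g_i in enumerate(projected):
        for j in rng.permutation(len(grads)):   # visit other tasks in random order
            if j == i:
                continue
            g_j = grads[j]
            dot = g_i @ g_j
            if dot < 0:                          # conflict: remove the component along g_j
                g_i -= dot / (g_j @ g_j) * g_j
    return np.sum(projected, axis=0)             # combined update direction

# Two deliberately conflicting task gradients.
g1 = np.array([1.0, 1.0])
g2 = np.array([-1.0, 0.5])
print(project_conflicting([g1, g2]))
```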