
    Learning Class Regularized Features for Action Recognition

    Training Deep Convolutional Neural Networks (CNNs) is based on the notion of using multiple kernels and non-linearities in their subsequent activations to extract useful features. The kernels are used as general feature extractors without specific correspondence to the target class, so the extracted features do not correspond to specific classes: subtle differences between similar classes are modeled in the same way as large differences between dissimilar classes. To overcome this class-agnostic use of kernels in CNNs, we introduce a novel method named Class Regularization that performs class-based regularization of layer activations. We demonstrate that this not only improves feature search during training, but also allows an explicit assignment of features per class at each stage of the feature extraction process. We show that using Class Regularization blocks in state-of-the-art CNN architectures for action recognition leads to systematic improvement gains of 1.8%, 1.2% and 1.4% on the Kinetics, UCF-101 and HMDB-51 datasets, respectively.
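
    The abstract does not specify the exact form of a Class Regularization block, so the following is only a rough sketch of the general idea, assuming a block that re-scales channel activations by their affinity to learned per-class embeddings; the class name ClassRegularization and all parameter choices here are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ClassRegularization(nn.Module):
    """Illustrative block: re-weights channel activations using learned
    per-class embeddings (an assumed mechanism, not the paper's exact one)."""

    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        # One learned embedding per class, living in channel space.
        self.class_embed = nn.Parameter(torch.randn(num_classes, channels) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, height, width) for a 3D action-recognition CNN
        pooled = x.mean(dim=(2, 3, 4))                                # (B, C) global descriptor
        affinity = pooled @ self.class_embed.t()                      # (B, num_classes)
        weights = torch.softmax(affinity, dim=1) @ self.class_embed   # (B, C) class-aware weights
        gate = torch.sigmoid(weights).view(x.size(0), -1, 1, 1, 1)
        return x * gate                                               # re-scaled activations


# Usage sketch: insert between stages of a spatio-temporal backbone.
feats = torch.randn(2, 256, 8, 14, 14)
block = ClassRegularization(channels=256, num_classes=400)  # e.g. Kinetics-400
print(block(feats).shape)  # torch.Size([2, 256, 8, 14, 14])
```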

    Interference Phenomenon for the Faddeevian Regularization of 2D Chiral Fermionic Determinants

    The classification of the regularization ambiguity of the 2D fermionic determinant into three different classes according to the number of second-class constraints, including the new Faddeevian regularization, is examined and extended. We find a new and important result: the Faddeevian class, with three second-class constraints, possesses a free continuous one-parameter family of elements. The criterion of unitarity restricts the parameter to the same range found earlier by Jackiw and Rajaraman for the two-constraint class. We study the restriction imposed by the interference of right-left modes of the chiral Schwinger model (χQED₂) using Stone's soldering formalism. The interference effects between right and left movers, producing the massive vectorial photon, are shown to constrain the regularization parameter to belong to the four-constraint class, which is the only non-ambiguous class with a unique regularization parameter.
    Comment: 15 pages, RevTeX. Final version to be published in Phys. Rev.

    Are Out-of-Distribution Detection Methods Effective on Large-Scale Datasets?

    Supervised classification methods often assume the train and test data distributions are the same and that all classes in the test set are present in the training set. However, deployed classifiers often require the ability to recognize inputs from outside the training set as unknowns. This problem has been studied under multiple paradigms, including out-of-distribution detection and open set recognition. For convolutional neural networks, there have been two major approaches: 1) inference methods to separate knowns from unknowns and 2) feature-space regularization strategies to improve model robustness to outlier inputs. There has been little effort to explore the relationship between the two approaches or to compare performance on anything other than small-scale datasets with at most 100 categories. Using ImageNet-1K and Places-434, we identify novel combinations of regularization and specialized inference methods that perform best across multiple outlier detection problems of increasing difficulty. We find that input perturbation and temperature scaling yield the best performance on large-scale datasets regardless of the feature-space regularization strategy. Improving the feature space by regularizing against a background class can be helpful if an appropriate background class can be found, but this is impractical for large-scale image classification datasets.
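
    The abstract names input perturbation and temperature scaling as the best-performing inference methods; the sketch below shows that general style of scoring (as popularized by ODIN), not the authors' exact setup. The model handle and the temperature and epsilon values are placeholders.

```python
import torch
import torch.nn.functional as F

def ood_score(model, x, temperature=1000.0, epsilon=0.0014):
    """Temperature-scaled softmax score after a small input perturbation
    (ODIN-style). Lower scores suggest the input is out-of-distribution."""
    x = x.detach().clone().requires_grad_(True)
    log_probs = F.log_softmax(model(x) / temperature, dim=1)
    # Perturb the input in the direction that raises the top-class probability.
    grad = torch.autograd.grad(log_probs.max(dim=1).values.sum(), x)[0]
    x_perturbed = (x + epsilon * grad.sign()).detach()
    with torch.no_grad():
        probs = F.softmax(model(x_perturbed) / temperature, dim=1)
    return probs.max(dim=1).values          # one confidence score per input


# Usage sketch: threshold the score to separate knowns from unknowns.
# model = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()
# scores = ood_score(model, batch_of_images)
# is_known = scores > chosen_threshold
```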

    Adaptive multi-class Bayesian sparse regression - An application to brain activity classification

    In this article we describe a novel method for regularized regression and apply it to the prediction of a behavioural variable from brain activation images. In the context of neuroimaging, regression and classification techniques are often plagued by the curse of dimensionality, due to the extremely high number of voxels and the limited number of activation maps. A commonly used solution is regularization of the weights in the parametric prediction function. This raises the difficult issue of introducing an adapted amount of regularization into the model; the question can be addressed in a Bayesian framework, but the model specification needs careful design to balance adaptiveness and sparsity. We therefore introduce an adaptive multi-class regularization to deal with the cluster-based structure of the data. Based on a hierarchical model and estimated in a Variational Bayes framework, our algorithm is robust to overfitting and more adaptive than other regularization methods. Results on simulated data and preliminary results on real data show the accuracy of the method in the context of brain activation images.
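
    The abstract gives no update equations, so the sketch below only illustrates the generic idea of class-wise (group-wise) adaptive regularization in a Bayesian linear model: each group of features receives its own prior precision, re-estimated by EM-style fixed-point updates. The function name fit_groupwise_bayesian_ridge, the grouping variable, and the update rules are assumptions, not the paper's variational algorithm.

```python
import numpy as np

def fit_groupwise_bayesian_ridge(X, y, groups, n_iter=50):
    """Bayesian ridge regression with one prior precision per feature group,
    estimated by EM-like fixed-point updates (a generic stand-in for
    class-adaptive regularization, not the paper's exact algorithm)."""
    n, p = X.shape
    group_ids = np.unique(groups)
    alpha = {g: 1.0 for g in group_ids}   # per-group prior precision
    beta = 1.0                            # noise precision
    for _ in range(n_iter):
        A = np.diag([alpha[g] for g in groups])
        Sigma = np.linalg.inv(beta * X.T @ X + A)        # posterior covariance
        mu = beta * Sigma @ X.T @ y                      # posterior mean
        # Re-estimate each group's precision from its posterior second moments.
        for g in group_ids:
            idx = np.where(groups == g)[0]
            second_moment = np.sum(mu[idx] ** 2) + np.trace(Sigma[np.ix_(idx, idx)])
            alpha[g] = len(idx) / (second_moment + 1e-12)
        resid = y - X @ mu
        beta = n / (resid @ resid + np.trace(X @ Sigma @ X.T) + 1e-12)
    return mu, alpha, beta


# Toy usage: 3 feature groups, only group 0 is informative.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30))
groups = np.repeat([0, 1, 2], 10)
w_true = np.concatenate([rng.standard_normal(10), np.zeros(20)])
y = X @ w_true + 0.1 * rng.standard_normal(100)
mu, alpha, beta = fit_groupwise_bayesian_ridge(X, y, groups)
print({g: round(a, 2) for g, a in alpha.items()})  # uninformative groups get large alpha
```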

    Deep Learning-Based Action Recognition

    The classification of human action or behavior patterns is important for analyzing situations in the field and for maintaining public safety. This book focuses on recent research findings on recognizing human action patterns. Technologies for recognizing human action patterns include processing human behavior data for learning, representing image feature values, extracting spatiotemporal information from images, recognizing human posture, and recognizing gestures. Research on these technologies has recently been conducted using deep learning network models, and excellent research results are included in this edition.

    Multi-TGDR, a multi-class regularization method, identifies the metabolic profiles of hepatocellular carcinoma and cirrhosis infected with hepatitis B or hepatitis C virus

    BACKGROUND: Over the last decade, metabolomics has evolved into a mainstream enterprise utilized by many laboratories globally. Like other “omics” data, metabolomics data typically have a small sample size relative to the number of features evaluated, so selecting an optimal subset of features with a supervised classifier is imperative. We extended an existing feature selection algorithm, threshold gradient descent regularization (TGDR), to handle multi-class classification of “omics” data, and proposed two such extensions referred to as multi-TGDR. Both multi-TGDR frameworks were used to analyze a metabolomics dataset that compares the metabolic profiles of hepatocellular carcinoma (HCC) infected with hepatitis B (HBV) or C virus (HCV) with those of cirrhosis induced by HBV/HCV infection; the goal was to improve early-stage diagnosis of HCC. RESULTS: We applied the two multi-TGDR frameworks to the HCC metabolomics data, determining TGDR thresholds either globally across classes or locally for each class. The multi-TGDR global model selected 45 metabolites with a 0% misclassification rate (the error rate on the training data) and a 3.82% 5-fold cross-validation (CV-5) predictive error rate. The multi-TGDR local model selected 48 metabolites with a 0% misclassification rate and a 5.34% CV-5 error rate. CONCLUSIONS: One important advantage of multi-TGDR local is that it allows inference about which features are related specifically to which class or classes. Thus, we recommend multi-TGDR local because it has similar predictive performance and requires the same computing time as multi-TGDR global, but may provide class-specific inference.
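
    The abstract describes TGDR only at a high level, so the following is a generic reconstruction of threshold gradient descent regularization for multinomial logistic regression, with a switch between global and per-class (local) thresholding that mirrors the global/local distinction above. The threshold rule and the names used here (tau, threshold_mode) are assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def multi_tgdr(X, Y, tau=0.7, lr=0.01, n_steps=500, threshold_mode="global"):
    """Threshold gradient descent regularization for multinomial logistic
    regression (generic sketch). At each step, only coefficients whose
    gradient magnitude exceeds tau times a reference maximum are updated,
    which yields sparse, class-specific feature selection.
    threshold_mode: "global" uses one reference maximum over all classes;
    "local" uses a separate maximum per class column."""
    n, p = X.shape
    k = Y.shape[1]                        # Y is one-hot, shape (n, k)
    W = np.zeros((p, k))
    for _ in range(n_steps):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)
        G = X.T @ (P - Y) / n                            # gradient, shape (p, k)
        mag = np.abs(G)
        if threshold_mode == "global":
            mask = mag >= tau * mag.max()
        else:                                            # "local": per-class threshold
            mask = mag >= tau * mag.max(axis=0, keepdims=True)
        W -= lr * G * mask                               # update only thresholded entries
    return W


# Toy usage: 3 classes, 50 features, only the first 5 features are informative.
rng = np.random.default_rng(1)
X = rng.standard_normal((150, 50))
true_W = np.zeros((50, 3))
true_W[:5] = rng.standard_normal((5, 3)) * 3
labels = np.argmax(X @ true_W + rng.standard_normal((150, 3)), axis=1)
Y = np.eye(3)[labels]
W = multi_tgdr(X, Y, threshold_mode="local")
print(np.flatnonzero(np.any(W != 0, axis=1)))  # indices of selected features
```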