274 research outputs found

    Quantifying Human Bias and Knowledge to guide ML models during Training

    Full text link
    This paper discusses a crowdsourcing-based method that we designed to quantify the importance of different attributes of a dataset in determining the outcome of a classification problem. This heuristic, provided by humans, acts as the initial weight seed for machine learning models and guides the model towards a better optimum during the gradient descent process. When dealing with data, it is not uncommon to encounter skewed datasets that overrepresent items of certain classes while underrepresenting the rest. Skewed datasets may lead to unforeseen issues with models, such as learning a biased function or overfitting. Traditional data augmentation techniques in supervised learning include oversampling and training with synthetic data. We introduce an experimental approach to dealing with such unbalanced datasets by including humans in the training process. We ask humans to rank the importance of features of the dataset and, through rank aggregation, determine the initial weight bias for the model. We show that collective human bias can allow ML models to learn insights about the true population instead of the biased sample. In this paper, we use two rank aggregation methods, Kemeny-Young and the Markov chain aggregator, to quantify human opinion on the importance of features. This work mainly tests the effectiveness of human knowledge on binary classification (popular vs. not popular) problems on two ML models: deep neural networks and support vector machines. This approach considers humans as weak learners and relies on aggregation to offset individual biases and domain unfamiliarity.
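    As a rough illustration of the rank aggregation step, the sketch below uses a simple Markov chain aggregator over individual feature rankings and scales the resulting stationary distribution into initial weights. The crowdsourcing protocol, the Kemeny-Young aggregator, and the exact weight-seeding scheme from the paper are not reproduced here; all rankings and numbers are hypothetical.
```python
import numpy as np

def markov_chain_aggregate(rankings, num_items, damping=0.15):
    """Aggregate individual rankings with a simple Markov-chain scheme:
    from feature i, jump to feature j with probability proportional to how
    often rankers place j above i; the stationary distribution scores features."""
    P = np.zeros((num_items, num_items))
    for ranking in rankings:                      # ranking: best-to-worst feature ids
        pos = {f: r for r, f in enumerate(ranking)}
        for i in range(num_items):
            for j in range(num_items):
                if i != j and pos[j] < pos[i]:    # j ranked above i
                    P[i, j] += 1
    row_sums = P.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1                   # guard against all-zero rows
    P = P / row_sums
    P = (1 - damping) * P + damping / num_items   # damping keeps the chain ergodic
    pi = np.full(num_items, 1.0 / num_items)
    for _ in range(100):                          # power iteration
        pi = pi @ P
    return pi / pi.sum()

# Hypothetical rankings of 4 features from 3 annotators (feature ids 0..3).
rankings = [[2, 0, 1, 3], [2, 1, 0, 3], [0, 2, 1, 3]]
scores = markov_chain_aggregate(rankings, num_items=4)
initial_weights = scores / scores.max()           # one possible way to seed weights
print(scores, initial_weights)
```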

    Pedestrian Detection via Classification on Riemannian Manifolds

    Get PDF
    We present a new algorithm to detect pedestrians in still images utilizing covariance matrices as object descriptors. Since the descriptors do not form a vector space, well-known machine learning techniques are not well suited to learning the classifiers. The space of d-dimensional nonsingular covariance matrices can be represented as a connected Riemannian manifold. The main contribution of the paper is a novel approach for classifying points lying on a connected Riemannian manifold using the geometry of the space. The algorithm is tested on the INRIA and DaimlerChrysler pedestrian datasets, where superior detection rates are observed over previous approaches.
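    A minimal sketch of the covariance-descriptor idea, assuming generic per-pixel features and a log-Euclidean flattening: each detection window is summarized by a covariance matrix and mapped to a fixed-length vector via the matrix logarithm. The paper's actual classifier works directly with the manifold geometry, which this single-tangent-space simplification does not reproduce.
```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(window_features):
    """Covariance of per-pixel feature vectors (a d x d SPD matrix).
    window_features: array of shape (num_pixels, d)."""
    d = window_features.shape[1]
    return np.cov(window_features, rowvar=False) + 1e-6 * np.eye(d)

def log_map(cov):
    """Map an SPD matrix to the tangent space at the identity and
    vectorize its upper triangle (a log-Euclidean simplification)."""
    L = logm(cov)
    iu = np.triu_indices_from(L)
    return L[iu].real

# Hypothetical per-pixel features (e.g. gradients) for one 64x128 detection window.
window = np.random.rand(64 * 128, 8)                # 8 illustrative features per pixel
x = log_map(covariance_descriptor(window))          # fixed-length vector for a classifier
print(x.shape)                                      # (36,)
```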

    Learning Graphical Models of Multivariate Functional Data with Applications to Neuroimaging

    Get PDF
    This dissertation investigates functional graphical models that infer functional connectivity from neuroimaging data, which are noisy, high dimensional, and available only in limited samples. The dissertation provides two recipes to infer the functional graphical model: 1) a fully Bayesian framework and 2) an end-to-end deep model. We first propose a fully Bayesian regularization scheme to estimate functional graphical models. We consider a direct Bayesian analog of the functional graphical lasso proposed by Qiao et al. (2019). We then propose a regularization strategy via the graphical horseshoe. We compare both Bayesian approaches to the frequentist functional graphical lasso, and compare the Bayesian functional graphical lasso to the functional graphical horseshoe. We apply the proposed methods to electroencephalography (EEG) data and diffusion tensor imaging (DTI) data. We find that the Bayesian methods tend to outperform the standard functional graphical lasso, and that the functional graphical horseshoe, a procedure for which there is no direct frequentist analog, performs best overall. We then consider a deep neural network architecture to estimate functional graphical models by combining two simple off-the-shelf algorithms: adaptive functional principal components analysis (FPCA) (Yao et al., 2021a) and the convolutional graph estimator (Belilovsky et al., 2016). We train the proposed model with synthetic data that emulate real-world observations and prior knowledge. Based on this synthetic data generation process, our model converts an inference problem into a supervised learning problem. Compared with other frameworks, the proposed deep model offers a general, data-driven recipe for inferring the functional graphical model: it takes the raw functional dataset as input and avoids deriving sophisticated closed-form estimators. Through simulation studies, we find that our deep functional graph model trained on synthetic data generalizes well and marginally outperforms other popular baselines. In addition, we apply the deep functional graphical model to real-world EEG data, where it discovers meaningful brain connectivity. Finally, we are interested in estimating causal graphs with functional input. To process functional covariates in causal estimation, we leverage a strategy similar to that of our deep functional graphical model. We extend popular deep causal models to infer causal effects with functional confounders within the potential outcomes framework. Our method is simple yet effective, and we validate the proposed architecture in a variety of simulation settings. Our work offers an alternative way to perform causal inference with functional data.
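    As a rough, hypothetical sketch of the generic pipeline (per-node FPCA scores followed by a sparse precision estimate), the code below uses ordinary PCA on densely observed curves and an entrywise-thresholded graphical lasso. The dissertation's adaptive FPCA, functional graphical lasso penalty, Bayesian estimators, and deep model are not reproduced here.
```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.covariance import GraphicalLasso

def fpca_scores(curves, n_components=3):
    """Approximate FPCA by PCA on densely observed curves.
    curves: (n_subjects, n_timepoints) for one node/region."""
    return PCA(n_components=n_components).fit_transform(curves)

def functional_graph(node_curves, n_components=3, alpha=0.1):
    """Estimate a functional graph: stack per-node FPCA scores, fit a sparse
    precision matrix, and connect nodes whose cross-blocks are non-zero."""
    scores = [fpca_scores(c, n_components) for c in node_curves]   # list of (n, k)
    X = np.hstack(scores)
    prec = GraphicalLasso(alpha=alpha).fit(X).precision_
    p, k = len(node_curves), n_components
    adj = np.zeros((p, p), dtype=bool)
    for i in range(p):
        for j in range(p):
            block = prec[i * k:(i + 1) * k, j * k:(j + 1) * k]
            adj[i, j] = i != j and np.any(np.abs(block) > 1e-8)
    return adj

# Hypothetical data: 5 brain regions, 100 subjects, 50 time points each.
rng = np.random.default_rng(0)
nodes = [rng.standard_normal((100, 50)) for _ in range(5)]
print(functional_graph(nodes))
```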

    A Support Vector Machine based approach for plagiarism detection in Python code submissions in undergraduate settings

    Get PDF
    Mechanisms for plagiarism detection play a crucial role in maintaining academic integrity, acting both to penalize wrongdoing and to serve as a preemptive deterrent against bad behavior. This manuscript proposes a customized plagiarism detection algorithm tailored to detect source code plagiarism in the Python programming language. Our approach combines textual and syntactic techniques, employing a support vector machine (SVM) to effectively combine various indicators of similarity and calculate the resulting similarity scores. The algorithm was trained and tested using a sample of code submissions for 4 coding problems each from 45 volunteers; 15 of these were original submissions while the other 30 were plagiarized samples. The submissions for two of the questions were used for training and the other two for testing, using the leave-p-out cross-validation strategy to avoid overfitting. We compare the performance of the proposed method with two widely used tools, MOSS and JPlag, and find that the proposed method yields a small but significant improvement in accuracy compared to JPlag, while significantly outperforming MOSS in flagging plagiarized samples.
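    A minimal, hypothetical sketch of the final combination step: an SVM trained on per-pair similarity indicators. The specific features (token overlap, AST similarity, identifier similarity) and the numbers are illustrative stand-ins, not the indicators or data used in the manuscript.
```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row describes a pair of submissions via several similarity indicators
# (illustrative: token overlap, AST similarity, identifier similarity).
X_train = np.array([
    [0.92, 0.88, 0.75],   # plagiarized pair
    [0.30, 0.25, 0.40],   # original pair
    [0.85, 0.90, 0.80],   # plagiarized pair
    [0.20, 0.35, 0.15],   # original pair
])
y_train = np.array([1, 0, 1, 0])   # 1 = plagiarized, 0 = original

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)

X_test = np.array([[0.78, 0.81, 0.70]])
print(clf.predict(X_test), clf.predict_proba(X_test))
```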

    Generalized vs Specialized activity recognition system for newborn resuscitation videos using Deep Neural Networks.

    Get PDF
    Birth asphyxia, a newborn's inability to establish breathing at birth, is a global problem that has resulted in a high mortality rate among newborns worldwide. A notable development is the combination of medical technology with information technology in an attempt to tackle this global health problem. An example is the Safer Births project, which focuses on technological advances to curb newborn deaths. The Safer Births project started in 2013 and has to date gathered a large amount of data captured during resuscitation sessions. The Haydom data used for the Safer Births project, together with additional data from Nepal and SUS, is used to compare a specialized and a generalized model trained with the I3D activity recognition system on the RGB stream only, excluding optical flow and focusing only on the newborn region in order to simplify the existing model. The experiment investigates whether a system can generalize or specialize when combining data from different hospitals on specific activities of interest, namely ventilation, suction, and stimulation. A new simplified pipeline, which reduces the previous work done by the Safer Births group, showed very poor performance when generalized.
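    Purely as a hypothetical stand-in for the RGB-only, newborn-region pipeline, the sketch below wires a tiny 3D CNN to the three activity classes; the actual system builds on a pretrained I3D backbone, which is not shown here.
```python
import torch
import torch.nn as nn

class TinyRGBActivityNet(nn.Module):
    """Minimal 3D-CNN stand-in for an I3D-style RGB-only activity classifier."""
    def __init__(self, num_classes=3):        # Ventilation, Suction, Stimulation
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clips):                  # clips: (batch, 3, frames, H, W)
        x = self.features(clips).flatten(1)
        return self.classifier(x)

# Hypothetical cropped newborn-region clip: 16 frames of 112x112 RGB.
clip = torch.randn(1, 3, 16, 112, 112)
logits = TinyRGBActivityNet()(clip)
print(logits.shape)                            # torch.Size([1, 3])
```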

    Theoretical Behavior of XAI Methods in the Presence of Suppressor Variables

    Full text link
    In recent years, the community of 'explainable artificial intelligence' (XAI) has created a vast body of methods to bridge a perceived gap between model 'complexity' and 'interpretability'. However, a concrete problem to be solved by XAI methods has not yet been formally stated. As a result, XAI methods are lacking theoretical and empirical evidence for the 'correctness' of their explanations, limiting their potential use for quality-control and transparency purposes. At the same time, Haufe et al. (2014) showed, using simple toy examples, that even standard interpretations of linear models can be highly misleading. Specifically, high importance may be attributed to so-called suppressor variables lacking any statistical relation to the prediction target. This behavior has been confirmed empirically for a large array of XAI methods in Wilming et al. (2022). Here, we go one step further by deriving analytical expressions for the behavior of a variety of popular XAI methods on a simple two-dimensional binary classification problem involving Gaussian class-conditional distributions. We show that the majority of the studied approaches will attribute non-zero importance to a non-class-related suppressor feature in the presence of correlated noise. This poses important limitations on the interpretations and conclusions that the outputs of these XAI methods can afford.
    Comment: Accepted at ICML 202
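    The suppressor effect described above is easy to reproduce numerically. The toy sketch below (not the paper's analytical derivation) builds a two-feature problem in which the second feature is pure shared noise with no statistical relation to the class, yet a linear classifier assigns it a clearly non-zero weight because it helps cancel the noise in the first feature.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, n)                 # binary class labels
signal = np.where(y == 1, 1.0, -1.0)
distractor = rng.standard_normal(n)       # shared noise, unrelated to y

x1 = signal + distractor                  # class signal corrupted by the noise
x2 = distractor                           # suppressor: no statistical relation to y
X = np.column_stack([x1, x2])

w = LogisticRegression().fit(X, y).coef_.ravel()
print(w)                                  # both weights are clearly non-zero
print(np.corrcoef(x2, y)[0, 1])           # near zero: x2 alone carries no class information
```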

    Efficient Quantization-aware Training with Adaptive Coreset Selection

    Full text link
    The expanding model size and computation of deep neural networks (DNNs) have increased the demand for efficient model deployment methods. Quantization-aware training (QAT) is a representative model compression method that leverages redundancy in weights and activations. However, most existing QAT methods require end-to-end training on the entire dataset, which suffers from long training time and high energy costs. Coreset selection, which aims to improve data efficiency by exploiting the redundancy of training data, has also been widely used for efficient training. In this work, we propose a new angle: using coreset selection to improve the training efficiency of quantization-aware training. Based on the characteristics of QAT, we propose two metrics, the error vector score and the disagreement score, to quantify the importance of each sample during training. Guided by these two importance metrics, we propose a quantization-aware adaptive coreset selection (ACS) method to select the data for the current training epoch. We evaluate our method on various networks (ResNet-18, MobileNetV2), datasets (CIFAR-100, ImageNet-1K), and under different quantization settings. Compared with previous coreset selection methods, our method significantly improves QAT performance with different dataset fractions. Our method can achieve an accuracy of 68.39% for 4-bit quantized ResNet-18 on the ImageNet-1K dataset with only a 10% subset, an absolute gain of 4.24% over the baseline.
    Comment: Code: https://github.com/HuangOwen/QAT-AC
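    As a hedged illustration only, the snippet below shows one plausible reading of the two sample-importance metrics: the error vector score as the distance between the quantized model's softmax output and the one-hot label, and the disagreement score as the KL divergence between quantized and full-precision predictions. The paper's exact definitions and selection schedule may differ; see the linked repository for the authors' implementation.
```python
import torch
import torch.nn.functional as F

def error_vector_score(logits_q, labels):
    """Assumed form: L2 norm of (softmax prediction - one-hot label) for the
    quantized model; larger means the sample is harder."""
    probs = F.softmax(logits_q, dim=1)
    one_hot = F.one_hot(labels, num_classes=probs.shape[1]).float()
    return (probs - one_hot).norm(dim=1)

def disagreement_score(logits_q, logits_fp):
    """Assumed form: KL divergence between quantized and full-precision
    predictions; larger means quantization changes this sample's output more."""
    log_p_q = F.log_softmax(logits_q, dim=1)
    p_fp = F.softmax(logits_fp, dim=1)
    return F.kl_div(log_p_q, p_fp, reduction="none").sum(dim=1)

def select_coreset(logits_q, logits_fp, labels, fraction=0.1):
    """Keep the highest-scoring fraction of samples for the current epoch."""
    score = error_vector_score(logits_q, labels) + disagreement_score(logits_q, logits_fp)
    k = max(1, int(fraction * len(score)))
    return torch.topk(score, k).indices

# Hypothetical usage with random logits: batch of 32 samples, 10 classes.
lq, lfp = torch.randn(32, 10), torch.randn(32, 10)
labels = torch.randint(0, 10, (32,))
print(select_coreset(lq, lfp, labels, fraction=0.25))
```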

    Calibrating Knowledge Graphs

    Get PDF
    A knowledge graph model represents a given knowledge graph as a number of vectors. These models are evaluated on several tasks, one of which is link prediction: predicting whether new edges are plausible when the model is provided with a partial edge. Calibration is a postprocessing technique that aims to align the predictions of a model with respect to a ground truth. The idea is to make a model more reliable by reducing its confidence in incorrect predictions (overconfidence) and increasing its confidence in correct predictions that are closer to the negative threshold (underconfidence). Calibration of knowledge graph models has previously been studied for the task of triple classification, which is different from link prediction, and under the closed-world assumption, that is, assuming that knowledge missing from the graph at hand is incorrect. However, knowledge graphs operate under the open-world assumption, in which it is unknown whether missing knowledge is correct or incorrect. In this thesis, we propose open-world calibration of knowledge graph models for link prediction. We rely on strategies to synthetically generate negatives that are expected to have different levels of semantic plausibility. Calibration then consists of aligning the predictions of the model with these semantic levels: nonsensical negatives should be farther away from a positive than semantically plausible negatives. We analyze several scenarios in which calibration based on the sigmoid function can lead to incorrect results for distance-based models. We also propose the Jensen-Shannon distance to measure the divergence of the predictions before and after calibration. Our experiments use several pre-trained models of nine algorithms over seven datasets. Our results show that many of these pre-trained models are properly calibrated without intervention under the closed-world assumption, but this is not the case under the open-world assumption. Furthermore, Brier scores (the mean squared errors before and after calibration) are generally lower under the closed-world assumption, and the divergence is higher when using open-world calibration. From these results, we gather that open-world calibration is a harder task than closed-world calibration. Finally, analyzing different measurements related to link prediction accuracy, we propose a combined loss function for calibration that maintains the accuracy of the model.
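    A minimal sketch of the sigmoid (Platt) calibration step on hypothetical link-prediction scores, with synthetic negatives of two assumed plausibility levels and the Jensen-Shannon distance between the prediction distributions before and after calibration. The thesis's negative-generation strategies, embedding models, and datasets are not reproduced here.
```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical raw link-prediction scores: positives plus synthetic negatives
# of two assumed plausibility levels (semantically plausible vs. nonsensical).
pos = rng.normal(2.0, 1.0, 300)
plausible_neg = rng.normal(0.5, 1.0, 300)
nonsense_neg = rng.normal(-2.0, 1.0, 300)

scores = np.concatenate([pos, plausible_neg, nonsense_neg]).reshape(-1, 1)
labels = np.concatenate([np.ones(300), np.zeros(600)])

# Sigmoid (Platt) calibration: fit a logistic map from raw score to probability.
platt = LogisticRegression().fit(scores, labels)
calibrated = platt.predict_proba(scores)[:, 1]
uncalibrated = 1.0 / (1.0 + np.exp(-scores.ravel()))

# Jensen-Shannon distance between prediction distributions before/after calibration.
bins = np.linspace(0, 1, 21)
p, _ = np.histogram(uncalibrated, bins=bins, density=True)
q, _ = np.histogram(calibrated, bins=bins, density=True)
print(jensenshannon(p, q))
```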

    Prediction of new outlinks for focused Web crawling

    Get PDF
    Discovering new hyperlinks enables Web crawlers to find new pages that have not yet been indexed. This is especially important for focused crawlers because they strive to provide a comprehensive analysis of specific parts of the Web, thus prioritizing discovery of new pages over discovery of changes in content. In the literature, changes in hyperlinks and content have usually been considered simultaneously. However, there is also evidence suggesting that these two types of changes are not necessarily related. Moreover, many studies about predicting changes assume that a long history of a page is available, which is unattainable in practice. The aim of this work is to provide a methodology for detecting new links effectively using a short history. To this end, we use a dataset of ten crawls at intervals of one week. Our study consists of three parts. First, we obtain insight into the data by analyzing empirical properties of the number of new outlinks. We observe that these properties are, on average, stable over time, but there is a large difference between the emergence of hyperlinks towards pages within and outside the domain of a target page (internal and external outlinks, respectively). Next, we provide statistical models for three targets: the link change rate, the presence of new links, and the number of new links. These models include the features used earlier in the literature, as well as new features introduced in this work. We analyze the correlation between the features and investigate their informativeness. A notable finding is that, if the history of the target page is not available, then our new features, which represent the history of related pages, are the most predictive of new links on the target page. Finally, we propose ranking methods as guidelines for focused crawlers to efficiently discover new pages, achieving excellent performance with respect to the corresponding targets.
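    To make the internal/external distinction concrete, the short sketch below counts new outlinks between two hypothetical weekly crawls of a target page and splits them by domain; the paper's statistical models and ranking methods are not reproduced here.
```python
from urllib.parse import urlparse

def new_outlinks(prev_links, curr_links, page_url):
    """Outlinks present in the current crawl but not the previous one,
    split into internal (same domain as the target page) and external."""
    domain = urlparse(page_url).netloc
    new = set(curr_links) - set(prev_links)
    internal = {u for u in new if urlparse(u).netloc == domain}
    return internal, new - internal

# Hypothetical two weekly crawls of one target page.
week1 = ["https://example.org/a", "https://other.net/x"]
week2 = ["https://example.org/a", "https://example.org/b", "https://news.com/y"]
internal, external = new_outlinks(week1, week2, "https://example.org/")
print(len(internal), len(external))   # 1 1
```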

    Classical perspectives on the Newton--Wigner position observable

    Get PDF
    This paper deals with the Newton--Wigner position observable for Poincaré invariant classical systems. We prove an existence and uniqueness theorem for elementary systems that parallels the well-known Newton--Wigner theorem in the quantum context. We also discuss and justify the geometric interpretation of the Newton--Wigner position as 'centre of spin', already proposed by Fleming in 1965, again in the quantum context.