
    Online Bivariate Outlier Detection in Final Test Using Kernel Density Estimation

    In parametric IC testing, outlier detection is applied to filter out potentially unreliable devices. Most outlier detection methods operate in an offline setting and are therefore not applicable to Final Test, where immediate pass/fail decisions are required. We therefore developed a new bivariate online outlier detection method that is applicable to Final Test and makes no assumptions about a specific form of relation between two test parameters. An acceptance region is constructed using kernel density estimation, and a grid discretization enables a fast outlier decision. After each accepted device the grid is updated, so the method can adapt to shifting measurements.
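    The mechanism described above can be sketched in a few lines: maintain a density estimate on a fixed grid, accept a device when the estimated density at its location exceeds a threshold, and fold accepted devices back into the grid. All names, the bootstrap rule, and the nearest-node lookup are illustrative assumptions, not the authors' implementation.

    ```python
    import math

    class OnlineKdeGrid:
        """Minimal sketch of a bivariate online acceptance region built
        with kernel density estimation on a fixed grid."""

        def __init__(self, x_range, y_range, step, bandwidth, threshold):
            self.xs = [x_range[0] + i * step
                       for i in range(int((x_range[1] - x_range[0]) / step) + 1)]
            self.ys = [y_range[0] + j * step
                       for j in range(int((y_range[1] - y_range[0]) / step) + 1)]
            self.h = bandwidth
            self.tau = threshold
            self.n = 0
            # density mass accumulated at every grid node
            self.dens = [[0.0 for _ in self.ys] for _ in self.xs]

        def _kernel(self, dx, dy):
            # bivariate Gaussian kernel
            return math.exp(-(dx * dx + dy * dy) / (2 * self.h * self.h)) \
                / (2 * math.pi * self.h * self.h)

        def _density_at(self, x, y):
            # nearest-grid-node lookup gives an O(grid) pass/fail decision
            i = min(range(len(self.xs)), key=lambda k: abs(self.xs[k] - x))
            j = min(range(len(self.ys)), key=lambda k: abs(self.ys[k] - y))
            return self.dens[i][j] / max(self.n, 1)

        def test_device(self, x, y):
            """Pass/fail decision; the grid is updated only for accepted
            devices, which lets the region track shifting measurements."""
            if self.n < 10 or self._density_at(x, y) >= self.tau:
                for i, gx in enumerate(self.xs):
                    for j, gy in enumerate(self.ys):
                        self.dens[i][j] += self._kernel(gx - x, gy - y)
                self.n += 1
                return True
            return False
    ```

    The first few devices are accepted unconditionally here to bootstrap the estimate; a real deployment would choose the bandwidth, grid resolution, and threshold from production data.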

    A Local Density-Based Approach for Local Outlier Detection

    This paper presents a simple but effective density-based outlier detection approach using local kernel density estimation (KDE). A Relative Density-based Outlier Score (RDOS) is introduced to measure the local outlierness of objects; the density distribution at the location of an object is estimated with a local KDE method based on the extended nearest neighbors of the object. Instead of using only the k nearest neighbors, we further consider reverse nearest neighbors and shared nearest neighbors of an object for density distribution estimation. Some theoretical properties of the proposed RDOS, including its expected value and false alarm probability, are derived. A comprehensive experimental study on both synthetic and real-life data sets demonstrates that our approach is more effective than state-of-the-art outlier detection methods. (Comment: 22 pages, 14 figures, submitted to Pattern Recognition Letters)
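    The score described above can be sketched as follows: estimate an object's density with a local Gaussian KDE over its extended neighbor set, then take the ratio of the average neighbor density to the object's own density. This is a simplified reading of the idea (shared nearest neighbors are omitted for brevity, and all names and defaults are illustrative), not the paper's exact formulation.

    ```python
    import math

    def gaussian_kde_density(point, neighbors, h=1.0):
        """Local KDE estimate of the 2-D density at `point` from a
        neighbor set, using a Gaussian kernel with bandwidth h."""
        if not neighbors:
            return 0.0
        total = 0.0
        for q in neighbors:
            d2 = (point[0] - q[0]) ** 2 + (point[1] - q[1]) ** 2
            total += math.exp(-d2 / (2 * h * h)) / (2 * math.pi * h * h)
        return total / len(neighbors)

    def knn(idx, pts, k):
        """Indices of the k nearest neighbors of pts[idx]."""
        order = sorted((i for i in range(len(pts)) if i != idx),
                       key=lambda i: (pts[i][0] - pts[idx][0]) ** 2
                                     + (pts[i][1] - pts[idx][1]) ** 2)
        return order[:k]

    def rdos(idx, pts, k=3, h=1.0):
        """Relative density-based outlier score: average density of the
        extended neighborhood (kNN plus reverse kNN here) divided by the
        object's own locally estimated density."""
        nn = set(knn(idx, pts, k))
        rnn = {i for i in range(len(pts)) if i != idx and idx in knn(i, pts, k)}
        ext = nn | rnn
        own = gaussian_kde_density(pts[idx], [pts[i] for i in ext], h)
        neigh = [gaussian_kde_density(pts[i], [pts[j] for j in knn(i, pts, k)], h)
                 for i in ext]
        return (sum(neigh) / len(neigh)) / own if own > 0 else float("inf")
    ```

    Scores near 1 indicate inliers (similar density to the neighborhood), while scores far above 1 flag outliers.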

    Online action recognition based on skeleton motion distribution


    Towards Scalable and Unified Example-based Explanation and Outlier Detection

    When neural networks are employed for high-stakes decision making, it is desirable for them to explain their predictions so that we can understand which features contributed to each decision. At the same time, it is important to flag potential outliers for in-depth verification by domain experts. In this work we propose to unify these two differing aspects: explainability and outlier detection. We argue for a broader adoption of prototype-based student networks that can provide an example-based explanation for their predictions and, at the same time, identify regions of similarity between the predicted sample and the examples. The examples are real prototypical cases sampled from the training set via our novel iterative prototype replacement algorithm. Furthermore, we propose to use the prototype similarity scores to identify outliers. We compare the classification performance, explanation quality, and outlier detection of our proposed network with other baselines, and show that our prototype-based networks, going beyond similarity kernels, deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
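    The core idea of using prototype similarity for both explanation and outlier flagging can be sketched as below: the most similar training prototype serves as the example-based explanation, and a low best-similarity score flags the sample for expert review. The embeddings, similarity measure, and threshold are all illustrative assumptions, not the paper's architecture.

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two embedding vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    def explain_and_flag(embedding, prototypes, threshold=0.5):
        """Return the most similar prototype as an example-based
        explanation, and flag the sample as a potential outlier when
        even the best similarity falls below the threshold."""
        sims = [(cosine(embedding, p), i) for i, p in enumerate(prototypes)]
        best_sim, best_idx = max(sims)
        return {"prototype": best_idx,
                "similarity": best_sim,
                "is_outlier": best_sim < threshold}
    ```

    A sample close to some prototype is explained by that prototype; a sample far from every prototype is routed to a domain expert instead of being silently classified.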

    A Dense Network Model for Outlier Prediction Using Learning Approaches

    There are various sub-categories of outlier prediction, and investigators have paid less attention to related domains such as outliers in audio, video, and music recognition. This research, however, is specific to medical data analysis: it concentrates on predicting outliers in a medical database. Feature mapping and representation are achieved with a stacked LSTM-based CNN, and the extracted features are fed to a linear Support Vector Machine (SVM) for classification. The analysis shows a strong correlation among features related to an individual's emotions, which can be analyzed in both a static and a dynamic manner; the two learning approaches are adopted together so that each compensates for the drawbacks of the other. The statistical analysis is performed in the MATLAB 2016a environment, where metrics such as ROC, MCC, AUC, correlation coefficient, and prediction accuracy are evaluated and compared with existing approaches such as a standard CNN, a standard SVM, logistic regression, and multi-layer perceptrons. The proposed learning model shows superior outcomes, and particular attention is given to selecting an emotion recognition dataset connected with all the sub-domains.
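    The final stage of the pipeline described above, deep features handed to a linear SVM, reduces at inference time to a sign test on a weighted sum. The sketch below assumes the features are already extracted (a stand-in for the stacked LSTM-CNN output); the weights and bias are illustrative, not trained values from the paper.

    ```python
    def linear_svm_predict(features, weights, bias):
        """Linear SVM decision rule: label a feature vector by the sign
        of the decision function w . x + b.  `features` stands in for
        the output of the upstream deep feature extractor."""
        score = sum(w * x for w, x in zip(weights, features)) + bias
        return 1 if score >= 0 else -1
    ```

    Training the weights would normally be done with a hinge-loss solver; only the prediction step is shown here.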

    Unilaterally Aggregated Contrastive Learning with Hierarchical Augmentation for Anomaly Detection

    Anomaly detection (AD), which aims to find samples that deviate from the training distribution, is essential in safety-critical applications. Though recent self-supervised learning based attempts achieve promising results by creating virtual outliers, their training objectives are less faithful to AD, which requires a concentrated inlier distribution as well as a dispersive outlier distribution. In this paper, we propose Unilaterally Aggregated Contrastive Learning with Hierarchical Augmentation (UniCon-HA), which takes both requirements into account. Specifically, we explicitly encourage the concentration of inliers and the dispersion of virtual outliers via supervised and unsupervised contrastive losses, respectively. Considering that the standard contrastive data augmentation used to generate positive views may itself induce outliers, we additionally introduce a soft mechanism that re-weights each augmented inlier according to its deviation from the inlier distribution, ensuring a purified concentration. Moreover, to promote a higher concentration, we adopt an easy-to-hard hierarchical augmentation strategy inspired by curriculum learning and perform contrastive aggregation at different depths of the network according to the strength of the data augmentation. Our method is evaluated under three AD settings, unlabeled one-class, unlabeled multi-class, and labeled multi-class, demonstrating its consistent superiority over other competitors. (Comment: Accepted by ICCV'2023)
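    The re-weighting mechanism described above can be sketched as a weighted InfoNCE-style loss: each augmented positive view is down-weighted by its deviation from the inlier centre before the per-positive losses are aggregated. The weighting function, temperature, and loss form below are illustrative assumptions, not the paper's exact objective.

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two embedding vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u))
                      * math.sqrt(sum(b * b for b in v)))

    def reweighted_contrastive_loss(anchor, positives, negatives, centre, temp=0.5):
        """InfoNCE-style contrastive loss in which each augmented
        positive is soft-weighted by its alignment with the inlier
        centre, so views pushed away by strong augmentation contribute
        less to the aggregation."""
        neg_exp = sum(math.exp(cosine(anchor, n) / temp) for n in negatives)
        loss, wsum = 0.0, 0.0
        for p in positives:
            # weight in (0, 1]; equals 1 when p is perfectly aligned
            # with the inlier centre, shrinking as it deviates
            w = math.exp(cosine(p, centre) - 1.0)
            pos_exp = math.exp(cosine(anchor, p) / temp)
            loss += -w * math.log(pos_exp / (pos_exp + neg_exp))
            wsum += w
        return loss / wsum
    ```

    Positives that stay close to the anchor yield a small loss; a positive pushed far from the inlier distribution both incurs a larger loss term and is down-weighted, which is the purifying effect the abstract describes.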