9 research outputs found

    Deep Context-Aware Novelty Detection

    A common assumption of novelty detection is that the distributions of both "normal" and "novel" data are static. This, however, is often not the case: for example, in scenarios where data evolves over time, or where the definitions of normal and novel depend on contextual information, both of which lead to changes in these distributions. This can cause significant difficulties when training a model on datasets where the distribution of normal data in one scenario resembles that of novel data in another. In this paper we propose a context-aware approach to novelty detection for deep autoencoders to address these difficulties. We create a semi-supervised network architecture that uses auxiliary labels to reveal contextual information, allowing the model to adapt to a variety of contexts in which the definitions of normal and novel change. We evaluate our approach on both image data and real-world audio data displaying these characteristics, and show that the performance of individually trained models can be achieved in a single model.
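    The core idea can be illustrated with a toy stand-in: condition the novelty score on a context label, so that a value which is normal in one context is flagged as novel in another. Here per-context Gaussian z-scores replace the paper's deep autoencoder reconstruction error, and all names (e.g. `ContextAwareNoveltyDetector`) are hypothetical.

    ```python
    import numpy as np

    class ContextAwareNoveltyDetector:
        """Toy context-conditioned novelty scorer: one set of statistics per
        context label, mimicking how an auxiliary label lets a single model
        adapt its notion of "normal"."""
        def __init__(self):
            self.stats = {}  # context label -> (mean, std)

        def fit(self, x, contexts):
            contexts = np.asarray(contexts)
            for c in set(contexts):
                vals = x[contexts == c]
                self.stats[c] = (vals.mean(), vals.std() + 1e-8)

        def score(self, value, context):
            mean, std = self.stats[context]
            return abs(value - mean) / std  # z-score as novelty score

    rng = np.random.default_rng(0)
    # "normal" data is centred at 0 in context A but at 5 in context B
    x = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 500)])
    ctx = ["A"] * 500 + ["B"] * 500

    det = ContextAwareNoveltyDetector()
    det.fit(x, ctx)
    # the same value 5.0 is novel in context A but normal in context B
    print(det.score(5.0, "A") > 3.0, det.score(5.0, "B") < 1.0)
    ```

    A single context-blind model trained on the pooled data would assign 5.0 a low novelty score everywhere, which is exactly the failure mode the paper targets.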

    What's left can't be right -- The remaining positional incompetence of contrastive vision-language models

    Contrastive vision-language models like CLIP have been found to lack spatial understanding capabilities. In this paper we discuss the possible causes of this phenomenon by analysing both datasets and embedding space. Focusing on simple left-right positional relations, we show that this behaviour is entirely predictable, even with large-scale datasets; we demonstrate that these relations can be taught using synthetic data; and we show that this approach generalises well to natural images, improving performance on left-right relations on Visual Genome Relations.
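    One way synthetic supervision of this kind is often set up is to pair each correct caption with a hard negative where only the relation is swapped. The sketch below is a hypothetical generator of such caption pairs, not code from the paper:

    ```python
    # Generate synthetic left-right caption pairs: each sample couples a
    # correct caption with a hard negative in which the relation is flipped,
    # the kind of contrastive signal that can teach positional relations.
    OBJECTS = ["cat", "dog", "cup", "chair"]

    def make_pairs(objects):
        pairs = []
        for a in objects:
            for b in objects:
                if a == b:
                    continue
                caption = f"a {a} to the left of a {b}"
                negative = f"a {a} to the right of a {b}"
                pairs.append((caption, negative))
        return pairs

    pairs = make_pairs(OBJECTS)
    print(len(pairs))  # 4 objects -> 4 * 3 = 12 ordered pairs
    ```

    In practice each caption would be paired with a rendered synthetic image, and the swapped caption serves as the negative in the contrastive loss.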

    Towards the extraction of robust sign embeddings for low resource sign language recognition

    Isolated Sign Language Recognition (SLR) has mostly been applied to datasets containing signs executed slowly and clearly by a limited group of signers. In real-world scenarios, however, we are met with challenging visual conditions, coarticulated signing, small datasets, and the need for signer-independent models. To tackle this difficult problem, we require a robust feature extractor to process the sign language videos. One could expect human pose estimators to be ideal candidates. However, due to a domain mismatch with their training sets and challenging poses in sign language, they lack robustness on sign language data, and image-based models often still outperform keypoint-based models. Furthermore, whereas the common practice of transfer learning with image-based models yields even higher accuracy, keypoint-based models are typically trained from scratch on every SLR dataset. These factors limit their usefulness for SLR. From the existing literature, it is also not clear which, if any, pose estimator performs best for SLR. We compare the three most popular pose estimators for SLR: OpenPose, MMPose and MediaPipe. We show that through keypoint normalization, missing keypoint imputation, and learning a pose embedding, we can obtain significantly better results and enable transfer learning. We show that keypoint-based embeddings contain cross-lingual features: they can transfer between sign languages and achieve competitive performance even when fine-tuning only the classifier layer of an SLR model on a target sign language. We furthermore achieve better performance using fine-tuned transferred embeddings than models trained only on the target sign language. The embeddings can also be learned in a multilingual fashion. The application of these embeddings could prove particularly useful for low-resource sign languages in the future.
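    The two preprocessing steps named above, keypoint normalization and missing keypoint imputation, can be sketched in a few lines. This is a minimal illustration of the general technique under assumed conventions (shoulder-centred normalization, forward-fill imputation), not the paper's exact pipeline:

    ```python
    import numpy as np

    def normalize_keypoints(frames, left_shoulder=0, right_shoulder=1):
        """Centre each frame's 2-D keypoints on the shoulder midpoint, scale
        by shoulder distance, and impute missing (NaN) keypoints by carrying
        the last observed value forward over time."""
        frames = np.array(frames, dtype=float)  # shape (T, K, 2)
        # forward-fill missing keypoints frame by frame
        for t in range(1, len(frames)):
            mask = np.isnan(frames[t])
            frames[t][mask] = frames[t - 1][mask]
        mid = (frames[:, left_shoulder] + frames[:, right_shoulder]) / 2
        scale = np.linalg.norm(
            frames[:, left_shoulder] - frames[:, right_shoulder], axis=-1
        )
        return (frames - mid[:, None]) / scale[:, None, None]

    # two frames, three keypoints; the wrist keypoint is missing in frame 2
    seq = [[[0, 0], [2, 0], [1, 2]],
           [[0, 0], [2, 0], [np.nan, np.nan]]]
    out = normalize_keypoints(seq)
    print(out[0, 2])  # wrist relative to shoulder midpoint: [0. 1.]
    ```

    Normalizing away position and body size in this way is what makes keypoint embeddings comparable across signers and datasets, the property the transfer-learning results depend on.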

    Anomaly Detection in Raw Audio Using Deep Autoregressive Networks

    The 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, Brighton, United Kingdom, 12-17 May 2019.
    Anomaly detection involves the recognition of patterns outside of what is considered normal, given a certain set of input data. This presents a unique set of challenges for machine learning, particularly if we assume a semi-supervised scenario in which anomalous patterns are unavailable at training time, meaning algorithms must rely on non-anomalous data alone. Anomaly detection in time series adds an additional level of complexity given the contextual nature of anomalies. For time series modelling, autoregressive deep learning architectures such as WaveNet have proven to be powerful generative models, specifically in the field of speech synthesis. In this paper, we propose to extend the use of this type of architecture to anomaly detection in raw audio. In experiments using multiple audio datasets we compare the performance of this approach to a baseline autoencoder model and show superior performance in almost all cases.
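    The underlying recipe, fit an autoregressive model on normal data only, then score new audio by how badly the model predicts it, can be shown with a self-contained toy: a least-squares AR(2) model stands in here for WaveNet, and all function names are illustrative.

    ```python
    import numpy as np

    def fit_ar(signal, order=2):
        """Fit autoregressive coefficients by least squares on normal data."""
        X = np.stack(
            [signal[i:len(signal) - order + i] for i in range(order)], axis=1
        )
        y = signal[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    def anomaly_score(signal, coeffs):
        """Mean one-step prediction error: high error = anomalous."""
        order = len(coeffs)
        X = np.stack(
            [signal[i:len(signal) - order + i] for i in range(order)], axis=1
        )
        return np.mean((signal[order:] - X @ coeffs) ** 2)

    t = np.linspace(0, 20 * np.pi, 2000)
    normal = np.sin(t)                                   # "normal" audio
    rng = np.random.default_rng(0)
    anomalous = np.sin(t) + rng.normal(0, 0.5, t.shape)  # corrupted audio

    coeffs = fit_ar(normal)  # trained on non-anomalous data alone
    print(anomaly_score(normal, coeffs) < anomaly_score(anomalous, coeffs))
    ```

    The semi-supervised constraint from the abstract is respected: only normal audio is used for fitting, and anomalies surface purely as elevated prediction error at test time.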

    Overlapping community finding with noisy pairwise constraints

    In many real applications of semi-supervised learning, the guidance provided by a human oracle might be “noisy” or inaccurate. Human annotators will often be imperfect, in the sense that they can make subjective decisions, they might only have partial knowledge of the task at hand, or they may simply complete a labeling task incorrectly due to the burden of annotation. Similarly, in the context of semi-supervised community finding in complex networks, information encoded as pairwise constraints may be unreliable or conflicting due to the human element in the annotation process. This study aims to address the challenge of handling noisy pairwise constraints in overlapping semi-supervised community detection, by framing the task as an outlier detection problem. We propose a general architecture which includes a process to “clean” or filter noisy constraints. Furthermore, we introduce multiple designs for the cleaning process which use different types of outlier detection models, including autoencoders. A comprehensive evaluation is conducted for each proposed methodology, which demonstrates the potential of the proposed architecture for reducing the impact of noisy supervision in the context of overlapping community detection.
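    Framing constraint cleaning as outlier detection can be sketched as follows. This is a minimal stand-in, not the paper's pipeline: constraints are represented as feature vectors and flagged by the reconstruction error of a linear autoencoder, implemented here with PCA via SVD.

    ```python
    import numpy as np

    def reconstruction_errors(X, n_components=1):
        """Reconstruction error per row under a linear bottleneck; the top
        principal components play the role of the autoencoder's hidden layer."""
        mean = X.mean(axis=0)
        Xc = X - mean
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        W = vt[:n_components]
        recon = Xc @ W.T @ W + mean
        return np.linalg.norm(X - recon, axis=1)

    rng = np.random.default_rng(1)
    # 50 "clean" constraint feature vectors lying near one direction,
    # plus one injected noisy constraint that breaks the pattern
    clean = np.outer(rng.normal(size=50), [1.0, 2.0]) + rng.normal(0, 0.05, (50, 2))
    noisy = np.array([[3.0, -4.0]])
    X = np.vstack([clean, noisy])

    errors = reconstruction_errors(X)
    print(errors.argmax())  # index 50: the injected noisy constraint
    ```

    Constraints with the largest reconstruction error would be filtered out before the community finding algorithm consumes the remaining, presumably cleaner, supervision.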

    Personalized, Health-Aware Recipe Recommendation: An Ensemble Topic Modeling Based Approach

    The 4th International Workshop on Health Recommender Systems (HealthRecSys 2019), Copenhagen, Denmark, 20 September 2019.
    Food choices are personal and complex and have a significant impact on our long-term health and quality of life. By helping users to make informed and satisfying decisions, Recommender Systems (RS) have the potential to support users in making healthier food choices. Intelligent user-modeling is a key challenge in achieving this potential. This paper investigates Ensemble Topic Modelling (EnsTM) based Feature Identification techniques for efficient user-modeling and recipe recommendation. It builds on findings in EnsTM to propose a reduced data representation format and a smart user-modeling strategy that makes capturing user preferences fast, efficient and interactive. This approach enables personalization, even in a cold-start scenario. We compared three EnsTM based variations through a user study with 48 participants, using a large-scale, real-world corpus of 230,876 recipes, and compared against a conventional Content Based (CB) approach. EnsTM based recommenders performed significantly better than the CB approach. Besides acknowledging multi-domain contents such as taste, demographics and cost, our proposed approach also considers users’ nutritional preferences and assists them in finding recipes under diverse nutritional categories. Furthermore, it provides excellent coverage and enables implicit understanding of users’ food practices. Subsequent analysis also exposed a correlation between certain features and a healthier lifestyle.
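    Once users and recipes live in the same topic space, recommendation reduces to vector similarity. The sketch below is a hypothetical illustration of that final step only; the topic vectors themselves (hand-written here) would come from the ensemble topic models.

    ```python
    import numpy as np

    def cosine(a, b):
        """Cosine similarity between a user profile and a recipe topic vector."""
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # toy topic space: [vegetarian, dessert, grill]
    recipes = {
        "lentil soup":    np.array([0.8, 0.1, 0.1]),
        "chocolate cake": np.array([0.0, 0.9, 0.1]),
        "bbq ribs":       np.array([0.1, 0.1, 0.8]),
    }
    # user profile aggregated from the user's stated topic preferences
    user_profile = np.array([0.7, 0.2, 0.1])

    ranked = sorted(
        recipes, key=lambda r: cosine(user_profile, recipes[r]), reverse=True
    )
    print(ranked[0])  # "lentil soup"
    ```

    Because the profile is just a topic vector, it can be seeded from a handful of interactive preference questions, which is what makes cold-start personalization feasible.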

    Handling Noisy Constraints in Semi-supervised Overlapping Community Finding

    The 8th International Conference on Complex Networks and their Applications (Complex Networks 2019), Lisbon, Portugal, 10-12 December 2019.
    Community structure is an essential property that helps us to understand the nature of complex networks. Since algorithms for detecting communities are unsupervised in nature, they can fail to uncover useful groupings, particularly when the underlying communities in a network are highly overlapping [1]. Recent work has sought to address this via semi-supervised learning [2], using a human annotator or “oracle” to provide limited supervision. This knowledge is typically encoded in the form of must-link and cannot-link constraints, which indicate that a pair of nodes should always be or should never be assigned to the same community. In this way, we can uncover communities which are otherwise difficult to identify via unsupervised techniques. However, in real semi-supervised learning applications, human supervision may be unreliable or “noisy”, relying on subjective decision making [3]. Annotators can disagree with one another, they might only have limited knowledge of a domain, or they might simply complete a labeling task incorrectly due to the burden of annotation. Thus, we might reasonably expect that the pairwise constraints used in a real semi-supervised community detection task could be imperfect or conflicting. The aim of this study is to explore the effect of noisy, incorrectly-labeled constraints on the performance of semi-supervised community finding algorithms for overlapping networks. Furthermore, we propose an approach to mitigate such cases in real-world network analysis tasks. We treat noisy pairwise constraints as anomalies, and use an autoencoder, a commonly-used method in the domain of anomaly detection, to identify such constraints. Initial experiments on synthetic networks demonstrate the usefulness of this approach.

    Robust 3D U-Net Segmentation of Macular Holes

    Macular holes are a common eye condition that results in visual impairment. We look at the application of deep convolutional neural networks to the problem of macular hole segmentation. We use the 3D U-Net architecture as a basis and experiment with a number of design variants. Manually annotating and measuring macular holes is time-consuming and error-prone, taking dozens of minutes to annotate a single 3D scan. Previous automated approaches to macular hole segmentation take minutes to segment a single 3D scan. We found that, in less than one second, deep learning models generate significantly more accurate segmentations than previous automated approaches (Jaccard index boost of 0.08–0.09) and expert agreement (Jaccard index boost of 0.13–0.20). We also demonstrate that an approach of architectural simplification, greatly reducing network capacity and depth, results in a model which is competitive with state-of-the-art models such as residual 3D U-Nets.
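    The Jaccard index quoted in those gains has a standard definition for binary 3-D segmentation masks (this is the textbook metric, not code from the paper):

    ```python
    import numpy as np

    def jaccard(pred, target):
        """Jaccard index (intersection over union) of two binary 3-D masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        union = np.logical_or(pred, target).sum()
        return intersection / union if union else 1.0

    a = np.zeros((4, 4, 4), dtype=bool)
    b = np.zeros((4, 4, 4), dtype=bool)
    a[1:3, 1:3, 1:3] = True  # 8-voxel predicted region
    b[1:3, 1:3, 1:4] = True  # 12-voxel ground-truth region
    print(jaccard(a, b))     # 8 / 12 ≈ 0.667
    ```

    On this scale a boost of 0.08–0.20 is substantial: it is the same metric on which inter-expert agreement itself was measured.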

    The battle of the buzzwords: A comparative review of the circular economy and the sharing economy concepts
