8 research outputs found

    Informed pair selection for self-paced metric learning in Siamese neural networks.

    Siamese Neural Networks (SNNs) are deep metric learners that use paired instance comparisons to learn similarity. The neural feature maps learnt in this way provide useful representations for classification tasks. Learning in SNNs is not reliant on explicit class knowledge; instead they require knowledge about the relationship between pairs. Though often ignored, we have found that appropriate pair selection is crucial to maximising training efficiency, particularly in scenarios where examples are limited. In this paper, we study the role of informed pair selection and propose a two-phase strategy of exploration and exploitation. Random sampling provides the coverage needed for exploration, while areas of uncertainty modelled by neighbourhood properties of the pairs drive exploitation. We adopt curriculum learning to organise the ordering of pairs at training time, using similarity knowledge as a heuristic for pair sorting. The results of our experimental evaluation show that these strategies are key to optimising training.
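The exploration/exploitation split described above can be sketched as follows. This is an illustrative assumption, not the paper's exact procedure: the 1-D examples, the 50/50 phase split, and the "closest cross-class pair" proxy for uncertainty are all hypothetical simplifications.

```python
import random

def select_pairs(examples, labels, n_pairs, explore_frac=0.5, seed=0):
    """Two-phase pair selection sketch.

    Exploration: uniformly random pairs, for coverage.
    Exploitation: prefer pairs of nearby examples with differing labels,
    a simple stand-in for 'areas of uncertainty' in the neighbourhood.
    """
    rng = random.Random(seed)
    idx = list(range(len(examples)))
    n_explore = int(n_pairs * explore_frac)
    # Phase 1: random pairs give broad coverage of the instance space.
    pairs = [tuple(rng.sample(idx, 2)) for _ in range(n_explore)]
    # Phase 2: rank cross-class candidate pairs by closeness and keep
    # the closest ones, where the similarity decision is hardest.
    candidates = [(i, j) for i in idx for j in idx
                  if i < j and labels[i] != labels[j]]
    candidates.sort(key=lambda p: abs(examples[p[0]] - examples[p[1]]))
    pairs.extend(candidates[: n_pairs - n_explore])
    return pairs
```

A curriculum could then order these pairs from easy (far apart) to hard (close together) before training.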

    GramError: a quality metric for machine generated songs.

    This paper explores whether a simple grammar-based metric can accurately predict human opinion of the quality of machine-generated song lyrics. The proposed metric considers the percentage of words written in natural English and the number of grammatical errors to rate the quality of machine-generated lyrics. We use a state-of-the-art Recurrent Neural Network (RNN) model and adapt it to lyric generation by re-training on the lyrics of 5,000 songs. For our initial user trial, we use a small sample of songs generated by the RNN to calibrate the metric. Songs selected on the basis of this metric are further evaluated using "Turing-like" tests to establish whether there is a correlation between metric score and human judgment. Our results show that there is strong correlation with human opinion, especially at lower levels of song quality. They also show that 75% of the RNN-generated lyrics passed for human-generated over 30% of the time.
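A metric of this shape, rewarding the proportion of natural-English words and penalising grammatical errors, can be sketched as below. The linear combination is a hypothetical simplification; the abstract does not give the paper's exact formula.

```python
def gram_error_score(words_total, words_english, grammar_errors):
    """Illustrative GramError-style quality score in [0, 1]:
    fraction of recognisably English words, reduced by the rate of
    grammatical errors per word. Weighting is assumed, not the paper's."""
    if words_total == 0:
        return 0.0
    english_ratio = words_english / words_total
    error_penalty = grammar_errors / words_total
    return max(0.0, english_ratio - error_penalty)
```

Lyrics scoring above a calibrated threshold would then be passed on to the human "Turing-like" evaluation.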

    A holistic metric approach to solving the dynamic location-allocation problem.

    In this paper, we introduce a dynamic variant of the Location-Allocation problem: the Dynamic Location-Allocation Problem (DULAP). DULAP involves locating facilities to service a set of customer demands over a defined horizon. To evaluate a solution to DULAP, we propose two holistic metric approaches: a static approach and a dynamic approach. In the static approach, a solution is evaluated under the assumption that customer locations and demand remain constant over the defined horizon. In the dynamic approach, customer demand and demographic patterns may change over the defined horizon. We introduce a stochastic model to simulate customer population and distribution over time. We use a Genetic Algorithm and a Population-Based Incremental Learning algorithm from previous work to find robust and satisfactory solutions to DULAP. Results show that the dynamic approach to evaluating a solution finds good and robust solutions.
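The contrast between the static and dynamic evaluation approaches can be sketched as follows. The 1-D geometry and nearest-facility cost are illustrative assumptions, not the paper's model; the dynamic variant simply averages cost over simulated demand scenarios.

```python
def assignment_cost(facilities, customers, demands):
    """Demand-weighted cost of serving each customer from its nearest
    facility (positions as 1-D coordinates, for illustration)."""
    return sum(d * min(abs(c - f) for f in facilities)
               for c, d in zip(customers, demands))

def static_eval(facilities, customers, demands):
    # Static approach: a single fixed demand snapshot over the horizon.
    return assignment_cost(facilities, customers, demands)

def dynamic_eval(facilities, customers, demand_scenarios):
    # Dynamic approach: average cost over stochastically simulated
    # demand patterns, rewarding solutions that stay robust over time.
    return sum(assignment_cost(facilities, customers, d)
               for d in demand_scenarios) / len(demand_scenarios)
```

A Genetic Algorithm or PBIL would use one of these as its fitness function when searching over candidate facility placements.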

    Risk information recommendation for engineering workers.

    Within any sufficiently expertise-reliant and work-driven domain there is a requirement to understand the similarities between specific work tasks. Though mechanisms to develop similarity models for these areas do exist, in practice they have been criticised within various domains by experts who feel that the output is not indicative of their viewpoint. In field service provision for telecommunication organisations, it can be particularly challenging to understand task similarity from the perspective of an expert engineer. With that in mind, this paper demonstrates a similarity model built from text recorded by engineers themselves, yielding a metric directly indicative of expert opinion. We evaluate several methods of learning text representations on a classification task developed from engineers' notes. Furthermore, we introduce a means to exploit the complex and multi-faceted nature of the notes to recommend additional information to support engineers in the field.
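A minimal baseline for note-to-note task similarity of the kind evaluated above is bag-of-words cosine similarity; this is a sketch of the simplest representation only, as the paper compares several learned text representations that are not reproduced here.

```python
from collections import Counter
from math import sqrt

def note_similarity(note_a, note_b):
    """Cosine similarity between bag-of-words vectors of two task
    notes. A deliberately simple baseline, not the paper's model."""
    a = Counter(note_a.lower().split())
    b = Counter(note_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * \
           sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0
```

Richer representations (e.g. learned embeddings) would replace the word counts while keeping the same similarity interface.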

    Integrating Transformations in Probabilistic Circuits

    This study addresses a predictive limitation of probabilistic circuits and introduces transformations as a remedy to overcome it. We demonstrate this limitation in robotic scenarios. We argue that independent component analysis is a sound tool for preserving the independence properties of probabilistic circuits. Our approach is an extension of joint probability trees, which are model-free deterministic circuits. We demonstrate that the proposed approach achieves higher likelihoods while using fewer parameters than joint probability trees on seven benchmark data sets as well as on real robot data. Furthermore, we discuss how to integrate transformations into tree-based learning routines. Finally, we argue that exact inference with transformed quantile-parameterized distributions is not tractable. However, our approach allows for efficient sampling and approximate inference.

    Context extraction for aspect-based sentiment analytics: combining syntactic, lexical and sentiment knowledge.

    Aspect-level sentiment analysis of customer feedback data, when done accurately, can be leveraged to understand the strong and weak performance points of businesses and services, and to formulate critical action steps to improve performance. In this work we focus on aspect-level sentiment classification, studying the role of opinion context extraction for a given aspect and the extent to which traditional and neural sentiment classifiers benefit when trained using the opinion context text. We introduce a novel method that effectively combines lexical, syntactic and sentiment knowledge to extract opinion context for aspects. Thereafter we validate the quality of the extracted opinion contexts against human judgments using the BLEU score. Further, we evaluate the usefulness of the opinion contexts for aspect-sentiment analysis. Our experiments on benchmark data sets from SemEval and a real-world dataset from the insurance domain suggest that extracting the right opinion context, combining syntactic with sentiment co-occurrence knowledge, leads to the best aspect-sentiment classification performance. From a commercial point of view, accurate aspect extraction provides an elegant means to identify 'pain points' in a business. Integrating our work into a commercial CX platform (https://www.sentisum.com/) is enabling the company's clients to better understand their customer opinions.
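The BLEU-based validation step above compares an extracted opinion context against a human-annotated reference. A minimal sketch using modified unigram precision (BLEU-1 style) is given below; the paper's evaluation may use full n-gram BLEU with a brevity penalty, which is omitted here.

```python
from collections import Counter

def unigram_bleu(reference, candidate):
    """BLEU-1-style modified unigram precision: fraction of candidate
    words that also occur in the reference, with each reference word
    usable at most as often as it appears there (clipping)."""
    ref_counts = Counter(reference.lower().split())
    cand = candidate.lower().split()
    if not cand:
        return 0.0
    clipped = Counter()
    for w in cand:
        if clipped[w] < ref_counts[w]:
            clipped[w] += 1
    return sum(clipped.values()) / len(cand)
```

A high score indicates the extracted context closely overlaps the span a human judged relevant to the aspect.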

    Hierarchical Bias-Driven Stratification for Interpretable Causal Effect Estimation

    Interpretability and transparency are essential for incorporating causal effect models from observational data into policy decision-making. They can provide trust in the model in the absence of ground-truth labels for evaluating the accuracy of such models. To date, attempts at transparent causal effect estimation consist of applying post hoc explanation methods to black-box models, which are not interpretable. Here, we present BICauseTree: an interpretable balancing method that identifies clusters where natural experiments occur locally. Our approach builds on decision trees with a customized objective function to improve balancing and reduce treatment allocation bias. Consequently, it can additionally detect subgroups presenting positivity violations, exclude them, and provide a covariate-based definition of the target population we can infer from and generalize to. We evaluate the method's performance using synthetic and realistic datasets, explore its bias-interpretability trade-off, and show that it is comparable with existing approaches.

    Useful measures of complexity: a model for assessing degree of complexity in engineered systems and engineering projects.

    Many modern systems are very complex, a reality which can affect the safety and reliability of their operations. Systems engineers need new ways to measure problem complexity. This research lays the groundwork for measuring the complexity of systems engineering (SE) projects by proposing a project complexity measurement model (PCMM) and associated methods to measure complexity. To develop the PCMM, we analyze four major types of complexity (structural, temporal, organizational, and technological) and define a set of complexity metrics. Through a survey of engineering projects, we also develop project profiles for three types of software projects typically used in the U.S. Navy to provide empirical evidence for the PCMM. The results of our work on these projects show that as a project increases in complexity, it becomes more difficult and expensive to meet all requirements and schedules because of changing interactions and dynamics among the project participants and stakeholders. The three projects reveal that project complexity can be reduced by setting a priority and a baseline in requirements and project scope, concentrating on the expected deliverable, strengthening familiarity with the systems engineering process, eliminating redundant processes, and clarifying organizational roles and decision-making processes to best serve the project teams while also streamlining business processes and information systems.
    Civilian, Department of the Navy. Approved for public release. Distribution is unlimited.
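An aggregate over the four named complexity dimensions can be sketched as a weighted average; the equal weights and common rating scale below are hypothetical, as the abstract does not specify how the PCMM combines its metrics.

```python
def pcmm_score(structural, temporal, organizational, technological,
               weights=(0.25, 0.25, 0.25, 0.25)):
    """Illustrative project complexity aggregate: a weighted average of
    the four complexity dimensions named in the abstract, each rated on
    a common numeric scale. Weights are assumed, not the paper's."""
    dims = (structural, temporal, organizational, technological)
    return sum(w * d for w, d in zip(weights, dims))
```

Re-weighting the dimensions would let an organisation emphasise, say, organizational over technological complexity when profiling its projects.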