
    Uplift Modeling with Multiple Treatments and General Response Types

    Randomized experiments have been used to assist decision-making in many areas. They help people select the optimal treatment for the test population with certain statistical guarantees. However, subjects can show significant heterogeneity in response to treatments. The problem of customizing treatment assignment based on subject characteristics is known in the literature as uplift modeling, differential response analysis, or personalized treatment learning. A key feature of uplift modeling is that the data is unlabeled: it is impossible to know whether the chosen treatment is optimal for an individual subject because the response under alternative treatments is unobserved. This presents a challenge to both the training and the evaluation of uplift models. In this paper we describe how to obtain an unbiased estimate of the key performance metric of an uplift model, the expected response. We present a new uplift algorithm which creates a forest of randomized trees. The trees are built with a splitting criterion designed to directly optimize their uplift performance based on the proposed evaluation method. Both the evaluation method and the algorithm apply to an arbitrary number of treatments and general response types. Experimental results on synthetic data and industry-provided data show that our algorithm leads to significant performance improvements over other applicable methods.
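
    The abstract's unbiased estimate of expected response is not spelled out here, but for randomized data such an estimate is commonly obtained by inverse-propensity weighting. The following is a minimal Python sketch under that assumption; the function names and toy data are hypothetical, not the paper's code.

```python
import numpy as np

def expected_response(y, t, propensities, policy):
    """Inverse-propensity estimate of the expected response achieved by
    an uplift policy, computed from randomized-experiment data.

    y            -- observed responses, shape (n,)
    t            -- treatment actually assigned in the experiment, shape (n,)
    propensities -- probability each subject received its treatment, shape (n,)
    policy       -- treatment the uplift model would assign, shape (n,)
    """
    # A subject contributes only when the model's recommendation matches the
    # treatment actually received; dividing by the assignment probability
    # keeps the estimate unbiased, even with many treatments.
    match = (policy == t).astype(float)
    return float(np.mean(match * y / propensities))

# Toy example: two treatments assigned uniformly at random.
rng = np.random.default_rng(0)
n = 1000
t = rng.integers(0, 2, n)
y = rng.normal(loc=t, scale=1.0)      # treatment 1 is better on average
policy = np.ones(n, dtype=int)        # always recommend treatment 1
print(expected_response(y, t, np.full(n, 0.5), policy))  # approximately 1.0
```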

    QuestionBank: creating a corpus of parse-annotated questions

    This paper describes the development of QuestionBank, a corpus of 4000 parse-annotated questions for (i) use in training parsers employed in QA, and (ii) evaluation of question parsing. We present a series of experiments to investigate the effectiveness of QuestionBank as both an exclusive and a supplementary training resource for a state-of-the-art parser in parsing both question and non-question test sets. We introduce a new method for recovering empty nodes and their antecedents (capturing long-distance dependencies) from parser output in CFG trees using LFG f-structure reentrancies. Our main findings are (i) using QuestionBank training data improves parser performance to 89.75% labelled bracketing f-score, an increase of almost 11% over the baseline; (ii) back-testing experiments on non-question data (Penn-II WSJ Section 23) show that the retrained parser does not suffer a performance drop on non-question material; (iii) ablation experiments show that the size of the training material provided by QuestionBank is sufficient to achieve optimal results; (iv) our method for recovering empty nodes captures long-distance dependencies in questions from the ATIS corpus with high precision (96.82%) and low recall (39.38%). In summary, QuestionBank provides a useful new resource in parser-based QA research.
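
    The labelled bracketing f-score reported above can be sketched concretely. Below is a simplified Python version that treats each parse as a set of (label, start, end) constituent spans; evalb-style scorers match multisets and apply extra normalisations, so this is an illustration only, and the example parse is invented.

```python
def bracket_fscore(gold, predicted):
    """Labelled bracketing f-score between two parses, each given as a
    set of (label, start, end) constituent spans."""
    matched = len(gold & predicted)
    precision = matched / len(predicted) if predicted else 0.0
    recall = matched / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# "What do bats eat?" -- a gold parse vs. a parse missing one NP span.
gold = {("SBARQ", 0, 5), ("WHNP", 0, 1), ("SQ", 1, 4), ("NP", 2, 3), ("VP", 3, 4)}
pred = {("SBARQ", 0, 5), ("WHNP", 0, 1), ("SQ", 1, 4), ("VP", 3, 4)}
print(round(bracket_fscore(gold, pred), 4))  # 0.8889
```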

    Evolutionary data selection for enhancing models of intraday forex time series

    The hypothesis in this paper is that a significant amount of intraday market data is either noise or redundant, and that if it is eliminated, then predictive models built using the remaining intraday data will be more accurate. To test this hypothesis, we use an evolutionary method (called Evolutionary Data Selection, EDS) to selectively remove portions of the training data that are made available to an intraday market predictor. After performing experiments in which data-selected and non-data-selected versions of the same predictive models are compared, it is shown that EDS is effective and does indeed boost predictor accuracy. It is also shown that building multiple models using EDS and placing them into an ensemble further increases performance. The datasets for evaluation are large intraday forex time series, specifically series from the EUR/USD, the USD/JPY and the EUR/JPY markets, and predictive models for two primary tasks per market are built: intraday return prediction and intraday volatility prediction.
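
    The abstract does not detail the EDS operators, so the following Python sketch only illustrates the general idea under assumed choices (truncation selection, one-point crossover, bit-flip mutation): evolve a bit-mask over segments of the training series, where fitness would be the validation accuracy of a predictor trained on the kept segments. The `fitness` callback and the toy example are placeholders.

```python
import random

def evolve_selection(n_segments, fitness, generations=50, pop_size=20,
                     mutation_rate=0.02, seed=0):
    """Evolve a bit-mask over training-data segments; bit i = 1 keeps
    segment i. `fitness(mask)` should train a predictor on the selected
    data and return its validation score."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_segments)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_segments)              # one-point crossover
            child = [bit ^ (rng.random() < mutation_rate)   # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in fitness: pretend odd-numbered segments are pure noise.
best = evolve_selection(16, lambda m: sum(b if i % 2 == 0 else -b
                                          for i, b in enumerate(m)))
print(best)  # tends toward 1s at even indices, 0s at odd ones
```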

    Matching pursuit-based compressive sensing in a wearable biomedical accelerometer fall diagnosis device

    There is a significant high-fall-risk population whose members are susceptible to frequent falls and serious injury, and for whom quick medical response and detailed fall information are critical to providing efficient aid. This article presents an evaluation of compressive sensing techniques in an accelerometer-based intelligent fall detection system modelled on a wearable Shimmer biomedical embedded computing device with Matlab. The presented fall detection system utilises a database of fall and activities-of-daily-living signals evaluated with discrete wavelet transforms and principal component analysis to obtain binary tree classifiers for fall evaluation. Fourteen test subjects undertook various fall and activities-of-daily-living experiments with a Shimmer device to generate data for the principal component analysis-based fall classifiers and to evaluate the proposed fall analysis system. The presented system obtains highly accurate fall detection results, demonstrating significant advantages over the thresholding method presented. Additionally, the presented approach offers advantageous fall diagnostic information. Furthermore, data transmission accounts for over 80% of the battery current usage of the Shimmer device, so it is critical that the acceleration data be reduced to increase transmission efficiency and, in turn, improve battery performance. Various matching pursuit-based compressive sensing techniques have been utilised to significantly reduce the acceleration information required for transmission.
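
    The exact matching pursuit variants used are not listed in the abstract, but orthogonal matching pursuit is a representative choice. The Python sketch below reconstructs a toy sparse vector (standing in for wavelet-domain acceleration coefficients) from a reduced number of random measurements; the matrix sizes and sparsity level are made up for illustration.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover a `sparsity`-sparse x from
    compressive measurements y = A @ x, where A has unit-norm columns."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        # Greedily pick the atom most correlated with the residual...
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # ...then re-fit all selected atoms jointly by least squares.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

# Toy demo: compress a 3-sparse, 128-dimensional signal into 40 samples.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 128))
A /= np.linalg.norm(A, axis=0)                  # unit-norm columns
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [1.5, -2.0, 0.7]
x_rec = omp(A, A @ x_true, sparsity=3)
print(np.allclose(x_rec, x_true, atol=1e-6))    # True with high probability
```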

    Automatable Evaluation Method Oriented toward Behaviour Believability for Video Games

    Classic evaluation methods for believable agents are time-consuming because they involve many humans judging the agents. They are well suited to validating work on new models of believable behaviour. However, during implementation, numerous experiments can help to improve agents' believability. We propose a method which aims at assessing how much an agent's behaviour looks like human behaviour. By representing behaviours as vectors, we can store data computed for humans and then evaluate as many agents as needed without further need for human judges. We present a test experiment which shows that even a simple evaluation following our method can reveal differences between quite believable agents and humans. This method seems promising, although, as shown in our experiment, analysis of the results can be difficult.
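
    A minimal sketch of the vector-based comparison, assuming each behaviour is encoded as a fixed-length feature vector (the features shown are hypothetical): an agent is scored by its distance to the closest stored human vector, so once the human data is collected, any number of agents can be evaluated without further human judges.

```python
import numpy as np

def believability_score(agent_vec, human_vecs):
    """Distance from an agent's behaviour vector to the nearest stored
    human behaviour vector; lower means more human-like."""
    dists = np.linalg.norm(np.asarray(human_vecs) - agent_vec, axis=1)
    return float(dists.min())

# Hypothetical features: mean speed, actions per minute, path straightness.
humans = [[1.2, 35.0, 0.61], [1.0, 42.0, 0.55], [1.4, 38.0, 0.70]]
print(believability_score(np.array([1.1, 37.0, 0.60]), humans))  # small: human-like
print(believability_score(np.array([3.0, 90.0, 0.99]), humans))  # large: not
```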