
    Methods and Tools for the Microsimulation and Forecasting of Household Expenditure

    This paper reviews potential methods and tools for the microsimulation and forecasting of household expenditure. It begins with a discussion of a range of approaches to forecasting household populations via agent-based modelling tools, then evaluates approaches to modelling household expenditure. A prototype implementation is described, and the paper concludes with an outline of an approach to be pursued in future work.

    Model Transfer for Tagging Low-resource Languages using a Bilingual Dictionary

    Cross-lingual model transfer is a compelling and popular method for predicting annotations in a low-resource language, whereby parallel corpora provide a bridge to a high-resource language and its associated annotated corpora. However, parallel data is not readily available for many languages, limiting the applicability of these approaches. We address these drawbacks in our framework, which takes advantage of cross-lingual word embeddings trained solely on a high-coverage bilingual dictionary. We propose a novel neural network model for joint training from both sources of data based on cross-lingual word embeddings, and show substantial empirical improvements over baseline techniques. We also propose several active learning heuristics, which result in improvements over competitive benchmark methods.
    Comment: 5 pages plus 2 pages of references. Accepted to appear in ACL 201
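    The abstract's core ingredient is a cross-lingual embedding space induced from a bilingual dictionary alone, without parallel corpora. The paper's joint neural model is not reproduced here, but the dictionary-alignment idea can be illustrated with a generic technique: orthogonal Procrustes, which fits a rotation mapping source-language vectors onto their dictionary translations. Everything below (dimensions, synthetic vectors) is an invented toy setup, not the authors' data or method.

    ```python
    import numpy as np

    # Toy sketch: align two monolingual embedding spaces using only
    # a small "bilingual dictionary" of word pairs (synthetic here).
    rng = np.random.default_rng(0)
    d = 4  # embedding dimension (arbitrary for illustration)

    # Source-language vectors for 5 dictionary entries.
    X = rng.normal(size=(5, d))

    # Pretend the target space is a hidden rotation of the source space;
    # Y holds the target-language vectors of the same dictionary entries.
    true_W = np.linalg.qr(rng.normal(size=(d, d)))[0]  # random orthogonal map
    Y = X @ true_W

    # Orthogonal Procrustes: minimise ||XW - Y|| over orthogonal W.
    # Closed-form solution via the SVD of X^T Y.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    W = U @ Vt

    # In this noiseless toy case the learned map aligns the spaces exactly,
    # so mapped source vectors match their dictionary translations.
    print(np.allclose(X @ W, Y, atol=1e-6))
    ```

    After such an alignment, annotations learned on high-resource-language embeddings can be applied to mapped low-resource-language vectors, which is the bridge the abstract describes; the paper itself trains embeddings and tagger jointly rather than in this two-step fashion.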
