
    Solving nonlinear rational expectations models by eigenvalue-eigenvector decompositions

    We provide a summarized presentation of solution methods for rational expectations models, based on eigenvalue/eigenvector decompositions. These methods solve systems of stochastic linear difference equations by relying on stability conditions derived from the eigenvectors associated with the unstable eigenvalues of the coefficient matrices in the system. For nonlinear models, a linear approximation must be obtained, and the stability conditions are approximate. This is, however, the only source of approximation error, since the nonlinear structure of the original model is used to produce the numerical solution. After applying the method to a baseline stochastic growth model, we explain how it can be used: i) to solve some identification problems that may arise in standard growth models, and ii) to solve endogenous growth models.
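The core mechanism the abstract describes can be illustrated on a tiny linear system. The sketch below is not the authors' implementation; the 2x2 coefficient matrix is hypothetical. It shows the standard idea: decompose the transition matrix, and use the left eigenvector of the unstable eigenvalue as a stability condition that pins the jump variable down as a function of the predetermined variable.

```python
import numpy as np

# Illustrative system x_{t+1} = A x_t with one predetermined variable k_t
# and one jump variable c_t (coefficients are hypothetical).
A = np.array([[1.2, -0.2],
              [0.3,  0.5]])

eigvals, V = np.linalg.eig(A)
left = np.linalg.inv(V)  # rows are left eigenvectors of A

# Stability condition: the component of x_t along each unstable eigenvalue
# (|lambda| > 1) must be zero, i.e. p . x_t = 0 for the corresponding
# left eigenvector p.
unstable = np.abs(eigvals) > 1.0
p = left[unstable][0].real  # exactly one unstable root in this example

# p[0]*k_t + p[1]*c_t = 0  =>  c_t = -(p[0]/p[1]) * k_t
slope = -p[0] / p[1]  # slope of the (linearized) policy function

# Check: a path starting on the stable manifold stays bounded under A.
x = np.array([1.0, slope])
for _ in range(50):
    x = A @ x
```

For this matrix the eigenvalues are 1.1 and 0.6, and the stability condition yields c_t = 3 k_t; iterating forward from that line contracts toward the origin instead of exploding.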

    Domain-Adversarial Training of Neural Networks

    We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for a descriptor learning task in the context of a person re-identification application.
    Comment: Published in JMLR: http://jmlr.org/papers/v17/15-239.htm
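The gradient reversal layer mentioned in the abstract has a very small contract: it is the identity in the forward pass, and it multiplies incoming gradients by a negative constant in the backward pass, so the feature extractor is pushed to *increase* the domain classifier's loss. A minimal numpy sketch of that contract (not the authors' code; the class name and `lam` parameter are illustrative):

```python
import numpy as np

class GradientReversal:
    """Identity on the forward pass; flips and scales gradients on the
    backward pass, so layers below it ascend the domain-classifier loss."""

    def __init__(self, lam=1.0):
        self.lam = lam  # adaptation strength (lambda in the paper's notation)

    def forward(self, x):
        # Features pass through unchanged.
        return x

    def backward(self, grad_output):
        # Gradients from the domain classifier come back negated and scaled.
        return -self.lam * grad_output
```

In a real framework this is implemented as a custom autograd op placed between the feature extractor and the domain classifier, which is why the whole architecture still trains with ordinary backpropagation and SGD.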

    A graph-based factor screening method for synchronous data flow simulation models

    This thesis develops a method for identifying important input factors in large system dynamics models from an analysis based on those models' underlying structures. The identification of important input factors is commonly called factor screening and is a key step in the analysis of simulation models with many input parameters. Models under investigation are system dynamics models implemented as synchronous data flow programs, a model of computation that requires encoding the model components' dependencies in a graph format. The developed method views this graph as a stochastic process and attempts to rank the importance of inputs, or source nodes, with respect to an output, or non-source node. This ranking is accomplished primarily through the use of weighted random walks through the graph. A comparison is made against other factor screening techniques, including fractional factorial experiments. The presented structure-based method is found to be comparably accurate to statistical factor screening experiments at order-of-magnitude ranking. The run time of the developed method, compared against a resolution III fractional factorial design, is found to be similar for small models and significantly faster for large models.
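The ranking idea in the abstract can be sketched as follows. This is an illustration of weighted random walks on a dependency graph, not the thesis's actual algorithm: starting from the output node, repeatedly step to a predecessor chosen with probability proportional to edge weight until a source node is reached, and rank sources by how often the walks terminate at them. The graph, weights, and node names below are hypothetical.

```python
import random
from collections import Counter

# Hypothetical dependency graph: for each non-source node, the list of
# (predecessor, weight) pairs it depends on. Sources have no entry.
preds = {
    "y": [("a", 3.0), ("b", 1.0)],
    "a": [("x1", 3.0), ("x2", 1.0)],
    "b": [("x3", 1.0)],
}

def rank_sources(preds, output, n_walks=10_000, seed=0):
    """Rank source nodes by how often weighted backward walks from
    `output` terminate at them."""
    rng = random.Random(seed)
    hits = Counter()
    for _ in range(n_walks):
        node = output
        while node in preds:  # non-source: step to a weighted predecessor
            nbrs, weights = zip(*preds[node])
            node = rng.choices(nbrs, weights=weights)[0]
        hits[node] += 1  # reached a source (input factor)
    return [source for source, _ in hits.most_common()]

ranking = rank_sources(preds, "y")
```

With these weights a walk reaches x1 with probability 0.75 * 0.75 ≈ 0.56, so x1 is ranked most important, which matches the intuition that it sits on the heaviest paths into the output.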