    Learning from Ontology Streams with Semantic Concept Drift

    Data stream learning has been widely studied as a way of extracting knowledge structures from continuous, rapid data records. In the Semantic Web, data is interpreted against ontologies, and its ordered sequence is represented as an ontology stream. Our work exploits the semantics of such streams to tackle concept drift, i.e., unexpected changes in the data distribution that cause most models to become less accurate over time. To this end, we revisited (i) semantic inference in the context of supervised stream learning, and (ii) learning models with semantic embeddings. Experiments on data from Dublin and Beijing show accurate prediction.
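    The paper's drift handling relies on ontology semantics rather than raw accuracy, but a minimal, generic sketch of window-based drift detection conveys the symptom the abstract describes. The function name, window size, and margin below are illustrative assumptions, not the authors' method.

    ```python
    import numpy as np

    def drift_suspected(correct_flags, recent=50, margin=0.10):
        """Generic accuracy-based drift check (illustrative only).

        correct_flags: 1/0 history recording whether each streamed
        prediction was correct. Drift is suspected when recent accuracy
        falls well below the long-run average.
        """
        acc = np.asarray(correct_flags, dtype=float)
        if acc.size < 2 * recent:
            return False  # not enough history to compare windows
        return acc[-recent:].mean() < acc[:-recent].mean() - margin
    ```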

    Yuan Real Exchange Rate Undervaluation, 1997-2006. How Much, How Often? Not Much, Not Often

    Yuan real effective exchange rate misalignment is estimated in a behavioral equilibrium exchange rate (BEER) model for the period from 1997 to the third quarter of 2007. Using the Beveridge-Nelson decomposition, a vector error correction model (VECM) of the exchange rate as a function of macroeconomic fundamentals, including government expenditures, economic openness, the balance of trade surplus, and net foreign assets, is estimated. We find that the Chinese Yuan has fluctuated moderately around its long-run equilibrium value, with undervaluation of up to 4% and overvaluation of up to 6% at various points in time since 1997. This result is consistent with the findings of many of the most recent studies employing alternative econometric methodologies to determine the equilibrium exchange rate. While the Yuan real effective exchange rate has deviated from equilibrium, and it is sticky, taking over five years to correct 50% of a short-run misalignment, it does not appear to have been consistently undervalued, as has been widely argued.
    Keywords: Chinese Yuan, Exchange Rate, Misalignment, BEER, Behavioral, Cointegration, ARIMA, VECM, FGLS.
    http://deepblue.lib.umich.edu/bitstream/2027.42/64348/1/wp934.pd
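    As a rough illustration of the estimation pipeline, the sketch below fits a VECM of a real effective exchange rate on macro fundamentals with statsmodels and reads off the error-correction term as the misalignment measure. The synthetic data, column names, and cointegration rank are assumptions for the example (and the Beveridge-Nelson step is omitted); this is not the paper's exact specification.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.vector_ar.vecm import VECM

    # Synthetic stand-ins for the quarterly series (names are assumptions).
    rng = np.random.default_rng(0)
    n = 44  # roughly 1997Q1-2007Q3
    trend = rng.normal(size=n).cumsum()  # shared stochastic trend
    data = pd.DataFrame({
        "log_reer": trend + rng.normal(scale=0.1, size=n),
        "gov_exp": 0.5 * trend + rng.normal(scale=0.1, size=n),
        "openness": rng.normal(size=n).cumsum(),
        "trade_balance": rng.normal(size=n).cumsum(),
        "nfa": 0.8 * trend + rng.normal(scale=0.1, size=n),
    })

    # One cointegrating relation plays the role of the long-run (BEER)
    # equilibrium between the exchange rate and the fundamentals.
    res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()

    # Misalignment = deviation from the fitted long-run relation,
    # i.e., the error-correction term beta' * y_t per quarter.
    ect = data.values @ res.beta
    print(ect[:, 0])
    ```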

    Knowledge-based Transfer Learning Explanation

    Machine learning explanation can significantly boost machine learning's application in decision making, but the usability of current methods is limited for human-centric explanation, especially for transfer learning, an important machine learning branch that aims at utilizing knowledge from one learning domain (i.e., a pair of a dataset and a prediction task) to enhance prediction model training in another learning domain. In this paper, we propose an ontology-based approach for human-centric explanation of transfer learning. Three kinds of knowledge-based explanatory evidence at different granularities, including general factors, particular narrators, and core contexts, are first proposed and then inferred with both local ontologies and external knowledge bases. The evaluation with US flight data and DBpedia demonstrates their confidence and availability in explaining the transferability of feature representations in flight departure delay forecasting.
    Comment: Accepted by the International Conference on Principles of Knowledge Representation and Reasoning, 201
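    The paper's evidence-inference machinery is not reproduced here, but as a hint of what "inferred with external knowledge bases" involves in practice, the sketch below pulls facts about an entity from DBpedia with SPARQLWrapper. The helper name and query shape are assumptions for illustration, not the authors' procedure.

    ```python
    from SPARQLWrapper import SPARQLWrapper, JSON

    def dbpedia_facts(entity_uri, limit=10):
        """Fetch (predicate, object) pairs about an entity from DBpedia.

        Illustrative helper only: such triples are the kind of raw
        material from which knowledge-based explanatory evidence
        could be assembled.
        """
        endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
        endpoint.setReturnFormat(JSON)
        endpoint.setQuery(
            f"SELECT ?p ?o WHERE {{ <{entity_uri}> ?p ?o }} LIMIT {limit}"
        )
        bindings = endpoint.query().convert()["results"]["bindings"]
        return [(b["p"]["value"], b["o"]["value"]) for b in bindings]

    # e.g., context about an airport entity in the flight-delay task:
    # dbpedia_facts("http://dbpedia.org/resource/Chicago_Midway_International_Airport")
    ```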

    Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients

    While neuroevolution (evolving neural networks) has a successful track record across a variety of domains, from reinforcement learning to artificial life, it is rarely applied to large, deep neural networks. A central reason is that while random mutation generally works in low dimensions, a random perturbation of thousands or millions of weights is likely to break existing functionality, providing no learning signal even if some individual weight changes were beneficial. This paper proposes a solution by introducing a family of safe mutation (SM) operators that aim, within the mutation operator itself, to find a degree of change that does not alter network behavior too much but still facilitates exploration. Importantly, these SM operators require no additional interactions with the environment. The most effective SM variant capitalizes on the intriguing opportunity to scale the degree of mutation of each individual weight according to the sensitivity of the network's outputs to that weight, which requires computing the gradient of the outputs with respect to the weights (instead of the gradient of the error, as in conventional deep learning). This safe mutation through gradients (SM-G) operator dramatically increases the ability of a simple genetic-algorithm-based neuroevolution method to find solutions in high-dimensional domains that require deep and/or recurrent neural networks (which tend to be particularly brittle to mutation), including domains that require processing raw pixels. By improving our ability to evolve deep neural networks, this new, safer approach to mutation expands the scope of domains amenable to neuroevolution.
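    A minimal PyTorch sketch of the gradient-scaled idea described above: each weight's perturbation is divided by the magnitude of the output gradient with respect to that weight, so weights the outputs are sensitive to move less. The scalar output summary (a sum) and the clamp floor are simplifying assumptions, not the paper's exact SM-G formulation.

    ```python
    import torch

    def smg_mutate(net, batch, sigma=0.1):
        # Sensitivity of the *outputs* (not the error) w.r.t. each weight:
        # backprop a scalar summary of the network's outputs on a batch.
        net.zero_grad()
        net(batch).sum().backward()
        with torch.no_grad():
            for p in net.parameters():
                # Per-weight output sensitivity; the floor (an assumption)
                # prevents near-zero gradients from producing huge steps.
                sens = p.grad.abs().clamp_min(1e-3)
                # Safe mutation: large steps only where outputs barely react.
                p += sigma * torch.randn_like(p) / sens
    ```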

    ES Is More Than Just a Traditional Finite-Difference Approximator

    An evolution strategy (ES) variant based on a simplification of a natural evolution strategy recently attracted attention because it performs surprisingly well in challenging deep reinforcement learning domains. It searches for neural network parameters by generating perturbations of the current set of parameters, checking their performance, and moving in the aggregate direction of higher reward. Because it resembles a traditional finite-difference approximation of the reward gradient, it can naturally be confused with one. However, this ES optimizes for a different gradient than just reward: it optimizes for the average reward of the entire population, thereby seeking parameters that are robust to perturbation. This difference can channel ES into distinct areas of the search space relative to gradient descent, and consequently to networks with distinct properties. This unique robustness-seeking property, and its consequences for optimization, are demonstrated in several domains, including humanoid locomotion, where networks from policy-gradient-based reinforcement learning are significantly less robust to parameter perturbation than ES-based policies solving the same task. While the implications of such robustness and robustness-seeking remain open to further study, this work's main contribution is to highlight such differences and their potential importance.
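    A compact NumPy sketch of the ES update the abstract refers to: reward is averaged over a population of Gaussian perturbations, so each step ascends a smoothed, population-average objective rather than a pointwise finite-difference gradient. Function and parameter names are illustrative assumptions.

    ```python
    import numpy as np

    def es_step(theta, reward_fn, sigma=0.1, alpha=0.01, pop=100, rng=None):
        # Evaluate a population of perturbed parameter vectors; the update
        # ascends the population-average reward, which is what makes the
        # resulting solutions robust to parameter perturbation.
        rng = rng or np.random.default_rng()
        eps = rng.standard_normal((pop, theta.size))
        rewards = np.array([reward_fn(theta + sigma * e) for e in eps])
        # Standardized rewards weight each perturbation direction.
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        return theta + alpha / (pop * sigma) * (adv @ eps)
    ```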