2,039 research outputs found

    Empirical models, rules, and optimization

    This paper considers supply decisions by firms in a dynamic setting with adjustment costs and compares the behavior of an optimal control model to that of a rule-based system which relaxes the assumption that agents are explicit optimizers. In our approach, the economic agent uses believably simple rules in coping with complex situations. We estimate rules using an artificially generated sample obtained by running repeated simulations of a dynamic optimal control model of a firm's hiring/firing decisions. We show that (i) agents using heuristics can behave as if they were seeking rationally to maximize their dynamic returns; (ii) the approach requires fewer behavioral assumptions relative to dynamic optimization and the assumptions made are based on economically intuitive theoretical results linking rule adoption to uncertainty; (iii) the approach delineates the domain of applicability of maximization hypotheses and describes the behavior of agents in situations of economic disequilibrium. The approach adopted uses concepts from fuzzy control theory. An agent, instead of optimizing, follows Fuzzy Associative Memory (FAM) rules which, given input and output data, can be estimated and used to approximate any non-linear dynamic process. Empirical results indicate that the fuzzy rule-based system performs extremely well in approximating optimal dynamic behavior in situations with limited noise. Keywords: decision-making, econometric models, TMD
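    To make the idea concrete, the following is a minimal Python sketch of a FAM-style rule system for a hiring/firing rate, defuzzified by a weighted average and compared against a smooth benchmark policy. The membership functions, rule consequents, and benchmark are illustrative assumptions, not the specification estimated in the paper.

    import numpy as np

    # Minimal FAM-style sketch: fuzzy rules map a state variable (e.g. the gap
    # between desired and current employment) to a hiring/firing rate.
    # The membership functions, rule consequents and the "optimal" benchmark
    # below are illustrative, not the specification estimated in the paper.

    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    # Fuzzy partition of the employment gap (normalized to [-1, 1]).
    antecedents = {
        "large_negative": lambda g: tri(g, -1.5, -1.0, -0.3),
        "small":          lambda g: tri(g, -0.5,  0.0,  0.5),
        "large_positive": lambda g: tri(g,  0.3,  1.0,  1.5),
    }

    # FAM rule consequents: a crisp hiring rate attached to each antecedent.
    # In the paper these would be estimated from simulated optimal trajectories.
    consequents = {"large_negative": -0.8, "small": 0.0, "large_positive": 0.8}

    def fuzzy_hiring_rate(gap):
        """Weighted-average (centroid-style) defuzzification of the FAM rules."""
        weights = np.array([antecedents[k](gap) for k in antecedents])
        values = np.array([consequents[k] for k in antecedents])
        return float(weights @ values / weights.sum()) if weights.sum() > 0 else 0.0

    # Benchmark: a smooth "optimal" policy under adjustment costs, here simply a
    # saturating function of the gap (purely illustrative).
    optimal_policy = lambda g: np.tanh(1.2 * g)

    gaps = np.linspace(-1, 1, 201)
    fuzzy = np.array([fuzzy_hiring_rate(g) for g in gaps])
    rmse = np.sqrt(np.mean((fuzzy - optimal_policy(gaps)) ** 2))
    print(f"RMSE between fuzzy rules and benchmark policy: {rmse:.3f}")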

    Combining Experiments and Simulations Using the Maximum Entropy Principle

    A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing new theoretical and practical insights into the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
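    The core maximum-entropy step can be sketched in a few lines: reweight simulated frames so that the reweighted average of an observable matches an experimental value, while perturbing the original (uniform) weights as little as possible in relative entropy. The data and the single-observable setup below are illustrative assumptions, not the schemes of the papers discussed.

    import numpy as np
    from scipy.optimize import brentq

    # Minimal maximum-entropy reweighting sketch (illustrative only): given
    # simulated values of one observable and an experimental target, find
    # weights w_i proportional to exp(-lam * O_i) -- the smallest
    # relative-entropy perturbation of uniform weights -- such that the
    # reweighted mean matches the experimental value.

    rng = np.random.default_rng(0)
    obs_sim = rng.normal(loc=1.0, scale=0.5, size=5000)   # simulated observable per frame
    obs_exp = 1.2                                         # experimental target (made up)

    def reweighted_mean(lam):
        logw = -lam * obs_sim
        logw -= logw.max()                  # numerical stability
        w = np.exp(logw)
        w /= w.sum()
        return np.sum(w * obs_sim)

    # Find the Lagrange multiplier that reproduces the experimental average.
    lam_star = brentq(lambda lam: reweighted_mean(lam) - obs_exp, -50.0, 50.0)

    w = np.exp(-lam_star * obs_sim)
    w /= w.sum()
    n_eff = 1.0 / np.sum(w ** 2)            # effective sample size after reweighting
    print(f"lambda = {lam_star:.3f}, reweighted mean = {np.sum(w * obs_sim):.3f}, "
          f"effective frames = {n_eff:.0f} / {len(w)}")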

    The Maximum Lq-Likelihood Method: an Application to Extreme Quantile Estimation in Finance

    Estimating financial risk is a critical issue for banks and insurance companies. Recently, quantile estimation based on Extreme Value Theory (EVT) has found a successful domain of application in such a context, outperforming other approaches. Given a parametric model provided by EVT, a natural approach is Maximum Likelihood estimation. Although the resulting estimator is asymptotically efficient, the number of observations available to estimate the parameters of the EVT models is often too small to make the large-sample properties trustworthy. In this paper, we study a new estimator of the parameters, the Maximum Lq-Likelihood estimator (MLqE), introduced by Ferrari and Yang (2007). We show that the MLqE can outperform the standard MLE when estimating tail probabilities and quantiles of the Generalized Extreme Value (GEV) and the Generalized Pareto (GP) distributions. First, we assess the relative efficiency of the MLqE and the MLE for various sample sizes using Monte Carlo simulations. Second, we analyze the performance of the MLqE for extreme quantile estimation using real-world financial data. The MLqE is characterized by a distortion parameter q and extends the traditional log-likelihood maximization procedure. When q→1, the new estimator approaches the traditional Maximum Likelihood Estimator (MLE), recovering its desirable asymptotic properties; when q ≠ 1 and the sample size is moderate or small, the MLqE successfully trades bias for variance, resulting in an overall gain in terms of accuracy (Mean Squared Error). Keywords: Maximum Likelihood, Extreme Value Theory, q-Entropy, Tail-related Risk Measures
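    A minimal sketch of the estimator, assuming the Ferrari-Yang form in which the log-density in the likelihood is replaced by the q-deformed logarithm L_q(u) = (u^(1-q) - 1)/(1-q): the sample size, starting values, and choice of q below are illustrative, not the paper's simulation design.

    import numpy as np
    from scipy.stats import genpareto
    from scipy.optimize import minimize

    # Sketch of Maximum Lq-likelihood estimation for the Generalized Pareto
    # distribution (illustrative data and tuning values).

    def lq(u, q):
        """q-deformed logarithm: reduces to log(u) as q -> 1."""
        return np.log(u) if np.isclose(q, 1.0) else (u ** (1.0 - q) - 1.0) / (1.0 - q)

    def neg_lq_likelihood(params, x, q):
        shape, scale = params[0], np.exp(params[1])   # log-scale keeps scale positive
        dens = genpareto.pdf(x, c=shape, scale=scale)
        if np.any(dens <= 0):
            return np.inf
        return -np.sum(lq(dens, q))

    rng = np.random.default_rng(1)
    x = genpareto.rvs(c=0.3, scale=1.0, size=50, random_state=rng)   # small sample

    for q in (1.0, 0.9):   # q = 1 recovers the ordinary MLE
        res = minimize(neg_lq_likelihood, x0=[0.1, 0.0], args=(x, q), method="Nelder-Mead")
        shape_hat, scale_hat = res.x[0], np.exp(res.x[1])
        # Estimated 99% quantile under the fitted GP distribution.
        q99 = genpareto.ppf(0.99, c=shape_hat, scale=scale_hat)
        print(f"q={q}: shape={shape_hat:.3f}, scale={scale_hat:.3f}, 99% quantile={q99:.2f}")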

    A Novel Distributed Representation of News (DRNews) for Stock Market Predictions

    In this study, a novel Distributed Representation of News (DRNews) model is developed and applied to deep learning-based stock market prediction. By integrating contextual information and cross-document knowledge, the DRNews model creates news vectors that capture both the semantic content of news events and the potential linkages among them through an attributed news network. Two stock market prediction tasks, namely short-term stock movement prediction and stock-crisis early warning, are implemented within an attention-based Long Short-Term Memory (LSTM) network. The results suggest that DRNews substantially improves both tasks compared with five baseline news embedding models. Further, the attention mechanism indicates that short-term stock trends and stock market crises are both influenced by daily news, with the former responding more strongly to information about the stock market per se, while the latter is more sensitive to news on the banking sector and economic policies. Comment: 25 pages
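    For orientation, a generic attention-over-days LSTM classifier of the kind described can be sketched as follows; the dimensions, attention form, and two-class output are assumptions for illustration, not the DRNews architecture or its news embeddings.

    import torch
    import torch.nn as nn

    # Generic sketch of an attention-based LSTM classifier over a window of
    # daily news vectors; not the DRNews model itself.

    class AttentionLSTM(nn.Module):
        def __init__(self, news_dim=128, hidden_dim=64, n_classes=2):
            super().__init__()
            self.lstm = nn.LSTM(news_dim, hidden_dim, batch_first=True)
            self.attn = nn.Linear(hidden_dim, 1)       # one score per time step
            self.head = nn.Linear(hidden_dim, n_classes)

        def forward(self, x):                          # x: (batch, days, news_dim)
            h, _ = self.lstm(x)                        # (batch, days, hidden_dim)
            scores = self.attn(h).squeeze(-1)          # (batch, days)
            alpha = torch.softmax(scores, dim=1)       # attention over days
            context = (alpha.unsqueeze(-1) * h).sum(dim=1)   # weighted hidden state
            return self.head(context), alpha           # logits + attention weights

    # Toy usage: 8 samples, 10 trading days of 128-dimensional news vectors each.
    model = AttentionLSTM()
    x = torch.randn(8, 10, 128)
    logits, alpha = model(x)
    print(logits.shape, alpha.shape)                   # (8, 2) and (8, 10)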

    A Probabilistic Model of Meter Perception: Simulating Enculturation

    HH is supported by a Distinguished Lorentz fellowship granted by the Lorentz Center for the Sciences and the Netherlands Institute for Advanced Study in the Humanities and Social Sciences (NIAS) and a Horizon grant of the Netherlands Organization for Scientific Research (NWO). BW and MP also received support from the EPSRC Digital Music Platform Grant held at Queen Mary (EP/K009559/1). MP is supported by a grant from the UK Engineering and Physical Science Research Council (EPSRC, EP/M000702/1)

    Constrained Reweighting of Distributions: an Optimal Transport Approach

    We commonly encounter the problem of identifying an optimally weight-adjusted version of the empirical distribution of observed data, adhering to predefined constraints on the weights. Such constraints often manifest as restrictions on the moments, tail behaviour, shapes, number of modes, etc., of the resulting weight-adjusted empirical distribution. In this article, we substantially enhance the flexibility of such methodology by introducing nonparametrically imbued distributional constraints on the weights and developing a general framework leveraging the maximum entropy principle and tools from optimal transport. The key idea is to ensure that the maximum-entropy weight-adjusted empirical distribution of the observed data is close to a pre-specified probability distribution in terms of the optimal transport metric, while allowing for subtle departures. The versatility of the framework is demonstrated in the context of three disparate applications where data re-weighting is warranted to satisfy side constraints on the optimization problem at the heart of the statistical task: namely, portfolio allocation, semi-parametric inference for complex surveys, and ensuring algorithmic fairness in machine learning algorithms. Comment: arXiv admin note: text overlap with arXiv:2303.1008
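    As a rough illustration of the idea (not the paper's algorithm), one can choose weights that stay close to uniform in relative entropy, satisfy a side moment constraint, and keep the reweighted empirical distribution close to a pre-specified reference sample in 1-D Wasserstein distance; all data and tuning values below are made up.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import wasserstein_distance

    # Illustrative sketch: maximum-entropy weight adjustment with a moment side
    # constraint and a soft optimal-transport (1-D Wasserstein) closeness penalty.

    rng = np.random.default_rng(2)
    x = rng.normal(0.0, 1.0, size=60)          # observed data
    ref = rng.normal(0.3, 1.0, size=200)       # pre-specified reference distribution
    target_mean = 0.2                          # side constraint on the first moment
    lam = 5.0                                  # strength of the OT closeness penalty

    def objective(w):
        w = np.clip(w, 1e-12, None)
        neg_entropy = np.sum(w * np.log(w * len(w)))          # KL(w || uniform)
        ot_penalty = wasserstein_distance(x, ref, u_weights=w)
        return neg_entropy + lam * ot_penalty

    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
            {"type": "eq", "fun": lambda w: np.dot(w, x) - target_mean}]
    w0 = np.full(len(x), 1.0 / len(x))
    res = minimize(objective, w0, method="SLSQP", bounds=[(0.0, 1.0)] * len(x),
                   constraints=cons, options={"maxiter": 300})

    w = res.x
    print(f"weighted mean = {np.dot(w, x):.3f}, "
          f"Wasserstein to reference = {wasserstein_distance(x, ref, u_weights=np.clip(w, 1e-12, None)):.3f}")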