11 research outputs found

    Commonsense Explanations of Sparsity, Zipf Law, and Nash's Bargaining Solution

    As econometric models become more accurate and more mathematically complex, they also become less intuitively clear and convincing. To make these models more convincing, it is desirable to supplement the corresponding mathematics with commonsense explanations. In this paper, we provide such explanations for three economics-related concepts: sparsity (as in LASSO), Zipf's Law, and Nash's bargaining solution.
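
    As a minimal illustration of one of these concepts (not taken from the paper; the counts below are hypothetical), Zipf's Law says that the r-th most frequent item occurs with frequency roughly proportional to 1/r, so log-frequency should be linear in log-rank with slope about -1:

        # Sketch: checking Zipf's Law on hypothetical frequency counts.
        import numpy as np

        counts = np.array([1000, 480, 350, 240, 205, 160, 145, 130, 110, 100])
        ranks = np.arange(1, len(counts) + 1)

        # Under Zipf's Law, log(count) ~ intercept - 1 * log(rank).
        slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
        print(f"fitted exponent: {slope:.2f}  (Zipf's Law predicts about -1)")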

    Fuzzy Techniques Explain Empirical Power Law Governing Wars and Terrorist Attacks

    The empirical distribution of the number of casualties in wars and terrorist attacks follows a power law with exponent 2.5. So far, there has been no convincing explanation for this empirical fact. In this paper, we show that by using fuzzy techniques, we can explain this exponent. Interestingly, we can also get a similar explanation if we use probabilistic techniques. The fact that two different techniques lead to the same explanation makes us reasonably confident that this explanation is correct.
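
    A minimal sketch of the empirical fact being explained (synthetic data, assumptions mine): sampling from a power law with density proportional to x^(-2.5) and recovering the exponent by maximum likelihood:

        import numpy as np

        rng = np.random.default_rng(0)
        x_min, alpha_true = 1.0, 2.5  # density exponent reported empirically

        # Inverse-transform sampling: x = x_min * U**(-1/(alpha-1)) has
        # density proportional to x**(-alpha) for U ~ Uniform(0, 1).
        u = rng.uniform(size=100_000)
        x = x_min * u ** (-1.0 / (alpha_true - 1.0))

        # Maximum-likelihood (Hill-type) estimate of the density exponent.
        alpha_hat = 1.0 + len(x) / np.sum(np.log(x / x_min))
        print(f"estimated exponent: {alpha_hat:.3f}")  # close to 2.5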

    A Possible Common Mechanism Behind Skew Normal Distributions in Economics and Hydraulic Fracturing-Induced Seismicity

    Many economic situations -- and many situations in other application areas -- are well-described by a special asymmetric generalization of normal distributions known as skew-normal. However, there is no convincing theoretical explanation for this empirical phenomenon. To be more precise, there are convincing explanations for the ubiquity of normal distributions, but not for the transformation that turns normal into skew-normal. In this paper, we use the analysis of hydraulic fracturing-induced seismicity to explain the ubiquity of such a transformation.
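
    For reference, a minimal sketch of the skew-normal shape the paper sets out to explain: its density is 2*phi(x)*Phi(a*x), where phi and Phi are the standard normal density and CDF, and a controls the asymmetry (a = 0 recovers the normal distribution):

        import numpy as np
        from scipy.stats import norm, skewnorm

        a = 4.0
        x = np.linspace(-3, 3, 7)

        # The explicit formula agrees with the library implementation.
        direct = 2.0 * norm.pdf(x) * norm.cdf(a * x)
        library = skewnorm.pdf(x, a)
        print(np.allclose(direct, library))  # True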

    Fuzzy Sets As Strongly Consistent Random Sets

    It is known that from the purely mathematical viewpoint, fuzzy sets can be interpreted as equivalence classes of random sets. This interpretation helps to teach fuzzy techniques to statisticians and also enables us to apply results about random sets to fuzzy techniques. The problem with this interpretation is that it is too complicated: a random set is not an easy notion, and classes of random sets are even more complex. This complexity goes against the spirit of fuzzy sets, whose purpose was to be simple and intuitively clear. From this viewpoint, it is desirable to simplify this interpretation. In this paper, we show that the random-set interpretation of fuzzy techniques can indeed be simplified: namely, we show that fuzzy sets can be interpreted not as classes, but as strongly consistent random sets (in some reasonable sense). This is not yet at the desired level of simplicity, but the new interpretation is much simpler than the original one and thus constitutes an important step towards the desired simplicity.
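
    One standard construction behind such interpretations (a sketch of the general idea, not necessarily the paper's definition of strong consistency): cut a membership function mu at a uniformly random level alpha; the resulting nested random set covers each point x with probability exactly mu(x):

        import numpy as np

        def mu(x):
            # Triangular membership function on [0, 2] peaking at x = 1.
            return np.maximum(0.0, 1.0 - np.abs(x - 1.0))

        rng = np.random.default_rng(0)
        x0 = 0.7
        alphas = rng.uniform(size=200_000)   # random cut level
        covered = alphas <= mu(x0)           # x0 is in the alpha-cut {x: mu(x) >= alpha}
        print(covered.mean(), mu(x0))        # both close to 0.7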

    Correcting Interval-Valued Expert Estimates: Empirical Formulas Explained

    Experts' estimates are approximate. To make decisions based on these estimates, we need to know how accurate these estimates are. Sometimes, experts themselves estimate the accuracy of their estimates -- by providing an interval of possible values instead of a single number. In other cases, we can gauge the accuracy of the experts' estimates by asking several experts to estimate the same quantity and using the range of the resulting values as the interval. In both situations, sometimes the interval is too narrow -- e.g., if an expert is overconfident. Sometimes, the interval is too wide -- if the expert is too cautious. In such situations, we correct these intervals by making them, correspondingly, wider or narrower. Empirical studies show that people use specific formulas for such corrections. In this paper, we provide a theoretical explanation for these empirical formulas.
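
    A minimal sketch of what such a correction can look like (an assumed linear rescaling; the paper's empirical formulas may differ): keep the interval's midpoint and rescale its half-width:

        def correct_interval(lo, hi, k):
            """k > 1 widens an overconfident interval, k < 1 narrows an overcautious one."""
            mid = (lo + hi) / 2.0
            half = (hi - lo) / 2.0
            return mid - k * half, mid + k * half

        print(correct_interval(4.0, 6.0, 1.5))  # (3.5, 6.5): widened
        print(correct_interval(4.0, 6.0, 0.5))  # (4.5, 5.5): narrowed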

    How to Deal with Inconsistent Intervals: Utility-Based Approach Can Overcome the Limitations of the Purely Probability-Based Approach

    In many application areas, we rely on experts to estimate the numerical values of some quantities. Experts can provide not only the estimates themselves; they can also estimate the accuracy of their estimates -- i.e., in effect, they provide an interval of possible values of the quantity of interest. To get a more accurate estimate, it is reasonable to ask several experts -- and to take the intersection of the resulting intervals. In some cases, however, experts overestimate the accuracy of their estimates, and their intervals are so narrow that they are inconsistent: their intersection is empty. In such situations, it is necessary to extend the experts' intervals so that they become consistent. Which extension should we choose? Since we are dealing with uncertainty, it seems reasonable to apply a probability-based approach, which is well suited for dealing with uncertainty. From the purely mathematical viewpoint, this application is possible; however, as we show, even in the simplest situations it leads to counterintuitive results. We show that we can make more reasonable recommendations if, instead of taking into account only probabilities, we also take into account our preferences -- which, according to decision theory, can be described by utilities.
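
    One simple extension strategy, shown here only to make the problem concrete (it is not necessarily the paper's utility-based recommendation): widen all intervals by the same amount epsilon, choosing the smallest epsilon that makes the intersection nonempty:

        def make_consistent(intervals):
            max_lo = max(lo for lo, hi in intervals)
            min_hi = min(hi for lo, hi in intervals)
            eps = max(0.0, (max_lo - min_hi) / 2.0)  # smallest uniform widening
            widened = [(lo - eps, hi + eps) for lo, hi in intervals]
            return widened, (max_lo - eps, min_hi + eps)

        intervals = [(0.0, 1.0), (2.0, 3.0)]  # inconsistent: empty intersection
        widened, intersection = make_consistent(intervals)
        print(intersection)  # (1.5, 1.5): the widened intervals now just touch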

    Why Quantiles Are a Good Description of Volatility in Economics: An Alternative Explanation

    In econometrics, the volatility of an investment is usually described by its Value-at-Risk (VaR), i.e., by an appropriate quantile of the corresponding probability distribution. The motivations for selecting VaR are largely empirical: VaR provides a more adequate description of what people intuitively perceive as risk. In this paper, we analyze this situation from the viewpoint of decision theory, and we show that this analysis naturally leads to the Value-at-Risk, i.e., to a quantile. Interestingly, this analysis also naturally leads to an optimization problem related to quantile regression.
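
    A minimal sketch of the two notions the abstract connects (synthetic data): VaR is a quantile of the loss distribution, and that same quantile minimizes the "pinball" loss used in quantile regression:

        import numpy as np

        rng = np.random.default_rng(0)
        losses = rng.normal(size=100_000)  # hypothetical daily losses

        # 95% VaR: the loss level exceeded only 5% of the time.
        var_95 = np.quantile(losses, 0.95)
        print(f"95% VaR: {var_95:.3f}")  # about 1.645 for a standard normal

        # The tau-quantile minimizes the pinball loss rho_tau(u) = u*(tau - (u < 0)).
        def pinball(q, xs, tau=0.95):
            u = xs - q
            return np.mean(u * (tau - (u < 0)))

        print(pinball(var_95, losses) <= pinball(var_95 + 0.1, losses))  # True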

    Why Geometric Progression in Selecting the LASSO Parameter: A Theoretical Explanation

    In situations when we know which inputs are relevant, the least squares method is often the best way to solve linear regression problems. However, in many practical situations, we do not know beforehand which inputs are relevant and which are not. In such situations, a 1-parameter modification of the least squares method known as LASSO leads to more adequate results. To use LASSO, we need to determine the value of the LASSO parameter that best fits the given data. In practice, this parameter is determined by trying all the values from some discrete set. It has been empirically shown that this selection works best if we try values from a geometric progression. In this paper, we provide a theoretical explanation for this empirical fact.
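
    A minimal sketch of the selection procedure the paper explains (the grid endpoints and data are arbitrary choices of mine): try LASSO parameters from a geometric progression and keep the one with the best cross-validation score:

        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 20))
        beta = np.zeros(20)
        beta[:3] = [3.0, -2.0, 1.5]  # only 3 of the 20 inputs are relevant
        y = X @ beta + rng.normal(scale=0.5, size=200)

        lambdas = np.geomspace(1e-3, 1e1, num=30)  # geometric progression
        scores = [cross_val_score(Lasso(alpha=lam), X, y).mean() for lam in lambdas]
        print(f"best lambda: {lambdas[int(np.argmax(scores))]:.4g}")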

    Uncertain Information Fusion and Knowledge Integration: How to Take Reliability into Account

    In many practical situations, we need to fuse and integrate information and knowledge from different sources -- and do it under uncertainty. Most existing methods for information fusion and knowledge integration take uncertainty into account. In addition to uncertainty, however, we also face the problem of reliability: sensors may malfunction, experts can be wrong, etc. In this paper, we show how to take both uncertainty and reliability into account in information fusion and knowledge integration. We illustrate this on the examples of probabilistic and fuzzy uncertainty.
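
    One standard way to combine the two factors, shown as a sketch (the paper's formulas may differ): weight each source by the inverse of its noise variance, discounted by its reliability:

        import numpy as np

        estimates     = np.array([10.2, 9.8, 14.0])   # hypothetical sensor readings
        variances     = np.array([0.5, 0.4, 0.3])     # each sensor's noise variance
        reliabilities = np.array([0.99, 0.95, 0.30])  # probability the sensor works

        # Inverse-variance weights, further discounted by reliability.
        w = reliabilities / variances
        fused = np.sum(w * estimates) / np.sum(w)
        print(f"fused estimate: {fused:.2f}")  # the unreliable sensor counts for less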

    Quantum Econometrics: How to Explain Its Quantitative Successes and How the Resulting Formulas Are Related to Scale Invariance, Entropy, Fuzzy, and Copulas

    Many aspects of human behavior seem to be well-described by formulas of quantum physics. In this paper, we explain this phenomenon by showing that the corresponding quantum-looking formulas can be derived from the general ideas of scale invariance, fuzziness, and copulas. We also use these ideas to derive a general family of formulas that includes non-quantum and quantum probabilities as particular cases -- formulas that may be more adequate for describing human behavior than purely non-quantum or purely quantum ones.
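
    For concreteness, the basic quantum-looking formula in question (a textbook interference rule, not the paper's general family): combining two alternatives through amplitudes rather than probabilities adds an interference term:

        import numpy as np

        def combined_probability(p1, p2, theta):
            # |sqrt(p1) + exp(i*theta)*sqrt(p2)|^2 = p1 + p2 + 2*sqrt(p1*p2)*cos(theta)
            return p1 + p2 + 2.0 * np.sqrt(p1 * p2) * np.cos(theta)

        print(combined_probability(0.2, 0.3, np.pi / 2))  # 0.5: the classical sum
        print(combined_probability(0.2, 0.3, np.pi / 3))  # ~0.745: interference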