
    A Theory of Attribution

    Attribution of economic joint effects is achieved with a random order model of their relative importance. Random order consistency and elementary axioms uniquely identify linear and proportional marginal attribution. These are the Shapley (1953) and proportional (Feldman (1999, 2002) and Ortmann (2000)) values of the dual of the implied cooperative game. Random order consistency does not use a reduced game. Restricted potentials facilitate identification of proportional value derivatives and coalition formation results. Attributions of econometric model performance, using data from Fair (1978), show stability across models. Proportional marginal attribution (PMA) is found to correctly identify factor relative importance and to have a role in model construction. A portfolio attribution example illuminates basic issues regarding utility attribution and demonstrates investment applications. PMA is also shown to mitigate concerns (e.g., Thomas (1977)) regarding strategic behavior induced by linear cost attribution.
    Keywords: coalition formation; consistency; cost allocation; joint effects; proportional value; random order model; relative importance; restricted potential; Shapley value; variance decomposition
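    The linear (Shapley) attribution described above can be illustrated directly from the random order idea: average each factor's marginal contribution over all orderings of the factors. Below is a minimal Python sketch under a toy three-factor characteristic function; the function names and toy values are illustrative assumptions, and the proportional value (which weights orders differently) is not shown.

        # Minimal sketch of linear marginal attribution over random orders
        # (the Shapley value); toy characteristic function, not the paper's
        # exact construction.
        import itertools
        import math

        def shapley_values(players, v):
            """Average each player's marginal contribution over all orderings."""
            n = len(players)
            phi = {p: 0.0 for p in players}
            for order in itertools.permutations(players):
                coalition = set()
                for p in order:
                    phi[p] += v(coalition | {p}) - v(coalition)
                    coalition.add(p)
            return {p: total / math.factorial(n) for p, total in phi.items()}

        # Toy three-factor game: v(S) is, say, the variance explained by factor set S.
        v = {frozenset(): 0.0, frozenset('a'): 0.3, frozenset('b'): 0.4,
             frozenset('c'): 0.1, frozenset('ab'): 0.6, frozenset('ac'): 0.4,
             frozenset('bc'): 0.5, frozenset('abc'): 0.7}
        print(shapley_values('abc', lambda s: v[frozenset(s)]))

    By construction the attributions sum to v of the grand coalition (0.7 here), which is what makes the scheme a full decomposition of the joint effect.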

    [Korean-language title, mis-encoded in the source]

    As deep learning has grown rapidly, so has the desire to interpret its black-box models, and many analysis tools have emerged for this purpose. Interpretability has in fact popularized deep learning in areas such as research, manufacturing, finance, and healthcare, which require relatively accurate and reliable decision making. However, one thing should not be overlooked: uncertainty. Because explanation tools depend on the model, the model's uncertainty is directly reflected in the interpretations of its decisions. The uncertainty of interpretations produced from deep learning models should therefore also be taken into account, just as quality and cost are directly impacted by measurement uncertainty. This has not yet been attempted. We therefore propose Bayesian input attribution in place of deterministic input attribution, by applying dropout-based approximate Bayesian inference in deep Gaussian processes to input attribution. We then extract candidate inputs that can sufficiently affect the model's output, taking into account both the attribution itself and its uncertainty.
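    One concrete way to realize dropout-based Bayesian input attribution, in the spirit of this abstract, is to keep dropout active at test time and collect gradient-based attributions over many stochastic forward passes: the mean ranks the inputs, and the spread quantifies the uncertainty of the attribution. The PyTorch model, shapes, and sample count below are assumptions for illustration, not the authors' architecture.

        # Sketch of Bayesian input attribution via Monte Carlo dropout;
        # the model and dimensions are placeholders, not the thesis setup.
        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                              nn.Dropout(p=0.5), nn.Linear(64, 1))

        def mc_dropout_attribution(model, x, n_samples=100):
            """Mean and std of gradient-based input attributions over
            stochastic forward passes with dropout kept active."""
            model.train()  # keep dropout stochastic at inference time
            grads = []
            for _ in range(n_samples):
                x_in = x.clone().requires_grad_(True)
                model(x_in).sum().backward()
                grads.append(x_in.grad.detach())
            g = torch.stack(grads)            # (n_samples, *x.shape)
            return g.mean(dim=0), g.std(dim=0)

        mean_attr, attr_uncertainty = mc_dropout_attribution(model, torch.randn(1, 10))

    Inputs with large mean attribution but also large spread are exactly the ambiguous candidates the abstract argues should be treated differently from confidently attributed ones.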

    Quantifying statistical uncertainty in the attribution of human influence on severe weather

    Event attribution in the context of climate change seeks to understand the role of anthropogenic greenhouse gas emissions on extreme weather events, either specific events or classes of events. A common approach to event attribution uses climate model output under factual (real-world) and counterfactual (world that might have been without anthropogenic greenhouse gas emissions) scenarios to estimate the probabilities of the event of interest under the two scenarios. Event attribution is then quantified by the ratio of the two probabilities. While this approach has been applied many times in the last 15 years, the statistical techniques used to estimate the risk ratio based on climate model ensembles have not drawn on the full set of methods available in the statistical literature and have in some cases used and interpreted the bootstrap method in non-standard ways. We present a precise frequentist statistical framework for quantifying the effect of sampling uncertainty on estimation of the risk ratio, propose the use of statistical methods that are new to event attribution, and evaluate a variety of methods using statistical simulations. We conclude that existing statistical methods not yet in use for event attribution have several advantages over the widely used bootstrap, including better statistical performance in repeated samples and robustness to small estimated probabilities. Software for using the methods is available through the climextRemes package available for R or Python. While we focus on frequentist statistical methods, Bayesian methods are likely to be particularly useful when considering sources of uncertainty beyond sampling uncertainty.
    Comment: 41 pages, 11 figures, 1 table
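    As a point of reference for the probability-ratio framing, here is a minimal sketch of a textbook Wald-type interval for the risk ratio on the log scale, computed from event counts in factual and counterfactual ensembles. The counts are invented, and this is not necessarily the estimator the paper recommends or what climextRemes implements; indeed, intervals like this degrade when the estimated probabilities are small, which is part of the paper's motivation for evaluating alternatives.

        # Sketch of a standard delta-method risk-ratio interval from
        # factual/counterfactual ensemble counts; illustrative only.
        import math

        def risk_ratio_ci(k1, n1, k0, n0, z=1.96):
            """Point estimate and ~95% CI for p1/p0 from event counts."""
            p1, p0 = k1 / n1, k0 / n0
            rr = p1 / p0
            # Delta-method standard error of log(rr)
            se = math.sqrt((1 - p1) / (n1 * p1) + (1 - p0) / (n0 * p0))
            return rr, (rr * math.exp(-z * se), rr * math.exp(z * se))

        # e.g. event in 30 of 400 factual runs vs 10 of 400 counterfactual runs
        print(risk_ratio_ci(30, 400, 10, 400))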

    Attributions as Behavior Explanations: Toward a New Theory

    Attribution theory has played a major role in social-psychological research. Unfortunately, the term attribution is ambiguous. According to one meaning, forming an attribution is making a dispositional (trait) inference from behavior; according to another, forming an attribution is giving an explanation (especially of behavior). The focus of this paper is on the latter phenomenon of behavior explanations. In particular, I discuss a new theory of explanation that provides an alternative to classic attribution theory as it dominates the textbooks and handbooks, where it typically appears as a version of Kelley's (1967) model of attribution as covariation detection. I begin with a brief critique of this theory and, out of this critique, develop a list of requirements that an improved theory has to meet. I then introduce the new theory, report empirical data in its support, and apply it to a number of psychological phenomena. I conclude with an assessment of how much progress we have made in understanding behavior explanations and what has yet to be learned.

    Attribution styles as correlates of technical drawing task-persistence and technical college students’ performance

    Technical drawing is a means of communication between the designer and the manufacturer that brings ideas into reality by means of drafting. This study investigated attribution styles as correlates of students' technical drawing task-persistence and academic performance, using a correlational research design. The population for the study consisted of 864 year II students, and the sample comprised 150 students (93 male and 57 female) randomly selected from six technical colleges in Edo State, Nigeria. Three instruments, the Academic Performance Attribution Style Questionnaire (APASQ), the Technical Drawing Task-Persistence Rating Scale (TDTPRS), and the Technical Drawing Performance Test (TDPT), were developed and used for data collection. Cronbach alpha reliability coefficients were obtained for each instrument: APASQ = .87, TDTPRS = .79, and TDPT = .85. The findings revealed that students' technical drawing task-persistence correlated positively with functional attribution style and negatively with dysfunctional attribution style, and that functional attribution style also correlated positively with students' academic performance. Based on these findings, it was recommended, among others, that technical drawing teachers model and teach students the attribution style that will enhance their learning of technical drawing.
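    For readers unfamiliar with the statistics the study relies on, the sketch below shows how its two reported quantities are typically computed: Cronbach's alpha for instrument reliability and a Pearson correlation between task-persistence and performance scores. The data here are randomly generated placeholders, not the study's data.

        # Illustrative Cronbach's alpha and Pearson correlation on fake data.
        import numpy as np

        def cronbach_alpha(items):
            """items: (n_respondents, n_items) array of item scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars / total_var)

        rng = np.random.default_rng(0)
        scores = rng.integers(1, 6, size=(150, 20))   # 150 students, 20 items
        print("alpha:", cronbach_alpha(scores))

        persistence = rng.normal(size=150)
        performance = 0.5 * persistence + rng.normal(size=150)
        print("r:", np.corrcoef(persistence, performance)[0, 1])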

    Conditional Complexity of Compression for Authorship Attribution

    We introduce new stylometry tools based on the sliced conditional compression complexity of literary texts, inspired by the nearly optimal application of the incomputable Kolmogorov conditional complexity (which they presumably approximate). Whereas other stylometry statistics can occasionally be very close for different authors, ours is apparently strictly minimal for the true author, provided the query and training texts are sufficiently large, the compressor is sufficiently good, and sampling bias is avoided (as in poll sampling). We tune it and test its performance on attributing the Federalist papers (Madison vs. Hamilton). Our results confirm the previous attribution of the Federalist papers to Madison by Mosteller and Wallace (1964) using the Naive Bayes classifier, as well as the same attribution based on alternative classifiers such as SVM and the second-order Markov model of language. We then apply our method to the attribution of the early poems from the Shakespeare Canon and of the continuation of Marlowe's poem 'Hero and Leander' ascribed to G. Chapman.
    Keywords: compression complexity; authorship attribution
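    The core statistic can be approximated with any real-world compressor: the conditional complexity of a query text given a training text is roughly the compressed size of their concatenation minus the compressed size of the training text alone, and the candidate author minimizing it wins. The Python sketch below uses zlib as a stand-in for the authors' tuned compressor, and the file names are hypothetical; note that zlib's small window limits how much of the training text can condition the query, so a stronger compressor is preferable in practice.

        # Rough sketch of compression-based attribution:
        # C(query | training) ~ len(C(training + query)) - len(C(training)).
        import zlib

        def conditional_complexity(training: bytes, query: bytes) -> int:
            """Approximate conditional complexity via a real compressor."""
            c = lambda b: len(zlib.compress(b, level=9))
            return c(training + query) - c(training)

        def attribute(query: bytes, corpora: dict) -> str:
            """Pick the author whose corpus best 'explains' the query."""
            return min(corpora,
                       key=lambda a: conditional_complexity(corpora[a], query))

        # Hypothetical file names for the two candidate corpora.
        corpora = {"Madison": open("madison.txt", "rb").read(),
                   "Hamilton": open("hamilton.txt", "rb").read()}
        print(attribute(open("disputed.txt", "rb").read(), corpora))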

    Authorship Attribution Using a Neural Network Language Model

    In practice, training language models for individual authors is often expensive because of limited data resources. In such cases, Neural Network Language Models (NNLMs) generally outperform the traditional non-parametric N-gram models. Here we investigate the performance of a feed-forward NNLM on an authorship attribution problem with a moderate author set size and relatively limited data, and also consider how text topics impact performance. Compared with a well-constructed N-gram baseline with Kneser-Ney smoothing, the proposed method achieves nearly a 2.5% reduction in perplexity and increases author classification accuracy by 3.43% on average, given as few as 5 test sentences. The performance is very competitive with the state of the art in terms of accuracy and demand on test data. The source code, preprocessed datasets, a detailed description of the methodology, and results are available at https://github.com/zge/authorship-attribution.
    Comment: Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI'16)
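    For concreteness, a feed-forward NNLM of the kind described (context-word embeddings concatenated and fed through a hidden layer to a softmax over the vocabulary) can be sketched in a few lines of PyTorch; the architecture and hyperparameters below are illustrative assumptions, not the paper's exact configuration. One model is trained per candidate author, and a test passage is attributed to the author whose model assigns it the lowest perplexity.

        # Compact feed-forward NNLM sketch; dimensions are placeholders.
        import torch
        import torch.nn as nn

        class FeedForwardNNLM(nn.Module):
            def __init__(self, vocab_size, context=4, embed_dim=64, hidden=128):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, embed_dim)
                self.ff = nn.Sequential(
                    nn.Linear(context * embed_dim, hidden), nn.Tanh(),
                    nn.Linear(hidden, vocab_size))

            def forward(self, ctx):                  # ctx: (batch, context) word ids
                e = self.embed(ctx).flatten(1)       # concatenate context embeddings
                return self.ff(e)                    # logits over the next word

        def perplexity(model, ctx, targets):
            """exp of mean cross-entropy of next-word predictions."""
            with torch.no_grad():
                loss = nn.functional.cross_entropy(model(ctx), targets)
            return loss.exp().item()

        # Attribution: argmin over authors of perplexity(models[author], ctx, targets)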