    Distinguishing Overconfidence from Rational Best-Response in Markets

    This paper studies the causal effect of individuals' overconfidence and bounded rationality on asset markets. To do so, we combine a new market mechanism with an experimental design in which (1) players' interaction is centered on the inferences they make about each other's information, (2) overconfidence in private information is controlled by the experimenter (i.e., used as a treatment), and (3) natural analogs to prices, returns, and volume exist. We find that in sessions where subjects are induced to be overconfident, the volume and price-error analogs are higher than predicted by the fully rational model. However, qualitatively similar results are obtained in sessions with no aggregate overconfidence. To explain this, we suggest an alternative possibility: participants strategically respond to the errors contained in others' actions by rationally discounting the informativeness of those actions. By estimating a structural model of individuals' decisions that allows for both overconfidence and errors, we are able to separate these two channels. We find that a substantial fraction of the excess volume and price-error analogs is attributable to the strategic response to errors, while the rest is attributable to overconfidence. Further, we show that the price analogs exhibit serial autocorrelation only in the overconfidence-induced sessions.
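    The last finding concerns serial dependence in the price analogs. As an illustration only (not the paper's estimation procedure), a first-order check of that kind can be computed directly from one session's price-analog series; the data below are placeholders.

```python
import numpy as np

def lag1_autocorr(series):
    """Lag-1 serial autocorrelation of a sequence (illustrative check;
    the paper's own analysis may use a different statistic)."""
    x = np.asarray(series, dtype=float) - np.mean(series)
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

# Placeholder price-analog series for one experimental session.
rng = np.random.default_rng(0)
price_analog = np.cumsum(rng.normal(size=60))
print(lag1_autocorr(price_analog))
```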

    An Introduction to Mechanized Reasoning

    Mechanized reasoning uses computers to verify proofs and to help discover new theorems. Computer scientists have applied mechanized reasoning to economic problems, but to date this work has not been properly presented in economics journals. We introduce mechanized reasoning to economists in three ways. First, we introduce mechanized reasoning in general, describing both the techniques and their successful applications. Second, we explain how mechanized reasoning has been applied to economic problems, concentrating on the two domains that have attracted the most attention: social choice theory and auction theory. Finally, we present a detailed example of mechanized reasoning in practice by means of a proof of Vickrey's familiar theorem on second-price auctions.
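    The paper's detailed example mechanizes a proof of Vickrey's theorem: in a second-price sealed-bid auction, bidding one's true valuation is weakly dominant. The sketch below is not mechanized reasoning in the paper's sense (it is an informal brute-force check, not a machine-verified proof), but it illustrates the property being proved; the bid grid and the tie-breaking rule are assumptions made for the example.

```python
from itertools import product

def payoff(valuation, bids, i):
    """Second-price auction payoff for bidder i: the highest bidder wins
    and pays the second-highest bid; ties go to the lowest index."""
    winner = max(range(len(bids)), key=lambda j: (bids[j], -j))
    if winner != i:
        return 0.0
    return valuation - max(b for j, b in enumerate(bids) if j != winner)

def truthful_is_weakly_dominant(grid, n_bidders=3):
    """Enumerate bidder 0's valuations, rival bid profiles, and deviations;
    confirm a deviation never beats bidding the true valuation."""
    for valuation in grid:
        for others in product(grid, repeat=n_bidders - 1):
            truthful = payoff(valuation, (valuation,) + others, 0)
            for deviation in grid:
                if payoff(valuation, (deviation,) + others, 0) > truthful:
                    return False
    return True

print(truthful_is_weakly_dominant(grid=range(6)))  # expected: True
```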

    The Configurable SAT Solver Challenge (CSSC)

    It is well known that different solution strategies work well for different types of instances of hard combinatorial problems. As a consequence, most solvers for the propositional satisfiability problem (SAT) expose parameters that allow them to be customized to a particular family of instances. In the international SAT competition series, these parameters are ignored: solvers are run using a single default parameter setting (supplied by the authors) for all benchmark instances in a given track. While this competition format rewards solvers with robust default settings, it does not reflect the situation faced by a practitioner who only cares about performance on one particular application and can invest some time into tuning solver parameters for that application. The new Configurable SAT Solver Challenge (CSSC) compares solvers in this latter setting, scoring each solver by the performance it achieved after a fully automated configuration step. This article describes the CSSC in more detail and reports the results obtained in its two instantiations so far, CSSC 2013 and CSSC 2014.
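    The automated configuration step can be pictured as follows. The sketch is a deliberately minimal stand-in for the general-purpose configurators used in practice (tools such as SMAC or ParamILS); the solver binary, its parameter names, and the penalized-average-runtime style score are placeholders, not the interface of any actual competition entry.

```python
import random
import subprocess
import time

# Hypothetical solver binary and parameter space -- illustrative only.
SOLVER = "./my_sat_solver"
PARAM_SPACE = {
    "--restart-interval": [100, 500, 1000, 5000],
    "--var-decay": [0.85, 0.91, 0.95, 0.99],
    "--phase-saving": [0, 1, 2],
}

def run_solver(config, instance, cutoff):
    """Run one (configuration, instance) pair; return the runtime, with a
    PAR10-style penalty (10 * cutoff) on timeout."""
    cmd = [SOLVER] + [f"{k}={v}" for k, v in config.items()] + [instance]
    start = time.time()
    try:
        subprocess.run(cmd, timeout=cutoff, check=False,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return time.time() - start
    except subprocess.TimeoutExpired:
        return 10 * cutoff

def random_search(instances, budget=50, cutoff=300):
    """Tiny stand-in for a real configurator: sample random configurations
    and keep the one with the best average score on the training instances."""
    best_config, best_score = None, float("inf")
    for _ in range(budget):
        config = {k: random.choice(v) for k, v in PARAM_SPACE.items()}
        score = sum(run_solver(config, inst, cutoff) for inst in instances) / len(instances)
        if score < best_score:
            best_config, best_score = config, score
    return best_config, best_score
```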

    Hypothesis Transfer Learning with Surrogate Classification Losses: Generalization Bounds through Algorithmic Stability

    Hypothesis transfer learning (HTL) contrasts with domain adaptation by allowing a previous task, the source, to be leveraged for a new one, the target, without requiring access to the source data. Indeed, HTL relies only on a hypothesis learnt from such source data, relieving the hurdle of expensive data storage and providing great practical benefits. Hence, HTL is highly beneficial for real-world applications relying on big data. Analyzing such a method from a theoretical perspective faces multiple challenges, particularly in classification tasks. This paper deals with this problem by studying the learning theory of HTL through algorithmic stability, an attractive theoretical framework for the analysis of machine learning algorithms. In particular, we are interested in the statistical behaviour of the regularized empirical risk minimizers in the case of binary classification. Our stability analysis provides learning guarantees under mild assumptions. Consequently, we derive several complexity-free generalization bounds for essential statistical quantities such as the training error, the excess risk, and cross-validation estimates. These refined bounds make it possible to understand the benefits of transfer learning and to compare the behaviour of standard losses in different scenarios, leading to valuable insights for practitioners.
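    One common instance of the regularized empirical risk minimizers studied in this line of work is biased regularization: minimize a surrogate loss on the target sample plus a penalty pulling the hypothesis toward the source hypothesis. The numpy sketch below uses the logistic loss and plain gradient descent; it illustrates that general recipe under assumed names and data, not the paper's exact estimator.

```python
import numpy as np

def htl_logistic(X, y, w_source, lam=1.0, lr=0.1, n_iter=500):
    """Biased regularized ERM for hypothesis transfer learning (one common
    formulation): minimize the average logistic loss on the target sample
    plus lam * ||w - w_source||^2, so only the source hypothesis w_source
    is needed, never the source data. Labels y are in {-1, +1}."""
    w = w_source.copy()
    n = len(y)
    for _ in range(n_iter):
        margins = y * (X @ w)
        # gradient of the mean logistic loss log(1 + exp(-margin))
        grad_loss = -(X.T @ (y * (1.0 / (1.0 + np.exp(margins))))) / n
        grad_reg = 2.0 * lam * (w - w_source)
        w -= lr * (grad_loss + grad_reg)
    return w

# Usage sketch: w_src is a hypothesis trained earlier on source data.
rng = np.random.default_rng(0)
w_src = np.array([1.0, -0.5])
X_tgt = rng.normal(size=(50, 2))
y_tgt = np.sign(X_tgt @ np.array([0.8, -0.6]) + 0.1 * rng.normal(size=50))
w_target = htl_logistic(X_tgt, y_tgt, w_src, lam=0.5)
```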

    First-year composition and transfer: a quantitative study

    The present study investigated the effect of writing pedagogy on transfer by examining the effect of pedagogical orientation (WAC/WID or ‘traditional’) on content-area grades. Participants were 1,052 undergraduates from 17 schools throughout the United States. The hypothesis was that the WAC/WID orientation would lead to higher levels of transfer, as measured by participants' higher content-area performance. Composition grades were collected in year one; content-area grades were collected in year two. Propensity scores were calculated to stratify the groups and minimize the selection bias of writing-class assignment, thereby allowing quasi-causal inference. An ANOVA was performed on the resulting 2-by-5 stratified data. Results indicated that students who completed the WAC/WID composition classes received significantly higher content grades than those in the ‘traditional’ writing classes. The results confirmed the hypothesis.
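    The analysis pipeline described above (a propensity model for writing-class assignment, stratification into five strata, then a 2-by-5 ANOVA on content-area grades) can be sketched as follows. Column names and the covariates in the propensity model are placeholders, not the study's actual variables.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def stratified_anova(df):
    """df columns (illustrative names): 'wac_wid' (1 = WAC/WID class,
    0 = traditional), 'content_grade', plus pretreatment covariates
    used to model selection into the writing class."""
    covariates = ["hs_gpa", "sat_verbal"]          # placeholder covariates
    X = sm.add_constant(df[covariates])
    propensity = sm.Logit(df["wac_wid"], X).fit(disp=0).predict(X)

    # Stratify on the propensity score (5 strata -> the 2-by-5 design).
    df = df.assign(stratum=pd.qcut(propensity, 5, labels=False))

    # Two-way ANOVA: pedagogy, stratum, and their interaction.
    model = smf.ols("content_grade ~ C(wac_wid) * C(stratum)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)
```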