    Profitable failure: antidepressant drugs and the triumph of flawed experiments

    Drawing on an analysis of Irving Kirsch and colleagues' controversial 2008 article in PLoS [Public Library of Science] Medicine on the efficacy of SSRI antidepressant drugs such as Prozac, I examine flaws within the methodologies of randomized controlled trials (RCTs) that have made it difficult for regulators, clinicians and patients to determine the therapeutic value of this class of drug. I then argue, drawing analogies to work by Pierre Bourdieu and Michael Power, that it is the very limitations of RCTs (their inadequacies in producing reliable evidence of clinical effects) that help to strengthen assumptions of their superiority as methodological tools. Finally, I suggest that the case of RCTs helps to explore the question of why failure is often useful in consolidating the authority of those who have presided over that failure, and why systems widely recognized to be ineffective tend to assume greater authority at the very moment when people speak of their malfunction.

    Maintaining (locus of) control? : Assessing the impact of locus of control on education decisions and wages

    This paper establishes that individuals with an internal locus of control, i.e., who believe that reinforcement in life comes from their own actions instead of being determined by luck or destiny, earn higher wages. However, this positive effect only translates into labor income via the channel of education. Factor structure models are implemented on an augmented data set drawn from two different samples. In so doing, we are able to correct for potential biases that arise due to reverse causality and spurious correlation, and to investigate the impact of premarket locus of control on later outcomes.

    Golden Rule of Forecasting: Be Conservative

    This article proposes a unifying theory, or the Golden Rule, of forecasting. The Golden Rule of Forecasting is to be conservative. A conservative forecast is consistent with cumulative knowledge about the present and the past. To be conservative, forecasters must seek out and use all knowledge relevant to the problem, including knowledge of methods validated for the situation. Twenty-eight guidelines are logically deduced from the Golden Rule. A review of evidence identified 105 papers with experimental comparisons; 102 support the guidelines. Ignoring a single guideline increased forecast error by more than two-fifths on average. Ignoring the Golden Rule is likely to harm accuracy most when the situation is uncertain and complex, and when bias is likely. Non-experts who use the Golden Rule can identify dubious forecasts quickly and inexpensively. To date, ignorance of research findings, bias, sophisticated statistical procedures, and the proliferation of big data have led forecasters to violate the Golden Rule. As a result, despite major advances in evidence-based forecasting methods, forecasting practice in many fields has failed to improve over the past half-century.

    QuantCrit: education, policy, ‘Big Data’ and principles for a critical race theory of statistics

    Quantitative research enjoys heightened esteem among policy-makers, media and the general public. Whereas qualitative research is frequently dismissed as subjective and impressionistic, statistics are often assumed to be objective and factual. We argue that these distinctions are wholly false; quantitative data is no less socially constructed than any other form of research material. The first part of the paper presents a conceptual critique of the field with empirical examples that expose and challenge hidden assumptions that frequently encode racist perspectives beneath the façade of supposed quantitative objectivity. The second part of the paper draws on the tenets of Critical Race Theory (CRT) to set out some principles to guide the future use and analysis of quantitative data. These ‘QuantCrit’ ideas concern (1) the centrality of racism as a complex and deeply-rooted aspect of society that is not readily amenable to quantification; (2) numbers are not neutral and should be interrogated for their role in promoting deficit analyses that serve White racial interests; (3) categories are neither ‘natural’ nor given, and so the units and forms of analysis must be critically evaluated; (4) voice and insight are vital: data cannot ‘speak for itself’, and critical analyses should be informed by the experiential knowledge of marginalized groups; (5) statistical analyses have no inherent value but can play a role in struggles for social justice.

    From Wald to Savage: homo economicus becomes a Bayesian statistician

    Bayesian rationality is the paradigm of rational behavior in neoclassical economics. A rational agent in an economic model is one who maximizes her subjective expected utility and consistently revises her beliefs according to Bayes’s rule. The paper raises the question of how, when and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is far from trivial and of great historiographic importance. The story begins with Abraham Wald’s behaviorist approach to statistics and culminates with Leonard J. Savage’s elaboration of subjective expected utility theory in his 1954 classic The Foundations of Statistics. It is the latter’s acknowledged failure to achieve its planned goal, the reinterpretation of traditional inferential techniques along subjectivist and behaviorist lines, that raises the puzzle of how a failed project in statistics could turn into such a tremendous hit in economics. A couple of tentative answers are also offered, involving the role of the consistency requirement in neoclassical analysis and the impact of the postwar transformation of US business schools.

    The Relationship between Environmental Efficiency and Manufacturing Firms' Growth
