Micro, Macro, and Mechanisms
This article takes a fresh look at micro–macro relations in the social sciences from the point of view of the mechanistic account of explanation and introduces the distinction between causal and constitutive explanation. It then discusses intentional fundamentalism and challenges the idea that intentional explanations have a privileged position in the social sciences. A mechanism-based explanation describes the causal process selectively. The properties of social networks serve both as explananda and explanantia in sociology. Knowledge of causal mechanisms is vital in the justification of historical causal claims. The intentional attitudes of individuals are also important in most mechanism-based explanations of social phenomena. It is important to pay closer attention to how real macro social facts figure in social scientific theories and explanations.
Peer reviewed
Review of Individuals and Identity in Economics by John B. Davis
Book review. Reviewed work: Individuals and Identity in Economics / John B. Davis. - Cambridge University Press, 2011.
Non peer reviewed
Generative Explanation and Individualism in Agent-Based Simulation
Social scientists associate agent-based simulation (ABS) models with three ideas about explanation: they provide generative explanations, they are models of mechanisms, and they implement methodological individualism. In light of a philosophical account of explanation, we show that these ideas are not necessarily related and offer an account of the explanatory import of ABS models. We also argue that their bottom-up research strategy should be distinguished from methodological individualism.
Peer reviewed
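The generative, bottom-up research strategy the abstract describes can be made concrete with a minimal sketch. The toy model below (a hypothetical illustration in the style of Schelling's segregation model, not drawn from the paper; all parameters and names are invented) shows how a macro-level pattern, clustering of agent types, can be generated from a simple individual-level rule:

```python
import random

def schelling_ring(n=100, threshold=0.5, steps=5000, seed=42):
    """Toy Schelling-style model on a ring of n agents of two types.

    Each step, a randomly chosen agent inspects its two neighbours; if the
    share of same-type neighbours falls below `threshold`, it swaps places
    with another randomly chosen agent. All parameter values are
    illustrative choices.
    """
    rng = random.Random(seed)
    agents = [rng.choice("AB") for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        neighbours = [agents[(i - 1) % n], agents[(i + 1) % n]]
        if neighbours.count(agents[i]) / 2 < threshold:
            j = rng.randrange(n)
            agents[i], agents[j] = agents[j], agents[i]
    return agents

def clustering(agents):
    """Macro-level explanandum: fraction of adjacent same-type pairs."""
    n = len(agents)
    return sum(agents[i] == agents[(i + 1) % n] for i in range(n)) / n

final = schelling_ring()
print(f"clustering after simulation: {clustering(final):.2f}")
```

Note how the example bears out the abstract's distinction: the simulation is bottom-up (only individual swap rules are specified), yet the explanation also invokes the ring topology and the macro-level clustering measure, so generating the pattern does not by itself settle the question of methodological individualism.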
Explanatory relevance across disciplinary boundaries
Many of the arguments for neuroeconomics rely on mistaken assumptions about criteria of explanatory relevance across disciplinary boundaries and fail to distinguish between evidential and explanatory relevance. Building on recent philosophical work on mechanistic research programmes and the contrastive counterfactual theory of explanation, we argue that explaining an explanatory presupposition or providing a lower-level explanation does not necessarily constitute explanatory improvement. Neuroscientific findings have explanatory relevance only when they inform a causal and explanatory account of the psychology of human decision-making.
Peer reviewed
Humanistic interpretation and machine learning
This paper investigates how unsupervised machine learning methods might make hermeneutic interpretive text analysis more objective in the social sciences. Through a close examination of the uses of topic modeling—a popular unsupervised approach in the social sciences—it argues that the primary way in which unsupervised learning supports interpretation is by allowing interpreters to discover unanticipated information in larger and more diverse corpora and by improving the transparency of the interpretive process. This view highlights that unsupervised modeling does not eliminate the researchers’ judgments from the process of producing evidence for social scientific theories. The paper shows this by distinguishing between two prevalent attitudes toward topic modeling, i.e., topic realism and topic instrumentalism. Under neither can modeling provide social scientific evidence without the researchers’ interpretive engagement with the original text materials. Thus unsupervised text analysis cannot improve the objectivity of interpretation by alleviating the problem of underdetermination in interpretive debate. The paper argues that the sense in which unsupervised methods can improve objectivity is by providing researchers with the resources to justify to others that their interpretations are correct. This kind of objectivity seeks to reduce suspicions in collective debate that interpretations are the products of arbitrary processes influenced by the researchers’ idiosyncratic decisions or starting points. The paper discusses this view in relation to alternative approaches to formalizing interpretation and identifies several limitations on what unsupervised learning can be expected to achieve in terms of supporting interpretive work.
Peer reviewed
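The interpretive step the paper emphasises can be illustrated with a small sketch. Below is a minimal collapsed-Gibbs sampler for LDA topic modeling in plain Python (a toy illustration, not the paper's method; the corpus, hyperparameters, and function names are invented for the example). The model returns ranked word lists, but deciding what those lists mean is exactly the interpretive engagement the paper argues cannot be eliminated:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed-Gibbs sampler for LDA over tokenised documents.

    Returns the top three words per topic by count. alpha/beta are the
    usual symmetric Dirichlet hyperparameters.
    """
    rng = random.Random(seed)
    vocab_size = len({w for d in docs for w in d})
    ndk = [[0] * n_topics for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                                # topic totals
    z = []                                             # assignment per token
    for di, doc in enumerate(docs):                    # random initialisation
        assignments = []
        for w in doc:
            k = rng.randrange(n_topics)
            assignments.append(k)
            ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(assignments)
    for _ in range(n_iter):
        for di, doc in enumerate(docs):
            for wi, w in enumerate(doc):
                k = z[di][wi]                          # remove current assignment
                ndk[di][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [                            # collapsed conditional
                    (ndk[di][t] + alpha) * (nkw[t][w] + beta)
                    / (nk[t] + vocab_size * beta)
                    for t in range(n_topics)
                ]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[di][wi] = k
                ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return [sorted(nkw[k], key=nkw[k].get, reverse=True)[:3]
            for k in range(n_topics)]

# Invented toy corpus with two intuitive themes (pets vs. finance).
docs = [
    "cat dog pet fur".split(),
    "dog pet cat paw".split(),
    "stock market price trade".split(),
    "market price stock fund".split(),
]
topics = lda_gibbs(docs, n_topics=2)
print(topics)
```

Even here the output is only a pair of ranked word lists; labelling one topic "pets" and the other "finance" is the researcher's interpretive judgment, which is the point of the paper's contrast between topic realism and topic instrumentalism.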