
    The precautionary principle. Between social norms and economic constructs

    This article compares interpretations of the precautionary principle coming from two different horizons: economic theories of risk set in a Bayesian framework, and the heuristic guidelines of the doctrine validated by European and French institutions. Common features are highlighted, but so are important differences in concepts and contexts of application. Despite these differences, economic analysis offers useful insights into several controversial questions raised by the implementation of the precautionary principle as a social norm, for example the reversibility of precautionary measures, whether the principle applies directly to every person or only to public authorities, and the problem of assigning the burden of scientific investigation of risk hypotheses.

    Plato’s Response to the Third Man Argument in the Paradoxical Exercise of the Parmenides

    An analysis of the Third Man Argument, especially in light of Constance Meinwald's book Plato's Parmenides. I argue that her solution to the TMA fails. I then present my own theory as to what Plato's solution was.

    Advancing functional connectivity research from association to causation

    Cognition and behavior emerge from brain network interactions, such that investigating causal interactions should be central to the study of brain function. Approaches that characterize statistical associations among neural time series, known as functional connectivity (FC) methods, are likely a good starting point for estimating brain network interactions. Yet only a subset of FC methods ('effective connectivity') is explicitly designed to infer causal interactions from statistical associations. Here we incorporate best practices from diverse areas of FC research to illustrate how FC methods can be refined to improve inferences about neural mechanisms, with properties of causal neural interactions as a common ontology to facilitate cumulative progress across FC approaches. We further demonstrate how the most common FC measures (correlation and coherence) reduce the set of likely causal models, facilitating causal inferences despite major limitations. Alternative FC measures are suggested to immediately start improving causal inferences beyond these common FC measures.
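    The abstract's key point, that correlation-based FC is symmetric and therefore constrains but cannot identify the direction of causal influence, can be made concrete with a minimal sketch. The data and coupling below are purely illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    # Toy 'neural time series': region 1 is partly driven by region 0,
    # region 2 is independent noise. (Illustrative data only.)
    rng = np.random.default_rng(0)
    ts = rng.standard_normal((3, 500))  # 3 regions x 500 time points
    ts[1] = 0.8 * ts[0] + 0.6 * rng.standard_normal(500)

    # Correlation-based FC matrix: an association measure, not a causal one.
    fc = np.corrcoef(ts)

    # fc is symmetric: fc[0, 1] == fc[1, 0]. The strong entry tells us the
    # regions interact, but not whether region 0 drives region 1 or vice
    # versa; correlation only narrows the set of causal models consistent
    # with the data, which is the paper's point.
    ```

    An effective-connectivity method would add further assumptions (e.g., temporal precedence or an explicit generative model) to break this symmetry.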

    Untenable nonstationarity: An assessment of the fitness for purpose of trend tests in hydrology

    The detection and attribution of long-term patterns in hydrological time series have been important research topics for decades. A significant portion of the literature regards such patterns as 'deterministic components' or 'trends', even though the complexity of hydrological systems does not allow easy deterministic explanations and attributions. Consequently, trend estimation techniques have been developed to make and justify statements about tendencies in the historical data, which are often used to predict future events. Testing trend hypotheses on observed time series is widespread in the hydro-meteorological literature, mainly due to the interest in detecting consequences of human activities on the hydrological cycle. This analysis usually relies on the application of null hypothesis significance tests (NHSTs) for slowly varying and/or abrupt changes, such as the Mann-Kendall or Pettitt tests, to summary statistics of hydrological time series (e.g., annual averages, maxima, minima, etc.). However, the reliability of this application has seldom been explored in detail. This paper discusses misuse, misinterpretation, and logical flaws of NHST for trends in the analysis of hydrological data from three different points of view: historic-logical, semantic-epistemological, and practical. Based on a review of NHST rationale, and basic statistical definitions of stationarity, nonstationarity, and ergodicity, we show that even if the empirical estimation of trends in hydrological time series is always feasible from a numerical point of view, it is uninformative and does not allow the inference of nonstationarity without assuming a priori additional information on the underlying stochastic process, according to deductive reasoning. This prevents the use of trend NHST outcomes to support nonstationary frequency analysis and modeling.
We also show that the correlation structures characterizing hydrological time series might easily be underestimated, further compromising the attempt to draw conclusions about trends spanning the period of records. Moreover, even though adjusting procedures accounting for correlation have been developed, some of them are insufficient or are applied only to some tests, while others are theoretically flawed but still widely applied. In particular, using 250 unimpacted stream flow time series across the conterminous United States (CONUS), we show that the test results can dramatically change if the sequences of annual values are reproduced starting from daily stream flow records, whose larger sizes enable a more reliable assessment of the correlation structures.
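    To make the Mann-Kendall test mentioned above concrete, here is a minimal sketch of its S statistic and normal-approximation z score. This is a textbook no-ties version for illustration, not the paper's code, and it embodies exactly the limitation the abstract describes: a significant z says nothing by itself about whether the underlying process is nonstationary:

    ```python
    import numpy as np

    def mann_kendall_z(x):
        """Mann-Kendall S statistic and z score (no tie correction)."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        # S counts concordant minus discordant pairs.
        s = sum(np.sign(x[j] - x[i])
                for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance under H0
        if s > 0:
            z = (s - 1) / np.sqrt(var_s)
        elif s < 0:
            z = (s + 1) / np.sqrt(var_s)
        else:
            z = 0.0
        return s, z

    # A short strictly increasing series already exceeds the 5% threshold
    # (|z| > 1.96), yet a persistent (serially correlated) stationary
    # process can produce such excursions, which is why the variance term,
    # derived under independence, is easily understated for hydrological data.
    s, z = mann_kendall_z([1, 2, 3, 4, 5])
    ```

    Correlation-adjusted variants exist, but as the abstract notes, several are insufficient or theoretically flawed.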

    Applying science of learning in education: Infusing psychological science into the curriculum

    The field of specialization known as the science of learning is not, in fact, one field. Science of learning is a term that serves as an umbrella for many lines of research, theory, and application. A term with an even wider reach is Learning Sciences (Sawyer, 2006). The present book represents a sliver, albeit a substantial one, of the scholarship on the science of learning and its application in educational settings (Mayer, 2011). Although much, but not all, of what is presented in this book is focused on learning in college and university settings, teachers of all academic levels may find the recommendations made by chapter authors of service. The overarching theme of this book is the interplay between the science of learning, the science of instruction, and the science of assessment (Mayer, 2011). The science of learning is a systematic and empirical approach to understanding how people learn. More formally, Mayer (2011) defined the science of learning as the "scientific study of how people learn" (p. 3). The science of instruction (Mayer, 2011), informed in part by the science of learning, is also on display throughout the book. Mayer defined the science of instruction as the "scientific study of how to help people learn" (p. 3). Finally, the assessment of student learning (e.g., learning, remembering, transferring knowledge) during and after instruction helps us determine the effectiveness of our instructional methods. Mayer defined the science of assessment as the "scientific study of how to determine what people know" (p. 3). Most of the research and applications presented in this book are completed within a science of learning framework. Researchers first conducted research to understand how people learn in certain controlled contexts (i.e., in the laboratory), and then they, or others, began to consider how these understandings could be applied in educational settings.
Work on the cognitive load theory of learning, which is discussed in depth in several chapters of this book (e.g., Chew; Lee and Kalyuga; Mayer; Renkl), provides an excellent example that documents how the science of learning has led to valuable work on the science of instruction. Most of the work described in this book is based on theory and research in cognitive psychology. We might have selected other topics (and, thus, other authors) that have their research base in behavior analysis, computational modeling and computer science, neuroscience, etc. We made the selections we did because the work of our authors ties together nicely and seemed to us to have direct applicability in academic settings.

    SUBTLE CUES AND HIDDEN ASSUMPTIONS: AN ACTION RESEARCH STUDY OF TEACHER QUESTIONING PATTERNS IN 7TH AND 8TH GRADE MATHEMATICS CLASSROOMS

    This action research project explores the link between a teacher's questioning patterns and the modes of thinking, analyzing, evaluating, and communicating that are developed in his 7th and 8th grade math students. The highly qualitative analysis focuses on three videotaped lessons from his 7th and 8th grade classrooms, and evaluates the lessons according to four categories or "lenses": cognitive demand, task completion, self-efficacy, and metacognitive activity. It then seeks to identify and codify the predominant questioning pattern used in each lesson, and to connect this pattern to the levels of success exhibited in each of the four categories. Four principal patterns are observed and discussed in the lessons: Unilateral Inquiry Response Evaluation, Multilateral Inquiry Response Evaluation, Inquiry Response Collection, and Inquiry Response Revoicing Controversy. The fourth pattern is proposed as a tool for managing classroom discourse that involves a variety of (sometimes competing) student opinions.

