The role of answer fluency and perceptual fluency in the monitoring and control of reasoning: Reply to Alter, Oppenheimer, and Epley
In this reply, we provide an analysis of Alter et al.'s (2013) response to our earlier paper (Thompson et al., 2013). In that paper, we reported difficulty in replicating Alter, Oppenheimer, Epley, and Eyre's (2007) main finding, namely that a sense of disfluency, produced by making stimuli difficult to perceive, increased accuracy on a variety of reasoning tasks. Alter, Oppenheimer, and Epley (2013) argue that we misunderstood the meaning of accuracy on these tasks, a claim that we reject. We argue, and provide evidence, that the tasks were not too difficult for our populations (such that no amount of "metacognitive unease" would promote correct responding), and point out that in many cases performance on our tasks was well above chance or on a par with that of Alter et al.'s (2007) participants. Finally, we reiterate our claim that the distinction between answer fluency (the ease with which an answer comes to mind) and perceptual fluency (the ease with which a problem can be read) is genuine, and argue that Thompson et al. (2013) provided evidence that these are distinct factors that have different downstream effects on cognitive processes.
Conceptually driven and visually rich tasks in texts and teaching practice: the case of infinite series
The study we report here examines parts of what Chevallard calls the institutional dimension of the students' learning experience of a relatively under-researched, yet crucial, concept in Analysis: the concept of infinite series. In particular, we examine how the concept is introduced to students in texts and in teaching practice. To this purpose, we employ Duval's Theory of Registers of Semiotic Representation in the analysis of 22 texts used in Canadian and UK post-compulsory courses. We also draw on interviews with in-service teachers and university lecturers to briefly discuss teaching practice and some of their teaching suggestions. Our analysis of the texts highlights that the presentation of the concept is largely a-historical, with few graphical representations, few opportunities to work across different registers (algebraic, graphical, verbal), few applications or intra-mathematical references to the concept's significance, and few conceptually driven tasks that go beyond practising the application of convergence tests and prepare students for the complex topics in which the concept of series is implicated. Our preliminary analysis of the teacher interviews suggests that pedagogical practice often reflects the tendencies in the texts. Furthermore, the interviews with the university lecturers point to the pedagogical potential of illustrative examples and evocative visual representations in teaching, and of student engagement with systematic guesswork and with writing explanatory accounts of their choices and applications of convergence tests.
Hedging predictions in machine learning
Recent advances in machine learning make it possible to design efficient prediction algorithms for data sets with huge numbers of parameters. This paper describes a new technique for "hedging" the predictions output by many such algorithms, including support vector machines, kernel ridge regression, kernel nearest neighbours, and by many other state-of-the-art methods. The hedged predictions for the labels of new objects include quantitative measures of their own accuracy and reliability. These measures are provably valid under the assumption of randomness, traditional in machine learning: the objects and their labels are assumed to be generated independently from the same probability distribution. In particular, it becomes possible to control (up to statistical fluctuations) the number of erroneous predictions by selecting a suitable confidence level. Validity being achieved automatically, the remaining goal of hedged prediction is efficiency: taking full account of the new objects' features and other available information to produce as accurate predictions as possible. This can be done successfully using the powerful machinery of modern machine learning.

Comment: 24 pages; 9 figures; 2 tables; a version of this paper (with discussion and rejoinder) is to appear in "The Computer Journal".
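The kind of hedged prediction the abstract describes can be illustrated with a minimal split-conformal sketch. This is not the paper's exact algorithm, just an assumed toy instance of the general idea: a nonconformity score (here, distance to the nearest training example with the same label) is calibrated on held-out data, and a label is kept in the prediction set whenever its conformal p-value exceeds the chosen significance level alpha. All data values below are invented for illustration.

```python
# Minimal split-conformal prediction sketch (illustrative, not the
# authors' exact method). Nonconformity score: distance to the nearest
# training example carrying the same label.

def nearest_same_label_distance(x, label, examples):
    """Distance from x to the closest example with this label."""
    dists = [abs(x - xi) for xi, yi in examples if yi == label]
    return min(dists) if dists else float("inf")

def conformal_predict(x, train, calib, labels, alpha=0.2):
    """Return the set of labels whose conformal p-value exceeds alpha.

    Under the randomness (i.i.d.) assumption, the true label is
    excluded with probability at most alpha, up to statistical
    fluctuation -- this is the 'validity' the abstract refers to.
    """
    prediction_set = []
    for label in labels:
        # Nonconformity of the test point if it carried this label.
        a_test = nearest_same_label_distance(x, label, train)
        # Nonconformity of each calibration example.
        a_calib = [nearest_same_label_distance(xi, yi, train)
                   for xi, yi in calib]
        # p-value: fraction of calibration scores at least as extreme.
        p = (sum(a >= a_test for a in a_calib) + 1) / (len(a_calib) + 1)
        if p > alpha:
            prediction_set.append(label)
    return prediction_set

if __name__ == "__main__":
    train = [(0.0, "a"), (0.1, "a"), (1.0, "b"), (1.1, "b")]
    calib = [(0.05, "a"), (0.95, "b"), (0.15, "a"), (1.05, "b")]
    print(conformal_predict(0.08, train, calib, ["a", "b"]))
```

Lowering alpha makes exclusion of the true label rarer but the prediction sets larger; "efficiency" in the abstract's sense corresponds to keeping those sets small at a given confidence level.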
Math empowerment: a multidisciplinary example to engage primary school students in learning mathematics
This paper describes an educational project conducted in a primary school in Italy (Scuola Primaria Alessandro Manzoni at Mulazzano, near Milan). The school requested our collaboration to help improve upon the results achieved on the National Tests for Mathematics, in which students aged 7 had registered performances lower than the national average in the previous year.
From January to June 2016, we supported teachers, providing them with information, tools and methods to increase their pupils' curiosity and passion for mathematics. Combining our different experiences and competences (instructional design and gamification, information technologies and psychology), we have tried to provide a broader spectrum of parameters, tools and keys to understand how to achieve an inclusive approach that is "personalised" to each student.
This collaboration with teachers and students allowed us to draw interesting observations about learning styles, pointing out the negative impact that standardized processes and instruments can have on self-esteem and, consequently, on student performance.
The goal of this programme was to find the right learning levers to intrigue and excite students about mathematical concepts and their applications.
Our hypothesis is that, by considering the learning of mathematics as a continuous process, in which students develop freely through their own experiments, observations, involvement and curiosity, students can achieve improved results on the National Tests (INVALSI).
This paper includes the results of a survey completed by the children, "About Me and Mathematics".
The Intuitive Appeal of Explainable Machines
Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model's development, not just explanations of the model itself.
Motion as manipulation: Implementation of motion and force analogies by event-file binding and action planning
Tool improvisation analogies are a special case of motion and force analogies that appear to be implemented pre-conceptually, in many species, by event-file binding and action planning. A detailed reconstruction of the analogical reasoning steps involved in Rutherford's and Bohr's development of the first quantized-orbit model of atomic structure is used to show that human motion and force analogies generally can be implemented by the event-file binding and action planning mechanism. Predictions that distinguish this model from competing concept-level models of analogy are discussed, available data pertaining to them are reviewed, and further experimental tests are proposed.
- …