37,635 research outputs found
Recommended from our members
Eye-tracking the emergence of attentional anchors in a mathematics learning tablet activity
Little is known about the micro-processes by which sensorimotor interaction gives rise to conceptual development. Per embodiment theory, these micro-processes are mediated by dynamical attentional structures. Accordingly, this study investigated eye-gaze behaviors during engagement in solving tablet-based bimanual manipulation tasks designed to foster proportional reasoning. Seventy-six elementary- and vocational-school students (9-15 yo) participated in individual task-based clinical interviews. Data gathered included action-logging, eye-tracking, and videography. Analyses revealed the emergence of stable eye-path gaze patterns contemporaneous with first enactments of effective manipulation and prior to verbal articulations of manipulation strategies. Characteristic gaze patterns included consistent or recurring attention to screen locations that bore non-salient stimuli or no stimuli at all, yet bore invariant geometric relations to dynamical salient features. Arguably, this research empirically validates hypothetical constructs from constructivism, particularly reflective abstraction.
Exposing Piaget's scheme: Empirical evidence for the ontogenesis of coordination in learning a mathematical concept
The combination of two methodological resources, natural-user interfaces (NUI) and multimodal learning analytics (MMLA), is creating opportunities for educational researchers to empirically evaluate seminal models for the hypothetical emergence of concepts from situated sensorimotor activity. 76 participants (9-14 yo) solved tablet-based non-symbolic manipulation tasks designed to foster grounded meanings for the mathematical concept of proportional equivalence. Data gathered in task-based semi-structured clinical interviews included action logging, eye-gaze tracking, and videography. Successful task performance coincided with the spontaneous appearance of stable dynamical gaze-path patterns, soon followed by multimodal articulation of strategy. Significantly, gaze patterns included uncued non-salient screen locations. We present cumulative results to argue that these 'attentional anchors' mediated participants' problem solving. We interpret the findings as enabling us to revisit, support, refine, and elaborate on central claims of Piaget's theory of genetic epistemology, in particular his insistence on the role of situated motor-action coordination in the process of reflective abstraction.
Ecological active vision: four bio-inspired principles to integrate bottom-up and adaptive top-down attention tested with a simple camera-arm robot
Vision gives primates a wealth of information useful to manipulate the environment, but at the same time it can easily overwhelm their computational resources. Active vision is a key solution found by nature to solve this problem: a limited fovea actively displaced in space to collect only relevant information. Here we highlight that in ecological conditions this solution encounters four problems: 1) the agent needs to learn where to look based on its goals; 2) manipulation causes learning feedback in areas of space possibly outside the attention focus; 3) good visual actions are needed to guide manipulation actions, but only these can generate learning feedback; and 4) a limited fovea causes aliasing problems. We then propose a computational architecture ("BITPIC") to overcome the four problems, integrating four bioinspired key ingredients: 1) reinforcement-learning fovea-based top-down attention; 2) a strong vision-manipulation coupling; 3) bottom-up periphery-based attention; and 4) a novel action-oriented memory. The system is tested with a simple simulated camera-arm robot solving a class of search-and-reach tasks involving color-blob "objects." The results show that the architecture solves the problems, and hence the tasks, very efficiently, and highlight how the architecture principles can contribute to a full exploitation of the advantages of active vision in ecological conditions.
The perils of automaticity
Classical theories of skill acquisition propose that automatization (i.e., performance requires progressively less attention as experience is acquired) is a defining characteristic of expertise in a variety of domains (e.g., Fitts & Posner, 1967). Automaticity is believed to enhance smooth and efficient skill execution by allowing performers to focus on strategic elements of performance rather than on the mechanical details that govern task implementation (Williams & Ford, 2008). By contrast, conscious processing (i.e., paying conscious attention to one's action during motor execution) has been found to disrupt skilled movement and performance proficiency (e.g., Beilock & Carr, 2001). On the basis of this evidence, researchers have tended to extol the virtues of automaticity. However, few researchers have considered the wide range of empirical evidence which indicates that highly automated behaviors can, on occasion, lead to a series of errors that may prove deleterious to skilled performance. Therefore, the purpose of the current paper is to highlight the perils, rather than the virtues, of automaticity. We draw on Reason's (1990) classification scheme of everyday errors to show how an overreliance on automated procedures may lead to 3 specific performance errors (i.e., mistakes, slips, and lapses) in a variety of skill domains (e.g., sport, dance, music). We conclude by arguing that skilled performance requires the dynamic interplay of automatic processing and conscious processing in order to avoid performance errors and to meet the contextually contingent demands that characterize competitive environments in a range of skill domains.
Pointing as an Instrumental Gesture: Gaze Representation Through Indication
The research of the first author was supported by a Fulbright Visiting Scholar Fellowship and was developed in 2012 during a research visit at the University of Memphis. Peer reviewed. Publisher PDF.
Learning to look in different environments: an active-vision model which learns and readapts visual routines
One of the main claims of the active vision framework is that finding data on the basis of task requirements is more efficient than reconstructing the whole scene by performing a complete visual scan. To be successful, this approach requires that agents learn visual routines to direct overt attention to locations with the information needed to accomplish the task. In ecological conditions, learning such visual routines is difficult due to the partial observability of the world, the changes in the environment, and the fact that learning signals might be indirect. This paper uses a reinforcement-learning actor-critic model to study how visual routines can be formed, and then adapted when the environment changes, in a system endowed with a controllable gaze and reaching capabilities. The tests of the model show that: (a) the autonomously developed visual routines are strongly dependent on the task and the statistical properties of the environment; (b) when the statistics of the environment change, the performance of the system remains rather stable thanks to the re-use of previously discovered visual routines, while the visual exploration policy remains sub-optimal for a long time. We conclude that the model has a robust behaviour, but the acquisition of an optimal visual exploration policy is particularly hard given its complex dependence on statistical properties of the environment, showing another of the difficulties that adaptive active vision agents must face.
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and smart-glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investments of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real-time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives like object detection, activity recognition, user-machine interaction, and so on. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-Machine Interaction
Inside the brain of an elite athlete: The neural processes that support high achievement in sports
Events like the World Championships in athletics and the Olympic Games raise the public profile of competitive sports. They may also leave us wondering what sets the competitors in these events apart from those of us who simply watch. Here we attempt to link neural and cognitive processes that have been found to be important for elite performance with computational and physiological theories inspired by much simpler laboratory tasks. In this way we hope to inspire neuroscientists to consider how their basic research might help to explain sporting skill at the highest levels of performance.
Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future
Clark offers a powerful description of the brain as a prediction machine, which offers progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).