Making tools and making sense: complex, intentional behaviour in human evolution
Stone tool-making is an ancient and prototypically human skill characterized by multiple levels of intentional organization. In a formal sense, it displays surprising similarities to the multi-level organization of human language. Recent functional brain imaging studies of stone tool-making similarly demonstrate overlap with neural circuits involved in language processing. These observations are consistent with the hypothesis that language and tool-making share key requirements for the construction of hierarchically structured action sequences and evolved together in a mutually reinforcing way.
The evolutionary neuroscience of tool making
The appearance of the first intentionally modified stone tools over 2.5 million years ago marked a watershed in human evolutionary history, expanding the human adaptive niche and initiating a trend of technological elaboration that continues to the present day. However, the cognitive foundations of this behavioral revolution remain controversial, as do its implications for the nature and evolution of modern human technological abilities. Here we shed new light on the neural and evolutionary foundations of human tool-making skill by presenting functional brain imaging data from six inexperienced subjects learning to make stone tools of the kind found in the earliest archaeological record. Functional imaging of this complex, naturalistic task was accomplished through positron emission tomography with the slowly decaying radiological tracer (18)fluoro-2-deoxyglucose. Results show that simple stone tool-making is supported by a mosaic of primitive and derived parietofrontal perceptual-motor systems, including recently identified human specializations for representation of the central visual field and perception of three-dimensional form from motion. In the naive tool makers reported here, no activation was observed in prefrontal executive cortices associated with strategic action planning or in inferior parietal cortex thought to play a role in the representation of everyday tool use skills. We conclude that uniquely human capacities for sensorimotor adaptation and affordance perception, rather than abstract conceptualization and planning, were central factors in the initial stages of human technological evolution.
The measurement, evolution, and neural representation of action grammars of human behavior
Human behaviors from toolmaking to language are thought to rely on a uniquely evolved capacity for hierarchical action sequencing. Testing this idea will require objective, generalizable methods for measuring the structural complexity of real-world behavior. Here we present a data-driven approach for extracting action grammars from basic ethograms, exemplified with respect to the evolutionarily relevant behavior of stone toolmaking. We analyzed sequences from the experimental replication of ~ 2.5 Mya Oldowan vs. ~ 0.5 Mya Acheulean tools, finding that, while using the same “alphabet” of elementary actions, Acheulean sequences are quantifiably more complex and Oldowan grammars are a subset of Acheulean grammars. We illustrate the utility of our complexity measures by re-analyzing data from an fMRI study of stone toolmaking to identify brain responses to structural complexity. Beyond specific implications regarding the co-evolution of language and technology, this exercise illustrates the general applicability of our method to investigate naturalistic human behavior and cognition
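The comparison described in this abstract can be illustrated with a toy sketch: two action sequences over a shared alphabet, a minimal "grammar" of adjacent action pairs, and compressed length as a crude complexity proxy. The action codes and sequences below are invented for illustration and are not the study's actual ethogram categories or measures.

```python
# Toy illustration: comparing action "grammars" of two invented sequences
# over a shared alphabet of elementary actions (codes are hypothetical).
import zlib

def bigrams(seq):
    """The set of adjacent action pairs, a minimal stand-in for a grammar."""
    return {seq[i:i + 2] for i in range(len(seq) - 1)}

def complexity(seq):
    """Compressed length as a crude proxy for structural complexity."""
    return len(zlib.compress(seq.encode()))

# P = percussion, F = flake detachment, T = thinning, R = core rotation (invented)
oldowan = "PFPFPFPF" * 4
acheulean = "PFPFTPFRPFTPFR" * 4

subset = bigrams(oldowan) <= bigrams(acheulean)   # Oldowan pairs occur in Acheulean too
richer = complexity(acheulean) > complexity(oldowan)  # Acheulean compresses less
```

With these invented sequences, both checks come out true: the simpler sequence's pair set is contained in the richer one's, mirroring the subset relation the abstract reports between Oldowan and Acheulean grammars.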
Lilia, A Showcase for Fast Bootstrap of Conversation-Like Dialogues Based on a Goal-Oriented System
Recently, many works have proposed to cast human-machine interaction as a sentence generation task. Neural network models can learn how to generate a probable sentence based on the user's statement along with a partial view of the dialogue history. While appealing to some extent, these approaches require huge training sets of general-purpose data and lack a principled way to intertwine language generation with information retrieval from back-end resources to fuel the dialogue with up-to-date and precise knowledge. As a practical alternative, in this paper, we present Lilia, a showcase for fast bootstrap of conversation-like dialogues based on a goal-oriented system. First, a comparison of goal-oriented and conversational system features is conducted; then a conversion process is described for the fast bootstrap of a new system, followed by online training of the system's main components. Lilia is dedicated to a chitchat task, where speakers exchange viewpoints on a displayed image while trying collaboratively to derive its author's intention. Evaluations with user trials showed its efficiency in a realistic setup.
A major electronics upgrade for the H.E.S.S. Cherenkov telescopes 1-4
The High Energy Stereoscopic System (H.E.S.S.) is an array of imaging atmospheric Cherenkov telescopes (IACTs) located in the Khomas Highland in Namibia. It consists of four 12-m telescopes (CT1-4), which started operations in 2003, and a 28-m diameter one (CT5), which was brought online in 2012. It is the only IACT system featuring telescopes of different sizes, which provides sensitivity for gamma rays across a very wide energy range, from ~30 GeV up to ~100 TeV. Since the camera electronics of CT1-4 are much older than those of CT5, an upgrade is being carried out; first deployment was in 2015, and full operation is planned for 2016. The goals of this upgrade are threefold: reducing the dead time of the cameras, improving the overall performance of the array, and reducing the system failure rate related to aging. Upon completion, the upgrade will assure the continuous operation of H.E.S.S. at its full sensitivity until and possibly beyond the advent of CTA. In the design of the new components, several CTA concepts and technologies were used and are thus being evaluated in the field: the upgraded read-out electronics are based on the NECTAr readout chips; the new camera front- and back-end control subsystems are based on an FPGA and an embedded ARM computer; the communication between subsystems is based on standard Ethernet technologies. These hardware solutions offer good performance, robustness, and flexibility. The design of the new cameras is reported here.
Comment: Proceedings of the 34th International Cosmic Ray Conference, 30 July-6 August 2015, The Hague, The Netherlands
Appealing avatars from 3D body scans: Perceptual effects of stylization
Advances in 3D scanning technology allow us to create realistic virtual avatars from full body 3D scan data. However, negative reactions to some realistic computer generated humans suggest that this approach might not always provide the most appealing results. Using styles derived from existing popular character designs, we present a novel automatic stylization technique for body shape and colour information based on a statistical 3D model of human bodies. We investigate whether such stylized body shapes result in increased perceived appeal with two different experiments: One focuses on body shape alone, the other investigates the additional role of surface colour and lighting. Our results consistently show that the most appealing avatar is a partially stylized one. Importantly, avatars with high stylization or no stylization at all were rated to have the least appeal. The inclusion of colour information and improvements to render quality had no significant effect on the overall perceived appeal of the avatars, and we observe that the body shape primarily drives the change in appeal ratings. For body scans with colour information, we found that a partially stylized avatar was most effective, increasing average appeal ratings by approximately 34%
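The "partial stylization" finding suggests a simple picture: stylization as interpolation between a scan and a style target in the coefficient space of a statistical body model. The sketch below is a hypothetical illustration with invented coefficient values, not the paper's actual pipeline.

```python
# Hypothetical sketch: partial stylization as linear interpolation in the
# latent (e.g. PCA) coefficient space of a statistical body model.
# Coefficient vectors are plain lists with invented values.

def stylize(scan, style, alpha):
    """Blend scan toward style; alpha=0 keeps the scan, alpha=1 is fully stylized."""
    return [(1.0 - alpha) * s + alpha * t for s, t in zip(scan, style)]

scan_coeffs = [0.8, -1.2, 0.3, 2.1]    # shape coefficients of a body scan (invented)
style_coeffs = [2.5, 0.4, -1.0, 0.0]   # coefficients of a cartoon-style target (invented)

half_stylized = stylize(scan_coeffs, style_coeffs, 0.5)  # a "partially stylized" shape
```

Under this reading, the experiments locate the most appealing avatar at an intermediate alpha rather than at either endpoint.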
Moving Just Like You: Motor Interference Depends on Similar Motility of Agent and Observer
Recent findings in neuroscience suggest an overlap between brain regions involved in the execution of movement and perception of another’s movement. This so-called “action-perception coupling” is supposed to serve our ability to automatically infer the goals and intentions of others by internal simulation of their actions. A consequence of this coupling is motor interference (MI), the effect of movement observation on the trajectory of one’s own movement. Previous studies emphasized that various features of the observed agent determine the degree of MI, but could not clarify how human-like an agent has to be for its movements to elicit MI and, more importantly, what ‘human-like’ means in the context of MI. Thus, we investigated in several experiments how different aspects of appearance and motility of the observed agent influence MI. Participants performed arm movements in horizontal and vertical directions while observing videos of a human, a humanoid robot, or an industrial robot arm with either artificial (industrial) or human-like joint configurations. Our results show that, given a human-like joint configuration, MI was elicited by observing arm movements of both humanoid and industrial robots. However, if the joint configuration of the robot did not resemble that of the human arm, MI could no longer be demonstrated. Our findings present evidence for the importance of a human-like joint configuration, rather than other human-like features, for perception-action coupling when observing inanimate agents.
Lateral specialization in unilateral spatial neglect : a cognitive robotics model
In this paper, we present the experimental results of an embodied cognitive robotics approach to modelling the human cognitive deficit known as unilateral spatial neglect (USN). To this end, we introduce an artificial neural network architecture designed and trained to control the spatial attentional focus of the iCub robotic platform. Like the human brain, the architecture is divided into two hemispheres, and it incorporates bio-inspired plasticity mechanisms which allow the development of the specialization of the right hemisphere for spatial attention. In this study, we validate the model by replicating a previous experiment with human patients affected by USN, and the numerical results show that the robot mimics the behaviours previously exhibited by humans. We also simulated recovery after the damage to compare the performance of each of the two hemispheres as additional validation of the model. Finally, we highlight some possible advantages of modelling cognitive dysfunctions of the human brain by means of robotic platforms, which can supplement traditional approaches for studying spatial impairments in humans.
Keeping an eye on the violinist: motor experts show superior timing consistency in a visual perception task
Common coding theory states that perception and action may reciprocally induce each other. Consequently, motor expertise should map onto perceptual consistency in specific tasks such as predicting the exact timing of a musical entry. To test this hypothesis, ten string musicians (motor experts), ten non-string musicians (visual experts), and ten non-musicians were asked to watch progressively occluded video recordings of a first violinist indicating entries to fellow members of a string quartet. Participants synchronised with the perceived timing of the musical entries. Results revealed significant effects of motor expertise on perception. Compared to visual experts and non-musicians, string players not only responded more accurately, but also with less timing variability. These findings provide evidence that motor experts’ consistency in movement execution—a key characteristic of expert motor performance—is mirrored in lower variability in perceptual judgements, indicating close links between action competence and perception