Biological learning and artificial intelligence
It was once taken for granted that learning in animals and man could be explained with a simple set of general learning rules, but over the last hundred years, a substantial amount of evidence has accumulated that points in a quite different direction. In animal learning theory, the laws of learning are no longer considered general. Instead, it has been necessary to explain behaviour in terms of a large set of interacting learning mechanisms and innate behaviours. Artificial intelligence is now on the edge of making the transition from general theories to a view of intelligence that is based on an amalgam of interacting systems. In the light of the evidence from animal learning theory, such a transition is highly desirable.
Speech Development by Imitation
The Double Cone Model (DCM) is a model of how the brain transforms sensory input to motor commands through successive stages of data compression and expansion. We have tested a subset of the DCM on speech recognition, production and imitation. The experiments show that the DCM is a good candidate for an artificial speech processing system that can develop autonomously. We show that the DCM can learn a repertoire of speech sounds by listening to speech input. It is also able to link the individual elements of speech into sequences that can be recognized or reproduced, thus allowing the system to imitate spoken language.
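The abstract does not spell out how the compression and expansion stages are implemented, so the following is only a minimal sketch of the general idea, assuming random linear stages with a tanh nonlinearity as stand-ins for whatever mappings the DCM actually learns; the layer sizes and the input dimensionality are likewise illustrative.

```python
# Minimal sketch of successive compression and expansion stages (not the DCM itself).
import numpy as np

rng = np.random.default_rng(0)

def random_stage(n_in, n_out):
    """Return a random linear stage, a stand-in for a learned mapping."""
    return rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)

# Sensory input (e.g. one frame of speech features) is compressed in steps...
compress = [random_stage(64, 32), random_stage(32, 8)]
# ...and then expanded again toward a motor representation.
expand = [random_stage(8, 32), random_stage(32, 64)]

def double_cone(x):
    """Pass an input vector through the compression and expansion stages."""
    for w in compress + expand:
        x = np.tanh(w @ x)
    return x

frame = rng.standard_normal(64)   # stand-in for one speech feature frame
motor = double_cone(frame)        # stand-in for the resulting motor code
print(motor.shape)                # (64,)
```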
First Steps Toward a Computational Theory of Autism
A computational model with three interacting components for context-sensitive reinforcement learning, context processing and automation can autonomously learn a focus-attention task and a shift-attention task. The performance of the model is similar to that of normal children, and when a single parameter is changed, the performance on the two tasks approaches that of autistic children.
Event Prediction and Object Motion Estimation in the Development of Visual Attention
A model of gaze control is described that includes mechanisms for predictive control using a forward model and event-driven expectations of target behavior. The model roughly passes through stages similar to those of human infants if the influence of the predictive systems is gradually increased.
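As a rough illustration of blending reactive tracking with forward-model prediction, here is a toy sketch; the constant-velocity forward model, the single blending gain and the example numbers are assumptions made for illustration, not the mechanisms of the model described above.

```python
# Toy sketch: blend reactive tracking with a forward-model prediction of the target.
import numpy as np

def forward_model(pos, vel, dt=1.0):
    """Predict the next target position, assuming roughly constant velocity."""
    return pos + vel * dt

def gaze_command(target_pos, target_vel, predictive_gain):
    """Blend the current target position with the predicted one.

    predictive_gain in [0, 1]: 0 = purely reactive gaze, 1 = fully predictive.
    Gradually increasing the gain mimics the developmental progression in the abstract.
    """
    predicted = forward_model(target_pos, target_vel)
    return (1 - predictive_gain) * target_pos + predictive_gain * predicted

pos, vel = np.array([0.0, 0.0]), np.array([1.0, 0.5])
for gain in (0.0, 0.5, 1.0):
    print(gain, gaze_command(pos, vel, gain))
```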
Cognitive modeling with context sensitive reinforcement learning
We describe how a standard reinforcement learning algorithm can be changed to include a second contextual input that modulates the learning in the original algorithm. The new algorithm takes the context into account during relearning, when the previously learned actions are no longer valid. The algorithm was tested on a number of cognitive experiments and shown to reproduce the learning in both a task-switching test and the Wisconsin Card Sorting Test. In addition, the algorithm was able to learn a context-sensitive categorization of objects in the Labov experiment.
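The abstract does not give the exact form of the contextual modulation, so the sketch below only shows one simple way a second contextual input can enter a standard tabular Q-learning update: keying the value table on (context, state, action), so that a context switch triggers relearning without overwriting what was learned in the old context. The class name, parameters and toy example are hypothetical.

```python
# Sketch of context-keyed tabular Q-learning (an illustrative simplification).
from collections import defaultdict
import random

class ContextualQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # values keyed by (context, state, action)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, context, state):
        """Epsilon-greedy action selection within the current context."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(context, state, a)])

    def update(self, context, state, action, reward, next_state):
        """Standard TD update, but per context, so old policies stay intact."""
        best_next = max(self.q[(context, next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(context, state, action)]
        self.q[(context, state, action)] += self.alpha * td_error

# Hypothetical usage: the same stimulus maps to different actions in different contexts.
learner = ContextualQLearner(actions=[0, 1])
learner.update(context="sort_by_color", state="card", action=0, reward=1.0, next_state="card")
print(learner.act("sort_by_color", "card"))
```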
Nobody's Perfect : On Trust in Social Robot Failures
With robots increasingly succeeding in exhibiting more human-like behaviours, humans may be more likely to "forgive" their errors and continue to trust them as a result of ascribing higher, more human-like intelligence to them. If an integral aspect of successful HRI is to accurately communicate the competence of a robot, it can be argued that the technical success of the robot in exhibiting human-like behaviour can, in some cases, lead to a failure of the interaction by resulting in misperceived human-like competence. We highlight this through the example of speech in robots, and discuss the implications of failures and their role in HRI design.
Testing the Error Recovery Capabilities of Robotic Speech
Trust in Human-Robot Interaction is a widely studied subject, and yet few studies have examined the ability to speak and how it impacts trust towards a robot. Errors can have a negative impact on the perceived trustworthiness of a robot. However, there seem to be mitigating effects, such as using a humanoid robot, which has been shown to be perceived as more trustworthy than a more mechanical robot with the same high error rate. We want to use a humanoid robot to test whether speech can increase anthropomorphism and mitigate the effects of errors on trust. For this purpose, we are planning an experiment where participants solve a sequence completion task, with the robot giving suggestions (either verbal or non-verbal) for the solution. In addition, we want to measure whether the degree of error (slight error vs. severe error) has an impact on the participants' behaviour and the robot's perceived trustworthiness, since making a severe error would affect trust more than a slight error. Participants will be assigned to three groups, where we will vary the degree of accuracy of the robot's answers (correct vs. almost right vs. obviously wrong). They will complete ten series of a sequence completion task and rate the trustworthiness and general perception (Godspeed Questionnaire) of the robot. We also present our thoughts on the implications of potential results.
Design and technical construction of a tactile display for sensory feedback in a hand prosthesis system
Background: The users of today's commercial prosthetic hands are not given any conscious sensory feedback. To overcome this deficiency in prosthetic hands we have recently proposed a sensory feedback system utilising a "tactile display" on the remaining amputation residual limb acting as a man-machine interface. Our system uses the recorded pressure in a hand prosthesis and feeds back this pressure onto the forearm skin. Here we describe the design and technical solution of the sensory feedback system aimed at hand prostheses for trans-radial/humeral amputees. Critical parameters for the sensory feedback system were investigated.
Methods: A sensory feedback system consisting of five actuators, control electronics and a test application running on a computer was designed and built. Firstly, we investigated which force levels were applied to the forearm skin of the user while operating the sensory feedback system. Secondly, we studied whether the proposed system could be used together with a myoelectric control system, since the displacement of the skin caused by the sensory feedback system could generate artefacts in the recorded myoelectric signals. Accordingly, EMG recordings were performed and an analysis of these is included. The sensory feedback system was also preliminarily evaluated in a laboratory setting on two healthy non-amputated test subjects, with a computer generating the stimuli, with regard to spatial resolution and force discrimination.
Results: We showed that the sensory feedback system generated a force approximately proportional to the angle of control. The system can be used together with a myoelectric system, as the artefacts generated by the actuators were easily removed using a simple filter. Furthermore, the application of the system on two test subjects showed that they were able to discriminate tactile sensation with regard to spatial resolution and level of force.
Conclusions: The results of these initial experiments in non-amputees indicate that the proposed tactile display, in its simple form, can be used to relocate tactile input from an artificial hand to the forearm, and that the system can coexist with a myoelectric control system. The proposed system may be a valuable addition for users of myoelectric prostheses, providing conscious sensory feedback during manipulation of objects.
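The abstract states only that the actuator artefacts were easily removed with a simple filter; the sketch below shows one plausible choice, a zero-phase high-pass Butterworth filter. The 20 Hz cutoff, the sampling rate and the synthetic signals are all assumptions for illustration.

```python
# Sketch: removing low-frequency actuator artefacts from an EMG recording.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                  # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)

emg = 0.1 * np.random.randn(t.size)          # stand-in for the EMG signal
artefact = 0.5 * np.sin(2 * np.pi * 5 * t)   # slow skin-displacement artefact
recorded = emg + artefact

# 4th-order high-pass Butterworth at 20 Hz, applied with zero-phase filtering.
b, a = butter(4, 20.0, btype="highpass", fs=fs)
cleaned = filtfilt(b, a, recorded)

print(np.std(recorded), np.std(cleaned))     # artefact power largely removed
```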