95 research outputs found

    7th TĂĽbingen Perception Conference: TWK 2004

    No full text

    Apprentissage simultané d'une tâche nouvelle et de l'interprétation de signaux sociaux d'un humain en robotique

    Get PDF
    This thesis investigates how a machine can be taught a new task from unlabeled human instructions, that is, without knowing beforehand how to associate the human communicative signals with their meanings. The theoretical and empirical work presented in this thesis provides means to create calibration-free interactive systems, which allow humans to interact with machines, from scratch, using their own preferred teaching signals. It therefore removes the need for an expert to tune the system for each specific user, which constitutes an important step towards flexible personalized teaching interfaces, a key for the future of personal robotics. Our approach assumes the robot has access to a limited set of task hypotheses, which include the task the user wants to solve. Our method consists of generating interpretation hypotheses of the teaching signals with respect to each hypothesized task. By building a set of hypothetical interpretations, i.e. a set of signal-label pairs for each task, the task the user wants to solve is identified as the one that best explains the history of interaction. We consider different scenarios, including a pick-and-place robotics experiment with speech as the modality of interaction, and a navigation task in a brain-computer interaction scenario. In these scenarios, a teacher instructs a robot to perform a new task using initially unclassified signals, whose associated meaning can be a feedback (correct/incorrect) or a guidance (go left, right, up, ...). Our results show that a) it is possible to learn the meaning of unlabeled and noisy teaching signals, as well as a new task, at the same time, and b) it is possible to reuse the acquired knowledge about the teaching signals to learn new tasks faster. We further introduce a planning strategy that exploits uncertainty from the task and the signals' meanings to allow more efficient learning sessions.
    We present a study in which several real human subjects successfully control a virtual device using their brain signals, without relying on a calibration phase. Our system identifies, from scratch, both the target intended by the user and the decoder of brain signals. Building on this work, but from another perspective, we introduce a new experimental setup to study how humans behave in asymmetric collaborative tasks. In this setup, two humans have to collaborate to solve a task, but the communication channels they may use are constrained and force them to invent and agree on a shared interaction protocol in order to solve the task. These constraints allow analyzing how a communication protocol is progressively established through the interplay and history of individual actions.
    This thesis addresses a logical problem with multiple theoretical and practical stakes. Put simply, it can be stated as follows: imagine you are in a maze and you know every route leading to each of its exit doors. Behind one of these doors lies a treasure, but you are allowed to open only one door. An old man living in the maze knows the right exit and offers to help you identify it. To do so, he will tell you which direction to take at each intersection. Unfortunately, this man does not speak your language, and the words he uses for "right" or "left" are unknown to you. Is it possible to find the treasure and to understand the association between the old man's words and their meanings? This problem, although seemingly abstract, is related to concrete issues in human-machine interaction. Replace the old man with a user who wants to guide a robot towards a specific exit of the maze. The robot does not know in advance which exit is the right one, but it knows where each door is and how to get there. Now imagine that the robot does not understand the human's language a priori; indeed, it is very difficult to build a robot able to perfectly understand every language, accent, and personal preference. The robot must therefore learn the association between the user's words and their meanings while carrying out the task the human is indicating (i.e. finding the right door). Another way to describe this problem is in terms of self-calibration. Indeed, solving it would amount to creating interfaces that require no calibration phase, because the machine could adapt, automatically and during the interaction, to different people who do not speak the same language or who do not use the same words to say the same thing. It also means that other interaction modalities (for example gestures, facial expressions, or brain waves) could easily be considered. In this thesis, we present a solution to this problem. We apply our algorithms to two typical examples of human-robot interaction and brain-computer interaction: a task of arranging a series of objects according to the preferences of a user who guides the robot by voice, and a navigation task on a grid guided by the user's brain signals. The latter experiments were carried out with real users. Our results demonstrate experimentally that our approach works and enables practical use of an interface without prior calibration.
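The selection mechanism described above can be sketched in a few lines. This is a deliberately simplified illustration with hypothetical data and task names, not the thesis implementation: each task hypothesis induces a labeling of the past teaching signals (was each action correct or incorrect under that task?), and the winning task is the one under which a simple per-label Gaussian model best explains the signal history.

```python
import math
import statistics

def labeling_score(signals, labels):
    """Log-likelihood of scalar signals under a per-label Gaussian model.

    A hypothesis that labels the signals consistently produces tight,
    low-variance groups and therefore a higher score.
    """
    score = 0.0
    for lab in set(labels):
        group = [s for s, l in zip(signals, labels) if l == lab]
        if len(group) < 2:
            continue
        mu = statistics.fmean(group)
        sd = statistics.pstdev(group) or 1e-6
        score += sum(-math.log(sd) - (s - mu) ** 2 / (2 * sd ** 2)
                     for s in group)
    return score

def best_task(signals, labelings_by_task):
    """Pick the task whose induced signal-label pairs best explain
    the interaction history."""
    return max(labelings_by_task,
               key=lambda t: labeling_score(signals, labelings_by_task[t]))
```

For instance, if the user's "correct" signals cluster near +1 and "incorrect" ones near -1, only the true task's labeling partitions them cleanly, so `best_task` recovers it without any prior signal-to-meaning calibration.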

    Design of large polyphase filters in the Quadratic Residue Number System

    Full text link

    The Aha! Experience of Spatial Reorientation

    Get PDF
    The experience of spatial re-orientation is investigated as an instance of the well-known phenomenon of the Aha! moment. The research question is: What are the visuospatial conditions that are most likely to trigger the spatial Aha! experience? The literature suggests that spatial re-orientation relies mainly on the geometry of the environment, and a visibility graph analysis is used to quantify the visuospatial information. Theories from environmental psychology point towards two hypotheses. The Aha! experience may be triggered by a change in the amount of visual information, described by the isovist properties of area and revelation, or by a change in the complexity of the visual information, associated with the isovist properties of clustering coefficient and visual control. Data from participants' exploratory behaviour and EEG recordings are collected during wayfinding in virtual reality urban environments. Two types of events are of interest here: (a) sudden changes of the visuospatial information preceding participants' response, to investigate changes in EEG power; and (b) participants' brain dynamics (Aha! effect) just before the response, to examine differences in isovist values at this location. Research on insight, time-frequency analysis of the P3 component and findings from navigation and orientation studies suggest that the spatial Aha! experience may be reflected by a parietal alpha power decrease, associated with the switch of representation, and a frontocentral theta increase, indexing spatial processing during decision-making. Single-trial time-frequency analysis is used to classify trials into two conditions based on the alpha/theta power differences between a 3 s time period before participants' response and a time period of equal duration before that. Behavioural results show that participants are more likely to respond at locations with low values of clustering coefficient and high values of visual control.
The EEG analysis suggests that the alpha decrease/theta increase condition occurs at locations with significantly lower values of clustering coefficient and higher values of visual control. Small and large decreases in clustering coefficient just before the response are associated with significant differences in delta/theta power. The values of area and revelation do not show significant differences. Both behavioural and EEG results suggest that the Aha! experience of re-orientation is more likely to be triggered by a change in the complexity of the visuospatial environment rather than a change in its amount, as measured by the relevant isovist properties.
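The clustering coefficient used in the study is the standard local graph measure applied to a visibility graph, where nodes are locations and an edge means two locations can see each other. A minimal sketch, using a hypothetical toy graph rather than data from the experiment:

```python
def clustering_coefficient(graph, node):
    """Fraction of pairs of `node`'s visible neighbours that are also
    visible to each other; low values mark visually complex locations,
    which the study links to the spatial Aha! experience."""
    nbrs = graph[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # Count edges among the neighbours (each unordered pair once).
    links = sum(1 for i, a in enumerate(nbrs)
                for b in nbrs[i + 1:] if b in graph[a])
    return 2 * links / (k * (k - 1))
```

In an open plaza every visible neighbour also sees the others (coefficient near 1), while at a narrow junction between occluded corridors the neighbours are mutually hidden (coefficient near 0).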

    Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    No full text
    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. We then applied a decision-making model, sequential probability ratio testing (SPRT), to single-trial classification of motor imagery movement events. The unique strength of this classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than those with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, compared with 82.3% for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI.
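The accumulative process and the explicit stopping-time/error relationship mentioned above are properties of Wald's generic SPRT. A minimal sketch, assuming the per-observation likelihood ratios p(x|H1)/p(x|H0) are already available (e.g. from single-trial features); this is not the paper's exact pipeline:

```python
import math

def sprt(likelihood_ratios, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test.

    Accumulates log-likelihood ratios one observation at a time and
    stops as soon as a threshold is crossed; alpha and beta are the
    target false-positive and false-negative rates, which fix the
    thresholds and hence the time-accuracy trade-off.
    """
    upper = math.log((1 - beta) / alpha)   # cross it: accept H1
    lower = math.log(beta / (1 - alpha))   # cross it: accept H0
    llr = 0.0
    for t, lr in enumerate(likelihood_ratios, start=1):
        llr += math.log(lr)
        if llr >= upper:
            return "H1", t
        if llr <= lower:
            return "H0", t
    return "undecided", len(likelihood_ratios)
```

With weakly informative evidence (ratios near 1) the test keeps accumulating; strong evidence stops it after a few observations, which is the speed-up the abstract describes.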

    The cognitive representation of the large-scale environment

    Get PDF
    This thesis is concerned with the processes involved in the acquisition and use of cognitive representations of the large-scale environment, or 'cognitive mapping'. The first half of the thesis reviews relevant literature in three main sections. Firstly, the historical roots of the subject are described in chapters on early investigations of wayfinding and orientation, theoretical models of behaviour incorporating the concept of subjective knowledge, and multidisciplinary studies of environmental images. Secondly, studies of group differences in cognitive mapping and initial theoretical frameworks are reviewed. Finally, the current state of research evidence is assessed in relation to four research areas, concerning methodological issues, the structure of internal representations, the process of acquiring new representations and individual differences in cognitive mapping. The remainder of the thesis reports and discusses four experimental studies of issues judged, on the basis of the literature review, to be inadequately researched. The first compared the utility of freehand sketch-mapping and three-dimensional modelling with educated, adult subjects. The second investigated the rate of acquisition of cognitive maps, particularly during the first days of environmental experience, using a structured mapping task. Objective accuracy, subjective ratings of accuracy and recall order were examined in relation to building usage and spatial experience. The third experiment compared artificial map learning with spatial relations ability, visual imagery ratings and everyday map usage. Additionally, the effect upon learning of stimulus mode (map or verbal list), response mode and stimulus-response mode compatibility was measured. The final experiment compared performance on the 'real-life' mapping task of the second study with the map learning and spatial ability measures used in the third study. Evidence was found that cognitive mapping, spatial ability and attitudes to navigational problems are positively related. It was concluded that future work should emphasize the process of cognitive mapping and the relationship between map form and practical needs.