
    The Immune System: the ultimate fractionated cyber-physical system

    In this short vision paper we analyze the human immune system from a computer science point of view, with the aim of understanding the architecture and features that allow robust, effective behavior to emerge from local sensing and actions. We then recall the notion of fractionated cyber-physical systems, and compare and contrast this with the immune system. We conclude with some challenges. Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455

    Understanding drawing: a cognitive account of observational process

    This thesis contributes to theorising observational drawing from a cognitive perspective. Our current understanding of drawing is developing rapidly through artistic and scientific enquiry. However, it remains fragmented because the frames of reference of those modes of enquiry do not coincide. The foundations for a truly interdisciplinary understanding of observational drawing are therefore still emerging. This thesis seeks to add to those foundations by bridging artistic and scientific perspectives on observational process and the cognitive aptitudes underpinning it. The project is based on four case studies of experienced artists' drawing processes, with quantitative and qualitative data gathered: the timing of eye and hand movements, and the artists' verbal reports. The data sets are analysed with a generative approach, using behavioural and protocol analysis methods to yield comparative models that describe cognitive strategies for drawing. This forms a grounded framework that elucidates the cognitive activities and competences observational process entails. Cognitive psychological theory is consulted to explain the observed behaviours, and the combined evidence is applied to understanding apparent discrepancies in existing accounts of drawing. In addition, the use of verbal reporting methods in drawing studies is evaluated. The study observes how the drawing process involves a segregation of activities that enables efficient use of limited and parametrically constrained cognitive resources. Differing drawing strategies are shown to share common key characteristics, including a staged use of selective visual attention and the capacity to temporarily postpone critical judgement in order to engage fully in periods of direct perception and action. The autonomy and regularity of those activities, demonstrated by the artists studied, indicate that drawing ability entails tacit self-knowledge concerning the cognitive and perceptual capacities described in this thesis.
    This thesis presents drawing as a skill that involves strategic use of visual deconstruction, comparison, analogical transfer, and repetitive cycles of construction, evaluation and revision. I argue that drawing skill acquisition and transfer can be facilitated by the elucidation of these processes. As such, this framework for describing and understanding drawing is offered to those who seek to understand, learn or teach observational practice, and to those who are taking a renewed interest in drawing as a tool for thought.

    Detecting Physical Collaborations in a Group Task Using Body-Worn Microphones and Accelerometers

    This paper presents a method of using wearable accelerometers and microphones to detect instances of ad-hoc physical collaboration between members of a group. Four people are instructed to construct a large video wall and must cooperate to complete the task. The task is loosely structured, with minimal outside assistance, to better reflect the ad-hoc nature of many real-world construction scenarios. Audio data, recorded from chest-worn microphones, is used to reveal information on collocation, i.e. whether or not participants are near one another. Movement data, recorded using 3-axis accelerometers worn on each person's head and wrists, is used to provide information on correlated movements, such as when participants help one another to lift a heavy object. Collocation and correlated-movement information is then combined to determine who is working together at any given time. The work shows how data from commonly available sensors can be combined across multiple people using a simple, low-power algorithm to detect a range of physical collaborations.
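    The fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function names, the sliding-window length, and the correlation threshold are all assumptions made for the example. It computes windowed correlation between two people's acceleration-magnitude signals and labels a window as collaborative only when the pair is both colocated and moving in a correlated way.

    ```python
    import numpy as np

    def movement_correlation(acc_a, acc_b, window=100):
        """Pearson correlation between two people's acceleration-magnitude
        signals, computed over consecutive non-overlapping windows."""
        mag_a = np.linalg.norm(acc_a, axis=1)  # collapse 3 axes to magnitude
        mag_b = np.linalg.norm(acc_b, axis=1)
        corrs = []
        for start in range(0, len(mag_a) - window + 1, window):
            wa = mag_a[start:start + window]
            wb = mag_b[start:start + window]
            if wa.std() == 0 or wb.std() == 0:
                corrs.append(0.0)  # guard against zero-variance windows
            else:
                corrs.append(float(np.corrcoef(wa, wb)[0, 1]))
        return np.array(corrs)

    def collaborating(colocated, corr, corr_threshold=0.5):
        """A pair is labelled as collaborating in a window when the audio
        says they are colocated AND their movements are strongly correlated."""
        return colocated & (corr > corr_threshold)
    ```

    In practice the collocation flags would come from comparing the chest-worn audio streams; here they are simply taken as a boolean input per window.
    
    
    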

    Group activity recognition using belief propagation for wearable devices


    Prediction during simultaneous interpreting: Evidence from the visual-world paradigm

    We report the results of an eye-tracking study which used the Visual World Paradigm (VWP) to investigate the time-course of prediction during a simultaneous interpreting task. Twenty-four L1 French professional conference interpreters and twenty-four L1 French professional translators untrained in simultaneous interpretation listened to sentences in English and interpreted them simultaneously into French while looking at a visual scene. Sentences contained a highly predictable word (e.g., The dentist asked the man to open his mouth a little wider). The visual scene comprised four objects, one of which depicted either the target object (mouth; bouche), an English phonological competitor (mouse; souris), a French phonological competitor (cork; bouchon), or an unrelated word (bone; os). We considered 1) whether interpreters and translators predict upcoming nouns during a simultaneous interpreting task, 2) whether interpreters and translators predict the form of these nouns in English and in French, and 3) whether interpreters and translators manifest different predictive behaviour. Our results suggest that both interpreters and translators predict upcoming nouns, but neither group predicts the word-form of these nouns. In addition, we did not find significant differences between patterns of prediction in interpreters and translators. Thus, evidence from the visual-world paradigm shows that prediction takes place in simultaneous interpreting, regardless of training and experience. However, we were unable to establish whether word-form was predicted.

    Emerging research directions in computer science: contributions from the young informatics faculty in Karlsruhe

    In order to build better human-friendly human-computer interfaces, such interfaces need to be enabled with capabilities to perceive the user: his location, identity, activities and, in particular, his interaction with others and the machine. Only with these perception capabilities can smart systems (for example human-friendly robots or smart environments) become possible. In my research I'm thus focusing on the development of novel techniques for the visual perception of humans and their activities, in order to facilitate perceptive multimodal interfaces, humanoid robots and smart environments. My work includes research on person tracking, person identification, recognition of pointing gestures, estimation of head orientation and focus of attention, as well as audio-visual scene and activity analysis. Application areas are human-friendly humanoid robots, smart environments, content-based image and video analysis, as well as safety- and security-related applications. This article gives a brief overview of my ongoing research activities in these areas.

    Group Activity Recognition Using Wearable Sensing Devices

    Understanding the behavior of groups in real time can help prevent tragedy in crowd emergencies. Wearable devices allow sensing of human behavior, but the infrastructure required to communicate data is often the first casualty in emergency situations. Peer-to-peer (P2P) methods for recognizing group behavior are therefore necessary, yet the behavior of the group cannot be observed at any single location. The contribution of this work is a set of methods for recognizing group behavior using only wearable devices.
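    The P2P idea above can be illustrated with a toy consensus routine. This is not the paper's algorithm: the function name, the averaging scheme, and the iteration count are assumptions chosen for a minimal sketch. Each device holds a local belief over group-activity classes and repeatedly averages it with the beliefs of its radio neighbours, so a shared group-level estimate emerges without any central collector.

    ```python
    import numpy as np

    def p2p_consensus(local_beliefs, adjacency, iterations=10):
        """Each device averages its belief over group-activity classes
        with those of its neighbours (synchronous updates), then
        renormalises so each row remains a probability distribution."""
        beliefs = np.array(local_beliefs, dtype=float)
        for _ in range(iterations):
            updated = beliefs.copy()
            for i, neighbours in adjacency.items():
                msgs = [beliefs[j] for j in neighbours] + [beliefs[i]]
                updated[i] = np.mean(msgs, axis=0)
            beliefs = updated
            beliefs /= beliefs.sum(axis=1, keepdims=True)
        return beliefs
    ```

    On a connected communication graph this simple averaging drives all devices toward a common estimate, which is the property a P2P recognizer needs when no infrastructure survives to aggregate the data centrally.
    
    
    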

    Student interpreters predict meaning while simultaneously interpreting - even before training

    Prediction has long been considered advantageous in simultaneous interpreting, as it may allow interpreters to comprehend more rapidly and focus on their own production. However, evidence of prediction in simultaneous interpreting to date is relatively limited. In addition, it is unclear whether training in simultaneous interpreting influences predictive processing during a simultaneous interpreting task. We report on a longitudinal eye-tracking study which measured the timing and extent of prediction in students before and after two semesters of training in simultaneous interpreting. The students simultaneously interpreted sentences containing a highly predictable word as they viewed a screen containing four pictures, one of which depicted a highly predictable object. They made predictive eye movements to the highly predictable object both before and after their training in simultaneous interpreting. However, we did not find evidence that training influenced the timing or the magnitude of their prediction.

    Computational Intelligence and Human–Computer Interaction: Modern Methods and Applications

    The present book contains all of the articles that were accepted and published in the Special Issue of MDPI's journal Mathematics titled "Computational Intelligence and Human–Computer Interaction: Modern Methods and Applications". This Special Issue covered a wide range of topics connected to the theory and application of different computational intelligence techniques to the domain of human–computer interaction, such as automatic speech recognition, speech processing and analysis, virtual reality, emotion-aware applications, digital storytelling, natural language processing, smart cars and devices, and online learning. We hope that this book will be interesting and useful for those working in various areas of artificial intelligence, human–computer interaction, and software engineering, as well as for those who are interested in how these domains are connected in real-life situations.
