171 research outputs found

    Mental imagery of object motion in weightlessness

    Mental imagery represents a potential countermeasure for sensorimotor and cognitive dysfunctions due to spaceflight. It might help train people to deal with conditions unique to spaceflight. For example, dynamic interactions with the inertial motion of weightless objects are only experienced in weightlessness, but they can be simulated on Earth using mental imagery. Such training might overcome the problem of calibrating fine-grained hand forces and estimating the spatiotemporal parameters of the resulting object motion. Here, a group of astronauts grasped an imaginary ball, threw it against the ceiling or the front wall, and caught it after the bounce, during pre-flight, in-flight, and post-flight experiments. They varied the throwing speed across trials and imagined that the ball moved under Earth's gravity or weightlessness. We found that the astronauts were able to reproduce qualitative differences between inertial and gravitational motion already on the ground, and they further adapted their behavior during spaceflight. Thus, they adjusted the throwing speed and the catching time, equivalent to the duration of virtual ball motion, as a function of the imaginary 0 g condition versus the imaginary 1 g condition. Arm kinematics of the frontal throws further revealed differential processing of the imagined gravity level in terms of the spatial features of the arm and virtual ball trajectories. We suggest that protocols of this kind may facilitate sensorimotor adaptation and help tune vestibular plasticity in-flight, since mental imagery of gravitational motion is known to engage the vestibular cortex.
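
    As an illustration of the contrast the astronauts had to imagine, the sketch below compares the flight time of a ball thrown straight up toward the ceiling under Earth gravity versus under weightlessness. The distance and throwing speed are made-up values for illustration only; the computation is standard kinematics, not the paper's analysis.

```python
# Illustrative sketch (not from the paper): flight time of a ball thrown
# straight up toward a ceiling at distance d, under 1 g versus 0 g.
import math

G = 9.81  # m/s^2, Earth gravity

def flight_time_0g(d, v0):
    """Under weightlessness the ball moves inertially at constant speed."""
    return d / v0

def flight_time_1g(d, v0):
    """Under gravity the ball decelerates: d = v0*t - 0.5*G*t^2 (upward throw).
    Returns the time of first arrival, or None if v0 is too slow to reach d."""
    disc = v0**2 - 2 * G * d
    if disc < 0:
        return None
    return (v0 - math.sqrt(disc)) / G

d, v0 = 1.5, 6.0  # hypothetical ceiling distance (m) and throwing speed (m/s)
print(f"0 g: {flight_time_0g(d, v0):.3f} s, 1 g: {flight_time_1g(d, v0):.3f} s")
```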

    Automatic Classification of Text Databases through Query Probing

    Many text databases on the web are "hidden" behind search interfaces, and their documents are only accessible through querying. Search engines typically ignore the contents of such search-only databases. Recently, Yahoo-like directories have started to manually organize these databases into categories that users can browse to find these valuable resources. We propose a novel strategy to automate the classification of search-only text databases. Our technique starts by training a rule-based document classifier, and then uses the classifier's rules to generate probing queries. The queries are sent to the text databases, which are then classified based on the number of matches that they produce for each query. We report initial exploratory experiments showing that our approach is a promising way to automatically characterize the contents of text databases accessible on the web.
    Comment: 7 pages, 1 figure
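
    A minimal sketch of the probing idea described above; the rules, categories, and match counts below are hypothetical placeholders, not the paper's classifier:

```python
# Sketch: each classification rule becomes a probing query; the database is
# assigned to the category whose probing queries produce the most matches.
from collections import defaultdict

# Hypothetical rule-based classifier: query terms -> category.
rules = {
    ("cancer", "treatment"): "Health",
    ("stock", "earnings"): "Finance",
    ("galaxy", "telescope"): "Science",
}

def classify_database(get_match_count):
    """get_match_count(query_terms) -> number of documents the database's
    search interface reports for that query (assumed observable)."""
    scores = defaultdict(int)
    for terms, category in rules.items():
        scores[category] += get_match_count(terms)
    return max(scores, key=scores.get)

# Toy stand-in for a remote search-only database.
fake_counts = {("cancer", "treatment"): 120, ("stock", "earnings"): 3,
               ("galaxy", "telescope"): 0}
print(classify_database(lambda q: fake_counts.get(q, 0)))  # -> "Health"
```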

    Approximate String Joins in a Database (Almost) for Free -- Erratum

    In [GIJ+01a, GIJ+01b] we described how to use q-grams in an RDBMS to perform approximate string joins. We also showed how to implement the approximate join using plain SQL queries. Specifically, we described three filters, the count filter, the position filter, and the length filter, which can be used to execute the approximate join efficiently. The intuition behind the count filter was that strings that are similar have many q-grams in common. In particular, two strings s1 and s2 can have up to max{|s1|, |s2|} + q - 1 common q-grams. When s1 = s2, they have exactly that many q-grams in common. When s1 and s2 are within edit distance k, they share at least (max{|s1|, |s2|} + q - 1) - kq q-grams, since kq is the maximum number of q-grams that can be affected by k edit operations.
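
    A minimal sketch of the count filter as stated above, assuming the usual q-gram extraction over padded string ends; the example strings, q, and k are illustrative only:

```python
# Sketch of the q-gram count filter: a padded string s yields |s| + q - 1
# q-grams; per the erratum's statement, strings within edit distance k are
# required to share at least (max(|s1|, |s2|) + q - 1) - k*q of them.
from collections import Counter

def qgrams(s, q=3, pad="#"):
    s = pad * (q - 1) + s + pad * (q - 1)  # pad so string ends contribute q-grams
    return Counter(s[i:i + q] for i in range(len(s) - q + 1))

def passes_count_filter(s1, s2, k, q=3):
    shared = sum((qgrams(s1, q) & qgrams(s2, q)).values())  # multiset intersection
    bound = (max(len(s1), len(s2)) + q - 1) - k * q
    return shared >= bound

print(passes_count_filter("database", "databse", k=1))      # True (1 deletion)
print(passes_count_filter("database", "spreadsheet", k=1))  # False
```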

    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, and be able to attend to its interaction partner while it is speaking – and modify its communicative behavior on-the-fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that are released for public access.

    Competition Breeds Desire

    Desire spurs competition; here we explore whether the converse is also true. In one study, female quartets (N = 58) completed anagrams, with the winner to receive compact speakers; controls anagrammed without competition. In the other study, female quartets (N = 74) described their ideal first date to a male judge, who chose the best description; controls read to him others' date descriptions without competition. In both studies, creating competition increased desire and altered how much participants wanted, but not how much they liked, the competed-for thing. Competition may activate a general “wanting system,” producing overvaluing in settings from stock markets to partner selection.

    When to elicit feedback in dialogue: Towards a model based on the information needs of speakers

    Buschmeier H, Kopp S. When to elicit feedback in dialogue: Towards a model based on the information needs of speakers. In: Proceedings of the 14th International Conference on Intelligent Virtual Agents. Boston, MA, USA; 2014: 71-80.
    Communicative feedback in dialogue is an important mechanism that helps interlocutors coordinate their interaction. Listeners pro-actively provide feedback when they think that it is important for the speaker to know their mental state, and speakers pro-actively seek listener feedback when they need information on whether a listener perceived, understood or accepted their message. This paper presents first steps towards a model for enabling attentive speaker agents to determine when to elicit feedback based on continuous assessment of their information needs about a user's listening state.
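
    The following is a deliberately simplified illustration of eliciting feedback when the speaker's information need is high; the variable names, threshold, and decision rule are my own assumptions, not the authors' model:

```python
# Sketch: an attentive speaker elicits feedback when its uncertainty about the
# listener's state, weighted by how much it matters, exceeds a threshold.
def should_elicit_feedback(p_understood, importance, threshold=0.6):
    """p_understood: current estimate that the listener understood the ongoing
    utterance (0..1, continuously updated from listener behavior).
    importance: how much the speaker needs to know this (0..1).
    All values and the rule itself are hypothetical."""
    information_need = (1.0 - p_understood) * importance
    return information_need > (1.0 - threshold)

# Shaky evidence of understanding on an important instruction -> ask for feedback.
print(should_elicit_feedback(p_understood=0.4, importance=0.9))  # True
print(should_elicit_feedback(p_understood=0.9, importance=0.9))  # False
```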

    An architecture for fluid real-time conversational agents: Integrating incremental output generation and input processing

    Kopp S, van Welbergen H, Yaghoubzadeh R, Buschmeier H. An architecture for fluid real-time conversational agents: Integrating incremental output generation and input processing. Journal on Multimodal User Interfaces. 2014;8:97-108.
    Embodied conversational agents still do not achieve the fluidity and smoothness of natural conversational interaction. One main reason is that current systems often respond with large latencies and in inflexible ways. We argue that to overcome these problems, real-time conversational agents need to be based on an underlying architecture that provides two essential features for fast and fluent behavior adaptation: a close bi-directional coordination between input processing and output generation, and incrementality of processing at both stages. We propose an architectural framework for conversational agents, the Artificial Social Agent Platform (ASAP), providing these two ingredients for fluid real-time conversation. The overall architectural concept is described, along with specific means of specifying incremental behavior in BML and technical implementations of different modules. We show how phenomena of fluid real-time conversation, such as adapting to user feedback or smooth turn-keeping, can be realized with ASAP, and we describe in detail an example real-time interaction with the implemented system.
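
    The sketch below illustrates the general idea of incremental, interruptible output generation coordinated with concurrent input processing; it is a hypothetical toy, not the ASAP platform or its BML interface:

```python
# Sketch: output is produced in small increments, and an input-processing event
# arriving between increments can interrupt or adapt the remaining behavior
# before it is realized.
import queue

def speak_incrementally(chunks, input_events):
    """chunks: planned utterance split into small increments.
    input_events: queue filled concurrently by input processing
    (hypothetical event names, e.g. 'listener_confused', 'user_barge_in')."""
    for chunk in chunks:
        try:
            event = input_events.get_nowait()
        except queue.Empty:
            event = None
        if event == "user_barge_in":
            print("[stop output, yield turn]")
            return
        if event == "listener_confused":
            print("[insert elaboration before continuing]")
        print(f"say: {chunk}")

events = queue.Queue()
events.put("listener_confused")
speak_incrementally(["Take the blue box,", "then the red one,", "and stack them."],
                    events)
```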
