
    A Coordination-Theoretic Approach to Understanding Process Differences

    Supporting human collaboration is challenging partly because of variability in how people work. Even within a single organization, there can be many variants of processes that serve the same purpose. When distinct organizations must work together, the differences can be especially large, baffling and disruptive. Coordination theory provides a method and vocabulary for modeling complex collaborative activities in a way that makes both the similarities and differences between them more visible. In this paper, we illustrate this by analyzing three engineering change management processes and demonstrating how our method compactly highlights the substantial commonalities and precise differences between what at first glance are extremely divergent approaches.

    Division of labour and sharing of knowledge for synchronous collaborative information retrieval

    Synchronous collaborative information retrieval (SCIR) is concerned with supporting two or more users who search together at the same time in order to satisfy a shared information need. SCIR systems represent a paradigm shift in how we view information retrieval, moving from an individual to a group process, and as such novel IR techniques are needed to support it. In this article we present what we believe are two key concepts for the development of effective SCIR, namely division of labour (DoL) and sharing of knowledge (SoK). Together, these concepts enable coordinated SCIR such that redundancy across group members is reduced while each group member still benefits from the discoveries of their collaborators. We outline techniques from state-of-the-art SCIR systems that support these two concepts, primarily through the provision of awareness widgets. We then outline some of our own work on system-mediated techniques for division of labour and sharing of knowledge in SCIR. Finally, we conclude with a discussion of possible future trends for these two coordination techniques.
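    The two concepts can be made concrete with a minimal sketch (an illustration, not the article's actual system): division of labour assigns each new result to exactly one searcher, while sharing of knowledge means documents already examined by any collaborator are skipped, avoiding redundant effort. All names and the round-robin policy here are assumptions for illustration.

    ```python
    # Minimal sketch of system-mediated DoL and SoK for SCIR.
    # Assumption: a shared ranked result list and a shared set of
    # already-examined documents; the round-robin policy is illustrative.
    from itertools import cycle

    def divide_labour(results, searchers, seen):
        """Round-robin unseen results across searchers (DoL),
        skipping documents any collaborator has already examined (SoK)."""
        assignment = {s: [] for s in searchers}
        turn = cycle(searchers)
        for doc in results:
            if doc in seen:          # knowledge shared by a collaborator
                continue             # avoid redundant examination
            assignment[next(turn)].append(doc)
        return assignment

    ranked = ["d1", "d2", "d3", "d4", "d5"]
    shared_seen = {"d2"}             # d2 already judged by someone
    print(divide_labour(ranked, ["alice", "bob"], shared_seen))
    # {'alice': ['d1', 'd4'], 'bob': ['d3', 'd5']}
    ```

    In a real SCIR system this policy would run continuously as new results arrive and the shared `seen` set grows, which is where awareness widgets or system mediation come in.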

    English-learning one- to two-year-olds do not show a consonant bias in word learning.

    Following the proposal that consonants are more involved than vowels in coding the lexicon (Nespor, Peña & Mehler, 2003), an early lexical consonant bias was found from age 1;2 in French, but an equal sensitivity to consonants and vowels from 1;0 to 2;0 in English. As different tasks were used in French and English, we sought to clarify this ambiguity by using an interactive word-learning study similar to that used in French, with British-English-learning toddlers aged 1;4 and 1;11. Children were taught two CVC labels differing on either a consonant or a vowel and tested on their pairing of a third object named with one of the previously taught labels, or part of them. In concert with previous research on British-English toddlers, our results provided no evidence of a general consonant bias. The language-specific mechanisms explaining the differential status of consonants and vowels in lexical development are discussed.

    Holistan Revisited: Demonstrating Agent- and Knowledge-Based Capabilities for Future Coalition Military Operations

    As a fundamental research program, the International Technology Alliance (ITA) aims to explore innovative solutions to some of the challenges confronting US/UK coalition military forces in an era of network-enabled operations. In order to demonstrate some of the scientific and technical achievements of the ITA research program, we have developed a detailed military scenario that features the involvement of US and UK coalition forces in a large-scale humanitarian-assistance/disaster-relief (HA/DR) effort. The scenario is based in a fictitious country called Holistan, and it draws on a number of previous scenario specification efforts undertaken as part of the ITA. In this paper we provide a detailed description of the scenario and review the opportunities for technology demonstration with respect to a number of ITA research focus areas.

    Predicting mental imagery based BCI performance from personality, cognitive profile and neurophysiological patterns

    Mental-imagery based brain-computer interfaces (MI-BCIs) allow their users to send commands to a computer using their brain activity alone (typically measured by electroencephalography, EEG), which is processed while they perform specific mental tasks. While very promising, MI-BCIs remain barely used outside laboratories because of the difficulty users encounter in controlling them. Indeed, although some users obtain good control performances after training, a substantial proportion remains unable to reliably control an MI-BCI. This large variability in user performance led the community to look for predictors of MI-BCI control ability. However, these predictors were only explored for motor-imagery based BCIs, and mostly for a single training session per subject. In this study, 18 participants were instructed to learn to control an EEG-based MI-BCI by performing 3 MI tasks, 2 of which were non-motor tasks, across 6 training sessions on 6 different days. Relationships between the participants' BCI control performances and their personality, cognitive profile and neurophysiological markers were explored. While no relevant relationships with neurophysiological markers were found, strong correlations between MI-BCI performances and mental-rotation scores (reflecting spatial abilities) were revealed. A predictive model of MI-BCI performance based on psychometric questionnaire scores was also proposed. A leave-one-subject-out cross-validation process revealed the stability and reliability of this model: it made it possible to predict participants' performance with a mean error of less than 3 points. This study determined how users' profiles impact their MI-BCI control ability and thus paves the way for designing novel MI-BCI training protocols adapted to the profile of each user.
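    The leave-one-subject-out evaluation described above can be sketched as follows. This is not the authors' model (which used several questionnaire scores); it is a minimal illustration with a single hypothetical predictor (a mental-rotation score) fitted by ordinary least squares, where each subject in turn is held out and predicted from a line fitted on the others.

    ```python
    # Hedged sketch of leave-one-subject-out (LOSO) cross-validation:
    # predict each held-out subject's BCI performance from a line fitted
    # on the remaining subjects. Data below are hypothetical.

    def fit_line(xs, ys):
        """Ordinary least-squares slope and intercept."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        slope = sxy / sxx
        return slope, my - slope * mx

    def loso_errors(scores, performances):
        """Absolute prediction error for each subject when held out."""
        errors = []
        for i in range(len(scores)):
            train_x = scores[:i] + scores[i + 1:]
            train_y = performances[:i] + performances[i + 1:]
            slope, intercept = fit_line(train_x, train_y)
            pred = slope * scores[i] + intercept
            errors.append(abs(pred - performances[i]))
        return errors

    # Hypothetical mental-rotation scores and MI-BCI accuracies (%)
    mr = [10, 12, 15, 18, 20, 22]
    acc = [55, 58, 62, 68, 70, 74]
    errs = loso_errors(mr, acc)
    print(sum(errs) / len(errs))  # mean absolute error across subjects
    ```

    Holding out one subject at a time, rather than random samples, is what makes the estimate speak to generalization across users, which is the relevant question for BCI performance prediction.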

    A methodological investigation of the Intermodal Preferential Looking paradigm: Methods of analyses, picture selection and data rejection criteria

    The Intermodal Preferential Looking (IPL) paradigm provides a sensitive measure of a child's online word comprehension. To complement existing recommendations (Fernald, Zangl, Portillo, & Marchman, 2008), the present study evaluates the impact of experimental noise generated by two aspects of the visual stimuli on the robustness of familiar word recognition with and without mispronunciations: the presence of a central fixation point and the level of visual noise in the pictures (as measured by luminance saliency). Twenty-month-old infants were presented with a classic IPL word recognition procedure in 3 conditions: without a fixation stimulus (No Fixation, noisiest condition), with a fixation stimulus before trial onset (Fixation, intermediate), and with a fixation stimulus, a neutral background and equally salient images (Fixation Plus, least noisy). Data were systematically analyzed considering a range of data selection criteria and dependent variables (proportion of looking time towards the target, longest look, and time-course analysis). Critically, the expected pronunciation-by-naming interaction was only found in the Fixation Plus condition. We discuss the impact of data selection criteria and the choice of dependent variable on the modulation of these effects across the different conditions.
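    Two of the dependent variables named above are straightforward to compute from coded gaze data. The sketch below is an assumed, simplified data format (per-sample gaze labels), not the study's actual pipeline: away-from-screen samples are excluded from the proportion measure, and the longest look is the longest uninterrupted run on a region of interest.

    ```python
    # Hedged sketch (assumed data format, not the authors' pipeline):
    # per-sample gaze labels, "T" = target, "D" = distractor, "A" = away.

    def proportion_target(samples):
        """Proportion of on-screen looking time spent on the target."""
        on_screen = [s for s in samples if s in ("T", "D")]  # drop "away"
        if not on_screen:
            return None  # trial rejected: no usable looking data
        return sum(s == "T" for s in on_screen) / len(on_screen)

    def longest_look(samples, roi="T"):
        """Length (in samples) of the longest uninterrupted look at roi."""
        best = run = 0
        for s in samples:
            run = run + 1 if s == roi else 0
            best = max(best, run)
        return best

    trial = list("TTADTTTDDT")
    print(proportion_target(trial))  # 6 target samples / 9 on-screen samples
    print(longest_look(trial))       # longest run of "T" is 3 samples
    ```

    Keeping trial rejection (the `None` case) explicit matters here, since the abstract's point is precisely that data selection criteria can modulate the observed effects.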

    Differential processing of consonants and vowels in the auditory modality: A cross-linguistic study

    Following the proposal by Nespor, Peña, and Mehler (2003) that consonants are more important in constraining lexical access than vowels, New, Araújo, and Nazzi (2008) demonstrated in a visual priming experiment that primes sharing consonants (jalu-JOLI) facilitate lexical access while primes sharing vowels do not (vobi-JOLI). The present study explores whether this asymmetry extends to the auditory modality and whether language input plays a critical role, as developmental studies suggest. Our experiments tested French and English as target languages and showed that consonantal information facilitated lexical decision to a greater extent than vocalic information, suggesting that the consonant advantage is independent of the language's distributional properties. However, vowels are also facilitatory in specific cases, with iambic English CVCV or French CVCV words. This effect is related to the preservation of the rhyme between the prime and the target (here, the final vowel), suggesting that the rhyme, in addition to consonant information and consonant skeleton information, is an important unit in auditory phonological priming and spoken word recognition.

    Low-Level Information and High-Level Perception: The Case of Speech in Noise

    Auditory information is processed in a fine-to-crude hierarchical scheme, from low-level acoustic information to high-level abstract representations, such as phonological labels. We ask whether fine acoustic information, which is not retained at high levels, can still be used to extract speech from noise. Previous theories suggested either full availability of low-level information or availability that is limited by task difficulty. We propose a third alternative, based on the Reverse Hierarchy Theory (RHT), originally derived to describe the relations between the processing hierarchy and visual perception. RHT asserts that only the higher levels of the hierarchy are immediately available for perception. Direct access to low-level information requires specific conditions, and can be achieved only at the cost of concurrent comprehension. We tested the predictions of these three views in a series of experiments in which we measured the benefits of utilizing low-level binaural information for speech perception, and compared them to those predicted from a model of the early auditory system. Only auditory RHT could account for the full pattern of results, suggesting that similar defaults and tradeoffs underlie the relations between hierarchical processing and perception in the visual and auditory modalities.