
    Post-training load-related changes of auditory working memory: An EEG study

    Working memory (WM) refers to the temporary retention and manipulation of information, and its capacity is highly susceptible to training. Yet, the neural mechanisms that allow for increased performance under demanding conditions are not fully understood. We expected that post-training efficiency in WM performance modulates neural processing during high load tasks. We tested this hypothesis, using electroencephalography (EEG) (N = 39), by comparing source space spectral power of healthy adults performing low and high load auditory WM tasks. Prior to the assessment, participants underwent either modality-specific auditory WM training, modality-irrelevant tactile WM training, or no training (active control). After modality-specific training, participants showed higher behavioral performance compared to the control group. EEG data analysis revealed general effects of WM load, across all training groups, in the theta-, alpha-, and beta-frequency bands. With increased load, theta-band power increased over frontal and decreased over parietal areas. Centro-parietal alpha-band power and central beta-band power decreased with load. Interestingly, in the high load condition a tendency toward reduced beta-band power in the right medial temporal lobe was observed in the modality-specific WM training group compared to the modality-irrelevant and active control groups. Our finding that WM processing during the high load condition changed after modality-specific WM training, showing reduced beta-band activity in voice-selective regions, possibly indicates a more efficient maintenance of task-relevant stimuli. The general load effects suggest that WM performance at high load demands involves complementary mechanisms, combining a strengthening of task-relevant and a suppression of task-irrelevant processing.
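
    The abstract contrasts source-space spectral power across load conditions in the theta, alpha, and beta bands. As a minimal illustration only (not the authors' source-reconstruction pipeline), the sketch below extracts band-limited power from a single EEG channel with Welch's method; the sampling rate, band edges, and synthetic signal are placeholder assumptions.

```python
# Minimal sketch (not the study's pipeline): band power of one EEG channel.
import numpy as np
from scipy.signal import welch

fs = 250.0                                  # assumed sampling rate in Hz
rng = np.random.default_rng(0)
eeg = rng.standard_normal(int(fs * 60))     # 60 s of synthetic single-channel EEG

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # common band definitions

freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # PSD from 2 s windows
df = freqs[1] - freqs[0]

band_power = {}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power[name] = psd[mask].sum() * df  # approximate integral of the PSD over the band

print(band_power)
```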

    Adaptive modality selection algorithm in robot-assisted cognitive training

    Interaction of socially assistive robots with users is based on social cues coming from different interaction modalities, such as speech or gestures. However, using all modalities at all times may be inefficient, as it can overload the user with redundant information and increase the task completion time. Additionally, users may favor certain modalities over others as a result of their disability or personal preference. In this paper, we propose an Adaptive Modality Selection (AMS) algorithm that chooses modalities depending on the state of the user and the environment, as well as user preferences. The variables that describe the environment and the user state are defined as resources, and we posit that modalities are successful if certain resources possess specific values during their use. Besides the resources, the proposed algorithm takes into account user preferences, which it learns while interacting with users. We tested our algorithm in simulations, and we implemented it on a robotic system that provides cognitive training, specifically Sequential memory exercises. Experimental results show that it is possible to use only a subset of the available modalities without compromising the interaction. Moreover, we see a trend for users to perform better when interacting with a system that implements the AMS algorithm.
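
    The paper's AMS algorithm is not specified in detail in this abstract; the sketch below is a loose illustration of the stated idea only: a modality is eligible when its required resource values currently hold, and eligible modalities are ranked by a preference score learned from interaction feedback. The resource names, requirement format, and update rule are invented for the sketch.

```python
# Loose illustration of the idea described above (not the paper's AMS algorithm).
from dataclasses import dataclass

@dataclass
class Modality:
    name: str
    requires: dict            # resource -> required value, e.g. {"noise_low": True}
    preference: float = 0.5   # learned preference weight in [0, 1]

def select_modalities(modalities, resources, k=2):
    """Return up to k modalities whose resource requirements are met, by preference."""
    eligible = [m for m in modalities
                if all(resources.get(r) == v for r, v in m.requires.items())]
    return sorted(eligible, key=lambda m: m.preference, reverse=True)[:k]

def update_preference(modality, success, lr=0.1):
    """Nudge the preference toward 1 on success, toward 0 on failure."""
    target = 1.0 if success else 0.0
    modality.preference += lr * (target - modality.preference)

# Example with made-up resources describing the user and environment.
speech  = Modality("speech",  {"noise_low": True})
gesture = Modality("gesture", {"user_facing_robot": True})
screen  = Modality("screen",  {})

state = {"noise_low": False, "user_facing_robot": True}
chosen = select_modalities([speech, gesture, screen], state)
print([m.name for m in chosen])           # e.g. ['gesture', 'screen']
update_preference(gesture, success=True)  # learn from the interaction outcome
```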

    Modality effects in vocabulary acquisition

    It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely explains such contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword–novel object pairs, with controls on modality of test, modality of meaning, duration of exposure, and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1 the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities, the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on learning mechanisms that learn equally efficiently from both written and spoken materials.

    Cross Modal Distillation for Supervision Transfer

    In this work we propose a technique that transfers supervision between images from different modalities. We use learned representations from a large labeled modality as a supervisory signal for training representations for a new unlabeled paired modality. Our method enables learning of rich representations for unlabeled modalities and can be used as a pre-training procedure for new modalities with limited labeled data. We show experimental results where we transfer supervision from labeled RGB images to unlabeled depth and optical flow images and demonstrate large improvements for both of these cross-modal supervision transfers. Code, data and pre-trained models are available at https://github.com/s-gupta/fast-rcnn/tree/distillation. Comment: Updated version (v2) contains additional experiments and results.
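
    The repository linked above contains the authors' Fast R-CNN-based implementation; the sketch below is only a generic PyTorch illustration of the supervision-transfer idea: a frozen network trained on the labeled modality (RGB) provides target features, and a network for the unlabeled paired modality (here depth) is trained to match them. The tiny backbones, shapes, and optimizer settings are placeholder assumptions.

```python
# Generic sketch of cross-modal supervision transfer (not the released code).
import torch
import torch.nn as nn

def tiny_backbone(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

rgb_net = tiny_backbone(3)      # assume this was trained on labeled RGB data
depth_net = tiny_backbone(1)    # new modality, trained only via distillation
rgb_net.eval()
for p in rgb_net.parameters():
    p.requires_grad_(False)     # the teacher (labeled modality) is frozen

opt = torch.optim.SGD(depth_net.parameters(), lr=1e-2, momentum=0.9)
loss_fn = nn.MSELoss()

# One training step on a paired (RGB, depth) batch of synthetic data.
rgb = torch.randn(8, 3, 64, 64)
depth = torch.randn(8, 1, 64, 64)

with torch.no_grad():
    target = rgb_net(rgb)       # supervisory signal from the labeled modality
pred = depth_net(depth)
loss = loss_fn(pred, target)    # transfer supervision by matching features
opt.zero_grad()
loss.backward()
opt.step()
```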

    Cooperative Training of Deep Aggregation Networks for RGB-D Action Recognition

    A novel deep neural network training paradigm that exploits the conjoint information in multiple heterogeneous sources is proposed. Specifically, in an RGB-D based action recognition task, it cooperatively trains a single convolutional neural network (named c-ConvNet) on both RGB visual features and depth features, and deeply aggregates the two kinds of features for action recognition. Unlike a conventional ConvNet, which learns deep separable features for homogeneous modality-based classification with only one softmax loss function, the c-ConvNet enhances the discriminative power of the deeply learned features and weakens the undesired modality discrepancy by jointly optimizing a ranking loss and a softmax loss for both homogeneous and heterogeneous modalities. The ranking loss consists of intra-modality and cross-modality triplet losses, and it reduces both the intra-modality and cross-modality feature variations. Furthermore, the correlations between RGB and depth data are embedded in the c-ConvNet and can be retrieved by either modality, contributing to recognition even when only one of the modalities is available. The proposed method was extensively evaluated on two large RGB-D action recognition datasets, ChaLearn LAP IsoGD and NTU RGB+D, and one small dataset, SYSU 3D HOI, and achieved state-of-the-art results.
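
    The joint objective described above combines a softmax loss with intra- and cross-modality triplet ranking terms. The sketch below is a hedged illustration of that combination, not the c-ConvNet implementation; the margin, weighting, and the toy way anchor/positive/negative indices are supplied are assumptions.

```python
# Hedged sketch of a softmax + intra-/cross-modality triplet objective.
import torch
import torch.nn.functional as F

def triplet(anchor, positive, negative, margin=0.3):
    d_pos = (anchor - positive).pow(2).sum(1)
    d_neg = (anchor - negative).pow(2).sum(1)
    return F.relu(d_pos - d_neg + margin).mean()

def joint_loss(logits, labels, f_rgb, f_dep, idx_pos, idx_neg, lam=1.0):
    """Softmax loss plus intra- and cross-modality triplet ranking losses.

    f_rgb[i] and f_dep[i] are embeddings of the same sample; idx_pos/idx_neg
    index same-class and different-class samples for each anchor.
    """
    ce = F.cross_entropy(logits, labels)
    intra = (triplet(f_rgb, f_rgb[idx_pos], f_rgb[idx_neg]) +
             triplet(f_dep, f_dep[idx_pos], f_dep[idx_neg]))
    cross = (triplet(f_rgb, f_dep[idx_pos], f_dep[idx_neg]) +
             triplet(f_dep, f_rgb[idx_pos], f_rgb[idx_neg]))
    return ce + lam * (intra + cross)

# Toy usage with random embeddings and labels (indices are random stand-ins
# for a proper same-class / different-class sampler).
B, D, C = 16, 128, 10
logits, labels = torch.randn(B, C), torch.randint(0, C, (B,))
f_rgb, f_dep = torch.randn(B, D), torch.randn(B, D)
idx_pos, idx_neg = torch.randperm(B), torch.randperm(B)
print(joint_loss(logits, labels, f_rgb, f_dep, idx_pos, idx_neg))
```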

    End-to-End Cross-Modality Retrieval with CCA Projections and Pairwise Ranking Loss

    Cross-modality retrieval encompasses retrieval tasks where the fetched items are of a different type than the search query, e.g., retrieving pictures relevant to a given text query. The state-of-the-art approach to cross-modality retrieval relies on learning a joint embedding space of the two modalities, where items from either modality are retrieved using nearest-neighbor search. In this work, we introduce a neural network layer based on Canonical Correlation Analysis (CCA) that learns better embedding spaces by analytically computing projections that maximize correlation. In contrast to previous approaches, the CCA Layer (CCAL) allows us to combine existing objectives for embedding space learning, such as pairwise ranking losses, with the optimal projections of CCA. We show the effectiveness of our approach for cross-modality retrieval on three different scenarios (text-to-image, audio-sheet-music and zero-shot retrieval), surpassing both Deep CCA and a multi-view network using freely learned projections optimized by a pairwise ranking loss, especially when little training data is available (the code for all three methods is released at: https://github.com/CPJKU/cca_layer). Comment: Preliminary version of a paper published in the International Journal of Multimedia Information Retrieval.
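
    The analytic projection at the heart of the CCA layer idea can be illustrated with plain CCA. The sketch below is a minimal numpy version of that building block, not the CCAL implementation from the linked repository; the regularization constant, dimensions, and toy data are illustrative choices.

```python
# Minimal numpy sketch of analytic CCA projections (not the CCAL code).
import numpy as np

def cca_projections(X, Y, k=10, reg=1e-4):
    """Return projection matrices (A, B) maximizing correlation of X @ A and Y @ B."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)           # S is symmetric positive definite
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Sxx_i, Syy_i = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(Sxx_i @ Sxy @ Syy_i)
    return Sxx_i @ U[:, :k], Syy_i @ Vt.T[:, :k]

# Toy paired "text" and "image" features.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((200, 50)), rng.standard_normal((200, 40))
A, B = cca_projections(X, Y, k=8)
print((X @ A).shape, (Y @ B).shape)        # projected embeddings for retrieval
```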

    Eccentric Resistance Training in Youth: Perspectives for Long-Term Athletic Development

    The purpose of this narrative review is to discuss the role of eccentric resistance training in youth and how this training modality can be utilized within long-term physical development. Current literature on responses to eccentric exercise in youth demonstrates that potential concerns, such as greater fatigue and muscle damage compared to adults, are not supported. Considering the importance of resistance training for youth athletes and the benefits of eccentric training in enhancing strength, power, speed, and resistance to injury, its inclusion throughout youth may be warranted. In this review we provide a brief overview of the physiological responses to exercise in youth, with specific reference to the different responses to eccentric resistance training between children, adolescents, and adults. Thereafter, we discuss the importance of ensuring that force absorption qualities are trained throughout youth and how these may be influenced by growth and maturation. In particular, we propose practical methods for implementing eccentric resistance training in youth via the inclusion of efficient landing mechanics, eccentric hamstring strengthening, and flywheel inertia training. This article proposes that the use of eccentric resistance training in youth should be considered a necessity, helping to develop the physical qualities that underpin sporting performance as well as to reduce injury risk. However, as with any other training modality implemented within youth, careful consideration should be given to an individual's maturity status, training history, and technical competency, underpinned by current long-term physical development guidelines.