
    Sound symbolism facilitates long-term retention of the semantic representation of novel verbs in three-year-olds

    Previous research has shown that sound symbolism facilitates action label learning when the test trial used to assess learning immediately follows the training trial in which the (novel) verb is taught. The current study investigated whether sound symbolism benefits verb learning in the long term. Forty-nine children were taught either sound-symbolically matching or mismatching pairs, each made up of a novel verb and an action video. The following day, the children were asked whether a verb could be used for a scene shown in a video. They were tested with four videos for each word they had been taught. The four videos differed as to whether they contained the same or different actions and actors as the training video: (1) same action, same actor; (2) same action, different actor; (3) different action, same actor; and (4) different action, different actor. The results showed that sound symbolism significantly improved the children's ability to encode the semantic representation of the novel verb and correctly generalise it to a new event the following day. A control experiment ruled out the possibility that children generalised to the "same-action, different-actor" video simply because they failed to notice the actor change due to memory decay. Nineteen children were presented with the stimulus videos that had also been shown to children in the sound-symbolic match condition in Experiment 1, but this time the videos were not labelled. In the test session the following day, the experimenter tested the children's recognition memory for the videos. The results indicated that the children could detect the actor change from the original training video a day later. The results of the main experiment and the control experiment support the idea that a motivated (iconic) link between form and meaning facilitates symbolic development in children. The current study, along with recent related studies, provides further evidence for an iconic advantage in symbol development in the domain of verb learning. A motivated form-meaning relationship can help children learn new words and store them long term in the mental lexicon.

    Machine Learning approach to sport activity recognition from inertial data

    In this thesis we consider an activity recognition problem for cross-country skiing; the goal of this work is to recognize different cross-country techniques from inertial sensors mounted on a wearable device. We apply the SAX technique to the acceleration signals, specifically to the atomic gestures extracted from them. Using the SAX distance, we aim to recognize which activity an athlete is performing.
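
    A minimal sketch of the idea described in this abstract: an acceleration signal is converted to a SAX word (z-normalization, PAA reduction, Gaussian-breakpoint symbolization), and a SAX distance between words is used to match an unknown gesture to a labelled template. The alphabet size, segment count, and example signals are illustrative assumptions, not values from the thesis.

    import numpy as np

    BREAKPOINTS = np.array([-0.67, 0.0, 0.67])   # splits N(0,1) into 4 equiprobable bins
    ALPHABET = "abcd"

    def sax_word(signal, n_segments=8):
        # Z-normalize, reduce with PAA, then map each segment mean to a symbol.
        x = (signal - signal.mean()) / (signal.std() + 1e-12)
        paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
        return "".join(ALPHABET[np.searchsorted(BREAKPOINTS, m)] for m in paa)

    def sax_distance(word_a, word_b):
        # Simplified symbolic distance (per-symbol cell of the MINDIST lower bound,
        # without the sqrt(n/w) scaling factor of the original SAX paper).
        def cell(a, b):
            i, j = ALPHABET.index(a), ALPHABET.index(b)
            if abs(i - j) <= 1:
                return 0.0
            return BREAKPOINTS[max(i, j) - 1] - BREAKPOINTS[min(i, j)]
        return float(np.sqrt(sum(cell(a, b) ** 2 for a, b in zip(word_a, word_b))))

    # Compare an unknown atomic gesture against a labelled template gesture.
    t = np.linspace(0, 4 * np.pi, 200)
    template = sax_word(np.sin(t))                               # e.g. a known technique
    unknown  = sax_word(np.sin(t) + 0.1 * np.random.randn(200))  # noisy new recording
    print(template, unknown, sax_distance(template, unknown))

    In practice one such comparison would be made per sensor axis and per candidate technique, with the smallest distance deciding which activity the athlete is performing.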

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Technology is becoming pervasive, and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain and, in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity and the number of gestures in taxonomies, and to improve usability. In order to validate this framework, a proof-of-concept has been developed. A prototype has been built by implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides optimal gesture recognition from very different viewpoints, while the usability tests have yielded high scores. Further investigation of the context information has been performed, tackling the problem of user status. User status is treated as human activity, and a technique based on an innovative application of electromyography is proposed. The tests show that the proposed technique achieves good activity recognition accuracy. The context is also treated as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.

    Understanding of Object Manipulation Actions Using Human Multi-Modal Sensory Data

    Object manipulation actions represent an important share of the Activities of Daily Living (ADLs). In this work, we study how to enable service robots to use human multi-modal data to understand object manipulation actions, and how they can recognize such actions when humans perform them during human-robot collaboration tasks. The multi-modal data in this study consist of videos, hand motion data, applied forces as represented by the pressure patterns on the hand, and measurements of the bending of the fingers, collected as human subjects performed manipulation actions. We investigate two different approaches. In the first, we show that the multi-modal signal (motion, finger bending and hand pressure) generated by the action can be decomposed into a set of primitives that can be seen as its building blocks. These primitives are used to define 24 multi-modal primitive features. The primitive features can in turn be used as an abstract representation of the multi-modal signal and employed for action recognition. In the second approach, visual features are extracted from the data using a pre-trained image-classification deep convolutional neural network. The visual features are subsequently used to train the classifier. We also investigate whether adding data from other modalities produces a statistically significant improvement in classifier performance. We show that both approaches produce comparable performance. This implies that image-based methods can successfully recognize human actions during human-robot collaboration. On the other hand, in order to provide training data for the robot so it can learn how to perform object manipulation actions, multi-modal data provide a better alternative.
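
    A minimal sketch (not the authors' pipeline) of the second approach described above: video frames are passed through a pre-trained image-classification CNN, the penultimate-layer activations serve as visual features, and a standard classifier is trained on them. The choice of ResNet-18, the SVM classifier, and the frame-averaging step are assumptions made for illustration; the abstract does not specify them.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.svm import SVC

    # Pre-trained CNN with its classification head removed -> 512-d feature extractor
    # (weights argument as in recent torchvision releases).
    cnn = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    cnn.fc = torch.nn.Identity()
    cnn.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def visual_features(frames):
        # Map a list of PIL frames to one feature vector per frame.
        batch = torch.stack([preprocess(f) for f in frames])
        with torch.no_grad():
            return cnn(batch).numpy()

    # Hypothetical training step: one averaged feature vector per manipulation clip,
    # paired with its action label, then a classic classifier on top.
    # X_train = [visual_features(clip).mean(axis=0) for clip in training_clips]
    # clf = SVC(kernel="rbf").fit(X_train, y_train)
    # predicted_action = clf.predict([visual_features(test_clip).mean(axis=0)])

    Features from the other modalities (hand motion, finger bending, pressure) could simply be concatenated to the visual feature vector before classification to test whether they yield a statistically significant improvement, as the abstract describes.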