
    Children's Gestures from 18 to 30 Months

    This thesis concerns the nature of the gestures performed by five Swedish children. The children are followed from 18 to 30 months of age, an age range characterized by a rapid succession of developmental changes in children's abilities to communicate by means of both spoken language and gesture. There are few studies of gesture in children of these ages, making it essential to ask a number of basic questions: What sorts of gestural actions do the children perform? How does the use of gesture change over time, from 18 to 30 months of age? How are the gestures performed in coordination with speech? The answers provided to these questions are both quantitative and qualitative in kind. Several transitions in the use of gesture are identified, relating to developmental changes in the organization of speech and highlighting the symbiotic relationship between gesture and speech in the communicative ecology. Considerable attention is paid to the even more basic question of what sorts of actions qualify for the label "gesture". Instead of treating gestural qualities as a binary distinction between actions that count as gesture and those that do not, a multi-level approach is advocated, which allows gestures to be described at several different levels of complexity. Furthermore, a distinction is made between levels of communicative explicitness on the one hand and levels of semiotic complexity on the other. This distinction allows for the recognition that some gestural actions are semiotically complex without being explicitly communicative, and vice versa: some gestural actions are explicitly communicative without being semiotically complex. The latter is particularly consequential for this thesis, since a large number of communicative gestural actions reside in the borderland between practical action and expressive gesture. Hence, the gestures analyzed include not only the prototypical "empty-handed" gestures, but also gestures that involve the handling of physical objects. Overall, the role of conventionality in children's gestures is underscored. The approach is (a) cognitive in the sense that it pays attention to the knowledge and bodily skills involved in the performance of the gestures, (b) social and interactive in the sense that it views gestures as visible and accountable parts of mutually organized social activities, and (c) semiotic in the sense that the analysis tries to explicate how signification is brought about, in contrast to treating the meanings of gestures as transparently given, the way participants themselves often do when engaged in social interaction.
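The idea of communicative explicitness and semiotic complexity as two independent scales can be sketched as a small annotation structure. The level names and the Python representation below are illustrative inventions, not the thesis's actual coding scheme:

```python
from dataclasses import dataclass
from enum import IntEnum

class Explicitness(IntEnum):
    """Hypothetical levels of communicative explicitness."""
    NONE = 0       # action not addressed to anyone
    IMPLICIT = 1   # recipient-oriented, but not displayed as "for show"
    EXPLICIT = 2   # openly performed for a recipient

class SemioticComplexity(IntEnum):
    """Hypothetical levels of semiotic complexity."""
    PRACTICAL = 0  # plain instrumental action
    INDEXICAL = 1  # points to its referent (e.g. reaching, pointing)
    ICONIC = 2     # depicts its referent (e.g. miming drinking)

@dataclass
class GestureAnnotation:
    """One coded gestural action, rated on both scales independently."""
    label: str
    explicitness: Explicitness
    complexity: SemioticComplexity

# A gesture can be explicitly communicative without being semiotically
# complex -- e.g. holding out a cup for a parent to take:
offer = GestureAnnotation("object offer", Explicitness.EXPLICIT,
                          SemioticComplexity.PRACTICAL)
```

Because the two ratings are separate fields rather than one combined score, the "borderland" cases the thesis emphasizes (communicative but semiotically simple, or the reverse) are representable without forcing a binary gesture/non-gesture decision.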

    Robot hand and forearm implementation (Robotin käden ja kyynärvarren toteutus)

    Abstract. The robotics sector is on a constant rise, and the pace of advancement in the area is staggering. High-end robotics research is far beyond what an individual could afford, but there are also possibilities for low-end robotics work and research, mainly through freeware and open-source software and the use of cheap parts such as 3D-printed shells. This thesis is meant to guide and help students and hobbyists who are interested in researching and implementing their own low-end robotics solutions. Its main focus is building an InMoov robot arm: assembling the physical hand, installing six Dynamixel servomotors into it, and finally writing a working client for the hand using the Robot Operating System 2 (ROS 2) library. The client, written in the Python programming language, controls the Dynamixel servomotors, which in turn drive the fingers and wrist of the arm, approximating human hand movement as closely as possible. Compared with the state of the art in robotics research, the hand is quite rudimentary, but it showcases what low-end robotics research by hobbyists and students can achieve. The thesis closes by discussing how the project succeeded, what could still be improved, and how the work could be developed further. The project met its main goal of building a working robotic hand that attracts attention and can move both independently and as part of a larger system. The problems caused by part quality and the simplicity of the hand design are discussed so that readers can avoid the same issues. Future development is considered through the modularity of the project and how it could be expanded into a full-fledged InMoov robot torso. The problems and successes in the software's scalability and ease of use are also discussed.
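The joint-command side of such a client can be sketched in Python. The conversion constant below (a 4096-count goal-position range, typical of Dynamixel X-series servos) and the helper names are assumptions for illustration; the thesis's actual client, servo models, and joint ranges may differ:

```python
UNITS_PER_REV = 4096  # assumed goal-position counts per full servo revolution

def degrees_to_goal_position(angle_deg: float) -> int:
    """Convert a joint angle in degrees to a Dynamixel goal-position count."""
    if not 0.0 <= angle_deg < 360.0:
        raise ValueError("angle must be in [0, 360)")
    return round(angle_deg * UNITS_PER_REV / 360.0)

def finger_curl_to_angle(curl: float,
                         open_deg: float = 60.0,
                         closed_deg: float = 240.0) -> float:
    """Map a normalised finger curl (0 = open, 1 = closed) to a servo angle.

    The 60-240 degree joint range is a hypothetical example, not a
    measured value from the InMoov hand.
    """
    if not 0.0 <= curl <= 1.0:
        raise ValueError("curl must be in [0, 1]")
    return open_deg + curl * (closed_deg - open_deg)

# Half-closed finger -> 150 deg -> goal-position count
print(degrees_to_goal_position(finger_curl_to_angle(0.5)))  # -> 1707
```

In a real client these counts would be written to the servos' goal-position registers (for instance via the Dynamixel SDK) from inside a ROS 2 node, with the normalised curl values arriving as messages on a topic.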

    An Investigation on the Cognitive Effects of Emoji Usage in Text


    Translating figurative language for the screen: an analysis of the strategies implemented in the Italian version of 'Good Omens'

    This dissertation analyzes the main translation strategies implemented for an adequate transposition of the instances of figurative language found in the Amazon TV series 'Good Omens' from English into Italian.

    Symbiotic interaction between humans and robot swarms

    Comprising a potentially large team of autonomous, cooperative robots that locally interact and communicate with each other, robot swarms provide a natural diversity of parallel and distributed functionalities, high flexibility, potential for redundancy, and fault tolerance. The use of autonomous mobile robots is expected to increase in the future, and swarm robotic systems are envisioned to play important roles in tasks such as search and rescue (SAR) missions, transportation of objects, surveillance, and reconnaissance operations. To robustly deploy robot swarms in the field with humans, this research addresses the fundamental problems of the relatively new field of human-swarm interaction (HSI). Four core classes of problems have been addressed for proximal interaction between humans and robot swarms: interaction and communication; swarm-level sensing and classification; swarm coordination; and swarm-level learning. The primary contribution of this research is a bidirectional human-swarm communication system for non-verbal interaction between humans and heterogeneous robot swarms. The guiding field of application is SAR missions. The core challenges and issues in HSI include: How can human operators interact and communicate with robot swarms? Which interaction modalities can humans use? How can human operators instruct and command robots from a swarm? Which mechanisms can robot swarms use to convey feedback to human operators? Which types of feedback can swarms convey to humans? To start answering these questions, hand gestures were chosen as the interaction modality for humans, since gestures are simple to use, easily recognized, and possess spatial-addressing properties.
To facilitate bidirectional interaction and communication, a dialogue-based interaction system is introduced which consists of: (i) a grammar-based gesture language with a vocabulary of non-verbal commands that allows humans to efficiently provide mission instructions to swarms, and (ii) a swarm-coordinated multi-modal feedback language that enables robot swarms to robustly convey swarm-level decisions, status, and intentions to humans using multiple individual and group modalities. The gesture language allows humans to select and address single and multiple robots from a swarm, provide commands to perform tasks, specify spatial directions and application-specific parameters, and build iconic grammar-based sentences by combining individual gesture commands. Swarms convey different types of multi-modal feedback to humans using on-board lights, sounds, and locally coordinated robot movements. The swarm-to-human feedback conveys the swarm's understanding of the recognized commands, allows swarms to assess their decisions (i.e., to correct mistakes made by humans in providing instructions and errors made by swarms in recognizing commands), and guides humans through the interaction process. The second contribution of this research addresses swarm-level sensing and classification: How can robot swarms collectively sense and recognize hand gestures given as visual signals by humans? Distributed sensing, cooperative recognition, and decision-making mechanisms have been developed to allow robot swarms to collectively recognize visual instructions and commands given by humans in the form of gestures. These mechanisms rely on decentralized data-fusion strategies and multi-hop message-passing algorithms to robustly build swarm-level consensus decisions. Measures have been introduced in the cooperative recognition protocol which provide a trade-off between the accuracy of swarm-level consensus decisions and the time taken to build them.
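The accuracy-versus-time trade-off in building swarm-level consensus decisions can be illustrated with a minimal quorum-based vote fusion in Python. The function below is a hypothetical sketch, not the thesis's actual protocol, and it abstracts away the multi-hop message passing that would gather the votes:

```python
from collections import Counter

def swarm_consensus(local_votes, quorum=0.6):
    """Fuse robots' local gesture classifications into a swarm decision.

    local_votes : list of (robot_id, label) pairs gathered so far,
                  e.g. collected via multi-hop message passing.
    quorum      : fraction of votes the leading label must reach before
                  the swarm commits; raising it trades decision speed
                  for accuracy, since more corroborating votes must
                  arrive first.
    Returns the winning label, or None if no label has reached quorum yet
    (i.e., the swarm keeps sensing and exchanging votes).
    """
    if not local_votes:
        return None
    counts = Counter(label for _, label in local_votes)
    label, n = counts.most_common(1)[0]
    return label if n / len(local_votes) >= quorum else None

# Four of five robots recognize the same gesture:
votes = [(1, "stop"), (2, "stop"), (3, "go"), (4, "stop"), (5, "stop")]
print(swarm_consensus(votes))              # -> stop
print(swarm_consensus(votes, quorum=0.9))  # -> None (keep gathering votes)
```

The quorum parameter plays the role of the protocol's accuracy/time measure: a low quorum lets the swarm decide quickly from few votes, while a high quorum delays the decision until near-unanimity is reached.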
The third contribution of this research addresses swarm-level cooperation: How can humans select spatially distributed robots from a swarm, and how do the robots understand that they have been selected? How can robot swarms be spatially deployed for proximal interaction with humans? With the introduction of spatially-addressed instructions (pointing gestures), humans can robustly address and select spatially-situated individuals and groups of robots from a swarm. A cascaded classification scheme is adopted in which the robot swarm first identifies the selection command (e.g., individual or group selection), and the robots then coordinate with each other to determine whether they have been selected. To obtain better views of gestures issued by humans, distributed mobility strategies have been introduced for the coordinated deployment of heterogeneous robot swarms (i.e., ground and flying robots) and for reshaping the spatial distribution of swarms. The fourth contribution of this research addresses the notion of collective learning in robot swarms. The questions answered include: How can robot swarms learn the hand gestures given by human operators? How can humans be included in the loop of swarm learning? How can robot swarms cooperatively learn as a team? Online incremental learning algorithms have been developed which allow robot swarms to learn individual gestures and grammar-based gesture sentences in real time, supervised by human instructors. Humans provide different types of feedback (i.e., full or partial feedback) to swarms to improve swarm-level learning. To speed up the learning rate of robot swarms, cooperative learning strategies have been introduced which enable individual robots in a swarm to intelligently select locally sensed information and share (exchange) selected information with other robots in the swarm.
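Online incremental gesture learning can be illustrated with a minimal nearest-centroid learner that updates a running mean per gesture class one labelled sample at a time. This is a hypothetical sketch; the thesis's actual learning algorithms, gesture features, and cooperative sharing strategies are more elaborate:

```python
class IncrementalCentroidLearner:
    """Online nearest-centroid classifier: each gesture class keeps a
    running mean of its feature vectors, updated sample by sample, so a
    human instructor can supervise learning in real time."""

    def __init__(self):
        self.means = {}   # label -> running mean feature vector
        self.counts = {}  # label -> number of samples seen

    def update(self, label, features):
        """Incorporate one human-labelled sample into the class centroid."""
        if label not in self.means:
            self.means[label] = list(features)
            self.counts[label] = 1
            return
        self.counts[label] += 1
        n = self.counts[label]
        mean = self.means[label]
        for i, x in enumerate(features):
            mean[i] += (x - mean[i]) / n  # incremental mean update

    def predict(self, features):
        """Classify a new sample by its nearest class centroid."""
        def sq_dist(mean):
            return sum((x - m) ** 2 for x, m in zip(features, mean))
        return min(self.means, key=lambda lab: sq_dist(self.means[lab]))

learner = IncrementalCentroidLearner()
learner.update("stop", [0.0, 0.0])
learner.update("stop", [0.0, 2.0])
learner.update("go", [10.0, 10.0])
print(learner.predict([1.0, 1.0]))  # -> stop
```

Because each update touches only the affected centroid, a robot could also merge centroids received from teammates, which is one plausible reading of how sharing selected information speeds up swarm-level learning.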
The final contribution is a systemic one: it aims at building a complete HSI system for potential use in real-world applications by integrating the algorithms, techniques, mechanisms, and strategies discussed above. The effectiveness of the overall HSI system is demonstrated in a number of interactive scenarios using emulation tests (i.e., simulations using gesture images acquired by a heterogeneous robotic swarm) and through experiments with real robots, both ground and flying.