9 research outputs found

    BabblePlay: An app for infants, controlled by infants, to improve early language outcomes

    This project set out to develop an app for infants under one year of age that responds in real time to language-like infant utterances with attractive images on an iPad screen. Language-like vocalisations were defined as voiced utterances that were neither high-pitched squeals nor shouts. The app, BabblePlay, was intended for use in psycholinguistic research to investigate the possible causal relationship between early canonical babble and early onset of word production. It is also designed for clinical settings, (1) to illustrate the importance of feedback as a way to encourage infant vocalisations, and (2) to provide consonant production practice for infant populations who do not vocalise enough or who vocalise in an atypical way, specifically autistic infants (once they have begun to produce consonants). This paper describes the development and testing of BabblePlay, which responds to an infant's vocalisations with colourful moving shapes on the screen that are analogous to some features of the infant's vocalisation, including loudness and duration. Validation testing showed high correlation between the app and two human judges in identifying vocalisations in 200 minutes of BabblePlay recordings, and a feasibility study conducted with 60 infants indicates that they can learn the contingency between their vocalisations and the appearance of shapes on the screen in a single five-minute BabblePlay session. BabblePlay meets the specification of being a simple and easy-to-use app. It has been shown to be a promising tool for research on infant language development that could lead to its use in home and professional environments to demonstrate the importance of immediate reward for vocal utterances in increasing vocalisations in infants.
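The abstract does not give BabblePlay's detection algorithm, but the rule it describes (accept voiced utterances that are neither high-pitched squeals nor shouts, then map loudness and duration to on-screen shapes) can be sketched as follows. All thresholds, feature names, and scalings here are illustrative assumptions, not taken from the app.

```python
# Hypothetical sketch of the vocalisation rule described above: accept voiced
# utterances that are neither high-pitched squeals nor shouts, then map
# loudness and duration to display parameters. Thresholds are assumptions.

SQUEAL_PITCH_HZ = 600.0   # above this mean pitch, treat as a squeal (assumed)
SHOUT_LEVEL_DB = 85.0     # above this level, treat as a shout (assumed)

def is_language_like(voiced: bool, mean_pitch_hz: float, level_db: float) -> bool:
    """Return True for voiced utterances that are not squeals or shouts."""
    if not voiced:
        return False
    if mean_pitch_hz > SQUEAL_PITCH_HZ:   # high-pitched squeal
        return False
    if level_db > SHOUT_LEVEL_DB:         # shout
        return False
    return True

def shape_parameters(level_db: float, duration_s: float) -> dict:
    """Map loudness and duration to shape size and on-screen lifetime,
    analogous to the mapping the abstract describes (illustrative scaling)."""
    return {
        "size": max(0.1, min(1.0, level_db / 100.0)),
        "lifetime_s": duration_s * 2.0,
    }
```

In a real system the `voiced`, `mean_pitch_hz`, and `level_db` inputs would come from frame-level audio analysis; here they are treated as given.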

    Rafigh: A Living Media System for Motivating Target Application Use for Children

    Digital living media systems combine living media such as plants, animals and fungi with computational components. In this dissertation, I address the question of how digital living media systems can better motivate children to use target applications (i.e., learning and/or therapeutic applications). To address this question, I employed a participatory design approach, incorporating input from children, parents, speech-language pathologists and teachers into the design of a new system. Rafigh is a digital embedded system that uses the growth of a living mushroom colony to provide positive reinforcement to children when they conduct target activities. The growth of the mushrooms is affected by the amount of water administered to them, which in turn corresponds to the time children spend on target applications. I used an iterative design process to develop and evaluate three Rafigh prototypes. The evaluations showed that the system must be robust and customizable, and should include compelling engagement mechanisms to keep the children interested. I evaluated Rafigh using two case studies conducted in participants' homes. In each case study, two siblings and their parent interacted with Rafigh over two weeks, and the parents identified a series of target applications that Rafigh should motivate the children to use. The study showed that Rafigh motivated the children to spend significantly more time on target applications during the intervention phase, and that it successfully engaged one of the two child participants in each case study, who showed signs of responsibility, empathy and curiosity towards the living media. The study also showed that the majority of participants correctly described the relationship between using target applications and mushroom growth. Further, Rafigh encouraged more communication and collaboration between the participants. Rafigh's slow responsivity did not impact the engagement of one of the two child participants in each case study and might even have contributed to their investment in the project. Finally, Rafigh's presence as an ambient physical object allowed users to interact with it freely and as part of their home environment.
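The core contingency the abstract describes, in which time spent on target applications determines how much water the mushroom colony receives, might be sketched as below. The linear mapping, the rate constant, and the daily cap are assumptions for illustration, not values from the dissertation.

```python
# Illustrative sketch of Rafigh's usage-to-water contingency: daily time on
# target applications is converted into a capped watering volume.
# ml_per_minute and daily_cap_ml are assumed constants, not Rafigh's.

def watering_ml(target_app_minutes: float,
                ml_per_minute: float = 0.5,
                daily_cap_ml: float = 30.0) -> float:
    """Convert daily target-application use into a capped watering volume (ml)."""
    if target_app_minutes <= 0:
        return 0.0
    return min(target_app_minutes * ml_per_minute, daily_cap_ml)
```

Capping the volume reflects a practical constraint of any living-media system: reinforcement cannot exceed what the organism can safely absorb.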

    Models and analysis of vocal emissions for biomedical applications

    This book of Proceedings collects the papers presented at the 3rd International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications, MAVEBA 2003, held 10-12 December 2003 in Firenze, Italy. The workshop is organised every two years and aims to stimulate contacts between specialists active in research and industrial developments in the area of voice analysis for biomedical applications. The scope of the workshop includes all aspects of voice modelling and analysis, ranging from fundamental research to all kinds of biomedical applications and related established and advanced technologies.

    ‘Did I just do that?’: Six-month-olds learn the contingency between their vocalisations and a visual reward in 5 minutes

    It has been shown that infants can increase or modify a motorically available behaviour such as sucking, kicking, or arm waving in response to a positive visual reinforcement (e.g., DeCasper & Fifer, 1980; Millar, 1990; Rochat & Striano, 1999; Rovee-Collier, 1997; Watson & Ramey, 1972). We tested infants to determine whether they would also change their vocal behaviour in response to contingent feedback that lacks the social, emotional and auditory modelling typical of parent-child interaction. Here we show that in a single five-minute session infants increased the rate of their vocalisations in order to control the appearance of colourful shapes on an iPad screen. This is the first experimental study to demonstrate that infants can rapidly learn to increase their vocalisations when given positive reinforcement with no social element. This work sets the foundations for future studies into the causal relationship between the number of early vocalisations and the onset of words. In addition, there are potential clinical applications for reinforcing vocal practice in infant populations who are at risk for poor language skills.

    SoundRise: development and validation of an interactive multimodal educational application based on vocal feature analysis

    This work presents the steps and tools used to develop SoundRise from a Pure Data prototype, together with the first tests carried out to evaluate and improve the result. SoundRise is an interactive multimodal application for educational purposes, designed to offer primary-school children an alternative way of learning the characteristics of sound. The child can explore these characteristics through a real-time graphical representation of their own vocal productions. The protagonist of this game is the sun: its position on the horizon of a stylised landscape, its size, and its colour are used to give a coherent depiction of the characteristics of the sound. The height of the sun above the horizon corresponds to the pitch of the tone produced by the user and its size to the amplitude of the production, while duration is represented by the sun's smiling face, which opens or closes its eyes in the presence or absence of vocal production. Since a reasonable image of timbre could not be obtained from the analysis of voice characteristics, timbre is replaced by a visualisation of the five vowels of the Italian language, each associated with a colour used to draw the sun. The characteristics can be inspected individually or all together, leaving the child full freedom of choice. The aim was to offer a simple and friendly interface that is intuitive and does not confuse the user or distract them from the goal of the application, drawing on some of the characteristics identified as important for an application of this kind in the work of Anne-Marie Oster, which evaluates the clinical use of a speech-training application in the treatment of children with hearing impairment.
    Among the goals of this work is the study of the potential offered by Pure Data and libpd, a C++ library that allows a Pure Data instance to be embedded within any application. This library makes it possible to use Pure Data to create a prototype quickly and simply, and then to convert it into an easily distributable multi-platform application that does not depend on Pure Data being installed on the user's computer. Particularly interesting is the possibility of building applications for the main mobile operating systems from the same code. The project was also structured so as to obtain a versatile platform adaptable to experimentation with audio-analysis technologies in the treatment of pathologies affecting basic vocal production. The project was tested on Apple Mac OS X, Apple iOS, Microsoft Windows XP and Microsoft Windows 7; testing and any necessary adaptation on GNU/Linux and Google Android platforms are left to future work. The first chapter presents a selection of works on applications that support speech therapists in the treatment of pathologies affecting vocal production, whether due to physical or neurological deficits, and closes with an overview of the very recent field of mobile applications supporting people with autism spectrum disorder, their families, and the therapists who work with these syndromes. The second chapter presents the goals of this work, an analysis of the tools used, the characteristics of the projects for the three versions of the application, and the guidelines to follow when building a Pure Data patch suitable for use with libpd, and ends by describing the most interesting parts of the source code produced.
    The third chapter illustrates the features offered by the application's graphical interface and how it is used. Finally, the fourth chapter reports the results of a usability test of the application administered to a heterogeneous audience of users.
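The voice-to-sun mapping described above (pitch sets the sun's height, amplitude its size, the detected vowel its colour, and voicing whether its eyes are open) can be sketched as follows. The pitch range, colour assignments, and normalisation are illustrative assumptions; SoundRise's actual values are not given in the abstract, and vowel detection is treated as an external input.

```python
# Hedged sketch of SoundRise's voice-to-sun mapping. The 100-400 Hz pitch
# span, the vowel colours, and the clamping are assumptions for illustration.

VOWEL_COLOURS = {"a": "red", "e": "orange", "i": "yellow", "o": "green", "u": "blue"}

def sun_state(pitch_hz: float, amplitude: float, vowel: str, voiced: bool) -> dict:
    """Map vocal features to the sun's on-screen state."""
    # Normalise pitch to a 0..1 height over an assumed 100-400 Hz span.
    height = max(0.0, min(1.0, (pitch_hz - 100.0) / 300.0))
    size = max(0.0, min(1.0, amplitude))
    return {
        "height": height,
        "size": size,
        "colour": VOWEL_COLOURS.get(vowel, "white"),  # unknown vowel: neutral
        "eyes_open": voiced,  # duration shown by eyes staying open while voicing
    }
```

Each frame of audio analysis would produce one such state, so the sun moves continuously as the child vocalises.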

    Tracking Visible Features of Speech for Computer-Based Speech Therapy for Childhood Apraxia of Speech

    At present, there are few, if any, effective computer-based speech therapy systems (CBSTs) that support the at-home component of clinical interventions for Childhood Apraxia of Speech (CAS). PROMPT, an established speech therapy intervention for CAS, has the potential to be supported via a CBST, which could increase engagement and provide valuable feedback to the child. However, the necessary computational techniques have not yet been developed and evaluated. In this thesis, I describe the development of some of the key underlying computational components required for such a system. These components perform camera-based tracking of visible features of speech related to jaw kinematics, and would also be necessary for the serious game that we have envisioned.
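A jaw-kinematics feature of the kind described above could be derived from two camera-tracked facial points. The sketch below is an assumed illustration of the idea (a normalised upper-lip-to-chin distance), not the thesis's actual method; the point names and normalisation by face height are hypothetical.

```python
# Illustrative jaw-opening signal from two tracked 2-D facial points.
# Point choice and face-height normalisation are assumptions.
import math

def jaw_opening(upper_lip: tuple, chin: tuple, face_height: float) -> float:
    """Normalised vertical jaw opening from two tracked 2-D points.

    Dividing by face height makes the signal roughly invariant to the
    child's distance from the camera.
    """
    if face_height <= 0:
        return 0.0
    return math.dist(upper_lip, chin) / face_height
```

Tracked over time, such a signal gives the jaw-movement trajectories that a PROMPT-style intervention could give feedback on.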

    Encouraging the expression of the unspeakable: influence and agency in a robotic creature

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2007. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 165-177) and index. The boundary between subject and object is being blurred ever further by the creation of new types of computational objects. It is especially when these objects take the form of robotic creatures that we get to question the powerful impact of the object on the person. Couple this with the expression of internal, unspoken experience through the making of non-speech sounds, and we have a situation that demands new thoughts and new methodologies. This thesis works through these questions via the design and study of syngvab, a robotic marionette that moves in response to human non-speech vocal sounds. I draw from the world of puppetry and performing objects in the creation of syngvab, the object and its stage, showing how this old tradition is directly relevant to the development of non-anthropomorphic, non-zoomorphic robotic creatures. I show how this mongrel of an object requires different methodologies of study, drawing on actor-network theory to examine syngvab in a manner symmetric with the human participants. The results of a case study interaction with syngvab support the contention that non-speech sounds, as drawn out by a robotic creature, are a potent means of exploring and investigating the unspeakable. By Nicholas A. Knouf. S.M.

    Models and analysis of vocal emissions for biomedical applications

    This book of Proceedings collects the papers presented at the 4th International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications, MAVEBA 2005, held 29-31 October 2005 in Firenze, Italy. The workshop is organised every two years and aims to stimulate contacts between specialists active in research and industrial developments in the area of voice analysis for biomedical applications. The scope of the workshop includes all aspects of voice modelling and analysis, ranging from fundamental research to all kinds of biomedical applications and related established and advanced technologies.

    Designing a parent-driven coaching system for indirect speech therapy

    Ph.D. Thesis. According to the UK Department for Education's 2017 annual report, seven percent of preschool children experience speech and language developmental delays. The report goes on to argue that these delays negatively impact success at school. Such delays are more common amongst children with cerebral palsy or autism. Early intervention therapy is recognised as vital in minimising the long-term impact of such delays, and the responsibility for delivering such therapies most often lies with parents or primary carers. Therapists typically support parents by providing speech and language therapy sessions. The primary goals of these sessions are to teach parents techniques that promote their children's communication skills, to identify communication opportunities, and to adopt and adapt learned communication strategies in everyday interactions with their children in their natural environment. While parent-delivered therapies can alleviate the demand on therapists and healthcare services by reducing the amount of professional contact time, they can also create an overwhelming burden on parents. This thesis is an in-depth exploration of early speech therapy programs; it identifies the values and support needs that can be used to understand parents' and therapists' experiences, as well as indicators for improving therapy adoption in this context. Additionally, this research investigates the role of coaching technology in improving communication and collaboration between parents. New parent-driven coaching technologies to support reflection on home practices and address the challenges of home therapy delivery are also presented. A case study approach is undertaken to explore this area with two different clinical partners and therapy protocols. Each study commences with a contextual investigation and moves toward co-design and evaluation of digital solutions with therapists and parents. The first case study, eSALT, presents the design of KeepCam, a parent-led selective data capture and sharing tool to support parents of children with cerebral palsy. The second case study presents the design of ePACT, a self-reflection tool to support parents of children with autism. This thesis reports on how mobile video coaching tools can be used as an external driver for continuous engagement with therapy programs and can facilitate social support. It also identifies opportunities for technology to play important roles in supporting early therapy programs. The thesis draws upon these case studies to inform the design of a responsive model of support for indirect therapies, through which the role of design and power relations in healthcare are explored. Saudi Ministry of Education; King Saud University.