
    Proceedings of the Workshop Semantic Content Acquisition and Representation (SCAR) 2007

    This is the proceedings of the Workshop on Semantic Content Acquisition and Representation, held in conjunction with NODALIDA 2007, on May 24, 2007 in Tartu, Estonia.

    Language learning in aphasia: A narrative review and critical analysis of the literature with implications for language therapy

    People with aphasia (PWA) present with language deficits, including word retrieval difficulties, after brain damage. Language learning is an essential life-long human capacity that may support treatment-induced language recovery after brain insult. This prospect has motivated a growing interest in the study of language learning in PWA over the last few decades. Here, we critically review the current literature on language learning ability in aphasia. The existing studies in this area indicate that (i) language learning can remain functional in some PWA, (ii) inter-individual variability in learning performance is large in PWA, (iii) language processing, short-term memory and lesion site are associated with learning ability, and (iv) preliminary evidence suggests a relationship between learning ability and treatment outcomes in this population. Based on the reviewed evidence, we propose a potential account for the interplay between language and memory/learning systems to explain spared/impaired language learning and its relationship to language therapy in PWA. Finally, we indicate potential avenues for future research that may promote more cross-talk between cognitive neuroscience and aphasia rehabilitation.

    Genie: A Generator of Natural Language Semantic Parsers for Virtual Assistant Commands

    To understand diverse natural language commands, virtual assistants today are trained with numerous labor-intensive, manually annotated sentences. This paper presents a methodology and the Genie toolkit that can handle new compound commands with significantly less manual effort. We advocate formalizing the capability of virtual assistants with a Virtual Assistant Programming Language (VAPL) and using a neural semantic parser to translate natural language into VAPL code. Genie needs only a small, realistic set of input sentences for validating the neural model. Developers write templates to synthesize data; Genie uses crowdsourced paraphrases and data augmentation, along with the synthesized data, to train a semantic parser. We also propose design principles that make VAPL languages amenable to natural language translation. We apply these principles to revise ThingTalk, the language used by the Almond virtual assistant. We use Genie to build the first semantic parser that can support compound virtual assistant commands with unquoted free-form parameters. Genie achieves 62% accuracy on realistic user inputs. We demonstrate Genie's generality by showing a 19% and 31% improvement over the previous state of the art on a music skill, aggregate functions, and access control. Comment: To appear in PLDI 201
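    The template-driven synthesis the abstract describes can be illustrated with a minimal sketch. Everything here is hypothetical: the template strings, the VAPL-like program skeletons, and the placeholder values are invented for illustration and do not reflect Genie's actual grammar or the real ThingTalk syntax. The idea is only that each template pairs a natural-language pattern with a code skeleton, and expanding placeholders over example values yields (sentence, program) training pairs.

    ```python
    import itertools

    # Hypothetical templates: each pairs an utterance pattern with a
    # VAPL-style program skeleton. Placeholders like {song} are shared
    # between the two sides so expansion keeps them aligned.
    TEMPLATES = [
        ("play {song} on {service}",
         "@{service}.play(song=\"{song}\")"),
        ("remind me to {task} at {time}",
         "@timer.at(time=\"{time}\") => @notify.send(message=\"{task}\")"),
    ]

    # Example placeholder values (invented for this sketch).
    VALUES = {
        "song": ["Hey Jude", "Take Five"],
        "service": ["spotify", "youtube"],
        "task": ["water the plants", "call mom"],
        "time": ["6pm", "noon"],
    }

    def synthesize(templates, values):
        """Expand every template with all combinations of its placeholder values,
        producing (natural-language sentence, program) training pairs."""
        pairs = []
        for utterance, program in templates:
            # Find which placeholders this template actually uses.
            slots = [s for s in values if "{" + s + "}" in utterance]
            for combo in itertools.product(*(values[s] for s in slots)):
                binding = dict(zip(slots, combo))
                pairs.append((utterance.format(**binding),
                              program.format(**binding)))
        return pairs

    data = synthesize(TEMPLATES, VALUES)
    print(len(data))   # 2x2 combinations per template -> 8 pairs
    print(data[0])     # ('play Hey Jude on spotify', '@spotify.play(song="Hey Jude")')
    ```

    In the actual system, pairs like these would then be paraphrased by crowd workers and augmented before training the neural semantic parser; this sketch covers only the synthesis step.
    
    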