
    ICMI 2012 chairs' welcome

    Welcome to Santa Monica and to the 14th edition of the International Conference on Multimodal Interaction, ICMI 2012. ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. We had a record number of submissions this year: 147 (74 long papers, 49 short papers, 5 special session papers, and 19 demo papers). From these submissions, we accepted 15 papers for long oral presentation (20.3% acceptance rate), 10 papers for short oral presentation (20.4% acceptance rate), and 19 papers presented as posters, for a total acceptance rate of 35.8% across all short and long papers. Twelve of the 19 demo papers were accepted. All 5 special session papers were directly invited by the organizers and were all accepted. In addition, the program includes three invited keynote talks.

    The first of the two novelties introduced at ICMI this year is the Multimodal Grand Challenges. Developing systems that can robustly understand human-human communication or respond to human input requires identifying the best algorithms and their failure modes. In fields such as computer vision, speech recognition, and computational linguistics, the availability of datasets and common tasks has led to great progress. This year, we accepted four challenge workshops: the Audio-Visual Emotion Challenge (AVEC), the Haptic Voice Recognition Challenge, the D-META Challenge, and the Brain-Computer Interface Challenge. Stefanie Tellex and Daniel Gatica-Perez are co-chairing the Grand Challenges this year. All four Grand Challenges will be presented on Monday, October 22nd, and a summary session will take place on Wednesday afternoon, October 24th, during the main conference.

    The second novelty at ICMI this year is the Doctoral Consortium, a separate one-day event taking place on Monday, October 22nd, co-chaired by Bilge Mutlu and Carlos Busso. The goal of the Doctoral Consortium is to provide Ph.D. students with an opportunity to present their work to a group of mentors and peers from a diverse set of academic and industrial backgrounds and institutions, to receive feedback on their doctoral research plans and progress, and to build a cohort of young researchers interested in designing multimodal interfaces. All accepted students receive a travel grant to attend the conference. From among 25 applications, 14 students were accepted for participation and travel funding. The organizers thank the National Science Foundation (award IIS-1249319) and the conference sponsors for their financial support.

    Dissecting the Shared Genetic Architecture of Suicide Attempt, Psychiatric Disorders, and Known Risk Factors

    Background: Suicide is a leading cause of death worldwide, and nonfatal suicide attempts, which occur far more frequently, are a major source of disability and of social and economic burden. Both have substantial genetic etiology, which is partially shared with and partially distinct from that of related psychiatric disorders.
    Methods: We conducted a genome-wide association study (GWAS) of 29,782 suicide attempt (SA) cases and 519,961 controls in the International Suicide Genetics Consortium (ISGC). The GWAS of SA was conditioned on psychiatric disorders using GWAS summary statistics via multitrait-based conditional and joint analysis, to remove genetic effects on SA mediated by psychiatric disorders. We investigated the shared and divergent genetic architectures of SA, psychiatric disorders, and other known risk factors.
    Results: Two loci reached genome-wide significance for SA: the major histocompatibility complex and an intergenic locus on chromosome 7. The latter remained associated with SA after conditioning on psychiatric disorders and replicated in an independent cohort from the Million Veteran Program; this locus has been implicated in risk-taking behavior, smoking, and insomnia. SA showed strong genetic correlation with psychiatric disorders, particularly major depression, and also with smoking, pain, risk-taking behavior, sleep disturbances, lower educational attainment, reproductive traits, lower socioeconomic status, and poorer general health. After conditioning on psychiatric disorders, the genetic correlations between SA and psychiatric disorders decreased, whereas those with nonpsychiatric traits remained largely unchanged.
    Conclusions: Our results identify a risk locus that contributes more strongly to SA than to other phenotypes and suggest a shared underlying biology between SA and known risk factors that is not mediated by psychiatric disorders.
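    The conditioning step described in the Methods (removing genetic effects on SA that are mediated by psychiatric disorders) can be pictured as a per-SNP adjustment. The sketch below is a toy approximation in Python, not the consortium's actual pipeline, which used multitrait-based conditional and joint analysis on full genome-wide summary statistics; the effect sizes, standard errors, and the mediation coefficient b_med are hypothetical.

```python
import numpy as np

# Toy illustration of conditioning a SNP's effect on suicide attempt (SA)
# on a correlated psychiatric phenotype. Real analyses operate on
# genome-wide summary statistics and estimate the mediation effect from
# the data; here b_med and all per-SNP numbers are made up.

def conditioned_effect(b_sa, se_sa, b_psy, se_psy, b_med):
    """Return an approximate SA effect after removing the component
    mediated by the psychiatric disorder.

    b_sa, se_sa   : SNP effect and standard error on suicide attempt
    b_psy, se_psy : SNP effect and standard error on the psychiatric disorder
    b_med         : assumed causal effect of the disorder on SA (hypothetical)
    """
    b_cond = b_sa - b_med * b_psy                       # subtract the mediated path
    se_cond = np.sqrt(se_sa**2 + (b_med * se_psy)**2)   # naive error propagation
    z = b_cond / se_cond
    return b_cond, se_cond, z

# Hypothetical SNP: strong raw association with SA, partly via major depression.
b_cond, se_cond, z = conditioned_effect(
    b_sa=0.040, se_sa=0.006,    # raw SA effect
    b_psy=0.050, se_psy=0.007,  # effect on the psychiatric disorder
    b_med=0.45,                 # assumed mediation coefficient
)
print(f"conditioned beta={b_cond:.3f}, se={se_cond:.3f}, z={z:.2f}")
```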

    Facilitating multiparty dialog with gaze, gesture, and speech

    We study how synchronized gaze, gesture, and speech rendered by an embodied conversational agent can influence the flow of conversations in multiparty settings. We review a computational framework for turn taking that provides the foundation for tracking and communicating intentions to hold, release, or take control of the conversational floor. We then present details of the implementation of the approach in an embodied conversational agent and describe experiments with the system in a shared task setting. Finally, we discuss results showing how the verbal and non-verbal cues used by the avatar can shape the dynamics of multiparty conversation.
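    The turn-taking framework summarized above tracks whether each participant, including the agent, intends to hold, release, or take the conversational floor, and the agent signals its own intention through coordinated gaze, gesture, and speech. A minimal sketch of such floor-state bookkeeping follows; the state names and the FloorTracker class are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional

class FloorIntent(Enum):
    HOLD = auto()     # keep speaking / keep the floor
    RELEASE = auto()  # yield the floor to others
    TAKE = auto()     # claim the floor
    NONE = auto()     # no floor-related intention

@dataclass
class FloorTracker:
    """Toy tracker for who holds the conversational floor in a multiparty setting."""
    holder: Optional[str] = None
    intents: dict = field(default_factory=dict)

    def observe(self, participant: str, intent: FloorIntent) -> None:
        """Record an intention inferred from speech, gaze, or gesture cues."""
        self.intents[participant] = intent

    def update(self) -> Optional[str]:
        """Resolve the next floor holder from the recorded intentions."""
        if self.holder and self.intents.get(self.holder) == FloorIntent.RELEASE:
            self.holder = None
        if self.holder is None:
            takers = [p for p, i in self.intents.items() if i == FloorIntent.TAKE]
            if takers:
                self.holder = takers[0]  # naive tie-break: first claimant wins
        return self.holder

# Example: the avatar releases the floor and a user claims it.
tracker = FloorTracker(holder="avatar")
tracker.observe("avatar", FloorIntent.RELEASE)
tracker.observe("user_1", FloorIntent.TAKE)
print(tracker.update())  # -> "user_1"
```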

    Open-World Dialog: Challenges, Directions, and Prototype

    We present an investigation of open-world dialog, centering on building and studying systems that can engage in conversation in an open-world context, where multiple people with different needs, goals, and long-term plans may enter, interact, and leave an environment. We outline and discuss a set of challenges and core competencies required for supporting the kind of fluid multiparty interaction that people expect when conversing and collaborating with other people. As a concrete example, we then focus on the challenges faced by receptionists who field requests at the entrances to corporate buildings. We review the subtleties and difficulties of creating an automated receptionist that can help people address their needs with the ease and etiquette expected of a human receptionist, and we discuss details of the construction and operation of a working prototype.

    Models for Multiparty Engagement in Open-World Dialog

    We present computational models that allow spoken dialog systems to handle multiparticipant engagement in open, dynamic environments, where multiple people may enter and leave conversations and interact with the system and with one another in a natural manner. The models for managing the engagement process include components for (1) sensing the engagement state, actions, and intentions of multiple agents in the scene, (2) making engagement decisions (i.e., whom to engage with, and when), and (3) rendering these decisions as a set of coordinated low-level behaviors in an embodied conversational agent. We review results from a study of interactions "in the wild" with a system that implements such a model.
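    The three-part structure named in the abstract (sensing engagement state and intentions, deciding whom to engage and when, and rendering the decision as coordinated behaviors) can be sketched as a simple sense-decide-render pipeline. The class names, cue heuristics, and behavior labels below are illustrative assumptions, not the system's actual API.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    """Minimal per-person state inferred from vision and audio sensing."""
    pid: str
    distance_m: float    # distance from the agent
    facing_agent: bool   # head pose oriented toward the agent
    speaking: bool

def sense(participants):
    """(1) Estimate each person's intention to engage from simple cues."""
    scores = {}
    for p in participants:
        score = 0.0
        score += 1.0 if p.facing_agent else 0.0
        score += 1.0 if p.distance_m < 1.5 else 0.0
        score += 0.5 if p.speaking else 0.0
        scores[p.pid] = score
    return scores

def decide(scores, threshold=1.5):
    """(2) Decide whom to engage with, and when (only if someone crosses the threshold)."""
    pid, best = max(scores.items(), key=lambda kv: kv[1], default=(None, 0.0))
    return pid if best >= threshold else None

def render(target):
    """(3) Map the decision to coordinated low-level behaviors of the avatar."""
    if target is None:
        return ["idle_gaze_scan"]
    return [f"gaze_at:{target}", "smile", f"greet:{target}"]

# Example: one person approaches and looks at the agent; another stands far away.
people = [Participant("p1", 1.2, True, False), Participant("p2", 3.0, False, True)]
print(render(decide(sense(people))))  # -> ['gaze_at:p1', 'smile', 'greet:p1']
```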