
    Continuous Action Recognition Based on Sequence Alignment

    Continuous action recognition is more challenging than isolated recognition because classification and segmentation must be carried out simultaneously. We build on the well-known dynamic time warping (DTW) framework and devise a novel visual alignment technique, dynamic frame warping (DFW), which performs isolated recognition based on a per-frame representation of videos and on aligning a test sequence with a model sequence. Moreover, we propose two extensions, one-pass DFW and two-pass DFW, which enable recognition to be performed concomitantly with segmentation. These two methods have their roots in continuous speech recognition and, to the best of our knowledge, their extension to continuous visual action recognition has been overlooked. We test and illustrate the proposed techniques on a recently released dataset (RAVEL) and on two public-domain datasets widely used in action recognition (Hollywood-1 and Hollywood-2). We also compare the performance of the proposed isolated and continuous recognition algorithms with several recently published methods.
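
The DTW framework this abstract builds on can be sketched as follows. This is a generic textbook DTW between two scalar sequences with an absolute-difference local cost; the paper's DFW operates on per-frame video representations instead, so the sequences and distance function here are illustrative assumptions:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping cost between two 1-D sequences.

    cost[i, j] holds the minimal accumulated alignment cost between
    x[:i] and y[:j]; the recurrence allows matches, insertions, and
    deletions so the sequences can warp in time.
    """
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])          # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Identical sequences align at zero cost; a shifted one does not.
print(dtw_distance([1, 2, 3], [1, 2, 3]))  # 0.0
```

In the isolated-recognition setting described above, a test sequence would be aligned against each model sequence and classified by the lowest alignment cost.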

    Learning About Meetings

    Most people participate in meetings almost every day, often multiple times a day. The study of meetings is important but challenging, as it requires an understanding of social signals and complex interpersonal dynamics. Our aim in this work is to apply a data-driven approach to the science of meetings. We provide tentative evidence that: i) it is possible to automatically detect, from the local dialogue acts alone, when during a meeting a key decision is taking place; ii) there are common patterns in the way social dialogue acts are interspersed throughout a meeting; iii) at the time key decisions are made, the amount of time left in the meeting can be predicted from the amount of time that has passed; and iv) it is often possible to predict whether a proposal made during a meeting will be accepted or rejected based entirely on the language (the set of persuasive words) used by the speaker.

    Conditional Random Field Autoencoders for Unsupervised Structured Prediction

    Full text link
    We introduce a framework for unsupervised learning of structured predictors with overlapping, global features. Each input's latent representation is predicted, conditional on the observable data, using a feature-rich conditional random field. A reconstruction of the input is then (re)generated, conditional on the latent structure, using models for which maximum-likelihood estimation has a closed form. Our autoencoder formulation enables efficient learning without making unrealistic independence assumptions or restricting the kinds of features that can be used. We illustrate insightful connections to traditional autoencoders, posterior regularization, and multi-view learning. We show competitive results with instantiations of the model on two canonical NLP tasks, part-of-speech induction and bitext word alignment, and show that training our model can be substantially more efficient than comparable feature-rich baselines.
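
The encode-then-reconstruct structure can be illustrated with a toy, zeroth-order sketch for tag induction: an encoder scores latent tags from features of each token, a decoder regenerates the token from its tag with a closed-form categorical model, and the reconstruction likelihood marginalizes over tags. The weights, vocabulary, and zeroth-order (per-token) simplification are all invented for illustration; the actual model uses a sequence-level CRF encoder:

```python
import numpy as np

rng = np.random.default_rng(0)
V, K = 5, 3                                 # vocabulary size, latent tags

W = rng.normal(size=(K, V))                 # encoder: tag score per word
theta = rng.dirichlet(np.ones(V), size=K)   # decoder: p(word | tag)

def reconstruction_loglik(tokens):
    """log p(x_hat = x | x) = sum_t log sum_y p(y | x_t) p(x_t | y)."""
    total = 0.0
    for w in tokens:
        scores = W[:, w]                    # encoder potentials for tags
        p_y = np.exp(scores - scores.max())
        p_y /= p_y.sum()                    # softmax: p(tag | token)
        total += np.log(p_y @ theta[:, w])  # marginalize over latent tags
    return total

ll = reconstruction_loglik([0, 3, 1, 4])    # log-likelihood of a toy input
```

Because the decoder is a simple categorical model, its maximum-likelihood update given posterior tag counts has a closed form, which is the efficiency point the abstract makes.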

    Frames: A Corpus for Adding Memory to Goal-Oriented Dialogue Systems

    This paper presents the Frames dataset (available at http://datasets.maluuba.com/Frames), a corpus of 1,369 human-human dialogues with an average of 15 turns per dialogue. We developed this dataset to study the role of memory in goal-oriented dialogue systems. Based on Frames, we introduce a task called frame tracking, which extends state tracking to a setting where several states are tracked simultaneously, and we propose a baseline model for this task. We show that Frames can also be used to study memory in dialogue management and information presentation through natural language generation.
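
The difference from single-state tracking can be sketched as follows: the tracker keeps every frame (set of constraints) the user has discussed, so earlier options remain addressable later in the dialogue. The class, slot names, and example dialogue are hypothetical and do not reflect the Frames annotation schema:

```python
class FrameTracker:
    """Minimal sketch of frame tracking: many states kept at once."""

    def __init__(self):
        self.frames = []    # every frame created during the dialogue
        self.active = None  # index of the frame currently in focus

    def new_frame(self, constraints):
        """Start a new frame, e.g. when the user asks about a new option."""
        self.frames.append(dict(constraints))
        self.active = len(self.frames) - 1
        return self.active

    def update_active(self, **constraints):
        """Refine the frame currently under discussion."""
        self.frames[self.active].update(constraints)

    def switch_to(self, idx):
        """Return to a previously discussed frame (the memory aspect)."""
        self.active = idx

tracker = FrameTracker()
rome = tracker.new_frame({"destination": "Rome", "budget": 1500})
tokyo = tracker.new_frame({"destination": "Tokyo", "budget": 2500})
tracker.switch_to(rome)              # user: "back to the Rome trip"
tracker.update_active(duration_days=7)
```

A classical state tracker would have overwritten the Rome constraints when the Tokyo option was raised; keeping both frames is what makes the comparison dialogue resolvable.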

    Development Considerations for Implementing a Voice-Controlled Spacecraft System

    As computational power and speech recognition algorithms improve, the consumer market will see better-performing speech recognition applications. The cell-phone and Internet-related service industries have further enhanced speech recognition applications using artificial intelligence and statistical data-mining techniques. These improvements to speech recognition technology (SRT) may one day help astronauts on future deep-space human missions that require voice control of complex spacecraft systems or spacesuit applications. Although SRT and more advanced speech recognition techniques show promise, using this technology in a space application such as a vehicle, habitat, or spacesuit requires careful consideration. This paper provides considerations and guidance for the use of SRT in voice-controlled spacecraft system (VCSS) applications for space missions, specifically in command-and-control (C2) applications where commanding is user-initiated. First, current SRT limitations as known at the time of this report are given. Then, highlights of SRT used in the space program provide the reader with a history of some human spaceflight applications and research. Next, an overview of the speech production process and the intrinsic variations of speech is provided. Finally, general guidance and considerations are given for the development of a VCSS using a human-centered design approach for space applications, including vocabulary selection and performance testing, as well as VCSS considerations for C2 dialogue management design, feedback, error handling, and evaluation/usability testing.
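
The C2 dialogue-management concerns mentioned above (constrained vocabulary, feedback, error handling) can be sketched as a confirm-before-execute loop. The command set, threshold value, and response wording are hypothetical illustrations, not the paper's design:

```python
# Hypothetical confirm-before-execute pattern for a voice C2 loop:
# reject low-confidence recognitions, gate against a fixed vocabulary,
# and read the command back before acting on it.

COMMANDS = {"open hatch", "close hatch", "cabin lights on", "cabin lights off"}
CONFIDENCE_THRESHOLD = 0.85  # assumed tuning parameter

def handle_utterance(text, confidence):
    """Return the system's spoken feedback for one recognized utterance."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "Please repeat your command."      # error handling
    if text not in COMMANDS:
        return f"Unknown command: {text}."        # vocabulary gate
    return f"Confirm: {text}?"                    # feedback before execution

print(handle_utterance("open hatch", 0.92))  # Confirm: open hatch?
```

The explicit read-back step reflects the human-centered design point: in a safety-critical C2 setting, a misrecognized command should be surfaced to the user before anything is executed.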