2,008 research outputs found

    The spatiotemporal representation of dance and music gestures using topological gesture analysis (TGA)

    Spatiotemporal gestures in music and dance have been approached using both qualitative and quantitative research methods. Applying quantitative methods has offered new perspectives but imposed several constraints, such as artificial metric systems, weak links with qualitative information, and incomplete accounts of variability. In this study, we tackle these problems using concepts from topology to analyze gestural relationships in space. Topological Gesture Analysis (TGA) relies on the projection of musical cues onto gesture trajectories, which generates point clouds in three-dimensional space. These point clouds can be interpreted as topologies equipped with musical qualities, which gives us an idea of the relationships between gesture, space, and music. Using this method, we investigate the relationships between musical meter, dance style, and expertise in two popular dances (samba and Charleston). The results show how musical meter is encoded in the dancer's space and how relevant information about style and expertise can be revealed by means of simple topological relationships.
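    The core TGA step, projecting musical cues onto a gesture trajectory to obtain a labeled point cloud, can be sketched as follows. This is a minimal illustration in NumPy; the function name `project_cues` and the toy trajectory are our assumptions, not from the paper:

```python
import numpy as np

def project_cues(times, trajectory, cue_times):
    """Interpolate 3D positions at annotated musical cue times,
    yielding one point-cloud point per cue."""
    return np.column_stack([
        np.interp(cue_times, times, trajectory[:, d]) for d in range(3)
    ])

# Toy data: a hand moving along a curve, with cues on three beat onsets.
times = np.linspace(0.0, 4.0, 401)                  # 4 s sampled at ~100 Hz
trajectory = np.column_stack([times, np.sin(times), np.zeros_like(times)])
beats = np.array([1.0, 2.0, 3.0])                   # beat onsets (seconds)
cloud = project_cues(times, trajectory, beats)      # one 3D point per cue
print(cloud.shape)  # (3, 3)
```

    Each metric level (e.g., downbeats versus offbeats) would yield its own point cloud, whose extent and overlap can then be compared across styles and levels of expertise.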

    Sketching space

    In this paper, we present a sketch modelling system which we call Stilton. The program resembles a desktop VRML browser, allowing a user to navigate a three-dimensional model in a perspective projection, or panoramic photographs, which the program maps onto the scene as a 'floor' and 'walls'. We place an imaginary two-dimensional drawing plane in front of the user, and any geometric information that the user sketches onto this plane may be reconstructed to form solid objects through an optimization process. We show how the system can be used to reconstruct geometry from panoramic images, or to add new objects to an existing model. While panoramic images can greatly assist with some aspects of site familiarization and qualitative assessment of a site, without the addition of some foreground geometry they offer only limited utility in a design context. Therefore, we suggest that the system may be of use in 'just-in-time' CAD recovery of complex environments, such as shop floors or construction sites, by recovering objects through sketched overlays where other methods, such as automatic line retrieval, may be impossible. The result of using the system in this manner is the 'sketching of space', sketching out a volume around the user; once the geometry has been recovered, the designer is free to quickly sketch design ideas into the newly constructed context, or to analyze the space around them. Although end-user trials have not yet been undertaken, we believe that this implementation may afford a user interface that is both accessible and robust, and that the rapid growth of pen-computing devices will further stimulate activity in this area.
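    Stilton's optimization-based reconstruction is more general, but the basic idea of dropping a 2D stroke through the viewpoint onto scene geometry can be illustrated with a single ray-plane intersection. All names and coordinates below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Return the point where a ray meets a plane (None if parallel)."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction

# A camera at eye height looks through a point sketched on the drawing
# plane; the stroke is "dropped" onto the floor (the plane y = 0).
eye = np.array([0.0, 1.6, 0.0])
sketch_point = np.array([0.5, 1.2, -1.0])   # point on the 2D drawing plane
ray_dir = sketch_point - eye
floor_hit = intersect_ray_plane(eye, ray_dir,
                                np.zeros(3), np.array([0.0, 1.0, 0.0]))
print(floor_hit)  # [ 2.  0. -4.]
```

    Repeating this for every stroke vertex gives a footprint on the floor, from which solid objects can then be extruded or optimized.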

    Exploring the Referral and Usage of Science Fiction in HCI Literature

    Research on science fiction (sci-fi) in scientific publications has indicated the use of sci-fi stories, movies, or shows to inspire novel Human-Computer Interaction (HCI) research. Yet no studies have analysed sci-fi in a top-ranked computer science conference to date. For that reason, we examine the CHI main track for the presence and nature of sci-fi referrals in relation to HCI research. We search for six sci-fi terms in a dataset of 5,812 CHI main proceedings and code the context of 175 sci-fi referrals in 83 papers indexed in the CHI main track. In our results, we categorize these papers into five contemporary HCI research themes wherein sci-fi and HCI interconnect: 1) Theoretical Design Research; 2) New Interactions; 3) Human-Body Modification or Extension; 4) Human-Robot Interaction and Artificial Intelligence; and 5) Visions of Computing and HCI. In conclusion, we discuss results and implications located in the promising arena of sci-fi and HCI research.
    Comment: v1: 20 pages, 4 figures, 3 tables, HCI International 2018 accepted submission; v2: added link/DOI for Springer proceedings.
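    The term-search step can be approximated with a plain case-insensitive scan over paper texts. The term list and toy "papers" below are illustrative stand-ins, not the study's actual six terms or its CHI dataset:

```python
import re

SCIFI_TERMS = ["science fiction", "sci-fi", "star trek"]  # illustrative subset

def find_referrals(papers, terms=SCIFI_TERMS):
    """Return {paper_id: matched terms} for papers mentioning any term."""
    hits = {}
    for pid, text in papers.items():
        matched = [t for t in terms
                   if re.search(re.escape(t), text, re.IGNORECASE)]
        if matched:
            hits[pid] = matched
    return hits

papers = {
    "p1": "We draw on Star Trek's communicator to motivate the design.",
    "p2": "A study of touch input latency.",
}
print(find_referrals(papers))  # {'p1': ['star trek']}
```

    The matched passages would then be coded manually for context, as the study does for its 175 referrals.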

    Dance-the-music : an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    In this article, a computational platform is presented, entitled “Dance-the-Music”, that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform facilitates training basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teacher's models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms based on a template-matching method can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students to master the basics of dance figures.
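    A heavily simplified sketch of the template idea, assuming repetitions are already time-aligned and sampled at the same rate. The real platform uses richer spatiotemporal motion templates; names like `build_template` and the toy data are ours:

```python
import numpy as np

def build_template(repetitions):
    """Average time-aligned repetitions of a dance step into one template."""
    return np.mean(np.stack(repetitions), axis=0)

def match_score(performance, template):
    """Mean per-frame Euclidean distance; lower means closer to the model."""
    return float(np.mean(np.linalg.norm(performance - template, axis=1)))

# Toy data: three noisy repetitions of a 2-frame, 3D step by the teacher.
rng = np.random.default_rng(0)
step = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
reps = [step + rng.normal(0, 0.01, step.shape) for _ in range(3)]
template = build_template(reps)

# A faithful performance scores better than a displaced one.
print(match_score(step, template) < match_score(step + 0.5, template))  # True
```

    In a real-time setting the incoming motion-capture stream would be matched against templates frame by frame to monitor the student's performance.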

    Computer-based tracking, analysis, and visualization of linguistically significant nonmanual events in American Sign Language (ASL)

    Our linguistically annotated American Sign Language (ASL) corpora have formed a basis for research to automate the detection by computer of essential linguistic information conveyed through facial expressions and head movements. We have tracked head position and facial deformations, and used computational learning to discern specific grammatical markings. Our ability to detect, identify, and temporally localize the occurrence of such markings in ASL videos has recently been improved by the incorporation of (1) new techniques for deformable model-based 3D tracking of head position and facial expressions, which provide significantly better tracking accuracy and recover quickly from temporary loss of track due to occlusion; and (2) a computational learning approach incorporating 2-level Conditional Random Fields (CRFs), suited to the multi-scale spatio-temporal characteristics of the data, which analyses not only low-level appearance characteristics but also the patterns that enable identification of significant gestural components, such as periodic head movements and raised or lowered eyebrows. Here we summarize our linguistically motivated computational approach and the results for detection and recognition of nonmanual grammatical markings; demonstrate our data visualizations and discuss their relevance for linguistic research; and describe work underway to enable such visualizations to be produced over large corpora and shared publicly on the Web.
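    The authors' 2-level CRF is considerably more elaborate, but the core of CRF-style sequence labeling, choosing the best label sequence under per-frame and transition scores, can be sketched with plain Viterbi decoding. All scores below are invented for illustration:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Most likely label sequence for a linear-chain model (log scores).
    emissions: (T, K) per-frame scores; transitions: (K, K) prev->cur."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions          # cand[prev, cur]
        back[t] = np.argmax(cand, axis=0)            # best prev per label
        score = cand[back[t], np.arange(K)] + emissions[t]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Labels: 0 = neutral, 1 = raised eyebrows; transitions favor staying put.
emissions = np.log(np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]]))
transitions = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))
print(viterbi(emissions, transitions))  # [0, 1, 1]
```

    The transition term is what lets weak per-frame evidence (frame 2 here) be resolved by temporal context, the intuition behind using CRFs for gestural events that unfold over time.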

    Panel: The Architectural Touch: Gestural Approaches to Library Search

    This panel centers on the LibViz project, a touch- and gesture-based interface that allows users to navigate library collections using visual queries, and on the issues surrounding such efforts. The LibViz project, for which we have done initial research and constructed a prototype, aims to increase the discoverability of library materials, particularly non-textual objects, which are difficult to access via traditional search and which do not circulate. Many collections are currently preparing large-scale digitization of three-dimensional objects, and it is imperative to develop appropriate methods to work with this new kind of data; established methods do a poor job of providing access to 3D-object data. Based in theories of “grounded cognition,” the LibViz interface will be optimized for use on personal mobile devices, but it can also be used on large-format touch screens equipped with depth cameras that track user gestures. In other words, the interactive flow of LibViz allows both gestural interaction and touch commands, effectively extending the sensory modalities involved in the cognitive processing of the search results. By engaging a fuller range of human cognitive capabilities, the LibViz interface also hopes to help transform search. The amount of data generated in the digital era is growing exponentially, and so we must find novel ways of analyzing and interpreting these vast data archives. Moreover, the ways in which information is categorized and databases are created are value-laden. As such, the processes by which these structures are established should be more transparent than conventional systems currently allow. The project turns library search into a powerful and pleasurable experience, stimulating engagement with the collections and the library itself.

    Explorative Study on Asymmetric Sketch Interactions for Object Retrieval in Virtual Reality

    Drawing tools for Virtual Reality (VR) enable users to model 3D designs from within the virtual environment itself. These tools employ sketching and sculpting techniques known from desktop-based interfaces and apply them to hand-based controller interaction. While these techniques allow for mid-air sketching of basic shapes, it remains difficult for users to create detailed and comprehensive 3D models. Our work focuses on supporting the user in designing the virtual environment around them by enhancing sketch-based interfaces with a supporting system for interactive model retrieval. Through sketching, an immersed user can query a database containing detailed 3D models and place them within the virtual environment. To understand supportive sketching within a virtual environment, we made an explorative comparison between asymmetric methods of sketch interaction, i.e., 3D mid-air sketching, 2D sketching on a virtual tablet, 2D sketching on a fixed virtual whiteboard, and 2D sketching on a real tablet. Our work shows that different patterns emerge when users interact with 3D sketches rather than 2D sketches to compensate for different results from the retrieval system. In particular, users adopt different strategies when drawing on canvases of different sizes or when using a physical device instead of a virtual canvas. While we pose our work as a retrieval problem for 3D models of chairs, our results can be extrapolated to other sketching tasks for virtual environments.
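    Sketch-based retrieval of this kind typically reduces to nearest-neighbor search over feature vectors. A minimal sketch, assuming descriptors have already been extracted from the user's drawing and from the database models (the 2D vectors are toy data, not the system's actual features):

```python
import numpy as np

def retrieve(query_feature, database, k=2):
    """Indices of the k database models closest to the sketch feature."""
    dists = np.linalg.norm(database - query_feature, axis=1)
    return np.argsort(dists)[:k]

# Toy feature vectors for four chair models and one user sketch.
database = np.array([[1.0, 0.0], [0.8, 0.2], [0.0, 1.0], [0.5, 0.5]])
sketch = np.array([0.95, 0.05])
print(retrieve(sketch, database))  # indices of the two best matches
```

    The returned candidates would be presented to the immersed user, who refines the sketch until the desired model appears, the interactive loop the study examines across the four sketching methods.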

    Recognition of nonmanual markers in American Sign Language (ASL) using non-parametric adaptive 2D-3D face tracking

    This paper addresses the problem of automatically recognizing linguistically significant nonmanual expressions in American Sign Language from video. We develop a fully automatic system that is able to track facial expressions and head movements, and to detect and recognize facial events continuously from video. The main contributions of the proposed framework are the following: (1) we have built a stochastic and adaptive ensemble of face trackers to address factors resulting in lost face track; (2) we combine 2D and 3D deformable face models to warp input frames, thus correcting for any variation in facial appearance resulting from changes in 3D head pose; and (3) we use a combination of geometric features and texture features extracted from a canonical frontal representation. The proposed framework makes it possible to detect grammatically significant nonmanual expressions from continuous signing and to differentiate successfully among linguistically significant expressions that involve subtle differences in appearance. We present results based on a dataset containing 330 sentences from videos that were collected and linguistically annotated at Boston University.
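    Contribution (3), combining geometric and texture features before classification, can be illustrated with simple concatenation and a nearest-centroid decision. This is a stand-in for the paper's actual features and classifier; all names and values below are invented:

```python
import numpy as np

def combine_features(geometric, texture):
    """Concatenate geometric and texture descriptors into one vector."""
    return np.concatenate([geometric, texture])

def nearest_centroid(feature, centroids):
    """Assign the expression class whose centroid is closest."""
    dists = {label: float(np.linalg.norm(feature - c))
             for label, c in centroids.items()}
    return min(dists, key=dists.get)

# Toy class centroids for two nonmanual markers.
centroids = {
    "raised_brows": combine_features(np.array([1.0, 0.2]), np.array([0.8])),
    "neutral":      combine_features(np.array([0.1, 0.1]), np.array([0.2])),
}
obs = combine_features(np.array([0.9, 0.15]), np.array([0.7]))
print(nearest_centroid(obs, centroids))  # raised_brows
```

    Combining both feature families is what allows subtle appearance differences, which geometry alone may miss, to separate otherwise similar expressions.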