
    Real-Time Particle Simulation and Motion Capture Interaction

    This thesis presents the design and development of a program that allows artists to explore and create visual effects from the interaction between a particle system and motion-captured data. A spatial subdivision scheme was developed to ensure fast and efficient particle-mesh collision detection, allowing the user to interact with the system as it runs. Motion-captured data was used to create several animation routines, including a tango electronica dance and a tribal magician choreography. The particle system was designed to work in conjunction with these animations and to be as versatile as possible, allowing a multitude of effects to arise from the interaction of the particles with the mesh.
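    The abstract does not specify which spatial subdivision scheme was used; a common choice for fast particle-mesh collision detection is a uniform grid (spatial hash) that buckets mesh triangles by cell so each particle only tests nearby geometry. A minimal sketch, with the cell size as an assumed tuning parameter:

```python
from collections import defaultdict

CELL = 0.5  # grid cell size; an assumed tuning parameter

def cell_of(p):
    """Map a 3D point to an integer grid cell."""
    return (int(p[0] // CELL), int(p[1] // CELL), int(p[2] // CELL))

def build_grid(triangles):
    """Bucket each triangle index into every cell one of its vertices
    touches. (A robust version would also cover cells a large triangle
    spans between its vertices; omitted for brevity.)"""
    grid = defaultdict(set)
    for i, tri in enumerate(triangles):
        for v in tri:
            grid[cell_of(v)].add(i)
    return grid

def candidate_triangles(grid, particle):
    """Only triangles in the particle's cell need narrow-phase tests."""
    return grid.get(cell_of(particle), set())
```

Because each particle queries a single bucket instead of the whole mesh, the broad phase stays cheap enough to run per frame while the user interacts with the system.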

    Take the Lead: Toward a Virtual Video Dance Partner

    My work focuses on taking a single person as input and predicting the intentional movement of one dance partner based on the other partner's movement. Human pose estimation has been applied to dance and computer vision, but most existing applications focus on a single individual or on multiple individuals performing. Very few works focus specifically on dance couples combined with pose prediction. This thesis is applicable to the entertainment and gaming industries, where it could train people to dance with a virtual dance partner. Many existing interactive or virtual dance partners require a motion capture system, multiple cameras, or a robot, all of which are expensive. This thesis does not use a motion capture system; instead, it combines OpenPose with swing dance YouTube videos to create a virtual dance partner. Taking the current dancer's moves as input, the system predicts the dance partner's corresponding moves in the video frames. To create a virtual dance partner, datasets containing skeleton keypoint information are necessary for predicting a partner's pose. Existing dance datasets cover specific dance styles but not swing, and the datasets that do include swing contain only a limited number of videos. The contribution of this thesis is a large swing dataset containing three types of swing dance: East Coast, Lindy Hop, and West Coast. I also provide a basic framework to extend the work toward a real-time, interactive dance partner.
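    The abstract does not state the prediction model, but the data flow it describes — lead dancer's keypoints in, follower's keypoints out — can be sketched as a regression over flattened OpenPose skeletons. Here `W` and `b` stand in for hypothetical parameters learned from paired frames of the swing videos; the linear form is an assumption for illustration:

```python
import numpy as np

N_JOINTS = 25  # OpenPose's BODY_25 skeleton has 25 keypoints

def flatten(keypoints):
    """Flatten (N_JOINTS, 2) pixel coordinates to a feature vector."""
    return np.asarray(keypoints, dtype=float).reshape(-1)

def predict_partner(lead_kp, W, b):
    """Predict the follower's keypoints from the lead's current pose.
    W (2*N_JOINTS x 2*N_JOINTS) and b are hypothetical learned
    parameters, e.g. fit by regression on paired frames."""
    y = W @ flatten(lead_kp) + b
    return y.reshape(N_JOINTS, 2)
```

Any sequence model (e.g. one conditioned on several past frames) could replace the per-frame map; the interface from skeleton to skeleton stays the same.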

    Real-time Body Tracking and Projection Mapping in the Interactive Arts

    Projection mapping, a subtopic of augmented reality, displays computer-generated light visualizations from projectors onto the real environment. A key challenge for projection mapping in the interactive performing arts is dynamic body movement. Accuracy and speed are key components of an immersive body projection mapping application, and both depend on scanning and processing time. This thesis presents a novel technique to achieve real-time body projection mapping using a state-of-the-art body tracking device, Microsoft's Azure Kinect DK, with an array of trackers for error minimization and movement prediction. The device's Sensor and Body Tracking SDKs allow multiple devices to be synchronized. We combine the tracking results from this feature with motion prediction to provide an accurate approximation of each body joint's position. Using the new joint approximations and the depth information from the Kinect, we create a silhouette, map textures and animations onto it, and project it back onto the user. Our implementation of gesture detection provides interaction between the user and the projected images. Our approach reduced the lag introduced by the devices, code, and projector, producing realistic real-time body projection mapping. Our end goal was to display the work in an art show; this thesis was presented at Burning Man 2019 and Delfines de San Carlos 2020 as an interactive art installation.
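    The abstract does not detail its motion-prediction step; one simple way such prediction compensates capture-to-projection lag is constant-velocity extrapolation of each tracked joint. A minimal sketch of that idea, assuming per-joint 3D positions from consecutive Kinect frames:

```python
def extrapolate(prev, curr, dt, latency):
    """Predict where a joint will be `latency` seconds after `curr`,
    assuming constant velocity between the last two tracked frames
    (dt seconds apart). This is only the prediction half of the
    thesis's pipeline, sketched; the multi-device fusion is omitted."""
    return tuple(c + (c - p) / dt * latency for p, c in zip(prev, curr))
```

Projecting the silhouette at the extrapolated joint positions, rather than the last measured ones, keeps the mapped texture closer to the moving body despite system latency.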

    Integrating 3D Objects and Pose Estimation for Multimodal Video Annotations

    With recent technological advancements, video has become a focal point of many everyday activities, from presenting ideas to our peers to studying specific events or simply storing relevant video clips. Taking or making notes can be an invaluable tool in this process, helping us retain knowledge, document information, or reason about recorded content. This thesis introduces new features for a pre-existing web-based multimodal annotation tool, namely the integration of 3D components into the current system and pose estimation algorithms aimed at the moving elements in the multimedia content. The 3D developments allow the user to experience a more immersive interaction with the tool: 3D objects can be visualized against a neutral or 360º background and then used as traditional annotations. Mechanisms for successfully integrating these 3D models into the currently loaded video are explored, along with a detailed overview of the use of keypoints (pose estimation) to highlight details in the same setting. The goal of this thesis is thus the development and evaluation of these features, seeking the construction of a virtual environment in which a user can successfully work on a video by combining different types of annotations.
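    One way to read "using keypoints to highlight details" is anchoring an annotation to a pose-estimation keypoint rather than to a fixed pixel, so the note follows the person across frames. A minimal sketch of that data structure (the names and per-frame pose format are assumptions, not the tool's actual API):

```python
from dataclasses import dataclass

@dataclass
class KeypointAnnotation:
    """A note anchored to a pose-estimation keypoint instead of a
    fixed pixel position, so it tracks the moving subject."""
    joint: int  # keypoint index, e.g. a wrist in the estimator's skeleton
    text: str

def place(annotation, pose):
    """Resolve the annotation to pixel coordinates for one frame.
    `pose` maps joint index -> (x, y) from the pose estimator."""
    return pose[annotation.joint], annotation.text
```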

    Signature Movements Lead to Efficient Search for Threatening Actions

    The ability to find and evade fighting persons in a crowd is potentially life-saving. To investigate how the visual system processes threatening actions, we employed a visual search paradigm with threatening boxer targets among emotionally neutral walker distractors, and vice versa. We found that a boxer popped out for both intact and scrambled actions, whereas walkers did not. A reverse correlation analysis revealed that observers' responses clustered around the time of the "punch", a signature movement of boxing actions, but not around specific movements of the walker. These findings support the existence of a detector for signature movements in action perception. This detector helps in rapidly detecting aggressive behavior in a crowd, potentially through an expedited (sub)cortical threat-detection mechanism.

    Indexing of fictional video content for event detection and summarisation

    This paper presents an approach to movie video indexing that uses audiovisual analysis to detect important and meaningful temporal video segments, which we term events. We consider three event classes, corresponding to dialogues, action sequences, and montages, where the latter also includes musical sequences. These three event classes are intuitive for a viewer to understand and recognise, while accounting for over 90% of the content of most movies. To detect events, we leverage traditional filmmaking principles and map them to a set of computable low-level audiovisual features. Finite state machines (FSMs) are used to detect when temporal sequences of specific features occur. A set of heuristics, again inspired by filmmaking conventions, is then applied to the output of multiple FSMs to detect the required events. A movie search system named MovieBrowser, built upon this approach, is also described. The overall approach is evaluated against a ground truth of over twenty-three hours of movie content drawn from various genres and consistently obtains high precision and recall for all event classes. A user experiment designed to evaluate the usefulness of an event-based structure for both searching and browsing movie archives is also described, and the results indicate the usefulness of the proposed approach.
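    The FSM idea the paper describes — advancing a state each time the expected low-level feature appears, and firing a detection when the sequence completes — can be sketched as follows. The feature labels here are illustrative, not the paper's actual feature set, and the restart-on-mismatch logic is deliberately simplified:

```python
def detect(sequence, pattern):
    """Minimal FSM over a stream of feature labels: advance one state
    per matching label and count each completion of `pattern`.
    On a mismatch, restart (re-entering state 1 if the label matches
    the pattern's first symbol)."""
    state, hits = 0, 0
    for label in sequence:
        if label == pattern[state]:
            state += 1
            if state == len(pattern):
                hits += 1
                state = 0
        else:
            state = 1 if label == pattern[0] else 0
    return hits
```

In the paper's setting, several such machines run in parallel over different feature sequences, and heuristics combine their outputs into dialogue, action, or montage events.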

    MotionDesigner: a tool for creating interactive performances using RGB-D cameras

    Over the last two decades, the use of technology in art projects has proliferated, as in the case of movement-based interactive projections used in art performances and installations. However, the artists responsible for creating such works typically have to rely on computer experts to implement these interactive systems. The tool presented here, MotionDesigner, is intended to support the artistic creator in designing such systems, giving them autonomy and efficiency during the creative process of their own works. The proposed tool has a design oriented toward these users, so as to stimulate and streamline the autonomous creation of this kind of work, and it is extensible, in that more content can be added in the future. The developed software was tested with dancers, choreographers, and architects, proving to be an aid to and a catalyst of the creative process.