
    Pulsed Melodic Affective Processing: Musical structures for increasing transparency in emotional computation

    Pulsed Melodic Affective Processing (PMAP) is a method for the processing of artificial emotions in affective computing. PMAP is a data stream designed to be listened to, as well as computed with. The affective state is represented by numbers that are analogues of musical features, rather than by a binary stream. Previous affective computation has been done with emotion category indices, or with real numbers representing various emotional dimensions. PMAP data can be generated directly from simple measurements (e.g. heart rates or key-press speeds) and turned directly into music with minimal transformation. This is because PMAP data is music, and computations done with PMAP data are computations done with music. This is important because PMAP is constructed so that the emotion its data represents at the computational level will be similar to the emotion that a person “listening” to the PMAP melody hears. Thus, PMAP can be used to calculate “feelings”, and the resulting data will “sound like” the feelings calculated. PMAP can be compared to neural spike streams, but ones in which pulse heights and rates encode affective information. This paper illustrates PMAP in a range of simulations. In a multi-agent simulation, initial results suggest that an affective multi-robot security system could use PMAP to provide a basic control mechanism for “search-and-destroy”. Results of fitting a musical neural network by gradient descent to a text-based emotion detection problem are also presented. The paper concludes by discussing how PMAP may be applicable in the stock markets, using a simplified order book simulation. © 2014, The Society for Modeling and Simulation International. All rights reserved.
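
    The core idea, that pulse rate and pulse height in a spike-like stream can carry affective information, can be sketched in a few lines. The Python toy below is not the paper's actual PMAP specification; the rate-for-arousal and height-for-valence mappings and their numeric ranges are illustrative assumptions. It simply shows an affective state being encoded into a pulse stream and recovered from it.

```python
# Toy pulse-stream encoding in the spirit of PMAP (not the paper's spec).
# Assumptions: pulse rate encodes arousal, pulse height encodes valence.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Pulse:
    time: float    # onset time in seconds
    height: float  # pulse height in [0, 1], standing in for valence

def encode_affect(valence: float, arousal: float, duration: float = 2.0) -> List[Pulse]:
    """Encode a (valence, arousal) pair as a stream of pulses."""
    rate = 2.0 + 8.0 * arousal          # pulses per second (assumed range)
    n = max(1, int(rate * duration))
    return [Pulse(time=i / rate, height=valence) for i in range(n)]

def decode_affect(stream: List[Pulse]) -> Tuple[float, float]:
    """Recover an approximate (valence, arousal) pair from a pulse stream."""
    if len(stream) < 2:
        return (stream[0].height if stream else 0.0, 0.0)
    span = stream[-1].time - stream[0].time
    rate = (len(stream) - 1) / span
    arousal = (rate - 2.0) / 8.0
    valence = sum(p.height for p in stream) / len(stream)
    return (valence, arousal)

happy = encode_affect(valence=0.9, arousal=0.8)
print(decode_affect(happy))  # approximately (0.9, 0.8)
```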

    Reactive Statistical Mapping: Towards the Sketching of Performative Control with Data

    This paper presents the results of our participation in the ninth eNTERFACE workshop on multimodal user interfaces. Our target for this workshop was to bring some technologies currently used in speech recognition and synthesis to a new level, i.e. to make them the core of a new HMM-based mapping system. The idea of statistical mapping has been investigated, more precisely how to use Gaussian Mixture Models and Hidden Markov Models for realtime, reactive generation of new trajectories from inputted labels and for realtime regression in a continuous-to-continuous use case. As a result, we have developed several proofs of concept, including an incremental speech synthesiser, software for exploring stylistic spaces for gait and facial motion in realtime, a reactive audiovisual laughter synthesiser, and a prototype demonstrating the realtime reconstruction of lower-body gait motion strictly from upper-body motion, with conservation of the stylistic properties. This project has also been an opportunity to formalise HMM-based mapping, integrate several of these innovations into the Mage library, and explore the development of a realtime gesture recognition tool.
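
    The continuous-to-continuous mapping mentioned above lends itself to a compact illustration. The sketch below shows generic Gaussian-mixture regression with scikit-learn and SciPy: a joint GMM is fitted over stacked input/output vectors and then conditioned on a query input. It is a minimal stand-in for the idea, not the Mage library or the workshop's implementation; the toy data and component count are assumptions.

```python
# Minimal Gaussian-mixture regression (GMR): fit a GMM on the joint
# [input, output] space, then condition on the input to predict the output.

import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(x, y, n_components=4, seed=0):
    """Fit a full-covariance GMM on the joint [input, output] space."""
    joint = np.hstack([x, y])
    return GaussianMixture(n_components=n_components,
                           covariance_type="full",
                           random_state=seed).fit(joint)

def gmr_predict(gmm, x_query, dim_x):
    """Condition the joint GMM on x_query and return the expected output."""
    y_pred = np.zeros(gmm.means_.shape[1] - dim_x)
    weights = np.zeros(gmm.n_components)
    conditionals = []
    for k in range(gmm.n_components):
        mu_x = gmm.means_[k, :dim_x]
        mu_y = gmm.means_[k, dim_x:]
        sxx = gmm.covariances_[k, :dim_x, :dim_x]
        syx = gmm.covariances_[k, dim_x:, :dim_x]
        weights[k] = gmm.weights_[k] * multivariate_normal.pdf(x_query, mu_x, sxx)
        conditionals.append(mu_y + syx @ np.linalg.solve(sxx, x_query - mu_x))
    weights /= weights.sum()
    for k, cond in enumerate(conditionals):
        y_pred += weights[k] * cond
    return y_pred

# Toy example: learn y = sin(x) as a continuous-to-continuous mapping.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(x) + 0.05 * rng.standard_normal((500, 1))
gmm = fit_joint_gmm(x, y, n_components=6)
print(gmr_predict(gmm, np.array([1.0]), dim_x=1))  # close to sin(1.0) ~ 0.84
```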

    Elucidating musical structure through empirical measurement of performance parameters

    The differences between a musical score and an instance of that music in performance communicate a performer’s view of the information contained in that score. The main hypothesis in this thesis is that by measuring quantifiable parameters such as tempo, dynamics and motion from live performance, the performer’s interpretation of musical structure can be detected. This will be tested on pieces for which the structure is explicit and obvious, and then used to discover musical structure by looking at patterns of aural and visual performance parameters in performances of more ambiguously structured pieces. This thesis is in two strands. The first part covers the acquisition of multi-modal parameters in piano performance. It explores current technologies for acquiring MIDI information, such as accurate onset timings and key velocities, as well as motion tracking systems for measuring general body movements. A new cheap, portable and accurate system for tracking the intricacies of pianists’ finger movement is described, as well as methods and tools available for analysis and visualisation of musical data. The second strand of this thesis explores uses of these capture systems in empirically measuring performance parameters to elucidate musical structure. Two experiments follow which test the hypothesis of detecting musical structure from parameters such as tempo, dynamics and movement, before using these patterns as a basis for discovering structure in performances of the finale of Chopin’s B flat minor sonata. Body movement is identified as an indicator of phrasing boundaries, which, when combined with the measured aural parameters, provides interpretations of the performed music. Phrasing boundaries are identified correctly for the control piece (Chopin’s Prelude in A major Op. 28, No. 7) and consequently for the first test piece (Chopin’s Prelude in B minor Op. 28, No. 6). The subsequent experiment identifies performers’ styles of phrase ending through performances of the control piece and tests them against patterns found in the second test piece (the finale of Chopin’s B flat minor Sonata). Five out of the six performers confirm the musicological hypothesis that bar 5 is not the entry of a new theme but the continuation of the theme beginning in bar 1.
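
    One of the simplest aural cues this kind of analysis draws on, slowing down at phrase ends, can be illustrated with a short script. The sketch below flags beats whose inter-onset interval is markedly longer than its local neighbourhood; the window size and slowdown threshold are illustrative assumptions, not the thesis's actual analysis parameters.

```python
# Flag candidate phrase boundaries from note onset times by detecting
# phrase-final lengthening (a locally long inter-onset interval).

import numpy as np

def candidate_phrase_boundaries(onset_times, window=4, slowdown=1.25):
    """Return beat indices where the local inter-onset interval (IOI)
    exceeds `slowdown` times the median IOI of the surrounding window."""
    iois = np.diff(np.asarray(onset_times, dtype=float))
    boundaries = []
    for i, ioi in enumerate(iois):
        lo, hi = max(0, i - window), min(len(iois), i + window + 1)
        local_median = np.median(iois[lo:hi])
        if ioi > slowdown * local_median:
            boundaries.append(i + 1)  # the boundary falls after this beat
    return boundaries

# Toy example: steady 0.5 s beats with a marked ritardando before beat 8.
onsets = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.3, 4.8, 5.3]
print(candidate_phrase_boundaries(onsets))  # -> [8]
```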

    Machine Vision: How Algorithms are Changing the Way We See the World

    Humans have used technology to expand our limited vision for millennia, from the invention of the stone mirror 8,000 years ago to the latest developments in facial recognition and augmented reality. We imagine that technologies will allow us to see more, to see differently and even to see everything. But each of these new ways of seeing carries its own blind spots. In this illuminating book, Jill Walker Rettberg examines the long history of machine vision. Providing an overview of the historical and contemporary uses of machine vision, she unpacks how technologies such as smart surveillance cameras and TikTok filters are changing the way we see the world and one another. By analysing fictional and real-world examples, including art, video games and science fiction, the book shows how machine vision can have very different cultural impacts, fostering sympathy and community as well as anxiety and fear. Combining ethnographic and critical media studies approaches with personal reflections, Machine Vision is an engaging and eye-opening read. It is suitable for students and scholars of digital media studies, science and technology studies, visual studies, digital art and science fiction, as well as for general readers interested in the impact of new technologies on society.

    Proceedings of the Second Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media


    Extending the Predictive Capabilities of Hand-oriented Behavioural Biometric Systems

    The discipline of biometrics may be broadly defined as the study of using metrics related to human characteristics as a basis for individual identification and authentication, and many approaches have been implemented in recent years for many different scenarios. A sub-section of biometrics, known as soft biometrics, has also been developing rapidly; it focuses on the additional use of information which is characteristic of a user but not unique to one person, examples including subject age or gender. Beyond its established value in identification and authentication tasks, such user information can also be predicted within soft biometrics modalities. Furthermore, some recent investigations have demonstrated a demand for utilising these biometric modalities to extract even higher-level user information, such as a subject's mental or emotional state. The study reported in this thesis focuses on two soft biometrics modalities, namely keystroke dynamics and handwriting biometrics (both examples of hand-based biometrics, but with differing characteristics). The study primarily investigates the extent to which these modalities can be used to predict human emotions. A rigorously designed data capture protocol is described and a large and entirely new database is thereby collected, significantly expanding the scale of the databases available for this type of study compared to those reported in the literature. A systematic study of the predictive performance achievable using the data acquired is presented. The core analysis of this study, which further explores the predictive capability of both handwriting and keystroke data, confirms that both modalities are capable of predicting higher-level mental states of individuals. The study also presents detailed experiments investigating key issues (such as the amount of data available, the availability of different feature types, and the way ground-truth labelling is established) which can enhance the robustness of this higher-level state prediction technique.
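
    As a concrete illustration of what a keystroke-dynamics pipeline of this kind consumes, the sketch below extracts standard dwell-time and flight-time statistics from key press/release events and feeds them to an off-the-shelf classifier. The feature set, labels, toy data and classifier choice are assumptions for illustration, not the thesis's capture protocol or models.

```python
# Toy keystroke-dynamics pipeline: timing features -> emotion-state classifier.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def keystroke_features(events):
    """events: list of (key, press_time, release_time) tuples in seconds.
    Returns summary statistics of dwell and flight times."""
    dwell = np.array([release - press for _, press, release in events])
    flight = np.array([events[i + 1][1] - events[i][2]
                       for i in range(len(events) - 1)])
    return np.array([dwell.mean(), dwell.std(), flight.mean(), flight.std()])

# Two synthetic typing samples labelled with a (hypothetical) mental state.
calm = [("h", 0.00, 0.12), ("i", 0.30, 0.41), ("!", 0.62, 0.74)]
tense = [("h", 0.00, 0.06), ("i", 0.11, 0.16), ("!", 0.21, 0.27)]
X = np.array([keystroke_features(calm), keystroke_features(tense)])
y = np.array(["calm", "tense"])

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print(clf.predict([keystroke_features(calm)]))  # -> ['calm']
```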

    Engineering a Better Future

    This open access book examines how the social sciences can be integrated into the praxis of engineering and science, presenting unique perspectives on the interplay between engineering and social science. Motivated by the report of the Commission on the Humanities and Social Sciences of the American Academy of Arts and Sciences, which emphasizes the importance of the social sciences and humanities in technical fields, the essays and papers collected in this book were presented at the NSF-funded workshop ‘Engineering a Better Future: Interplay between Engineering, Social Sciences and Innovation’, which brought together a singular collection of people, topics and disciplines. The book is split into three parts: A. Meeting at the Middle: Challenges to educating at the boundaries, which covers experiments in combining engineering education and the social sciences; B. Engineers Shaping Human Affairs, which investigates the interaction between social sciences and engineering, including the cult of innovation, the politics of engineering, engineering design and the future of societies; and C. Engineering the Engineers, which investigates thinking about design with papers on the art and science of science and engineering practice.

    A Rigging Convention for Isosurface-Based Characters

    This thesis presents a prototype system for generating animation control systems for isosurface-based characters that blurs the distinction between a skeletal rig and a particle system. Managing articulation and deformation set-up can be challenging for amorphous characters whose surface shape is defined at render time and can only be viewed as an approximation while an animation performance is being defined. The prototype system combines conventional scripted techniques for defining animation control systems with a graphical user interface that provides art-directable control over surface contour, shape and silhouette for isosurface-based characters. Once animated, these characters can be rendered using RenderMan's RiBlobby implementation and provide visual feedback for fluid motion tests. The prototype system fits naturally within common practices in digital character setup and provides the animator with control over isosurface-based characters.
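
    The underlying surface model can be sketched compactly: rig-driven particles contribute smooth falloff kernels to a scalar field, and the character's skin is the level set where that field crosses a threshold. The sketch below uses a generic metaball-style kernel and threshold as illustrative assumptions; it is not RenderMan's RiBlobby primitive or the thesis's rigging convention.

```python
# Generic "blobby" isosurface test: particles define a scalar field and the
# surface is the level set where the summed field crosses a threshold.

import numpy as np

def blobby_field(point, particles, radius=1.0):
    """Sum of smooth falloff kernels centred on the rig-driven particles."""
    total = 0.0
    r2 = radius * radius
    for centre in particles:
        d2 = np.sum((np.asarray(point) - np.asarray(centre)) ** 2)
        if d2 < r2:
            # Soft-object falloff: 1 at the particle centre, 0 at the radius.
            total += (1.0 - d2 / r2) ** 3
    return total

def inside_surface(point, particles, threshold=0.5, radius=1.0):
    """True if the point lies inside the isosurface."""
    return blobby_field(point, particles, radius) >= threshold

# Toy example: two overlapping particles blend into a single blob.
particles = [(0.0, 0.0, 0.0), (0.8, 0.0, 0.0)]
print(inside_surface((0.4, 0.0, 0.0), particles))  # True, midway between them
print(inside_surface((2.0, 0.0, 0.0), particles))  # False, outside the field
```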