
    On the simulation of interactive non-verbal behaviour in virtual humans

    Development of virtual humans has focused mainly on two broad areas: conversational agents and computer game characters. Computer game characters have traditionally been action-oriented, focused on the game-play, while conversational agents have focused on sensible, intelligent conversation. Although virtual humans have incorporated some form of non-verbal behaviour, it has been quite limited and, more importantly, either unconnected or only loosely connected to the behaviour of the real human interacting with the virtual human, owing to a lack of sensor data and of any system to respond to that data. The interactional aspect of non-verbal behaviour is highly important in human-human interaction, and previous research has demonstrated that people treat media (and therefore virtual humans) as real people, so interactive non-verbal behaviour is also important in the development of virtual humans. This paper presents the challenges in creating virtual humans that are non-verbally interactive and, drawing corollaries with the development history of control systems in robotics, presents approaches to solving these challenges, specifically behaviour-based systems. It shows how an order-of-magnitude improvement in the response time of virtual humans in conversation can be obtained, and that the development of rapidly responding non-verbal behaviours can start with just a few behaviours, with more added without difficulty later in development.
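    The behaviour-based approach the abstract alludes to can be sketched as a priority arbitration loop over independently triggered behaviours. The sketch below is a hypothetical illustration, not the paper's actual system: the behaviour names, sensor fields, and priorities are all invented for the example.

```python
# Minimal sketch of behaviour-based arbitration for non-verbal responses.
# Behaviour names, sensor fields, and priorities are hypothetical.

class Behaviour:
    def __init__(self, name, trigger, priority):
        self.name = name          # label for the produced action
        self.trigger = trigger    # predicate over the sensor snapshot
        self.priority = priority  # higher value wins arbitration

    def active(self, sensors):
        return self.trigger(sensors)

def arbitrate(behaviours, sensors):
    """Return the highest-priority behaviour whose trigger fires."""
    active = [b for b in behaviours if b.active(sensors)]
    return max(active, key=lambda b: b.priority, default=None)

behaviours = [
    Behaviour("idle_blink", lambda s: True, priority=0),
    Behaviour("gaze_at_user", lambda s: s.get("face_detected"), priority=1),
    Behaviour("backchannel_nod", lambda s: s.get("user_pausing"), priority=2),
]

# New behaviours can be appended later without touching existing ones,
# which matches the abstract's incremental-development claim.
chosen = arbitrate(behaviours, {"face_detected": True, "user_pausing": False})
print(chosen.name)  # gaze_at_user
```

Because each behaviour reacts directly to sensor data rather than waiting on a deliberative pipeline, responses can be issued as soon as a trigger fires.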

    Studies on user control in ambient intelligent systems

    People have a deeply rooted need to experience control and to be effective in interactions with their environment. Today we are surrounded by intelligent systems that take decisions and perform actions for us. This should make life easier, but there is a risk that users experience less control and reject the system. The central question in this thesis is whether we can design intelligent systems that have a degree of autonomy while users maintain a sense of control. We try to achieve this by giving the intelligent system an 'expressive interface': the part that provides information to the user about the internal state, intentions, and actions of the system. We examine this question in both the home and the work environment. We find the notion of a 'system personality' useful as a guiding principle for designing interactions with intelligent systems, for domestic robots as well as in building automation. Although the desired system personality varies per application, in both domains a recognizable system personality can be designed through expressive interfaces using motion, light, sound, and social cues. The various studies show that the level of automation and the expressive interface can influence the perceived system personality, the perceived level of control, and the user's satisfaction with the system. This thesis shows the potential of the expressive interface as an instrument to help users understand what is going on inside the system and to experience control, which might be essential for the successful adoption of the intelligent systems of the future.

    Contextualizing musical organics: an ad-hoc organological classification approach

    As a research field, NIME is characterised by a plethora of design approaches, hardware, and software technologies. Formed around an interdisciplinary research community with divergent end-goals, the field presents a striking diversity of aims, objectives, methods, and outcomes. Ranging from expressive interfaces, to musicological concerns, novel sensor technologies, and artificial creativity, the research presented is heterogeneous, distinct, and original. The design of digital instruments is very different from the making of acoustic instruments, due to the bespoke traditions and production environments of the disciplines mentioned above, but notably also because of the heightened epistemic dimension inscribed in the materiality of digital systems. These new materialities are often hardware and software technologies manufactured for purposes other than music. Without having to support established traditions and relationships between the instrument maker and the performer or composer, new digital musical instruments often develop at the speed of the computer's technical culture, as opposed to the slower evolution of more culturally engrained acoustic instrument design.

    A pseudo-medium-wide 8-competitive interface for two-level compositional real-time scheduling of constrained-deadline sporadic tasks on a uniprocessor

    Compositional real-time scheduling clearly requires that "normal" real-time scheduling challenges are addressed, but challenges intrinsic to compositionality must be addressed as well, in particular: (i) how should interfaces be described? and (ii) how should numerical values be assigned to the parameters constituting the interfaces? The real-time systems community has traditionally used narrow interfaces for describing a component (for example, a utilization/bandwidth-like metric and the distribution of this bandwidth in time). In this paper, we introduce the concept of the competitive ratio of an interface and show that typical narrow interfaces cause poor performance for scheduling constrained-deadline sporadic tasks (their competitive ratio is infinite). Therefore, we explore more expressive interfaces, in particular a class called medium-wide interfaces. For this class, we propose an interface type and show how the parameters of the interface should be selected. We also prove that this interface is 8-competitive.
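    Why a narrow, utilization-only interface fails for constrained deadlines can be shown with the standard demand bound function for sporadic tasks. The task parameters below are an illustrative example, not taken from the paper.

```python
# Sketch: a utilization-only ("narrow") interface can be arbitrarily
# pessimistic for constrained-deadline sporadic tasks. The task numbers
# here are illustrative, not from the paper.

def dbf(task, t):
    """Demand bound function of a sporadic task (C, D, T) over an interval t."""
    C, D, T = task
    if t < D:
        return 0
    return ((t - D) // T + 1) * C

# One task with C=1, D=1, T=100: utilization is 0.01, yet it needs a
# full unit of service inside every 1-unit window after each release.
task = (1, 1, 100)
utilization = task[0] / task[2]

print(utilization)       # bandwidth a utilization-only interface reserves: 0.01
print(dbf(task, 1) / 1)  # bandwidth actually needed at t = D: 1.0
```

Shrinking D relative to T makes the gap between the two numbers arbitrarily large, which is the intuition behind the narrow interface's infinite competitive ratio.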

    A landscape of design: interaction, interpretation and the development of experimental expressive interfaces

    This paper presents the initial research insights of an ongoing research project that focuses upon understanding the role of landscape, its use as a resource for designing interfaces for musical expression, and as a tool for leveraging ethnographic understandings about space, place, design, and musical expression. We briefly discuss the emerging research and reasoning behind our approach, the site that we are focusing on, our participatory methodology, and conceptual designs. This innovative research is envisaged as something that can engage and interest the conference participants, encourage debate, and act as an exploratory platform, which will in turn inform our research, practice, and design.

    Tutorial: the power of software defined radio in wireless innovation


    Making music through real-time voice timbre analysis: machine learning and timbral control

    People can achieve rich musical expression through vocal sound; see, for example, human beatboxing, which achieves a wide timbral variety through a range of extended techniques. Yet the vocal modality is under-exploited as a controller for music systems. If we can analyse a vocal performance suitably in real time, then this information could be used to create voice-based interfaces with the potential for intuitive and fulfilling levels of expressive control. Conversely, many modern techniques for music synthesis do not imply any particular interface. Should a given parameter be controlled via a MIDI keyboard, or a slider/fader, or a rotary dial? Automatic vocal analysis could provide a fruitful basis for expressive interfaces to such electronic musical instruments. The principal questions in applying vocal-based control are how to extract musically meaningful information from the voice signal in real time, and how to convert that information suitably into control data. In this thesis we address these questions, with a focus on timbral control; in particular, we develop approaches that can be used with a wide variety of musical instruments by applying machine learning techniques to automatically derive the mappings between expressive audio input and control output. The vocal audio signal is construed to include a broad range of expression, in particular encompassing the extended techniques used in human beatboxing. The central contribution of this work is the application of supervised and unsupervised machine learning techniques to automatically map vocal timbre to synthesiser timbre and controls. Component contributions include a delayed decision-making strategy for low-latency sound classification, a regression-tree method to learn associations between regions of two unlabelled datasets, a fast estimator of multidimensional differential entropy, and a qualitative method for evaluating musical interfaces based on discourse analysis.
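    The core idea of learning a mapping from paired examples of vocal timbre and synthesiser settings can be sketched very simply. Below, a 1-nearest-neighbour lookup stands in for the thesis's regression-tree and other machine learning methods; the feature names and synthesiser parameters are hypothetical.

```python
# Sketch of supervised timbre mapping: learn voice-feature -> synth-parameter
# associations from paired examples. A 1-nearest-neighbour mapper stands in
# for the thesis's actual ML methods; all names here are hypothetical.

def nearest_neighbour_map(training_pairs, voice_features):
    """training_pairs: list of (voice_feature_vector, synth_param_vector)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, params = min(training_pairs, key=lambda p: dist(p[0], voice_features))
    return params

# Paired examples: (brightness, noisiness) -> (filter_cutoff_hz, resonance)
pairs = [
    ((0.1, 0.9), (200.0, 0.8)),   # dark, noisy voice -> low cutoff
    ((0.9, 0.1), (5000.0, 0.2)),  # bright, clean voice -> high cutoff
]

# A bright, fairly clean vocal frame maps to the bright-voice preset.
print(nearest_neighbour_map(pairs, (0.8, 0.2)))  # (5000.0, 0.2)
```

In a real-time system the same lookup would run once per analysis frame, so keeping the mapping cheap to evaluate matters as much as its accuracy.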

    Real-time expressive internet communications

    This research work, "Real-time Expressive Internet Communications", focuses on two subjects: one is the investigation of methods for automatic emotion detection and visualisation in a real-time Internet communication environment; the other is the analysis of the influence of presenting visualised, emotion-expressive images to Internet users. To detect emotion within Internet communication, the emotion communication process over the Internet needs to be examined. An emotion momentum theory was developed to illustrate this process. The theory argues that an Internet user is in a certain emotion state; that the emotion state is changeable by internal and external stimuli (e.g. a received chat message) and by time; and that stimulus duration and stimulus intensity are the major factors influencing the emotion state. The emotion momentum theory divides the emotions expressed in Internet communication into three dimensions: emotion category, intensity, and duration. The theory was implemented within a prototype emotion extraction engine, which can analyse input text in an Internet chat environment, detect and extract the emotion being communicated, and deliver the parameters to invoke an appropriate expressive image on every communicating user's display. A set of experiments was carried out to test the speed and accuracy of the emotion extraction engine, and the results demonstrated acceptable performance. The next step of this study was to design and implement an expressive image generator that generates expressive images from a single neutral facial image. Generated facial images are classified into six categories, and for each category three different intensities were achieved. Users need to define only six control points and three control shapes to synthesise all the expressive images, and a set of experiments was carried out to test the quality of the synthesised images. The results demonstrated an acceptable recognition rate for the generated facial expression images. With the emotion extraction engine and the expressive image generator, a test platform was created to evaluate the influence of emotion visualisation in the Internet communication context. The results of a series of experiments demonstrated that emotion visualisation can enhance users' perceived performance and their satisfaction with the interfaces. The contributions to knowledge fall into four main areas: firstly, the emotion momentum theory, proposed to illustrate the emotion communication process over the Internet; secondly, the innovations built into the emotion extraction engine, which senses emotional feelings from textual messages input by Internet users; thirdly, the innovations built into the expressive image generator, which synthesises facial expressions using a fast approach with a user-friendly interface; and fourthly, the identification of the influence that the visualisation of emotion has on human-computer interaction.
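    The emotion momentum idea, a state described by category, intensity, and duration, driven up by stimuli and decayed by time, can be sketched as below. The keyword lexicon and the decay rate are invented for illustration and are not the thesis's actual engine.

```python
# Sketch of the "emotion momentum" idea: an emotion state with a category
# and an intensity, raised by textual stimuli and decayed over time.
# The lexicon and decay rate here are hypothetical illustrations.

LEXICON = {"great": ("happy", 0.6), "awful": ("sad", 0.7)}
DECAY_PER_SECOND = 0.1

class EmotionState:
    def __init__(self):
        self.category = "neutral"
        self.intensity = 0.0

    def stimulus(self, message):
        """Update the state from an incoming chat message."""
        for word in message.lower().split():
            if word in LEXICON:
                category, strength = LEXICON[word]
                self.category = category
                self.intensity = min(1.0, self.intensity + strength)

    def decay(self, seconds):
        """Intensity fades with time; the state returns to neutral."""
        self.intensity = max(0.0, self.intensity - DECAY_PER_SECOND * seconds)
        if self.intensity == 0.0:
            self.category = "neutral"

state = EmotionState()
state.stimulus("that is great news")
print(state.category, state.intensity)  # happy 0.6
state.decay(6)                          # six seconds of silence
print(state.category, state.intensity)  # neutral 0.0
```

The detected (category, intensity) pair is what would then select one of the generator's six expression categories at one of its three intensity levels.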