
    A Comprehensive Review of Data-Driven Co-Speech Gesture Generation

    Gestures that accompany speech are an essential part of natural and efficient embodied human communication. The automatic generation of such co-speech gestures is a long-standing problem in computer animation and is considered an enabling technology in film, games, virtual social spaces, and for interaction with social robots. The problem is made challenging by the idiosyncratic and non-periodic nature of human co-speech gesture motion, and by the great diversity of communicative functions that gestures encompass. Gesture generation has seen surging interest recently, owing to the emergence of more and larger datasets of human gesture motion, combined with strides in deep-learning-based generative models that benefit from the growing availability of data. This review article summarizes co-speech gesture generation research, with a particular focus on deep generative models. First, we articulate the theory describing human gesticulation and how it complements speech. Next, we briefly discuss rule-based and classical statistical gesture synthesis, before delving into deep learning approaches. We employ the choice of input modalities as an organizing principle, examining systems that generate gestures from audio, text, and non-linguistic input. We also chronicle the evolution of the related training datasets in terms of size, diversity, motion quality, and collection method. Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech in interaction with other speakers, and in the environment; performing gesture evaluation; and integration of gesture synthesis into applications. We highlight recent approaches to tackling the various key challenges, as well as the limitations of these approaches, and point toward areas of future development. Comment: Accepted for EUROGRAPHICS 202
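    As a purely illustrative sketch, and not a model from the review, the snippet below shows the skeletal form of an audio-driven gesture generator of the kind the review surveys: a small recurrent network that regresses per-frame joint rotations from per-frame audio features. The class name, feature dimensions, and the choice of PyTorch are assumptions made here for illustration.

    # Illustrative only: a minimal audio-to-gesture regression baseline, not a
    # specific system from the review. All dimensions are arbitrary assumptions.
    import torch
    import torch.nn as nn

    class AudioToGesture(nn.Module):
        def __init__(self, n_audio_feats=80, n_joints=15, hidden=256):
            super().__init__()
            # Encode per-frame audio features (e.g., 80 mel bands) with a GRU.
            self.encoder = nn.GRU(n_audio_feats, hidden, num_layers=2, batch_first=True)
            # Regress per-frame joint rotations (3 angles per joint).
            self.decoder = nn.Linear(hidden, n_joints * 3)

        def forward(self, audio_feats):
            # audio_feats: (batch, frames, n_audio_feats)
            h, _ = self.encoder(audio_feats)
            return self.decoder(h)  # (batch, frames, n_joints * 3)

    # Example: a 2-second clip at 30 fps with 80-dim audio features per frame.
    model = AudioToGesture()
    poses = model(torch.randn(1, 60, 80))
    print(poses.shape)  # torch.Size([1, 60, 45])

    Deterministic regression of this kind tends to produce averaged, damped motion, which is one reason the review focuses on deep generative models that can sample diverse, human-like gestures rather than a single mean prediction.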

    Error Correction For Automotive Telematics Systems

    One benefit of data communication over the voice channel of the cellular network is the reliable transmission of real-time, high-priority data in life-critical situations. An important implementation of this use case is the pan-European eCall automotive standard, which has been deployed since 2018. This is the first international standard for mobile emergency calls adopted by multiple regions in Europe and elsewhere in the world. Other countries, such as Russia and China, are currently working on deploying similar emergency communication systems. Moreover, many experiments and road tests are conducted yearly to validate and improve the requirements of the system. The results show that the requirements are not yet met, with an emergency data delivery success rate of only 70%. The eCall in-band modem transmits emergency information from the in-vehicle system (IVS) over the voice channel of the circuit-switched real-time communication system to the public safety answering point (PSAP) in case of a collision. The voice channel is characterized by a non-linear vocoder designed to compress speech waveforms. In addition, multipath fading, caused by surrounding buildings and hills, results in severe signal distortion and delays the transmission of the emergency information. Therefore, to transmit data reliably over the voice channel, the in-band modem modulates the data into speech-like (SL) waveforms and employs a powerful forward error correcting (FEC) code to secure the real-time transmission. In this dissertation, the Turbo-coded performance of the eCall in-band modem is first evaluated over the additive white Gaussian noise (AWGN) channel and the adaptive multi-rate (AMR) voice channel. The modulation used is biorthogonal pulse position modulation (BPPM). Simulations are conducted for both the fast and the robust eCall modem. The results show that the distortion added by the vocoder is significant and degrades system performance. In addition, the robust modem performs better than the fast modem: to achieve a bit error rate (BER) of 10^{-6} at the AMR compression rate of 7.4 kbps, the required signal-to-noise ratio (SNR) is 5.5 dB for the robust modem versus 7.5 dB for the fast modem. The fading effect in the eCall channel is then studied. It is shown that the fading does not follow a Rayleigh distribution. The performance of the in-band modem is evaluated over the combined AWGN, AMR, and fading channel, and the results are compared with a Rayleigh fading channel. The analysis shows that strong fading persists in the voice channel even after power control, which explains the large delays and the failures of emergency data transmission to the PSAP. Thus, the eCall standard needs to re-evaluate its requirements to account for the impact of fading on the transmission of the modulated signals. The results can be directly applied to the design of real-time emergency communication systems, including their modulation and coding.
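    For orientation, the snippet below sketches the kind of BER-versus-SNR Monte-Carlo simulation behind such results, using plain uncoded BPSK over an AWGN channel. It deliberately omits the eCall specifics (BPPM, Turbo coding, the AMR vocoder, fading), so the numbers it prints are not comparable to the dissertation's figures; the function name and parameters are assumptions for this sketch.

    # Illustrative Monte-Carlo BER estimate for uncoded BPSK over AWGN.
    # Not the eCall in-band modem (no BPPM, no Turbo code, no AMR vocoder);
    # it only sketches the BER-vs-SNR style of simulation reported above.
    import numpy as np

    def ber_bpsk_awgn(ebn0_db, n_bits=1_000_000, seed=0):
        rng = np.random.default_rng(seed)
        bits = rng.integers(0, 2, n_bits)
        symbols = 2 * bits - 1                   # map {0,1} -> {-1,+1}, unit energy
        ebn0 = 10 ** (ebn0_db / 10)
        noise_std = np.sqrt(1 / (2 * ebn0))      # noise variance N0/2 per dimension
        received = symbols + noise_std * rng.standard_normal(n_bits)
        return np.mean((received > 0).astype(int) != bits)

    for ebn0_db in (0, 2, 4, 6, 8):
        print(f"Eb/N0 = {ebn0_db} dB -> BER ~ {ber_bpsk_awgn(ebn0_db):.2e}")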

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological developments and poses challenges in a wide variety of applications across science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, favoring closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theory, methods, and applications have matured rapidly and now draw on tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily at students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be grouped into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages, in that order. The chapters are completely independent and self-contained, so the interested reader can choose any chapter and skip to another without losing continuity.

    Speech spectrum non-stationarity detection based on line spectrum frequencies and related applications

    Ankara: Department of Electrical and Electronics Engineering and the Institute of Engineering and Sciences of Bilkent University, 1998. Thesis (Master's) -- Bilkent University, 1998. Includes bibliographical references (leaves 124-132). In this thesis, two new speech variation measures for speech spectrum non-stationarity detection are proposed. These measures are based on the Line Spectrum Frequencies (LSF) and the spectral values at the LSF locations. They are formulated to be subjectively meaningful and mathematically tractable, and to have low computational complexity. To demonstrate the usefulness of the non-stationarity detector, two applications are presented. The first is an implicit speech segmentation system that detects non-stationary regions in the speech signal and obtains the boundaries of speech segments. The second is a Variable Bit-Rate Mixed Excitation Linear Predictive (VBR-MELP) vocoder utilizing a novel voice activity detector to detect silent regions in the speech. This voice activity detector is designed to be robust to non-stationary background noise and provides efficient coding of silent sections and unvoiced utterances to decrease the bit-rate. Simulation results are also presented. Ertan, Ali Erdem. M.S.
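    As a hedged illustration of the underlying machinery (not the thesis's actual measures, which also use the spectral values at the LSF locations), the sketch below computes LSFs from LPC coefficients via the standard symmetric/antisymmetric polynomials P(z) and Q(z), and derives a naive frame-to-frame LSF-difference score as a stand-in for a non-stationarity measure. Function names and the toy filters are assumptions.

    # Textbook LSF computation from LPC coefficients plus a naive frame-to-frame
    # spectral-change score. Illustrative only; the thesis defines its own measures.
    import numpy as np

    def lpc_to_lsf(a):
        """a: LPC polynomial [1, a1, ..., ap] (assumed minimum phase). Returns LSFs in (0, pi)."""
        a = np.asarray(a, dtype=float)
        # P(z) = A(z) + z^-(p+1) A(1/z),  Q(z) = A(z) - z^-(p+1) A(1/z)
        a_ext = np.concatenate([a, [0.0]])
        P = a_ext + a_ext[::-1]
        Q = a_ext - a_ext[::-1]
        angles = np.concatenate([np.angle(np.roots(P)), np.angle(np.roots(Q))])
        # Keep one angle per conjugate pair, dropping the trivial roots at z = +/-1.
        return np.sort(angles[(angles > 1e-8) & (angles < np.pi - 1e-8)])

    def lsf_change(a_prev, a_curr):
        """Naive non-stationarity score: mean absolute LSF difference between frames."""
        return float(np.mean(np.abs(lpc_to_lsf(a_curr) - lpc_to_lsf(a_prev))))

    # Toy example: two slightly different 4th-order LPC filters (made-up coefficients).
    print(lsf_change([1.0, -1.2, 0.8, -0.3, 0.1], [1.0, -1.1, 0.7, -0.25, 0.1]))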

    Evaluation of room acoustic qualities and defects by use of auralization


    Building Embodied Conversational Agents:Observations on human nonverbal behaviour as a resource for the development of artificial characters

    "Wow this is so cool!" This is what I most probably yelled, back in the 90s, when my first computer program on our MSX computer turned out to do exactly what I wanted it to do. The program contained the following instruction: COLOR 10(1.1) After hitting enter, it would change the screen color from light blue to dark yellow. A few years after that experience, Microsoft Windows was introduced. Windows came with an intuitive graphical user interface that was designed to allow all people, so also those who would not consider themselves to be experienced computer addicts, to interact with the computer. This was a major step forward in human-computer interaction, as from that point forward no complex programming skills were required anymore to perform such actions as adapting the screen color. Changing the background was just a matter of pointing the mouse to the desired color on a color palette. "Wow this is so cool!". This is what I shouted, again, 20 years later. This time my new smartphone successfully skipped to the next song on Spotify because I literally told my smartphone, with my voice, to do so. Being able to operate your smartphone with natural language through voice-control can be extremely handy, for instance when listening to music while showering. Again, the option to handle a computer with voice instructions turned out to be a significant optimization in human-computer interaction. From now on, computers could be instructed without the use of a screen, mouse or keyboard, and instead could operate successfully simply by telling the machine what to do. In other words, I have personally witnessed how, within only a few decades, the way people interact with computers has changed drastically, starting as a rather technical and abstract enterprise to becoming something that was both natural and intuitive, and did not require any advanced computer background. Accordingly, while computers used to be machines that could only be operated by technically-oriented individuals, they had gradually changed into devices that are part of many people’s household, just as much as a television, a vacuum cleaner or a microwave oven. The introduction of voice control is a significant feature of the newer generation of interfaces in the sense that these have become more "antropomorphic" and try to mimic the way people interact in daily life, where indeed the voice is a universally used device that humans exploit in their exchanges with others. The question then arises whether it would be possible to go even one step further, where people, like in science-fiction movies, interact with avatars or humanoid robots, whereby users can have a proper conversation with a computer-simulated human that is indistinguishable from a real human. An interaction with a human-like representation of a computer that behaves, talks and reacts like a real person would imply that the computer is able to not only produce and understand messages transmitted auditorily through the voice, but also could rely on the perception and generation of different forms of body language, such as facial expressions, gestures or body posture. At the time of writing, developments of this next step in human-computer interaction are in full swing, but the type of such interactions is still rather constrained when compared to the way humans have their exchanges with other humans. It is interesting to reflect on how such future humanmachine interactions may look like. 
When we consider other products that have been created in history, it sometimes is striking to see that some of these have been inspired by things that can be observed in our environment, yet at the same do not have to be exact copies of those phenomena. For instance, an airplane has wings just as birds, yet the wings of an airplane do not make those typical movements a bird would produce to fly. Moreover, an airplane has wheels, whereas a bird has legs. At the same time, an airplane has made it possible for a humans to cover long distances in a fast and smooth manner in a way that was unthinkable before it was invented. The example of the airplane shows how new technologies can have "unnatural" properties, but can nonetheless be very beneficial and impactful for human beings. This dissertation centers on this practical question of how virtual humans can be programmed to act more human-like. The four studies presented in this dissertation all have the equivalent underlying question of how parts of human behavior can be captured, such that computers can use it to become more human-like. Each study differs in method, perspective and specific questions, but they are all aimed to gain insights and directions that would help further push the computer developments of human-like behavior and investigate (the simulation of) human conversational behavior. The rest of this introductory chapter gives a general overview of virtual humans (also known as embodied conversational agents), their potential uses and the engineering challenges, followed by an overview of the four studies

    Eyebrow raising in dialogue: discourse structure, utterance function, and pitch accents

    Get PDF
    Some studies have suggested a relationship between eyebrow raising and different aspects of the verbal message, but our knowledge about this link is still very limited. If we could establish and characterise a relation between eyebrow raises and the linguistic signal we could better understand human multimodal communication behaviour. We could also improve the credibility and efficiency of computer-animated conversational agents in multimodal communication systems. This thesis investigated eyebrow raising in a corpus of task-oriented English dialogues. Applying a standard dialogue coding scheme (Conversational Game Analysis, Carletta et al., 1997), eyebrow raises were studied in connection with discourse structure and utterance function. Supporting the prediction, more frequent and longer eyebrow raising occurred in the initial utterance of high-level discourse segments than anywhere else in the dialogue (where 'high-level discourse segment' = transaction, and 'utterance' = move, following Carletta et al.). Additionally, eyebrow raises were more frequent in instructions than in requests for or acknowledgements of information. Instructions also had longer eyebrow raising than any other type of utterance. Contrary to the prediction, the start of a lower-level discourse segment (conversational game) did not have more eyebrow raising than any other position in the dialogue, and queries did not have more eyebrow raising than any other type of utterance. Eyebrow raises were also studied in relation to intonational events, namely pitch accents. Results showed evidence of alignment between the brow raise start and the start of a pitch accent. Most pitch accents were not associated with brow raising, but when brow raises occurred they tended to immediately precede a pitch accent in the speech signal. To investigate what could explain the alignment between the two events, pitch accents aligned with eyebrow raises were compared to all other pitch accents in terms of: phonological characteristics (primary vs. secondary pitch accents, and downstep-initial vs. non-initial pitch accents), information structure (given vs. new information in referring expressions, and the last quarter vs. earlier parts of the utterance length) and type of utterance in which they occurred (instruction vs. non-instruction). Those comparisons suggested that brow raises may be aligned more frequently with pitch accents in downstep-initial position and in instructions. No differences were found in terms of information structure or between primary/secondary accents. The results provide evidence of a link between eyebrow raising and spoken language. Eyebrow raises may signal the start of linguistic units such as discourse segments and some prosodic phenomena, they may be related to utterance function, and they are aligned with pitch accents. Possible linguistic functions are proposed, such as structuring and emphasising information in the verbal message.
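    As a rough illustration of the kind of alignment check described above (not the analysis actually used in the thesis), the snippet below computes, from hypothetical time-stamped annotations, the proportion of pitch-accent onsets that are immediately preceded by a brow-raise onset. The 0.3 s window, the function name, and all timestamps are assumptions.

    # Illustrative sketch, not the thesis's analysis: for each pitch-accent onset,
    # check whether an eyebrow-raise onset falls within a short preceding window.
    # Timestamps are in seconds; the 0.3 s window is an arbitrary assumption.
    from typing import List

    def accents_preceded_by_raise(accent_onsets: List[float],
                                  raise_onsets: List[float],
                                  window: float = 0.3) -> float:
        """Proportion of pitch accents preceded by a brow-raise onset within `window` seconds."""
        hits = sum(
            1 for t_acc in accent_onsets
            if any(t_acc - window <= t_raise <= t_acc for t_raise in raise_onsets)
        )
        return hits / len(accent_onsets) if accent_onsets else 0.0

    # Toy example with made-up annotation times.
    print(accents_preceded_by_raise([1.20, 2.50, 4.10], [1.05, 3.90]))  # ~0.67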

    A comparative developmental approach to multimodal communication in chimpanzees (Pan troglodytes)

    Get PDF
    Studying how the communication of our closest relatives, the great apes, develops can inform our understanding of the socio-ecological drivers shaping language evolution. However, despite a now recognized ability of great apes to produce multimodal signal combinations, a key feature of human language, we lack knowledge about when or how this ability manifests throughout ontogeny. In this thesis, I aimed to address this issue by examining the development of multimodal signal combinations (also referred to as multimodal combinations) in chimpanzees. To establish an ontogenetic trajectory of combinatorial signalling, my first empirical study examined age- and context-related variation in the production of multimodal combinations in relation to unimodal signals. Results showed that older individuals used multimodal combinations at significantly higher frequencies than younger individuals, although unimodal signalling remained dominant. In addition, I found a strong influence of playful and aggressive contexts on multimodal communication, supporting previous suggestions that combinations function to disambiguate messages in high-stakes interactions. Subsequently, I looked at influences in the social environment which may contribute to patterns of communication development. I turned first to the mother-infant relationship which characterises early infancy, before moving on to interactive behaviour in the wider social environment and the role of multimodal combinations in communicative interactions. Results indicate that mothers support the development of communicative signalling in their infants, transitioning from more action-based to signalling behaviours with infant age. Furthermore, mothers responded more to communicative signals than to physical actions overall, which may help young chimpanzees develop effective communication skills. Within the wider community, I found that interacting with a larger number of individuals positively influenced multimodal combination production. Moreover, in contrast to the literature surrounding unimodal signals, these multimodal signals appeared highly contextually specific. Finally, I found that within communicative interactions, young chimpanzees showed increasing awareness of recipient visual orientation with age, producing multimodal combinations most often when the holistic signal could be received. Moreover, multimodal combinations were more effective in soliciting recipient responses and satisfactory interactional outcomes irrespective of age. Overall, these findings highlight the relevance of studying ape communication development from a multimodal perspective and provide new evidence of developmental patterns that echo those seen in humans, while simultaneously highlighting important species differences. Multimodal communication development appears to be influenced by varying socio-environmental factors, including the context and patterns of communicative interaction.