
    A REAL TIME GENERAL PURPOSE SIGNAL PROCESSOR ARCHITECTURE.

    Digital signal processing has many applications in signal, radar, speech, and image processing, and real-time implementation requires a very high throughput rate. Various processor architectures have been investigated [1-6], which reveal that a special-purpose processor matched to the algorithms must be designed in order to achieve high throughput rates. Recently, research efforts [7-14] have been directed towards exploiting the parallelism in these algorithms and computing them in parallel. The objective of this work is to propose new concepts for high-speed computation of real-time, general-purpose signal processing algorithms. A novel architecture for a real-time general-purpose data flow signal processor (DFSP), based on a binary tree structure, is proposed for real-time signal processing applications. The data flow signal processor exploits distributed, parallel, and pipelined processing to achieve high throughput rates. It uses the residue number system (RNS) for high-speed signal processing: arithmetic operations in RNS can be performed via random-access memory (RAM) look-up tables, so the execution time of any particular arithmetic operation is reduced to the access time of a RAM. The data flow signal processor is shown to be suitable for recursive and non-recursive digital filtering and for convolution. Finally, the thesis describes various alternatives for programming the DFSP, and an interactive programming environment is used to write application programs without knowledge of the DFSP's internal architecture.
    Dept. of Electrical and Computer Engineering. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis1985 .J353. Source: Dissertation Abstracts International, Volume: 46-02, Section: B, page: 0601. Thesis (Ph.D.)--University of Windsor (Canada), 1985.
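    A minimal sketch can make the RAM look-up idea concrete: each operand is split into residues modulo a small set of coprime moduli, and per-digit addition and multiplication are read from precomputed tables, so every operation costs one table access and the digits can be processed in parallel. The moduli set, table layout, and function names below are illustrative assumptions, not the DFSP's actual design.

```python
# Illustrative sketch of residue-number-system (RNS) arithmetic via look-up tables.
# The moduli and table organisation are assumptions for demonstration only.

MODULI = (5, 7, 8)  # pairwise coprime; dynamic range = 5 * 7 * 8 = 280

# One add table and one multiply table per modulus, standing in for the RAM look-up tables.
ADD_TABLES = {m: [[(a + b) % m for b in range(m)] for a in range(m)] for m in MODULI}
MUL_TABLES = {m: [[(a * b) % m for b in range(m)] for a in range(m)] for m in MODULI}

def to_rns(x):
    """Split an integer into its residues (one digit per modulus)."""
    return tuple(x % m for m in MODULI)

def rns_add(x, y):
    """Digit-wise addition: each digit is a single table access, with no carry chain."""
    return tuple(ADD_TABLES[m][a][b] for m, a, b in zip(MODULI, x, y))

def rns_mul(x, y):
    """Digit-wise multiplication via table look-up."""
    return tuple(MUL_TABLES[m][a][b] for m, a, b in zip(MODULI, x, y))

def from_rns(digits):
    """Recover the integer via a naive Chinese Remainder Theorem search."""
    rng = 1
    for m in MODULI:
        rng *= m
    return next(x for x in range(rng) if to_rns(x) == tuple(digits))

if __name__ == "__main__":
    a, b = 23, 11
    print(from_rns(rns_add(to_rns(a), to_rns(b))))  # 34
    print(from_rns(rns_mul(to_rns(a), to_rns(b))))  # 253
```

    Because each residue digit is computed independently, the per-modulus look-ups can run on separate units at the same time, which is what the abstract's parallel, pipelined look-up approach exploits.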

    Biometrics for Emotion Detection (BED): Exploring the combination of Speech and ECG

    The paradigm Biometrics for Emotion Detection (BED) is introduced, which enables unobtrusive emotion recognition that takes varying environments into account. It uses the electrocardiogram (ECG) and speech, a powerful but rarely used combination, to unravel people's emotions. BED was applied in two environments (i.e., office and home-like) in which 40 people watched 6 film scenes. It is shown that both heart rate variability (derived from the ECG) and, when people's gender is taken into account, the standard deviation of the fundamental frequency of speech indicate people's experienced emotions. As such, these measures validate each other. Moreover, it is found that people's environment can indeed influence the emotions they experience. These results indicate that BED might become an important paradigm for unobtrusive emotion detection.
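    As a rough illustration of the two measures this study relies on, the sketch below computes a simple heart rate variability statistic (SDNN over R-R intervals, one common choice) and the standard deviation of a speech F0 contour. The input formats, function names, and the SDNN choice are assumptions made for illustration, not the paper's exact pipeline.

```python
import statistics

def hrv_sdnn(rr_intervals_ms):
    """Heart rate variability as the standard deviation of R-R intervals (SDNN).
    Input: successive beat-to-beat intervals in milliseconds, e.g. from an ECG R-peak detector."""
    return statistics.stdev(rr_intervals_ms)

def f0_sd(f0_track_hz):
    """Standard deviation of the fundamental frequency (F0) of speech.
    Input: an F0 contour in Hz; frames with value 0 are treated as unvoiced and dropped."""
    voiced = [f for f in f0_track_hz if f > 0]
    return statistics.stdev(voiced)

# Made-up example values; in the study such measures are related to reported
# emotions, with F0 analysed per gender.
print(hrv_sdnn([812, 795, 840, 803, 828, 790]))
print(f0_sd([210.0, 0.0, 198.5, 205.2, 0.0, 221.7, 215.0]))
```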

    Navigation and interaction in a real-scale digital mock-up using natural language and user gesture

    This paper demonstrates a new real-scale 3D system and summarizes first-hand, cutting-edge results concerning multi-modal navigation and interaction interfaces. The work is part of the CALLISTO-SARI collaborative project, which aims at constructing an immersive room and developing a set of software tools and navigation/interaction interfaces. Two sets of interfaces are introduced here: 1) interaction devices, and 2) natural language (speech processing) and user gesture. The evaluation of this system using subjective observations (Simulator Sickness Questionnaire, SSQ) and objective measurements (Center of Gravity, COG) shows that the natural-language and gesture-based interfaces induced less cyber-sickness compared to the device-based interfaces. Gesture-based interfaces are therefore more efficient than device-based interfaces.
    FUI CALLISTO-SAR

    Ubiquitous emotion-aware computing

    Emotions are a crucial element for personal and ubiquitous computing. What to sense and how to sense it, however, remain a challenge. This study explores the rare combination of speech, the electrocardiogram, and a revised Self-Assessment Mannequin to assess people's emotions. 40 people watched 30 International Affective Picture System pictures in either an office or a living-room environment. Additionally, their personality traits neuroticism and extroversion and demographic information (i.e., gender, nationality, and level of education) were recorded. The resulting data were analyzed using both basic emotion categories and the valence–arousal model, which enabled a comparison between both representations. The combination of heart rate variability and three speech measures (i.e., variability of the fundamental frequency of pitch (F0), intensity, and energy) explained 90% (p < .001) of the participants' experienced valence–arousal, with 88% for valence and 99% for arousal (ps < .001). The six basic emotions could also be discriminated (p < .001), although the explained variance was much lower: 18–20%. Environment (or context), the personality trait neuroticism, and gender proved to be useful when a nuanced assessment of people's emotions was needed. Taken together, this study provides a significant leap toward robust, generic, and ubiquitous emotion-aware computing.
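    The reported valence–arousal figures come from combining heart rate variability with three speech measures; the sketch below fits a plain least-squares regression from such a four-feature vector to valence and arousal ratings and reports R², purely to illustrate the kind of model involved. The synthetic data and the choice of ordinary least squares are assumptions, not the paper's actual analysis.

```python
import numpy as np

# Synthetic stand-in data: one row per participant/stimulus,
# columns = [HRV, F0 variability, speech intensity, speech energy].
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))
# Synthetic valence and arousal ratings (two target columns).
Y = rng.normal(size=(40, 2))

# Add an intercept column and fit ordinary least squares for both targets at once.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, Y, rcond=None)

# Explained variance (R^2) per target, analogous to the percentages quoted above.
pred = X1 @ coef
ss_res = ((Y - pred) ** 2).sum(axis=0)
ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
print("R^2 (valence, arousal):", 1 - ss_res / ss_tot)
```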