5,882 research outputs found

    Mechanisms of memory retrieval in slow-wave sleep

    Study Objectives: Memories are strengthened during sleep. The benefits of sleep for memory can be enhanced by re-exposing the sleeping brain to auditory cues, a technique known as targeted memory reactivation (TMR). Prior studies have not assessed the nature of the retrieval mechanisms underpinning TMR: the matching process between auditory stimuli encountered during sleep and previously encoded memories. We carried out two experiments to address this issue. Methods: In Experiment 1, participants associated words with verbal and non-verbal auditory stimuli before an overnight interval in which subsets of these stimuli were replayed in slow-wave sleep. We repeated this paradigm in Experiment 2 with the single difference that the gender of the verbal auditory stimuli was switched between learning and sleep. Results: In Experiment 1, forgetting of cued (vs. non-cued) associations was reduced by TMR with verbal and non-verbal cues to similar extents. In Experiment 2, TMR with identical non-verbal cues reduced forgetting of cued (vs. non-cued) associations, replicating Experiment 1. However, TMR with non-identical verbal cues reduced forgetting of both cued and non-cued associations. Conclusions: These experiments suggest that the memory effects of TMR are influenced by the acoustic overlap between stimuli delivered at training and during sleep. Our findings hint at the existence of two processing routes for memory retrieval during sleep. Whereas TMR with acoustically identical cues may reactivate individual associations via simple episodic matching, TMR with non-identical verbal cues may utilise linguistic decoding mechanisms, resulting in widespread reactivation across a broad category of memories.

    Auditory communication in domestic dogs: vocal signalling in the extended social environment of a companion animal

    Domestic dogs produce a range of vocalisations, including barks, growls, and whimpers, which are shared with other canid species. The source–filter model of vocal production can be used as a theoretical and applied framework to explain how and why the acoustic properties of some vocalisations are constrained by physical characteristics of the caller, whereas others are more dynamic, influenced by transient states such as arousal or motivation. This chapter thus reviews how and why particular call types are produced to transmit specific types of information, and how such information may be perceived by receivers. As domestication is thought to have caused a divergence in the vocal behaviour of dogs as compared to the ancestral wolf, evidence of both dog–human and human–dog communication is considered. Overall, it is clear that domestic dogs have the potential to acoustically broadcast a range of information, which is available to conspecific and human receivers. Moreover, dogs are highly attentive to human speech and are able to extract speaker identity, emotional state, and even some types of semantic information

    Neurocognitive Informatics Manifesto.

    Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. This position paper gives examples of neurocognitive inspirations and promising directions in this area.

    Automatic Emotion Recognition from Mandarin Speech


    MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning

    Over the past few decades, multimodal emotion recognition has made remarkable progress with the development of deep learning. However, existing technologies struggle to meet the demands of practical applications. To improve robustness, we launch the Multimodal Emotion Recognition Challenge (MER 2023) to motivate global researchers to build innovative technologies that can further accelerate and foster research. For this year's challenge, we present three distinct sub-challenges: (1) MER-MULTI, in which participants recognize both discrete and dimensional emotions; (2) MER-NOISE, in which noise is added to test videos for modality robustness evaluation; (3) MER-SEMI, which provides large amounts of unlabeled samples for semi-supervised learning. In this paper, we test a variety of multimodal features and provide a competitive baseline for each sub-challenge. Our system achieves an F1 score of 77.57% and a mean squared error (MSE) of 0.82 for MER-MULTI, an F1 score of 69.82% and an MSE of 1.12 for MER-NOISE, and an F1 score of 86.75% for MER-SEMI. Baseline code is available at https://github.com/zeroQiaoba/MER2023-Baseline
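
The challenge's official evaluation protocol is defined by the organisers and is not reproduced here; the following is only a minimal sketch, in Python, of the two metric types quoted above (an F1 score for discrete emotion labels and a mean squared error for a continuous dimension such as valence), computed with scikit-learn. The label set, the weighted averaging, and all values are illustrative assumptions.

```python
# Minimal sketch of the two metric types reported above: an F1 score for
# discrete emotion labels and a mean squared error (MSE) for a continuous
# dimension such as valence. The label set, the 'weighted' averaging, and
# the example values are assumptions for illustration, not the official
# MER 2023 evaluation protocol.
from sklearn.metrics import f1_score, mean_squared_error

# Hypothetical discrete predictions for a handful of test clips.
y_true_discrete = ["happy", "sad", "angry", "neutral", "happy"]
y_pred_discrete = ["happy", "neutral", "angry", "neutral", "sad"]

# Hypothetical continuous (dimensional) predictions, e.g. valence in [-1, 1].
y_true_valence = [0.8, -0.6, -0.9, 0.1, 0.7]
y_pred_valence = [0.6, -0.2, -0.8, 0.0, 0.3]

f1 = f1_score(y_true_discrete, y_pred_discrete, average="weighted")
mse = mean_squared_error(y_true_valence, y_pred_valence)
print(f"weighted F1: {f1:.4f}, MSE: {mse:.4f}")
```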

    Mental simulations in comprehension of direct versus indirect speech quotations

    In human communication, direct speech (e.g., Mary said: ‘I’m hungry’) coincides with vivid paralinguistic demonstrations of the reported speech acts whereas indirect speech (e.g., Mary said [that] she was hungry) provides mere descriptions of what was said. Hence, direct speech is usually more vivid and perceptually engaging than indirect speech. This thesis explores how this vividness distinction between the two reporting styles underlies language comprehension. Using functional magnetic resonance imaging (fMRI), we found that in both silent reading and listening, direct speech elicited higher brain activity in the voice-selective areas of the auditory cortex than indirect speech, consistent with the intuition of an ‘inner voice’ experience during comprehension of direct speech. In the follow-up behavioural investigations, we demonstrated that this ‘inner voice’ experience could be characterised in terms of modulations of speaking rate, reflected in both behavioural articulation (oral reading) and eye-movement patterns (silent reading). Moreover, we observed context-concordant modulations of pitch and loudness in oral reading but not straightforwardly in silent reading. Finally, we obtained preliminary results which show that in addition to reported speakers’ voices, their facial expressions may also be encoded in silent reading of direct speech but not indirect speech. The results show that individuals are more likely to mentally simulate or imagine reported speakers’ voices and perhaps also their facial expressions during comprehension of direct as opposed to indirect speech, indicating a more vivid representation of the former. The findings are in line with the demonstration hypothesis of direct speech (Clark & Gerrig, 1990) and the embodied theories of language comprehension (e.g., Barsalou, 1999; Zwaan, 2004), suggesting that sensory experiences with pragmatically distinct reporting styles underlie language comprehension

    Effects of Orthographic, Phonologic, and Semantic Information Sources on Visual and Auditory Lexical Decision

    The present study was designed to compare lexical decision latencies in visual and auditory modalities to three word types: (a) words that are inconsistent with two information sources, orthography and semantics (i.e., heterographic homophones such as bite/byte), (b) words that are inconsistent with one information source, semantics (i.e., homographic homophones such as bat), and (c) control words that are not inconsistent with any information source. Participants (N = 76) were randomly assigned to either the visual or auditory condition, in which they judged the lexical status (word or nonword) of 180 words (60 heterographic homophones, 60 homographic homophones, and 60 control words) and 180 pronounceable nonsense word foils. Results differed significantly between the visual and auditory modalities. In visual lexical decision, homographic homophones were responded to faster than heterographic homophones or control words, which did not differ significantly from each other. In auditory lexical decision, both homographic homophones and heterographic homophones were responded to faster than control words. Results are used to propose potential modifications to the Cooperative Division of Labor Model of Word Recognition (Harm & Seidenberg, 2004) to enable it to encompass both the visual and auditory modalities and account for the present results.
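
As a purely illustrative sketch of the kind of condition-wise latency comparison described above, the snippet below tabulates mean reaction times per word type and modality on simulated data; the numbers are random and the tabulation is far simpler than the inferential statistics a study like this would report.

```python
# Illustrative sketch only: simulated lexical decision latencies tabulated by
# word type and modality. The values are random and do not reflect the study's
# actual data or its statistical analysis.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
word_types = ["heterographic_homophone", "homographic_homophone", "control"]
modalities = ["visual", "auditory"]

rows = []
for modality in modalities:
    for word_type in word_types:
        # 60 items per word type, as in the design described above (latencies in ms).
        rts = rng.normal(loc=650, scale=80, size=60)
        rows.extend({"modality": modality, "word_type": word_type, "rt_ms": rt}
                    for rt in rts)

df = pd.DataFrame(rows)
# Mean latency per condition; a real analysis would add inferential statistics.
print(df.groupby(["modality", "word_type"])["rt_ms"].mean().round(1))
```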

    Speech Recognition

    Chapters in the first part of the book cover all the essential speech processing techniques for building robust automatic speech recognition systems: the representation of speech signals, methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other speech processing applications that can use the information from automatic speech recognition: speaker identification and tracking, prosody modeling in emotion-detection systems, and applications that operate in real-world environments, such as mobile communication services and smart homes.
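
As a small illustration of the speech-feature extraction step mentioned above, the sketch below computes MFCCs, one widely used speech representation, with the librosa library on a synthetic tone. It is a generic example under assumed parameters (16 kHz sampling, 13 coefficients), not code from the book.

```python
# Minimal sketch of one common speech representation mentioned above: MFCCs.
# A synthetic tone is used so the example is self-contained; in practice the
# input would be recorded speech. This is a generic illustration, not material
# from the book itself.
import numpy as np
import librosa

sr = 16000                          # assumed sample rate in Hz
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.1 * np.sin(2 * np.pi * 220 * t).astype(np.float32)  # 1 s, 220 Hz tone

# 13 MFCCs per analysis frame -> matrix of shape (13, n_frames).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)
```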

    Bag-of-words representations for computer audition

    Computer audition is omnipresent in everyday life, in applications ranging from personalised virtual agents to health care. From a technical point of view, the goal is to robustly classify the content of an audio signal in terms of a defined set of labels, such as the acoustic scene, a medical diagnosis, or, in the case of speech, what is said or how it is said. Typical approaches employ machine learning (ML), which means that task-specific models are trained by means of examples. Despite recent successes in neural network-based end-to-end learning, taking the raw audio signal as input, models relying on hand-crafted acoustic features are still superior in some domains, especially for tasks where data is scarce. One major issue is nevertheless that a sequence of acoustic low-level descriptors (LLDs) cannot be fed directly into many ML algorithms, as they require a static and fixed-length input. Moreover, even for dynamic classifiers, it can be beneficial to compress the information of the LLDs over a temporal block by summarising them. However, the type of instance-level representation has a fundamental impact on the performance of the model. In this thesis, the so-called bag-of-audio-words (BoAW) representation is investigated as an alternative to the standard approach of statistical functionals. BoAW is an unsupervised method of representation learning, inspired by the bag-of-words method in natural language processing, which forms a histogram of the terms present in a document. The toolkit openXBOW is introduced, enabling systematic learning and optimisation of these feature representations, unified across arbitrary modalities of numeric or symbolic descriptors. A number of experiments on BoAW are presented and discussed, focussing on a large number of potential applications and corresponding databases, ranging from emotion recognition in speech to medical diagnosis. The evaluations include a comparison of different acoustic LLD sets and configurations of the BoAW generation process. The key findings are that BoAW features are a meaningful alternative to statistical functionals, offering certain benefits while being able to preserve the advantages of functionals, such as data-independence. Furthermore, it is shown that both representations are complementary and their fusion improves the performance of a machine listening system.
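
The thesis' actual implementation is the openXBOW toolkit, whose interface is not shown here; the snippet below is only a minimal, generic sketch of the bag-of-audio-words idea itself: frame-level LLDs are quantised against a codebook learned by k-means, and each clip is then represented as a normalised histogram of codeword assignments. The LLD dimensionality, codebook size, and random data are assumptions for illustration.

```python
# Minimal, generic sketch of the bag-of-audio-words (BoAW) idea described
# above: quantise frame-level low-level descriptors (LLDs) against a codebook
# learned by k-means, then represent each clip as a normalised histogram of
# codeword assignments. The random data, LLD dimensionality (13) and codebook
# size (64) are illustrative assumptions; the thesis' actual tool is the
# openXBOW toolkit, whose interface is not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_clips, lld_dim, codebook_size = 20, 13, 64

# Each clip: a variable-length sequence of 13-dimensional LLD frames.
clips = [rng.normal(size=(rng.integers(80, 200), lld_dim)) for _ in range(n_clips)]

# 1) Learn the codebook on all frames pooled across clips.
all_frames = np.vstack(clips)
codebook = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(all_frames)

# 2) For each clip, count codeword assignments and normalise to a histogram.
def boaw_histogram(frames: np.ndarray) -> np.ndarray:
    assignments = codebook.predict(frames)
    counts = np.bincount(assignments, minlength=codebook_size).astype(float)
    return counts / counts.sum()

features = np.stack([boaw_histogram(c) for c in clips])  # shape: (n_clips, 64)
print(features.shape)
```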