
    Entropy on Spin Factors

    Recently it has been demonstrated that the Shannon entropy and the von Neumann entropy are the only entropy functions that generate a local Bregman divergence as long as the state space has rank 3 or higher. In this paper we study the properties of Bregman divergences for convex bodies of rank 2. The two most important convex bodies of rank 2 can be identified with the bit and the qubit. We demonstrate that if a convex body of rank 2 has a Bregman divergence that satisfies sufficiency, then the convex body is spectral, and if the Bregman divergence is monotone, then the convex body has the shape of a ball. A ball can be represented as the state space of a spin factor, which is the simplest type of Jordan algebra. We also study the existence of recovery maps for Bregman divergences on spin factors. In general, convex bodies of rank 2 appear as faces of state spaces of higher rank. Our results therefore give strong restrictions on which convex bodies could be the state space of a physical system with a well-behaved entropy function. Comment: 30 pages, 6 figures
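
    For readers unfamiliar with the central object, a brief reminder may help. The Bregman divergence generated by a strictly convex, differentiable function f is given by the standard textbook definition below (the notation is ours, not taken from the paper); taking f to be the negative von Neumann entropy recovers the quantum relative entropy on trace-one states.

    % Bregman divergence generated by a strictly convex, differentiable f
    D_f(x, y) = f(x) - f(y) - \langle \nabla f(y),\, x - y \rangle

    % With f(\rho) = \mathrm{Tr}(\rho \log \rho) (negative von Neumann entropy),
    % this reduces, for trace-one states, to the quantum relative entropy:
    D_f(\rho, \sigma) = \mathrm{Tr}\bigl[\rho(\log\rho - \log\sigma)\bigr]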

    Relative Pitch Perception and the Detection of Deviant Tone Patterns.

    Most people are able to recognise familiar tunes even when played in a different key. It is assumed that this depends on a general capacity for relative pitch perception: the ability to recognise the pattern of inter-note intervals that characterises the tune. However, when healthy adults are required to detect rare deviant melodic patterns in a sequence of randomly transposed standard patterns, they perform close to chance. Musically experienced participants perform better than naïve participants, but even they find the task difficult, despite the fact that musical education includes training in interval recognition. To understand the source of this difficulty, we designed an experiment to explore the relative influence of the size of within-pattern intervals and between-pattern transpositions on detecting deviant melodic patterns. We found that task difficulty increases when patterns contain large intervals (5-7 semitones) rather than small intervals (1-3 semitones). While task difficulty increases substantially when transpositions are introduced, the effect of transposition size (large vs. small) is weaker. Increasing the range of permissible intervals also makes the task more difficult. Furthermore, providing an initial exact repetition followed by subsequent transpositions does not improve performance. Although musical training correlates with task performance, we find no evidence that violations of musical intervals important in Western music (i.e. the perfect fifth or fourth) are more easily detected. In summary, relative pitch perception does not appear to be amenable to simple explanations based exclusively on invariant physical ratios.
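
    As a rough illustration of the task structure (not the authors' stimulus code; the function names, note values, and pattern length below are hypothetical), the Python sketch shows why relative pitch amounts to comparing interval sequences, which are invariant under transposition but violated by a deviant pattern:

    def intervals(pattern):
        # Inter-note intervals in semitones; this is what stays constant under transposition.
        return [b - a for a, b in zip(pattern, pattern[1:])]

    def transpose(pattern, shift):
        # Shift every note by the same number of semitones (a key change).
        return [note + shift for note in pattern]

    def is_deviant(candidate, standard):
        # A pattern is deviant if its interval structure differs from the standard's.
        return intervals(candidate) != intervals(standard)

    standard = [60, 62, 65, 64]            # hypothetical standard pattern (MIDI note numbers)
    shifted = transpose(standard, 7)       # transposed standard: same interval pattern
    deviant = [60, 62, 66, 64]             # one interval enlarged by a semitone

    print(is_deviant(shifted, standard))   # False: transposition preserves the intervals
    print(is_deviant(deviant, standard))   # True: the interval pattern is violated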

    The Speed of Smell: Odor-Object Segregation within Milliseconds

    Segregating objects from the background, and determining which of many concurrent stimuli belong to the same object, remain among the most challenging unsolved problems in both neuroscience and technical applications. While this phenomenon has been investigated in depth in vision and audition, it has hardly been studied in olfaction. We found that for honeybees a 6-ms temporal difference in stimulus coherence is sufficient for odor-object segregation, showing that the temporal resolution of the olfactory system is much faster than previously thought.

    Auditory grouping occurs prior to intersensory pairing: evidence from temporal ventriloquism

    The authors examined how principles of auditory grouping relate to intersensory pairing. Two sounds that normally enhance sensitivity on a visual temporal order judgement task (i.e. temporal ventriloquism) were embedded in a sequence of flanker sounds which had either the same or a different frequency (Exp. 1), rhythm (Exp. 2), or location (Exp. 3). In all experiments, we found that temporal ventriloquism only occurred when the two capture sounds differed from the flankers, demonstrating that grouping of the sounds in the auditory stream took priority over intersensory pairing. By combining principles of auditory grouping with intersensory pairing, we demonstrate that capture sounds were, counter-intuitively, more effective when their locations differed from that of the lights than when they came from the same position as the lights.

    Active Learning for Auditory Hierarchy

    Much audio content today is rendered as a static stereo mix: fundamentally a single fixed entity. Object-based audio envisages the delivery of sound content as a collection of individual sound ‘objects’ controlled by accompanying metadata. This offers the potential for audio to be delivered in a dynamic manner, providing an enhanced experience for consumers. One example of such treatment is applying varying levels of data compression to sound objects, thereby reducing the volume of data to be transmitted in limited-bandwidth situations. This application motivates the ability to accurately classify objects in terms of their ‘hierarchy’: that is, whether an object is a foreground sound, which should be reproduced at full quality if possible, or a background sound, which can be heavily compressed without degrading the listening experience. Lack of suitably labelled data is an acknowledged problem in the domain. Active Learning is a method that can greatly reduce the manual effort required to label a large corpus by identifying the most effective instances with which to train a model to high accuracy. This paper compares a number of Active Learning methods to investigate which is most effective for a hierarchical labelling task on an audio dataset. Results show that the number of manual labels required can be reduced to 1.7% of the total dataset while still retaining high prediction accuracy.
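
    As a minimal sketch of the general idea only (uncertainty sampling with a generic classifier, not the specific Active Learning methods or audio features compared in the paper; the synthetic dataset, RandomForestClassifier model, and batch sizes are placeholders), an active-learning loop might look like this:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    def uncertainty_sampling(X, y, n_initial=20, batch_size=10, n_rounds=5):
        # Toy active-learning loop: repeatedly "label" the pool instances the
        # current model is least confident about, then retrain.
        rng = np.random.default_rng(0)
        labelled = list(rng.choice(len(X), size=n_initial, replace=False))
        pool = [i for i in range(len(X)) if i not in labelled]
        model = RandomForestClassifier(random_state=0)
        for _ in range(n_rounds):
            model.fit(X[labelled], y[labelled])
            probs = model.predict_proba(X[pool])
            uncertainty = 1.0 - probs.max(axis=1)           # least-confident score
            query = np.argsort(uncertainty)[-batch_size:]   # most uncertain instances
            chosen = [pool[i] for i in query]
            labelled.extend(chosen)                         # the oracle supplies these labels
            pool = [i for i in pool if i not in chosen]
        return model, labelled

    # Placeholder data standing in for foreground/background audio-object features.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    model, labelled = uncertainty_sampling(X, y)
    print(f"labelled {len(labelled)} of {len(X)} instances")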

    Combination of Spectral and Binaurally Created Harmonics in a Common Central Pitch Processor

    A fundamental attribute of human hearing is the ability to extract a residue pitch from harmonic complex sounds such as those produced by musical instruments and the human voice. However, the neural mechanisms that underlie this processing are unclear, as are the locations of these mechanisms in the auditory pathway. The ability to extract a residue pitch corresponding to the fundamental frequency from individual harmonics, even when the fundamental component is absent, has been demonstrated separately for conventional pitches and for Huggins pitch (HP), a stimulus without monaural pitch information. HP is created by presenting the same wideband noise to both ears, except for a narrowband frequency region where the noise is decorrelated across the two ears. The present study investigated whether residue pitch can be derived by combining a component derived solely from binaural interaction (HP) with a spectral component for which no binaural processing is required. Fifteen listeners indicated which of two sequentially presented sounds was higher in pitch. Each sound consisted of two “harmonics,” which independently could be either a spectral or an HP component. Component frequencies were chosen such that the relative pitch judgement revealed whether a residue pitch was heard or not. The results showed that listeners were equally likely to perceive a residue pitch when one component was dichotic and the other was spectral as when both components were spectral or both dichotic. This suggests that there exists a single mechanism for the derivation of residue pitch from binaurally created components and from spectral components, and that this mechanism operates at or after the level of the dorsal nucleus of the lateral lemniscus (brainstem) or the inferior colliculus (midbrain), which receive inputs from the medial superior olive, where temporal information from the two ears is first combined.
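
    For illustration, one common way to synthesise a Huggins-pitch stimulus is to apply an interaural phase shift to a narrow band of an otherwise identical wideband noise. The numpy sketch below follows that general recipe with placeholder parameters (f0, bandwidth, duration); it is not the exact stimulus generation used in the study.

    import numpy as np

    def huggins_pitch(f0=600.0, rel_bandwidth=0.16, fs=44100, dur=0.5, seed=0):
        # Identical wideband noise in both ears, except for a narrow band around f0
        # where the right-ear phase is shifted by pi (interaurally decorrelated).
        rng = np.random.default_rng(seed)
        n = int(fs * dur)
        noise = rng.standard_normal(n)
        spectrum = np.fft.rfft(noise)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        band = (freqs > f0 * (1 - rel_bandwidth / 2)) & (freqs < f0 * (1 + rel_bandwidth / 2))
        right_spectrum = spectrum.copy()
        right_spectrum[band] *= np.exp(1j * np.pi)    # pi phase shift inside the band only
        left = np.fft.irfft(spectrum, n)
        right = np.fft.irfft(right_spectrum, n)
        return left, right                            # to be presented dichotically over headphones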

    Reliability of Eye Tracking and Pupillometry Measures in Individuals with Fragile X Syndrome

    Recent insight into the underlying molecular and cellular mechanisms of fragile X syndrome (FXS) has led to the proposal and development of new pharmaceutical treatment strategies, and to the initiation of clinical trials aimed at correcting core symptoms of the developmental disorder. Consequently, there is an urgent and critical need for outcome measures that are valid for quantifying specific symptoms of FXS and that are consistent across time. We used eye tracking to evaluate the test–retest reliability of gaze and pupillometry measures in individuals with FXS, and we demonstrate that these measures are viable options for assessing treatment-specific outcomes related to a core behavioral feature of the disorder.
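
    As a minimal sketch of how test-retest reliability of such measures is commonly quantified (an ICC(2,1) computed from a subjects-by-sessions matrix; the data below are simulated placeholders, not values from the study):

    import numpy as np

    def icc_2_1(scores):
        # ICC(2,1): two-way random effects, absolute agreement, single measure.
        # scores: array of shape (n_subjects, n_sessions).
        y = np.asarray(scores, dtype=float)
        n, k = y.shape
        grand = y.mean()
        ms_rows = k * np.sum((y.mean(axis=1) - grand) ** 2) / (n - 1)
        ms_cols = n * np.sum((y.mean(axis=0) - grand) ** 2) / (k - 1)
        ss_error = np.sum((y - grand) ** 2) - (n - 1) * ms_rows - (k - 1) * ms_cols
        ms_error = ss_error / ((n - 1) * (k - 1))
        return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

    # Simulated example: a pupillometry measure for 12 participants over two sessions.
    rng = np.random.default_rng(1)
    trait = rng.normal(4.0, 0.5, size=12)                                    # stable individual differences
    sessions = np.column_stack([trait + rng.normal(0.0, 0.1, 12) for _ in range(2)])
    print(round(icc_2_1(sessions), 2))                                       # values near 1 indicate high reliability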

    Understanding Pitch Perception as a Hierarchical Process with Top-Down Modulation

    Pitch is one of the most important features of natural sounds, underlying the perception of melody in music and prosody in speech. However, the temporal dynamics of pitch processing are still poorly understood. Previous studies suggest that the auditory system uses a wide range of time scales to integrate pitch-related information and that the effective integration time is both task- and stimulus-dependent. None of the existing models of pitch processing can account for such task- and stimulus-dependent variations in processing time scales. This study presents an idealized neurocomputational model, which provides a unified account of the multiple time scales observed in pitch perception. The model is evaluated using a range of perceptual studies, which have not previously been accounted for by a single model, and new results from a neurophysiological experiment. In contrast to other approaches, the current model contains a hierarchy of integration stages and uses feedback to adapt the effective time scales of processing at each stage in response to changes in the input stimulus. The model has features in common with a hierarchical generative process and suggests a key role for efferent connections from central to sub-cortical areas in controlling the temporal dynamics of pitch processing.
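
    The sketch below is only a toy illustration of the general architecture described here: a cascade of leaky integrators whose effective time constants are shortened by feedback from the stage above when its output changes rapidly. The stage count, time constants, and feedback rule are placeholders, not the paper's actual model.

    import numpy as np

    def hierarchical_integration(signal, dt=0.001, base_taus=(0.005, 0.03, 0.2), gain=5.0):
        # Cascade of leaky integrators. When the stage above registers a rapid change,
        # the stage below shortens its effective time constant (crude top-down feedback).
        n_stages = len(base_taus)
        states = np.zeros(n_stages)
        out = np.zeros((len(signal), n_stages))
        for t, x in enumerate(signal):
            drive = x
            for i, tau in enumerate(base_taus):
                feedback = 0.0
                if i + 1 < n_stages and t >= 2:
                    feedback = abs(out[t - 1, i + 1] - out[t - 2, i + 1])
                eff_tau = max(tau / (1.0 + gain * feedback), dt)   # faster integration after a change
                states[i] += (dt / eff_tau) * (drive - states[i])
                out[t, i] = states[i]
                drive = states[i]                                  # feed the next stage up
        return out

    # Example: a step change in the input is tracked at stage-dependent speeds.
    stimulus = np.concatenate([np.zeros(200), np.ones(300)])
    responses = hierarchical_integration(stimulus)
    print(responses[-1])                                           # slower (deeper) stages lag the fast ones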