
    Augev Method and an Innovative Use of Vocal Spectroscopy in Evaluating and Monitoring the Rehabilitation Path of Subjects Showing Severe Communication Pathologies

    A strongly characteristic feature of developmental disorders (DS) is the total or partial impairment of verbal communication and, more generally, of social interaction. The Vocal-verbal self-management (Augev) method is a systemic, organicistic method that can successfully address problems in verbal, spoken, and written language development. This study aims to demonstrate that this progress can be objectively documented through a spectrographic examination of the vocal signal, which measures the phonetic-acoustic parameters of the voice. Such a survey allows an objective evaluation of the effectiveness of an educational-rehabilitation intervention. The study was performed on a population of 40 subjects (34 males and 6 females) diagnosed with developmental disorders, specifically with autism spectrum disorder according to the DSM-5. The 40 subjects were treated at the “la Comunicazione” centers, headquartered near Bari, Brindisi and Rome. The results show statistically significant correlations among the observed variables: supervisory status, attention, general dynamic coordination, understanding and execution of orders, performance of simple unshielded rhythmic beats, word rhythm, oral praxias, phono-articulatory praxias, pronunciation of vowels, execution of graphemes, visual perception, acoustic perception, proprioceptive sensitivity, selective attention, short-term memory, segmental coordination, performance of simple rhythmic beats, word rhythm, voice setting, intonation of sounds within a fifth, vowel pronunciation, consonant pronunciation, graphematic decoding, syllabic decoding, pronunciation of caudate syllables, coding of the final syllable consonant, lexical decoding, phoneme-grapheme conversion, homographic grapheme decoding, homogeneous grapheme decoding, and graphic stroke.

    Disproportionate Frequency Representation in the Inferior Colliculus of Doppler-Compensating Greater Horseshoe Bats. Evidence for an Acoustic Fovea

    1. The inferior colliculus of 8 Greater Horseshoe bats (Rhinolophus ferrumequinum) was systematically sampled with electrode penetrations covering the entire volume of the nucleus. The best frequencies and intensity thresholds for pure tones (Fig. 2) were determined for 591 neurons. The locations of the electrode penetrations within the inferior colliculus were histologically verified.
    2. About 50% of all neurons encountered had best frequencies (BF) in the range between 78 and 88 kHz (Table 1, Fig. 1A). Within this range, BFs between 83.0 and 84.5 kHz were overrepresented, accounting for 16.3% of the total population of neurons (Fig. 1B). The frequencies of the constant-frequency components of the echoes fall into this range.
    3. The representation of BFs, expressed as number of neurons per octave, shows a striking correspondence to the nonuniform density of the afferent innervation of the basilar membrane (Bruns and Schmieszek, in press). The high innervation density of the basilar membrane in the band between 83 and 84.5 kHz coincides with the maximum of the distribution of neurons per octave across frequency in the inferior colliculus (Fig. 1C).
    4. The disproportionate representation of frequencies in the auditory system of the greater horseshoe bat is described as an acoustic fovea functioning in analogy to the fovea of the visual system. The functional importance of Doppler-shift compensation for such a foveal mechanism in the auditory system of horseshoe bats is related to that of tracking eye movements in the visual system.

    Predicting Audio Advertisement Quality

    Online audio advertising is a particular form of advertising used abundantly in online music streaming services. In these platforms, which tend to host tens of thousands of unique audio advertisements (ads), providing high quality ads ensures a better user experience and results in longer user engagement. Therefore, the automatic assessment of these ads is an important step toward audio ads ranking and better audio ads creation. In this paper we propose one way to measure the quality of the audio ads using a proxy metric called Long Click Rate (LCR), which is defined as the amount of time a user engages with the follow-up display ad (shown while the audio ad is playing) divided by the number of impressions. We then focus on predicting audio ad quality using only acoustic features such as harmony, rhythm, and timbre, extracted from the raw waveform. We discuss how the characteristics of the sound can be connected to concepts such as the clarity of the audio ad message, its trustworthiness, etc. Finally, we propose a new deep learning model for audio ad quality prediction, which outperforms the other discussed models trained on hand-crafted features. To the best of our knowledge, this is the first large-scale audio ad quality prediction study. Comment: WSDM '18, Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, 9 pages.
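    The LCR proxy described in this abstract, engagement time with the companion display ad normalized by impressions, can be sketched in a few lines. This is a minimal illustration assuming per-ad aggregate logs; the function name and inputs are hypothetical, not the paper's implementation:

    ```python
    def long_click_rate(engagement_seconds: float, impressions: int) -> float:
        """Proxy for audio-ad quality: total time users engaged with the
        follow-up display ad, divided by the number of ad impressions."""
        if impressions <= 0:
            return 0.0  # no impressions served, so the metric is undefined
        return engagement_seconds / impressions

    # An ad whose companion display collected 540 s of engagement across
    # 1200 impressions scores 0.45 s of engagement per impression.
    print(long_click_rate(540.0, 1200))  # 0.45
    ```

    Ads could then be ranked by this score, which is what the predicted quality stands in for.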

    Respiration and Heart Rate at the Surface between Dives in Northern Elephant Seals

    All underwater activities of diving mammals are constrained by the need for surface gas exchange. Our aim was to measure respiratory rate (fb) and heart rate (fh) at the surface between dives in free-ranging northern elephant seals Mirounga angustirostris. We recorded fb and fh acoustically in six translocated juveniles, 1.8-2.4 years old, and three migrating adult males from the rookery at Año Nuevo, California, USA. To each seal, we attached a diving instrument to record the diving pattern, a satellite tag to track movements and location, a digital audio tape recorder or acoustic datalogger with an external hydrophone to record the sounds of respiration and fh at the surface, and a VHF transmitter to facilitate recovery. During surface intervals averaging 2.2±0.4 min, adult males breathed a mean of 32.7±5.4 times at a rate of 15.3±1.8 breaths min⁻¹ (means ± s.d., N=57). Mean fh at the surface was 84±3 beats min⁻¹. The fb of juveniles was 26% faster than that of adult males, averaging 19.2±2.2 breaths min⁻¹ for a mean total of 41.2±5.0 breaths during surface intervals lasting 2.6±0.31 min. Mean fh at the surface was 106±3 beats min⁻¹. fb and fh did not change significantly over the course of surface intervals. Surface fb and fh were not clearly associated with levels of exertion, such as rapid horizontal transit or apparent foraging, or with measures of immediately previous or subsequent diving performance, such as dive duration, dive depth or swimming speed. Together, surface respiration rate and the duration of the preceding dive were significant predictors of surface-interval duration. This implies that elephant seals minimize the surface time spent loading oxygen according to their rates of oxygen uptake and the previous depletion of their stores.

    Electronic Dance Music in Narrative Film

    As a growing number of filmmakers move away from the traditional model of orchestral underscoring in favor of a more contemporary approach to film sound, electronic dance music (EDM) is playing an increasingly important role in current soundtrack practice. With a focus on two specific examples, Tom Tykwer’s Run Lola Run (1998) and Darren Aronofsky’s Pi (1998), this essay discusses the possibilities that such a distinctive aesthetic brings to filmmaking, especially with regard to audiovisual rhythm and sonic integration.

    Bipedal steps in the development of rhythmic behavior in humans

    We contrast two related hypotheses of the evolution of dance. H1: maternal bipedal walking influenced the fetal experience of sound and associated movement patterns. H2: the human transition to bipedal gait produced more isochronous, predictable locomotion sound, resulting in early music-like behavior associated with the acoustic advantages conferred by moving bipedally in pace. The cadence of walking is around 120 beats per minute, similar to the tempo of dance and music. Human walking displays long-term constancies, and dyads often subconsciously synchronize their steps. The major amplitude component of the step is a distinctly produced beat. Human locomotion influences, and interacts with, emotions, and passive listening to music activates brain motor areas. Across dance genres, the footwork is most often performed in time to the musical beat. Brain development is largely shaped by early sensory experience, with hearing developed from week 18 of gestation. Newborns react to sounds, melodies, and rhythmic poems to which they were exposed in utero. If the sound and vibrations produced by the footfalls of a walking mother are transmitted to the fetus in coordination with the cadence of the motion, a connection between isochronous sound and rhythmical movement may develop. The rhythmical sounds of human maternal locomotion differ substantially from those of nonhuman primates, while the maternal heartbeat is likely to have a similar isochronous character across primates, suggesting a relatively more influential role for footfall in the development of rhythmic and musical abilities in humans. Associations among gait, music, and dance are numerous. The apparent absence of musical and rhythmic abilities in nonhuman primates, which display little bipedal locomotion, corroborates the idea that bipedal gait may be linked to the development of rhythmic abilities in humans. Bipedal stimuli in utero may primarily boost ontogenetic development, while the acoustical-advantage hypothesis proposes a mechanism in phylogenetic development.

    Structuring information through gesture and intonation

    Face-to-face communication is multimodal. In unscripted spoken discourse we can observe the interaction of several “semiotic layers”, modalities of information such as syntax, discourse structure, gesture, and intonation. We explore the role of gesture and intonation in structuring and aligning information in spoken discourse through a study of the co-occurrence of pitch accents and gestural apices. Metaphorical spatialization through gesture also plays a role in conveying the contextual relationships between the speaker, the government and other external forces in a naturally-occurring political speech setting

    Auditory training records for the preschool deaf and severely hard of hearing.

    Thesis (Ed.M.)--Boston University. 2 disk recordings. In Audio-Visual Library.
