96 research outputs found

    The Effect of Visual Cues on Auditory Stream Segregation in Musicians and Non-Musicians

    Background: The ability to separate two interleaved melodies is an important factor in music appreciation. This ability is greatly reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues, musical training, or musical context could have an effect on this ability, and potentially improve music appreciation for the hearing impaired. Methods: Musicians (N = 18) and non-musicians (N = 19) were asked to rate the difficulty of segregating a four-note repeating melody from interleaved random distracter notes. Visual cues were provided on half the blocks, and two musical contexts were tested, with the overlap between melody and distracter notes either gradually increasing or decreasing. Conclusions: Visual cues, musical training, and musical context all affected the difficulty of extracting the melody from a background of interleaved random distracter notes. Visual cues were effective in reducing the difficulty of segregating the melody from distracter notes, even in individuals with no musical training. These results are consistent with theories that indicate an important role for central (top-down) processes in auditory streaming mechanisms, and suggest that visual cue…
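
    The interleaving paradigm lends itself to a compact illustration. Below is a minimal Python sketch of how such trials could be generated, assuming MIDI note numbers; the melody, pitch ranges, and the overlap parameterisation are invented for illustration and are not taken from the study.

    # Hypothetical sketch of the interleaved-melody paradigm: a fixed
    # four-note target melody alternates note-for-note with random
    # distracter notes whose pitch range gradually approaches (or recedes
    # from) the melody's range. All values here are illustrative.
    import random

    MELODY = [67, 64, 69, 62]          # fixed four-note repeating melody (MIDI)

    def interleaved_trial(n_cycles, overlap):
        """Return a note sequence alternating melody and distracter notes.

        overlap: 0.0 = distracters well below the melody's range,
                 1.0 = distracters drawn from the melody's own range.
        """
        lo = int(40 + overlap * (min(MELODY) - 40))   # distracter range floor
        hi = int(52 + overlap * (max(MELODY) - 52))   # distracter range ceiling
        seq = []
        for _ in range(n_cycles):
            for note in MELODY:
                seq.append(note)                      # melody note
                seq.append(random.randint(lo, hi))    # interleaved distracter
        return seq

    # Example: one "increasing overlap" block of ten trials
    for i in range(10):
        trial = interleaved_trial(n_cycles=4, overlap=i / 9)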

    Corrigendum to: “Measurement of the tt̄ production cross-section using eμ events with b-tagged jets in pp collisions at √s = 13 TeV with the ATLAS detector” [Phys. Lett. B 761 (2016) 136–157]

    This paper describes a measurement of the inclusive top quark pair production cross-section σ(tt̄) with a data sample of 3.2 fb⁻¹ of proton-proton collisions at a centre-of-mass energy of √s = 13 TeV, collected in 2015 by the ATLAS detector at the LHC. This measurement uses events with an opposite-charge electron-muon pair in the final state. Jets containing b-quarks are tagged using an algorithm based on track impact parameters and reconstructed secondary vertices. The numbers of events with exactly one and exactly two b-tagged jets are counted and used to determine simultaneously σ(tt̄) and the efficiency to reconstruct and b-tag a jet from a top quark decay, thereby minimising the associated systematic uncertainties. The cross-section is measured to be σ(tt̄) = 818 ± 8 (stat) ± 27 (syst) ± 19 (lumi) ± 12 (beam) pb, where the four uncertainties arise from data statistics, experimental and theoretical systematic effects, the integrated luminosity, and the LHC beam energy, giving a total relative uncertainty of 4.4%. The result is consistent with theoretical QCD calculations at next-to-next-to-leading order. A fiducial measurement corresponding to the experimental acceptance of the leptons is also presented.
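
    As a quick arithmetic check, the quoted 4.4% total relative uncertainty follows from adding the four uncertainties in quadrature, which assumes they are independent. A minimal Python sketch:

    # Combining the four quoted uncertainties in quadrature (assuming
    # independence) reproduces the quoted 4.4% total relative uncertainty.
    import math

    sigma_ttbar = 818.0                      # measured cross-section in pb
    uncertainties = {"stat": 8.0, "syst": 27.0, "lumi": 19.0, "beam": 12.0}

    total = math.sqrt(sum(u**2 for u in uncertainties.values()))   # ≈ 36.0 pb
    print(f"total uncertainty: {total:.1f} pb "
          f"({100 * total / sigma_ttbar:.1f}% relative)")          # -> 4.4%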

    Partially Implantable Active Middle Ear Implants


    Armenian Number Test for Measuring the Speech Recognition Threshold in Quiet: Evaluation and Generation of Reference Data

    Introduction: The aim of this study was to evaluate a recently developed Armenian speech audiometric test. It consists of twenty test lists, each containing 20 phonemically balanced, familiar, and homogeneous Armenian multisyllabic numbers. Reference speech recognition thresholds in quiet (SRTs) were determined for native Armenian speakers. Materials and methods: The digitally recorded Armenian speech material was evaluated by 25 native Armenian speakers with normal hearing. Individual speech discrimination functions were measured for all 20 lists. Logistic functions were fitted to the individual speech discrimination functions and to the results averaged across test lists and subjects. The sound pressure level at the inflection point, i.e., the level at 50% speech intelligibility, was defined as the SRT. Results: The mean SRT across all test lists and subjects was 19.3 dB SPL. The individual SRTs varied between subjects over a range of 7.3 dB. Very steep slopes of the individual and averaged speech intelligibility functions were observed at the inflection point, ranging from 16 to 29 %/dB. Neither SRTs nor slopes differed significantly between test lists. Conclusion: The homogeneity of the test lists, and thus of the speech test, was demonstrated. The measured SRTs can serve as reference data for routine clinical measurements and thus improve the validity of clinical procedures for native Armenian speakers.
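
    The fitting step described here is easy to illustrate. Below is a minimal Python sketch, using scipy, of fitting a logistic function to intelligibility scores and reading off the SRT and the slope at the inflection point; the data points are invented for illustration, and only the method follows the abstract.

    # Fit a logistic function to intelligibility scores and extract the
    # SRT (level at 50% intelligibility) and the slope there in %/dB.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(level, srt, slope):
        """Intelligibility in % as a function of presentation level (dB SPL).

        srt: level at 50% intelligibility; slope: steepness at srt in %/dB.
        For p(L) = 100 / (1 + exp(-k(L - srt))) the derivative at the
        inflection point is 100*k/4, hence k = slope/25.
        """
        return 100.0 / (1.0 + np.exp(-(slope / 25.0) * (level - srt)))

    levels = np.array([14.0, 16.0, 18.0, 20.0, 22.0, 24.0])   # dB SPL
    scores = np.array([2.0, 10.0, 35.0, 70.0, 93.0, 99.0])    # % correct (made up)

    (srt, slope), _ = curve_fit(logistic, levels, scores, p0=[19.0, 20.0])
    print(f"SRT = {srt:.1f} dB SPL, slope = {slope:.1f} %/dB")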

    Auditory Brainstem and Cortical Potentials Following Bone-Anchored Hearing Aid Stimulation


    First Experiences with the NiTiFLEX® Stapes Prosthesis


    Development and Evaluation of a Deep Learning Algorithm for Word Recognition from Lip Movements for the German Language

    BACKGROUND: Many people benefit from the additional visual information in a speaker's lip movements, but human lip reading is very error prone. Lip-reading algorithms based on artificial neural networks improve word recognition significantly but have not been available for the German language. MATERIALS AND METHODS: A total of 1806 video clips, each featuring a single German-speaking person, were selected, split into word segments, and assigned to word classes using speech-recognition software. A neural network was trained and validated on 38,391 video segments from 32 speakers, covering 18 polysyllabic, visually distinguishable words. A 3D Convolutional Neural Network model, a Gated Recurrent Units model, and a combination of both (GRUConv) were compared, as were different image crops and color spaces of the videos. Accuracy was determined over 5000 training epochs. RESULTS: Comparison of the color spaces revealed no relevant differences, with correct classification rates ranging from 69% to 72%. Cropping the video to the lips yielded a significantly higher accuracy (70%) than cropping to the speaker's entire face (34%). With the GRUConv model, the maximum accuracies were 87% with known speakers and 63% in validation with unknown speakers. CONCLUSION: The first neural network for lip reading developed for the German language shows very high accuracy, comparable to English-language algorithms. It also works with unknown speakers and can be generalized to more word classes.
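
    The GRUConv idea can be sketched compactly. Below is a minimal PyTorch sketch, assuming a 3D convolutional frontend over lip-crop video followed by a GRU over time and an 18-way word classifier; all layer sizes and pooling choices are illustrative assumptions, not the architecture from the study.

    # Minimal sketch of a 3D-CNN + GRU word classifier for lip-crop video.
    # Layer sizes are illustrative, not taken from the paper.
    import torch
    import torch.nn as nn

    class GRUConv(nn.Module):
        def __init__(self, num_classes=18):
            super().__init__()
            self.frontend = nn.Sequential(            # input: (B, C, T, H, W)
                nn.Conv3d(3, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
                nn.ReLU(),
                nn.MaxPool3d(kernel_size=(1, 2, 2)),  # pool space, keep time
            )
            self.gru = nn.GRU(input_size=32, hidden_size=128, batch_first=True)
            self.classifier = nn.Linear(128, num_classes)

        def forward(self, video):                     # video: (B, 3, T, H, W)
            feats = self.frontend(video)              # (B, 32, T, H', W')
            feats = feats.mean(dim=(3, 4))            # average over space
            feats = feats.transpose(1, 2)             # (B, T, 32) for the GRU
            _, hidden = self.gru(feats)               # final hidden state
            return self.classifier(hidden[-1])        # word-class logits

    # Example: a batch of 4 clips, 30 frames of 64x64 RGB lip crops
    logits = GRUConv()(torch.randn(4, 3, 30, 64, 64))   # -> shape (4, 18)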