Frontal brain asymmetries as effective parameters to assess the quality of audiovisual stimuli perception in adult and young cochlear implant users
How is music perceived by cochlear implant (CI) users? This question arises as "the next step" given the impressive performance these patients achieve in language perception. Furthermore, how can music perception be evaluated beyond self-report ratings, in order to obtain measurable data? To address this question, estimation of the frontal electroencephalographic (EEG) alpha-activity imbalance, acquired through a 19-channel EEG cap, appears to be a suitable instrument to measure the approach/withdrawal (AW index) reaction to external stimuli. Specifically, a greater AW value indicates an increased propensity to approach the stimulus, while a lower one indicates a tendency to withdraw from it. Additionally, because children and adults typically acquire deafness prelingually and postlingually, respectively, the two groups would probably differ in music perception. The aim of the present study was to investigate child and adult CI users, in unilateral (UCI) and bilateral (BCI) implantation conditions, during three experimental situations of music exposure (normal, distorted and mute). Additionally, functional connectivity patterns within cerebral networks were analysed to investigate functioning patterns in the different experimental populations. As a general result, congruency between the patterns of BCI patients and control (CTRL) subjects was seen, characterised by the lowest values for the distorted condition (vs. the normal and mute conditions) in both the AW index and the connectivity analysis. Additionally, the normal and distorted conditions differed significantly in CI and CTRL adults, and in CTRL children, but not in CI children. These results suggest a higher capacity for discrimination of, and approach motivation towards, normal music in CTRL and BCI subjects, but not in UCI patients. Therefore, in music perception CTRL and BCI participants appear more similar to each other than to UCI subjects, as estimated by measurable rather than self-reported parameters.
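The AW index described above is, in essence, a difference of log-transformed alpha-band powers between right and left frontal electrodes. A minimal sketch of that computation follows; the band limits, periodogram estimator and sign convention are illustrative assumptions, not the study's exact pipeline:

```python
import numpy as np

def bandpower(signal, fs, band=(8.0, 12.0)):
    """Alpha-band power via a simple periodogram (|rFFT|^2)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum()

def aw_index(left_frontal, right_frontal, fs):
    """Approach/withdrawal index as frontal alpha imbalance:
    ln(right alpha power) - ln(left alpha power).
    Per the abstract's convention, higher values indicate an
    increased propensity to approach the stimulus."""
    return np.log(bandpower(right_frontal, fs)) - np.log(bandpower(left_frontal, fs))
```

In practice the channel signals would come from frontal electrodes (e.g. Fp1/Fp2 or F3/F4) of the 19-channel cap; here they are just arrays sampled at `fs`.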
An electrode stimulation strategy for cochlear implants based on a model of human hearing
Cochlear implants (CIs) combined with professional rehabilitation have
enabled several hundreds of thousands of hearing-impaired individuals to
re-enter the world of verbal communication. Though very successful, current
CI systems seem to have reached their peak potential. The fact that most
recipients claim not to enjoy listening to music and are not capable of
carrying on a conversation in noisy or reverberant environments shows
that there is still room for improvement. This dissertation presents a new
cochlear implant signal processing strategy called Stimulation based on
Auditory Modeling (SAM), which is completely based on a computational model
of the human peripheral auditory system. SAM has been evaluated through
simplified models of CI listeners, with five cochlear implant users, and
with 27 normal-hearing subjects using an acoustic model of CI perception.
Results have always been compared to those acquired using the Advanced
Combination Encoder (ACE), which is today's most prevalent CI strategy.
First simulations showed that speech intelligibility of CI users fitted
with SAM should be just as good as that of CI listeners fitted with ACE.
Furthermore, it has been shown that SAM provides more accurate binaural
cues, which can potentially enhance the sound source localization ability
of bilaterally fitted implantees. Simulations have also revealed an
increased amount of temporal pitch information provided by SAM. The
subsequent pilot study, which ran smoothly, revealed several benefits of
using SAM. First, there was a significant improvement in pitch
discrimination of pure tones and sung vowels. Second, CI users fitted with
a contralateral hearing aid reported a more natural sound of both speech
and music. Third, all subjects became accustomed to SAM in a very short
period of time (in the order of 10 to 30 minutes), which is particularly
important given that a successful CI strategy change typically takes weeks
to months. An additional test with 27 normal-hearing listeners using an
acoustic model of CI perception delivered further evidence for improved
pitch discrimination ability with SAM as compared to ACE. Although SAM is
not yet a market-ready alternative, it strives to pave the way for future
strategies based on auditory models and it is a promising candidate for
further research and investigation.
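The "acoustic model of CI perception" used with the normal-hearing listeners is not specified in the abstract; the standard way such simulations are built is a noise vocoder, in which the speech envelope of each analysis band modulates band-limited noise. The toy sketch below illustrates that idea only; the band count, edge frequencies and envelope cutoff are assumptions, not the dissertation's actual model:

```python
import numpy as np

def envelope(x, fs, cutoff=50.0):
    """Slowly varying envelope via a moving-average low-pass
    whose window is roughly 1/cutoff seconds long."""
    win = max(1, int(fs / cutoff))
    kernel = np.ones(win) / win
    return np.convolve(x, kernel, mode="same")

def noise_vocoder(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Crude noise vocoder: split the signal into log-spaced bands,
    extract each band's envelope, and use it to modulate noise
    limited to the same band.  Summing the channels yields a
    CI-like simulation for normal-hearing listeners."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    out = np.zeros(len(signal))
    rng = np.random.default_rng(0)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * mask, n=len(signal))
        env = envelope(np.abs(band), fs)            # band envelope
        noise_spec = np.fft.rfft(rng.standard_normal(len(signal)))
        noise = np.fft.irfft(noise_spec * mask, n=len(signal))
        out += env * noise                          # envelope-modulated noise
    return out
```

Because only envelopes survive, temporal fine structure (and with it most pitch information) is discarded, which is exactly the degradation that strategies like SAM try to reduce.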
Computational processing and analysis of ear images
Master's thesis. Biomedical Engineering. Faculdade de Engenharia, Universidade do Porto. 201
Factors that Influence Device Selection by Parents of Pediatric Cochlear Implant Candidates
The purpose of this study was to investigate the factors/variables, and the importance of those factors, during cochlear implant (CI) device selection by parents of recent pediatric CI recipients in the United States. The researcher created an electronic survey and asked audiologists and hearing-related professionals at various hospitals and CI centers across the United States to distribute the survey link to the parents of any of their pediatric CI patients who had received CI surgery within the past two years, under the age of five years. The survey included both Likert-type and open-ended questions regarding the importance of various factors/variables to the parents during their child's CI device selection. Results showed that participants ranked the reported reliability and speech perception performance of the respective manufacturer's CI device as the most important factors. Individually, parents of Cochlear, Ltd. recipients found recommendations from others and the popularity of the company's brand to be most important; based on a limited sample, parents of Advanced Bionics recipients found the CI device's waterproof capabilities to be most important; and, also based on a limited sample, parents of MED-EL recipients found reported speech perception performance to be most important.
Improvement of Speech Perception for Hearing-Impaired Listeners
Hearing impairment is becoming a prevalent health problem, affecting 5% of the world's adult population. Hearing aids and cochlear implants have played an essential role in helping patients for decades, but several open problems still prevent them from providing maximum benefit. For financial and comfort reasons, only one in four patients chooses to use hearing aids, and cochlear implant users struggle to understand speech in noisy environments.
In this dissertation, we address the limitations of hearing aids by proposing a new hearing aid signal processing system named the Open-source Self-fitting Hearing Aids System (OS SF hearing aids). The proposed system adopts state-of-the-art digital signal processing technologies, combined with accurate hearing assessment and a machine-learning-based self-fitting algorithm, to further improve speech perception and comfort for hearing aid users. Informal testing with hearing-impaired listeners showed that results from the proposed system differed by less than 10 dB on average from those obtained with a clinical audiometer. In addition, sixteen-channel filter banks with an adaptive differential microphone array provide up to 6 dB of SNR improvement in noisy environments, and the machine-learning-based self-fitting algorithm provides more suitable hearing aid settings.
To maximize cochlear implant users' speech understanding in noise, the sequential (S) and parallel (P) coding strategies were proposed by integrating high-rate desynchronized pulse trains (DPT) into the continuous interleaved sampling (CIS) strategy. Ten participants with severe hearing loss took part in two rounds of cochlear implant testing. The results showed that the CIS-DPT-S strategy significantly improved speech perception in background noise (by 11%), while the CIS-DPT-P strategy yielded significant improvements in both quiet (7%) and noisy (9%) environments.
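The adaptive differential microphone array mentioned above can be sketched in its simplest form: a first-order, two-microphone design in the style of Elko's adaptive DMA, where two back-to-back cardioids are formed with a one-sample delay and the rear-facing one is adaptively subtracted to steer a null toward the interferer. This is a simplified illustration under the assumption that the mic spacing equals c/fs, not the dissertation's actual sixteen-channel implementation:

```python
import numpy as np

def adaptive_dma(front, back, mu=0.01):
    """First-order adaptive differential microphone array.
    Forward- and backward-facing cardioids are formed with a
    one-sample delay, then the backward cardioid is adaptively
    subtracted (NLMS update on beta) to null the interferer."""
    cf = front[1:] - back[:-1]      # forward-facing cardioid
    cb = back[1:] - front[:-1]      # backward-facing cardioid
    beta = 0.0
    out = np.empty_like(cf)
    for n in range(len(cf)):
        y = cf[n] - beta * cb[n]
        out[n] = y
        # normalized LMS step; beta in [0, 1] keeps a valid pattern
        beta += mu * y * cb[n] / (cb[n] ** 2 + 1e-8)
        beta = min(max(beta, 0.0), 1.0)
    return out
```

A signal arriving from the rear reaches the back microphone one sample before the front one, so the forward cardioid cancels it exactly, while a frontal source passes through (high-pass filtered by the differential structure, which real designs equalize).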
The neurobiology of cortical music representations
Music is undeniably one of humanity's defining traits: it has been documented since the earliest
days of mankind, is present in all known cultures, and is perceivable by nearly all humans alike.
Intrigued by its omnipresence, researchers of all disciplines began investigating music's
mystical relationship and tremendous significance to humankind several hundred years ago.
Only comparatively recently has the immense advancement of neuroscientific methods
enabled the examination of cognitive processes related to the processing of music. Within
this neuroscience of music, the vast majority of research has focused on how music, as an auditory
stimulus, reaches the brain and how it is initially processed, as well as on the tremendous
effects it has on, and can evoke through, the human brain. However, the intermediate steps, that is,
how the human brain transforms incoming signals into a seemingly specialized
and abstract representation of music, have received less attention. Aiming to address this gap,
the present thesis targeted these transformations, their possibly underlying processes,
and how both could potentially be explained through computational models. To this end, four
projects were conducted. The first two comprised the creation and implementation of two
open-source toolboxes to, first, tackle problems inherent to auditory neuroscience, which also affect
neuroscientific music research, and second, provide the basis for further advancements
through standardization and automation. More precisely, this entailed the deteriorated hearing
thresholds and abilities in MRI settings and the aggravated localization and parcellation of the
human auditory cortex as the core structure involved in auditory processing. The third project
focused on the human brain's apparent tuning to music by investigating functional and organizational
principles of the auditory cortex and network with regard to the processing of different
auditory categories of comparable social importance; more precisely, whether the perception of music
evokes a distinct and specialized pattern. In order to provide an in-depth characterization
of the respective patterns, both the segregation and the integration of auditory cortex regions were
examined. In the fourth and final project, a highly multimodal approach that included fMRI,
EEG, behavior, and models of varying complexity was used to evaluate how the aforementioned
music representations are generated along the cortical hierarchy of auditory processing
and how they are influenced by bottom-up and top-down processes. The results of projects 1
and 2 demonstrated the necessity of further advancing MRI settings and of defining
working models of the auditory cortex, as hearing thresholds and abilities seem to vary as
a function of the data acquisition protocol used, and the localization and parcellation of the
human auditory cortex diverge drastically depending on the approach they are based on. Project 3
revealed that the human brain is indeed tuned for music by means of a specialized
representation, as music evoked a bilateral network with a right-hemispheric weighting that was not
observed for the other included categories. The result of this specialized and hierarchical recruitment
of anterior and posterior auditory cortex regions was an abstract music component
that is situated in anterior regions of the superior temporal gyrus and preferentially encodes music,
regardless of whether it is sung or instrumental. The outcomes of project 4 indicated that even though
the entire auditory cortex, again with a right-hemispheric weighting, is involved in the complex
processing of music, anterior regions in particular yielded an abstract representation that varied
substantially over time and could not be sufficiently explained by any of the tested models. The
specialized and abstract properties of this representation were furthermore underlined by the
predictive ability of the tested models, as models based either on high-level features
such as behavioral representations and concepts or on complex acoustic features always outperformed
models based on single or simpler acoustic features. Additionally, factors known to influence
auditory, and thus music, processing, such as musical training, apparently did not alter the
observed representations. Together, the results of the projects suggest that the specialized and
stable cortical representation of music is the outcome of sophisticated transformations of incoming
sound signals along the cortical hierarchy of auditory processing, which generate a music
component in anterior regions of the superior temporal gyrus by means of top-down processes
that interact with acoustic features, guiding their processing.
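Project 4's model comparison, testing how well feature spaces of varying complexity predict cortical responses, is commonly implemented as voxel-wise encoding with regularized regression, scoring each feature space by held-out prediction accuracy. The sketch below is a minimal, hypothetical version of that logic (the thesis's actual models, regularization and validation scheme are not specified here):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression weights: (X'X + aI)^-1 X'y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

def encoding_score(X, y, n_train):
    """Fit on the first n_train samples, report Pearson r between
    predicted and measured responses on the held-out remainder.
    A higher r means the feature space explains the response better."""
    w = ridge_fit(X[:n_train], y[:n_train])
    pred = X[n_train:] @ w
    true = y[n_train:]
    pred = pred - pred.mean()
    true = true - true.mean()
    return float(pred @ true / (np.linalg.norm(pred) * np.linalg.norm(true) + 1e-12))
```

Under this scheme, the thesis's finding that high-level or complex acoustic feature models "always outperformed" simpler ones corresponds to their feature matrices yielding consistently higher held-out correlations for the anterior-STG responses.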
Neurocomputing systems for auditory processing
This thesis studies neural computation models and neuromorphic implementations of the auditory pathway, with applications to cochlear implants and artificial auditory sensory and processing systems.
Very low power analogue computation is addressed through the design of micropower analogue building blocks and an auditory preprocessing module targeted at cochlear implants. The analogue building blocks have been fabricated and tested in a standard Complementary Metal-Oxide-Semiconductor (CMOS) process.
The design of the auditory pre-processing module is based on cochlear signal processing mechanisms and low-power microelectronic design methodologies. Compared to existing preprocessing techniques used in cochlear implants, the proposed design has a wider dynamic range and lower power consumption. Furthermore, it provides both the phase-coding and the place-coding information necessary for enhanced functionality in future cochlear implants.
The thesis presents neural-computation-based approaches to a number of signal-processing problems encountered in cochlear implants, along with techniques that can improve the performance of existing devices. Neural-network-based models for loudness mapping and pattern-recognition-based channel selection strategies are described. Compared with state-of-the-art commercial cochlear implants, the results show that the proposed channel selection model produces superior speech sound quality, and the proposed loudness mapping model consumes a substantially smaller amount of memory.
Aside from the applications in cochlear implants, this thesis describes a biologically plausible computational model of the auditory pathways to the superior colliculus based on current neurophysiological findings. The model encapsulates interaural time difference, interaural spectral difference, the monaural pathway, and auditory space map tuning in the inferior colliculus. A biologically plausible Hebbian-like learning rule is proposed for auditory space neural map tuning, and a reinforcement learning method is used for map alignment with other sensory space maps through activity-independent cues.
The validity of the proposed auditory pathway model has been verified by simulation using synthetic data. Further, a complete biologically inspired auditory simulation system has been implemented in software. The system incorporates models of the external ear and the cochlea, as well as the proposed auditory pathway model, and can mimic the biological auditory sensory system to generate an auditory space map from 3-D sounds. A large number of real 3-D sound signals, including broadband white noise, clicks and speech, were used in the simulation experiments. The effect of auditory space map developmental plasticity was examined by simulating early auditory space map formation and auditory space map alignment with a distorted visual sensory map. Detailed simulation methods, procedures and results are presented.
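The interaural time difference encoded by the pathway model can be illustrated with the classic cross-correlation estimator: the ITD is the lag at which the left- and right-ear signals align best. This is a signal-level toy sketch assuming a pure delay between the ears; the thesis's actual model is a neural (spike-based) implementation, not this shortcut:

```python
import numpy as np

def estimate_itd(left, right, fs, max_itd=8e-4):
    """Estimate the interaural time difference (ITD) as the lag that
    maximizes the cross-correlation of the two ear signals.
    Returns a positive ITD when the sound reaches the left ear first.
    max_itd bounds the search to physiologically plausible lags."""
    max_lag = int(max_itd * fs)

    def corr(lag):
        # correlate left shifted by `lag` samples against right
        if lag >= 0:
            a, b = left[lag:], right[: len(right) - lag]
        else:
            a, b = left[: len(left) + lag], right[-lag:]
        n = min(len(a), len(b))
        return float(np.dot(a[:n], b[:n]))

    best = max(range(-max_lag, max_lag + 1), key=corr)
    return -best / fs
```

The ~800 microsecond default bound corresponds roughly to the maximum acoustic delay across a human head; in the neural model this same quantity emerges from coincidence detection across delay lines rather than an explicit search.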