21 research outputs found

    Comparative analysis of acoustic therapies for tinnitus treatment based on auditory event-related potentials

    Get PDF
    Introduction: So far, Auditory Event-Related Potential (AERP) features have been used to characterize the neural activity of patients with tinnitus. However, these EEG patterns could also be used to evaluate tinnitus evolution. The aim of the present study is to propose an AERP-based methodology to evaluate the effectiveness of four acoustic therapies for tinnitus treatment. Methods: The acoustic therapies were: (1) Tinnitus Retraining Therapy (TRT), (2) Auditory Discrimination Therapy (ADT), (3) Therapy for Enriched Acoustic Environment (TEAE), and (4) Binaural Beats Therapy (BBT). In addition, relaxing music was included as a placebo for both tinnitus sufferers and healthy individuals. To meet this aim, 103 participants were recruited (53% female, 47% male). All participants were treated for 8 weeks with one of these five sounds, which were tuned in accordance with the acoustic features of their tinnitus (where applicable) and their hearing loss. They were monitored electroencephalographically before and after the acoustic therapy, and AERPs were estimated from these recordings. The effect of the acoustic therapies was evaluated by examining the area under the curve of those AERPs, from which two parameters were obtained: (1) amplitude and (2) topographical distribution. Results: After the 8-week treatment, TRT and ADT achieved significant neurophysiological changes over the somatosensory and occipital regions, respectively. On one hand, TRT increased tinnitus perception; on the other hand, ADT redirected attention away from the tinnitus, which in turn diminished its perception. Tinnitus Handicap Inventory outcomes corroborated these neurophysiological findings: 31% of patients in each group reported that TRT increased tinnitus perception, whereas ADT diminished it. Discussion: Tinnitus has been identified as a multifactorial condition highly associated with hearing loss, age, sex, marital status, education, and even employment, although no conclusive evidence has been found yet. In this study, a significant (but weak) correlation was found between tinnitus intensity and right-ear hearing loss, left-ear hearing loss, heart rate, the area under the curve of the AERPs, and the acoustic therapy. This study raises the possibility of assigning acoustic therapies according to the neurophysiological response of each patient.
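    The abstract evaluates therapies by the area under the curve of the AERPs per electrode. The sketch below illustrates one way such an AUC could be computed with the trapezoidal rule; the data layout, window limits, and sampling rate are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (not the authors' code): area under the curve (AUC) of an
# averaged AERP waveform, computed per channel with the trapezoidal rule.
# The window limits and data layout below are illustrative assumptions.
import numpy as np

def aerp_auc(evoked, sfreq, t_start, t_stop, t_min=0.0):
    """evoked: array (n_channels, n_times) of the averaged AERP in volts.
    sfreq: sampling rate in Hz; t_min: time of the first sample in seconds.
    Returns one AUC value per channel over [t_start, t_stop]."""
    times = t_min + np.arange(evoked.shape[1]) / sfreq
    mask = (times >= t_start) & (times <= t_stop)
    # Rectify so positive and negative deflections both contribute to the area.
    return np.trapz(np.abs(evoked[:, mask]), times[mask], axis=1)

# Example: a 30-channel AERP sampled at 250 Hz with a -0.2 s baseline,
# AUC over a 0-500 ms post-stimulus window.
rng = np.random.default_rng(0)
evoked = rng.normal(0, 1e-6, size=(30, 300))
auc = aerp_auc(evoked, sfreq=250, t_start=0.0, t_stop=0.5, t_min=-0.2)
print(auc.shape)  # (30,) -> one value per electrode for the topographical map
```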

    Colloidal Solutions with Silicon Nanocrystals: Structural and Optical Properties

    Get PDF
    In this work, colloidal solutions with silicon nanoparticles were synthesized using different solvents, and their structural, morphological, and optical properties were characterized. X-ray diffraction (XRD) diffractograms of the colloidal solutions show that they are composed of silicon nanocrystals (Si-ncs) with an average size of approximately 3 nm and a preferential (311) crystalline orientation. Atomic force microscopy (AFM) images show that the silicon nanoparticles (Si-nps) form large agglomerates, which is corroborated by the roughness measurements. On the other hand, high-resolution transmission electron microscopy (HRTEM) images show average Si-nc sizes ranging from 1.5 to 10 nm, depending on the solvent used. In addition, different preferential crystalline orientations of the Si-ncs, such as (311), (220), and (111), were observed. A correlation between the optical and structural properties of the colloidal solutions with silicon nanoparticles in the different solvents was established.
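    The abstract does not state how the ~3 nm crystallite size was extracted from the XRD data; a common estimate is the Scherrer equation, D = K·λ / (β·cos θ). The sketch below uses assumed values (Cu Kα radiation, a broad Si (311) reflection) purely to illustrate the calculation.

```python
# Illustrative sketch only: the abstract does not state how the ~3 nm size was
# obtained from XRD. A common estimate is the Scherrer equation,
#   D = K * lambda / (beta * cos(theta)),
# shown here with assumed values (Cu K-alpha radiation, Si (311) peak).
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size D (nm) from peak FWHM (degrees, on the 2-theta scale)."""
    beta = math.radians(fwhm_deg)            # FWHM in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical numbers: a broad Si (311) reflection near 2-theta = 56 degrees.
print(f"D ~ {scherrer_size(fwhm_deg=2.9, two_theta_deg=56.0):.1f} nm")  # ~3 nm
```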

    EEG signals from tinnitus sufferers at identifying their sound tinnitus

    Get PDF
    The present database contains brain activity of subjective tinnitus sufferers recorded while identifying their tinnitus sound. The main objective of this database is to provide spontaneous electroencephalographic (EEG) activity at rest, and evoked EEG activity recorded while tinnitus sufferers attempt to identify their tinnitus sound among 54 tinnitus sound examples. For the database, 37 volunteers were recruited: 15 without tinnitus (Control Group, CG) and 22 with tinnitus (Tinnitus Group, TG). EEG was recorded from 30 channels under two conditions: (1) a basal condition, in which the volunteer remained at rest with eyes open for two minutes; and (2) an active condition, in which the volunteer had to identify his/her sound stimulus by pressing a key. For the active condition, a tinnitus-sound library was generated in accordance with the most typical acoustic properties of tinnitus. The library consisted of ten pure tones (250 Hz, 500 Hz, 1 kHz, 2 kHz, 3 kHz, 3.5 kHz, 4 kHz, 6 kHz, 8 kHz, 10 kHz), White Noise (WN), Narrow-Band High-frequency noise (NBH, 4 kHz–10 kHz), Narrow-Band Medium-frequency noise (NBM, 1 kHz–4 kHz), Narrow-Band Low-frequency noise (NBL, 250 Hz–1 kHz), the ten pure tones combined with WN, the ten pure tones combined with NBH, the ten pure tones combined with NBM, and the ten pure tones combined with NBL. In total, 54 tinnitus sounds were presented to both groups. In the case of the CG, volunteers had to identify a sound at 3.5 kHz. In addition to the EEG data, a csv file with audiometric and psychoacoustic information of the volunteers is provided. For the TG, this information comprises: (1) hearing level, (2) type of tinnitus, (3) tinnitus frequency, (4) tinnitus perception, (5) Hospital Anxiety and Depression Scale (HADS), and (6) Tinnitus Functional Index (TFI). For the CG, the information comprises: (1) hearing level and (2) HADS.
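    The 54-item stimulus library follows directly from the description: 10 pure tones, 4 noises, and 10 tones mixed with each of the 4 noises. The sketch below generates stimuli of these kinds; the sample rate, duration, filter order, and mixing gains are assumptions, not the database's actual generation code.

```python
# Minimal sketch (assumed parameters, not the database's generation code):
# building the kinds of stimuli the library describes -- pure tones, white
# noise, band-limited noise (NBL/NBM/NBH), and tone + noise mixtures.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100          # assumed sample rate (Hz)
DUR = 1.0           # assumed stimulus duration (s)
TONES_HZ = [250, 500, 1000, 2000, 3000, 3500, 4000, 6000, 8000, 10000]

def pure_tone(freq, fs=FS, dur=DUR):
    t = np.arange(int(fs * dur)) / fs
    return np.sin(2 * np.pi * freq * t)

def band_noise(low, high, fs=FS, dur=DUR, order=4):
    noise = np.random.randn(int(fs * dur))
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, noise)

white = np.random.randn(int(FS * DUR))
nbl, nbm, nbh = band_noise(250, 1000), band_noise(1000, 4000), band_noise(4000, 10000)

# 10 tones + 4 noises + 10 tones mixed with each of the 4 noises = 54 stimuli.
library = [pure_tone(f) for f in TONES_HZ] + [white, nbl, nbm, nbh]
for noise in (white, nbl, nbm, nbh):
    library += [0.5 * pure_tone(f) + 0.5 * noise for f in TONES_HZ]
print(len(library))  # 54
```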


    Deep-Learning Method Based on 1D Convolutional Neural Network for Intelligent Fault Diagnosis of Rotating Machines

    No full text
    Fault diagnosis in high-speed machining centers (HSM) is critical in manufacturing systems, since early detection saves a substantial amount of time and money. It is known that 42% of failures in these centers occur in rotating machinery such as spindles, in which the bearings are fundamental for effective operation. Several machine-learning and deep-learning methods are currently available to diagnose such faults. To improve on traditional machine-learning tools, a deep-learning network that works on raw signals, without requiring prior feature analysis, has been proposed. The proposed 1D Convolutional Neural Network (CNN) model adapted well to three types of configuration and three different databases, despite being trained on a set with a smaller number of categories, and it still detected faults at early damage stages. Additionally, its low computational cost shows the Deep-Learning Neural Network's (DLNN) suitability for real-time industrial applications. The proposed structure reached a precision of 99%, real-time processing took around 8 ms per signal, and the standard deviation of repeatability was 0.25%.
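    To make the "raw signal in, fault class out" idea concrete, the sketch below shows a small 1D CNN of this general kind in PyTorch. The layer sizes, number of classes, and signal length are illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative sketch only (architecture details are assumptions, not the
# paper's exact model): a small 1D CNN that maps a raw vibration-signal
# segment directly to a fault class, with no hand-crafted features.
import torch
import torch.nn as nn

class Fault1DCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8, padding=28),  # wide first kernel
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),   # makes the head independent of input length
        )
        self.classifier = nn.Linear(32 * 16, n_classes)

    def forward(self, x):               # x: (batch, 1, signal_len)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = Fault1DCNN()
logits = model(torch.randn(4, 1, 2048))  # 4 raw signal segments of 2048 samples
print(logits.shape)                      # torch.Size([4, 3])
```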

    Mexican Emotional Speech Database Based on Semantic, Frequency, Familiarity, Concreteness, and Cultural Shaping of Affective Prosody

    No full text
    In this paper, the Mexican Emotional Speech Database (MESD), which contains single-word emotional utterances for anger, disgust, fear, happiness, neutral, and sadness in adult (male and female) and child voices, is described. To validate the emotional prosody of the uttered words, a cubic-kernel Support Vector Machine classifier was trained on prosodic, spectral, and voice-quality features for each case study: (1) male adult, (2) female adult, and (3) child. In addition, the cultural, semantic, and linguistic shaping of emotional expression was assessed by statistical analysis. This study was registered at BioMed Central and is part of the implementation of a published study protocol. Mean emotional classification accuracies reached 93.3%, 89.4%, and 83.3% for male, female, and child utterances, respectively. The statistical analysis emphasized the shaping of emotional prosody by semantic and linguistic features, and a cultural variation in emotional expression was highlighted by comparing the MESD with the INTERFACE database for Castilian Spanish. The MESD provides reliable content for linguistic emotional prosody shaped by the Mexican cultural environment. To facilitate further investigations, a corpus controlled for linguistic features and emotional semantics, as well as one containing words repeated across voices and emotions, are provided. The MESD is made freely available.
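    The validation step (a cubic-kernel SVM over acoustic features) can be sketched as follows; the feature matrix, number of utterances, and cross-validation setup are placeholders for illustration, not the MESD pipeline.

```python
# Minimal sketch (feature set and data are placeholders, not the MESD
# pipeline): a cubic-kernel SVM over per-utterance acoustic features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix: one row per utterance, columns such as pitch
# statistics, energy, spectral and voice-quality measures (random stand-ins here).
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 40))        # 240 utterances x 40 features
y = rng.integers(0, 6, size=240)      # 6 emotion labels

clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")  # meaningful only with real features
```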

    Archaeoacoustics around the World: A Literature Review (2016–2022)

    No full text
    Acoustics has been integrated with archaeology to better understand the social and cultural context of past cultures, in particular public events such as rituals or ceremonies, where an appreciation of sound propagation was required to hold the event. Various acoustic techniques have been used to study archaeological sites, providing information about the building characteristics and organizational structures of ancient civilizations. This review presents recent advances in Archaeoacoustics worldwide over the last seven years (2016–2022). For this purpose, one hundred and five articles were identified and categorized into two topics: (1) Archaeoacoustics in places, and (2) Archaeoacoustics of musical instruments and pieces. Within the first topic, three subtopics were identified: (1) measurement and characterization of places, (2) rock art, and (3) simulation, auralization, and virtualization. Regarding the first subtopic, the standard methods for measuring reverberation times in enclosures are generally applied. In the second subtopic, it was determined that the places selected for paintings were areas with long reverberation times. The last subtopic, simulation, auralization, and virtualization, is the area of most remarkable growth and innovation. Finally, this review opens the debate on standardizing a measurement method that would allow results from different investigations to be compared.
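    Since reverberation time is the recurring metric in the surveyed site measurements, the sketch below shows one common way to estimate it: Schroeder backward integration of a measured impulse response, fitting the -5 to -25 dB decay (T20) and extrapolating to RT60. The impulse response here is synthetic; this is not data or code from the review.

```python
# Illustrative sketch (synthetic impulse response, not data from the review):
# estimating reverberation time via Schroeder backward integration, fitting
# the -5 to -25 dB decay (T20) and extrapolating to RT60.
import numpy as np

def rt60_from_ir(ir, fs):
    energy = ir.astype(float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]              # Schroeder energy decay curve
    edc_db = 10 * np.log10(edc / edc.max())
    t = np.arange(len(ir)) / fs
    mask = (edc_db <= -5) & (edc_db >= -25)          # T20 evaluation range
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # decay rate in dB per second
    return -60.0 / slope

# Synthetic exponentially decaying noise standing in for a measured IR.
fs = 16000
t = np.arange(int(2.0 * fs)) / fs
ir = np.random.randn(t.size) * np.exp(-6.9 * t / 1.2)   # ~1.2 s nominal RT60
print(f"estimated RT60 ~ {rt60_from_ir(ir, fs):.2f} s")
```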

    User-centric networking and services: part 2 [Guest Editorial]

    No full text
    User-centric networks (UCNs) can be seen as a recent architectural trend of self-organizing autonomic networks in which the Internet end user cooperates by sharing network services and resources. UCNs are characterized by spontaneous, grassroots deployments of wireless architectures, in which users roam frequently and also own the networking equipment. Common to UCNs is a social behavior that heavily impacts network operation from an end-to-end perspective.