5 research outputs found

    Tough Ion-Conductive Hydrogel with Anti-Dehydration as a Stretchable Strain Sensor for Gesture Recognition

    No full text
    Stretchable hydrogel-based strain sensors have attracted considerable interest for their potential applications in human motion detection, physiological monitoring, and electronic skin. Yet the durability of hydrogel sensors is severely limited by inevitable water evaporation and weak mechanical properties. Herein, we report modified polyampholyte (PA) hydrogels that achieve anti-dehydration and ion conductivity via a simple metal-ion-solution soaking and drying-out strategy. In this strategy, an as-prepared PA hydrogel (with ionic bonds) is dialyzed in FeCl3 solution (Step I) and subsequently annealed at 65 °C (Step II) to reconstruct its network with numerous synergistic ionic and metal–ligand bonds. The resulting hydrogels demonstrate superior mechanical properties, an ultralong anti-dehydration lifespan (>30 days), and high ion conductivity (≈15 S m–1). To understand the reinforcement mechanisms, we evaluate the viscoelastic and elastic contributions to the mechanical properties of the hydrogels via a viscoelastic model. The gel can be further engineered into a stretchable strain sensor that recognizes hand gestures with a well-trained machine learning algorithm. The proposed strategy is straightforward and effective for achieving anti-dehydration, highly ion-conductive, tough hydrogels.
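
    The abstract mentions gesture recognition with a well-trained machine learning algorithm but does not name the model. As a rough illustration of how gestures might be classified from a strain sensor's resistance trace, here is a minimal Python sketch; the windowed statistics, the random-forest classifier, and the synthetic traces are illustrative assumptions, not the authors' pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def window_features(trace, win=100):
        """Slice a 1-D resistance trace into windows and summarize each one."""
        n = len(trace) // win
        windows = trace[: n * win].reshape(n, win)
        # Per-window statistics: mean level, spread, and peak-to-peak swing.
        return np.stack([windows.mean(1), windows.std(1),
                         np.ptp(windows, axis=1)], axis=1)

    # Synthetic stand-in data: 5 gestures x 40 trials of noisy sensor traces.
    rng = np.random.default_rng(0)
    X, y = [], []
    for gesture in range(5):
        for _ in range(40):
            trace = np.sin(np.linspace(0, (gesture + 1) * np.pi, 500))
            trace += 0.1 * rng.standard_normal(500)
            X.append(window_features(trace).ravel())
            y.append(gesture)
    X, y = np.array(X), np.array(y)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")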

    Table_2_Speech decoding using cortical and subcortical electrophysiological signals.PDF

    No full text
    Introduction: Language impairments often result from severe neurological disorders, driving the development of neural prosthetics that use electrophysiological signals to restore comprehensible language. Previous decoding efforts focused primarily on signals from the cerebral cortex, neglecting the potential contribution of subcortical brain structures to speech decoding in brain-computer interfaces.
    Methods: In this study, stereotactic electroencephalography (sEEG) was employed to investigate the role of subcortical structures in speech decoding. Two native Mandarin Chinese speakers, undergoing sEEG implantation for epilepsy treatment, participated. Participants read Chinese text, and the 1–30, 30–70, and 70–150 Hz frequency-band powers of the sEEG signals were extracted as key features. A deep learning model based on long short-term memory (LSTM) assessed the contribution of different brain structures to speech decoding, predicting consonant articulatory place, articulatory manner, and tone within single syllables.
    Results: Cortical signals excelled at articulatory place prediction (86.5% accuracy), while cortical and subcortical signals performed similarly for articulatory manner (51.5% vs. 51.7% accuracy). Subcortical signals provided superior tone prediction (58.3% accuracy). The superior temporal gyrus was consistently relevant in decoding both consonants and tone. Combining cortical and subcortical inputs yielded the highest prediction accuracy, especially for tone.
    Discussion: This study underscores the essential roles of both cortical and subcortical structures in different aspects of speech decoding.
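
    The decoding pipeline in the Methods lends itself to a short illustration: band powers in the three reported ranges are computed per sliding window, and the resulting feature sequence is fed to an LSTM classifier. The sketch below is a hypothetical Python reconstruction; the sampling rate (FS), window lengths, channel count, hidden size, and synthetic data are assumptions, not details from the paper.

    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.signal import welch

    FS = 1000                                  # assumed sEEG sampling rate (Hz)
    BANDS = [(1, 30), (30, 70), (70, 150)]     # the abstract's three bands

    def band_powers(seg):
        """seg: (channels, samples) -> (channels, 3) mean PSD per band."""
        freqs, psd = welch(seg, fs=FS, nperseg=128, axis=-1)
        return np.stack([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
                         for lo, hi in BANDS], axis=-1)

    def epoch_to_sequence(epoch, win=250, step=125):
        """Slide a window over one epoch; flatten channels x bands per step."""
        starts = range(0, epoch.shape[-1] - win + 1, step)
        return np.stack([band_powers(epoch[:, s:s + win]).ravel() for s in starts])

    class SyllableDecoder(nn.Module):
        """LSTM over band-power sequences; one such head would be trained
        per target (articulatory place, manner, or tone)."""
        def __init__(self, n_feats, n_classes, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_feats, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                  # x: (batch, time, n_feats)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])       # decode from the final state

    # Shape check on synthetic data: 8 epochs, 16 channels, 1 s of signal.
    epochs = np.random.randn(8, 16, FS)
    seqs = torch.tensor(np.stack([epoch_to_sequence(e) for e in epochs]),
                        dtype=torch.float32)
    model = SyllableDecoder(n_feats=16 * 3, n_classes=4)   # e.g. 4 Mandarin tones
    print(model(seqs).shape)                               # torch.Size([8, 4])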

    Table_1_Speech decoding using cortical and subcortical electrophysiological signals.PDF

    No full text

    Data_Sheet_1_Speech decoding using cortical and subcortical electrophysiological signals.XLSX

    No full text

    Image_1_Speech decoding using cortical and subcortical electrophysiological signals.TIF

    No full text