7 research outputs found

    ChatGPT's potential role in non-English-speaking outpatient clinic settings

    Researchers recently utilized ChatGPT as a tool for composing clinic letters, highlighting its ability to generate accurate and empathetic communications. Here we demonstrate the potential application of ChatGPT as a medical assistant in Mandarin-Chinese-speaking outpatient clinics, aiming to improve patient satisfaction in high-patient-volume settings. ChatGPT achieved an average score of 72.4% on the Clinical Knowledge section of the Chinese Medical Licensing Examination, ranking within the top 20th percentile, and demonstrated its potential for clinical communication in non-English-speaking environments. Our study suggests that ChatGPT could serve as an interface between physicians and patients in Chinese-speaking outpatient settings, possibly extending to other languages. However, further optimization is required, including training on medical-specific datasets, rigorous testing, privacy compliance, integration with existing systems, user-friendly interfaces, and the development of guidelines for medical professionals. Controlled clinical trials and regulatory approval are necessary before widespread implementation. As the integration of chatbots into medical practice becomes more feasible, rigorous early investigations and pilot studies can help mitigate potential risks.

    On-Site Biolayer Interferometry-Based Biosensing of Carbamazepine in Whole Blood of Epileptic Patients

    On-site monitoring of carbamazepine (CBZ) that allows rapid, sensitive, automatic, and high-throughput detection directly from whole blood is in urgent demand in current clinical practice for precision medicine. Herein, we developed two types of fiber-optic biolayer interferometry (FO-BLI) biosensors, indirect and direct, for on-site CBZ monitoring. The indirect FO-BLI biosensor preincubates samples with monoclonal antibodies against CBZ (MA-CBZ); the mixture then competes with immobilized CBZ for binding to MA-CBZ. The direct FO-BLI biosensor uses sample CBZ and a CBZ-horseradish peroxidase (CBZ-HRP) conjugate to compete directly for binding with immobilized MA-CBZ, followed by a metal precipitate, 3,3′-diaminobenzidine, to amplify the signals. The indirect FO-BLI biosensor detected CBZ within its therapeutic range and could be regenerated up to 12 times with negligible baseline drift, but reported results in 25 min. The direct FO-BLI biosensor achieved CBZ detection in approximately 7.5 min, down to as low as 10 ng/mL, with good accuracy, specificity, and negligible matrix interference using a high-salt buffer. Validation of the direct FO-BLI biosensor using six paired serum and whole-blood samples from epileptic patients showed excellent agreement with ultra-performance liquid chromatography. Being automated and capable of high throughput, the direct FO-BLI biosensor proved more suitable for clinical integration, delivering CBZ values from whole blood within minutes.
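    Competitive assays like these are typically quantified by fitting a calibration curve to standards and then inverting it for unknown samples. The following is a minimal sketch, not the paper's analysis pipeline, of inverting a four-parameter logistic (4PL) curve to convert a measured BLI signal shift into a CBZ concentration; all parameter values are hypothetical placeholders for values that would come from fitting instrument standards.

    ```python
    import numpy as np

    def four_pl(conc, a, b, c, d):
        """Four-parameter logistic: signal as a function of analyte concentration.
        In a competitive assay the signal falls as analyte rises, so the
        zero-analyte asymptote `a` exceeds the saturating asymptote `d`."""
        return d + (a - d) / (1.0 + (conc / c) ** b)

    def invert_four_pl(signal, a, b, c, d):
        """Invert the 4PL to recover concentration from a measured signal."""
        return c * ((a - d) / (signal - d) - 1.0) ** (1.0 / b)

    # Hypothetical calibration parameters (nm shift vs. CBZ in ng/mL).
    a, b, c, d = 2.0, 1.2, 500.0, 0.1

    measured_shift = four_pl(250.0, a, b, c, d)   # simulate a sample at 250 ng/mL
    estimated = invert_four_pl(measured_shift, a, b, c, d)
    print(round(estimated, 1))  # recovers 250.0 ng/mL
    ```

    The round-trip check (simulate a signal, then invert it) is a quick sanity test that the inversion is algebraically consistent with the forward model.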

    Speech decoding using cortical and subcortical electrophysiological signals

    Introduction: Language impairments often result from severe neurological disorders, driving the development of neural prosthetics that utilize electrophysiological signals to restore comprehensible language. Previous decoding efforts focused primarily on signals from the cerebral cortex, neglecting the potential contributions of subcortical brain structures to speech decoding in brain-computer interfaces.
    Methods: In this study, stereotactic electroencephalography (sEEG) was employed to investigate the role of subcortical structures in speech decoding. Two native Mandarin Chinese speakers, undergoing sEEG implantation for epilepsy treatment, participated. Participants read Chinese text, and the 1–30, 30–70, and 70–150 Hz frequency-band powers of the sEEG signals were extracted as key features. A deep learning model based on long short-term memory assessed the contribution of different brain structures to speech decoding, predicting consonant articulatory place, articulatory manner, and tone within single syllables.
    Results: Cortical signals excelled at articulatory-place prediction (86.5% accuracy), while cortical and subcortical signals performed similarly for articulatory manner (51.5% vs. 51.7% accuracy). Subcortical signals provided superior tone prediction (58.3% accuracy). The superior temporal gyrus was consistently relevant in speech decoding for consonants and tone. Combining cortical and subcortical inputs yielded the highest prediction accuracy, especially for tone.
    Discussion: This study underscores the essential roles of both cortical and subcortical structures in different aspects of speech decoding.
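    The band-power features described above can be illustrated with a minimal sketch, not the authors' code: computing the mean spectral power in the 1–30, 30–70, and 70–150 Hz bands for each channel of a windowed sEEG segment via an FFT periodogram. The sampling rate, window length, and channel count here are hypothetical.

    ```python
    import numpy as np

    def band_powers(segment, fs, bands=((1, 30), (30, 70), (70, 150))):
        """Mean spectral power per frequency band for each channel.

        segment: array of shape (n_channels, n_samples)
        fs: sampling rate in Hz
        Returns an array of shape (n_channels, n_bands).
        """
        n_samples = segment.shape[1]
        freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
        # Periodogram estimate of power spectral density per channel.
        psd = np.abs(np.fft.rfft(segment, axis=1)) ** 2 / n_samples
        feats = []
        for lo, hi in bands:
            mask = (freqs >= lo) & (freqs < hi)
            feats.append(psd[:, mask].mean(axis=1))
        return np.stack(feats, axis=1)

    # Hypothetical example: 4 channels, 1 s of data sampled at 1000 Hz.
    rng = np.random.default_rng(0)
    seg = rng.standard_normal((4, 1000))
    feats = band_powers(seg, fs=1000)
    print(feats.shape)  # (4, 3): one 3-band feature vector per channel
    ```

    In a decoding pipeline such as the one described, these per-window feature vectors would be stacked over time and fed to the LSTM-based classifier.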

    Table_2_Speech decoding using cortical and subcortical electrophysiological signals.PDF


    Table_1_Speech decoding using cortical and subcortical electrophysiological signals.PDF


    Image_1_Speech decoding using cortical and subcortical electrophysiological signals.TIF


    Data_Sheet_1_Speech decoding using cortical and subcortical electrophysiological signals.XLSX
