24 research outputs found

    A deep learning approach to estimate the respiratory rate from the photoplethysmogram

    This article describes the methodology used to train and test a Deep Neural Network (DNN) on Photoplethysmography (PPG) data to perform a regression task: estimating the Respiratory Rate (RR). The DNN architecture is based on a model used to infer the heart rate (HR) from noisy PPG signals, and it is optimized for the RR problem using genetic optimization. Two open-access datasets were used in the tests: BIDMC and CapnoBase. With the CapnoBase dataset, the DNN achieved a median error of 1.16 breaths/min, which is comparable with analytical methods in the literature, whose best reported error is 1.1 breaths/min (excluding the 8 % noisiest data). The BIDMC dataset appears to be more challenging: the minimum median error reported in the literature is 2.3 breaths/min (excluding the 6 % noisiest data), whereas the DNN-based approach achieved a median error of 1.52 breaths/min on the whole dataset.
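    The median-error metric reported above can be sketched as follows. The reference and estimated rates here are hypothetical illustrations, not values from the BIDMC or CapnoBase datasets.

    ```python
    import numpy as np

    # Hypothetical RR values (breaths/min): a capnography-derived reference
    # versus a model's estimates. Purely illustrative data.
    rr_reference = np.array([14.0, 16.5, 12.0, 18.0, 15.0])
    rr_estimated = np.array([13.2, 17.0, 13.5, 17.4, 15.9])

    # Median absolute error, the summary statistic quoted in the abstract.
    median_error = np.median(np.abs(rr_estimated - rr_reference))
    print(round(float(median_error), 2))  # prints 0.8
    ```

    The median (rather than the mean) is the usual choice in this literature because it is robust to the large errors produced by occasional noisy PPG windows.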

    Human-machine interfaces based on EMG and EEG applied to robotic systems

    Background: Two different Human-Machine Interfaces (HMIs) were developed, both based on electro-biological signals: one on the EMG signal and the other on the EEG signal. Two major features of these interfaces are their relatively simple data acquisition and processing systems, which require only modest hardware and software resources, making them low-cost solutions both computationally and financially. Both interfaces were applied to robotic systems, and their performances are analyzed here. The EMG-based HMI was tested on a mobile robot, while the EEG-based HMI was tested on both a mobile robot and a robotic manipulator. Results: Experiments with the EMG-based HMI were carried out by eight individuals, each asked to perform ten eye blinks with each eye, in order to test the eye-blink detection algorithm. An average accuracy of about 95% among individuals able to blink both eyes supports the conclusion that the system can be used to command devices. Experiments with EEG consisted of inviting 25 people (some of whom had suffered from meningitis or epilepsy) to test the system. All of them managed to operate the HMI within a single training session, and most learned to use it in less than 15 minutes; the minimum and maximum training times observed were 3 and 50 minutes, respectively. Conclusion: This work is the initial part of a system to help people with neuromotor diseases, including those with severe dysfunctions. The next steps are to convert a commercial wheelchair into an autonomous mobile vehicle, to implement the HMI onboard that wheelchair to assist people with motor diseases, and to explore the potential of EEG signals, making the EEG-based HMI more robust and faster, with the aim of helping individuals with severe motor dysfunctions.
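    The eye-blink detection step described above can be sketched with a simple amplitude-threshold detector on a rectified EMG trace. The signal, sampling rate, and threshold below are hypothetical; the abstract does not specify the paper's actual algorithm.

    ```python
    import numpy as np

    # Simulate 4 s of EMG at an assumed 250 Hz: low-amplitude noise with two
    # short high-amplitude bursts standing in for eye blinks.
    fs = 250
    t = np.arange(0, 4, 1 / fs)
    emg = 0.05 * np.random.default_rng(0).standard_normal(t.size)
    emg[int(1.0 * fs):int(1.1 * fs)] += 1.0  # first simulated blink burst
    emg[int(2.5 * fs):int(2.6 * fs)] += 1.0  # second simulated blink burst

    # Rectify and threshold, then count rising edges (upward threshold
    # crossings) so each burst is counted once, however long it lasts.
    envelope = np.abs(emg)
    above = envelope > 0.5
    blinks = int(np.sum(np.diff(above.astype(int)) == 1))
    print(blinks)  # prints 2
    ```

    A real detector would typically add band-pass filtering, envelope smoothing, and a refractory period between detections, but the rising-edge count above captures the core idea.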


    Evaluating the Influence of Chromatic and Luminance Stimuli on SSVEPs from Behind-the-Ears and Occipital Areas

    This work presents a study of chromatic and luminance stimuli at low-, medium-, and high-frequency stimulation to evoke the steady-state visual evoked potential (SSVEP) in the behind-the-ears area. Twelve healthy subjects participated in this study. The electroencephalogram (EEG) was measured over the occipital (Oz) and left and right temporal (TP9 and TP10) areas. The SSVEP was evaluated in terms of amplitude, signal-to-noise ratio (SNR), and detection accuracy using power spectral density analysis (PSDA), canonical correlation analysis (CCA), and the temporally local multivariate synchronization index (TMSI). Stimuli with suitable color and luminance elicited stronger SSVEPs in the behind-the-ears area, and the SSVEP response depended on both the flickering frequency and the color of the stimulus: the green-red stimulus elicited the highest SSVEP in the medium-frequency range, and the green-blue stimulus elicited the highest SSVEP in the high-frequency range, reaching detection accuracy rates higher than 80%. These findings will aid the development of more comfortable, accurate, and stable brain-computer interfaces (BCIs) with electrodes positioned on the behind-the-ears (hairless) areas.
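    The reference-based detection idea behind CCA-style SSVEP classifiers can be sketched as follows: project a single EEG channel onto sine/cosine references at each candidate flicker frequency and pick the best fit. This is a simplified single-channel stand-in for the CCA and TMSI methods named in the abstract; the sampling rate, frequencies, and signal are hypothetical.

    ```python
    import numpy as np

    # Simulate 2 s of one EEG channel at an assumed 256 Hz: a 12 Hz SSVEP
    # component buried in Gaussian noise.
    fs = 256
    t = np.arange(0, 2, 1 / fs)
    true_f = 12.0
    eeg = (np.sin(2 * np.pi * true_f * t)
           + 0.5 * np.random.default_rng(1).standard_normal(t.size))

    def reference_score(x, f, t):
        # Fraction of signal energy explained by a least-squares fit of
        # sine/cosine references at frequency f.
        refs = np.column_stack([np.sin(2 * np.pi * f * t),
                                np.cos(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(refs, x, rcond=None)
        fitted = refs @ coef
        return np.sum(fitted ** 2) / np.sum(x ** 2)

    # Score each candidate stimulation frequency and pick the best match.
    candidates = [8.0, 10.0, 12.0, 15.0]
    detected = max(candidates, key=lambda f: reference_score(eeg, f, t))
    print(detected)  # prints 12.0
    ```

    Full CCA extends this to multiple EEG channels and reference harmonics at once, which is what makes it effective even for the weaker signals recorded behind the ears.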