122 research outputs found

Keys to developing an engineering research project


    Use of robotics as a learning aid for disabled children

    Severely disabled children have little opportunity for environmental and social exploration and discovery, and this lack of interaction and independence may lead to the idea that they are unable to do anything by themselves. Educational robotics can help these children by giving them a degree of independence in exploring their environment. The system developed in this work allows the child to transmit commands to a robot. Sensors placed on the child's body capture head movements or muscle signals, which are used to command the robot to carry out tasks. With this system, disabled children gain better cognitive development and social interaction, offsetting to some extent the negative effects of their disabilities.

    Editorial


    Assessment of high-frequency steady-state visual evoked potentials from below-the-hairline areas for a brain-computer interface based on Depth-of-Field

    Background and Objective: Recently, a promising Brain-Computer Interface based on Steady-State Visual Evoked Potentials (SSVEP-BCI) was proposed, composed of two stimuli presented together in the center of the subject's field of view but at different depth planes (Depth-of-Field setup). Users could thus select one of them simply by shifting their eye focus. However, in that work, EEG signals were collected through electrodes placed on the occipital and parietal regions (hair-covered areas), which demanded a long preparation time. That work also used low-frequency stimuli, which can produce visual fatigue and increase the risk of photosensitive epileptic seizures. To improve practicality and visual comfort, this work proposes a BCI based on Depth-of-Field using the high-frequency SSVEP response measured from below-the-hairline areas (behind the ears). Methods: Two high-frequency stimuli (31 Hz and 32 Hz) were used in a Depth-of-Field setup to study the SSVEP response from behind the ears (TP9 and TP10). The Multivariate Spectral F-test (MSFT) method was used to verify the elicited response. Afterwards, a BCI was proposed to command a mobile robot in a virtual reality environment. The commands were recognized through the Temporally Local Multivariate Synchronization Index (TMSI) method. Results: The data analysis reveals that the focused stimulus elicits a distinguishable SSVEP response when measured from hairless areas, despite the non-focused stimulus also being present in the field of view. The BCI shows satisfactory results, reaching an average accuracy of 91.6% and an Information Transfer Rate (ITR) of 5.3 bits/min. Conclusion: These findings contribute to the development of safer and more practical BCIs.
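    As a quick sanity check of the reported figures, the standard Wolpaw formula relates accuracy and the number of targets to bits per selection. The sketch below assumes two targets (following the two-stimulus setup); the per-selection time is inferred from the reported numbers, not stated in the abstract.

    ```python
    import math

    def wolpaw_itr_bits_per_selection(n_targets: int, accuracy: float) -> float:
        """Wolpaw ITR in bits per selection for a given target count and accuracy."""
        p = accuracy
        bits = math.log2(n_targets)
        if 0.0 < p < 1.0:
            bits += p * math.log2(p) + (1.0 - p) * math.log2((1.0 - p) / (n_targets - 1))
        return bits

    bits = wolpaw_itr_bits_per_selection(n_targets=2, accuracy=0.916)
    print(f"{bits:.3f} bits per selection")             # ~0.584
    # Selection time implied by the reported 5.3 bits/min:
    print(f"~{60.0 * bits / 5.3:.1f} s per selection")  # ~6.6 s
    ```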

    A deep learning approach to estimating respiratory rate from the photoplethysmogram

    This article describes the methodology used to train and test a Deep Neural Network (DNN) on Photoplethysmography (PPG) data, performing a regression task to estimate the Respiratory Rate (RR). The DNN architecture is based on a model used to infer the heart rate (HR) from noisy PPG signals, optimized for the RR problem using genetic optimization. Two open-access datasets were used in the tests: BIDMC and CapnoBase. With the CapnoBase dataset, the DNN achieved a median error of 1.16 breaths/min, which is comparable with analytical methods in the literature, whose best reported error is 1.1 breaths/min (excluding the 8 % noisiest data). The BIDMC dataset seems to be more challenging: the minimum median error of methods in the literature is 2.3 breaths/min (excluding the 6 % noisiest data), while the DNN-based approach achieved a median error of 1.52 breaths/min on the whole dataset.
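    A minimal sketch of the evaluation metric quoted above (median absolute error in breaths/min), using hypothetical prediction and reference arrays, since the abstract does not include data-loading or model details:

    ```python
    import numpy as np

    # Hypothetical predicted and reference respiratory rates (breaths/min);
    # the real BIDMC / CapnoBase loaders are outside the scope of the abstract.
    rr_pred = np.array([14.8, 16.2, 12.1, 18.5, 15.0])
    rr_ref  = np.array([15.0, 15.9, 13.0, 18.0, 14.2])

    median_abs_error = np.median(np.abs(rr_pred - rr_ref))
    print(f"median absolute error: {median_abs_error:.2f} breaths/min")
    ```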

    Robotic wheelchair controlled through a vision-based interface

    In this work, a vision-based control interface for commanding a robotic wheelchair is presented. The interface estimates the orientation angles of the user's head and translates them into maneuver commands for different devices. The performance of the proposed interface is evaluated both in static experiments and when commanding the robotic wheelchair, where the estimated orientation angles are translated into reference inputs for the wheelchair. A control architecture based on the dynamic model of the wheelchair is implemented to achieve safe navigation. Experimental results of the interface performance and the wheelchair navigation are presented.
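    One way to picture such an interface is a mapping from head angles to velocity references with a deadband to reject involuntary motion. The sketch below is purely illustrative: the paper's actual control law is model-based and is not specified in the abstract, so the gains, limits, and deadband here are assumptions.

    ```python
    def head_to_wheelchair_cmd(pitch_deg: float, yaw_deg: float,
                               deadband_deg: float = 5.0,
                               v_max: float = 0.4, w_max: float = 0.5):
        """Map head-orientation angles to (linear, angular) velocity references.

        Hypothetical mapping: angles inside the deadband produce no motion,
        larger angles scale linearly up to a saturation limit.
        """
        def scaled(angle: float, limit: float) -> float:
            if abs(angle) < deadband_deg:
                return 0.0
            sign = 1.0 if angle > 0 else -1.0
            return sign * limit * min((abs(angle) - deadband_deg) / 30.0, 1.0)

        v = scaled(pitch_deg, v_max)  # nod forward/back -> advance/reverse
        w = scaled(yaw_deg, w_max)    # turn head left/right -> steer
        return v, w

    print(head_to_wheelchair_cmd(pitch_deg=20.0, yaw_deg=-8.0))
    ```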

    A novel human-machine interface for guiding: the NeoASAS Smart Walker

    In an aging society it is extremely important to develop devices that can support and aid the elderly in their daily life. This demands tools that extend independent living and promote improved health. This work proposes a new interface approach integrated into a walker. The interface is based on a joystick and is intended to extract the user's movement intentions. It is designed to be user-friendly, simple and intuitive, efficient and economic, meeting usability requirements and aimed at a commercial implementation, while not being demanding at the user's cognitive level. Preliminary experiments showed the sensitivity of the joystick for extracting navigation commands from the user. These signals presented a higher-frequency component that was attenuated by a Benedict-Bordner g-h filter. The presented methodology effectively cancels the undesired components of the joystick data, allowing the system to extract the user's voluntary navigation commands in real time. Based on this real-time identification of voluntary commands, a control architecture for the robotic walker is being developed in order to obtain stable and safe user-assisted locomotion.
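    The g-h (alpha-beta) filter named above is a standard two-state tracker; under the Benedict-Bordner criterion the two gains are tied by h = g²/(2 − g). A minimal sketch, with the gain and sampling period chosen arbitrarily since the abstract does not report them:

    ```python
    def benedict_bordner_filter(z, dt: float, g: float = 0.3):
        """g-h filter with the Benedict-Bordner relation h = g**2 / (2 - g).

        A sketch of the smoothing step described in the abstract; the paper's
        actual gains and joystick sampling rate are not reported here.
        """
        h = g * g / (2.0 - g)
        x, dx = z[0], 0.0           # initial state: position and rate
        out = []
        for zk in z[1:]:
            x_pred = x + dt * dx    # predict position one step ahead
            r = zk - x_pred         # residual (innovation)
            x = x_pred + g * r      # correct position estimate
            dx = dx + h * r / dt    # correct rate estimate
            out.append(x)
        return out

    # Noisy joystick samples (hypothetical), smoothed at an assumed 100 Hz:
    samples = [0.0, 0.1, 0.4, 0.2, 0.3, 0.9, 0.5, 0.6]
    print(benedict_bordner_filter(samples, dt=0.01))
    ```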

    Online human action recognition based on RWE patterns applied to dynamic windows of invariant moments

    This paper presents a methodology for online human action recognition in video sequences. It takes an efficient approach to using invariant moments as image descriptors, applied to silhouettes obtained from depth maps. A quick comparison between size-4 windows (equivalent to 4 frames) is performed by computing the Mahalanobis distance on the invariant-moment sequence identified as the least sensitive to capture noise and the most stable in the absence of movement. This comparison enables rapid detection of the idle/motion state, which allows dynamically growing intervals (windows) to be captured for further processing, preserving the temporal and frequency properties of the contained signal. Applying the Haar wavelet transform, three decomposition levels are used to compute the Relative Wavelet Energy (RWE) and the Slope Sign Change (SSC), yielding 11-dimensional patterns. In the experiments, 97% of 4 movements captured online were recognized correctly, and 10 movements taken from the MuHAVi-MAS database were recognized with 94.2% accuracy.
    Romero López, D., Frizera Neto, A., Freire Bastos, T. (2014). Revista Iberoamericana de Automática e Informática industrial 11(2), 202-211. https://doi.org/10.1016/j.riai.2013.09.009
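    The RWE feature mentioned above is the fraction of signal energy in each wavelet band. A minimal sketch using PyWavelets, assuming a Haar wavelet and three levels as in the abstract (the dynamic windowing and the SSC half of the 11-D pattern are omitted):

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def relative_wavelet_energy(signal, wavelet: str = "haar", level: int = 3):
        """Energy of each wavelet band divided by the total energy across bands."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA3, cD3, cD2, cD1]
        energies = np.array([float(np.sum(c ** 2)) for c in coeffs])
        return energies / energies.sum()

    # Synthetic example signal: a sinusoid plus noise
    x = np.sin(np.linspace(0.0, 8.0 * np.pi, 256)) + 0.1 * np.random.randn(256)
    print(relative_wavelet_energy(x))  # one value per band, sums to 1
    ```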

    Human-machine interfaces based on EMG and EEG applied to robotic systems

    Background: Two different Human-Machine Interfaces (HMIs) were developed, both based on electro-biological signals: one on the EMG signal and the other on the EEG signal. Two major features of such interfaces are their relatively simple data acquisition and processing systems, which need only modest hardware and software resources, making them low-cost solutions both computationally and financially. Both interfaces were applied to robotic systems, and their performances are analyzed here. The EMG-based HMI was tested on a mobile robot, while the EEG-based HMI was tested on both a mobile robot and a robotic manipulator. Results: Experiments with the EMG-based HMI were carried out by eight individuals, who were asked to perform ten eye blinks with each eye in order to test the eye-blink detection algorithm. An average success rate of about 95%, reached by individuals able to blink each eye, supports the conclusion that the system can be used to command devices. Experiments with EEG consisted of inviting 25 people (some of whom had suffered meningitis or epilepsy) to test the system. All of them managed to operate the HMI after a single training session, most learning to use it in less than 15 minutes; the minimum and maximum training times observed were 3 and 50 minutes, respectively. Conclusion: These works are the initial parts of a system to help people with neuromotor diseases, including those with severe dysfunctions. The next steps are to convert a commercial wheelchair into an autonomous mobile vehicle, to implement the HMI onboard this autonomous wheelchair to assist people with motor diseases, and to explore the potential of EEG signals, making the EEG-based HMI more robust and faster, aiming to use it to help individuals with severe motor dysfunctions.
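    The eye-blink detector is described only at a high level, so the following is a hedged sketch of one common approach: rectify the EMG, smooth it into an envelope, and flag threshold crossings with a refractory period. The window length, threshold, and refractory period below are assumptions, not the paper's values.

    ```python
    import numpy as np

    def detect_blinks(emg, fs: float, threshold: float, refractory_s: float = 0.3):
        """Flag eye blinks as threshold crossings of the rectified EMG envelope."""
        win = max(1, int(0.05 * fs))  # assumed 50 ms moving-average window
        envelope = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")
        refractory = int(refractory_s * fs)
        blinks, last = [], -refractory
        for i, v in enumerate(envelope):
            if v > threshold and i - last >= refractory:
                blinks.append(i / fs)  # blink time in seconds
                last = i
        return blinks

    # Example: synthetic 1 s recording at 1 kHz with two "blink" bursts
    fs = 1000.0
    emg = 0.05 * np.random.randn(int(fs))
    emg[200:240] += 1.0
    emg[700:740] += 1.0
    print(detect_blinks(emg, fs, threshold=0.4))  # ~[0.2, 0.7]
    ```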
