
    A Wearable Textile 3D Gesture Recognition Sensor Based on Screen-Printing Technology

    [EN] Research has produced a range of solutions for recognizing hand gestures in the context of human-machine interfaces (HMI). A successful hand gesture recognition system must address both functionality and usability. The gesture recognition market has evolved from touchpads to touchless sensors, which do not require direct contact. Their textile applications range from medical environments to smart home and automotive settings. In this paper, a textile capacitive touchless sensor is developed using screen-printing technology. Two designs were developed and compared to identify the best configuration, and both performed well. Finally, as a real application, a complete solution integrating the sensor with wireless communications is presented as an interface for a mobile phone.

    The work presented is funded by the Conselleria d'Economia Sostenible, Sectors Productius i Treball, through IVACE (Instituto Valenciano de Competitividad Empresarial) and co-funded by ERDF funding from the EU, Application No. IMAMCI/2019/1. This work was also supported by the Spanish Government/FEDER funds (RTI2018-100910-B-C43) (MINECO/FEDER).

    Ferri Pascual, J.; Llinares Llopis, R.; Moreno Canton, J.; Ibáñez Civera, FJ.; Garcia-Breijo, E. (2019). A Wearable Textile 3D Gesture Recognition Sensor Based on Screen-Printing Technology. Sensors, 19(23), 1-32. https://doi.org/10.3390/s19235068
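    A minimal sketch of how swipe direction can be inferred from a capacitive touchless sensor. It assumes a hypothetical three-electrode layout and a simple peak-timing heuristic; it is not the authors' actual readout or recognition method.

    ```python
    import numpy as np

    def swipe_direction(left, center, right, fs=100.0):
        """Infer swipe direction from three capacitive channel traces.

        left, center, right: 1-D arrays of capacitance samples (arbitrary units).
        fs: sampling rate in Hz. Returns 'left-to-right', 'right-to-left' or None.
        """
        channels = [np.asarray(left), np.asarray(center), np.asarray(right)]
        # Remove each channel's baseline so the peak reflects hand proximity only.
        peaks = [np.argmax(c - np.median(c)) for c in channels]
        t_left, t_center, t_right = [p / fs for p in peaks]
        # A hand sweeping across the sensor excites the electrodes in sequence.
        if t_left < t_center < t_right:
            return "left-to-right"
        if t_right < t_center < t_left:
            return "right-to-left"
        return None  # ambiguous or no clear swipe
    ```

    In a real device the same idea would be applied to the filtered capacitance-to-digital readings streamed over the wireless link, with thresholds tuned to the printed electrode geometry.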

    A real-time human-robot interaction system based on gestures for assistive scenarios

    Natural and intuitive human interaction with robotic systems is key to developing robots that assist people easily and effectively. In this paper, a Human Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture consisting of pointing at an object is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure based on either a verbal or a gestural dialogue is performed. This skill would let the robot pick up an object on behalf of a user who has difficulty doing so themselves. The overall system, composed of a NAO robot, a Wifibot platform, a Kinect™ v2 sensor and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests is completed, allowing correct performance to be assessed in terms of recognition rates, ease of use and response times.
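    A minimal sketch of the Dynamic Time Warping idea behind this kind of recognizer. The per-frame feature vectors and template set are placeholders; the paper's gesture-specific depth-map features are not reproduced here.

    ```python
    import numpy as np

    def dtw_distance(seq_a, seq_b):
        """Classic DTW cost between two sequences of per-frame feature vectors."""
        n, m = len(seq_a), len(seq_b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                # Best alignment ending at (i, j): match, insertion or deletion.
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    def classify_gesture(observed, templates):
        """Return the label of the template sequence closest to the observation.

        templates: dict mapping a gesture label (e.g. 'wave', 'nod') to a
        reference sequence recorded for that gesture.
        """
        return min(templates, key=lambda label: dtw_distance(observed, templates[label]))
    ```

    Because DTW tolerates differences in speed between the observed gesture and the stored template, a small set of reference recordings per gesture is usually enough for this kind of nearest-template classification.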

    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour. By studying hand movements it is possible to recognize gestures, which people often use to communicate information in a non-verbal way. These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively. The processing of body movements, in turn, plays a key role in action recognition and affective computing: the former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements. Both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: the first proposes a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) for the recognition of sign language and semaphoric hand gestures; the second presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; the last provides a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs). The performance of LSTM-RNNs is explored in depth, since their ability to model the long-term contextual information of temporal sequences makes them suitable for analysing body movements. All the modules were tested on challenging datasets that are well known in the state of the art, showing remarkable results compared with current methods in the literature.
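    A minimal PyTorch sketch of a two-branch stacked LSTM over 2D-skeleton sequences. The joint count, layer sizes and the way the two branches are fused are assumptions for illustration, not the thesis architecture.

    ```python
    import torch
    import torch.nn as nn

    class TwoBranchLSTM(nn.Module):
        """Two stacked-LSTM branches (joint positions and frame-to-frame motion)
        whose final hidden states are concatenated for action classification."""

        def __init__(self, n_joints=18, hidden=128, n_classes=10):
            super().__init__()
            feat = n_joints * 2  # (x, y) coordinates per 2D skeleton joint
            self.pose_branch = nn.LSTM(feat, hidden, num_layers=2, batch_first=True)
            self.motion_branch = nn.LSTM(feat, hidden, num_layers=2, batch_first=True)
            self.classifier = nn.Linear(2 * hidden, n_classes)

        def forward(self, skeleton_seq):
            # skeleton_seq: (batch, time, n_joints * 2) flattened joint coordinates.
            motion_seq = skeleton_seq[:, 1:] - skeleton_seq[:, :-1]  # joint velocities
            _, (h_pose, _) = self.pose_branch(skeleton_seq)
            _, (h_motion, _) = self.motion_branch(motion_seq)
            # Use the top layer's last hidden state from each branch.
            fused = torch.cat([h_pose[-1], h_motion[-1]], dim=1)
            return self.classifier(fused)
    ```

    Feeding the network both raw joint positions and their temporal differences is a common way to give the recurrent layers an explicit motion cue while keeping the model small.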