
    Hand tracking and gesture recognition using lensless smart sensors

The Lensless Smart Sensor (LSS) developed by Rambus, Inc. is a low-power, low-cost visual sensing technology that captures information-rich optical data in a tiny form factor using a novel approach to optical sensing. The spiral gratings of the LSS diffractive optics, coupled with sophisticated computational algorithms, allow point tracking down to millimeter-level accuracy. This work focuses on developing novel algorithms for the detection of multiple points, thereby enabling hand tracking and gesture recognition with the LSS. The algorithms are formulated from geometrical and mathematical constraints on the placement of infrared light-emitting diodes (LEDs) on the hand. The developed techniques dynamically adapt the recognition to the position and orientation of the hand and the associated gestures. A detailed accuracy analysis of both hand tracking and gesture classification as a function of LED positions validates the performance of the system. Our results indicate that the technology is a promising approach, as the current state of the art in human motion tracking relies on highly complex and expensive systems. A wearable, low-power, low-cost system could make a significant impact in this field, as it requires neither complex hardware nor additional sensors on the tracked segments.
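
    A minimal sketch of the kind of geometric constraint the abstract alludes to: with a small number of LEDs at calibrated positions on the hand, unlabeled detections can be assigned to LED sites by matching pairwise distances against the rigid template. The brute-force matching and all names below are illustrative assumptions, not the authors' published algorithm.

```python
# Hypothetical sketch: assign unlabeled 3D detections to known LED sites on the
# hand by comparing pairwise distances against a calibrated template.
from itertools import permutations
import numpy as np

def pairwise(points: np.ndarray) -> np.ndarray:
    """Matrix of Euclidean distances between all point pairs."""
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def assign_leds(detections: np.ndarray, template: np.ndarray):
    """Return the labeling of detections whose pairwise-distance pattern
    best matches the rigid LED template (brute force is feasible for ~5 LEDs)."""
    best, best_err = None, np.inf
    ref = pairwise(template)
    for perm in permutations(range(len(detections))):
        err = np.abs(pairwise(detections[list(perm)]) - ref).sum()
        if err < best_err:
            best, best_err = perm, err
    return best, best_err
```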

    3D ranging and tracking using lensless smart sensors

Target tracking has a wide range of applications in the Internet of Things (IoT), such as smart city sensing, indoor tracking, and gesture recognition, and several studies have been conducted in this area. Most published works use either vision sensors or inertial sensors for motion analysis and gesture recognition [1, 2]. Recent works use a combination of depth sensors and inertial sensors for 3D ranging and tracking [3, 4], which often requires complex hardware and complex embedded algorithms. Stereo cameras and Kinect depth sensors used for high-precision ranging are, in turn, expensive and not easy to use. The aim of this work is to track in 3D a hand fitted with a series of precisely positioned IR LEDs using a novel Lensless Smart Sensor (LSS) developed by Rambus, Inc. [5, 6]. In the adopted device, the lens used in conventional cameras is replaced by low-cost, ultra-miniaturized diffraction optics attached directly to the image sensor array. The unique diffraction pattern enables more precise position tracking than is possible with a lens by capturing more information about the scene.
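
    For readers unfamiliar with the ranging step, the sketch below shows standard two-view triangulation under the assumption that each calibrated sensor already yields a unit ray toward the LED; the LSS diffraction-pattern reconstruction itself is not reproduced here, and all names are illustrative.

```python
# Minimal stereo triangulation sketch: recover a 3D point as the least-squares
# midpoint of two (possibly skew) viewing rays.
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of closest approach of two rays (origin o, unit direction d)."""
    # Solve for ray parameters t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|^2
    A = np.stack([d1, -d2], axis=1)              # 3x2 system matrix
    t, *_ = np.linalg.lstsq(A, o2 - o1, rcond=None)
    p1, p2 = o1 + t[0] * d1, o2 + t[1] * d2      # closest points on each ray
    return (p1 + p2) / 2                         # midpoint as the 3D estimate

# Example: sensors 10 cm apart along x, both viewing a point at (0.03, 0, 0.5)
o1, o2 = np.zeros(3), np.array([0.10, 0.0, 0.0])
target = np.array([0.03, 0.0, 0.5])
d1 = (target - o1) / np.linalg.norm(target - o1)
d2 = (target - o2) / np.linalg.norm(target - o2)
print(triangulate(o1, d1, o2, d2))               # ~ [0.03, 0.0, 0.5]
```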

    A machine learning approach for gesture recognition with a lensless smart sensor system

Hand motion tracking traditionally requires systems that are highly complex and expensive in terms of energy and computational demands. A low-power, low-cost system could lead to a revolution in this field, as it would not require complex hardware while representing an infrastructure-less, ultra-miniature (~100 μm [1]) solution. The present paper exploits the Multiple Point Tracking algorithm developed at the Tyndall National Institute as the basis for a series of gesture recognition tasks. The hardware relies on the stereoscopic vision of two novel Lensless Smart Sensors (LSS) combined with IR filters and five hand-held LEDs to be tracked. Tracking common gestures generates a six-gesture dataset, which is then employed to train three machine learning models: k-Nearest Neighbors, Support Vector Machine, and Random Forest. An offline analysis highlights how different LED positions on the hand affect the classification accuracy. The comparison shows that the Random Forest outperforms the other two models, with a classification accuracy of 90-91%.
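
    A hedged sketch of the three-model comparison using scikit-learn. The actual feature extraction from the stereo LSS trajectories is not public, so `X` and `y` below are stand-in placeholders for fixed-length gesture feature vectors and their six class labels.

```python
# Illustrative comparison of the three classifiers named above on placeholder
# gesture features; the data shapes and hyperparameters are assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 30))     # placeholder: 600 gesture samples, 30 features
y = rng.integers(0, 6, size=600)   # placeholder: six gesture classes

models = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validation
    print(f"{name}: {scores.mean():.2%} +/- {scores.std():.2%}")
```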

    Point tracking with lensless smart sensors

This paper presents the applicability of a novel Lensless Smart Sensor (LSS) developed by Rambus, Inc. to 3D positioning and tracking. The unique diffraction grating attached to the sensor enables more precise position tracking than is possible with lenses by capturing more information about the scene. In this work, the sensor characteristics are assessed and an accuracy analysis is carried out for the single-point tracking scenario.
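
    As a simple illustration of what such an accuracy analysis computes, the sketch below compares estimated 3D positions against ground truth and reports per-axis and overall errors. The arrays and the ~1 mm noise level are placeholders; the paper's actual measurement setup is assumed.

```python
# Accuracy-analysis sketch for single-point tracking: per-axis RMSE and mean
# Euclidean error between estimates and ground truth, in the input units.
import numpy as np

def tracking_errors(estimated: np.ndarray, truth: np.ndarray) -> dict:
    residuals = estimated - truth                        # (N, 3) residual vectors
    return {
        "rmse_xyz": np.sqrt((residuals ** 2).mean(axis=0)),
        "mean_euclidean": np.linalg.norm(residuals, axis=1).mean(),
    }

rng = np.random.default_rng(1)
truth = rng.uniform(-0.5, 0.5, size=(100, 3))            # placeholder trajectory (m)
estimated = truth + rng.normal(scale=1e-3, size=truth.shape)  # ~1 mm noise
print(tracking_errors(estimated, truth))
```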

    A Wearable Textile 3D Gesture Recognition Sensor Based on Screen-Printing Technology

Research has developed various solutions that allow computers to recognize hand gestures in the context of human-machine interfaces (HMI). The design of a successful hand gesture recognition system must address both functionality and usability. The gesture recognition market has evolved from touchpads to touchless sensors, which need no direct contact. Their application in textiles ranges from medical environments to smart home applications and the automotive industry. In this paper, a textile capacitive touchless sensor is developed using screen-printing technology. Two different designs were produced to determine the best configuration, with good results in both cases. Finally, as a real application, a complete solution integrating the sensor with wireless communications is presented for use as an interface for a mobile phone.
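
    As a rough illustration of how a touchless capacitive pad of this kind might be read out, the sketch below infers a swipe direction from the order in which a few electrode channels peak. The four-electrode layout, sampling rate, and decision rule are assumptions for illustration, not the sensor design described above.

```python
# Hypothetical readout sketch: classify a swipe from four electrode time
# series (capacitance deltas) by comparing when each channel peaks.
import numpy as np

def swipe_direction(left, right, top, bottom, fs=100.0):
    peaks = {name: np.argmax(ch) / fs for name, ch in
             [("left", left), ("right", right), ("top", top), ("bottom", bottom)]}
    if abs(peaks["left"] - peaks["right"]) > abs(peaks["top"] - peaks["bottom"]):
        return "left-to-right" if peaks["left"] < peaks["right"] else "right-to-left"
    return "top-to-bottom" if peaks["top"] < peaks["bottom"] else "bottom-to-top"

t = np.arange(200)
left  = np.exp(-((t - 60) ** 2) / 200.0)    # left electrode peaks first
right = np.exp(-((t - 120) ** 2) / 200.0)   # then the right electrode
flat  = np.zeros_like(t, dtype=float)
print(swipe_direction(left, right, flat, flat))   # -> left-to-right
```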

    Wearable Human Computer Interface for control within immersive VAMR gaming environments using data glove and hand gestures

The continuous advances in the state of the art in Virtual, Augmented, and Mixed Reality (VAMR) technology are important in many application spaces, including gaming, entertainment, and media technologies. VAMR is part of the broader Human-Computer Interface (HCI) area, focused on providing an unprecedentedly immersive way of interacting with computers. These new ways of interacting with computers can leverage emerging user input devices. In this paper, we present a demonstrator system that shows how our wearable Virtual Reality (VR) Glove can be used with an off-the-shelf head-mounted VR device, the RealWear HMT-1™. We show how the smart data capture glove can serve as an effective input device to the HMT-1™ for controlling various targets, such as virtual controls, using hand gesture recognition algorithms. We describe our fully functional proof-of-concept prototype, along with the complete system architecture and its ability to scale by incorporating other devices.
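
    A toy sketch of the gesture-to-command layer such a demonstrator needs: recognized gesture labels are dispatched to control actions on the head-mounted host. The gesture names and actions are invented placeholders; the paper's actual glove-to-HMT-1 protocol is not described here.

```python
# Placeholder dispatch table mapping glove gesture labels to host actions.
from typing import Callable, Dict

def make_dispatcher() -> Dict[str, Callable[[], None]]:
    return {
        "swipe_left":  lambda: print("previous virtual control"),
        "swipe_right": lambda: print("next virtual control"),
        "pinch":       lambda: print("select / toggle control"),
        "open_palm":   lambda: print("return to home screen"),
    }

dispatch = make_dispatcher()
for gesture in ["swipe_right", "pinch"]:       # stream from the glove classifier
    dispatch.get(gesture, lambda: None)()      # ignore unrecognized labels
```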

    A robust method for VR-based hand gesture recognition using density-based CNN

Many VR-based applications for medical purposes have been developed to help patients with reduced mobility caused by accidents, diseases, or other injuries carry out physical therapy efficiently. VR-based applications are considered an effective aid to individual physical therapy because of their low-cost equipment, their flexibility in time and space, and the reduced need for assistance from a physical therapist. A challenge in developing a VR-based physical therapy application is understanding body part movement accurately and quickly. We propose a robust pipeline for understanding hand motion accurately. We retrieve our data from motion sensors such as the HTC Vive and the Leap Motion. Given a sequence of palm positions, we represent the data as binary 2D images of the gesture shape. Our dataset consists of 14 kinds of hand gestures recommended by a physiotherapist. Given 33 3D points mapped into binary images as input, we train our proposed density-based CNN. Our CNN model is designed around the characteristics of this input: many blank blocks of pixels and single-pixel-thick shapes in binary images. Pyramid kernel sizes in the feature extraction stage, together with a classification layer using softmax as the loss function, yield 97.7% accuracy.
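
    An illustrative PyTorch sketch of a small CNN with "pyramid" (progressively shrinking) kernel sizes over sparse binary gesture images, in the spirit of the approach above. The layer widths, the 64x64 input, and the 14-class head are assumptions, not the published architecture.

```python
# Sketch of a pyramid-kernel CNN for binary gesture images (assumed 64x64).
import torch
import torch.nn as nn

class PyramidCNN(nn.Module):
    def __init__(self, num_classes: int = 14):
        super().__init__()
        # Kernel sizes shrink 7 -> 5 -> 3 as spatial resolution halves.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)  # 64x64 in -> 8x8 maps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# "Softmax as the loss function" corresponds to cross-entropy over the logits.
model = PyramidCNN()
logits = model(torch.zeros(1, 1, 64, 64))    # one binary 64x64 gesture image
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0]))
```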

    Roadmap on 3D integral imaging: Sensing, processing, and display

This Roadmap article on three-dimensional integral imaging provides an overview of research activities in the field. The article discusses various aspects of the field, including sensing of 3D scenes, processing of the captured information, and 3D display and visualization of information. The paper consists of a series of 15 sections in which experts present various aspects of sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section represents its author's view of the progress, potential, vision, and challenging issues in this field.