7 research outputs found

    3D ranging and tracking using lensless smart sensors

    Target tracking has a wide range of applications in the Internet of Things (IoT), such as smart city sensing, indoor tracking, and gesture recognition. Several studies have been conducted in this area: most published works use either vision sensors or inertial sensors for motion analysis and gesture recognition [1, 2], while recent works combine depth sensors and inertial sensors for 3D ranging and tracking [3, 4]. These approaches often require complex hardware and complex embedded algorithms, and the stereo cameras or Kinect depth sensors used for high-precision ranging are expensive and not easy to use. The aim of this work is to track in 3D a hand fitted with a series of precisely positioned IR LEDs using a novel Lensless Smart Sensor (LSS) developed by Rambus, Inc. [5, 6]. In the adopted device, the lens used in conventional cameras is replaced by low-cost, ultra-miniaturized diffraction optics attached directly to the image sensor array. The resulting unique diffraction pattern captures more information about the scene, enabling more precise position tracking than is possible with a lens.

    Point tracking with lensless smart sensors

    This paper presents the applicability of a novel Lensless Smart Sensor (LSS) developed by Rambus, Inc. to 3D positioning and tracking. The unique diffraction pattern produced by the grating attached to the sensor enables more precise position tracking than is possible with lenses by capturing more information about the scene. In this work, the sensor characteristics are assessed and an accuracy analysis is carried out for the single-point tracking scenario.

    Hand tracking and gesture recognition using lensless smart sensors

    The Lensless Smart Sensor (LSS) developed by Rambus, Inc. is a low-power, low-cost visual sensing technology that captures information-rich optical data in a tiny form factor using a novel approach to optical sensing. The spiral diffractive gratings of the LSS, coupled with sophisticated computational algorithms, allow point tracking down to millimeter-level accuracy. This work is focused on developing novel algorithms for the detection of multiple points, thereby enabling hand tracking and gesture recognition using the LSS. The algorithms are formulated based on geometrical and mathematical constraints around the placement of infrared light-emitting diodes (LEDs) on the hand. The developed techniques dynamically adapt to the orientation of the hand when recognizing it and the associated gestures. A detailed accuracy analysis for both hand tracking and gesture classification as a function of LED positions is conducted to validate the performance of the system. Our results indicate that the technology is a promising approach, as the current state of the art in human motion tracking requires highly complex and expensive systems. A wearable, low-power, low-cost system could make a significant impact in this field, as it does not require complex hardware or additional sensors on the tracked segments.

    A machine learning approach for gesture recognition with a lensless smart sensor system

    Hand motion tracking traditionally requires systems that are highly complex and expensive in terms of energy and computational demands. A low-power, low-cost system could lead to a revolution in this field, as it would not require complex hardware while representing an infrastructure-less, ultra-miniature (~100 μm [1]) solution. The present paper exploits the Multiple Point Tracking algorithm developed at the Tyndall National Institute as the basic algorithm to perform a series of gesture recognition tasks. The hardware relies on the stereoscopic combination of two novel Lensless Smart Sensors (LSS) with IR filters and five LEDs on the hand to be tracked. Tracking common gestures generates a six-gesture dataset, which is then employed to train three Machine Learning models: k-Nearest Neighbors, Support Vector Machine, and Random Forest. An offline analysis highlights how different LED positions on the hand affect the classification accuracy. The comparison shows that the Random Forest outperforms the other two models, with a classification accuracy of 90–91%.
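    The three-model comparison described in this abstract can be sketched as follows. This is a minimal illustration only: the real LSS recordings and feature extraction are not public, so the synthetic dataset below (standing in for per-frame coordinates of five LEDs) and all hyperparameters are assumptions, not the paper's setup.

    ```python
    # Hypothetical sketch: compare the three classifiers named in the abstract
    # (k-NN, SVM, Random Forest) on a synthetic stand-in for the six-gesture
    # dataset. Dataset shape and model settings are illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    # Synthetic features standing in for tracked LED coordinates
    # (e.g. 5 LEDs x 3D position = 15 features), six gesture classes.
    X, y = make_classification(n_samples=600, n_features=15, n_informative=10,
                               n_classes=6, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    models = {
        "k-NN": KNeighborsClassifier(n_neighbors=5),
        "SVM": SVC(kernel="rbf"),
        "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    }
    # Fit each model and report held-out accuracy, mirroring the offline
    # comparison performed in the paper.
    scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
              for name, m in models.items()}
    for name, acc in scores.items():
        print(f"{name}: {acc:.2f}")
    ```

    On the real gesture features the paper reports the Random Forest winning; on synthetic data the ranking may differ, which is why the sketch prints all three scores rather than asserting a winner.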

    Preliminary Classification of Selected Farmland Habitats in Ireland Using Deep Neural Networks

    Ireland has a wide variety of farmlands, including arable fields, grassland, hedgerows, streams, lakes, rivers, and native woodlands. Traditional methods of habitat identification rely on field surveys, which are resource-intensive; therefore, there is a strong need for digital methods to improve the speed and efficiency of identifying and differentiating farmland habitats. This is challenging because of the large number of subcategories with nearly indistinguishable features within the habitat classes. Heterogeneity among sites within the same habitat class is another problem. Therefore, this research work presents a preliminary technique for accurate farmland classification using stacked ensembles of deep convolutional neural networks (DNNs). The proposed approach has been validated on a high-resolution dataset collected using drones. The image samples were manually labelled by experts in the area before being provided to the DNNs for training purposes. Three pre-trained DNNs, customized using the transfer learning approach, are used as the base learners. The predicted features derived from the base learners were then used to train a DNN-based meta-learner to achieve high classification rates. We analyse the obtained results in terms of convergence rate, confusion matrices, and ROC curves. This is a preliminary work, and further research is needed to establish a standard technique.
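    The stacking scheme this abstract describes (base learners whose predictions train a meta-learner) can be illustrated in a few lines. The sketch below is an assumption-laden stand-in: the paper uses three pre-trained convolutional DNNs on drone imagery, whereas here small MLPs and synthetic tabular data substitute for them purely to show the data flow.

    ```python
    # Illustrative sketch of stacked-ensemble learning only. The base models,
    # meta-learner, and dataset below are stand-ins (assumptions), not the
    # paper's transfer-learned CNNs or drone imagery.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=400, n_features=20, n_informative=12,
                               n_classes=4, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=1)

    # Three stand-ins for the pre-trained base DNNs.
    bases = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                           random_state=s) for s in range(3)]
    for b in bases:
        b.fit(X_tr, y_tr)

    # The meta-learner trains on the concatenated class-probability outputs
    # of the base learners ("predicted features" in the abstract).
    meta_tr = np.hstack([b.predict_proba(X_tr) for b in bases])
    meta_te = np.hstack([b.predict_proba(X_te) for b in bases])
    meta = LogisticRegression(max_iter=1000).fit(meta_tr, y_tr)
    print(f"stacked accuracy: {meta.score(meta_te, y_te):.2f}")
    ```

    Note that for simplicity the meta-learner here trains on the base learners' in-sample predictions; a production stacking setup would use cross-validated base predictions to avoid leaking training labels into the meta-features.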