
    Data fusion for human motion tracking with multimodal sensing

    Multimodal sensor fusion is a common approach in the design of many motion tracking systems. It is based on using more than one sensor modality to measure different aspects of a phenomenon and capture more information about it than would otherwise be available from a single sensor. Multimodal sensor fusion algorithms often leverage the complementary nature of the different modalities to compensate for shortcomings of the individual sensor modalities. This approach is particularly suitable for low-cost and highly miniaturised wearable human motion tracking systems that are expected to perform their function with limited resources at their disposal (energy, processing power, etc.). Opto-inertial motion trackers are some of the most commonly used approaches in this context. These trackers fuse the sensor data from vision and Inertial Measurement Unit (IMU) sensors to determine the 3-Dimensional (3-D) pose of the given body part, i.e. its position and orientation. The continuous advances in the State-Of-the-Art (SOA) in camera miniaturisation and efficient point detection algorithms, along with more robust IMUs and increasing processing power in a shrinking form factor, make it increasingly feasible to develop a low-cost, low-power, and highly miniaturised wearable smart sensor human motion tracking system that incorporates these two sensor modalities. In this thesis, a multimodal human motion tracking system is presented that builds on these developments. The proposed system consists of a wearable smart sensor system, referred to as the Wearable Platform (WP), which incorporates the two sensor modalities, i.e. a monocular camera (optical) and an IMU (motion). The WP operates in conjunction with two optical points of reference embedded in the ambient environment to enable positional tracking in that environment. In addition, a novel multimodal sensor fusion algorithm is proposed which uses the complementary nature of the vision and IMU sensors, in conjunction with the two points of reference in the ambient environment, to determine the 3-D pose of the WP in a novel and computationally efficient way. To this end, the WP uses a low-resolution camera to track two points of reference, specifically two Infrared (IR) LEDs embedded in the wall. The geometry that is formed between the WP and the IR LEDs, when complemented by the angular rotation measured by the IMU, simplifies the mathematical formulations involved in computing the 3-D pose, making them compatible with the resource-constrained microprocessors used in such wearable systems. Furthermore, the WP is coupled with the two IR LEDs via a radio link to control their intensity in real time. This enables the novel subpixel point detection algorithm to maintain its highest accuracy, thus increasing the overall precision of the pose detection algorithm. The resulting 3-D pose can be used as an input to a higher-level system for further use. One of the potential uses for the proposed system is in sports applications. For instance, it could be particularly useful for tracking the correctness of execution of certain exercises in Strength Training (ST) routines, such as the barbell squat. Thus, it can be used to assist professional ST coaches in remotely tracking the progress of their clients and, most importantly, ensure a minimum risk of injury through real-time feedback. Despite its numerous benefits, the modern lifestyle has a negative impact on our health due to the increasingly sedentary behaviour it involves.
The human body has evolved to be physically active. Thus, these lifestyle changes need to be offset by the addition of regular physical activity to everyday life, of which ST is an important element. This work describes the following novel contributions:
• A new multimodal sensor fusion algorithm for 3-D pose detection with reduced mathematical complexity for resource-constrained platforms
• A novel system architecture for efficient 3-D pose detection for human motion tracking applications
• A new subpixel point detection algorithm for efficient and precise point detection at reduced camera resolution
• A new reference point estimation algorithm for finding locations of reference points used in validating subpixel point detection algorithms
• A novel proof-of-concept demonstrator prototype that implements the proposed system architecture and multimodal sensor fusion algorithm.
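
To make the geometric idea above concrete, the following is a minimal sketch of how a 3-D position could be recovered from two known reference points and an IMU-supplied orientation, assuming a simple pinhole camera model. The function names, calibration parameters, and least-squares formulation are illustrative assumptions, not the thesis's actual simplified formulation.

```python
import numpy as np

def bearing_vectors(px_points, f, cx, cy):
    """Unit direction vectors in the camera frame for a list of pixel
    coordinates, assuming a pinhole model with focal length f (in pixels)
    and principal point (cx, cy)."""
    dirs = []
    for (u, v) in px_points:
        d = np.array([u - cx, v - cy, f], dtype=float)
        dirs.append(d / np.linalg.norm(d))
    return dirs

def pose_from_two_leds(led_world, px_points, R_imu, f, cx, cy):
    """Estimate the 3-D camera position from two known reference points.

    led_world : (2, 3) world coordinates of the two IR LEDs
    px_points : [(u1, v1), (u2, v2)] detected pixel coordinates of the LEDs
    R_imu     : (3, 3) camera-to-world rotation supplied by the IMU

    Each LED satisfies P_i = C + r_i * d_i, with d_i the world-frame bearing,
    so the unknowns (C, r_1, r_2) form a small linear least-squares problem.
    """
    d = [R_imu @ v for v in bearing_vectors(px_points, f, cx, cy)]
    A = np.zeros((6, 5))
    b = np.zeros(6)
    for i in range(2):
        A[3 * i:3 * i + 3, 0:3] = np.eye(3)   # coefficients of C
        A[3 * i:3 * i + 3, 3 + i] = -d[i]     # coefficient of r_i
        b[3 * i:3 * i + 3] = led_world[i]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3]  # estimated WP position; orientation comes from the IMU
```

With the orientation fixed by the IMU, the remaining problem is linear and very small, which is what makes this style of fusion attractive for resource-constrained microprocessors.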

    Sub-pixel point detection algorithm for point tracking with low-power wearable camera systems: a simplified linear interpolation

    With the continuous developments in vision sensor technology, highly miniaturized, low-power, and wearable vision sensing is becoming a reality. Several wearable vision applications exist which involve point tracking. The ability to efficiently detect points at a sub-pixel level can be beneficial, as the accuracy of point detection is no longer limited by the resolution of the vision sensor. In this work, we propose a novel Simplified Linear Interpolation (SLI) algorithm that achieves high computational efficiency and outperforms existing algorithms in terms of accuracy under certain conditions. We present the principles underlying our algorithm and evaluate it in a series of test scenarios. Its performance is finally compared to similar algorithms currently available in the literature.
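
The abstract does not reproduce the SLI formulation itself; purely as a point of comparison, the sketch below implements an intensity-weighted centroid, one of the standard baselines for subpixel point detection. The window size and the assumption that the point lies away from the image border are illustrative.

```python
import numpy as np

def subpixel_centroid(img, window=3):
    """Generic subpixel point detection baseline (not the SLI algorithm):
    intensity-weighted centroid in a small window around the brightest pixel.
    Assumes the point is not at the image border and window is odd."""
    r, c = np.unravel_index(np.argmax(img), img.shape)
    h = window // 2
    patch = img[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = patch.sum()
    dy = (ys * patch).sum() / total - h
    dx = (xs * patch).sum() / total - h
    return r + dy, c + dx
```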

    Reference point estimation technique for direct validation of subpixel point detection algorithms for Internet of Things

    Subpixel point detection algorithms are important in many application spaces, especially those where the limitations of the imaging device's resolution need to be overcome. Such algorithms help decrease the overall requirements of the given system; many factors, such as power consumption and cost, are critical in the context of the Internet of Things. While these algorithms do offer an improvement in the precision of point detection, it is often difficult to determine their precision directly. The main reason is the lack of a reference point against which the outputs of subpixel point detection methods can be compared. In this work, we present a novel method for finding the reference point needed to validate subpixel point detection algorithms directly. Its operation is demonstrated on an experimentally obtained sample dataset.
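
The paper's reference point estimation technique is only summarised above; purely to illustrate what a validation reference looks like, the sketch below takes the mean of repeated detections of a static target as the reference location. This is an assumed stand-in, not the published method.

```python
import numpy as np

def reference_point(detections):
    """Estimate a validation reference from repeated detections of a static
    target. detections is an (N, 2) array of (x, y) subpixel detections;
    returns the mean location and its standard error, which can then serve
    as the reference against which individual detections are scored."""
    d = np.asarray(detections, dtype=float)
    mean = d.mean(axis=0)
    stderr = d.std(axis=0, ddof=1) / np.sqrt(len(d))
    return mean, stderr
```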

    3D ranging and tracking using lensless smart sensors

    Target tracking has a wide range of applications in the Internet of Things (IoT), such as smart city sensors, indoor tracking, and gesture recognition. Several studies have been conducted in this area. Most of the published works use either vision sensors or inertial sensors for motion analysis and gesture recognition [1, 2]. Recent works use a combination of depth sensors and inertial sensors for 3D ranging and tracking [3, 4], which often requires complex hardware and complex embedded algorithms. Stereo cameras and Kinect depth sensors, which are used for high-precision ranging, are expensive and not easy to use. The aim of this work is to track in 3D a hand fitted with a series of precisely positioned IR LEDs using a novel Lensless Smart Sensor (LSS) developed by Rambus, Inc. [5, 6]. In the adopted device, the lens used in conventional cameras is replaced by low-cost, ultra-miniaturized diffraction optics attached directly to the image sensor array. The unique diffraction pattern enables more precise position tracking than is possible with a lens by capturing more information about the scene.
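
Once an LED has been detected by two sensors with known poses, its 3-D position can, in the simplest case, be recovered by triangulating the two bearing rays. The midpoint construction below is a textbook illustration of that ranging step and is not specific to the LSS processing pipeline.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation of a point seen by two sensors.

    c1, c2 : (3,) sensor positions in the world frame
    d1, d2 : (3,) bearing vectors towards the target (need not be unit length)
    Returns the midpoint of the shortest segment joining the two rays.
    Assumes the rays are not parallel."""
    c1, d1, c2, d2 = (np.asarray(v, dtype=float) for v in (c1, d1, c2, d2))
    w0 = c1 - c2
    a, b, cc = d1 @ d1, d1 @ d2, d2 @ d2
    dd, e = d1 @ w0, d2 @ w0
    denom = a * cc - b * b
    s = (b * e - cc * dd) / denom  # parameter along ray 1
    t = (a * e - b * dd) / denom   # parameter along ray 2
    return (c1 + s * d1 + c2 + t * d2) / 2.0
```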

    Multimodal sensor fusion for low-power wearable human motion tracking systems in sports applications

    This paper presents a prototype human motion tracking system for wearable sports applications. It is particularly applicable to tracking human motion during the execution of certain strength training exercises, such as the barbell squat, where an inappropriate technique could result in an injury. The key novelty of the proposed system is twofold. Firstly, it is an inside-out, multimodal motion tracker that incorporates two complementary sensor modalities, i.e. a camera and an inertial motion sensor, as well as two externally mounted points of reference. Secondly, it incorporates a novel multimodal sensor fusion algorithm which uses the complementary nature of the vision and inertial sensor modalities to perform computationally efficient 3-Dimensional (3-D) pose detection of the wearable device. The 3-D pose is determined by fusing information about the two external reference points captured by the camera with the orientation angles captured by the inertial motion sensor. The accuracy of the prototype was experimentally validated in laboratory conditions. The main findings are as follows. The Root Mean Square Error (RMSE) in the 3-D position calculation was 36.7 mm and 13.6 mm in the static and mobile cases, respectively. Whereas the static case was aimed at determining the system’s performance at all 3-D poses within the work envelope, the mobile case was used to determine the error in tracking the human motion involved in the barbell squat, i.e. a mainly repetitive vertical motion pattern.
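
The RMSE figures reported above follow the standard definition; a minimal sketch, assuming estimated and reference 3-D trajectories of equal length logged as (N, 3) arrays:

```python
import numpy as np

def position_rmse(estimated, reference):
    """Root Mean Square Error over a sequence of 3-D positions.
    The per-sample error is the Euclidean distance between the estimated
    and reference positions."""
    e = np.asarray(estimated, dtype=float)
    g = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(np.sum((e - g) ** 2, axis=1))))
```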

    Low cost embedded multimodal opto-inertial human motion tracking system

    Human motion tracking systems are widely used in various application spaces, such as motion capture, rehabilitation, or sports. A number of such systems exist in the State-Of-The-Art (SOA), varying in price, complexity, accuracy, and target applications. With the continued advances in system integration and miniaturization, wearable motion trackers are gaining popularity in the research community. Opto-inertial trackers with multimodal sensor fusion algorithms are among the common approaches found in the SOA. However, these trackers tend to be expensive and have high computational requirements. In this work, we present a prototype version of our opto-inertial motion tracking system that offers a low-cost alternative. The 3D position and orientation are determined by fusing optical and inertial sensor data together with knowledge about two external reference points, using a purpose-designed data fusion algorithm. An experimental validation was carried out on one of the use cases that this system is intended for, i.e. the barbell squat in strength training. The results showed that the total RMSE in position and orientation was 32.8 mm and 0.89 degrees, respectively. The system operated in real time at 20 frames per second.

    Wearable Human Computer Interface for control within immersive VAMR gaming environments using data glove and hand gestures

    The continuous advances in the state-of-the-art in Virtual, Augmented, and Mixed Reality (VAMR) technology are important in many application spaces, including gaming, entertainment, and media technologies. VAMR is part of the broader Human-Computer Interface (HCI) area focused on providing an unprecedentedly immersive way of interacting with computers. These new ways of interacting with computers can leverage emerging user input devices. In this paper, we present a demonstrator system that shows how our wearable Virtual Reality (VR) Glove can be used with an off-the-shelf head-mounted VR device, the RealWear HMT-1™. We show how the smart data capture glove can be used as an effective input device to the HMT-1™ to control various devices, such as virtual controls, simply by using hand gesture recognition algorithms. We describe our fully functional proof-of-concept prototype, along with the complete system architecture and its ability to scale by incorporating other devices.

    A novel resource-constrained insect monitoring system based on machine vision with edge AI

    Effective insect pest monitoring is a vital component of Integrated Pest Management (IPM) strategies. It helps to support crop productivity while minimising the need for plant protection products. In recent years, many researchers have considered the integration of intelligence into such systems in the context of the Smart Agriculture research agenda. This paper describes the development of a smart pest monitoring system, designed in accordance with specific requirements associated with the agricultural sector. The proposed system is a low-cost smart insect trap, for use in orchards, that detects specific insect species that are detrimental to fruit quality. The system helps to identify the invasive insect Brown Marmorated Stink Bug (BMSB), or Halyomorpha halys (HH), using a Microcontroller Unit-based edge device comprising an Internet of Things-enabled, resource-constrained image acquisition and processing system. This device executes our proposed lightweight image analysis algorithm and Convolutional Neural Network (CNN) model for insect detection and classification, respectively. The prototype device is currently deployed in an orchard in Italy. The preliminary experimental results show over 70 percent accuracy in BMSB classification on our custom-built dataset, demonstrating the proposed system's feasibility and effectiveness in monitoring this invasive insect species.
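
As an illustration of the scale of model that suits a resource-constrained edge device, the sketch below defines a deliberately small binary classifier (BMSB vs. other). The framework, input resolution, and layer sizes are assumptions rather than the architecture reported in the paper; in practice such a model would typically be quantised (e.g. to int8) before deployment on a microcontroller.

```python
import tensorflow as tf

def build_small_cnn(input_shape=(96, 96, 3)):
    """A small binary CNN classifier sized for edge inference.
    Layer sizes and input resolution are illustrative only."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(BMSB)
    ])
```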

    Point tracking with lensless smart sensors

    This paper presents the applicability of a novel Lensless Smart Sensor (LSS) developed by Rambus, Inc. to 3D positioning and tracking. The unique diffraction pattern produced by the optics attached to the sensor enables more precise position tracking than is possible with lenses by capturing more information about the scene. In this work, the sensor characteristics are assessed and an accuracy analysis is carried out for the single-point tracking scenario.

    Hand tracking and gesture recognition using lensless smart sensors

    The Lensless Smart Sensor (LSS) developed by Rambus, Inc. is a low-power, low-cost visual sensing technology that captures information-rich optical data in a tiny form factor using a novel approach to optical sensing. The spiral gratings of the LSS diffractive optics, coupled with sophisticated computational algorithms, allow point tracking down to millimeter-level accuracy. This work is focused on developing novel algorithms for the detection of multiple points, thereby enabling hand tracking and gesture recognition using the LSS. The algorithms are formulated based on geometrical and mathematical constraints around the placement of infrared light-emitting diodes (LEDs) on the hand. The developed techniques dynamically adapt the recognition to the orientation of the hand and the associated gestures. A detailed accuracy analysis of both hand tracking and gesture classification as a function of LED positions is conducted to validate the performance of the system. Our results indicate that the technology is a promising approach, as the current state-of-the-art in human motion tracking relies on highly complex and expensive systems. A wearable, low-power, low-cost system could make a significant impact in this field, as it does not require complex hardware or additional sensors on the tracked segments.
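
One way to exploit geometrical constraints on the LED placement, sketched below under the assumption that the 3-D positions of the LEDs are already tracked, is to classify gestures from the pairwise distances between LEDs, which are invariant to the hand's overall position and orientation. The template-matching scheme and its names are illustrative, not the paper's exact algorithms.

```python
import numpy as np

def led_distance_signature(led_positions):
    """Flattened upper-triangular pairwise-distance matrix of the tracked LED
    positions ((N, 3) array); invariant to the hand's overall pose."""
    p = np.asarray(led_positions, dtype=float)
    dists = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    return dists[np.triu_indices(len(p), k=1)]

def classify_gesture(led_positions, templates):
    """Nearest-template gesture classification from LED geometry.
    templates maps gesture name -> reference distance signature."""
    sig = led_distance_signature(led_positions)
    return min(templates, key=lambda g: np.linalg.norm(templates[g] - sig))
```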