
    Towards a context-based Bayesian recognition of transitions in locomotion activities

    This paper presents a context-based approach for the recognition of transitions between activities of daily living (ADLs) using wearable sensor data. A Bayesian method is implemented for the recognition of 7 ADLs with data from two wearable sensors attached to the lower limbs of subjects. A second Bayesian method recognises 12 transitions between the ADLs. The second recognition module uses both data from the wearable sensors and the activity recognised by the first Bayesian module. This approach analyses the next most probable transitions based on the wearable sensor data and the context, i.e., the current activity being performed by the subject. This work was validated using the ENABL3S database, composed of data collected from 7 ADLs and 12 transitions performed by participants walking on two circuits that include flat surfaces, ascending and descending ramps, and stairs. The recognition of activities achieved an accuracy of 98.3%. The recognition of transitions between ADLs achieved an accuracy of 98.8%, improving on the 95.3% obtained when the context or current activity is not considered in the recognition process. Overall, this work proposes an approach capable of recognising transitions between ADLs, which is required for the development of reliable wearable assistive robots.
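    A minimal sketch of the two-stage, context-based Bayesian idea described in this abstract, assuming illustrative activity and transition labels, a placeholder VALID_TRANSITIONS map, and a generic per-transition likelihood input; none of these names come from the paper itself:

```python
# Sketch: the context (current activity) constrains the prior over
# transitions before a standard Bayesian update. Labels and the
# reachability map are illustrative assumptions, not the paper's code.
import numpy as np

ACTIVITIES = ["stand", "level_walk", "ramp_ascent", "ramp_descent",
              "stair_ascent", "stair_descent", "sit"]  # 7 ADLs (assumed labels)

# Which transitions can follow each activity (assumed, partial map).
VALID_TRANSITIONS = {
    "level_walk": ["walk->ramp_ascent", "walk->ramp_descent",
                   "walk->stair_ascent", "walk->stair_descent"],
    # ... remaining activities omitted for brevity
}

def recognise_transition(sensor_likelihoods, transitions, current_activity):
    """Second-stage recogniser: zero the prior of transitions that cannot
    follow the current activity, then normalise prior-weighted likelihoods."""
    prior = np.array([1.0 if t in VALID_TRANSITIONS.get(current_activity, [])
                      else 0.0 for t in transitions])
    if prior.sum() == 0:            # unknown context: fall back to a flat prior
        prior[:] = 1.0
    p = sensor_likelihoods * (prior / prior.sum())
    return p / p.sum()              # posterior over transitions
```

    Restricting the prior by context in this way is, per the abstract, what lifts transition recognition from 95.3% to 98.8%: impossible transitions never compete for probability mass.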

    Recognition of human activity and the state of an assembly task using vision and inertial sensor fusion methods

    Reliable human machine interfaces is key to accomplishing the goals of Industry 4.0. This work proposes the late fusion of a visual recognition and human action recognition (HAR) classifier. Vision is used to recognise the number of screws assembled into a mock part while HAR from body worn Inertial Measurement Units (IMUs) classifies actions done to assemble the part. Convolutional Neural Network (CNN) methods are used in both modes of classification before various late fusion methods are analysed for prediction of a final state estimate. The fusion methods investigated are mean, weighted average, Support Vector Machine (SVM), Bayesian, Artificial Neural Network (ANN) and Long Short Term Memory (LSTM). The results show the LSTM fusion method to perform best, with accuracy of 93% compared to 81% for IMU and 77% for visual sensing. Development of sensor fusion methods such as these is key to reliable Human Machine Interaction (HMI
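    One plausible shape for the best-performing fusion stage, sketched under assumptions: per-frame class probabilities from the vision CNN and the IMU HAR CNN are concatenated and fed, as a sequence, to an LSTM that emits the fused state estimate. The class counts, hidden size and number of states below are placeholders, not the paper's architecture:

```python
# Late-fusion LSTM sketch (assumed dimensions throughout).
import torch
import torch.nn as nn

class LateFusionLSTM(nn.Module):
    def __init__(self, n_vision_classes=5, n_har_classes=8,
                 hidden=64, n_states=10):               # assumed sizes
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_vision_classes + n_har_classes,
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_states)

    def forward(self, vision_probs, har_probs):
        # vision_probs: (batch, time, n_vision_classes)
        # har_probs:    (batch, time, n_har_classes)
        x = torch.cat([vision_probs, har_probs], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # state logits from the last time step
```

    Fusing probabilities rather than raw pixels and IMU samples keeps the fusion input small, and the LSTM can exploit temporal consistency that the mean or SVM fusers cannot.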

    Predicted information gain and convolutional neural network for prediction of gait periods using a wearable sensors network

    This work presents a method for the recognition of walking activities and the prediction of gait periods using wearable sensors. First, a Convolutional Neural Network (CNN) is used to recognise the walking activity and gait period. Second, the output of the CNN is used by a Predicted Information Gain (PIG) method to predict the next most probable gait period while walking. The outputs of these two processes are combined to adapt the recognition accuracy of the system. This adaptive combination allows the system to achieve an optimal recognition accuracy over time. The work is validated with an array of wearable sensors for the recognition of level-ground walking, ramp ascent and ramp descent, and the prediction of gait periods. The results show that the proposed system can achieve accuracies of 100% and 99.9% for recognition of walking activity and gait period, respectively. These results show the benefit of a system capable of predicting or anticipating the next information or event over time. Overall, this approach offers a method for accurate activity recognition, which is a key process for the development of wearable robots capable of safely assisting humans in activities of daily living.
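    A minimal sketch of one way the CNN posterior and a PIG-style transition model could be combined; the gait-period labels, the Laplace-smoothed transition counts and the fixed mixing weight are assumptions for illustration, not the authors' implementation:

```python
# Combine a CNN posterior over gait periods with a learned
# transition model predicting the next most probable period.
import numpy as np

PERIODS = ["heel_strike", "flat_foot", "mid_stance",
           "heel_off", "toe_off", "mid_swing"]          # assumed labels

class GaitTransitionModel:
    def __init__(self, n=len(PERIODS)):
        self.counts = np.ones((n, n))   # Laplace-smoothed transition counts

    def update(self, prev_idx, curr_idx):
        self.counts[prev_idx, curr_idx] += 1

    def predict_next(self, curr_idx):
        row = self.counts[curr_idx]
        return row / row.sum()          # p(next period | current period)

def combined_posterior(cnn_probs, transition_probs, alpha=0.7):
    """Adaptive combination; in practice alpha could be tuned from the
    running accuracy of each source over time (assumption)."""
    p = alpha * cnn_probs + (1 - alpha) * transition_probs
    return p / p.sum()
```

    The adaptive weighting is the point: when the CNN is briefly uncertain, the prediction of the next period carries the estimate, which is consistent with the abstract's claim that anticipating the next event improves accuracy over time.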

    Wearable fingertip with touch, sliding and vibration feedback for immersive virtual reality

    Wearable haptic technology plays a key role in enhancing the feeling of immersion in virtual reality, telepresence, telehealth and entertainment systems. This work presents a wearable fingertip capable of providing touch, sliding and vibration feedback while the user interacts with virtual objects. This multimodal feedback is applied to the human fingertip using an array of servo motors, a coin vibration motor and 3D-printed components. The wearable fingertip uses a 3D-printed cylinder that moves up and down to provide touch feedback, and rotates left and right to deliver sliding feedback. The direction of movement and speed of rotation of the cylinder are controlled by the exploration movements performed by the user's hand and finger. Vibration feedback is generated using a coin vibration motor, with the frequency controlled by the type of virtual material explored by the user. A Leap Motion module is employed to track the human hand and fingers and control the feedback delivered by the wearable device. This work is validated with experiments on the exploration of virtual objects in Unity. The experiments show that this wearable haptic device offers an alternative platform with the potential to enhance the feeling and experience of immersion in virtual reality environments, object exploration and telerobotics.
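    An illustrative control loop for the kind of multimodal feedback described above: touch via a servo-driven cylinder, sliding via its rotation, and vibration whose frequency depends on the virtual material. Every device object, function name and numeric value here is a hypothetical placeholder, not the authors' API:

```python
# Hypothetical per-frame update mapping hand-tracking state to actuators.
def update_feedback(contact_depth_mm, slide_velocity_mm_s, material,
                    servo, vibration_motor):
    # Touch: lift the cylinder in proportion to virtual penetration depth.
    servo.set_lift(min(contact_depth_mm, 5.0))      # clamp travel (assumed 5 mm)

    # Sliding: rotate at a speed tied to lateral finger motion;
    # the sign of the velocity encodes left/right direction.
    servo.set_rotation_speed(slide_velocity_mm_s)

    # Vibration: frequency chosen per virtual material (assumed table).
    freq_hz = {"wood": 80, "metal": 180, "fabric": 40}.get(material, 0)
    vibration_motor.set_frequency(freq_hz)
```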

    Multimodal barometric and inertial measurement unit based tactile sensor for robot control

    In this article, we present a low-cost multimodal tactile sensor capable of providing accelerometer, gyroscope, and pressure data using a seven-axis chip as the sensing element. This approach reduces the complexity of the tactile sensor design and the collection of multimodal data. The tactile device is composed of a top layer (a printed circuit board (PCB) and a sensing element), a middle layer (soft rubber material), and a bottom layer (plastic base), forming a sandwich structure. This approach allows the measurement of multimodal data when force is applied to different parts of the top layer of the sensor. The multimodal tactile sensor is validated with analyses and experiments both offline and in real time. First, the spatial impulse response and sensitivity of the sensor are analyzed with accelerometer, gyroscope, and pressure data systematically collected from the sensor. Second, the estimation of contact location from a range of sensor positions and force values is evaluated using accelerometer and gyroscope data together with a convolutional neural network (CNN) method. Third, the estimated contact location is used to control the position of a robot arm. The results show that the proposed multimodal tactile sensor has potential for robotic applications such as tactile perception for robot control, human-robot interaction, and object exploration.
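    A hedged sketch of the contact-localisation step: a small CNN maps a window of accelerometer and gyroscope samples to an (x, y) contact estimate that could then drive a robot-arm position command. The window size, channel count and layer shapes are assumptions; the article reports the approach, not this exact network:

```python
# 1-D CNN for contact localisation from IMU windows (assumed architecture).
import torch
import torch.nn as nn

class ContactLocator(nn.Module):
    def __init__(self, channels=6, window=64):   # 3-axis accel + 3-axis gyro
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 2),                    # (x, y) on the sensor surface
        )

    def forward(self, x):                        # x: (batch, channels, window)
        return self.net(x)
```

    Usage would follow the article's third step: feed each incoming sensor window through the network and send the predicted (x, y) as a position target to the robot arm's controller.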
