
    Non-overlapping dual camera fall detection using the NAO humanoid robot

    With an aging population and a greater desire for independence, the dangers of falling incidents in the elderly have become particularly pronounced. In light of this, several technologies have been developed with the aim of preventing or monitoring falls. Failure to strike a balance between several factors, including reliability, complexity, and invasion of privacy, has proven prohibitive to the uptake of these systems. Some systems rely on cameras being mounted in all rooms of a user's home, while others require a device to be worn 24 hours a day. This paper explores a system using a humanoid NAO robot with dual vertically mounted cameras to perform the task of fall detection.
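The abstract does not describe the detection rule itself; a common minimal baseline for camera-based fall detection is the aspect ratio of the person's bounding box, sketched below. The threshold and the heuristic are illustrative assumptions, not the paper's actual dual-camera pipeline.

```python
# Hypothetical sketch: a bounding-box aspect-ratio fall heuristic.
# This is NOT the paper's method, only a common baseline for illustration.

def is_fall(bbox_width: float, bbox_height: float, ratio_threshold: float = 1.0) -> bool:
    """Flag a fall when the detected person's bounding box is wider than tall.

    A standing person typically yields height > width; a person lying on
    the floor typically yields width > height.
    """
    if bbox_height <= 0:
        raise ValueError("bounding box height must be positive")
    return (bbox_width / bbox_height) > ratio_threshold

# Standing pose: 60 px wide, 180 px tall -> not a fall
standing = is_fall(60, 180)
# Lying pose: 180 px wide, 60 px tall -> fall
lying = is_fall(180, 60)
```

A real system would smooth this decision over several frames to reject transient poses such as bending down.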

    A study on iris textural correlation using steering kernels

    Research on iris recognition has observed that iris texture has inherent radial correlation. However, deeper insight into iris textural correlation is currently lacking, and little research has focused on a quantitative, comprehensive analysis of this correlation. In this paper, we perform a quantitative analysis of iris textural correlation. We employ steering kernels to model the textural correlation in images. We conduct experiments on three benchmark datasets covering iris captures of varying quality. We find that the local textural correlation varies with local characteristics of the iris image, while the general trend of textural correlation follows the radial direction. Moreover, we demonstrate that information on iris textural correlation can be utilized to improve iris recognition. We employ this information to produce iris codes. We show that the iris code incorporating textural-correlation information achieves improved performance compared to traditional iris codes.
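The paper's steering-kernel analysis is not reproduced here, but the general form of a 2-D steering kernel from the image-processing literature (a Gaussian elongated along the dominant local orientation) can be sketched as follows; the parameter values and the demo matrix are illustrative assumptions.

```python
import numpy as np

# Sketch of a 2-D steering kernel weight (general literature form, not the
# paper's exact model): a Gaussian shaped by a local covariance-like
# steering matrix C, so correlation is modelled more strongly along one
# direction (e.g. the radial direction of the iris).

def steering_kernel(offsets: np.ndarray, C: np.ndarray, h: float = 1.0) -> np.ndarray:
    """Kernel weights for sample offsets around a pixel.

    offsets : (N, 2) array of (dx, dy) offsets
    C       : (2, 2) symmetric positive-definite steering matrix
    h       : global smoothing bandwidth
    """
    # Quadratic form x^T C x for each offset
    q = np.einsum('ni,ij,nj->n', offsets, C, offsets)
    return np.sqrt(np.linalg.det(C)) / (2.0 * np.pi * h**2) * np.exp(-q / (2.0 * h**2))

# Illustrative steering matrix: weak penalty along x, strong along y,
# so the kernel is elongated along x (weights decay slower along x).
offsets = np.array([[1.0, 0.0], [0.0, 1.0]])
C = np.diag([0.1, 2.0])
w = steering_kernel(offsets, C)
```

With this C, the weight at offset (1, 0) exceeds the weight at (0, 1), which is how directional (here, hypothetically radial) correlation is expressed.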

    Signal-Level Information Fusion for Less Constrained Iris Recognition using Sparse-Error Low Rank Matrix Factorization

    Iris recognition systems working in less constrained environments, with the subject at-a-distance and on-the-move, suffer from noise and degradations in the iris captures. Such noise and degradation significantly deteriorate iris recognition performance. In this paper, we propose a novel signal-level information fusion method to mitigate the influence of noise and degradations for less constrained iris recognition systems. The proposed method is based on low rank approximation (LRA). Given multiple noisy captures of the same eye, we assume that: 1) the potential noiseless images lie in a low rank subspace and 2) the noise is spatially sparse. Based on these assumptions, we seek an LRA of the noisy captures to separate the noiseless images from the noise for information fusion. Specifically, we propose a sparse-error low rank matrix factorization model to perform LRA, decomposing the noisy captures into a low rank component and a sparse error component. The low rank component estimates the potential noiseless images, while the error component models the noise. Then, the low rank and error components are utilized to perform signal-level fusion separately, producing two individually fused images. Finally, we combine the two fused images at the code level to produce one iris code as the final fusion result. Experiments on benchmark data sets demonstrate that the proposed signal-level fusion method achieves generally improved iris recognition performance in less constrained environments, in comparison with existing iris recognition algorithms, especially for iris captures with heavy noise and low quality.
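The low-rank-plus-sparse decomposition described above is the structure of robust PCA. A minimal sketch using the standard inexact augmented Lagrange multiplier (ALM) algorithm for this decomposition is shown below; it is a generic implementation of the model class, not the paper's specific factorization, and all parameters are the usual textbook defaults.

```python
import numpy as np

def rpca_ialm(D, max_iter=500, tol=1e-7):
    """Decompose D into a low rank component L and a sparse error S
    (min ||L||_* + lam*||S||_1  s.t.  L + S = D) via inexact ALM."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(D, 2)
    Y = D / max(norm2, np.linalg.norm(D.ravel(), np.inf) / lam)
    mu, rho = 1.25 / norm2, 1.5
    S = np.zeros_like(D)
    normD = np.linalg.norm(D, 'fro')
    for _ in range(max_iter):
        # L-update: singular value thresholding of D - S + Y/mu
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-update: entrywise soft thresholding of D - L + Y/mu
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = D - L - S
        Y = Y + mu * Z
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(Z, 'fro') <= tol * normD:
            break
    return L, S

# Synthetic demo: a rank-1 "clean" matrix plus 5% sparse corruption,
# standing in for stacked vectorized iris captures.
rng = np.random.default_rng(0)
L0 = np.outer(rng.normal(size=30), rng.normal(size=30))
S0 = np.zeros((30, 30))
S0[rng.random((30, 30)) < 0.05] = 5.0
L_hat, S_hat = rpca_ialm(L0 + S0)
```

On this synthetic example the low rank component recovers the clean matrix almost exactly, mirroring how the fused noiseless images would be separated from sparse noise.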

    Optimal Generation of Iris Codes for Iris Recognition

    The calculation of binary iris codes from feature values (e.g. the result of a Gabor transform) is a key step in iris recognition systems. The traditional binarization method based on the sign of the feature values has achieved very promising performance. However, little research has examined this binarization method in depth as a way to produce iris codes. In this paper, we view the iris code calculation from the perspective of optimization. We demonstrate that the traditional iris code is the solution of an optimization problem that minimizes the distance between the feature values and the iris codes. Furthermore, we show that more effective iris codes can be obtained by adding terms to the objective function of this optimization problem. We investigate two additional objective terms. The first objective term exploits the spatial relationships of the bits in different positions of an iris code. The second objective term mitigates the influence of less reliable bits in iris codes. The two objective terms can be applied to the optimization problem individually or in a combined scheme. We conduct experiments on four benchmark datasets with varying image quality. The experimental results demonstrate that the iris code produced by solving the optimization problem with the two additional objective terms achieves generally improved performance in comparison to the traditional iris code calculated by binarizing feature values based on their signs.
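The core observation, that the sign-based code minimizes the distance between feature values and a binary code, can be checked numerically: for b in {-1, +1}^n, the minimizer of ||f - b||^2 decomposes per coordinate and equals sign(f). The sketch below verifies this by brute force on a tiny illustrative feature vector (the additional objective terms from the paper are not modelled).

```python
import itertools
import numpy as np

# Sketch: the traditional sign-based iris code is the solution of
# argmin_{b in {-1,+1}^n} ||f - b||^2, checked here by exhaustive search.

def best_code_bruteforce(f: np.ndarray) -> np.ndarray:
    """Exhaustively find the binary code closest to the feature vector f."""
    best, best_cost = None, np.inf
    for bits in itertools.product([-1.0, 1.0], repeat=len(f)):
        b = np.array(bits)
        cost = np.sum((f - b) ** 2)
        if cost < best_cost:
            best, best_cost = b, cost
    return best

# Illustrative feature values (e.g. real parts of Gabor responses)
f = np.array([0.7, -0.2, 1.3, -0.9, 0.05])
b_opt = best_code_bruteforce(f)
```

The brute-force minimizer coincides with np.sign(f), i.e. with the traditional binarization; the paper's improved codes come from adding spatial and reliability terms to this objective.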

    Design and Control of a Single-Leg Exoskeleton with Gravity Compensation for Children with Unilateral Cerebral Palsy

    Children with cerebral palsy (CP) experience reduced quality of life due to limited mobility and independence. Recent studies have shown that lower-limb exoskeletons (LLEs) have significant potential to improve the walking ability of children with CP. However, the number of prototyped LLEs for children with CP is very limited, and no single-leg exoskeleton (SLE) has been developed specifically for children with CP. This study aims to fill this gap by designing the first size-adjustable SLE for children with CP aged 8 to 12, covering Gross Motor Function Classification System (GMFCS) levels I to IV. The exoskeleton incorporates three active joints at the hip, knee, and ankle, actuated by brushless DC motors and harmonic drive gears. Individuals with CP have higher metabolic consumption than their typically developed (TD) peers, with gravity being a significant contributing factor. To address this, the study designed a model-based gravity-compensating impedance controller for the SLE. A dynamic model of the user and exoskeleton interaction, based on the Euler–Lagrange formulation and following Denavit–Hartenberg rules, was derived and validated in Simscape™ and Simulink® with remarkable precision. Additionally, a novel systematic simplification method was developed to facilitate dynamic modelling. The simulation results demonstrate that the controlled SLE can improve the walking functionality of children with CP, enabling them to follow predefined target trajectories with high accuracy.
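The gravity-compensation idea can be illustrated on a reduced model: for a planar two-link (thigh-shank) chain from the Euler–Lagrange formulation, the controller adds the gravity vector G(q) to the commanded torque so gravity contributes no net joint load. The masses and lengths below are illustrative assumptions, not the exoskeleton's real parameters, and the ankle joint is omitted for brevity.

```python
import numpy as np

# Sketch: gravity torque vector G(q) of a planar 2-link leg (thigh-shank),
# the term a model-based gravity compensator feeds forward. Parameters are
# hypothetical child-leg values, not the paper's.

def gravity_torques(q1, q2, m1=6.0, m2=3.0, l1=0.35, lc1=0.17, lc2=0.20, g=9.81):
    """Joint torques that cancel gravity for a planar 2-link chain.

    q1 : hip angle measured from the horizontal (rad)
    q2 : knee angle relative to the thigh (rad)
    """
    g1 = (m1 * lc1 + m2 * l1) * g * np.cos(q1) + m2 * lc2 * g * np.cos(q1 + q2)
    g2 = m2 * lc2 * g * np.cos(q1 + q2)
    return np.array([g1, g2])

# Leg hanging straight down (q1 = -pi/2, q2 = 0): gravity acts along the
# links, so the required holding torque is (numerically) zero.
tau = gravity_torques(-np.pi / 2, 0.0)
```

An impedance controller would add a stiffness/damping term around the target trajectory on top of this feed-forward gravity term.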

    Pediatric Robotic Lower-Limb Exoskeleton: An Innovative Design and Kinematic Analysis

    Lower-limb exoskeletons enhance motor function in patients, benefiting both clinical rehabilitation and daily activities. Nevertheless, pediatric exoskeletons remain largely underdeveloped. To address this gap, this study presents a new robotic lower-limb exoskeleton (LLE) design specifically tailored for children. Utilizing anthropometric data from the target demographic, the LLE has a size-adjustable design to accommodate children aged 8 to 12. The design incorporates six active joints at the hip and knee, actuated using brushless DC motors in conjunction with Harmonic Drive gears. This study conducts a rigorous analysis of the forward and inverse kinematics of the robotic LLE. While forward kinematics are essential for dynamic modeling and model-based control formulation, inverse kinematics play a crucial role in facilitating balance control. The study uses an algebraic-geometric method to solve the inverse kinematics of LLEs with four DOFs per leg, including one in the frontal plane and three in the sagittal plane. A model of validation and verification is then employed using the Simulink® and Simscape™ computational environments. The accuracy of the forward kinematic analysis is confirmed by comparing separately modeled outcomes in both environments. The validity of the inverse kinematic model is verified by implementing sequential forward and inverse kinematic analyses, comparing the forward kinematic inputs with the inverse kinematic outputs. Simulation results conclusively validate both the forward and inverse kinematic analyses, suggesting the exoskeleton's potential to accommodate standard gait patterns.
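The forward/inverse round-trip verification described above can be sketched on the sagittal-plane part alone: a two-link planar chain with closed-form (algebraic) inverse kinematics, where feeding the forward-kinematics output back through the inverse model must recover the joint angles. The link lengths and branch convention are illustrative assumptions; the paper's full 4-DOF model is not reproduced.

```python
import numpy as np

# Sketch: 2-link planar (sagittal-plane) forward and closed-form inverse
# kinematics, with a forward->inverse round trip as a consistency check.
# Link lengths are hypothetical thigh/shank values.

L1, L2 = 0.35, 0.33  # assumed thigh and shank lengths (m)

def forward(q1, q2):
    """Ankle position for hip angle q1 (from horizontal) and knee angle q2."""
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return x, y

def inverse(x, y, s2_sign=-1.0):
    """Closed-form IK; s2_sign selects the knee-bend branch."""
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2.0 * L1 * L2)
    c2 = np.clip(c2, -1.0, 1.0)
    s2 = s2_sign * np.sqrt(1.0 - c2**2)
    q2 = np.arctan2(s2, c2)
    q1 = np.arctan2(y, x) - np.arctan2(L2 * s2, L1 + L2 * c2)
    return q1, q2

# Round trip: forward kinematics, then inverse, recovers the joint angles.
q1, q2 = 0.4, -0.8          # knee flexed on the s2 < 0 branch
x, y = forward(q1, q2)
q1_r, q2_r = inverse(x, y)
```

This is the same sequential forward-then-inverse comparison the paper uses for validation, reduced to two links.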

    Train vs. play: Evaluating the effects of gamified and non-gamified wheelchair skills training using virtual reality

    This study compares the influence of a gamified and a non-gamified virtual reality (VR) environment on wheelchair skills training. Specifically, the study explores the integration of gamification elements and their influence on wheelchair driving performance in VR-based training. Twenty-two non-disabled participants volunteered for the study, of whom eleven undertook the gamified VR training and eleven engaged in the non-gamified VR training. To measure the efficacy of the VR-based wheelchair skills training, we captured the heart rate (HR), number of joystick movements, completion time, and number of collisions. In addition, an adapted version of the Wheelchair Skills Training Program Questionnaire (WSTP-Q), the Igroup Presence Questionnaire (IPQ), and the Simulator Sickness Questionnaire (SSQ) were administered after the VR training. The results showed no differences in wheelchair driving performance, level of involvement, or ratings of presence between the two environments. In contrast, perceived cybersickness was statistically higher for the group of participants who trained in the non-gamified VR environment. Remarkably, heightened cybersickness symptoms aligned with increased HR, suggesting physiological connections. As such, while direct gamification effects on the efficacy of VR-based wheelchair skills training were not statistically significant, the potential of gamification to amplify user engagement and reduce cybersickness is evident.

    A pixel-wise annotated dataset of small overlooked indoor objects for semantic segmentation applications

    The purpose of the dataset is to provide annotated images for pixel classification tasks with application to powered wheelchair users. As some of the widely available datasets contain only general objects, we introduced this dataset to cover the missing pieces, which can be considered application-specific objects. These objects of interest are important not only for powered wheelchair users but also for indoor navigation and environmental understanding in general. For example, indoor assistive and service robots need to comprehend their surroundings to ease navigation and interaction with objects of different sizes. The proposed dataset is recorded using a camera installed on a powered wheelchair. The camera is installed beneath the joystick so that it has a clear view with no obstructions from the user's body or legs. The powered wheelchair is then driven through the corridors of the indoor environment, and a one-minute video is recorded. The collected video is annotated at the pixel level for semantic segmentation (pixel classification) tasks. Pixels of different objects are annotated using MATLAB software. The dataset has various object sizes (small, medium, and large), which explains the variation in the pixel distribution across classes. Usually, Deep Convolutional Neural Networks (DCNNs) that perform well on large objects fail to produce accurate results on small objects, whereas training a DCNN on a dataset with objects of multiple sizes can build more robust systems. Although the recorded objects are vital for many applications, we have included additional images of different kinds of door handles with different angles, orientations, and illuminations, as these are rare in publicly available datasets. The proposed dataset has 1549 images and covers nine different classes. We used the dataset to train and test a semantic segmentation system that can aid and guide visually impaired users by providing visual cues.
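The class-imbalance point above (small objects contributing few pixels) is usually quantified by the per-class pixel distribution of the label masks. The sketch below computes that statistic; the toy masks and the two-class layout are made up for illustration and are not the dataset's actual nine classes.

```python
import numpy as np

# Sketch: per-class pixel frequency of a pixel-wise annotated dataset,
# the statistic behind the small/medium/large object imbalance described
# above. The masks and class ids here are illustrative only.

def class_pixel_distribution(masks, num_classes):
    """masks: iterable of 2-D integer label maps; returns class frequencies."""
    counts = np.zeros(num_classes, dtype=np.int64)
    for mask in masks:
        counts += np.bincount(mask.ravel(), minlength=num_classes)[:num_classes]
    total = counts.sum()
    return counts / total if total else counts.astype(float)

# Two tiny 4x4 masks with hypothetical classes {0: background, 1: door handle}
m1 = np.zeros((4, 4), dtype=np.int64)
m1[1, 1] = 1                 # 1 handle pixel
m2 = np.zeros((4, 4), dtype=np.int64)
m2[2, 1:3] = 1               # 2 handle pixels
freq = class_pixel_distribution([m1, m2], num_classes=2)
```

Here the small "door handle" class occupies only 3 of 32 pixels, the kind of skew that motivates including many small-object examples in the dataset.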

    Analysing the Impact of Vibrations on Smart Wheelchair Systems and Users

    Mechanical vibrations due to uneven terrain can significantly impact the accuracy of computer vision systems installed on any moving vehicle. In this study, we investigate the impact of mechanical vibrations, induced using artificial bumps in a controlled environment, on the performance of smart computer vision systems installed on an Electric Powered Wheelchair (EPW). In addition, the impact of the vibrations on the user's health and comfort is quantified using the vertical acceleration from an Inertial Measurement Unit (IMU) sensor according to the ISO 2631 standard. The proposed smart computer vision system is a deep-learning-based semantic segmentation (pixel classification) system that provides environmental cues to visually impaired users to facilitate safe and independent navigation. In addition, it provides the EPW user with the estimated distance to objects of interest. Results show that a high level of vibration can negatively impact the performance of computer vision systems installed on powered wheelchairs. High levels of whole-body vibration also negatively impact the user's health and comfort.
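The ISO 2631 assessment mentioned above is based on the RMS of (frequency-weighted) vertical acceleration, mapped to comfort-reaction bands. The sketch below computes an unweighted RMS and a simplified, non-overlapping version of the ISO 2631-1 comfort bands; a faithful implementation would first apply the standard's Wk frequency weighting, which is omitted here as an assumption.

```python
import numpy as np

# Sketch: RMS vertical acceleration and a simplified comfort mapping in the
# spirit of ISO 2631-1. The standard's bands overlap and require frequency
# weighting; the thresholds below are a simplified illustration.

def rms_acceleration(az):
    """az: vertical acceleration samples (m/s^2), gravity component removed."""
    az = np.asarray(az, dtype=float)
    return float(np.sqrt(np.mean(az ** 2)))

def comfort_label(a_rms):
    """Map an RMS acceleration to a simplified comfort reaction."""
    if a_rms < 0.315:
        return "not uncomfortable"
    if a_rms < 0.63:
        return "a little uncomfortable"
    if a_rms < 1.0:
        return "fairly uncomfortable"
    if a_rms < 1.6:
        return "uncomfortable"
    if a_rms < 2.5:
        return "very uncomfortable"
    return "extremely uncomfortable"

smooth = comfort_label(rms_acceleration([0.1] * 100))  # smooth floor
bumpy = comfort_label(rms_acceleration([2.0] * 100))   # artificial bumps
```

In a study like this one, the same RMS value would be compared across bump heights alongside the vision system's segmentation accuracy.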