
    A least distance estimator for a multivariate regression model using deep neural networks

    We propose a deep neural network (DNN) based least distance (LD) estimator (DNN-LD) for a multivariate regression problem, addressing the limitations of conventional methods. Due to the flexibility of a DNN structure, both linear and nonlinear conditional mean functions can be easily modeled, and a multivariate regression model can be realized by simply adding extra nodes at the output layer. The proposed method is more efficient in capturing the dependency structure among responses than the least squares loss, and is robust to outliers. In addition, we consider L1-type penalization for variable selection, which is crucial in analyzing high-dimensional data. Namely, we propose what we call the (A)GDNN-LD estimator, which performs variable selection and model estimation simultaneously by applying the (adaptive) group Lasso penalty to the weight parameters in the DNN structure. For the computation, we propose a quadratic smoothing approximation method to facilitate optimizing the non-smooth objective function based on the least distance loss. Simulation studies and a real data analysis demonstrate the promising performance of the proposed method. Comment: Submitted to the Journal of Statistical Computation and Simulation.
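    The following is a minimal sketch, assuming PyTorch, of the ingredients named in the abstract: a DNN with extra output nodes for a multivariate response, a least distance loss smoothed quadratically near zero, and a group Lasso penalty on the first-layer weight columns for variable selection. Layer sizes, the smoothing parameter delta, and the penalty weight lam are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a DNN-based least distance (LD) estimator with a quadratically
# smoothed LD loss and a group-Lasso penalty on the first-layer weights.
import torch
import torch.nn as nn

class DNNLD(nn.Module):
    def __init__(self, p, q, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(p, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, q),   # q output nodes -> multivariate response
        )

    def forward(self, x):
        return self.net(x)

def smoothed_ld_loss(pred, target, delta=1e-3):
    """Least distance loss mean_i ||y_i - f(x_i)||, replaced by a quadratic
    near zero so the objective is differentiable everywhere."""
    r = torch.linalg.norm(pred - target, dim=1)       # per-sample Euclidean residual
    quad = r.pow(2) / (2 * delta) + delta / 2         # used where r <= delta
    return torch.where(r <= delta, quad, r).mean()

def group_lasso_penalty(model, lam=1e-2):
    """Group-Lasso penalty over columns of the first-layer weight matrix:
    all weights leaving input variable j form one group, enabling variable selection."""
    W1 = model.net[0].weight                          # shape (hidden, p)
    return lam * torch.linalg.norm(W1, dim=0).sum()

# one illustrative training step on synthetic data
p, q, n = 10, 3, 256
X, Y = torch.randn(n, p), torch.randn(n, q)
model = DNNLD(p, q)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
loss = smoothed_ld_loss(model(X), Y) + group_lasso_penalty(model)
loss.backward()
opt.step()
```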

    Inertial Measurement Unit Based Upper Extremity Motion Characterization for Action Research Arm Test and Activities of Daily Living

    In practical rehabilitation robot development, it is imperative to pre-specify the critical workspace to prevent redundant structure. This study aimed to characterize upper extremity motion during essential activities of daily living. An IMU-based wearable motion capture system was used to assess arm movements. Ten healthy subjects performed the Action Research Arm Test (ARAT) and six pre-selected essential daily activities. The Euler angles of the major joints and the acceleration from wrist and hand sensors were acquired and analyzed. The size of the workspace for the ARAT was 0.53 (left-right) × 0.92 (front-back) × 0.89 (up-down) m for the dominant hand. For the daily activities, the workspace size was 0.71 × 0.70 × 0.86 m for the dominant hand, significantly larger than for the non-dominant hand (p ≤ 0.011). The average range of motion (RoM) during the ARAT was 109.15 ± 18.82° for elbow flexion/extension, 105.23 ± 5.38° for forearm supination/pronation, 91.99 ± 0.98° for shoulder internal/external rotation, and 82.90 ± 22.52° for wrist dorsiflexion/volarflexion, whereas the corresponding ranges for the daily activities were 120.61 ± 23.64°, 128.09 ± 22.04°, 111.56 ± 31.88°, and 113.70 ± 18.26°. The shoulder joint was more abducted and extended during a pinching posture than during a grasping posture (p < 0.001). Reaching from a grasping posture required approximately 70° of elbow extension and 36° of forearm supination from the initial position. The results provide an important database of the workspace and RoM for essential arm movements.
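    As a rough illustration of how the reported quantities can be derived from motion-capture output, the sketch below computes workspace size as the extent of the hand position along each axis and range of motion as the span of a joint's Euler angle over time. The array names, shapes, and synthetic data are assumptions for illustration, not the authors' processing pipeline.

```python
import numpy as np

def workspace_size(positions):
    """positions: (T, 3) array of hand positions in metres
    (left-right, front-back, up-down). Returns the extent along each axis."""
    return positions.max(axis=0) - positions.min(axis=0)

def range_of_motion(angle_deg):
    """angle_deg: (T,) time series of one joint's Euler angle in degrees."""
    return angle_deg.max() - angle_deg.min()

# example with synthetic data
pos = np.random.randn(1000, 3) * 0.2
elbow_flexion = 60 + 50 * np.sin(np.linspace(0, 6 * np.pi, 1000))
print(workspace_size(pos), range_of_motion(elbow_flexion))
```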

    A study on a robot arm driven by three-dimensional trajectories predicted from non-invasive neural signals

    Background: A brain-machine interface (BMI) should be able to help people with disabilities by replacing their lost motor functions. To replace lost functions, robot arms have been developed that are controlled by invasive neural signals. Although invasive neural signals have a high spatial resolution, non-invasive neural signals are valuable because they provide an interface without surgery. Thus, various researchers have developed robot arms driven by non-invasive neural signals. However, robot arm control based on the imagined trajectory of a human hand can be more intuitive for patients. In this study, therefore, an integrated robot arm-gripper system (IRAGS) driven by three-dimensional (3D) hand trajectories predicted from non-invasive neural signals was developed and verified. Methods: The IRAGS was developed by integrating a six-degree-of-freedom robot arm and an adaptive robot gripper. The system was used to perform reaching and grasping motions for verification. Non-invasive neural signals, magnetoencephalography (MEG) and electroencephalography (EEG), were obtained to control the system. The 3D trajectories were predicted by multiple linear regression. A target sphere was placed at the terminal point of the real trajectories, and the system was commanded to grasp the target at the terminal point of the predicted trajectories. Results: The average correlation coefficient between the predicted and real trajectories was 0.705 ± 0.292 (p < 0.001) with MEG and 0.684 ± 0.309 (p < 0.001) with EEG. The success rates in grasping the target plastic sphere were 18.75% and 7.50% with MEG and EEG, respectively; the success rates of touching the target were 52.50% and 58.75%, respectively. Conclusions: A robot arm driven by 3D trajectories predicted from non-invasive neural signals was implemented, and reaching and grasping motions were performed. In most cases, the robot closely approached the target, but the success rate was not very high because non-invasive neural signals are less accurate. However, the success rate could be sufficiently improved for practical applications by using additional sensors. Robot arm control based on hand trajectories predicted from EEG would allow portability, and the performance with EEG was comparable to that with MEG.
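    A minimal sketch of the decoding step described above: a multiple linear regression maps neural features at each time step to the 3D hand position, and prediction quality is scored by the correlation between predicted and real trajectories. The feature construction, dimensions, and synthetic data are assumptions for illustration only, not the authors' MEG/EEG preprocessing.

```python
import numpy as np
from numpy.linalg import lstsq

T, n_features = 2000, 64
X = np.random.randn(T, n_features)               # neural features per time step
true_W = np.random.randn(n_features, 3)
Y = X @ true_W + 0.5 * np.random.randn(T, 3)     # real 3D hand trajectory

# fit one linear model per coordinate, with an intercept column
Xb = np.hstack([X, np.ones((T, 1))])
W, *_ = lstsq(Xb, Y, rcond=None)
Y_hat = Xb @ W

# correlation coefficient between predicted and real trajectory, per axis
corrs = [np.corrcoef(Y[:, k], Y_hat[:, k])[0, 1] for k in range(3)]
print(np.mean(corrs))
```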

    Vision-aided brain–machine interface training system for robotic arm control and clinical application on two patients with cervical spinal cord injury

    Background: While spontaneous robotic arm control using motor imagery has been reported, most previous successful cases have used invasive approaches, which have advantages in spatial resolution. Nevertheless, many researchers continue to investigate methods for robotic arm control with noninvasive neural signals. Most noninvasive robotic arm control utilizes P300, steady-state visually evoked potentials, N2pc, or the differentiation of mental tasks. Even though these approaches have demonstrated good accuracy, they are limited in time efficiency and user intuition, and mostly require visual stimulation. Ultimately, velocity vector construction using electroencephalography activated by motion-related motor imagery can be considered as a substitute. In this study, a vision-aided brain–machine interface training system for robotic arm control is proposed and developed. Methods: The proposed system uses a Microsoft Kinect to detect and estimate the 3D positions of the possible target objects. The velocity vector predicted for the robot arm input is compensated using an artificial potential so that the arm follows the intended one among the possible targets. Two participants with cervical spinal cord injury trained with the system to explore its possible effects. Results: In a situation with four possible targets, the proposed system significantly reduced the distance error to the intended target relative to the unintended ones (p < 0.0001). Functional magnetic resonance imaging after five sessions of observation-based training with the developed system showed brain activation patterns with a tendency to focus on the ipsilateral primary motor and sensory cortices, posterior parietal cortex, and contralateral cerebellum. However, shared control with a blending parameter α less than 1 was not successful, and the success rate for touching an instructed target was below the chance level (50%). Conclusions: This pilot clinical study utilizing the training system suggested potential beneficial effects in characterizing brain activation patterns.
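    The following is a sketch of the shared-control idea described above: the decoded velocity is blended with an assistive velocity derived from an attractive artificial potential toward a candidate target (here simply the nearest detected object). The blending parameter alpha weights decoded versus assistive input, with alpha = 1 meaning pure BMI control. The function names, the nearest-target rule, and the gain are illustrative assumptions rather than the authors' exact compensation scheme.

```python
import numpy as np

def assistive_velocity(hand_pos, targets, gain=1.0):
    """Attractive potential-field velocity toward the nearest candidate target."""
    targets = np.asarray(targets)
    d = np.linalg.norm(targets - hand_pos, axis=1)
    goal = targets[np.argmin(d)]
    return gain * (goal - hand_pos)

def blended_command(decoded_vel, hand_pos, targets, alpha=0.7):
    """Shared-control velocity: alpha * decoded + (1 - alpha) * assistance."""
    return alpha * np.asarray(decoded_vel) + (1 - alpha) * assistive_velocity(hand_pos, targets)

# example: a decoded EEG velocity pulled toward the nearer of two detected objects
cmd = blended_command(decoded_vel=[0.1, 0.0, 0.05],
                      hand_pos=np.array([0.0, 0.0, 0.0]),
                      targets=[[0.3, 0.1, 0.2], [-0.4, 0.2, 0.1]],
                      alpha=0.7)
print(cmd)
```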

    Differential kinematic features of the hyoid bone during swallowing in patients with Parkinson's disease

    This study aimed to investigate the spatiotemporal characteristics of the hyoid bone during swallowing in patients with Parkinson's disease (PD) and dysphagia. Spatiotemporal data of the hyoid bone were obtained from video-fluoroscopic images of 69 subjects (23 patients with PD, 23 age- and sex-matched healthy elderly controls, and 23 healthy young controls). Normalized displacement/velocity profiles were analyzed during different periods (percentiles) of swallowing using functional regression analysis, and the maximal values were compared between the groups. Maximal horizontal displacement and velocity were significantly decreased during the initial backward (P = 0.006 and P < 0.001, respectively) and forward (P = 0.008 and P < 0.001, respectively) motions in PD patients compared to elderly controls. Maximal vertical velocity was significantly lower in PD patients than in elderly controls (P = 0.001). No significant difference was observed in maximal displacement or velocity in either the horizontal or vertical plane between the healthy elderly and young controls, although horizontal displacement was significantly decreased during the forward motion (51st-57th percentiles) in the elderly controls. In conclusion, the reduced horizontal displacement and velocity of the hyoid bone during the forward motion are likely due to the combined effects of disease and aging, whereas reductions over the initial backward motion may be considered specific to patients with PD.
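    As a small illustration of the profile normalization described above, the sketch below resamples each swallow's hyoid displacement or velocity trace onto a common 0-100 percentile time base so that curves of different durations can be compared point-wise, and then takes the maximal value entering the group comparison. The resampling to 101 points and the synthetic traces are illustrative assumptions, not the authors' exact functional-regression setup.

```python
import numpy as np

def normalize_profile(t, y, n_points=101):
    """Resample a trace y(t) onto 0-100 % of the swallow duration."""
    t = np.asarray(t, dtype=float)
    pct = (t - t[0]) / (t[-1] - t[0]) * 100.0
    grid = np.linspace(0.0, 100.0, n_points)
    return grid, np.interp(grid, pct, y)

# example: two swallows of different duration compared on the same percentile grid
t1, y1 = np.linspace(0, 0.8, 40), np.random.rand(40)
t2, y2 = np.linspace(0, 1.2, 60), np.random.rand(60)
grid, p1 = normalize_profile(t1, y1)
_, p2 = normalize_profile(t2, y2)
print(p1.max(), p2.max())   # maximal values used in the group comparison
```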

    Exploring user perspectives on a robotic arm with brain-machine interface: A qualitative focus group study

    A brain-machine interface (BMI) is a system that translates neuronal data into an output variable to control external devices such as a robotic arm. A robotic arm can be used as an assistive living device for individuals with tetraplegia. To reflect users' needs in the development process of the BMI robotic arm, our team followed an interactive approach to system development, human-centered design, and the Human Activity Assistive Technology model. This study aims to explore the perspectives of people with tetraplegia on the activities they want to participate in, their opinions, and the usability of the BMI robotic arm. Eight people with tetraplegia participated in a focus group interview in a semi-structured format. A general inductive analysis method was used to analyze the qualitative data. The three overarching themes that emerged from this analysis were: 1) activities, 2) acceptance, and 3) usability. Activities that the users wanted to perform using the robotic arm were categorized into the following five activity domains: activities of daily living (ADL), instrumental ADL, health management, education, and leisure. Participants provided their opinions on the need for and acceptance of the BMI technology, and described the usability and expected standards of the BMI robotic arm within seven categories, such as accuracy, setup, and cost. Participants with tetraplegia have a strong interest in the robotic arm and BMI technology to restore their mobility and independence. Creating BMI features appropriate to users' needs, such as safety and high accuracy, will be key to acceptance. These findings from the perspectives of potential users should be taken into account when developing the BMI robotic arm.