
    Lifetime Risk of Blindness in Open-Angle Glaucoma.

To determine the lifetime risk and duration of blindness in patients with manifest open-angle glaucoma (OAG).

    Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction

Assistive robots can potentially improve the quality of life and personal independence of elderly people by supporting everyday activities. To guarantee safe and intuitive interaction between human and robot, human intentions need to be recognized automatically. Because humans communicate their intentions multimodally, using multiple modalities for intention recognition may not only increase robustness against the failure of individual modalities but, more importantly, reduce the uncertainty about the intention to be predicted. This is desirable because, particularly in direct interaction between robots and potentially vulnerable humans, both minimal uncertainty about the situation and knowledge of that actual uncertainty are necessary. Thus, in contrast to existing methods, this work introduces a new approach to multimodal intention recognition that focuses on uncertainty reduction through classifier fusion. For each of the four considered modalities (speech, gestures, gaze direction, and scene objects), an individual intention classifier is trained, each of which outputs a probability distribution over all possible intentions. By combining these output distributions with the Bayesian method Independent Opinion Pool, the uncertainty about the intention to be recognized can be decreased. The approach is evaluated in a collaborative human-robot interaction task with a 7-DoF robot arm. The results show that fused classifiers, which combine multiple modalities, outperform the respective individual base classifiers with respect to accuracy, robustness, and reduced uncertainty.
Comment: Submitted to IROS 201
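The fusion rule itself is compact: under the conditional-independence assumption of an Independent Opinion Pool, the fused posterior is proportional to the elementwise product of the per-modality posteriors. The following Python sketch illustrates the idea; the function name and example numbers are illustrative, not taken from the paper.

```python
import numpy as np

def independent_opinion_pool(distributions):
    """Fuse per-modality posteriors over intentions.

    distributions: iterable of 1-D arrays, each a probability
    distribution over the same set of possible intentions.
    Returns the normalized elementwise product, which concentrates
    probability mass on intentions the modalities agree on and so
    reduces the uncertainty of the fused prediction.
    """
    fused = np.prod(np.vstack(list(distributions)), axis=0)
    total = fused.sum()
    if total == 0.0:  # degenerate case: modalities fully contradict
        return np.full(fused.shape, 1.0 / fused.size)
    return fused / total

# Posteriors over three intentions from four modality classifiers.
speech  = np.array([0.6, 0.3, 0.1])
gesture = np.array([0.5, 0.4, 0.1])
gaze    = np.array([0.4, 0.4, 0.2])
objects = np.array([0.7, 0.2, 0.1])
print(independent_opinion_pool([speech, gesture, gaze, objects]))
# The fused distribution is sharper (lower entropy) than any input.
```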

    Effects of Argon Laser Trabeculoplasty in the Early Manifest Glaucoma Trial.

PURPOSE: To analyze the reduction of intraocular pressure (IOP) by argon laser trabeculoplasty (ALT) in the Early Manifest Glaucoma Trial and the factors influencing the effect of such treatment. DESIGN: Cohort study based on 127 patients from the treatment group of the Early Manifest Glaucoma Trial, a randomized clinical trial. METHODS: Patients randomized to the treatment arm of the Early Manifest Glaucoma Trial received a standard treatment protocol (topical betaxolol hydrochloride followed by 360-degree ALT) and then were followed up prospectively at 3-month intervals for up to 8 years. One eye per patient was included in the analyses. We investigated the relationship between IOP before ALT and subsequent IOP reduction, as well as other factors that might have influenced the effect of ALT, including stage of the disease, trabecular pigmentation, presence of exfoliation syndrome, and treating surgeon. RESULTS: The mean ± standard deviation IOP before ALT and after betaxolol treatment was 18.1 ± 3.9 mm Hg, and the mean ± standard deviation short-term IOP reduction 3 months after ALT was 2.8 ± 3.9 mm Hg (12.6 ± 20.5%). The IOP before ALT strongly affected IOP reduction (P < .001); each 3 mm Hg of higher IOP before ALT was associated with an additional mean IOP reduction of approximately 2 mm Hg. The treating surgeon also had a significant impact on IOP reduction (P = .001), with mean values ranging from 5.8 to -1.3 mm Hg. CONCLUSIONS: In this cohort, which included many patients with low IOP levels, IOP before ALT markedly influenced the IOP reduction induced by ALT, seen as a much larger decrease in eyes with higher IOP before ALT. The treating surgeon also had a significant impact on ALT outcome.
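The reported dose-response lends itself to a back-of-the-envelope check. Below is a minimal sketch assuming a simple linear relationship reconstructed from the abstract's summary statistics only (mean baseline 18.1 mm Hg, mean reduction 2.8 mm Hg, roughly 2 mm Hg of extra reduction per 3 mm Hg of higher baseline); it is an illustration of the reported trend, not a clinical prediction model.

```python
def expected_iop_reduction(iop_before: float) -> float:
    """Rough linear approximation of short-term IOP reduction after
    ALT, reconstructed from the abstract's summary statistics
    (illustrative only, not a validated clinical model):
    mean reduction of 2.8 mm Hg at the mean baseline of 18.1 mm Hg,
    with ~2 mm Hg extra reduction per 3 mm Hg of higher baseline.
    """
    mean_baseline = 18.1   # mm Hg, mean IOP before ALT
    mean_reduction = 2.8   # mm Hg, mean reduction at 3 months
    slope = 2.0 / 3.0      # mm Hg of reduction per mm Hg of baseline
    return mean_reduction + slope * (iop_before - mean_baseline)

print(expected_iop_reduction(24.0))  # ~6.7 mm Hg expected reduction
```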

    MILD: Multimodal Interactive Latent Dynamics for Learning Human-Robot Interaction

Modeling interaction dynamics to generate robot trajectories that enable a robot to adapt and react to a human's actions and intentions is critical for efficient and effective collaborative Human-Robot Interaction (HRI). Learning from Demonstration (LfD) methods based on Human-Human Interactions (HHI) have shown promising results, especially when coupled with representation learning techniques. However, such methods for learning HRI either do not scale well to high-dimensional data or cannot accurately adapt to the changing via-poses of the interacting partner. We propose Multimodal Interactive Latent Dynamics (MILD), a method that couples deep representation learning and probabilistic machine learning to address the problem of two-party physical HRI. We learn the interaction dynamics from demonstrations, using Hidden Semi-Markov Models (HSMMs) to model the joint distribution of the interacting agents in the latent space of a Variational Autoencoder (VAE). Our experimental evaluations for learning HRI from HHI demonstrations show that MILD effectively captures the multimodality in the latent representations of HRI tasks, allowing us to decode the varying dynamics occurring in such tasks. Compared to related work, MILD generates more accurate trajectories for the controlled agent (robot) when conditioned on the observed agent's (human) trajectory. Notably, MILD can learn directly from camera-based pose estimations to generate trajectories, which we then map to a humanoid robot without the need for any additional training.
Comment: Accepted at the IEEE-RAS International Conference on Humanoid Robots (Humanoids) 202
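The conditioning step that predicts the robot's motion from the human's can be sketched with standard Gaussian conditioning: if an HSMM state models the concatenated [human, robot] latent features with a joint Gaussian, the robot part is predicted from the observed human part. A minimal sketch with hypothetical names, not the authors' implementation:

```python
import numpy as np

def condition_gaussian(mu, sigma, z_human, d_h):
    """Condition a joint Gaussian over [human, robot] latent
    features on the observed human part.

    mu:      mean vector of shape (d_h + d_r,)
    sigma:   covariance matrix of shape (d_h + d_r, d_h + d_r)
    z_human: observed human latent vector of shape (d_h,)
    Returns the conditional mean and covariance of the robot part.
    """
    mu_h, mu_r = mu[:d_h], mu[d_h:]
    s_hh = sigma[:d_h, :d_h]
    s_rh = sigma[d_h:, :d_h]
    s_rr = sigma[d_h:, d_h:]
    gain = s_rh @ np.linalg.inv(s_hh)        # regression human -> robot
    mu_cond = mu_r + gain @ (z_human - mu_h)
    sigma_cond = s_rr - gain @ s_rh.T        # Schur complement
    return mu_cond, sigma_cond
```

In an HSMM this conditioning is applied per state, the per-state predictions are blended by the state responsibilities, and the resulting latent mean is decoded into a robot trajectory by the VAE decoder.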

    Kidney function in the very elderly with hypertension: data from the hypertension in the very elderly (HYVET) trial.

BACKGROUND: Numerous reports have linked impaired kidney function to a higher risk of cardiovascular events and mortality. There are relatively few data relating to kidney function in the very elderly. METHODS: The Hypertension in the Very Elderly Trial (HYVET) was a randomised placebo-controlled trial of indapamide slow release 1.5 mg ± perindopril 2-4 mg in those aged ≥80 years with sitting systolic blood pressures of ≥160 mmHg and diastolic pressures of <110 mmHg. Kidney function was a secondary outcome. RESULTS: HYVET recruited 3,845 participants. The mean baseline estimated glomerular filtration rate (eGFR) was 61.7 ml/min/1.73 m². When categories of eGFR were examined, there was a possible U-shaped relationship between eGFR and total mortality, cardiovascular mortality, and cardiovascular events. The nadir of the U was the eGFR category ≥60 and <75 ml/min/1.73 m². Using this as the comparator, the U shape was clearest for cardiovascular mortality, with eGFR <45 ml/min/1.73 m² and ≥75 ml/min/1.73 m² showing hazard ratios of 1.88 (95% CI: 1.2-2.96) and 1.36 (0.94-1.98), respectively. Proteinuria at baseline was also associated with an increased risk of later heart failure events and mortality. CONCLUSIONS: Although these results should be interpreted with caution, it may be that in very elderly individuals with hypertension both low and high eGFR indicate increased risk.

    Learning Multimodal Latent Dynamics for Human-Robot Interaction

This article presents a method for learning well-coordinated Human-Robot Interaction (HRI) from Human-Human Interactions (HHI). We devise a hybrid approach using Hidden Markov Models (HMMs) as the latent space priors for a Variational Autoencoder to model a joint distribution over the interacting agents. We leverage the interaction dynamics learned from HHI to learn HRI and incorporate the conditional generation of robot motions from human observations into the training, thereby predicting more accurate robot trajectories. The generated robot motions are further adapted with Inverse Kinematics to ensure the desired physical proximity to a human, combining the ease of joint-space learning with accurate task-space reachability. For contact-rich interactions, we modulate the robot's stiffness using HMM segmentation for a compliant interaction. We verify the effectiveness of our approach, deployed on a humanoid robot, via a user study. Our method generalizes well to various humans despite being trained on data from just two humans. We find that users perceive our method as more human-like, timely, and accurate, and rank it with a higher degree of preference over other baselines.
Comment: 20 pages, 10 figures
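How stiffness modulation from HMM segmentation might look in practice can be sketched as blending per-state stiffness gains by the state posteriors at each time step; the gain values and names below are hypothetical, not taken from the article:

```python
import numpy as np

# Hypothetical per-state stiffness gains (N/m): lower stiffness in
# contact-heavy segments for compliance, higher gains elsewhere.
STIFFNESS_PER_STATE = {0: 400.0, 1: 150.0, 2: 50.0}

def modulated_stiffness(state_probs):
    """Blend per-state stiffness gains by the HMM state posteriors.

    state_probs: array of shape (n_states,), posterior probability
    of each HMM state given the observations at the current step.
    Returns the expected stiffness gain under that posterior.
    """
    gains = np.array([STIFFNESS_PER_STATE[s]
                      for s in range(len(state_probs))])
    return float(state_probs @ gains)

# Example: mostly in the contact segment (state 2).
print(modulated_stiffness(np.array([0.1, 0.2, 0.7])))  # -> 105.0 N/m
```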

    ExGenNet: Learning to Generate Robotic Facial Expression Using Facial Expression Recognition

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers potential advantages in scaling to different robot types and various expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNets connect a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots' joint configurations are optimized for various expressions by backpropagating the loss between the predicted expression and the intended expression through the classification network and the generator network. To improve the transfer between human training images and images of different robots, we propose to use extracted features in the classifier as well as in the generator network. Unlike most studies on facial expression generation, ExGenNets can produce multiple configurations for each facial expression and be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability of ExGenNet to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects could accurately recognize most of the generated facial expressions on both robots.
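The optimization at the core of this approach is gradient descent on the input rather than the weights: freeze the generator and the classifier, treat the joint configuration as the trainable variable, and backpropagate the expression-classification loss to it. A minimal PyTorch sketch under assumed module interfaces (generator: joints to image, classifier: image to logits), not the authors' code:

```python
import torch

def optimize_joint_config(generator, classifier, target_expression,
                          n_joints, steps=500, lr=0.01):
    """Optimize a robot joint configuration for a target expression
    by backpropagating through frozen generator and classifier
    networks (a sketch of the ExGenNet idea; the module interfaces
    are assumptions, not the authors' API).
    """
    for p in generator.parameters():
        p.requires_grad_(False)
    for p in classifier.parameters():
        p.requires_grad_(False)

    joints = torch.zeros(1, n_joints, requires_grad=True)
    optimizer = torch.optim.Adam([joints], lr=lr)
    target = torch.tensor([target_expression])

    for _ in range(steps):
        optimizer.zero_grad()
        face_image = generator(joints)    # joints -> simplified face image
        logits = classifier(face_image)   # image -> expression logits
        loss = torch.nn.functional.cross_entropy(logits, target)
        loss.backward()                   # gradient flows to joints only
        optimizer.step()
    return joints.detach()
```

Because the loss surface over joint configurations is non-convex, restarting from different initial configurations can yield multiple valid configurations for the same expression, consistent with the abstract's claim.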

    Learning Coupled Forward-Inverse Models with Combined Prediction Errors

Challenging tasks in unstructured environments require robots to learn complex models. Given a large amount of information, learning multiple simple models can offer an efficient alternative to a monolithic complex network. Training multiple models, that is, learning their parameters and their responsibilities, has been shown to be prohibitively hard, as the optimization is prone to local minima. To efficiently learn multiple models for different contexts, we therefore develop a new algorithm based on expectation maximization (EM). In contrast to comparable concepts, this algorithm trains multiple modules of paired forward-inverse models by using the prediction errors of both the forward and inverse models simultaneously. In particular, we show that our method yields a substantial improvement over considering only the errors of the forward models on tasks where the inverse space contains multiple solutions.
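The distinguishing E-step can be sketched in a few lines: each module's responsibility for a sample is computed from the likelihood of its combined forward and inverse prediction errors rather than the forward error alone. A minimal NumPy sketch with hypothetical names:

```python
import numpy as np

def responsibilities(fwd_errors, inv_errors, beta=1.0):
    """E-step sketch for a mixture of paired forward-inverse models.

    fwd_errors, inv_errors: arrays of shape (n_modules,) holding the
    squared prediction errors of each module's forward and inverse
    model on one sample. Combining both errors lets EM separate
    contexts even when the inverse problem has multiple solutions
    and forward errors alone leave the assignment ambiguous.
    """
    combined = fwd_errors + inv_errors   # combined prediction error
    log_lik = -beta * combined           # Gaussian-style log-likelihood
    log_lik -= log_lik.max()             # shift for numerical stability
    resp = np.exp(log_lik)
    return resp / resp.sum()

# Both modules fit the forward map equally well, but module 1 has a
# large inverse error, so module 0 receives most responsibility.
print(responsibilities(np.array([0.2, 0.2]), np.array([0.1, 2.0])))
```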

    Hydroxycarboxylic acid receptor 3 and GPR84: Two metabolite-sensing G protein-coupled receptors with opposing functions in innate immune cells

G protein-coupled receptors (GPCRs) are key regulatory proteins of immune cell function, inducing signaling in response to extracellular (pathogenic) stimuli. Although unrelated, hydroxycarboxylic acid receptor 3 (HCA3) and GPR84 share signaling via Gαi/o proteins and the agonist 3-hydroxydecanoic acid (3HDec). Both receptors are abundantly expressed in monocytes, macrophages and neutrophils but have opposing functions in these innate immune cells. Detailed insights into the molecular mechanisms and signaling components involved in immune cell regulation by GPR84 and HCA3 are still lacking. Here, we report that GPR84-mediated pro-inflammatory signaling depends on coupling to the hematopoietic cell-specific Gα15 protein in human macrophages, while HCA3 exclusively couples to Gαi protein. We show that activated GPR84 induces Gα15-dependent ERK activation, increases intracellular Ca2+ and IP3 levels as well as ROS production. In contrast, HCA3 activation shifts macrophage metabolism to a less glycolytic phenotype, which is associated with anti-inflammatory responses. This is supported by an increased release of anti-inflammatory IL-10 and a decreased secretion of pro-inflammatory IL-1β. In primary human neutrophils, stimulation with HCA3 agonists counteracts the GPR84-induced neutrophil activation. Our analyses reveal that 3HDec acts solely through GPR84, and not HCA3, in macrophages. In summary, this study shows that HCA3 mediates hyporesponsiveness in response to metabolites derived from dietary lactic acid bacteria and uncovers that GPR84, which is already targeted in clinical trials, promotes pro-inflammatory signaling via Gα15 protein in macrophages.
