
    Kidney function in the very elderly with hypertension: data from the Hypertension in the Very Elderly Trial (HYVET).

    BACKGROUND: Numerous reports have linked impaired kidney function to a higher risk of cardiovascular events and mortality, but there are relatively few data relating to kidney function in the very elderly. METHODS: The Hypertension in the Very Elderly Trial (HYVET) was a randomised placebo-controlled trial of slow-release indapamide 1.5 mg ± perindopril 2-4 mg in those aged ≥80 years with sitting systolic blood pressure ≥160 mmHg and diastolic pressure <110 mmHg. Kidney function was a secondary outcome. RESULTS: HYVET recruited 3,845 participants. The mean baseline estimated glomerular filtration rate (eGFR) was 61.7 ml/min/1.73 m². Across eGFR categories there was a possible U-shaped relationship between eGFR and total mortality, cardiovascular mortality and cardiovascular events, with the nadir in the category ≥60 and <75 ml/min/1.73 m². Using this category as the comparator, the U shape was clearest for cardiovascular mortality: hazard ratios were 1.88 (95% CI: 1.2-2.96) for eGFR <45 ml/min/1.73 m² and 1.36 (95% CI: 0.94-1.98) for eGFR ≥75 ml/min/1.73 m². Proteinuria at baseline was also associated with an increased risk of later heart failure events and mortality. CONCLUSIONS: Although these results should be interpreted with caution, it may be that in very elderly individuals with hypertension both low and high eGFR indicate increased risk.
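
    The hazard ratios quoted above are the kind produced by a survival analysis over eGFR categories with ≥60 and <75 ml/min/1.73 m² as the reference. As an illustration only, here is a minimal sketch of such a categorical comparison, using synthetic data and the lifelines library (an assumed tool; the trial's actual modelling details are not given in the abstract):

        # Hedged sketch: comparing eGFR categories with a Cox model, producing
        # hazard ratios analogous to those quoted above. Data are synthetic and
        # lifelines is an assumed tool, not the trial's stated method.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(0)
        n = 1000
        egfr = rng.normal(61.7, 15, n)  # baseline eGFR, ml/min/1.73 m^2

        # Indicator coding; >=60 and <75 is the (omitted) reference category
        df = pd.DataFrame({
            "egfr_lt45": (egfr < 45).astype(int),
            "egfr_45_60": ((egfr >= 45) & (egfr < 60)).astype(int),
            "egfr_ge75": (egfr >= 75).astype(int),
        })

        # Synthetic follow-up times and event flags (illustrative only)
        df["T"] = rng.exponential(5, n)    # years of follow-up
        df["E"] = rng.binomial(1, 0.3, n)  # 1 = cardiovascular death observed

        cph = CoxPHFitter()
        cph.fit(df, duration_col="T", event_col="E")
        cph.print_summary()  # the exp(coef) column gives the hazard ratios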

    Human-Robot Handshaking: A Review

    For some years now, the use of social, anthropomorphic robots in various situations has been on the rise. These are robots developed to interact with humans and are equipped with corresponding extremities. They already support human users in various industries, such as retail, gastronomy, hotels, education and healthcare. In such Human-Robot Interaction (HRI) scenarios, physical touch plays a central role in the various applications of social robots, since interactive non-verbal behaviour is a key factor in making the interaction more natural. Shaking hands is a simple, natural interaction used commonly in many social contexts and is seen as a symbol of greeting, farewell and congratulations. In this paper, we take a look at the existing state of Human-Robot Handshaking research, categorise the works based on their focus areas, and draw out the major findings of these areas while analysing their pitfalls. We mainly see that some form of synchronisation exists during the different phases of the interaction. We also find that additional factors like gaze, voice and facial expressions can affect the perception of a robotic handshake, and that internal factors like personality and mood can affect the way in which handshaking behaviours are executed by humans. Based on these findings and insights, we finally discuss possible ways forward for research on such physically interactive behaviours.
    Comment: Pre-print version. Accepted for publication in the International Journal of Social Robotics.
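
    One common way the synchronisation finding is operationalised in this literature is with adaptive oscillators that lock onto the partner's shaking frequency during the shake phase. A minimal sketch of a generic adaptive Hopf oscillator, assuming a sinusoidal stand-in for the sensed human force (a generic mechanism, not a specific method from the surveyed works):

        # Hedged sketch: an adaptive Hopf oscillator drifting toward a human's
        # shaking frequency. The "human" force is a made-up sinusoid; a real
        # system would use a sensed force/torque signal instead.
        import numpy as np

        dt, mu, eps = 0.001, 1.0, 2.0            # step, amplitude, coupling gain
        x, y = 1.0, 0.0                          # oscillator state on the limit cycle
        omega = 2 * np.pi * 2.0                  # initial guess: 2 Hz
        human_freq = 2 * np.pi * 2.5             # human shakes at ~2.5 Hz (hypothetical)

        for step in range(int(60 / dt)):         # 60 s of simulated interaction
            t = step * dt
            force = np.sin(human_freq * t)       # stand-in for sensed hand force
            r2 = x * x + y * y
            dx = (mu - r2) * x - omega * y + eps * force
            dy = (mu - r2) * y + omega * x
            domega = -eps * force * y / np.sqrt(r2)  # frequency adaptation rule
            x, y, omega = x + dt * dx, y + dt * dy, omega + dt * domega

        # Should have drifted toward the human's ~2.5 Hz
        print(f"adapted frequency: {omega / (2 * np.pi):.2f} Hz")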

    Letter from Ruth Peters Mitchell, Brooklyn, New York, to Anne Whitney, Boston, Massachusetts, 1906 May 23


    MILD: Multimodal Interactive Latent Dynamics for Learning Human-Robot Interaction

    Modeling interaction dynamics to generate robot trajectories that enable a robot to adapt and react to a human's actions and intentions is critical for efficient and effective collaborative Human-Robot Interactions (HRI). Learning from Demonstration (LfD) methods from Human-Human Interactions (HHI) have shown promising results, especially when coupled with representation learning techniques. However, such methods for learning HRI either do not scale well to high-dimensional data or cannot accurately adapt to changing via-poses of the interacting partner. We propose Multimodal Interactive Latent Dynamics (MILD), a method that couples deep representation learning and probabilistic machine learning to address the problem of two-party physical HRIs. We learn the interaction dynamics from demonstrations, using Hidden Semi-Markov Models (HSMMs) to model the joint distribution of the interacting agents in the latent space of a Variational Autoencoder (VAE). Our experimental evaluations for learning HRI from HHI demonstrations show that MILD effectively captures the multimodality in the latent representations of HRI tasks, allowing us to decode the varying dynamics occurring in such tasks. Compared to related work, MILD generates more accurate trajectories for the controlled agent (robot) when conditioned on the observed agent's (human) trajectory. Notably, MILD can learn directly from camera-based pose estimations to generate trajectories, which we then map to a humanoid robot without the need for any additional training.
    Comment: Accepted at the IEEE-RAS International Conference on Humanoid Robots (Humanoids) 2022.
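
    The conditioning step described above, predicting the robot's latent trajectory from the human's observed one, can be illustrated with plain Gaussian conditioning on a joint latent distribution. A minimal sketch only, assuming NumPy, a single Gaussian component standing in for the HSMM's per-state distributions, and made-up latent dimensions (not the authors' code):

        # Hedged sketch: conditioning a joint Gaussian over the stacked latent
        # [z_h; z_r] to predict the robot's latent state from the human's.
        # A single Gaussian stands in for the HSMM states; dims are made up.
        import numpy as np

        d_h, d_r = 4, 4                       # latent dims for human and robot
        rng = np.random.default_rng(0)

        # Joint mean and covariance over [z_h; z_r]
        mu = rng.normal(size=d_h + d_r)
        A = rng.normal(size=(d_h + d_r, d_h + d_r))
        sigma = A @ A.T + np.eye(d_h + d_r)   # symmetric positive definite

        mu_h, mu_r = mu[:d_h], mu[d_h:]
        S_hh = sigma[:d_h, :d_h]
        S_rh = sigma[d_h:, :d_h]
        S_rr = sigma[d_h:, d_h:]

        def condition_robot(z_h):
            """Mean and covariance of z_r given an observed human latent z_h."""
            K = S_rh @ np.linalg.inv(S_hh)    # regression gain
            mean = mu_r + K @ (z_h - mu_h)
            cov = S_rr - K @ S_rh.T
            return mean, cov

        z_h_obs = rng.normal(size=d_h)        # e.g. a VAE-encoded human pose
        z_r_mean, z_r_cov = condition_robot(z_h_obs)
        print(z_r_mean)  # this latent mean would be decoded by the robot-side VAE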