26 research outputs found

    Predicting the Effect of Surface Texture on the Qualitative Form of Prehension

    Reach-to-grasp movements change quantitatively in a lawful (i.e. predictable) manner with changes in object properties. We explored whether altering object texture would produce qualitative changes in the form of the precontact movement patterns. Twelve participants reached to lift objects from a tabletop. Nine objects were produced, each with one of three grip surface textures (high-friction, medium-friction and low-friction) and one of three widths (50 mm, 70 mm and 90 mm). Each object was placed at three distances (100 mm, 300 mm and 500 mm), giving a total of 27 trial conditions. We observed two distinct movement patterns across all trials: participants either (i) brought their arm to a stop, secured the object and lifted it from the tabletop; or (ii) grasped the object ‘on-the-fly’, securing it in the hand while the arm was still moving. A majority of grasps were on-the-fly when the surface was high-friction, none were when it was low-friction, and medium-friction produced an intermediate proportion. Previous research has shown that the probability of on-the-fly behaviour is a function of grasp surface accuracy constraints. A finger friction rig was used to measure the coefficients of friction for the objects, and these measurements showed that the area available for a stable grasp (the ‘functional grasp surface size’) increased with the surface friction coefficient. Thus, knowledge of functional grasp surface size is required to predict the probability of observing a given qualitative form of grasping in human prehensile behaviour.
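    The role of friction in grasp stability can be illustrated with basic Coulomb-friction relations: a higher coefficient of friction lowers the grip force needed to hold an object and widens the friction cone within which contact forces can deviate from the surface normal without slipping. This is a minimal sketch; the two-finger grip model and the example coefficients are illustrative assumptions, not the study's measured values.

```python
import math

def min_grip_force(weight_n: float, mu: float) -> float:
    """Minimum normal force per finger for a two-finger precision grip.

    Each finger can contribute at most mu * N of friction, so the two
    fingers together must satisfy 2 * mu * N >= W to support weight W.
    """
    return weight_n / (2.0 * mu)

def friction_cone_half_angle(mu: float) -> float:
    """Half-angle of the friction cone, in degrees.

    Slip occurs when the contact force deviates from the surface normal
    by more than atan(mu); a larger cone tolerates more off-normal
    contact, consistent with a larger functional grasp surface.
    """
    return math.degrees(math.atan(mu))

# Illustrative (assumed) coefficients in the spirit of the three textures
for label, mu in [("low", 0.3), ("medium", 0.6), ("high", 1.0)]:
    print(label, round(min_grip_force(2.0, mu), 2),
          round(friction_cone_half_angle(mu), 1))
```

Under these assumptions, moving from the low- to the high-friction coefficient more than triples the tolerated deviation angle, which is one way a higher friction coefficient enlarges the region of usable contact points.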

    Act In case of Depression: The evaluation of a care program to improve the detection and treatment of depression in nursing homes. Study Protocol

    BACKGROUND: The aim of this study is to evaluate the (cost-)effectiveness of a multidisciplinary, evidence-based care program to improve the management of depression in nursing home residents of somatic and dementia special care units. The care program is an evidence-based standardization of the management of depression, including standardized use of measurement instruments and diagnostic methods, and protocolized psychosocial, psychological and pharmacological treatment. METHODS/DESIGN: In a 19-month longitudinal controlled study using a stepped wedge design, 14 somatic and 14 dementia special care units will implement the care program. All residents on the participating units who give informed consent will be included. Primary outcomes are the frequency of depression on the units and the quality of life of residents on the units. The effect of the care program will be estimated using multilevel regression analysis. Secondary outcomes include the accuracy of depression detection in usual care, the prevalence of depression diagnosis in the intervention group, and the response to treatment of depressed residents. An economic evaluation from a health care perspective will also be carried out. DISCUSSION: The care program is expected to be effective in reducing the frequency of depression and in increasing the quality of life of residents. The study will further provide insight into the cost-effectiveness of the care program. TRIAL REGISTRATION: Netherlands Trial Register (NTR): NTR1477.
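    A stepped wedge design rolls the intervention out to units in staggered waves, so every unit eventually receives it and earlier periods serve as controls. The schedule below is a hypothetical illustration of that structure; the study's actual randomization and timing follow the trial protocol.

```python
def stepped_wedge_schedule(n_units: int, n_steps: int) -> list[list[int]]:
    """Build a stepped wedge rollout matrix.

    Each unit is assigned a crossover step; before that step the unit is
    in the control condition (0), from that step onward in the
    intervention condition (1). Rows are units, columns are measurement
    periods 0..n_steps.
    """
    per_step = max(1, n_units // n_steps)
    schedule = []
    for unit in range(n_units):
        crossover = min(unit // per_step + 1, n_steps)
        schedule.append([1 if step >= crossover else 0
                         for step in range(n_steps + 1)])
    return schedule

# e.g. 14 units crossing over in 7 waves: two units switch per step,
# all units start in control and all end in the intervention condition
sched = stepped_wedge_schedule(14, 7)
```

The defining property checked here is that every row begins at 0 and ends at 1: no unit is denied the intervention, yet each step still provides a concurrent control group.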

    Interview : robots come of age

    Key medical advances, along with secure and stable societies, mean that people live longer and more active lives. In Europe, fertility rates are declining, yet life expectancy is increasing by two and a half years with each passing decade, and nearly 25 % of the population will be over 65 by 2020. In response, the European Commission is funding research projects to address the socioeconomic challenges of an ageing population.

    Investigating the effect of a humanoid robot’s head position on imitating human emotions

    Humans show their emotions with facial expressions. In this paper, we investigate the effect of a humanoid robot’s head position on imitating human emotions. In an Internet survey through animation, we asked participants to adjust the head position of a robot to express six basic emotions: anger, disgust, fear, happiness, sadness, and surprise. We found that humans expect a robot to look straight down when it is angry or sad, to look straight up when it is surprised or happy, and to look down and to its right when it is afraid. We also found that when a robot is disgusted some humans expect it to look straight to its right and some expect it to look down and to its left. We found that humans expect the robot to use an averted head position for all six emotions. In contrast, other studies have shown approach-oriented (anger and joy) emotions being attributed to direct gaze and avoidance-oriented emotions (fear and sadness) being attributed to averted gaze

    Does a friendly robot make you feel better?

    As robots are taking a more prominent role in our daily lives, it becomes increasingly important to consider how their presence influences us. Several studies have investigated effects of robot behavior on the extent to which that robot is positively evaluated. Likewise, studies have shown that the emotions a robot shows tend to be contagious: a happy robot makes us feel happy as well. It is unknown, however, whether the affect that people experience while interacting with a robot also influences their evaluation of the robot. This study aims to discover whether people's affective and evaluative responses to a social robot are related. Results show that affective responses and evaluations are related, and that these effects are strongest when a robot shows meaningful motions. These results are consistent with earlier findings in terms of how people evaluate social robots.

    Motions of robots matter! The social effects of idle and meaningful motions

    Humans always move, even when “doing” nothing, but robots typically remain immobile. According to the threshold model of social influence [3], people respond socially on the basis of social verification. If applied to human-robot interaction, this model would predict that people increase their social responses depending on the social verification of the robot. On the other hand, the media equation hypothesis [11] holds that people will automatically respond socially when interacting with artificial agents. In our study a simple joint task was used to expose participants to different levels of social verification: low social verification was portrayed using idle motions, and high social verification using meaningful motions. Our results indicate that social responses increase with the level of social verification, in line with the threshold model of social influence.

    Stopping distance for a robot approaching two conversing persons

    In recent years, much attention has been given to developing robots with various social skills. An important social skill is navigation in the presence of people. Earlier research has indicated preferred approach angles and stopping distances for a robot when approaching people who are interacting with each other. However, an experimental validation of user experiences with such a robot is largely missing. The current study investigates the shape and size of a shared interaction space and evaluations of a robot approaching from various angles. Results show the expected pattern of stopping distances, but only when the robot approaches the midpoint between the two persons. Additionally, more positive evaluations were found when the robot approached on the participant's own side than on the other participant's side. These findings highlight the importance of a smart path-planning method for robots joining an interaction between users.

    Turn-yielding cues in robot-human conversation

    If robots are to communicate with humans successfully, they will need to be able to take and give turns during conversations. Effective and appropriate turn-taking and turn-yielding actions are crucial in doing so. The present study investigates the objective and subjective performance of four different turn-yielding cues performed by a NAO robot. The results show that an artificial cue, flashing the eye-LEDs, led to significantly shorter response times by the conversational partner than giving no cue, and was experienced as an improvement to the conversation. However, the stopping-arm-movement and head-turning cues showed, respectively, no significant difference or even longer response times compared to the baseline condition. We conclude that turn-yielding cues can lead to improved conversations, though this depends on the type of cue, and that copying human turn-yielding cues is not necessarily the best option for robots.

    Comfortable passing distances for robots

    If autonomous robots are expected to operate in close proximity with people, they should be able to deal with human proxemics and social rules. Earlier research has shown that robots should respect personal space when approaching people, although the quantitative details vary with robot model and direction of approach. Similar considerations would seem to apply when a robot is only passing by, but direct measurement of the comfort of the passing distance is still missing. Therefore, the current study measured the perceived comfort of varying passing distances of the robot on each side of a person in a corridor. It was expected that comfort would increase with distance until an optimum was reached, and that people would prefer a left passage over a right passage. Results showed that the level of comfort did increase with distance up to about 80 cm, but after that it remained constant; there was no optimal distance. Surprisingly, the side of passage had no effect on perceived comfort. These findings show that robot proxemics for passing by differ from those for approaching a person. The implications for modelling human-aware navigation and personal space models are discussed.