    Affect Recognition in Hand-Object Interaction Using Object-Sensed Tactile and Kinematic Data

    We investigate the recognition of the affective state of a person performing an action with an object by processing data sensed by the object itself. We focus on sequences of basic actions, such as grasping and rotating, which are constituents of daily-life interactions. iCube, a 5 cm cube, was used to collect tactile and kinematic data consisting of tactile maps (without information on the pressure applied to the surface) and rotations. We conduct two studies: classification of i) emotions and ii) vitality forms. In both, participants perform a semi-structured task composed of basic actions. For emotion recognition, 237 trials by 11 participants associated with anger, sadness, excitement, and gratitude were used to train models on 10 hand-crafted features. The classifier accuracy reaches up to 82.7%. Interestingly, the same classifier trained exclusively on the tactile data performs on par with its counterpart trained on all 10 features. For the second study, 1135 trials by 10 participants were used to classify two vitality forms. The best-performing model differentiated gentle actions from rude ones with an accuracy of 84.85%. The results also confirm that people touch objects differently when performing these basic actions with different affective states and attitudes.
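
    As an illustrative sketch only, the pipeline below shows how such a classifier could be cross-validated on 10 hand-crafted features per trial; the placeholder data, feature layout and SVM choice are assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: cross-validating a classifier on hand-crafted
# features extracted from iCube tactile maps and rotation data.
# The data below are random placeholders; feature meanings, labels and
# the SVM choice are assumptions, not the authors' pipeline.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(237, 10))      # 237 trials x 10 hand-crafted features
y = rng.integers(0, 4, size=237)    # anger / sadness / excitement / gratitude

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")

# Restricting X to the tactile-only columns and re-running the same
# pipeline would mirror the tactile-versus-all-features comparison.
```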

    A Self for robots: core elements and ascription by humans

    Modern robotics is interested in developing humanoid robots with meta-cognitive capabilities, in order to create systems that can deal efficiently with novel situations and unforeseen inputs. Given the relational nature of human beings, and with a glimpse into the future of assistive robots, it seems relevant to start thinking about the nature of interaction with such robots, which are increasingly human-like not only in appearance but also in behavior. The question posed in this abstract concerns the possibility of ascribing to the robot not only a mind but a more profound dimension: a Self.

    The impact of early aging on visual perception of space and time.

    Visual perception of space and time has been shown to rely on context dependency, an inferential process by which the average magnitude of previously experienced stimuli acts as a prior during perception. This article investigates the presence and evolution of this phenomenon in early aging. Two groups of participants belonging to different age ranges (Young Adults: average age 28.8 years; Older Adults: average age 62.8 years) took part in the study, performing a discrimination task and a reproduction task, each in both a spatial and a temporal condition. In particular, they were asked to evaluate lengths in the spatial domain and interval durations in the temporal one. Early aging was found to be associated with a general decline in perceptual acuity, particularly evident in the temporal condition. The context dependency phenomenon was also preserved during aging, at levels similar to those exhibited by the younger group in both space and time perception. However, the older group showed greater variability in context dependency across participants, perhaps due to different strategies used to cope with greater uncertainty in the perceptual process.
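
    For readers unfamiliar with the measure, a minimal sketch of one common way to quantify context dependency is shown below: regressing reproduced magnitudes on presented magnitudes and treating the shortfall of the slope from 1 as the pull toward the mean. The index and the toy data are illustrative assumptions, not the authors' exact analysis.

```python
# Illustrative sketch (not the authors' exact analysis): context dependency
# quantified as 1 minus the slope of the reproduced-vs-presented regression.
# A slope below 1 means reproductions are pulled toward the mean of the
# previously experienced stimuli (central tendency).
import numpy as np

def context_dependency_index(presented, reproduced):
    """0 = no bias toward the mean; 1 = full regression to the mean."""
    slope, _intercept = np.polyfit(presented, reproduced, deg=1)
    return 1.0 - slope

# Toy data: interval durations (ms) reproduced with a pull toward the mean.
rng = np.random.default_rng(1)
presented = rng.uniform(400, 1200, size=100)
reproduced = 0.7 * presented + 0.3 * presented.mean() + rng.normal(0, 50, 100)
print(f"context dependency index: "
      f"{context_dependency_index(presented, reproduced):.2f}")
```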

    Affective Contagion: How Attitudes Expressed by Others Influence Our Perception of Actions

    Vitality forms represent a fundamental aspect of social interactions, characterizing how actions are performed and how words are pronounced on the basis of the attitude of the agent. The same action, such as a handshake, may have a different impact on the receiver when it is performed kindly or vigorously, and similarly, a gentle or rude tone of voice may have a different impact on the listener. In the present study, we carried out two experiments that aimed to investigate whether and how vocal requests conveying different vitality forms can influence the perception of goal-directed actions, and to measure the duration of this effect over time. More specifically, participants were asked to listen to the voice of an actor pronouncing “give me” in a rude or gentle way. Then, they were asked to observe the initial part of a rude or a gentle passing action, continue it mentally, and estimate the time of its completion. Results showed that the perception of different vitality forms expressed by vocal requests influenced the estimation of action duration. Moreover, we found that this effect was limited to a certain time interval (800 ms), after which it started to decay.

    Effective and anatomical connectivity of the dorso-central insula during the processing of action forms

    In both humans and monkeys, the observation and execution of actions activate a network of parietal and frontal areas. Although this network is involved in encoding the action goal, it does not account for the affective component of the action: its vitality form (VF). Several studies have shown that the observation and execution of actions conveying VFs selectively activate the dorso-central insula (DCI). In the present study, we aimed to clarify, using Dynamic Causal Modeling (DCM), the direction of the information flow across DCI, parieto-frontal areas (PMv, IPL) and the posterior superior temporal sulcus (pSTS) during both the observation and execution of actions conveying VFs. Results indicate that, during observation, DCI receives the visual input from pSTS and, in turn, sends it to the fronto-parietal network. Moreover, DCI significantly modulates PMv. Conversely, during execution, the motor input starts from PMv and reaches DCI and IPL, with a significant modulation from PMv to DCI. The reciprocal exchange of information between PMv and DCI suggests that these areas work closely together in processing the vitality forms of actions. An additional tractography analysis corroborates our DCM models, showing a correspondence between functional connections and anatomical tracts.
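
    For orientation, the toy simulation below implements the bilinear state equation on which DCM is built, dx/dt = (A + Σ_j u_j B_j) x + C u, using the four regions named above; all coupling values are invented for illustration and are not the fitted parameters of the study.

```python
# Toy illustration of the DCM bilinear state equation,
#   dx/dt = (A + sum_j u_j * B_j) x + C u,
# with four nodes (pSTS, DCI, PMv, IPL) wired according to the observation
# result described above (pSTS -> DCI -> PMv/IPL). All coupling strengths
# are made-up illustrative values, not estimates from the study.
import numpy as np

nodes = ["pSTS", "DCI", "PMv", "IPL"]
A = np.array([                      # fixed connectivity (row = target, col = source)
    [-1.0,  0.0,  0.0,  0.0],       # pSTS
    [ 0.4, -1.0,  0.0,  0.0],       # DCI  <- pSTS
    [ 0.0,  0.5, -1.0,  0.0],       # PMv  <- DCI
    [ 0.0,  0.3,  0.0, -1.0],       # IPL  <- DCI
])
B = np.zeros((4, 4))                # modulation by the vitality-form input
B[2, 1] = 0.2                       # VF input strengthens DCI -> PMv
C = np.array([1.0, 0.0, 0.0, 0.0])  # visual input drives pSTS

def simulate(steps=200, dt=0.01):
    """Integrate the state equation with a simple Euler scheme."""
    x = np.zeros(4)
    for t in range(steps):
        u = 1.0 if t < 100 else 0.0       # stimulus on, then off
        dx = (A + u * B) @ x + C * u
        x = x + dt * dx
    return dict(zip(nodes, np.round(x, 3)))

print(simulate())
```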

    Infusion Micro-Pump Development Using MEMS Technology

    Diabetes is a chronic condition that occurs when the pancreas does not produce enough insulin or when the body cannot effectively use the insulin it produces. People with type 1 diabetes (10% of all diabetics) require insulin. People with type 2 diabetes can be treated with oral medication but may also require insulin; 10% of all type 2 diabetics do. Among the current methods of administering insulin (syringes, pens and conventional infusion pumps), silicon-based MEMS (Micro-Electro-Mechanical Systems) pumps offer a way to improve infuser performance. The two main pump families are mechanical and non-mechanical. The former includes check-valve, peristaltic, valveless-rectification and rotary pumps (“displacement pumps”) as well as ultrasonic and centrifugal pumps (“dynamic pumps”); the latter consists of pressure-gradient, concentration-gradient, electrical-potential and magnetic-potential micro-pumps. The micro-pump described here is an electro-mechanical device actuated by a piezoelectric element and based on MEMS technology, minimizing size and cost while offering high-precision drug delivery. Three slices are bonded to obtain the final device: top and bottom caps and an intermediate SOI wafer. With anodic bonding, the top and bottom caps are micromachined borophosphosilicate wafers, whereas with metallic bonding three silicon slices are used. The paper covers the evolution of the device's fabrication through the issues faced during development: design; fluidic, mechanical and electrical simulation and characterization; safety requirements; and final testing. Built-in reliability is ensured by two inner sensors able to detect any occlusion or malfunction and inform the patient. The result is a compact core pump chip that can deliver from 0.02 up to 3.6 Units of insulin per minute with an accuracy better than 5%.
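
    As a back-of-the-envelope sketch (not taken from the paper), the snippet below illustrates the dosing arithmetic for a stroke-based pump across the quoted 0.02 to 3.6 U/min range; the stroke volume and the U-100 insulin concentration (1 U = 10 µL) are assumptions.

```python
# Back-of-the-envelope sketch (not from the paper) of stroke-based dosing:
# the requested flow is accumulated and a piezo stroke is fired whenever a
# full stroke volume is owed, so the long-run delivery error stays small
# even at the lowest rate. Stroke volume and insulin concentration are
# assumptions (U-100 insulin: 1 U = 10 uL = 10,000 nL).
STROKE_NL = 230                  # assumed volume per piezo stroke (nanolitres)
UNIT_NL = 10_000                 # 1 U of U-100 insulin in nanolitres

def simulate_delivery(units_per_min: float, minutes: int = 60) -> float:
    """Return the relative delivery error after `minutes` of dosing."""
    owed_nl, delivered_nl = 0.0, 0.0
    for _ in range(minutes):
        owed_nl += units_per_min * UNIT_NL       # volume owed this minute
        while owed_nl >= STROKE_NL:              # fire strokes to catch up
            owed_nl -= STROKE_NL
            delivered_nl += STROKE_NL
    requested_nl = units_per_min * UNIT_NL * minutes
    return abs(delivered_nl - requested_nl) / requested_nl

for rate in (0.02, 0.5, 3.6):    # the delivery range quoted in the abstract
    print(f"{rate:4.2f} U/min -> error over 1 h: {simulate_delivery(rate):.2%}")
```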

    Spatiotemporal Coordination Supports a Sense of Commitment in Human-Robot Interaction

    In the current study, we presented participants with videos in which a humanoid robot (iCub) and a human agent were tidying up by moving toys from a table into a container. In the High Coordination condition, the two agents worked together in a coordinated manner, with the human picking up the toys and passing them to the robot. In the Low Coordination condition, they worked in parallel without coordinating. Participants were asked to imagine themselves in the position of the human agent and to respond to a battery of questions probing the extent to which they felt committed to the joint action. While we did not observe a main effect of our coordination manipulation, the results do reveal that participants who perceived a higher degree of coordination also indicated a greater sense of commitment to the joint action. Moreover, the results show that participants’ sensitivity to the coordination manipulation was contingent on their prior attitudes towards the robot: participants in the High Coordination condition reported a greater sense of commitment than participants in the Low Coordination condition, except among those participants who were a priori least inclined to experience a close sense of relationship with the robot.