21 research outputs found

    Texture recognition using force sensitive resistors

    This paper presents the results of an experiment that investigates the presence of cues, in the signal generated by a low-cost force sensitive resistor (FSR), for recognising surface texture. The sensor is moved across the surface and the data are analysed to investigate the presence of any patterns. We show that the signal contains enough information to recognise at least one sample surface.
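    The abstract above does not give the paper's signal-processing details; as a minimal sketch of the general idea, the snippet below extracts two simple, hypothetical cues from an FSR time series (variance and zero-crossing rate) and classifies a query signal against known reference surfaces.

```python
import statistics

def texture_features(signal):
    """Extract simple cues from an FSR time series (hypothetical feature
    set, not the paper's): variance of the readings and zero-crossing
    rate of the mean-removed signal."""
    mean = statistics.fmean(signal)
    centred = [s - mean for s in signal]
    variance = statistics.pvariance(signal)
    crossings = sum(
        1 for a, b in zip(centred, centred[1:]) if (a < 0) != (b < 0)
    )
    return variance, crossings / max(len(signal) - 1, 1)

def classify(signal, references):
    """Nearest-reference classification: return the known surface whose
    feature vector is closest (squared Euclidean) to the query's."""
    v, z = texture_features(signal)
    return min(
        references,
        key=lambda name: (references[name][0] - v) ** 2
                         + (references[name][1] - z) ** 2,
    )
```

    A rough surface dragged under the sensor tends to produce a noisier, higher-variance trace than a smooth one, which is what makes even such crude features usable for separating sample surfaces.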

    Agent-based autonomous systems and abstraction engines: Theory meets practice

    We report on experiences in the development of hybrid autonomous systems where high-level decisions are made by a rational agent. This rational agent interacts with other sub-systems via an abstraction engine. We describe three systems we have developed using the EASS BDI agent programming language and framework, which supports this architecture. As a result of these experiences, we recommend changes to the theoretical operational semantics that underpins the EASS framework and present a fourth implementation using the new semantics.
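    The EASS implementation itself is not shown here; as a toy illustration of the architecture the abstract describes, the sketch below (all names hypothetical) has an abstraction engine turn a continuous sensor reading into a discrete belief, which a rational-agent layer then reasons over.

```python
class AbstractionEngine:
    """Minimal sketch (not the EASS implementation): convert continuous
    sensor readings into discrete predicates for a rational agent."""

    def __init__(self, threshold_m=1.0):
        # Hypothetical cutoff: below this range, an obstacle is "close".
        self.threshold_m = threshold_m

    def abstract(self, distance_m):
        """Map a raw range reading to a belief the agent can reason over."""
        return "obstacle_close" if distance_m < self.threshold_m else "path_clear"

class RationalAgent:
    """Toy decision layer: high-level choices made from abstracted
    beliefs only, never from raw sensor data."""

    def decide(self, belief):
        return {"obstacle_close": "stop", "path_clear": "continue"}[belief]
```

    The point of the separation is that the agent's decision logic stays symbolic and verifiable, while all continuous-to-discrete translation is confined to the abstraction engine.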

    Multi robot cooperative area coverage, case study: spraying

    Area coverage is a well-known problem in multi-robot systems, and it is a typical requirement in various real-world applications. A common and popular approach in the robotics community is to use explicit forms of communication for task allocation and coordination. These approaches are susceptible to loss of the communication signal, and are costly, with high computational complexity. Very few approaches focus on implicit forms of communication, in which robots rely only on their local information for task allocation and coordination. In this paper, a cooperative strategy is proposed by which a team of robots sprays a large field. The focus of this paper is to achieve task allocation and coordination using only the robots' local information. Keywords: Multi Robotic System, Cooperative Behaviour, Cooperative Area Coverage.
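    The paper's coordination strategy is not reproduced in the abstract; as a minimal sketch of communication-free allocation under assumed conditions (each robot knows only its own index and the team size), each robot can derive a disjoint share of the field's rows locally, so the team covers the whole field with no messaging.

```python
def assigned_rows(robot_index, num_robots, num_rows):
    """Implicit task-allocation sketch (hypothetical scheme, not the
    paper's): each robot computes its own share of spraying rows from
    locally known values only. Shares are pairwise disjoint and their
    union covers every row, so no explicit coordination is needed."""
    return [r for r in range(num_rows) if r % num_robots == robot_index]
```

    An interleaved (round-robin) split is chosen here only for simplicity; a contiguous-block split computed the same way would also work and reduces travel between rows.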

    The MCAPL Framework including the Agent Infrastructure Layer and Agent Java Pathfinder


    Experimental evaluation of a multi-modal user interface for a robotic service

    This paper reports the experimental evaluation of a Multi-Modal User Interface (MMUI) designed to enhance the user experience in terms of service usability and to increase the acceptability of assistive robot systems by elderly users. The MMUI offers users two main modalities for sending commands: a GUI, usually running on the tablet attached to the robot, and a SUI, using a wearable microphone on the user. The study involved fifteen participants, aged between 70 and 89 years, who were invited to interact with a robotic platform customised for providing everyday care and services to the elderly. The experimental task was to order a meal from three different menus using any interaction modality they liked. Quantitative and qualitative data analyses demonstrate a positive evaluation by users and show that multi-modal interaction can help make elderly-robot interaction more flexible and natural.

    Systematic and Realistic Testing in Simulation of Control Code for Robots in Collaborative Human-Robot Interactions

    © Springer International Publishing Switzerland 2016. Industries such as flexible manufacturing and home care will be transformed by the presence of robotic assistants. Assurance of safety and functional soundness for these robotic systems will require rigorous verification and validation. We propose testing in simulation using Coverage-Driven Verification (CDV) to guide the testing process in an automatic and systematic way. We use a two-tiered test generation approach, where abstract test sequences are computed first and then concretized (e.g., data and variables are instantiated), to reduce the complexity of the test generation problem. To demonstrate the effectiveness of our approach, we developed a testbench for robotic code, running in ROS-Gazebo, that implements an object handover as part of a human-robot interaction (HRI) task. Tests are generated to stimulate the robot's code in a realistic manner, through stimulating the human, environment, sensors, and actuators in simulation. We compare the merits of unconstrained, constrained, and model-based test generation in achieving thorough exploration of the code under test and interesting combinations of human-robot interactions. Our results show that CDV combined with systematic test generation achieves a very high degree of automation in simulation-based verification of control code for robots in HRI.
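    The two-tiered generation idea described above can be sketched as follows; the action names and data ranges are hypothetical placeholders, not taken from the paper's testbench, and the first tier shown is the unconstrained variant.

```python
import random

# Tier 1 vocabulary (hypothetical handover actions, not the paper's).
ABSTRACT_ACTIONS = ["human_reach", "human_gaze", "robot_offer", "robot_release"]

def abstract_sequence(length, rng):
    """Tier 1, unconstrained generation: any ordering of abstract actions.
    Constrained or model-based variants would restrict the orderings."""
    return [rng.choice(ABSTRACT_ACTIONS) for _ in range(length)]

def concretize(sequence, rng):
    """Tier 2: instantiate concrete data for each abstract action (here,
    timing and grasp force), yielding an executable simulator stimulus."""
    return [
        {"action": action,
         "delay_s": round(rng.uniform(0.1, 2.0), 2),
         "force_n": round(rng.uniform(0.0, 5.0), 2)}
        for action in sequence
    ]

rng = random.Random(0)  # seeded for reproducible test generation
test_case = concretize(abstract_sequence(4, rng), rng)
```

    Splitting generation this way keeps the combinatorial search over orderings separate from the (much larger) space of concrete parameter values, which is what reduces the complexity of the overall test-generation problem.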

    Deep learning systems for estimating visual attention in robot-assisted therapy of children with autism and intellectual disability

    Recent studies suggest that some children with autism prefer robots as tutors for improving their social interaction and communication abilities, which are impaired due to their disorder. Indeed, research has focused on developing a very promising form of intervention named Robot-Assisted Therapy. This area of intervention poses many challenges, including the necessary flexibility and adaptability to real unconstrained therapeutic settings, which differ from the constrained lab settings where most of the technology is typically tested. Among the most common impairments of children with autism and intellectual disability is social attention, which includes difficulties in establishing the correct visual focus of attention. This article presents an investigation on the use of novel deep learning neural network architectures for automatically estimating whether the child is focusing their visual attention on the robot during a therapy session, which is an indicator of their engagement. To study the application, the authors gathered data from a clinical experiment in an unconstrained setting, which provided low-resolution videos recorded by the robot camera during the child-robot interaction. Two deep learning approaches are implemented in several variants and compared with a standard algorithm for face detection to verify the feasibility of estimating the status of the child directly from the robot sensors, without relying on bulky external equipment that can distress the child with autism. One of the proposed approaches demonstrated very high accuracy and can be used for offline continuous assessment during therapy, or for autonomously adapting the intervention in future robots with greater computational capabilities.