A visualisation and simulation framework for local and remote HRI experimentation
In this text, we present work on the design and development of a ROS-based (Robot Operating System) remote 3D visualisation, control and simulation framework. This architecture has the purpose of extending the usability of a system devised in previous work by this research team during the CASIR (Coordinated Attention for Social Interaction with Robots) project. The proposed solution was implemented using ROS, and designed to address the needs of two user groups: local and remote users and developers. The framework consists of: (1) a fully functional simulator integrated with the ROS environment, including a faithful representation of a robotic platform, a human model with animation capabilities and enough features for enacting human-robot interaction scenarios, and a virtual experimental setup with similar features to the real laboratory workspace; (2) a fully functional and intuitive user interface for monitoring and development; (3) a remote robotic laboratory that connects remote users to the framework via a web browser. The proposed solution was thoroughly and systematically tested under operational conditions, so as to assess its qualities in terms of features, ease-of-use and performance. Finally, conclusions concerning the success and potential of this research and development effort are drawn, and the foundations for future work are proposed
Attentional mechanisms for socially interactive robots: a survey
This review intends to provide an overview of the state of the art in the modeling and implementation of automatic attentional mechanisms for socially interactive robots. Humans assess and exhibit intentionality by resorting to multisensory processes that are deeply rooted within low-level automatic attention-related mechanisms of the brain. For robots to engage with humans properly, they should also be equipped with similar capabilities. Joint attention, the precursor of many fundamental types of social interactions, has been an important focus of research in the past decade and a half, therefore providing the perfect backdrop for assessing the current status of state-of-the-art automatic attentional-based solutions. Consequently, we propose to review the influence of these mechanisms in the context of social interaction in cutting-edge research work on joint attention. This will be achieved by summarizing the contributions already made in these matters in robotic cognitive systems research, by identifying the main scientific issues to be addressed by these contributions and analyzing how successful they have been in this respect, and by consequently drawing conclusions that may suggest a roadmap for future successful research efforts
Fast exact Bayesian inference for high-dimensional models
In this text, we present the principles that allow the tractable implementation of exact inference processes concerning a group of widespread classes of Bayesian generative models, which have until recently been deemed as intractable whenever formulated using high-dimensional joint distributions. We will demonstrate the usefulness of such a principled approach with an example of real-time OpenCL implementation using GPUs of a full-fledged, computer vision-based model to estimate gaze direction in human-robot interaction (HRI)
Multisensory 3D saliency for artificial attention systems
In this paper we present proof-of-concept for a novel solution consisting of a short-term 3D memory for artificial attention systems, loosely inspired in perceptual processes believed to be implemented in the human brain. Our solution supports the implementation of multisensory perception and stimulus-driven processes of attention. For this purpose, it provides (1) knowledge persistence with temporal coherence tackling potential salient regions outside the field of view, via a panoramic, log-spherical inference grid; (2) prediction, by using estimates of local 3D velocity to anticipate the effect of scene dynamics; (3) spatial correspondence between volumetric cells potentially occupied by proto-objects and their corresponding multisensory saliency scores. Visual and auditory signals are processed to extract features that are then filtered by a proto-object segmentation module that employs colour and depth as discriminatory traits. We consider as features, apart from the commonly used colour and intensity contrast, colour bias, the presence of faces, scene dynamics and also loud auditory sources. Combining conspicuity maps derived from these features we obtain a 2D saliency map, which is then processed using the probability of occupancy in the scene to construct the final 3D saliency map as an additional layer of the Bayesian Volumetric Map (BVM) inference grid
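The final fusion step described above — combining per-feature conspicuity maps into a 2D saliency map and weighting it by the probability of occupancy to obtain the 3D layer — might be sketched as follows. The function name, the min-max normalisation, the linear weighting and the per-depth-layer broadcast are illustrative assumptions, not the paper's exact BVM update:

```python
import numpy as np

def combine_saliency(conspicuity_maps, weights, occupancy):
    """Fuse 2D conspicuity maps (colour contrast, intensity, faces,
    scene dynamics, auditory sources, ...) into a master 2D saliency
    map, then lift it into a 3D grid by weighting each volumetric
    cell with its probability of occupancy."""
    s2d = np.zeros_like(conspicuity_maps[0], dtype=float)
    for w, c in zip(weights, conspicuity_maps):
        rng = c.max() - c.min()
        if rng > 0:                       # min-max normalise each channel
            s2d += w * (c - c.min()) / rng
    # occupancy has shape (depth, H, W); broadcast the 2D map over depth
    return occupancy * s2d[None, :, :]
```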
Gaze tracing in a bounded log-spherical space for artificial attention systems
Human gaze is one of the most important cues for social robotics due to its embedded intention information. Discovering the location or the object that an interlocutor is staring at gives the machine some insight to perform the correct attentional behaviour. This work presents a fast voxel traversal algorithm for estimating the potential locations that a human is gazing at. Given a 3D occupancy map in log-spherical coordinates and the gaze vector, we evaluate the regions that are relevant for attention by computing the set of intersected voxels between an arbitrary gaze ray in 3D space and a log-spherical bounded section defined by ρ ∈ (ρmin, ρmax), θ ∈ (θmin, θmax), φ ∈ (φmin, φmax). The first intersected voxel is computed in closed form and the rest are obtained by binary search, guaranteeing no repetitions in the intersected set. The proposed method is motivated and validated within a human-robot interaction application: gaze tracing for artificial attention systems
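To make the coordinate system concrete, a minimal sketch of mapping a Cartesian point to a voxel index in a bounded log-spherical grid is shown below. This is only the indexing step; the paper's closed-form first-intersection and binary-search ray traversal are not reproduced, and all names and bounds are illustrative:

```python
import numpy as np

def log_spherical_voxel(p, rho_min, rho_max, n_rho, n_theta, n_phi):
    """Map a 3D Cartesian point p = (x, y, z) to (i, j, k) voxel indices
    in a log-spherical grid bounded radially by rho in (rho_min, rho_max),
    with full azimuth and inclination coverage. Returns None when the
    point lies outside the bounded section."""
    x, y, z = p
    r = np.sqrt(x * x + y * y + z * z)
    rho = np.log(r)                      # log-radial coordinate
    theta = np.arctan2(y, x)             # azimuth in (-pi, pi]
    phi = np.arccos(z / r)               # inclination in [0, pi]
    if not (rho_min <= rho < rho_max):
        return None
    i = int((rho - rho_min) / (rho_max - rho_min) * n_rho)
    j = int((theta + np.pi) / (2 * np.pi) * n_theta) % n_theta
    k = min(int(phi / np.pi * n_phi), n_phi - 1)
    return i, j, k
```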
Evaluating the influence of automatic attentional mechanisms in human-robot interaction
The human ability of unconsciously attending to social signals, together with other even more primitive automatic attentional processes, has been argued in the literature to play an important part in social interaction. In this paper, we will argue that the evaluation of the influence of these unconscious perceptual processes in social interaction with robots has been addressed in previous research in many cases in an ad hoc fashion, while, on the contrary, it should be tackled systematically, bridging more conventional measures from robotics with criteria stemming from ideas used in human studies in psychology, neuroscience and social sciences. We will start by establishing an experimental canvas that will limit complexity to a sustainable level, while still fostering adaptive behaviour and variability in interaction. We will then present a brief assessment of the criteria used in the HRI literature to evaluate the success of this particular type of experiment, followed by a suggestion of adaptation of other criteria used in human studies, which has only been sporadically and non-systematically performed in HRI research; in most cases, more as an expression of future intent. We will conclude by proposing a methodology for this evaluation, to be applied in the project "Coordinated Attention for Social Interaction with Robots" sponsored by the Portuguese Foundation for Science and Technology (FCT)
Touch attention Bayesian models for robotic active haptic exploration of heterogeneous surfaces
This work contributes to the development of active haptic exploration strategies of surfaces using robotic hands in environments with an unknown structure. The architecture of the proposed approach consists of two main Bayesian models, implementing the touch attention mechanisms of the system. The model πper perceives and discriminates different categories of materials (haptic stimulus), integrating compliance and texture features extracted from haptic sensory data. The model πtar actively infers the next region of the workspace that should be explored by the robotic system, integrating the task information, the permanently updated saliency and uncertainty maps extracted from the perceived haptic stimulus map, as well as inhibition-of-return mechanisms. The experimental results demonstrate that the Bayesian model πper can be used to discriminate 10 different classes of materials with an average recognition rate higher than 90%. The generalization capability of the proposed models was demonstrated experimentally. In simulation, the ATLAS robot was able to follow a discontinuity between two regions made of different materials with a divergence smaller than 1 cm (30 trials). The tests were performed in scenarios with 3 different configurations of the discontinuity. The Bayesian models have demonstrated the capability to manage the uncertainty about the structure of the surfaces and sensory noise to make correct motor decisions from haptic percepts
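A perception model of the kind attributed above to πper — discriminating material classes from compliance and texture features — can be sketched as a naive-Bayes classifier with Gaussian likelihoods. This is a schematic stand-in under stated assumptions (class-conditional independence, Gaussian features), not the paper's actual formulation:

```python
import numpy as np

def classify_material(features, means, variances, priors):
    """Posterior over material classes given a haptic feature vector
    (e.g. compliance and texture descriptors), assuming independent
    Gaussian likelihoods per feature. means/variances have shape
    (n_classes, n_features); priors has shape (n_classes,)."""
    log_post = np.log(np.asarray(priors, float))
    for c in range(len(priors)):
        log_post[c] += np.sum(
            -0.5 * np.log(2 * np.pi * variances[c])
            - 0.5 * (features - means[c]) ** 2 / variances[c]
        )
    log_post -= log_post.max()           # numerical stability
    post = np.exp(log_post)
    return post / post.sum()
```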
A Bayesian hierarchy for robust gaze estimation in human-robot interaction
In this text, we present a probabilistic solution for robust gaze estimation in the context of human-robot interaction. Gaze estimation, in the sense of continuously assessing the gaze direction of an interlocutor so as to determine his/her focus of visual attention, is important in several computer vision applications, such as the development of non-intrusive gaze-tracking equipment for psychophysical experiments in neuroscience, specialised telecommunication devices, video surveillance, human-computer interfaces (HCI) and artificial cognitive systems for human-robot interaction (HRI), our application of interest. We have developed a robust solution based on a probabilistic approach that inherently deals with the uncertainty of sensor models, but also and in particular with uncertainty arising from distance, incomplete data and scene dynamics. This solution comprises a hierarchical formulation in the form of a mixture model that loosely follows how geometrical cues provided by facial features are believed to be used by the human perceptual system for gaze estimation. A quantitative analysis of the proposed framework's performance was undertaken through a thorough set of experimental sessions. Results show that the framework performs according to the difficult requirements of HRI applications, namely by exhibiting correctness, robustness and adaptiveness
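One schematic reading of "a mixture model over geometrical cues" is moment-matched fusion of per-cue gaze-angle estimates, each modelled as a Gaussian. All names, the 1D simplification and the fusion rule below are illustrative assumptions, not the paper's hierarchy:

```python
import numpy as np

def fuse_gaze_cues(mus, sigmas, weights):
    """Fuse per-cue gaze-angle estimates (e.g. from different facial
    features) as a weighted 1D Gaussian mixture, returning the
    moment-matched mixture mean and variance. Larger variance signals
    lower confidence in the fused estimate."""
    mus = np.asarray(mus, float)
    sigmas = np.asarray(sigmas, float)
    w = np.asarray(weights, float)
    w = w / w.sum()                                   # normalise weights
    mu = np.sum(w * mus)                              # mixture mean
    var = np.sum(w * (sigmas ** 2 + mus ** 2)) - mu ** 2   # mixture variance
    return mu, var
```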
Brief survey on computational solutions for Bayesian inference
In this paper, we present a brief review of research work attempting to tackle the issue of tractability in Bayesian inference, including an analysis of the applicability and trade-offs of each proposed solution. In recent years, the Bayesian approach has become increasingly popular, endowing autonomous systems with the ability to deal with uncertainty and incompleteness. However, these systems are also expected to be efficient, while Bayesian inference in general is known to be an NP-hard problem, making it paramount to develop approaches dealing with this complexity in order to allow the implementation of usable Bayesian solutions. Novel computational paradigms and also major developments in massively parallel computation technologies, such as multi-core processors, GPUs and FPGAs, provide us with an inkling of the roadmap in Bayesian computation for upcoming years
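The tractability problem referred to above is easy to exhibit: exact inference by brute-force enumeration over n binary variables costs O(2^n). The sketch below computes an exact marginal posterior this way; the interface (an unnormalised joint given as a callable) is an illustrative assumption:

```python
import itertools

def exact_posterior(joint, n, q, evidence):
    """Exact P(X_q | evidence) over n binary variables by enumerating
    all 2^n assignments consistent with the evidence and summing the
    unnormalised joint -- the exponential cost that motivates work on
    tractable Bayesian inference. `joint` maps an assignment tuple to
    an unnormalised probability; `evidence` maps indices to values."""
    scores = [0.0, 0.0]
    for assign in itertools.product((0, 1), repeat=n):
        if any(assign[i] != v for i, v in evidence.items()):
            continue
        scores[assign[q]] += joint(assign)
    z = sum(scores)
    return [s / z for s in scores]
```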
Integration of touch attention mechanisms to improve the robotic haptic exploration of surfaces
This text presents the integration of touch attention mechanisms to improve the efficiency of the action-perception loop typically involved in active haptic exploration tasks of surfaces by robotic hands. The progressive inference of regions of the workspace that should be probed by the robotic system uses information related to haptic saliency extracted from the perceived haptic stimulus map (exploitation) and a "curiosity"-inducing prioritisation based on the reconstruction's inherent uncertainty and inhibition-of-return mechanisms (exploration), modulated by top-down influences stemming from current task objectives, updated at each exploration iteration. This work also extends the scope of the top-down modulation of information presented in a previous work, by integrating into the decision process the influence of shape cues of the current exploration path. The Bayesian framework proposed in this work was tested in a simulation environment. A scenario made of three different materials was explored autonomously by a robotic system. The experimental results show that the system was able to perform three different haptic discontinuity following tasks with good structural accuracy, demonstrating the selectivity and generalization capability of the attention mechanisms. These experiments confirmed the fundamental contribution of the haptic saliency cues to the success and accuracy of the execution of the tasks
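One iteration of the exploitation/exploration loop described above can be sketched as follows. The additive fusion, the task weights, the multiplicative inhibition gate and the decay factor are all illustrative assumptions, not the paper's Bayesian formulation:

```python
import numpy as np

def exploration_step(saliency, uncertainty, shape_bonus, inhibition, task_w):
    """One touch-attention iteration: fuse haptic saliency (exploitation),
    reconstruction uncertainty ('curiosity', exploration) and shape cues,
    gate by inhibition of return, select the next probe location, then
    strengthen inhibition there so subsequent probes move elsewhere."""
    priority = (task_w[0] * saliency
                + task_w[1] * uncertainty
                + task_w[2] * shape_bonus) * (1.0 - inhibition)
    idx = np.unravel_index(np.argmax(priority), priority.shape)
    inhibition = 0.9 * inhibition        # earlier inhibitions decay
    inhibition[idx] = 1.0                # suppress the just-probed cell
    return idx, inhibition
```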