Spatial Knowledge Representation
Non-intuitive styles of interaction between humans and mobile robots still constitute a major barrier to the wider application and acceptance of mobile robot technology. More natural interaction can only be achieved if ways are found of bridging the gap between the forms of spatial knowledge maintained by such robots and the forms of language used by humans to communicate such knowledge.
Interaction Debugging: an Integral Approach to Analyze Human-Robot Interaction
Along with the development of interactive robots, controlled experiments and field trials are regularly conducted to stage human-robot interaction. Experience in this field has shown that analyzing human-robot interaction fosters the development of improved systems and the generation of new knowledge. In this paper, we present the interaction debugging approach, which is an integral way of analyzing human-robot interaction. It is based on the collection and analysis of data from robots and their environment. Considering the current state of robotic technology, audio and video alone are often insufficient for a detailed analysis of human-robot interaction. Therefore, in our analysis we combine multiple types of data, including audio, video, sensor values, and intermediate variables. An important aspect of the interaction debugging approach is the use of a tool called Interaction Debugger to analyze data. By supporting user-friendly data presentation, annotation, and navigation, Interaction Debugger enables fine-grained analyses of human-robot interaction. The main goal of this paper is to address how an integral approach to the analysis of human-robot interaction can be adopted. This is demonstrated by three case studies.
Robot Vitals and Robot Health: An Intuitive Approach to Quantifying and Communicating Predicted Robot Performance Degradation in Human-Robot Teams
In this work we introduce the concept of Robot Vitals and propose a framework for systematically quantifying the performance degradation experienced by a robot. A performance indicator or parameter can be called a Robot Vital if it can be consistently correlated with a robot's failure, faulty behaviour or malfunction. Robot Health can be quantified as the entropy of observing a set of vitals. Robot Vitals and Robot Health are intuitive ways to quantify a robot's ability to function autonomously. Robots programmed with multiple levels of autonomy (LOA) do not scale well when a human is in charge of regulating the LOAs. Artificial agents can use robot vitals to assist operators with LOA switches that fix field-repairable, non-terminal performance degradation in mobile robots. Robot health can also be used to aid a tele-operator's judgement and promote explainability (e.g. via visual cues), thereby reducing operator workload while promoting trust and engagement with the system. In multi-robot systems, agents can use robot health to prioritise the robots most in need of tele-operator attention. The vitals proposed in this paper are: rate of change of signal strength; sliding-window average of the difference between expected and actual robot velocity; robot acceleration; rate of increase in area coverage; and localisation error.
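The idea that Robot Health can be expressed as the entropy of observing a set of vitals can be illustrated with a short sketch. The vital names, scaling, and the use of Shannon entropy over normalised readings below are assumptions made for illustration only, not the paper's exact formulation.

```python
import numpy as np

def robot_health(vitals):
    """Toy health score: Shannon entropy of normalised vital readings.

    `vitals` maps a vital name to a non-negative degradation signal
    (hypothetical scaling; not the authors' implementation).
    """
    values = np.clip(np.array(list(vitals.values()), dtype=float), 1e-9, None)
    p = values / values.sum()                 # normalise to a distribution
    return float(-(p * np.log2(p)).sum())     # Shannon entropy in bits

# Example with made-up readings for the five vitals listed above.
print(robot_health({
    "signal_strength_rate": 0.05,
    "velocity_mismatch": 0.40,
    "acceleration": 0.10,
    "coverage_rate": 0.20,
    "localisation_error": 0.25,
}))
```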
Benchmark movement data set for trust assessment in human robot collaboration
In the Drapebot project, a worker is supposed to collaborate with a large industrial manipulator in two tasks: collaborative transport of carbon fibre patches and collaborative draping. To realize data-driven trust assessment, the worker is equipped with a motion tracking suit and the body movement data is labeled with trust scores from a standard trust questionnaire.
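As a rough sketch of the kind of data-driven trust model such labelled movement data could support, the following assumes per-window movement features paired with questionnaire trust scores; the placeholder data, feature dimensionality, and choice of a random-forest regressor are illustrative assumptions, not taken from the Drapebot project.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Placeholder data: one row per time window of motion-suit features
# (e.g. joint-velocity statistics), labelled with a questionnaire trust score.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))       # hypothetical movement features
y = rng.uniform(1, 7, size=200)      # hypothetical trust scores on a 1-7 scale

model = RandomForestRegressor(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```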
Benchmark EEG data set for trust assessment for interactions with social robots
The data collection consisted of a game interaction with a small humanoid EZ-robot. The robot explains a word to the participant either through movements depicting the concept or by verbal description. Depending on their performance, participants could "earn" or lose candy as remuneration for their participation. The dataset comprises EEG (electroencephalography) recordings from 21 participants, gathered using Emotiv headsets. Each participant's EEG data includes timestamps and measurements from 14 sensors placed across different regions of the scalp. The sensor labels in the header are as follows: EEG.AF3, EEG.F7, EEG.F3, EEG.FC5, EEG.T7, EEG.P7, EEG.O1, EEG.O2, EEG.P8, EEG.T8, EEG.FC6, EEG.F4, EEG.F8, EEG.AF4, and Time.
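Assuming the recordings are distributed as per-participant CSV files with the header listed above, a loading sketch might look like the following; the file name is hypothetical and the actual layout of the released files may differ.

```python
import pandas as pd

# The 14 Emotiv channel labels listed in the dataset header.
CHANNELS = ["EEG.AF3", "EEG.F7", "EEG.F3", "EEG.FC5", "EEG.T7", "EEG.P7",
            "EEG.O1", "EEG.O2", "EEG.P8", "EEG.T8", "EEG.FC6", "EEG.F4",
            "EEG.F8", "EEG.AF4"]

df = pd.read_csv("participant_01.csv")   # hypothetical file name
signals = df[CHANNELS]                   # 14-channel EEG samples
timestamps = df["Time"]                  # per-sample timestamps
print(signals.describe())                # quick sanity check of the recording
```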
A framework for interactive human–robot design exploration
This study seeks to identify key aspects for increased integration of interactive robotics within the creative design process. As foundational research, the study aims to contribute to the advancement of new explorative design methods that support architects in their exploration of fabrication and assembly of an integrated performance-driven architecture. The article describes and investigates a proposed design framework for supporting an interactive human–robot design process. The proposed framework is examined through a 3-week architectural studio in which university master students explored the design of a brick construction with the support of an interactive robotic platform. Evaluation of the proposed framework was done by triangulating the authors' qualitative user observations, quantitative logging of the students' individual design processes, and questionnaires completed after the studies. The results suggest that interactive human–robot fabrication is a relevant mode of design with a positive effect on the process of creative design exploration.
AAU RainSnow Traffic Surveillance Dataset
Rain, Snow, and Bad Weather in Traffic Surveillance. Computer vision-based image analysis lays the foundation for automatic traffic surveillance. This works well in daylight, when road users are clearly visible to the camera, but often struggles when the visibility of the scene is impaired by insufficient lighting or bad weather conditions such as rain, snow, haze, and fog. For this dataset, we have focused on collecting traffic surveillance video in rainfall and snowfall, capturing 22 five-minute videos from seven different traffic intersections. The illumination of the scenes varies from broad daylight to twilight and night. The scenes feature glare from car headlights, reflections from puddles, and blur from raindrops on the camera lens. We have collected the data using a conventional RGB colour camera and a thermal infrared camera. If combined, these modalities should enable robust detection and classification of road users even under challenging weather conditions. 100 frames have been selected randomly from each five-minute sequence, and every road user in these frames is annotated at the per-pixel, instance level with a corresponding category label. In total, 2,200 frames are annotated, containing 13,297 objects.
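The claim that combining the RGB and thermal modalities supports more robust detection can be illustrated with a simple early-fusion sketch that stacks the thermal channel onto the RGB frame; the file paths below are hypothetical, and the dataset itself provides only the frames and per-pixel annotations, not this pipeline.

```python
import cv2
import numpy as np

# Hypothetical paths to a temporally aligned RGB / thermal frame pair.
rgb = cv2.imread("rgb/frame_0001.png")                                 # H x W x 3
thermal = cv2.imread("thermal/frame_0001.png", cv2.IMREAD_GRAYSCALE)   # H x W

# Early fusion: append the thermal channel so a detector can fall back on
# heat signatures when rain, glare, or darkness degrades the RGB image.
thermal = cv2.resize(thermal, (rgb.shape[1], rgb.shape[0]))
fused = np.dstack([rgb, thermal])                                      # H x W x 4
print(fused.shape)   # four-channel input for a detector trained on fused frames
```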
Report_Questionnaire_for_Architectural_Studio – Supplemental material for A framework for interactive human–robot design exploration
Supplemental material, Report_Questionnaire_for_Architectural_Studio, for A framework for interactive human–robot design exploration by Mads Brath Jensen, Isak Worre Foged and Hans Jørgen Andersen in the International Journal of Architectural Computing.
Sewer-ML
Sewer-ML is a sewer defect dataset. It contains 1.3 million images from 75,618 videos collected from three Danish water utility companies over nine years. All videos have been annotated by licensed sewer inspectors following the Danish sewer inspection standard, Fotomanualen. This leads to consistent and reliable annotations across a total of 17 annotated defect classes.
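Because a single sewer image can carry several of the 17 defect labels, classification over Sewer-ML is naturally framed as a multi-label problem; the sketch below, with an off-the-shelf backbone and placeholder batch, is an illustration of that framing rather than the benchmark's reference model.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

NUM_DEFECT_CLASSES = 17   # annotated defect classes in Sewer-ML

# Illustrative multi-label classifier: one independent sigmoid per defect class.
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_DEFECT_CLASSES)
criterion = nn.BCEWithLogitsLoss()   # multi-label objective

images = torch.randn(4, 3, 224, 224)                              # placeholder batch
targets = torch.randint(0, 2, (4, NUM_DEFECT_CLASSES)).float()    # placeholder labels
loss = criterion(model(images), targets)
print(loss.item())
```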
