I want what you've got: Cross-platform portability and human-robot interaction assessment
Human-robot interaction is a subtle yet critical aspect of design that must be assessed during the development of both the human-robot interface and robot behaviors if the human-robot team is to meet the complexities of the task environment effectively. Testing not only ensures that the system can successfully achieve the tasks for which it was designed; more importantly, usability testing allows designers to understand how humans and robots can, will, and should work together to optimize workload distribution. A lack of human-centered robot interface design, the rigidity of sensor configurations, and the platform-specific nature of research robot development environments are a few of the factors preventing robotic solutions from reaching functional utility in real-world environments. The difficult engineering challenge of implementing adroit reactive behavior, reliable communication, and trustworthy autonomy combined with system transparency and usable interfaces is often overlooked in favor of other research aims. The result is that many robotic systems never reach the level of functional utility necessary even to evaluate the efficacy of the basic system, much less become systems that can be used in critical, real-world environments. Further, because control architectures and interfaces are often platform specific, it is difficult or even impossible to make usability comparisons between them. This paper discusses the challenges inherent in conducting human factors testing of variable autonomy control architectures across platforms within a complex, real-world environment. It discusses the need to compare behaviors, architectures, and interfaces within a structured environment containing challenging real-world tasks, and the implications that the way humans and robots interact in true interactive teams has for the acceptance of, and trust in, autonomous robotic systems.
Attitude perception of an unmanned ground vehicle using an attitude haptic feedback device
To safely teleoperate an unmanned ground vehicle (UGV) through rough terrain, a human operator needs to be aware of its attitude. This awareness ensures that the operator can avoid rolling or tipping over the UGV on steep slopes or in terrain depressions. Yet it has been challenging to develop teleoperation systems that provide attitude awareness to human operators, and research so far has focused on solutions implemented through the visual modality. We take a different approach, using haptic feedback to transmit a UGV's attitude to a human operator. Our novel attitude haptic feedback device (AHFD) provides information about the UGV's roll and pitch, and their directions of rotation, through the use of upper-limb proprioception. We also discuss a preliminary user study investigating the influence of two different AHFD configurations (natural and ergonomic) on attitude perception. Our results indicate no difference between the two AHFD configurations in judging attitude states and directions of rotation. However, the natural configuration is perceived as causing higher physical strain and demand, while the ergonomic configuration causes higher overall mental effort. We also found that participants had more difficulty judging pitch attitude at higher angles.
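The abstract does not specify how the AHFD translates vehicle attitude into device motion, but the core idea of relaying roll and pitch to a two-axis display can be sketched as a simple mapping. The function name, the normalized command range, and the 45-degree saturation limit below are all assumptions for illustration, not the device's actual design:

```python
def attitude_to_ahfd_command(roll_deg, pitch_deg, max_angle_deg=45.0):
    """Map a UGV's roll and pitch (degrees) to normalized commands
    in [-1, 1] for a hypothetical two-axis haptic display.

    Angles beyond max_angle_deg are clamped so the device saturates
    at its assumed mechanical range instead of exceeding it. The sign
    of each command preserves the direction of rotation.
    """
    def clamp(angle):
        return max(-max_angle_deg, min(max_angle_deg, angle))

    return (clamp(roll_deg) / max_angle_deg,
            clamp(pitch_deg) / max_angle_deg)
```

For example, a 22.5-degree roll on a 45-degree range yields a half-scale roll command, while any pitch at or beyond the limit drives the pitch axis to full deflection.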
Flexible robotic control via co-operation between an operator and an AI-based control system
This thesis addresses the problem of variable autonomy in teleoperated mobile robots. Variable autonomy refers to the approach of incorporating several different levels of autonomous capability (Level(s) of Autonomy (LOA)), ranging from pure teleoperation (the human has complete control of the robot) to full autonomy (the robot has control of every capability), within a single robot. Most robots used for demanding and safety-critical tasks (e.g. search and rescue, hazardous-environment inspection) are currently teleoperated in simple ways, but could soon start to benefit from variable autonomy. Variable autonomy would allow Artificial Intelligence (AI) control algorithms to autonomously take control of certain functions when the human operator is suffering from high workload, high cognitive load, anxiety, or other distractions and stresses. In contrast, some circumstances may still necessitate direct human control of the robot. More specifically, this thesis investigates the issues of dynamically changing LOA (i.e. during task execution) using either Human-Initiative (HI) or Mixed-Initiative (MI) control. MI refers to a peer-to-peer relationship between the robot and the operator in terms of the authority to initiate actions and LOA switches. HI refers to the human operator switching LOA based on their judgment, with the robot having no capacity to initiate LOA switches. A HI controller and a novel expert-guided MI controller are presented in this thesis. These controllers were evaluated using a multidisciplinary, systematic experimental framework that combines quantifiable and repeatable performance-degradation factors for both the robot and the operator. The thesis presents statistically validated evidence that variable autonomy, in the form of HI and MI, provides advantages over using teleoperation alone or autonomy alone in various scenarios.
Lastly, analyses of the interactions between the operators and the variable autonomy systems are reported. These analyses highlight the importance of personality traits and preferences, trust in the system, and the human operator's understanding of the system, in the context of HRI with the proposed controllers.
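The HI/MI distinction described above can be illustrated with a minimal switching sketch: under HI only the operator's request changes the LOA, while under MI the system may also initiate a switch. The scalar workload estimate and its threshold are hypothetical stand-ins; the thesis's expert-guided MI controller is far more sophisticated than this toy decision rule:

```python
from enum import Enum

class LOA(Enum):
    """Two ends of the variable-autonomy spectrum, for illustration."""
    TELEOPERATION = 0  # human has complete control
    AUTONOMY = 1       # robot controls the capability

def mi_loa_switch(current, operator_request, workload, threshold=0.8):
    """Toy Mixed-Initiative LOA switch: both parties can initiate.

    An explicit operator request always wins (the Human-Initiative
    path). Otherwise, if an assumed workload estimate in [0, 1]
    exceeds the threshold, the system itself raises the LOA.
    """
    if operator_request is not None:
        return operator_request          # human-initiated switch
    if current is LOA.TELEOPERATION and workload > threshold:
        return LOA.AUTONOMY              # robot-initiated switch
    return current                       # no switch
```

A pure HI controller is the same function with the robot-initiated branch removed: the LOA changes only when `operator_request` is set.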