A Hybrid Model for Situation Monitoring and Conflict Prediction in Human Supervised “Autonomous” Systems

The paper focuses on a key issue for human-supervised “autonomous” systems, namely situation monitoring. The operator’s involvement within the human-robot team is first described as the way they close the control and decision loops. Then a framework based on particle filtering and Petri nets is presented for hybrid numerical-symbolic situation monitoring and for predicting inconsistencies and conflicts within the team (a minimal code sketch of this idea is given at the end of this excerpt).

Human-robot team and the operator’s involvement

Autonomy is not an end in itself in robotic systems. Autonomy is needed because we want robots to be able to cope with mission hazards when communications with the human operator are impossible, whether due to communication gaps, discretion requirements, or the operator’s workload. Adjustable autonomy must therefore be considered, so that the robots can compensate for the operator’s neglect (Goodrich et al. 2001). The control of shared autonomy may be human-initiated, a priori scripted, or robot-initiated (Brookshire, Singh, & Simmons 2004). Whatever the case, implementing adjustable autonomy requires situation awareness (Endsley 2000), including predicting what is likely to happen next, from both the operator’s and the robot’s points of view (Drury, Scholtz, & Yanco 2003).

A functional architecture worth considering when dealing with autonomy and the operator’s roles within a human-robot team is the double loop (Barrouil 1993), which runs the symbolic decision loop (situation monitoring and replanning) in parallel with the classical numerical loop (estimation and control); see Fig. 1. Many papers have suggested autonomy levels for robots (Huang et al. 2004), for human-agent teamwork (Bradshaw et al. 2003), and for UAVs (Clough 2002), while others have focused on the operator’s roles (Yanco & Drury 2002; Scholtz 2003). What we are suggesting here is that the operator’s involvement can be regarded as the way they “close” the loops (Fong, Thorpe, & Baur 2003). Let us distinguish three main autonomy levels for a single robot or UAV agent
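
The hybrid monitor sketched in the abstract can be pictured as a particle filter whose particles carry both a continuous state estimate (the numerical loop) and a symbolic Petri-net marking (the decision loop), with conflict prediction read off as the fraction of particles whose marking reaches a conflict place. The Python sketch below illustrates only that general idea, not the paper’s actual model: the places (nav_auto, near_obstacle, conflict), the guards and thresholds, the one-dimensional range dynamics, and the measurement model are all invented for this example.

    import random

    # Hypothetical 1-safe Petri net: each transition is (preset, postset,
    # guard on the continuous state). Places and guards are invented here.
    TRANSITIONS = [
        ({"nav_auto"},      {"near_obstacle"}, lambda x: x["dist"] < 5.0),
        ({"near_obstacle"}, {"conflict"},      lambda x: x["dist"] < 1.0),
    ]

    def fire_enabled(marking, x):
        """Fire, once and in order, every transition whose preset is marked
        and whose guard holds on the continuous state x (a simplification)."""
        for pre, post, guard in TRANSITIONS:
            if pre <= marking and guard(x):
                marking = (marking - pre) | post
        return marking

    def propagate(x):
        """Numerical prediction: the robot noisily closes in on an obstacle."""
        return {"dist": x["dist"] - 1.0 + random.gauss(0.0, 0.2)}

    def likelihood(x, z):
        """Weight a particle by how well it explains range measurement z."""
        return max(1e-9, 1.0 - abs(x["dist"] - z) / 10.0)

    def step(particles, z):
        """One hybrid step: numerical prediction, symbolic transition firing,
        weighting against the measurement, then multinomial resampling."""
        predicted = []
        for x, m, _ in particles:
            x2 = propagate(x)
            m2 = fire_enabled(m, x2)   # symbolic update driven by numerics
            predicted.append((x2, m2, likelihood(x2, z)))
        total = sum(w for _, _, w in predicted)
        return random.choices(predicted,
                              weights=[w / total for _, _, w in predicted],
                              k=len(predicted))

    def conflict_probability(particles):
        """Predicted conflict = share of particles marking 'conflict'."""
        return sum("conflict" in m for _, m, _ in particles) / len(particles)

    particles = [({"dist": 8.0 + random.gauss(0.0, 0.3)},
                  frozenset({"nav_auto"}), 1.0) for _ in range(300)]
    for z in [7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.5]:   # simulated ranges
        particles = step(particles, z)
        print(f"z={z:3.1f}  P(conflict)={conflict_probability(particles):.2f}")

As the filtered range estimate shrinks, more particles cross the guards and the predicted conflict probability climbs, which is the kind of look-ahead that would let an operator, or the robot itself, adjust the autonomy level before a conflict actually occurs.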