
    Trustworthiness of autonomous systems

    Effective robots and autonomous systems must be trustworthy. This chapter examines models of trustworthiness from a philosophical and empirical perspective to inform the design and adoption of autonomous systems. Trustworthiness is a property of trusted agents or organisations that engenders trust in other agents or organisations. Trust is a complex phenomenon defined differently depending on the discipline. This chapter aims to bring different approaches under a single framework for investigation, organised around three sorts of questions: Who or what is trustworthy? (metaphysics). How do we know who or what is trustworthy? (epistemology). What factors influence who or what we should trust? (normativity). A two-component model of trust is used that incorporates competence (skills, reliability and experience) and integrity (motives, honesty and character). It is supposed that human levels of competence yield the highest trust, whereas trust is reduced at sub-human and super-human levels. The threshold for trustworthiness of an agent or organisation in a particular context is a function of their relationship with the truster and the potential impacts of decisions. Building trustworthy autonomous systems requires obeying the norms of logic, rationality and ethics under pragmatic constraints, even though experts disagree on these principles. Autonomous systems may need sophisticated social identities, including empathy and reputational concerns, to build human-like trust relationships. Ultimately, transdisciplinary research drawing on metaphysical, epistemological and normative theories of human and machine trust is needed to design trustworthy autonomous systems for adoption.
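
    The chapter describes the two-component model and the context-dependent threshold only in prose; the sketch below is one possible way to make those ideas concrete. All names, numeric scales, weights and functional forms (the quadratic competence penalty, the threshold formula) are assumptions made for illustration, not content from the chapter.

```python
from dataclasses import dataclass


@dataclass
class Agent:
    competence: float  # skills, reliability, experience; 1.0 ~ human level (assumed scale)
    integrity: float   # motives, honesty, character; in [0, 1] (assumed scale)


def perceived_trust(agent: Agent) -> float:
    """Combine competence and integrity into a single trust score.

    Trust is assumed to peak when competence is at the human level (1.0)
    and to fall off at sub-human and super-human levels, as the abstract
    suggests; the quadratic penalty is an arbitrary modelling choice.
    """
    competence_factor = max(0.0, 1.0 - (agent.competence - 1.0) ** 2)
    return competence_factor * agent.integrity


def is_trustworthy(agent: Agent, relationship: float, impact: float) -> bool:
    """Compare trust against a context-dependent threshold.

    The threshold is assumed to rise with the potential impact of decisions
    and to fall with the strength of the truster's relationship to the agent;
    the specific functional form is not taken from the chapter.
    """
    threshold = min(1.0, impact * (1.0 - 0.5 * relationship))
    return perceived_trust(agent) >= threshold


if __name__ == "__main__":
    robot = Agent(competence=0.9, integrity=0.8)
    print(is_trustworthy(robot, relationship=0.6, impact=0.7))
```

    A usage note: under these assumed formulas, raising the impact of a decision raises the trustworthiness threshold, while a stronger prior relationship with the truster lowers it, which mirrors the qualitative claim in the abstract.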