    AI Loyalty: A New Paradigm for Aligning Stakeholder Interests

    When we consult with a doctor, lawyer, or financial advisor, we generally assume that they are acting in our best interests. But what should we assume when it is an artificial intelligence (AI) system that is acting on our behalf? Early examples of AI assistants like Alexa, Siri, Google, and Cortana already serve as a key interface between consumers and information on the web, and users routinely rely upon AI-driven systems like these to take automated actions or provide information. Superficially, such systems may appear to be acting according to user interests. However, many AI systems are designed with embedded conflicts of interest, acting in ways that subtly benefit their creators (or funders) at the expense of users. To address this problem, in this paper we introduce the concept of AI loyalty. AI systems are loyal to the degree that they are designed to minimize, and make transparent, conflicts of interest, and to act in ways that prioritize the interests of users. Properly designed, such systems could have considerable functional and competitive (not to mention ethical) advantages relative to those that do not. Loyal AI products hold obvious appeal for the end user and could serve to align the long-term interests of AI developers and customers. To this end, we suggest criteria for assessing whether an AI system is sufficiently transparent about conflicts of interest and is acting in a manner that is loyal to the user, and we argue that AI loyalty should be considered during the technological design process alongside other important values in AI ethics, such as fairness, accountability, privacy, and equity. We discuss a range of mechanisms, from pure market forces to strong regulatory frameworks, that could support the incorporation of AI loyalty into a variety of future AI systems.
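    The abstract proposes criteria for assessing loyalty but does not specify them in computational form. The Python sketch below is a hypothetical illustration of how such criteria might be operationalized as a simple checklist; the criterion names, equal weighting, and scoring scheme are assumptions made here for illustration, not the authors' actual rubric.

        # Hypothetical checklist-style loyalty assessment (illustrative only).
        from dataclasses import dataclass

        @dataclass
        class LoyaltyAssessment:
            discloses_conflicts: bool   # are conflicts of interest made transparent?
            minimizes_conflicts: bool   # are conflicts minimized by design?
            prioritizes_user: bool      # do default behaviors favor the user's interests?

            def score(self) -> float:
                """Return the fraction of criteria satisfied (0.0 to 1.0)."""
                checks = (self.discloses_conflicts,
                          self.minimizes_conflicts,
                          self.prioritizes_user)
                return sum(checks) / len(checks)

        # Example: an assistant that discloses sponsored results but still ranks
        # them above organic results satisfies transparency without loyalty.
        assistant = LoyaltyAssessment(discloses_conflicts=True,
                                      minimizes_conflicts=False,
                                      prioritizes_user=False)
        print(f"loyalty score: {assistant.score():.2f}")  # loyalty score: 0.33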

    The Joint Tactical Air Controller: cognitive modeling and augmented reality HMD design

    Get PDF
    This paper describes the design and model-based evaluation of DARSAD, an augmented reality head-mounted display for the joint tactical air controller (JTAC), who manages and directs fire from air assets near the battlefield. Designs based on six principles of attention, memory, and information processing are produced for various phases of JTAC operations, including target identification and airspace management. The design candidates are then evaluated and compared according to how well they “scored” in adhering to the predictions of models built on those principles. The display designs, principles, models, and the evaluation process are all described here.
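
    The evaluation described above amounts to scoring each design candidate on its adherence to principle-based model predictions and comparing the totals. The Python sketch below illustrates one way such a comparison could be structured; the candidate names, principle labels, scores, and equal weighting are hypothetical placeholders, not values from the paper.

        # Hypothetical model-based comparison of HMD design candidates.
        # Each candidate gets a per-principle adherence score in [0, 1].
        candidates = {
            "design_A": {"attention": 0.8, "memory": 0.6, "info_processing": 0.7},
            "design_B": {"attention": 0.9, "memory": 0.5, "info_processing": 0.6},
        }

        def overall_score(principle_scores):
            """Average adherence across principles (equal weights assumed)."""
            return sum(principle_scores.values()) / len(principle_scores)

        # Rank candidates by overall adherence and report the preferred design.
        best = max(candidates, key=lambda name: overall_score(candidates[name]))
        for name, scores in candidates.items():
            print(f"{name}: {overall_score(scores):.2f}")
        print(f"preferred candidate: {best}")  # preferred candidate: design_A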