    Accepting the Familiar: The Effect of Perceived Similarity with AI Agents on Intention to Use and the Mediating Effect of IT Identity

    Despite the rise and integration of AI technologies within organizations, our understanding of their impact on individuals remains limited. Although the IS use literature provides important guidance for organizations seeking to increase employees’ willingness to work with new technology, the utilitarian view of prior IS use research limits its application to the new and evolving social interactions between humans and AI agents. We contribute to the IS use literature by adopting a social view to understand the impact of AI agents on individuals’ perceptions and behavior. Focusing on the main design dimensions of AI agents, we propose a framework that draws on social psychology theories to explain the impact of those design dimensions on individuals. Specifically, we build on Similarity Attraction Theory to propose an AI similarity-continuance model that explains how similarity with AI agents influences individuals’ IT identity and their intention to continue working with them. Through an online brainstorming experiment, we found that similarity with AI agents indeed has a positive impact on IT identity and on the intention to continue working with the AI agent.

    How much of commonsense and legal reasoning is formalizable? A review of conceptual obstacles

    Fifty years of effort in artificial intelligence (AI) and the formalization of legal reasoning have produced both successes and failures. Considerable success in organizing and displaying evidence and its interrelationships has been accompanied by failure to achieve the original ambition of AI as applied to law: fully automated legal decision-making. The obstacles to formalizing legal reasoning have proved to be the same ones that make the formalization of commonsense reasoning so difficult, and they are most evident where legal reasoning has to meld with the vast web of ordinary human knowledge of the world. Underlying many of the problems is the mismatch between the discreteness of symbol manipulation and the continuous nature of imprecise natural language, of degrees of similarity and analogy, and of probabilities.

    The effect of shared investing strategy on trust in artificial intelligence

    This study examined the determinants of trust in artificial intelligence (AI) in the area of asset management. Many studies of risk perception have found that value similarity determines trust in risk managers, and some have demonstrated that value similarity also influences trust in AI. AI is currently employed in a diverse range of domains, including asset management; however, little is known about the factors that influence trust in asset-management-related AI. We developed an investment game and examined whether sharing an investing strategy with an AI advisor increased participants’ trust in the AI. Analysis of questionnaire data (n = 101) revealed that a shared investing strategy had no significant effect on participants’ subjective trust in the AI, nor did it affect behavioral trust. Perceived ability had significantly positive effects on both subjective and behavioral trust. This paper also discusses the empirical implications of these findings.