6 research outputs found

    When humans using the IT artifact becomes IT using the human artifact

    Following Demetis & Lee (2016), who showed how systems theorizing can be conducted on the basis of a few systems principles, in this conceptual paper we apply these principles to theorize about the systemic character of technology and to investigate the role reversal in the relationship between humans and technology. By applying the systems-theoretical requirements outlined by Demetis & Lee, we examine the conditions for the systemic character of technology and, based on our theoretical discussion, we argue that humans can now be considered artifacts shaped and used by the (system of) technology rather than vice versa. We argue that this role reversal has considerable implications for the field of information systems, which has thus far focused only on the use of the IT artifact by humans. We illustrate these ideas with empirical material from a well-known case from the financial markets: the collapse (“Flash Crash”) of the Dow Jones Industrial Average.

    When Humans Using the IT Artifact Becomes IT Using the Human Artifact

    Following Lee & Demetis [20], who showed how systems theorizing can be conducted on the basis of a few systems principles, in this paper we apply these principles to theorize about the systemic character of technology and to investigate the role reversal in the relationship between humans and technology. By applying the systems-theoretical requirements outlined by Lee & Demetis, we examine the conditions for the systemic character of technology and, based on our theoretical discussion, we argue that humans can now be considered artifacts shaped and used by the (system of) technology rather than vice versa. We argue that this role reversal has considerable implications for the field of information systems, which has thus far focused only on the use of the IT artifact by humans. We illustrate these ideas with empirical material from a well-known case from the financial markets: the collapse (“Flash Crash”) of the Dow Jones Industrial Average.

    The Extended Corporate Mind: When Corporations Use AI to Break the Law


    Artificial intelligence crime: an interdisciplinary analysis of foreseeable threats and solutions

    Artificial Intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, which we term AI-Crime (AIC). We already know that AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area, spanning from socio-legal studies to formal science, there is little certainty of what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing law enforcement and policy-makers with a synthesis of the current problems and a possible solution space.

    From high frequency trading to self-organizing moral machines

    Technology is responsible for major systemic changes within the global financial sector in general, and in the trade in financial products in particular. The global financial sector has already developed into a comprehensive network of mutually connected people and computers that are constantly evaluating and approving millions of transactions. Algorithms play a crucial role within this global financial network. An algorithm is in essence merely a set of instructions developed by one or more people with the intention of having these instructions performed by a machine, such as a computer, a software robot or a physical robot, in order to realize an ideal result. As we as human beings come to have ever higher expectations of algorithms, and as these algorithms become ever more autonomous in their actions, we cannot avoid building into them the capacity for ethical or moral considerations. To develop this ethical or moral consideration, we need a kind of ethical framework that can be used when developing algorithms. With such a framework in place, we can start to think about what we as human beings consider to be moral action by machines within the financial sector.
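
    The core idea of this abstract can be made concrete with a small, purely illustrative sketch: an ethical framework understood as an explicit, human-defined layer of constraints that a trading algorithm must consult before it acts. Everything below (the class names, the rules, the thresholds) is hypothetical and is not taken from the paper.

        # Illustrative sketch only: a toy trading algorithm that consults a
        # hypothetical ethical framework before submitting an order.
        from dataclasses import dataclass

        @dataclass
        class Order:
            symbol: str
            side: str        # "buy" or "sell"
            quantity: int
            price: float

        class EthicalFramework:
            """Stand-in for an explicit, human-defined set of moral constraints."""

            def __init__(self, max_order_size: int, max_orders_per_step: int):
                self.max_order_size = max_order_size
                self.max_orders_per_step = max_orders_per_step
                self.orders_this_step = 0

            def permits(self, order: Order) -> bool:
                # Rule 1: no single order large enough to destabilize the market.
                if order.quantity > self.max_order_size:
                    return False
                # Rule 2: no burst of orders resembling manipulative quote stuffing.
                if self.orders_this_step >= self.max_orders_per_step:
                    return False
                self.orders_this_step += 1
                return True

        def trading_step(signal: float, framework: EthicalFramework):
            """Turn a price signal into an order, but submit it only if permitted."""
            side = "buy" if signal > 0 else "sell"
            order = Order(symbol="XYZ", side=side, quantity=100, price=10.0)
            return order if framework.permits(order) else None

        if __name__ == "__main__":
            framework = EthicalFramework(max_order_size=500, max_orders_per_step=3)
            for signal in (0.4, -0.2, 0.9, 0.1, -0.7):
                order = trading_step(signal, framework)
                print(order if order is not None else "order withheld by ethical framework")

    In this sketch the moral consideration is a separate, inspectable object rather than something buried inside the trading logic, which is one way a framework of the kind the authors call for could be expressed in code.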