    Philosophical Signposts for Artificial Moral Agent Frameworks

    This article focuses on a particular issue within machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, in turn, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that accounts for the nature of Artificial Moral Agents may draw on certain philosophical ideas, such as the standard characterizations of agency, rational agency, moral agency, and artificial agency. At the very least, these philosophical concepts may be treated as signposts for further research on how to truly account for the nature of Artificial Moral Agents.

    Handouts don’t exist. Hustle or you don’t eat.

    It is well established that AI has a bias problem; however, black-boxed machine learning systems make it difficult even to understand and visualize the nature and extent of the problem, let alone find solutions. This paper discusses an artistic research approach to highlighting AI bias and explores the aesthetic potential of machine learning through a case study of an AI artwork called #RiseandGrind. The artist trained a recurrent neural network on a dataset extracted from Twitter hashtags (#Riseandgrind and #Hustle), which were selected to represent a specific filter bubble (embodied neoliberal precarity) in order to produce a biased AI that generates tweets for a Twitter bot. This paper unpacks how the artwork makes the processes of machine learning visible in a playful and poetic way. The work reveals how the original filter bias is consolidated, amplified, shaped, and ultimately codified through the machine learning process. The AI is found to reproduce a cohesive worldview that, while reflecting the original data bias, further amplifies that bias through a process of flattening.
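    The pipeline the abstract describes (collecting hashtag-filtered tweets, training a recurrent network on them, and sampling new tweets for a bot) can be illustrated with a minimal sketch. The toy corpus, GRU architecture, and sampling loop below are illustrative assumptions only; the abstract does not specify the artwork's actual implementation.

```python
# Minimal sketch of a character-level RNN trained on a biased corpus,
# then sampled to generate new pseudo-tweets. Illustrative only.
import torch
import torch.nn as nn

# Hypothetical stand-in for tweets scraped from #Riseandgrind / #Hustle.
corpus = "rise and grind. no days off. hustle harder. sleep is for the weak. "
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in corpus], dtype=torch.long)

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()
seq_len = 32

# Teacher-forced next-character prediction over random corpus windows.
for step in range(500):
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i : i + seq_len].unsqueeze(0)
    y = data[i + 1 : i + seq_len + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample a new pseudo-tweet: the model can only recombine what it has
# seen, which is how the original filter bias gets consolidated and
# amplified in the generated output.
with torch.no_grad():
    x = data[:1].unsqueeze(0)
    h = None
    out = [itos[x.item()]]
    for _ in range(140):
        logits, h = model(x, h)
        probs = torch.softmax(logits[0, -1], dim=-1)
        nxt = torch.multinomial(probs, 1)
        out.append(itos[nxt.item()])
        x = nxt.unsqueeze(0)
    print("".join(out))
```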

    Can AI become more ethical than humans?: A Cross-Paradigmatic Evaluation of the Question

    Reframing superintelligence: comprehensive AI services as general intelligence

    Studies of superintelligent-level systems have typically posited AI functionality that plays the role of a mind in a rational, utility-directed agent, and hence employ an abstraction initially developed as an idealized model of human decision makers. Today, developments in AI technology highlight intelligent systems that are quite unlike minds and provide a basis for a different approach to understanding them: we can consider how AI systems are produced (through the work of research and development), what they do (broadly, provide services by performing tasks), and what they will enable (including incremental yet potentially thorough automation of human tasks). Because tasks subject to automation include the tasks that comprise AI research and development, current trends in the field promise accelerating AI-enabled advances in AI technology itself, potentially leading to asymptotically recursive improvement of AI technologies in distributed systems, a prospect that contrasts sharply with the vision of self-improvement internal to opaque, unitary agents. The trajectory of AI development thus points to the emergence of asymptotically comprehensive, superintelligent-level AI services that, crucially, can include the service of developing new services, both narrow and broad, guided by concrete human goals and informed by strong models of human (dis)approval. The concept of comprehensive AI services (CAIS) provides a model of flexible, general intelligence in which agents are a class of service-providing products, rather than a natural or necessary engine of progress in themselves. Ramifications of the CAIS model reframe not only prospects for an intelligence explosion and the nature of advanced machine intelligence, but also the relationship between goals and intelligence, the problem of harnessing advanced AI to broad, challenging problems, and fundamental considerations in AI safety and strategy. Perhaps surprisingly, strongly self-modifying agents lose their instrumental value even as their implementation becomes more accessible, while the likely context for the emergence of such agents becomes a world already in possession of general superintelligent-level capabilities. These prospective capabilities, in turn, engender novel risks and opportunities of their own. Further topics addressed in this work include the general architecture of systems with broad capabilities, the intersection between symbolic and neural systems, learning vs. competence in definitions of intelligence, tactical vs. strategic tasks in the context of human control, and estimates of the relative capacities of human brains vs. current digital systems.

    Artificial Stupidity

    Public debate about AI is dominated by Frankenstein Syndrome, the fear that AI will become superhuman and escape human control. Although superintelligence is certainly a possibility, the interest it excites can distract the public from a more imminent concern: the rise of Artificial Stupidity (AS). This article discusses the roots of Frankenstein Syndrome in Mary Shelley's famous novel of 1818. It then provides a philosophical framework for analysing the stupidity of artificial agents, demonstrating that modern intelligent systems can be seen to suffer from 'stupidity of judgement'. Finally, it identifies an alternative literary tradition that exposes the perils and benefits of AS. In the writings of Edmund Spenser, Jonathan Swift and E.T.A. Hoffmann, ASs replace, enslave or delude their human users. More optimistically, Joseph Furphy and Laurence Sterne imagine ASs that can serve human intellect as maps or as pipes. These writers provide a strong counternarrative to the myths that currently drive the AI debate. They identify ways in which even stupid artificial agents can evade human control, for instance by appealing to stereotypes or distancing us from reality. And they underscore the continuing importance of the literary imagination in an increasingly automated society.

    Dynamic Cognition Applied to Value Learning in Artificial Intelligence

    Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas of our society. Nevertheless, if such advances are not made with prudence, they can result in negative outcomes for humanity. For this reason, several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence. Currently, several of the open problems in the field of AI research arise from the difficulty of avoiding unwanted behaviors of intelligent agents while, at the same time, specifying what we want such systems to do. It is of utmost importance that artificially intelligent agents have their values aligned with human values, given that we cannot expect an AI to develop our moral preferences simply because of its intelligence, as discussed in the Orthogonality Thesis. Perhaps this difficulty stems from the way we address the problem of expressing objectives, values, and ends, using representational cognitive methods. A solution to this problem would be the dynamic cognitive approach proposed by Dreyfus, whose phenomenological philosophy holds that the human experience of being-in-the-world cannot be represented by symbolic or connectionist cognitive methods. A possible approach to this problem would be to use theoretical models such as SED (situated embodied dynamics) to address the value learning problem in AI.