
    Requisite Variety in Ethical Utility Functions for AI Value Alignment

    Being a complex subject of major importance in AI Safety research, value alignment has been studied from various perspectives in recent years. However, no final consensus on the design of ethical utility functions facilitating AI value alignment has been reached. Given the urgency of identifying systematic solutions, we postulate that it might be useful to start with the simple fact that for the utility function of an AI not to violate human ethical intuitions, it trivially has to be a model of these intuitions and reflect their variety, whereby the most accurate models of human entities (biological organisms equipped with brains that construct concepts such as moral judgements) are scientific models. Thus, in order to better assess the variety of human morality, we perform a transdisciplinary analysis, applying a security mindset to the issue and summarizing variety-relevant background knowledge from neuroscience and psychology. We complement this information by linking it to augmented utilitarianism as a suitable ethical framework. On this basis, we propose first practical guidelines for the design of approximate ethical goal functions that might better capture the variety of human moral judgements. Finally, we conclude and address possible future challenges.
    Comment: IJCAI 2019 AI Safety Workshop
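
    A minimal sketch of what such an approximate ethical goal function could look like in Python (the per-perceiver weighting scheme and the mean/worst-case aggregation below are illustrative assumptions, not the authors' design):

        from statistics import mean

        def moral_judgement(perceiver, action, outcome, context):
            # Hypothetical stand-in for a learned model of one perceiver's
            # moral intuitions; context-dependence would enter here too.
            weights = perceiver["value_weights"]      # e.g. {"harm": 0.7, "fairness": 0.3}
            features = outcome["value_features"]      # e.g. {"harm": -0.2, "fairness": 0.5}
            return sum(w * features.get(k, 0.0) for k, w in weights.items())

        def ethical_goal_function(action, outcome, context, perceivers):
            # Model the variety of moral judgements instead of collapsing it:
            # report both the mean and the worst-case judgement across perceivers.
            scores = [moral_judgement(p, action, outcome, context) for p in perceivers]
            return {"mean": mean(scores), "worst_case": min(scores)}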

    On Controllability of Artificial Intelligence

    The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such a powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments, as well as supporting evidence from multiple domains, indicating that advanced AI cannot be fully controlled. The consequences of the uncontrollability of AI are discussed with respect to the future of humanity, research on AI, and AI safety and security. This paper can serve as a comprehensive reference on the topic of AI uncontrollability.

    Inclusive Artificial Intelligence

    Prevailing methods for assessing and comparing generative AIs incentivize responses that serve a hypothetical representative individual. Evaluating models in these terms presumes homogeneous preferences across the population and engenders the selection of agglomerative AIs, which fail to represent the diverse range of interests across individuals. We propose an alternative evaluation method that instead prioritizes inclusive AIs, which provably retain the requisite knowledge not only for subsequent response customization to particular segments of the population but also for utility-maximizing decisions.
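
    A minimal sketch of the contrast the abstract describes, under assumed preference and segment structures (the Person class and attribute weighting are hypothetical illustrations, not the paper's method):

        from dataclasses import dataclass
        from statistics import mean

        @dataclass
        class Person:
            weights: dict  # preference weights over response attributes

            def utility(self, response_attrs):
                return sum(w * response_attrs.get(a, 0.0)
                           for a, w in self.weights.items())

        def representative_score(response_attrs, population):
            # Collapses heterogeneous preferences into one average rating.
            return mean(p.utility(response_attrs) for p in population)

        def inclusive_score(response_attrs, segments):
            # Retains per-segment utilities, so later customization stays possible
            # and a model serving only the majority cannot hide behind the mean.
            per_segment = {name: mean(p.utility(response_attrs) for p in people)
                           for name, people in segments.items()}
            return min(per_segment.values()), per_segment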

    HUMAN-AI COLLABORATION IN ORGANISATIONS: A LITERATURE REVIEW ON ENABLING VALUE CREATION

    The augmentation of human intellect and capability with artificial intelligence is integral to the advancement of next-generation human-machine collaboration technologies designed to drive performance improvement and innovation. Yet we have a limited understanding of how organisations can translate this potential into sustainable business value. We conduct an in-depth literature review of interdisciplinary research on the challenges and opportunities in the organisational adoption of human-AI collaboration for value creation. We identify five positions central to how organisations can integrate and align the socio-technical challenges of augmented collaboration, namely strategic positioning, human engagement, organisational evolution, technology development, and intelligence building. We synthesise the findings by means of an integrated model that focuses organisations on building the requisite internal microfoundations for the systematic management of augmented systems.

    Moral Programming: Crafting a flexible heuristic moral meta-model for meaningful AI control in pluralistic societies

    Artificial Intelligence (AI) permeates more and more application domains. Its progress regarding scale, speed, and scope magnifies potential societal benefits but also ethically and safety-relevant risks. Hence, it becomes vital to seek meaningful control of present-day AI systems (i.e. tools). For this purpose, one can aim at counterbalancing the increasing problem-solving ability of AI with boundary conditions core to human morality. However, a major problem is that morality exists in a context-sensitive, steadily shifting explanatory sphere co-created by humans using natural language, which is inherently ambiguous at multiple levels and neither machine-understandable nor machine-readable. A related problem is what we call epistemic dizziness, a phenomenon linked to the inevitable circumstance that one could always be wrong. Yet, while universal doubt cannot be eliminated from morality, it need not be magnified if the potential for and requirement of steady refinements is anticipated by design. Thereby, morality pertains to the set of norms and values enacted at the level of a society, of other not further specified collectives of persons, or of an individual. Norms are instrumental in attaining the fulfilment of values, the latter being an umbrella term for all that seems decisive for distinctions between right and wrong, a central object of study in ethics. In short, for meaningful control of AI against the background of the changing, context-sensitive, and linguistically moulded nature of human morality, it is helpful to craft descriptive and thus sufficiently flexible AI-readable heuristic models of morality. In this way, the problem-solving ability of AI could be efficiently funnelled through these updatable models so as to ideally boost the benefits and mitigate the risks at the AI deployment stage, with the conceivable side effect of improving human moral conjectures. For this purpose, we introduced a novel transdisciplinary framework denoted augmented utilitarianism (AU) (Aliman and Kester, 2019b), which is formulated from a meta-ethical stance. AU attempts to support the human-centred task of harnessing human norms and values to explicitly and traceably steer AI before humans themselves get unwittingly and unintelligibly steered by the obscurity of AI’s deployment. Importantly, AU is descriptive, non-normative, and explanatory (Aliman, 2020), and is not to be confused with normative utilitarianism. (While normative ethics pertains to ‘what one ought to do’, descriptive ethics relates to empirical studies on human ethical decision-making.) This chapter offers the reader a compact overview of how AU coalesces elements from AI, moral psychology, cognitive and affective science, mathematics, systems engineering, cybernetics, and epistemology to craft a generic scaffold able to heuristically encode given moral frameworks in a machine-readable form. We thematise novel insights and also caveats linked to advanced AI risks that yield incentives for future work.
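
    As a rough illustration of what a machine-readable, updatable heuristic model of morality could look like (the weighted-values-plus-norm-veto encoding below is an assumption made for exposition; AU itself is a meta-ethical framework, not this code):

        # An encoded moral framework: revisable value weights plus hard norms.
        moral_model = {
            "values": {"wellbeing": 0.6, "autonomy": 0.4},   # weights sum to 1
            "norms": [lambda option: not option.get("deceives", False)],
        }

        def evaluate(option, model):
            # Score an option against the encoded framework; norm violations veto.
            if not all(norm(option) for norm in model["norms"]):
                return float("-inf")                         # hard constraint
            return sum(w * option["value_scores"].get(v, 0.0)
                       for v, w in model["values"].items())

        def refine(model, value, new_weight):
            # Steady refinement anticipated by design: weights stay revisable,
            # then are renormalised so they still sum to 1.
            model["values"][value] = new_weight
            total = sum(model["values"].values())
            model["values"] = {k: v / total for k, v in model["values"].items()}
            return model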

    Transdisciplinary AI Observatory -- Retrospective Analyses and Future-Oriented Contradistinctions

    In recent years, AI safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice drawing on concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms. For simplicity, we refer to these two paradigms as artificial stupidity (AS) and eternal creativity (EC), respectively. While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap with regard to many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.
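
    A small sketch of an observatory-style incident record reflecting the intentional/unintentional distinction described above (the schema and the example entry are hypothetical, not taken from the paper):

        from dataclasses import dataclass, field

        @dataclass
        class IncidentRecord:
            description: str
            intentional: bool                     # intentionally vs unintentionally triggered
            impacts: list = field(default_factory=list)          # socio-psycho-technological impacts
            counterfactuals: list = field(default_factory=list)  # how the incident could have unfolded worse

        incident = IncidentRecord(
            description="deepfake used in a disinformation campaign",
            intentional=True,
            impacts=["psychological", "societal"],
            counterfactuals=["wider distribution before detection"],
        )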

    Readiness of IT organisations to implement Artificial Intelligence to support business processes in Gauteng Province, South Africa.

    Artificial Intelligence (AI) has emerged as a research field, and with it studies pertaining to the readiness of organisations to implement AI. Although AI implementation has proliferated across industries, many organisations still struggle to achieve the business goals associated with AI and the Fourth Industrial Revolution (4IR). This study attempts to close this gap by conducting a deductive case study and thematic analysis of the readiness of a Gauteng-based IT organisation to implement AI towards achieving its business goals, in line with the benefits associated with 4IR. To achieve this, the researcher draws on the Technology-Organisation-Environment (TOE) framework to reflect on the relevant dimensions and group them into contextual factors: strategy, perception and awareness, challenges, and organisational culture. This paper reports on the outcomes of open-ended interviews and focus group discussions involving 31 participants across IT management and senior and junior technical staff, concerning the enabling and hindering factors of AI readiness. The study further offers insights and a research agenda to support IT managers and staff in making informed decisions towards increasing their readiness to implement AI.

    EVALUATING ARTIFICIAL INTELLIGENCE FOR OPERATIONS IN THE INFORMATION ENVIRONMENT

    Recent advances in artificial intelligence (AI) portend a future of accelerated information cycles and intensified technology diffusion. As AI applications become increasingly prevalent and complex, Special Operations Forces (SOF) face the challenge of discerning which tools most effectively address operational needs and generate an advantage in the information environment. Yet SOF currently lack an end user–focused evaluation framework that could assist information practitioners in determining the operational value of an AI tool. This thesis proposes a practitioner’s evaluation framework (PEF) to address the question of how SOF should evaluate AI technologies to conduct operations in the information environment (OIE). The PEF evaluates AI technologies through the perspective of the information practitioner who is familiar with the mission, the operational requirements, and OIE processes but has limited to no technical knowledge of AI. The PEF consists of a four-phased approach (prepare, design, conduct, recommend) that assesses nine evaluation domains: mission/task alignment; data; system/model performance; user experience; sustainability; scalability; affordability; ethical, legal, and policy considerations; and vendor assessment. By evaluating AI through a more structured, methodical approach, the PEF enables SOF to identify, assess, and prioritize AI-enabled tools for OIE.
    Outstanding Thesis. Major, United States Army. Approved for public release; distribution is unlimited.
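
    The nine domains are named in the abstract; here is a sketch of how a practitioner might roll them into a comparable score (the 0-5 rating scale and the equal default weights are illustrative assumptions, not the thesis's arithmetic):

        PEF_DOMAINS = [
            "mission/task alignment", "data", "system/model performance",
            "user experience", "sustainability", "scalability", "affordability",
            "ethical, legal, and policy considerations", "vendor assessment",
        ]

        def score_tool(domain_scores, weights=None):
            # domain_scores: {domain: 0-5 rating assigned by the practitioner}.
            weights = weights or {d: 1.0 for d in PEF_DOMAINS}
            total_w = sum(weights[d] for d in PEF_DOMAINS)
            return sum(weights[d] * domain_scores.get(d, 0)
                       for d in PEF_DOMAINS) / total_w

        # Candidate tools could then be ranked by score_tool(...) during the
        # "conduct" phase and the ranking carried into "recommend".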

    IS2020: A Competency Model for Undergraduate Programs in Information Systems: The Joint ACM/AIS IS2020 Task Force

    The IS2020 report is the latest in a series of model curricula recommendations and guidelines for undergraduate degrees in Information Systems (IS). The report builds on the foundations developed in previous model curricula reports to deliver a major revision of the model curriculum with significant new characteristics. Specifically, the IS2020 report does not directly prescribe a degree structure that targets a specific context or environment. Rather, it provides guidance regarding the core content that should be present in any curriculum while also providing the flexibility to customize curricula according to local institutional needs.