
    From Human-Centered to Social-Centered Artificial Intelligence: Assessing ChatGPT's Impact through Disruptive Events

    Large language models (LLMs) and dialogue agents have existed for years, but the release of recent GPT models has been a watershed moment for artificial intelligence (AI) research and society at large. Immediately recognized for its generative capabilities and versatility, ChatGPT's impressive proficiency across technical and creative domains led to its widespread adoption. While society grapples with the emerging cultural impacts of ChatGPT, critiques of ChatGPT's impact within the machine learning community have coalesced around its performance or other conventional Responsible AI evaluations relating to bias, toxicity, and 'hallucination.' We argue that these latter critiques draw heavily on a particular conceptualization of the 'human-centered' framework, which tends to cast atomized individuals as the key recipients of both the benefits and detriments of technology. In this article, we direct attention to another dimension of LLMs and dialogue agents' impact: their effect on social groups, institutions, and accompanying norms and practices. By illustrating ChatGPT's social impact through three disruptive events, we challenge individualistic approaches in AI development and contribute to ongoing debates around the ethical and responsible implementation of AI systems. We hope this effort will call attention to more comprehensive and longitudinal evaluation tools and compel technologists to go beyond human-centered thinking and ground their efforts through social-centered AI.

    Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects

    These are the Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects. Objects that we use in everyday life are expanding beyond their restricted interaction capabilities and offer functionality that goes far beyond their original purpose. They feature computing capabilities and are thus able to capture, process, and store information and to interact with their environment, turning them into smart objects.

    Agents and Robots for Reliable Engineered Autonomy

    This book contains the contributions of the Special Issue entitled "Agents and Robots for Reliable Engineered Autonomy". The Special Issue was based on the successful first edition of the "Workshop on Agents and Robots for reliable Engineered Autonomy" (AREA 2020), co-located with the 24th European Conference on Artificial Intelligence (ECAI 2020). The aim was to bring together researchers from the autonomous agents, software engineering, and robotics communities, as combining knowledge from these three research areas may lead to innovative approaches that solve complex problems related to the verification and validation of autonomous robotic systems.

    Hybrid Cloud Model Checking Using the Interaction Layer of HARMS for Ambient Intelligent Systems

    Soon, humans will live alongside multi-agent systems and rely on their help far more than they do today. Such systems will involve machines or devices of many kinds, including robots, and will adapt to the special needs of each individual. However, as concerns this research effort, systems like the ones mentioned above might encounter situations that were not foreseen before execution time. Two possible outcomes could then materialize: the system either keeps working without corrective measures, which could lead to an entirely different end, or stops working completely. Both results should be avoided, especially in cases where the end user depends on high-level guidance provided by the system, such as in ambient intelligence applications. This dissertation worked towards two specific goals. First, to ensure that the system will always work, regardless of which agent performs the different tasks needed to accomplish a larger objective. Second, to provide initial steps towards autonomous survivable systems that can change their future actions in order to achieve the original final goals. Therefore, the use of the third layer of the HARMS model was proposed to ensure the indistinguishability of the actors accomplishing each task and sub-task, regardless of the intrinsic complexity of the activity. Additionally, a framework was proposed that applies model checking at run time to provide possible solutions to issues encountered during execution, as part of the system's survivability goals.
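    The run-time checking idea in this abstract can be pictured with a minimal sketch. This is a hypothetical reachability monitor, not the dissertation's actual HARMS framework: before continuing with a plan, the system verifies that its final goal is still reachable in the current task graph, and flags the need for replanning when a failed agent removes a transition.

```python
from collections import deque

def goal_reachable(transitions, start, goal):
    """Breadth-first search: can `goal` still be reached from `start`?"""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return True
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Hypothetical task graph: states are plan steps, edges are agent capabilities.
plan = {"start": ["fetch"], "fetch": ["deliver"]}
ok_before = goal_reachable(plan, "start", "deliver")   # True: plan is viable

# The agent performing "fetch" fails at run time; its outgoing edge disappears.
plan["fetch"] = []
ok_after = goal_reachable(plan, "start", "deliver")    # False: replanning needed
```

    A real monitor would check richer temporal properties, but even this graph-reachability check captures the survivability requirement: detect at execution time that the original goal has become unreachable and react before the system silently drifts to a different end.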

    Flexible autonomy and context in human-agent collectives

    Human-agent collectives (HACs) are collaborative relationships between humans and software agents that are formed to meet the individual and collective goals of their members. In general, different members of a HAC should have differing degrees of autonomy in determining how a goal is to be achieved, and the degree of autonomy that should be enjoyed by each member of the collective varies with context. This thesis explores how norms can be used to achieve context-sensitive flexible autonomy in HACs. Norms can be viewed as defining standards of ideal behaviour. In the form of rules and codes, they are widely used to coordinate and regulate activity in human organisations, and more recently they have also been proposed as a coordination mechanism for multi-agent systems (MAS). Norms therefore have the potential to form a common framework for coordination and control in HACs. The thesis develops a novel framework in which group and individual norms are used to specify both the goal to be achieved by a HAC and the degree of autonomy of the HAC and/or of its members in achieving a goal. The framework allows members of a collective to create norms specifying how a goal should (or should not) be achieved, together with sanctions for non-compliance. These norms form part of the decision-making context of both the humans and agents in the collective. A prototype implementation of the framework was evaluated using the Colored Trails test-bed in a scenario involving mixed human-agent teams. The experiments confirmed that norms can be used to coordinate HACs and to facilitate context-related flexible autonomy.
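    The core mechanism described here can be sketched in a few lines. This is a hypothetical illustration of group and individual norms with sanctions, not the thesis's actual framework: a norm names a target (a specific agent, or every member of the collective), obliges or forbids an action, and attaches a penalty for non-compliance.

```python
from dataclasses import dataclass
from enum import Enum

class Deontic(Enum):
    OBLIGED = "obliged"      # the target should perform the action
    FORBIDDEN = "forbidden"  # the target should not perform the action

@dataclass(frozen=True)
class Norm:
    target: str      # agent name, or "*" for a group norm binding every member
    deontic: Deontic
    action: str      # the action the norm regulates
    sanction: int    # penalty incurred on non-compliance

def violations(norms, agent, performed):
    """Return the norms an agent has violated, given the actions it performed."""
    broken = []
    for n in norms:
        if n.target not in (agent, "*"):
            continue  # norm does not bind this agent
        if n.deontic is Deontic.FORBIDDEN and n.action in performed:
            broken.append(n)
        elif n.deontic is Deontic.OBLIGED and n.action not in performed:
            broken.append(n)
    return broken

# Hypothetical collective: a group norm plus an individual prohibition.
norms = [
    Norm("*", Deontic.OBLIGED, "report_position", sanction=5),
    Norm("uav1", Deontic.FORBIDDEN, "enter_no_fly_zone", sanction=20),
]
broken = violations(norms, "uav1", {"enter_no_fly_zone"})
penalty = sum(n.sanction for n in broken)  # 5 (missed report) + 20 (breach)
```

    Sanctions like these give both human and software members a shared, inspectable basis for deciding how much latitude to take in achieving a goal, which is the sense in which norms support context-sensitive flexible autonomy.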

    The Cord Weekly (May 26, 1988)


    Simulating Open Source Software Communities Through Collective Games

    According to the Open Source Initiative, Open Source Software (OSS) can be defined by ten criteria. The most important and relevant ones are the free redistribution of the software, the inclusion of the source code, and the authorization to modify and redistribute the work. OSS products are a vital part of how we understand the Internet. But, for most people, it is still hard to understand what an Open Source Software community is. In this thesis, we have analysed how these OSS communities work, how they are structured, and how they achieve the results that made them popular. Furthermore, a tool that simulates many of the features of OSS communities has been implemented. This platform lets a user experience what it is like to join one of these communities and work with other community members to solve a complex problem through collaboration. This thesis has allowed us to highlight the importance of collective games in simulating the dynamics of OSS communities. These communities are formed by members who have to come together to develop a product. Thus, the notion of collaboration is essential, as in collaborative games where the players have to cooperate to reach a solution. This project also illustrates the collective games approach through Sudoku, the game chosen for the simulation platform. To implement it, we have used intelligent agents whose role is to act like members of a real community. The result is that a human user can join the community and play different roles to understand the operation of OSS communities.
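    The collaborative dynamic can be sketched as follows. This is a minimal hypothetical illustration, not the thesis's actual platform: each agent plays a community member who, on its turn, contributes one safe deduction (a cell with exactly one legal value) to a shared Sudoku grid, the way OSS contributors each commit small verified changes to a shared codebase.

```python
def candidates(grid, r, c):
    """Values that can legally go in cell (r, c) of a 9x9 grid (0 = empty)."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return {v for v in range(1, 10) if v not in used}

class Contributor:
    """A simulated community member who contributes one deduction per turn."""
    def __init__(self, name):
        self.name = name
        self.contributions = 0

    def take_turn(self, grid):
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    cands = candidates(grid, r, c)
                    if len(cands) == 1:        # a certain move: commit it
                        grid[r][c] = cands.pop()
                        self.contributions += 1
                        return True
        return False  # nothing this member can safely contribute right now
```

    Running several `Contributor` instances round-robin over one grid makes the collaboration visible: no single agent can finish the puzzle alone once moves are rationed, so progress emerges from the members' combined contributions, which mirrors how an OSS product emerges from its community.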

    Sociotechnical Envelopment of Artificial Intelligence: An Approach to Organizational Deployment of Inscrutable Artificial Intelligence Systems

    The paper presents an approach for implementing inscrutable (i.e., nonexplainable) artificial intelligence (AI) such as neural networks in an accountable and safe manner in organizational settings. Drawing on an exploratory case study and the recently proposed concept of envelopment, it describes a case of an organization successfully “enveloping” its AI solutions to balance the performance benefits of flexible AI models with the risks that inscrutable models can entail. The authors present several envelopment methods—establishing clear boundaries within which the AI is to interact with its surroundings, choosing and curating the training data well, and appropriately managing input and output sources—alongside their influence on the choice of AI models within the organization. This work makes two key contributions. First, it introduces the concept of sociotechnical envelopment by demonstrating the ways in which an organization’s successful AI envelopment depends on the interaction of social and technical factors, thus extending the literature’s focus beyond mere technical issues. Second, the empirical examples illustrate how operationalizing a sociotechnical envelopment enables an organization to manage the trade-off between low explainability and high performance presented by inscrutable models. These contributions pave the way for more responsible, accountable AI implementations in organizations, whereby humans can gain better control of even inscrutable machine-learning models.
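    The boundary-setting method mentioned in the abstract can be pictured with a small sketch. This is a hypothetical wrapper, not the paper's implementation: the inscrutable model is only ever invoked inside validated input and output bounds, with a conservative fallback whenever either boundary is crossed.

```python
def make_envelope(model, valid_input, valid_output, fallback):
    """Wrap an inscrutable model so it only operates inside known-safe bounds."""
    def enveloped(x):
        if not valid_input(x):          # boundary on what the model may see
            return fallback(x)
        y = model(x)
        if not valid_output(y):         # boundary on what the model may emit
            return fallback(x)
        return y
    return enveloped

# Hypothetical example: a scoring model trusted only for outputs in [300, 850].
score = make_envelope(
    model=lambda applicant: applicant["income"] // 100,  # stand-in black box
    valid_input=lambda a: a.get("income", -1) >= 0,
    valid_output=lambda s: 300 <= s <= 850,
    fallback=lambda a: 300,                              # conservative default
)
```

    The point of the envelope is that the organization needs no insight into the model's internals to keep it accountable; it only needs to specify, socially and technically, the region in which the model's behaviour is trusted.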

    Rethinking data and rebalancing digital power

    This report highlights and contextualises four cross-cutting interventions with a strong potential to reshape the digital ecosystem:
    1. Transforming infrastructure into open and interoperable ecosystems.
    2. Reclaiming control of data from dominant companies.
    3. Rebalancing the centres of power with new (non-commercial) institutions.
    4. Ensuring public participation as an essential component of technology policymaking.
    The interventions are multidisciplinary and they integrate legal, technological, market and governance solutions. They offer a path towards addressing present digital challenges and the possibility for a new, healthy digital ecosystem to emerge. What do we mean by a healthy digital ecosystem? One that privileges people over profit, communities over corporations, society over shareholders. And, most importantly, one where power is not held by a few large corporations, but is distributed among different and diverse models, alongside people who are represented in, and affected by, the data used by those new models. The digital ecosystem we propose is balanced, accountable and sustainable, and imagines new types of infrastructure, new institutions and new governance models that can make data work for people and society. Some of these interventions can be located within (or built from) emerging and recently adopted policy initiatives, while others require the wholesale overhaul of regulatory regimes and markets. They are designed to spark ideas that political thinkers, forward-looking policymakers, researchers, civil society organisations, funders and ethical innovators in the private sector consider and respond to when designing future regulations, policies or initiatives around data use and governance. This report also acknowledges the need to prepare the ground for the more ambitious transformation of power relations in the digital ecosystem. Even a well-targeted intervention won't change the system unless it is supported by relevant institutions and behavioural change.