
    Toward a Human-Centered AI-assisted Colonoscopy System

    AI-assisted colonoscopy has received considerable attention over the last decade. Several randomised clinical trials in the past two years reported promising improvements in polyp detection rates. However, current commercial AI-assisted colonoscopy systems focus on providing visual assistance for detecting polyps during colonoscopy, and little is understood about the needs of gastroenterologists or the usability issues of these systems. This paper aims to introduce the recent development and deployment of commercial AI-assisted colonoscopy systems to the HCI community, identify gaps between clinicians' expectations and the capabilities of the commercial systems, and highlight some challenges unique to Australia.

    A Communicative Action Framework for Discourse Strategies for AI-based Systems: The MetTrains Application Case

    Increasing attention is being paid to the challenges of how artificial intelligence (AI)-based systems offer explanations to users. Explanation capabilities developed for older logic-based systems remain relevant, but new thinking is needed in designing explanations and other discourse strategies for new forms of AI that include machine learning. In this work-in-progress paper we show how a communicative action design framework can be used to design an AI-based system's interface to achieve desired goals. The applicability of the framework is demonstrated with an interface for an intelligent video surveillance system for reducing railway suicide. The communicative action framework is an important step in theory development for human-computer interaction with AI as used in the information systems domain.

    Towards Responsible AI in the Era of ChatGPT: A Reference Architecture for Designing Foundation Model-based AI Systems

    The release of ChatGPT, Bard, and other large language model (LLM)-based chatbots has drawn worldwide attention to foundation models. There is a growing trend that foundation models will serve as the fundamental building blocks for most future AI systems. However, incorporating foundation models in AI systems raises significant concerns about responsible AI due to their black-box nature and rapidly advancing capabilities. Additionally, the foundation model's growing capabilities can eventually absorb the other components of AI systems, introducing the moving-boundary and interface-evolution challenges in architecture design. To address these challenges, this paper proposes a pattern-oriented responsible-AI-by-design reference architecture for designing foundation model-based AI systems. Specifically, the paper first presents an architecture evolution of AI systems in the era of foundation models, from "foundation-model-as-a-connector" to "foundation-model-as-a-monolithic-architecture". The paper then identifies the key design decision points and proposes a pattern-oriented reference architecture to provide reusable responsible-AI-by-design architectural solutions that address the new architecture evolution and responsible AI challenges. The patterns can be embedded as product features of foundation model-based AI systems and can enable organisations to capitalise on the potential of foundation models while minimising associated risks.
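    As one concrete reading of the pattern-oriented approach described above, a responsible-AI control can be realised architecturally as a wrapper around the model rather than as a model-internal feature. The sketch below is a minimal illustration of such a guardrail pattern under our own assumptions; the class and check names are hypothetical and not taken from the paper:

```python
from typing import Callable, List

class GuardedModel:
    """Illustrative guardrail pattern: a foundation model wrapped
    behind input and output checks, so responsible-AI controls sit
    at the architecture level rather than inside the model."""

    def __init__(self, model: Callable[[str], str],
                 input_checks: List[Callable[[str], bool]],
                 output_checks: List[Callable[[str], bool]]):
        self.model = model
        self.input_checks = input_checks
        self.output_checks = output_checks

    def generate(self, prompt: str) -> str:
        # Reject disallowed requests before they reach the model.
        if not all(check(prompt) for check in self.input_checks):
            return "[request rejected by input guardrail]"
        answer = self.model(prompt)
        # Screen the model's answer before it reaches the user.
        if not all(check(answer) for check in self.output_checks):
            return "[response withheld by output guardrail]"
        return answer

# Toy stand-ins for a real model endpoint and real policy checks.
echo_model = lambda p: f"echo: {p}"
no_banned_words = lambda text: "forbidden" not in text.lower()

guarded = GuardedModel(echo_model, [no_banned_words], [no_banned_words])
print(guarded.generate("hello"))            # echo: hello
print(guarded.generate("forbidden topic"))  # [request rejected by input guardrail]
```

    Because the checks are composed around the model rather than baked into it, they survive a swap of the underlying foundation model, which is one way to cope with the moving-boundary problem the abstract mentions.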

    From Human-Centered to Social-Centered Artificial Intelligence: Assessing ChatGPT's Impact through Disruptive Events

    Large language models (LLMs) and dialogue agents have existed for years, but the release of recent GPT models has been a watershed moment for artificial intelligence (AI) research and society at large. Immediately recognized for its generative capabilities and versatility, ChatGPT's impressive proficiency across technical and creative domains led to its widespread adoption. While society grapples with the emerging cultural impacts of ChatGPT, critiques of ChatGPT's impact within the machine learning community have coalesced around its performance or other conventional Responsible AI evaluations relating to bias, toxicity, and 'hallucination.' We argue that these latter critiques draw heavily on a particular conceptualization of the 'human-centered' framework, which tends to cast atomized individuals as the key recipients of both the benefits and detriments of technology. In this article, we direct attention to another dimension of LLMs and dialogue agents' impact: their effect on social groups, institutions, and accompanying norms and practices. By illustrating ChatGPT's social impact through three disruptive events, we challenge individualistic approaches in AI development and contribute to ongoing debates around the ethical and responsible implementation of AI systems. We hope this effort will call attention to more comprehensive and longitudinal evaluation tools and compel technologists to go beyond human-centered thinking and ground their efforts in social-centered AI.

    AI in the newsroom: A data quality assessment framework for employing machine learning in journalistic workflows

    AI-driven journalism refers to various methods and tools for gathering, verifying, producing, and distributing news information. Their potential is to extend human capabilities and create new forms of augmented journalism. Although scholars agree on the need to embed journalistic values in these systems to make AI-driven systems accountable, less attention has been paid to data quality, even though the accuracy and efficiency of the results depend on high-quality data. Data quality remains complex to define insofar as it is a multidimensional, highly domain-dependent concept. Assessing data quality in the context of AI-driven journalism therefore requires a broader, interdisciplinary approach, drawing on the challenges of data quality in machine learning and the ethical challenges of using machine learning in journalism. These considerations ground a conceptual data quality assessment framework that aims to support the collection and pre-processing stages in machine learning. It aims to strengthen data literacy in journalism and to build a bridge between scientific disciplines that should be viewed as complementary.
    Dierickx, L.; Lindén, C.; Opdahl, A.; Khan, S.; Guerrero Rojas, D. (2023). AI in the newsroom: A data quality assessment framework for employing machine learning in journalistic workflows. Editorial Universitat Politècnica de València. 217-225. https://doi.org/10.4995/CARMA2023.2023.1644021722
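    To make the idea of multidimensional data quality concrete, the sketch below scores two commonly cited dimensions, completeness (share of non-missing values) and uniqueness (share of non-duplicate records), over a toy article dataset. This is a minimal illustration under our own assumptions, not the framework from the paper; all names are hypothetical:

```python
from collections import Counter

def assess_quality(rows, fields):
    """Compute two illustrative data-quality metrics for a list of
    records: completeness and uniqueness, each in [0, 1]."""
    total_cells = len(rows) * len(fields)
    filled = sum(1 for r in rows for f in fields
                 if r.get(f) not in (None, ""))
    # A record is a duplicate if its field tuple was seen before.
    keys = [tuple(r.get(f) for f in fields) for r in rows]
    duplicates = sum(count - 1 for count in Counter(keys).values())
    return {
        "completeness": filled / total_cells,
        "uniqueness": 1 - duplicates / len(rows),
    }

articles = [
    {"headline": "A", "source": "x"},
    {"headline": "B", "source": "y"},
    {"headline": "B", "source": "y"},   # duplicate record
    {"headline": None, "source": "z"},  # missing headline
]
print(assess_quality(articles, ["headline", "source"]))
# {'completeness': 0.875, 'uniqueness': 0.75}
```

    Other dimensions the data-quality literature names, such as timeliness or validity, would slot in as further per-record checks; the point of the framework is that such scores are computed before the data reaches model training.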

    The Global Struggle Over AI Surveillance: Emerging Trends and Democratic Responses

    From cameras that identify the faces of passersby to algorithms that keep tabs on public sentiment online, artificial intelligence (AI)-powered tools are opening new frontiers in state surveillance around the world. Law enforcement, national security, criminal justice, and border management organizations in every region are relying on these technologies (which use statistical pattern recognition, machine learning, and big data analytics) to monitor citizens. What are the governance implications of these enhanced surveillance capabilities?
    This report explores the challenge of safeguarding democratic principles and processes as AI technologies enable governments to collect, process, and integrate unprecedented quantities of data about the online and offline activities of individual citizens. Three complementary essays examine the spread of AI surveillance systems, their impact, and the transnational struggle to erect guardrails that uphold democratic values.
    In the lead essay, Steven Feldstein, a senior fellow at the Carnegie Endowment for International Peace, assesses the global spread of AI surveillance tools and ongoing efforts at the local, national, and multilateral levels to set rules for their design, deployment, and use. The essay gives particular attention to the dynamics in young or fragile democracies and hybrid regimes, where checks on surveillance powers may be weakened but civil society still has space to investigate and challenge surveillance deployments.
    Two case studies provide more granular depictions of how civil society can influence this norm-shaping process. In the first, Eduardo Ferreyra of Argentina's Asociación por los Derechos Civiles discusses strategies for overcoming common obstacles to research and debate on surveillance systems. In the second, Danilo Krivokapic of Serbia's SHARE Foundation describes how his organization drew national and global attention to the deployment of Huawei smart cameras in Belgrade.

    Smart Machine Vision for Universal Spatial Mode Reconstruction

    Structured light beams, in particular those carrying orbital angular momentum (OAM), have gained considerable attention due to their potential for enlarging the transmission capabilities of communication systems. However, the use of OAM-carrying light in communications faces two major problems, namely distortions introduced during propagation in disordered media, such as the atmosphere or optical fibers, and the large divergence that high-order OAM modes experience. While the use of non-orthogonal modes may offer a way to circumvent the divergence of high-order OAM fields, artificial intelligence (AI) algorithms have shown promise for solving the mode-distortion issue. Unfortunately, current AI-based algorithms rely on handling large amounts of data, which generally leads to long processing times and high power consumption. Here we show that a low-power, low-cost image sensor can itself act as an artificial neural network that simultaneously detects and reconstructs distorted OAM-carrying beams. We demonstrate the capabilities of our device by reconstructing (with 95% efficiency) individual Vortex, Laguerre-Gaussian (LG) and Bessel modes, as well as hybrid (non-orthogonal) coherent superpositions of such modes. Our work provides a potentially useful basis for the development of low-power-consumption, light-based communication devices.
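    For readers unfamiliar with the modes mentioned above, the sketch below evaluates the textbook Laguerre-Gaussian LG_{0,l} field amplitude and checks the characteristic "donut" intensity profile of an OAM-carrying beam: dark on axis, with a ring of peak intensity at r = w*sqrt(|l|/2). This illustrates the standard formula only and is not the paper's reconstruction method:

```python
import math

def lg_amplitude(r, phi, l=1, w=1.0):
    """Laguerre-Gaussian LG_{p,l} field amplitude at (r, phi),
    restricted to p = 0, where the generalized Laguerre
    polynomial L_0 = 1 and the radial part simplifies to
    (sqrt(2) r / w)^|l| * exp(-r^2 / w^2)."""
    radial = (r * math.sqrt(2) / w) ** abs(l) * math.exp(-r**2 / w**2)
    # exp(i * l * phi) carries the orbital angular momentum.
    return radial * complex(math.cos(l * phi), math.sin(l * phi))

l, w = 2, 1.0
# Intensity on the beam axis vanishes for l != 0 (the dark core) ...
center = abs(lg_amplitude(0.0, 0.0, l=l, w=w)) ** 2
# ... and peaks on the ring at r = w * sqrt(|l| / 2).
ring = abs(lg_amplitude(w * math.sqrt(l / 2), 0.0, l=l, w=w)) ** 2
print(center, ring)
```

    The divergence problem the abstract mentions follows directly from this profile: the peak-intensity ring radius grows with sqrt(|l|), so high-order OAM beams spread out quickly.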