Accounting for the Neglected Dimensions of AI Progress
We analyze and reframe AI progress. In addition to the prevailing metrics of
performance, we highlight the usually neglected costs paid in the development
and deployment of a system, including: data, expert knowledge, human oversight,
software resources, computing cycles, hardware and network facilities,
development time, etc. These costs are paid throughout the life cycle of an AI
system, fall differentially on different individuals, and vary in magnitude
depending on the replicability and generality of the AI solution. The
multidimensional performance and cost space can be collapsed to a single
utility metric for a user with transitive and complete preferences. Even absent
a single utility function, AI advances can be generically assessed by whether
they expand the Pareto (optimal) surface. We explore a subset of these
neglected dimensions using the two case studies of Alpha* and ALE. This
broadened conception of progress in AI should lead to novel ways of measuring
success in AI and can help set milestones for future progress.
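As an illustrative sketch of the Pareto-surface criterion described above (the dimension names, systems, and numbers below are hypothetical and not taken from the paper), a candidate system expands the Pareto (optimal) surface when no existing system matches or beats it on every performance and cost dimension at once:

    # Minimal sketch with hypothetical data: performance is maximized,
    # each (neglected) cost dimension is minimized.
    COSTS = ("compute", "data", "oversight")  # stand-in cost dimensions

    def dominates(a, b):
        """True if system a is at least as good as b on every dimension
        and strictly better on at least one."""
        at_least_as_good = (a["performance"] >= b["performance"]
                            and all(a[c] <= b[c] for c in COSTS))
        strictly_better = (a["performance"] > b["performance"]
                           or any(a[c] < b[c] for c in COSTS))
        return at_least_as_good and strictly_better

    def expands_pareto_surface(candidate, existing):
        """The candidate expands the surface iff no existing system dominates it."""
        return not any(dominates(s, candidate) for s in existing)

    existing = [
        {"performance": 0.90, "compute": 100, "data": 50, "oversight": 10},
        {"performance": 0.85, "compute": 40,  "data": 30, "oversight": 5},
    ]
    candidate = {"performance": 0.88, "compute": 20, "data": 30, "oversight": 5}
    print(expands_pareto_surface(candidate, existing))  # True: cheaper at comparable performance

A user with complete and transitive preferences could instead collapse these dimensions into a single utility score (for example, a weighted sum of performance minus costs) and rank systems directly by it, which is the single-metric view the abstract contrasts with the Pareto criterion.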
¿Debemos temer a la inteligencia artificial?: análisis en profundidad [Should we fear artificial intelligence? An in-depth analysis]
The ISBN corresponds to the electronic version of the document. For some years now, artificial intelligence (AI) has been gaining momentum. A wave of programs that take full advantage of the latest generation of processors is achieving spectacular results. One of the most prominent applications of AI is speech recognition: while the first models were clumsy and plagued by constant flaws, they can now respond correctly to all kinds of user requests in the most varied situations. Remarkable progress is also being made in image recognition, with programs capable of recognizing figures, and even cats, in online videos now being adapted so that software can control the autonomous cars that will fill our streets in the coming years. Today we cannot imagine a future in Europe without advanced AI influencing ever more facets of our lives, from work to medicine and from education to interpersonal relationships.
Confidence-Building Measures for Artificial Intelligence: Workshop Proceedings
Foundation models could eventually introduce several pathways for undermining
state security: accidents, inadvertent escalation, unintentional conflict, the
proliferation of weapons, and the interference with human diplomacy are just a
few on a long list. The Confidence-Building Measures for Artificial
Intelligence workshop hosted by the Geopolitics Team at OpenAI and the Berkeley
Risk and Security Lab at the University of California brought together a
multistakeholder group to think through the tools and strategies to mitigate
the potential risks introduced by foundation models to international security.
Originating in the Cold War, confidence-building measures (CBMs) are actions
that reduce hostility, prevent conflict escalation, and improve trust between
parties. The flexibility of CBMs makes them a key instrument for navigating the
rapid changes in the foundation model landscape. Participants identified the
following CBMs that directly apply to foundation models and which are further
explained in these conference proceedings: 1. crisis hotlines; 2. incident
sharing; 3. model, transparency, and system cards; 4. content provenance and
watermarks; 5. collaborative red teaming and table-top exercises; and 6. dataset
and evaluation sharing. Because most foundation model developers are
non-government entities, many CBMs will need to involve a wider stakeholder
community. These measures can be implemented either by AI labs or by relevant
government actors.
We are all one together: peer educators' views about falls prevention education for community-dwelling older adults - a qualitative study
Background: Falls are common in older people. Despite strong evidence for effective falls prevention strategies, there appears to be limited translation of these strategies from research to clinical practice. Use of peers in delivering falls prevention education messages has been proposed to improve uptake of falls prevention strategies and facilitate translation to practice. Volunteer peer educators often deliver educational presentations on falls prevention to community-dwelling older adults. However, research evaluating the effectiveness of peer-led education approaches in falls prevention has been limited and no known study has evaluated such a program from the perspective of peer educators involved in delivering the message. The purpose of this study was to explore peer educators’ perspective about their role in delivering peer-led falls prevention education for community-dwelling older adults.
Methods: A two-stage qualitative inductive constant comparative design was used. In stage one (core component), focus group interviews involving a total of eleven participants were conducted. During stage two (supplementary component), semi-structured interviews with two participants were conducted. Data were analysed thematically by two researchers independently. Key themes were identified and findings were displayed in a conceptual framework.
Results: Peer educators were motivated to deliver educational presentations and, importantly, to reach an optimal peer connection with their audience. Key themes identified included both personal and organisational factors that affect educators’ capacity to facilitate their peers’ engagement with the message. Personal factors that facilitated message delivery and engagement included peer-to-peer connection and perceived credibility, while barriers included a reluctance by some audience members to accept the message that they were at risk of falling. Organisational factors, including ongoing training for peer educators and formative feedback following presentations, were perceived as essential because they affect successful message delivery.
Conclusions: Peer educators have the potential to effectively deliver falls prevention education to older adults and influence acceptance of the message, as they possess the peer-to-peer connection that facilitates optimal engagement. There is a need to consider incorporating findings from this research into a formal, large-scale evaluation of the effectiveness of the peer education approach in reducing falls in older adults.
Frontier AI Regulation: Managing Emerging Risks to Public Safety
Advanced AI models hold the promise of tremendous benefits for humanity, but
society needs to proactively manage the accompanying risks. In this paper, we
focus on what we term "frontier AI" models: highly capable foundation models
that could possess dangerous capabilities sufficient to pose severe risks to
public safety. Frontier AI models pose a distinct regulatory challenge:
dangerous capabilities can arise unexpectedly; it is difficult to robustly
prevent a deployed model from being misused; and, it is difficult to stop a
model's capabilities from proliferating broadly. To address these challenges,
at least three building blocks for the regulation of frontier models are
needed: (1) standard-setting processes to identify appropriate requirements for
frontier AI developers, (2) registration and reporting requirements to provide
regulators with visibility into frontier AI development processes, and (3)
mechanisms to ensure compliance with safety standards for the development and
deployment of frontier AI models. Industry self-regulation is an important
first step. However, wider societal discussions and government intervention
will be needed to create standards and to ensure compliance with them. We
consider several options to this end, including granting enforcement powers to
supervisory authorities and licensure regimes for frontier AI models. Finally,
we propose an initial set of safety standards. These include conducting
pre-deployment risk assessments; commissioning external scrutiny of model behavior; using
risk assessments to inform deployment decisions; and monitoring and responding
to new information about model capabilities and uses post-deployment. We hope
this discussion contributes to the broader conversation on how to balance
public safety risks and innovation benefits from advances at the frontier of AI
development.