4 research outputs found
Automated, not Automatic: Needs and Practices in European Fact-checking Organizations as a basis for Designing Human-centered AI Systems
To mitigate the negative effects of false information more effectively, the
development of automated AI (artificial intelligence) tools assisting
fact-checkers is needed. Despite existing research, there is still a gap
between fact-checking practitioners' needs and pains and current AI
research. We aspire to bridge this gap by employing methods of information
behavior research to identify implications for designing better human-centered
AI-based supporting tools.
In this study, we conducted semi-structured in-depth interviews with Central
European fact-checkers. Their information behavior and requirements for desired
supporting tools were analyzed using iterative bottom-up content analysis,
drawing on techniques from grounded theory. The most significant needs were
validated with a survey extended to fact-checkers from across Europe, in which
we collected 24 responses from 20 European countries, i.e., 62% of active
European IFCN (International Fact-Checking Network) signatories.
Our contributions are theoretical as well as practical. First, by mapping our
findings about the needs of fact-checking organizations to the relevant tasks
for AI research, we have shown that the methods of information behavior
research are relevant for studying processes in these organizations and that
these methods can be used to bridge the gap between users and AI researchers.
Second, we have identified fact-checkers' needs and pains, focusing on
previously unexplored dimensions and emphasizing the needs of fact-checkers
from Central and Eastern Europe as well as from low-resource language groups,
which have implications for the development of new resources (datasets) as
well as for the focus of AI research in this domain.
Comment: 41 pages, 13 figures, 1 table, 2 annexes
Combining teaching and research: a BIP on geophysical and archaeological prospection of North Frisian medieval settlement patterns
We performed a research-oriented EU Erasmus+ Blended Intensive Program (BIP) with participants from four countries, focused on North Frisian terp settlements from the Roman Iron Age and medieval times. We show that the complex terp structure and environment can be efficiently prospected using combined magnetic and EMI mapping, together with seismic and geoelectric profiling and drilling. We found evidence of multiple terp phases and a harbor at the Roman Iron Age terp of Tofting. In contrast, the medieval terp of Stolthusen is more simply constructed, probably single-phase. The BIP proved to be a suitable tool for high-level hands-on education, adding value to the research conducted in ongoing projects.
FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on six guiding principles for trustworthy AI in healthcare, i.e. Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at proof-of-concept stages to facilitate the future translation of medical AI towards clinical practice.