10,971 research outputs found

    Exploring the role of AI in classifying, analyzing, and generating case reports on assisted suicide cases: feasibility and ethical implications

    This paper presents a study on the use of AI models for the classification of case reports on assisted-suicide procedures. The database of the five Dutch regional bioethics committees was scraped to collect the 72 case reports available in English. We trained several AI models to classify reports according to the categories defined by the Dutch Termination of Life on Request and Assisted Suicide (Review Procedures) Act. In a related project, we fine-tuned an OpenAI GPT-3.5-turbo large language model to generate new fictional but plausible cases. As AI is increasingly used for judgment, it is possible to imagine its application in decision-making regarding assisted suicide. Here we explore the two questions that arise, feasibility and ethics, with the aim of contributing to a critical assessment of the potential role of AI in decision-making in highly sensitive areas.
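The abstract does not specify which classification models were trained; as a dependency-free illustration of the general task, the sketch below classifies short report texts by bag-of-words overlap with per-category token centroids. The example sentences and category names are invented placeholders, not data or categories from the Dutch review committees.

```python
# Toy text classifier: assign a report to the category whose accumulated
# token counts it overlaps with most. Purely illustrative, not the paper's model.
from collections import Counter

def tokens(text):
    return Counter(text.lower().split())

def train(examples):
    """examples: list of (text, label) -> dict mapping label to summed token counts."""
    centroids = {}
    for text, label in examples:
        centroids.setdefault(label, Counter()).update(tokens(text))
    return centroids

def classify(text, centroids):
    t = tokens(text)
    # Score each category by token-count overlap with the query text.
    def overlap(c):
        return sum(min(t[w], c[w]) for w in t)
    return max(centroids, key=lambda lab: overlap(centroids[lab]))

# Invented placeholder examples (hypothetical categories):
examples = [
    ("the request was voluntary and well considered", "voluntary_request"),
    ("an independent physician was consulted", "consultation"),
]
centroids = train(examples)
label = classify("a second independent physician was consulted", centroids)
```

A real system would replace the centroid scoring with a trained model, but the train/classify split over labeled report texts is the same shape of pipeline.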

    Democratizing Knowledge Creation Through Human-AI Collaboration in Academic Peer Review

    In the rapidly evolving landscape of academic research, artificial intelligence (AI) is poised to revolutionize traditional academic peer review processes and knowledge evaluation systems. We believe that the growing collaboration between humans and AI will disrupt how academics assess scholarly manuscripts and disseminate published works, in a way that helps close gaps among diverse scholars and competing scholarly traditions. Such human-AI collaboration is not a distant reality but is unfolding before us, in part through the development, application, and actual use of AI, including large language models (LLMs). This opinion piece focuses on the academic peer review process. It offers preliminary ideas on how human-AI collaboration is likely to change peer review, highlights the benefits, identifies possible bottlenecks, and underscores the potential for democratizing academic culture worldwide.

    Semi-Automation in Video Editing

    How can we use artificial intelligence (AI) and machine learning (ML) to make video editing as easy as editing text? In this thesis, the problem of using AI to support video editing is explored from a human-AI interaction perspective, with an emphasis on using AI to support users. Video is a dual-track medium with audio and visual tracks. Editing videos requires synchronizing these two tracks with precise operations down to the millisecond. Making it as easy as editing text may not currently be possible. How, then, should we support users with AI, and what are the current challenges in doing so? Five key questions drove the research in this thesis. What is the state of the art in using AI to support video editing? What are the needs and expectations of video professionals regarding AI? What are the impacts on the efficiency and accuracy of subtitles when AI is used to support subtitling? What are the changes in user experience brought about by AI-assisted subtitling? How can multiple AI methods be used to support cropping and panning tasks?
    In this thesis, we employed a user-experience-focused and task-based approach to address semi-automation in video editing. The first paper provided a synthesis and critical review of existing work on AI-based tools for video editing, and offered some answers on how and for what AI can be used to support users, via a survey of 14 video professionals. The second paper presented a prototype of AI-assisted subtitling built on production-grade video editing software; it is the first comparative evaluation of both the performance and the user experience of AI-assisted subtitling, conducted with 24 novice users. The third work described an idiom-based tool for converting widescreen videos made for television to narrower aspect ratios for mobile social media platforms. It explored a new method for performing cropping and panning using five AI models, and presented an evaluation with 5 users and a review with a professional video editor.
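The thesis does not detail its subtitling pipeline here; one recurring step in AI-assisted subtitling is grouping word-level timestamps (as an ASR model might emit) into timed subtitle cues. The sketch below does this under a maximum line length. The sample words, timings, and the 42-character limit are illustrative assumptions, not values from the thesis.

```python
# Group (word, start_sec, end_sec) triples into subtitle cues so that no cue
# text exceeds max_chars. Each cue keeps the start of its first word and the
# end of its last word.
MAX_CHARS = 42

def segment(words, max_chars=MAX_CHARS):
    """words: list of (word, start_sec, end_sec) -> list of (text, start, end) cues."""
    cues, text, start = [], [], None
    prev_end = None
    for w, s, e in words:
        candidate = " ".join(text + [w])
        if text and len(candidate) > max_chars:
            cues.append((" ".join(text), start, prev_end))
            text, start = [], None
        if start is None:
            start = s
        text.append(w)
        prev_end = e
    if text:
        cues.append((" ".join(text), start, prev_end))
    return cues

# Illustrative word timings, as an ASR model might produce:
words = [("making", 0.0, 0.3), ("video", 0.3, 0.6), ("editing", 0.6, 1.0),
         ("as", 1.0, 1.1), ("easy", 1.1, 1.4), ("as", 1.4, 1.5),
         ("editing", 1.5, 1.9), ("text", 1.9, 2.2)]
cues = segment(words)
```

A real tool would add constraints on cue duration and line breaks, but the greedy grouping over timestamped words is the core of the semi-automated step.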

    Exploring the applicability of machine learning based artificial intelligence in the analysis of cardiovascular imaging

    Worldwide, the prevalence of cardiovascular diseases has doubled, demanding new diagnostic tools. Artificial intelligence, especially machine learning and deep learning, offers innovative possibilities for medical research. Despite historical challenges, such as a lack of data, these techniques hold potential for cardiovascular research. This thesis explores the application of machine learning and deep learning in cardiology, focusing on automation and decision support in cardiovascular imaging. Part I of this thesis focuses on automating cardiovascular MRI analysis. A deep learning model was developed to analyze the ascending aorta in cardiovascular MRI images. The model's results were used to investigate connections between genetic material and aortic properties, and between aortic properties and cardiovascular diseases and mortality. A second model was developed to select MRI images suitable for analyzing the pulmonary artery. Part II focuses on decision support in nuclear cardiovascular imaging. A first machine learning model was developed to predict myocardial ischemia from CTA variables. In addition, a deep neural network was used to identify reduced oxygen supply through the coronary arteries, which carry oxygen-rich blood to the heart, and cardiovascular risk features from PET images. This thesis successfully explores the possibilities of machine learning and deep learning in cardiovascular research, with a focus on automated analysis and decision support.

    Harmonizing Global Voices: Culturally-Aware Models for Enhanced Content Moderation

    Content moderation at scale faces the challenge of accounting for local cultural distinctions when assessing content. While global policies aim to maintain consistent decision-making and prevent arbitrary rule enforcement, they often overlook regional variations in how the natural language expressed in content is interpreted. In this study, we investigate how moderation systems can address this issue by adapting to local nuances of comprehension. We train large language models on extensive datasets of media news and articles to create culturally attuned models, which aim to capture the nuances of communication across geographies and to recognize cultural and societal variations in what is considered offensive content. We further explore the capability of these models to generate explanations for instances of content violation, aiming to shed light on how policy guidelines are perceived when cultural and societal contexts change. We find that training on extensive media datasets successfully induced cultural awareness and improved the handling of content violations on a regional basis. These advancements also include the ability to provide explanations that align with specific local norms and nuances, as evidenced by the annotators' preferences in our study. This multifaceted success reinforces the critical role of an adaptable content moderation approach in keeping pace with the ever-evolving nature of the content it oversees.
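The paper's approach is model-based; purely as a toy illustration of the underlying idea (the same text can receive different moderation decisions under region-specific policy), the sketch below checks a text against per-region policy lexicons. The region names and placeholder terms are invented, and this lookup stands in for what the paper does with culturally attuned language models.

```python
# Region-conditioned moderation decision: the verdict depends on which
# locale's policy lexicon is applied. All names below are invented placeholders.
REGIONAL_LEXICONS = {
    "region_a": {"slurword1"},
    "region_b": {"slurword1", "slurword2"},
}

def moderate(text, region):
    """Flag a text if it contains any term in the given region's policy lexicon."""
    hits = {w for w in text.lower().split() if w in REGIONAL_LEXICONS[region]}
    return {"violation": bool(hits), "matched": sorted(hits)}

# The same sentence passes in one locale and is flagged in another.
verdict_a = moderate("this contains slurword2 only", "region_a")
verdict_b = moderate("this contains slurword2 only", "region_b")
```

The point of the paper is precisely that fixed lexicons cannot capture such variation in context-dependent language, which is why it trains region-aware models instead; the toy only demonstrates the decision interface.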

    Connecting AR and BIM: a Prototype App

    This contribution discusses an ongoing project integrating information modeling and immersive technologies for the built space, in particular augmented reality (AR). We examined tools and procedures to quickly recognize the equipment present on telecommunication network sites and access the corresponding components on a digital information model. A first phase of the project, recently completed, produced an app prototype for mobile devices capable of showing a 1:1-scale AR representation on-site. The project highlights current limitations and opportunities in making the interaction between AR and building information modeling (BIM) technologies fully scalable.

    ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health

    Large Language Models (LLMs) have recently garnered attention with the release of ChatGPT, a user-centered chatbot released by OpenAI. In this perspective article, we retrace the evolution of LLMs to understand the revolution brought by ChatGPT to the artificial intelligence (AI) field. The opportunities offered by LLMs in supporting scientific research are numerous, and various models have already been tested on Natural Language Processing (NLP) tasks in this domain. The impact of ChatGPT has been huge for the general public and the research community, with many authors using the chatbot to write parts of their articles and some papers even listing ChatGPT as an author. Alarming ethical and practical challenges emerge from the use of LLMs, particularly in the medical field, given the potential impact on public health. Infodemics are a trending topic in public health, and the ability of LLMs to rapidly produce vast amounts of text could amplify the spread of misinformation at an unprecedented scale, creating an "AI-driven infodemic," a novel public health threat. Policies to counter this phenomenon need to be elaborated rapidly, as the inability to accurately detect AI-produced text remains an unresolved issue.