Building the case for actionable ethics in digital health research supported by artificial intelligence
The digital revolution is disrupting the ways in which health research is conducted, and subsequently, changing healthcare. Direct-to-consumer wellness products and mobile apps, pervasive sensor technologies and access to social network data offer exciting opportunities for researchers to passively observe and/or track patients ‘in the wild’ and 24/7. The volume of granular personal health data gathered using these technologies is unprecedented, and is increasingly leveraged to inform personalized health promotion and disease treatment interventions. The use of artificial intelligence in the health sector is also increasing. Although rich with potential, the digital health ecosystem presents new ethical challenges for those making decisions about the selection, testing, implementation and evaluation of technologies for use in healthcare. As the ‘Wild West’ of digital health research unfolds, it is important to recognize who is involved, and identify how each party can and should take responsibility to advance the ethical practices of this work. While not a comprehensive review, we describe the landscape, identify gaps to be addressed, and offer recommendations as to how stakeholders can and should take responsibility to advance socially responsible digital health research.
Exploring entertainment medicine and professionalization of self-care: Interview study among doctors on the potential effects of digital self-tracking
Background: Nowadays, digital self-tracking devices offer a plethora of possibilities to both healthy and chronically ill users who want to closely examine their body. This study suggests that self-tracking in a private setting will lead to shifting understandings in professional care. To provide more insight into these shifts, this paper seeks to lay bare the promises and challenges of self-tracking while staying close to the everyday professional experience of the physician.
Objective: The aim of this study was to (1) offer an analysis of how medical doctors evaluate self-tracking methods in their practice and (2) explore the anticipated shifts that digital self-care will bring about in relation to our findings and those of other studies.
Methods: A total of 12 in-depth semistructured interviews with general practitioners (GPs) and cardiologists were conducted in Flanders, Belgium, from November 2015 to November 2016. Thematic analysis was applied to examine the transcripts in an iterative process.
Results: Four major themes arose in our body of data: (1) the patient as health manager, (2) health obsession and medicalization, (3) information management, and (4) shifting roles of the doctors and impact on the health care organization. Our research findings show a nuanced understanding of the potentials and pitfalls of different forms of self-tracking. The necessity of contextualization of self-tracking data and a professionalization of self-care through digital devices come to the fore as important overarching concepts.
Conclusions: This interview study with Belgian doctors examines the potentials and challenges of self-monitoring while focusing on the everyday professional experience of the physician. The dialogue between our dataset and the existing literature affords a fine-grained image of digital self-care and its current meaning in a medical-professional landscape.
Privacy and Accountability in Black-Box Medicine
Black-box medicine—the use of big data and sophisticated machine learning techniques for health-care applications—could be the future of personalized medicine. Black-box medicine promises to make it easier to diagnose rare diseases and conditions, identify the most promising treatments, and allocate scarce resources among different patients. But to succeed, it must overcome two separate, but related, problems: patient privacy and algorithmic accountability. Privacy is a problem because researchers need access to huge amounts of patient health information to generate useful medical predictions. And accountability is a problem because black-box algorithms must be verified by outsiders to ensure they are accurate and unbiased, but this means giving outsiders access to this health information.
This article examines the tension between the twin goals of privacy and accountability and develops a framework for balancing that tension. It proposes three pillars for an effective system of privacy-preserving accountability: substantive limitations on the collection, use, and disclosure of patient information; independent gatekeepers regulating information sharing between those developing and verifying black-box algorithms; and information-security requirements to prevent unintentional disclosures of patient information. The article examines and draws on a similar debate in the field of clinical trials, where disclosing information from past trials can lead to new treatments but also threatens patient privacy.
Twenty questions about design behavior for sustainability: Report of the International Expert Panel on behavioral science for design
How behavioral scientists, engineers, and architects can work together to advance how we all understand and practice design, in order to enhance sustainability in the built environment and beyond.
https://www.nature.com/documents/design_behavior_for_sustainability.pdf
ADVANCING TELEHEALTH THROUGH ARTIFICIAL INTELLIGENCE: INCORPORATING EMOTIONAL INTELLIGENCE AND ADDRESSING CYBERSECURITY CHALLENGES
This culminating experience project explores the integration of Emotional Artificial Intelligence (Emotional AI) into telehealth systems, addressing the dual challenges of enhancing patient care and mitigating cybersecurity risks. The research questions are: (Q1) How can Emotionally Intelligent AI improve telehealth systems' ability to recognize and respond to mental health symptoms? and (Q2) What are the specific cybersecurity challenges associated with AI in telehealth, and how can they be mitigated? The findings for each question are: Q1: Emotionally Intelligent AI can significantly enhance telehealth by providing personalized, empathetic interactions that improve patient engagement, adherence to treatment plans, and early detection of mental health issues. AI-driven chatbots and virtual assistants, equipped with natural language processing and machine learning capabilities, offer real-time emotional support and tailored therapeutic interventions, leading to improved mental health outcomes. Q2: Integrating AI into telehealth presents significant cybersecurity challenges, including data privacy issues, unauthorized access, and the necessity for robust encryption and access control measures. Addressing these challenges requires a multi-faceted approach, including the implementation of standardized cybersecurity protocols, continuous monitoring, and adherence to regulatory frameworks such as HIPAA and GDPR. The conclusions drawn from this study are twofold: (1) Emotionally Intelligent AI holds substantial promise for enhancing the effectiveness and empathy of telehealth services, particularly in mental health care, and (2) effective mitigation of cybersecurity risks is crucial for the safe and ethical deployment of AI in telehealth. Future research should focus on long-term studies to assess the sustained impact of Emotional AI on patient outcomes and the development of advanced cybersecurity measures tailored to the unique needs of AI-driven telehealth systems.
Equity in the Digital Age: How Health Information Technology Can Reduce Disparities
While enormous medical and technological advancements have been made over the last century, it is only very recently that there have been similar rates of development in the field of health information technology (HIT). This report examines some of the advancements in HIT and its potential to shape the future health care experiences of consumers. Combined with better data collection, HIT offers significant opportunities to improve access to care, enhance health care quality, and create targeted strategies that help promote health equity. We must also keep in mind that technology gaps exist, particularly among communities of color, immigrants, and people who do not speak English well. HIT implementation must be done in a manner that responds to the needs of all populations to make sure that it enhances access, facilitates enrollment, and improves quality in a way that does not exacerbate existing health disparities for the most marginalized and underserved.
Utilization of Media-Driven Technology for Health Promotion and Risk Reduction among American Indian and Alaska Native Young Adults: An Exploratory Study
Across the developmental spectrum, American Indian and Alaska Native (AI/AN) adolescents and young adults experience considerable behavioral and mental health disparities, including substance abuse, depression, and engagement in sexual behaviors which enhance risk of pregnancy and sexually transmitted infections. Health-focused interventions utilizing digital and media technology hold significant promise among tribal communities, as they have the capacity to eliminate geography-based barriers. Utilizing a sample of 210 self-identified AI/AN students attending tribal colleges, this study identified the most effective technologies and intervention strategies, as well as health seeking patterns and preferences, which may impact implementation and sustainable use in tribal settings. The use of technology was both diverse and pervasive among AI/AN young adults, mirroring or exceeding patterns of young adults from the broader population. These data suggest that technology-based interventions may effectively deliver information, resources, and behavior change tools to AI/AN young adults, particularly when reflecting their unique worldviews and social contexts.