
    How basic-level objects facilitate question-asking in a categorization task

    The ability to categorize information is essential to everyday tasks such as identifying the cause of an event given a set of likely explanations, or pinpointing the correct diagnosis from a set of possible diagnoses through a sequence of probing questions. In three studies, we investigated how the level of inclusiveness at which objects are presented (basic-level vs. subordinate-level) influences children's (7- and 10-year-olds) and adults' performance in a sequential binary categorization task. Study 1 found a robust facilitating effect of basic-level objects on the ability to ask effective questions in a computerized version of the Twenty Questions game. Study 2 suggested that this facilitating effect might be due to the kinds of object-differentiating features participants generate when provided with basic-level as compared to subordinate-level objects. Study 3 ruled out the alternative hypothesis that basic-level objects facilitate the selection of the most efficient among a given set of features.
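
    The "effective questions" measured here can be formalized as expected information gain: under a uniform prior over the remaining candidate objects, the information gain of a deterministic yes/no question equals the entropy of the split it induces, so the best question is the one whose feature divides the candidates most evenly. A minimal Python sketch, using an invented object set rather than the study's stimuli:

```python
import math

def split_entropy(p):
    """Entropy (bits) of a yes/no split: 1.0 for a 50/50 split, 0.0 if uninformative."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def question_quality(objects, feature):
    """Expected information gain of asking 'does it have <feature>?',
    assuming a uniform prior over the candidates. For a deterministic
    binary question this equals the entropy of the yes/no split."""
    p_yes = sum(feature in feats for _, feats in objects) / len(objects)
    return split_entropy(p_yes)

# Hypothetical candidates with hand-picked features (not the study's materials)
objects = [
    ("dog",   {"animal", "four-legged", "barks"}),
    ("cat",   {"animal", "four-legged"}),
    ("eagle", {"animal", "flies"}),
    ("car",   {"vehicle", "four-wheeled"}),
]

# "four-legged" splits the four candidates 2/2 and therefore scores highest
best = max(["animal", "four-legged", "barks"], key=lambda f: question_quality(objects, f))
```

    Features that split the candidate set evenly in this sense are exactly the kind of object-differentiating features that, per Study 2, basic-level objects help participants generate.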

    Toward a framework for selecting behavioural policies: How to choose between boosts and nudges

    This publication is freely accessible with the permission of the rights owner due to an Alliance licence and a national licence (funded by the DFG, German Research Foundation), respectively. In this paper, we analyse the difference between two types of behavioural policies – nudges and boosts. We distinguish them on the basis of the mechanisms through which they are expected to operate and identify the contextual conditions that are necessary for each policy to be successful. Our framework helps judge which type of policy is more likely to bring about the intended behavioural outcome in a given situation.

    The Triage Capability of Laypersons: Retrospective Exploratory Analysis

    Background: Although medical decision-making may be thought of as a task involving health professionals, many decisions, including critical health-related ones, are made by laypersons alone. Specifically, as the first step of most care episodes, it is the patient who determines whether and where to seek health care (triage). Overcautious self-assessments (ie, overtriaging) may lead to overutilization of health care facilities and overcrowded emergency departments, whereas imprudent decisions (ie, undertriaging) constitute a risk to the patient's health. Recently, patient-facing decision support systems, commonly known as symptom checkers, have been developed to assist laypersons in these decisions. Objective: The purpose of this study is to identify factors influencing laypersons' ability to self-triage and their risk averseness in self-triage decisions. Methods: We analyzed publicly available data on 91 laypersons appraising 45 short fictitious patient descriptions (case vignettes; N=4095 appraisals). Using signal detection theory and descriptive and inferential statistics, we explored whether the type of medical decision laypersons face, their confidence in their decision, and sociodemographic factors influence their triage accuracy and the type of errors they make. We distinguished between 2 decisions: whether emergency care was required (decision 1) and whether self-care was sufficient (decision 2). Results: The accuracy of detecting emergencies (decision 1) was higher (mean 82.2%, SD 5.9%) than that of deciding whether any type of medical care is required (decision 2, mean 75.9%, SD 5.25%; t90=8.4; P<.001). Women were more risk averse than men (t89=3.7; P<.001; d=0.39). Conclusions: Our study suggests that laypersons are overcautious in deciding whether they require medical care at all, but they miss identifying a considerable portion of emergencies. Our results further indicate that women are more risk averse than men in both types of decisions. 
Layperson participants made most triage errors when they were certain of their own appraisal. Thus, they might not follow or even seek advice (eg, from symptom checkers) in most instances where advice would be useful.
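
    The signal detection framing can be illustrated in a few lines: with "truly urgent" as the signal, undertriage corresponds to misses and overtriage to false alarms. A minimal Python sketch with invented counts (not the study's data):

```python
def triage_rates(appraisals):
    """appraisals: list of (truly_urgent, rated_urgent) booleans.
    Returns sensitivity (share of emergencies caught) and the overtriage
    rate (share of non-urgent cases escalated), i.e. the two error
    directions contrasted in the study."""
    hits = sum(t and r for t, r in appraisals)
    misses = sum(t and not r for t, r in appraisals)
    false_alarms = sum(not t and r for t, r in appraisals)
    correct_rejections = sum(not t and not r for t, r in appraisals)
    sensitivity = hits / (hits + misses)
    overtriage = false_alarms / (false_alarms + correct_rejections)
    return sensitivity, overtriage

# Invented appraisals: 8 emergencies caught, 2 missed; 3 non-urgent cases
# escalated, 7 correctly handled at home
sens, over = triage_rates(
    [(True, True)] * 8 + [(True, False)] * 2
    + [(False, True)] * 3 + [(False, False)] * 7
)
```

    In this framing, an overcautious rater raises the overtriage rate to buy sensitivity; the study's point is that laypersons pay that cost yet still miss many emergencies.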

    Navigating the Decision Space: Shared Medical Decision Making as Distributed Cognition

    Despite increasing prominence, little is known about the cognitive processes underlying shared decision making. To investigate these processes, we conceptualize shared decision making as a form of distributed cognition. We introduce a Decision Space Model to identify physical and social influences on decision making. Using field observations and interviews, we demonstrate that patients and physicians in both acute and chronic care consider these influences when identifying the need for a decision, searching for decision parameters, and making actionable decisions. Based on the distribution of access to information and actions, we then identify four related patterns: physician-dominated; physician-defined, patient-made; patient-defined, physician-made; and patient-dominated decisions. Results suggest that (a) decision making is necessarily distributed between physicians and patients, (b) differential access to information and action over time requires participants to transform a distributed task into a shared decision, and (c) adverse outcomes may result from failures to integrate physician and patient reasoning. Our analysis unifies disparate findings in the medical decision-making literature and has implications for improving care and medical training.

    When Does Ethnography Need Informed Consent? Practical Answers to Ethical Questions About Ethnographic Methods in HCI Research

    Research in human-computer interaction (HCI) draws on an increasingly broad spectrum of methods for an ever-widening range of research fields. Numerous studies leave the familiar, controllable cosmos of labs and test setups for data collection in order to learn more in the field about the behavior of a user group in its "natural" context. For all research, regardless of field and method, the basic principles of research ethics apply: voluntariness, beneficence, and justice. To guarantee the principle of voluntariness in research practice, the use of consent forms, that is, informed consent (IC), is a critical point for any study design. For many qualitative methods in HCI research there is, with regard to the IC requirement, a direct analogy to the established ethics practice for quantitative methods. Ethnography, however, occupies a special position here. Owing to its core methodological approach of in-situ observation, the topic of IC in particular poses a recurring ethical and practical challenge: in field-based and thus interactionally open research, it is harder to determine which of the people involved should be conceptualized as direct research participants, and from whom IC is needed in what form. This article centers on the question of a sensible and ethically sound use of IC in ethnographic studies in HCI. With the Scale of Situationally Appropriate Privacy Expectations and IC (SPIC scale), a practicable solution is presented that has already proven itself in numerous research projects in the HCI context. 
The core argument of the SPIC scale is that researchers should align their IC measures with the situation-dependent privacy expectations of the people involved. We regard such preservation of privacy expectations as a practical operationalization of the voluntariness principle in open research situations. That such a scheme cannot, however, serve as a blanket licence, and must be re-examined anew for each context, is discussed in a concluding section.

    Interactive Versus Static Decision Support Tools for COVID-19: Randomized Controlled Trial

    Background: During the COVID-19 pandemic, medical laypersons with symptoms indicative of a COVID-19 infection commonly sought guidance on whether and where to find medical care. Numerous web-based decision support tools (DSTs) have been developed, both by public and commercial stakeholders, to assist their decision making. Though most of the DSTs’ underlying algorithms are similar and simple decision trees, their mode of presentation differs: some DSTs present a static flowchart, while others are designed as a conversational agent, guiding the user through the decision tree’s nodes step-by-step in an interactive manner. Objective: This study aims to investigate whether interactive DSTs provide greater decision support than noninteractive (ie, static) flowcharts. Methods: We developed mock interfaces for 2 DSTs (1 static, 1 interactive), mimicking patient-facing, freely available DSTs for COVID-19-related self-assessment. Their underlying algorithm was identical and based on the Centers for Disease Control and Prevention’s guidelines. We recruited adult US residents online in November 2020. Participants appraised the appropriate social and care-seeking behavior for 7 fictitious descriptions of patients (case vignettes). Participants in the experimental groups received either the static or the interactive mock DST as support, while the control group appraised the case vignettes unsupported. We determined participants’ accuracy, decision certainty (after deciding), and mental effort to measure the quality of decision support. Participants’ ratings of the DSTs’ usefulness, ease of use, trust, and future intention to use the tools served as measures to analyze differences in participants’ perception of the tools. We used ANOVAs and t tests to assess statistical significance. Results: Our survey yielded 196 responses. 
The mean number of correct assessments was higher in the intervention groups (interactive DST group: mean 11.71, SD 2.37; static DST group: mean 11.45, SD 2.48) than in the control group (mean 10.17, SD 2.00). Decisional certainty was significantly higher in the experimental groups (interactive DST group: mean 80.7%, SD 14.1%; static DST group: mean 80.5%, SD 15.8%) compared to the control group (mean 65.8%, SD 20.8%). The differences in these measures proved statistically significant in t tests comparing each intervention group with the control group (P<.001 for all 4 t tests). ANOVA detected no significant differences regarding mental effort between the 3 study groups. Differences between the 2 intervention groups were of small effect sizes and nonsignificant for all 3 measures of the quality of decision support and most measures of participants' perception of the DSTs. Conclusions: When the decision space is limited, as is the case in common COVID-19 self-assessment DSTs, static flowcharts might prove as beneficial in enhancing decision quality as interactive tools. Given that static flowcharts reveal the underlying decision algorithm more transparently and require less effort to develop, they might prove more efficient in providing guidance to the public. Further research should validate our findings on different use cases, elaborate on the trade-off between transparency and convenience in DSTs, and investigate whether subgroups of users benefit more from 1 type of user interface than the other. Trial Registration: Deutsches Register Klinischer Studien DRKS00028136; https://tinyurl.com/4bcfausx (retrospectively registered).
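
    The manipulation (one decision tree, two modes of presentation) can be sketched in code. The toy tree below is invented for illustration and is not the CDC-based algorithm the study used; the point is that the static flowchart and the interactive, step-by-step walk-through render the identical structure:

```python
# Toy triage tree (illustrative only, not the study's CDC-based algorithm)
TREE = {
    "q": "Severe symptoms such as trouble breathing?",
    "yes": "Seek emergency care",
    "no": {
        "q": "Fever or cough?",
        "yes": "Self-isolate and contact a physician",
        "no": "Monitor symptoms at home",
    },
}

def flowchart_lines(node, indent=0):
    """Static presentation: render the entire tree at once, like a flowchart."""
    if isinstance(node, str):
        return [" " * indent + "-> " + node]
    lines = [" " * indent + node["q"]]
    for branch in ("yes", "no"):
        lines.append(" " * indent + "[" + branch + "]")
        lines.extend(flowchart_lines(node[branch], indent + 2))
    return lines

def interactive_advice(node, answers):
    """Interactive presentation: walk the same tree one question at a time,
    as a conversational agent would. `answers` maps question -> "yes"/"no"."""
    while isinstance(node, dict):
        node = node[answers[node["q"]]]
    return node
```

    Because both front-ends traverse the same structure, any difference in decision quality between them is attributable to presentation, which is what the trial isolates.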

    Ultrasound in augmented reality: a mixed-methods evaluation of head-mounted displays in image-guided interventions

    Purpose: Augmented reality (AR) and head-mounted displays (HMD) in medical practice are current research topics. A commonly proposed use case of AR-HMDs is to display data in image-guided interventions. Although technical feasibility has been thoroughly shown, effects of AR-HMDs on interventions are not yet well researched, hampering clinical applicability. Therefore, the goal of this study is to better understand the benefits and limitations of this technology in ultrasound-guided interventions. Methods: We used an AR-HMD system (based on the first-generation Microsoft Hololens) which overlays live ultrasound images spatially correctly at the location of the ultrasound transducer. We chose ultrasound-guided needle placements as a representative task for image-guided interventions. To examine the effects of the AR-HMD, we used mixed methods and conducted two studies in a lab setting: (1) In a randomized crossover study, we asked participants to place needles into a training model and evaluated task duration and accuracy with the AR-HMD as compared to the standard procedure without visual overlay and (2) in a qualitative study, we analyzed the user experience with AR-HMD using think-aloud protocols during ultrasound examinations and semi-structured interviews after the task. Results: Participants (n = 20) placed needles more accurately (mean error of 7.4 mm vs. 4.9 mm, p = 0.022) but not significantly faster (mean task duration of 74.4 s vs. 66.4 s, p = 0.211) with the AR-HMD. All participants in the qualitative study (n = 6) reported limitations of and unfamiliarity with the AR-HMD, yet all but one also clearly noted benefits and/or that they would like to test the technology in practice. Conclusion: We present additional, though still preliminary, evidence that AR-HMDs provide benefits in image-guided procedures. Our data also contribute insights into potential causes underlying the benefits, such as improved spatial perception. 
Still, more comprehensive studies are needed to ascertain benefits for clinical applications and to clarify the mechanisms underlying these benefits.
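
    The statistic behind a crossover comparison like this (each participant performs the task both with and without the AR-HMD) is a paired t test on per-participant differences. A minimal, stdlib-only Python sketch with invented difference values, not the study's measurements:

```python
import math
from statistics import mean, stdev

def paired_t(diffs):
    """Paired t statistic and degrees of freedom for a crossover design.
    `diffs` holds each participant's difference between the two conditions
    (e.g., placement error without minus with the AR-HMD)."""
    n = len(diffs)
    # stdev() is the sample standard deviation (n - 1 denominator)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1
```

    Pairing within participants removes between-participant variability, which is why a crossover design can detect an accuracy difference with only 20 participants.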

    Determinants of Laypersons’ Trust in Medical Decision Aids: Randomized Controlled Trial

    Background: Symptom checker apps are patient-facing decision support systems aimed at providing advice to laypersons on whether, where, and how to seek health care (disposition advice). Such advice can improve laypersons' self-assessment and ultimately improve medical outcomes. Past research has mainly focused on the accuracy of symptom checker apps' suggestions. To support decision-making, such apps need to provide not only accurate but also trustworthy advice. To date, only a few studies have addressed the extent to which laypersons trust symptom checker app advice or the factors that moderate their trust. Studies on general decision support systems have shown that framing automated systems (anthropomorphically or as emphasizing expertise), for example, by using icons symbolizing artificial intelligence (AI), affects users' trust. Objective: This study aims to identify the factors influencing laypersons' trust in the advice provided by symptom checker apps. Primarily, we investigated whether designs using anthropomorphic framing or framing the app as an AI increase users' trust compared with no such framing. Methods: Through a web-based survey, we recruited 494 US residents with no professional medical training. The participants first had to appraise the urgency of a fictitious patient description (case vignette). Subsequently, a decision aid (mock symptom checker app) provided disposition advice contradicting the participants' appraisal, and they then had to reappraise the vignette. Participants were randomized into 3 groups: 2 experimental groups using visual framing (anthropomorphic, 160/494, 32.4%, vs AI, 161/494, 32.6%) and a neutral group without such framing (173/494, 35%). Results: Most participants (384/494, 77.7%) followed the decision aid's advice, regardless of its urgency level. 
Neither anthropomorphic framing (odds ratio 1.120, 95% CI 0.664-1.897) nor framing as AI (odds ratio 0.942, 95% CI 0.565-1.570) increased behavioral or subjective trust (P=.99) compared with the no-frame condition. Even participants who were extremely certain of their own decisions (ie, 100% certain) commonly changed them in favor of the symptom checker's advice (19/34, 56%). Propensity to trust and eHealth literacy were associated with increased subjective trust in the symptom checker (propensity to trust: b=0.25; eHealth literacy: b=0.2), whereas sociodemographic variables showed no such link with either subjective or behavioral trust. Conclusions: Contrary to our expectation, neither the anthropomorphic framing nor the emphasis on AI increased trust in the symptom checker's advice compared with a neutral control condition. However, independent of the interface, most participants trusted the mock app's advice even when they were very certain of their own assessment. Thus, the question arises as to whether laypersons use such symptom checkers as substitutes for, rather than as aids to, their own decision-making. With trust in symptom checkers already high at baseline, the benefit of symptom checkers depends on interface designs that enable users to adequately calibrate their trust levels during usage.
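
    The odds ratios above summarize 2x2 comparisons (framed vs. control group, followed vs. did not follow the advice). As a sketch, an odds ratio with a Wald 95% CI can be computed from such a table as follows; the cell counts in the example are invented, since the abstract does not report them:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for the 2x2 table [[a, b], [c, d]]:
    rows are groups (framed vs. control), columns are outcomes
    (followed the advice vs. did not)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

    A confidence interval that includes 1, as both reported intervals do, indicates no detectable effect of the framing.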