A privacy-preserving technique to identify the useful content of documents owned by multiple institutes: Supplemental File
ICADL 2023 supplemental file
A study on the characteristics and challenges of computational processing of electronic medical records: Toward establishing e-phenotyping techniques
Degree type: Doctorate (course-based). Dissertation committee: (Chair) Professor 小山 博史, Professor 赤林 朗, Associate Professor 槇野 陽介, Professor 鄭 雄一, and Project Associate Professor 脇 裕典, The University of Tokyo (東京大学)
An experimental framework for designing document structure for users' decision making -- An empirical study of recipes
Textual documents need to be of good quality to ensure effective asynchronous communication in remote areas, especially during the COVID-19 pandemic. However, defining a preferred document structure (content and arrangement) that improves lay readers' decision-making is challenging. First, the types of content useful to various readers cannot be determined simply by gathering expert knowledge. Second, methodologies for evaluating a document's usefulness from the user's perspective have not been established. This study proposed an experimental framework to identify the useful content of documents by aggregating lay readers' insights. The study used 200 online recipes as research subjects and recruited 1,340 amateur cooks as lay readers. The proposed framework identified six types of useful content in recipes. Multi-level modeling then showed that, among the six, suitable ingredients or notes arranged under a subheading at the end of each cooking step significantly increased the recipes' usefulness. Our framework contributes to communication design via documents.
The Impact of the Balance between Trust in Advice and Confidence in Human Judgment on Advice Utilization
The extent to which people utilize advice from others differs depending on whether the source of the advice is an algorithm or a human. However, no unifying evidence exists to guide advice design. Moreover, the use of advice given as intervals (e.g., 70–90%) has not been fully studied. This study proposed a three-step model of the cognitive process underlying the use of interval advice and conducted a simulation and four behavioral experiments (N = 473). These experiments showed that differences in advice sources affected the cognitive process by which judges decide whether to update their initial judgment based on the advice; this cognitive process was influenced by the relative weight between the initial judgment and the advice interval. These results suggest that for judges to adjust their judgments, designing the advice itself (its interval or source) is insufficient; advice must be designed according to the relationship between the advice and the judge's judgments.
A one-second wait improves judgment accuracy: A mouse tracking reveals cognitive processes during choice behaviors
It is generally difficult for people to make rational and accurate judgments with their limited cognitive resources. In this study, we propose an intervention that easily improves people's judgment accuracy with less workload: waiting for a short time at the beginning of a task. Using a simple binary choice task, we found that when a short (1 s) waiting time was inserted, participants showed higher accuracy than when no waiting time was inserted, and they felt less mental workload than when a longer (2.5 s) waiting time was inserted. To examine the underlying implicit cognitive processes, we applied mouse-tracking approaches during choice behaviors. We found that the inserted time enhanced participants' changes of mind (i.e., they amended their initial wrong judgments). These results suggest that making people wait for only 1 s can serve as a simple, effective, and resource-rational intervention to boost judgment accuracy. Because of its simplicity, we believe this intervention has the potential to be applied in various fields.
Cognitive Load In Speed-Accuracy Tradeoff: Theoretical and Empirical Evidence Based on Resource-Rational Analyses
In simple judgment tasks, it is generally assumed that thinking longer leads to more accurate judgments and thus greater benefits, as suggested by the speed-accuracy tradeoff framework. However, human cognitive resources are limited, and longer thinking induces cognitive costs such as subjective workload. Therefore, the total benefit should be considered under the tradeoff between thinking benefits (i.e., improved accuracy) and thinking costs (i.e., increased cognitive load), as suggested by the resource-rationality framework. We examined this issue using computer simulations and behavioral experiments. Our simulations showed that, when a thinking cost was introduced based on resource-rational approaches, there was an optimal length of time that maximized the total benefit, and the total benefit gradually decreased beyond it. In addition, our experiments demonstrated that judgment accuracy did not always improve even when participants were given longer thinking time; conversely, longer thinking time tended to increase their subjective workload. These results are consistent with resource rationality rather than the speed-accuracy tradeoff. Considering cognitive load is thus important for further understanding human intelligence in the context of the speed-accuracy tradeoff.
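The qualitative result of such a simulation can be sketched with a toy model. Note that the functional forms and parameters below are illustrative assumptions, not the paper's actual model: accuracy is assumed to saturate with thinking time while cognitive cost grows linearly, which is enough to produce an interior optimum for the total benefit.

```python
import numpy as np

# Toy resource-rational model (illustrative assumptions, not the paper's model):
# accuracy gains show diminishing returns, while cognitive cost grows linearly.
def accuracy(t, k=1.0):
    return 1.0 - np.exp(-k * t)   # saturating benefit of thinking longer

def cost(t, c=0.15):
    return c * t                  # workload accumulates with thinking time

def total_benefit(t):
    return accuracy(t) - cost(t)

t = np.linspace(0.0, 10.0, 1001)
tb = total_benefit(t)
t_opt = t[np.argmax(tb)]          # interior optimum: benefit peaks, then declines
```

Under these assumptions the optimum falls at t = ln(k/c)/k; thinking past it keeps paying cost while accuracy barely improves, so the total benefit declines, mirroring the abstract's claim.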
The nature of anchor-biased estimates and its application to the wisdom of crowds
We propose a method to achieve better wisdom of crowds by utilizing anchoring effects. In this method, people are first asked to make a comparative judgment such as "Is the number of new COVID-19 infections one month later more or less than 10 (or 200,000)?" As in this example, two sufficiently different anchors (e.g., "10" or "200,000") are set in the comparative judgment. After this comparative judgment, people are asked to make their own estimates, which are then aggregated. We hypothesized that the estimates aggregated using this method would be more accurate than those made without anchor presentation. To examine the effectiveness of the proposed method, we conducted three studies: a computer simulation and two behavioral experiments (numerical estimation of perceptual stimuli, and estimation of new COVID-19 infections by physicians). Through the computer simulations, we identified situations in which the proposed method is effective. Although it is not always effective (e.g., when a group can already make fairly accurate estimates), on average the proposed method is more likely to achieve better wisdom of crowds. In particular, when a group cannot make accurate estimates (i.e., shows biases such as overestimation or underestimation), the proposed method improves the wisdom of crowds. The results of the behavioral experiments were consistent with the simulation findings: the proposed method achieved better wisdom of crowds. We discuss new insights into anchoring effects and methods for inducing diverse opinions from group members.
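The core mechanism, a biased crowd whose estimates are pulled toward two widely spaced anchors so that the aggregate lands nearer the truth, can be sketched as follows. All numbers here (the true value, the crowd's bias, the anchoring weight `w`, and the anchor values) are illustrative assumptions, not the paper's simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_VALUE = 100.0
N = 1000

# Illustrative assumptions: the crowd shares an underestimation bias, and a
# presented anchor shifts each person's estimate toward it with weight w.
biased_priors = rng.normal(60.0, 20.0, size=N)   # group-level underestimation
w = 0.3                                          # anchoring weight (assumed)
low_anchor, high_anchor = 10.0, 300.0

# Half the crowd sees the low anchor, half sees the high anchor.
anchored = biased_priors.copy()
anchored[: N // 2] = (1 - w) * anchored[: N // 2] + w * low_anchor
anchored[N // 2 :] = (1 - w) * anchored[N // 2 :] + w * high_anchor

plain_error = abs(biased_priors.mean() - TRUE_VALUE)
anchored_error = abs(anchored.mean() - TRUE_VALUE)
# For a biased crowd, the two anchors spread estimates around the truth,
# so the aggregated (mean) estimate moves closer to the true value.
```

This also matches the abstract's caveat: if `biased_priors` were already centered on the true value, pulling estimates toward asymmetric anchors could make the aggregate worse rather than better.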