Moderators Of Trust And Reliance Across Multiple Decision Aids
The present work examines whether users' trust in and reliance on automation were affected by manipulations of users' perception of the responding agent. These manipulations included agent reliability, agent type, and failure salience. Previous work has shown that automation is not uniformly beneficial; problems can occur because operators fail to rely upon automation appropriately, through either misuse (overreliance) or disuse (underreliance). This is because operators often find it difficult to combine their own judgment with that of an automated aid, a difficulty that is especially prevalent in complex tasks in which users rely heavily on automation to reduce their workload and improve task performance. When users rely heavily on automation, they often fail to monitor the system effectively (i.e., they lose situation awareness, a form of misuse). Conversely, if an operator realizes a system is imperfect and fails, they may subsequently lose trust in the system, leading to underreliance. In the present studies, it was hypothesized that in a dual-aid environment, poor reliability in one aid would affect trust and reliance levels in a better companion aid, but that this relationship would depend on the perceived aid type and the noticeability of the errors made. Simulations of a computer-based search-and-rescue scenario, employing uninhabited/unmanned ground vehicles (UGVs) searching a commercial office building for critical signals, were used to investigate these hypotheses. Results demonstrated that participants were able to adjust their reliance on and trust in automated teammates according to each teammate's actual reliability. However, as hypothesized, there was a biasing effect among mixed-reliability aids for trust and reliance: when operators worked with two agents of mixed reliability, their perception of how reliable an aid was, and the degree to which they relied on it, was affected by the reliability of the companion aid. Additionally, the magnitude and direction of this bias in trust and reliance were contingent on agent type (i.e., 'what' the agents were: two humans, two similar robotic agents, or two dissimilar robotic agents). Finally, the type of agent operators believed they were working with significantly affected their temporal reliance (i.e., reliance following an automation failure): operators were less likely to agree with a recommendation from a human teammate after that teammate had made an obvious error than with a robotic agent that had made the same obvious error. These results demonstrate that people can distinguish when an agent is performing well, but that there are genuine differences in how operators respond to agents of mixed or same abilities and to errors by fellow human observers or robotic teammates. The overall goal of this research was to develop a better understanding of how the aforementioned factors affect users' trust in automation, so that system interfaces can be designed to facilitate users' calibration of their trust in automated aids, thus leading to improved coordination of human-automation performance. These findings have significant implications for many real-world systems in which human operators monitor the recommendations of multiple other human and/or machine systems.
Security considerations in the open source software ecosystem
Open source software plays an important role in the software supply chain, allowing stakeholders to
utilize open source components as building blocks in their software, tooling, and infrastructure. But relying on the open source ecosystem introduces unique challenges, both in terms of security and trust and in terms of supply chain reliability.
In this dissertation, I investigate the approaches, considerations, and challenges encountered by stakeholders in the context of security, privacy, and trustworthiness of the open source software supply
chain. Overall, my research aims to empower and support software experts with the knowledge and
resources necessary to achieve a more secure and trustworthy open source software ecosystem. In the
first part of this dissertation, I describe a research study investigating the security and trust practices
in open source projects by interviewing 27 owners, maintainers, and contributors from a diverse set
of projects to explore their behind-the-scenes processes, guidance and policies, incident handling, and
encountered challenges, finding that participants’ projects are highly diverse in terms of their deployed
security measures and trust processes, as well as their underlying motivations. Turning to the consumer side of the open source software supply chain, I investigated the use of open source components in
industry projects by interviewing 25 software developers, architects, and engineers to understand their
projects’ processes, decisions, and considerations in the context of external open source code, finding
that open source components play an important role in many of the industry projects, and that most
projects have some form of company policy or best practice for including external code. On the side of end-user-focused software, I present a study investigating the use of software obfuscation in Android
applications, which is a recommended practice to protect against plagiarism and repackaging. The
study leveraged a multi-pronged approach including a large-scale measurement, a developer survey, and
a programming experiment, finding that only 24.92% of apps are obfuscated by their developers, that developers do not fear theft of their own apps, and that they have difficulties obfuscating their own apps. Lastly,
to involve end users themselves, I describe a survey with 200 users of cloud office suites to investigate
their security and privacy perceptions and expectations, with findings suggesting that users are generally
aware of basic security implications, but lack technical knowledge for envisioning some threat models.
The key findings of this dissertation include that open source projects have highly diverse security measures, trust processes, and underlying motivations, and that projects' security and trust needs are likely best met in ways that consider their individual strengths, limitations, and project stage, especially for smaller projects with limited access to resources. Open source components play an important role in industry projects, and those projects often have some form of company policy or best practice for including external code, but developers wish for more resources to better audit included components.
This dissertation emphasizes the importance of collaboration and shared responsibility in building and maintaining the open source software ecosystem, with developers, maintainers, end users, researchers, and other stakeholders all working to ensure that the ecosystem remains a secure, trustworthy, and healthy resource for everyone to rely on.
Alter ego, state of the art on user profiling: an overview of the most relevant organisational and behavioural aspects regarding User Profiling.
This report gives an overview of the most relevant organisational and behavioural aspects regarding user profiling. It discusses not only the most important aims of user profiling from both an organisation's and a user's perspective, but also organisational motives and barriers for user profiling and the most important conditions for the success of user profiling. Finally, recommendations are made and suggestions for further research are given.
Look Who's Talking Now: Implications of AV's Explanations on Driver's Trust, AV Preference, Anxiety and Mental Workload
Explanations given by automation are often used to promote its adoption. However, it remains unclear whether such explanations promote acceptance
of automated vehicles (AVs). In this study, we conducted a within-subject
experiment in a driving simulator with 32 participants, using four different
conditions. The four conditions were: (1) no explanation; (2) an explanation given before the AV acted; (3) an explanation given after the AV acted; and (4) the option for the driver to approve or disapprove the AV's action after hearing the explanation. We
examined four AV outcomes: trust, preference for AV, anxiety and mental
workload. Results suggest that explanations provided before an AV acted were
associated with higher trust in and preference for the AV, but there was no
difference in anxiety and workload. These results have important implications
for the adoption of AVs.
Comment: 42 pages, 5 figures, 3 tables
On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection
Humans are the final decision makers in critical tasks that involve ethical
and legal concerns, ranging from recidivism prediction, to medical diagnosis,
to fighting against fake news. Although machine learning models can sometimes
achieve impressive performance in these tasks, these tasks are not amenable to
full automation. To realize the potential of machine learning for improving
human decisions, it is important to understand how assistance from machine
learning models affects human performance and human agency.
In this paper, we use deception detection as a testbed and investigate how we
can harness explanations and predictions of machine learning models to improve
human performance while retaining human agency. We propose a spectrum between
full human agency and full automation, and develop varying levels of machine
assistance along the spectrum that gradually increase the influence of machine
predictions. We find that without showing predicted labels, explanations alone
slightly improve human performance in the end task. In comparison, human
performance is greatly improved by showing predicted labels (>20% relative
improvement) and can be further improved by explicitly suggesting strong
machine performance. Interestingly, when predicted labels are shown,
explanations of machine predictions induce a similar level of accuracy as an
explicit statement of strong machine performance. Our results demonstrate a
tradeoff between human performance and human agency and show that explanations
of machine predictions can moderate this tradeoff.
Comment: 17 pages, 19 figures; in Proceedings of ACM FAT* 2019; dataset and demo available at https://deception.machineintheloop.co