21 research outputs found

    ADLib: An Arduino Communication Framework for Ambient Displays

    Get PDF
    As computers become more and more a part of our everyday lives, the need to change the way in which people interact with them is also evolving. Ambient displays provide an effective way to move computers away from our main focus and into the periphery. ADLib is a small communication framework that aims to simplify the construction of ambient displays built using the Arduino prototyping platform. The ADLib framework provides an easy-to-use library for communicating with an Arduino, allowing the user to focus on the construction and development of the display. The framework consists of three main components:
    - A protocol for encoding information to be sent from a host computer to the Arduino
    - An Arduino library for receiving and parsing incoming data
    - A desktop application for sending data to the Arduino
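    The abstract does not specify ADLib's wire format, but the host-to-Arduino encoding step it describes can be illustrated with a minimal sketch. The start byte, length field, and XOR checksum below are illustrative assumptions, not ADLib's actual protocol:

```python
import struct

START_BYTE = 0x7E  # assumed frame delimiter, not taken from ADLib


def encode_packet(channel: int, value: int) -> bytes:
    """Frame a (channel, value) pair for serial transmission.

    Hypothetical layout: start byte, payload length, payload
    (1-byte channel, 2-byte big-endian value), and a single-byte
    XOR checksum so the receiving Arduino can detect corrupted frames.
    """
    payload = struct.pack(">BH", channel, value)
    checksum = 0
    for b in payload:
        checksum ^= b
    return bytes([START_BYTE, len(payload)]) + payload + bytes([checksum])


packet = encode_packet(channel=3, value=1023)
# A desktop application would then write `packet` to the serial port,
# e.g. with pyserial: serial.Serial("/dev/ttyUSB0", 9600).write(packet)
```

    On the Arduino side, the library would scan for the start byte, read the announced number of payload bytes, and discard any frame whose checksum does not match.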

    Communicating uncertain information from deep learning models to users

    Get PDF
    “The use of Artificial Intelligence (AI) decision support systems is increasing in high-stakes contexts, such as healthcare, defense, and finance. Uncertainty information may help users better leverage AI predictions, especially when combined with domain knowledge. I conducted two human-subject experiments to examine the effects of uncertainty information with AI recommendations. The experimental stimuli are from an existing image recognition deep learning model, one popular approach to AI. In Paper I, I evaluated the effect of the number of AI recommendations and provision of uncertainty information. For a series of images, participants identified the subject and rated their confidence level. Results suggest that AI recommendations, especially multiple, increased accuracy and confidence. However, uncertainty information, which was represented visually with bars, did not significantly improve participants' performance. In Paper II, I tested the effect of AI recommendations in a within-subject comparison and the effect of more salient uncertainty information in a between-subject comparison in the context of varying domain knowledge. The uncertainty information combined both numerical (percent) and visual (color-coded bar) formats to make the information easier to interpret and more noticeable. Consistent with Paper I, results suggest that AI recommendations improved participants’ accuracy and confidence. In addition, the more salient uncertainty information significantly increased accuracy, but not confidence. Based on a subjective measure of domain knowledge, participants had higher domain knowledge for animals. In general, AI recommendations and uncertainty information had less of an effect as domain knowledge increased. Results suggest that uncertainty information can improve accuracy and potentially decrease over-confidence”--Abstract, page iv

    Incorporating Trust into Context-Aware Services

    Get PDF
    Enabling technologies concerning hardware, networking, and sensing have inspired the development of context-aware IT services. These adapt to the situation of the user, such that service provisioning is specific to his/her corresponding needs. We have seen successful applications of context-aware services in healthcare, well-being, and smart homes. It remains an open question, however, what level of trust users can place in the fulfillment of their needs by a given IT service. Trust has two major variants: policy-based, where a reputed institution provides guarantees about the service, and reputation-based, where other users of the service provide insight into the level of fulfillment of user needs. Services that are accessible to a small and known set of users typically use policy-based trust only. Services that have a wide community of users can use reputation-based trust, policy-based trust, or a combination. For both types of trust, however, context awareness poses a problem. Policy-based trust works within certain boundaries, outside of which no guarantees can be given about satisfying the user needs, and context awareness can push a service out of these boundaries. For reputation-based trust, the fact that users in a certain context were adequately served does not mean that the same would happen when the service adapts to another user’s needs. In this paper we consider the incorporation of trust into context-aware services by proposing an ontological conceptualization for user-system trust. Analyzing service usage data for context parameters combined with the ability to fulfill user needs can help in eliciting components for the ontology.

    'It's Reducing a Human Being to a Percentage'; Perceptions of Justice in Algorithmic Decisions

    Full text link
    Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles; under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.
    Comment: 14 pages, 3 figures, ACM Conference on Human Factors in Computing Systems (CHI'18), April 21--26, Montreal, Canada

    Investigating intelligibility for uncertain context-aware applications

    Full text link

    Understanding Blind People's Experiences with Computer-Generated Captions of Social Media Images

    Get PDF
    Research advancements allow computational systems to automatically caption social media images. Often, these captions are evaluated with sighted humans using the image as a reference. Here, we explore how blind and visually impaired people experience these captions in two studies about social media images. Using a contextual inquiry approach (n=6 blind/visually impaired), we found that blind people place a lot of trust in automatically generated captions, filling in details to resolve differences between an image's context and an incongruent caption. We built on this in-person study with a second, larger online experiment (n=100 blind/visually impaired) to investigate the role of phrasing in encouraging trust or skepticism in captions. We found that captions emphasizing the probability of error, rather than correctness, encouraged people to attribute incongruence to an incorrect caption, rather than missing details. Where existing research has focused on encouraging trust in intelligent systems, we conclude by challenging this assumption and consider the benefits of encouraging appropriate skepticism.

    Impact of Indoor Location Information Reliability on Users’ Trust of an Indoor Positioning System

    Get PDF
    Indoor positioning systems have been used as a supplement to provide positioning in settings where GPS does not function. However, the accuracy of calculated results varies among the techniques and algorithms used; system performance also differs across testing environments. As a result, users' responses to and opinions of these positioning results could differ. Furthermore, user trust, most closely associated with their confidence in the system, will also vary. A relatively little-studied topic is the effect of positioning variance on a user's opinion or trust of such systems (GPS as well, for that matter). Therefore, understanding how user interaction with such systems (through trust) changes is important for achieving more usable positioning system design. An experiment was designed to examine whether the sequence of location accuracy affects users' trust in an individual positioning result as well as in the system overall. The simulated positioning system running on an iPad used for this experiment provides 10 priming positioning results at a specific category of accuracy. The accuracy is controlled and is presented as either 1. ACCURATE (within 5 meters of actual location), 2. INACCURATE (greater than 15 meters), or 3. WRONG BUILDING (outside the current building's footprint). After one set of these priming locations, a series of 55 post-priming locations across the same categories, in addition to 10 CONTINUOUS locations (with between 6 and 15 meters of error), were presented. At each experimental site participants located themselves using the simulated system and rated their trust for that location. Variables obtained from the experiment include: 1. two types of trust at each location (positioning trust and system trust); 2. spatial abilities, sense of direction, and ancillary survey data (user characteristics).
    Results show that users' trust varies among accuracy categories and changes over time according to system performance in association with the users' own characteristics. Specifically, the accuracy of the priming locations has an impact on users' trust of later results. In addition, users' trust in individual positioning results is quite variable, and that variability is closely related to accuracy, while user trust of the overall system is less variable.
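    The accuracy categories used in the experiment can be sketched as a simple classifier. The thresholds follow the abstract (within 5 m, 6-15 m, beyond 15 m); reducing the building-footprint test to a boolean flag is a simplification for illustration:

```python
def classify_accuracy(error_m: float, inside_building: bool) -> str:
    """Map a positioning error to the experiment's accuracy categories.

    ACCURATE: within 5 meters of the actual location.
    CONTINUOUS: between 6 and 15 meters of error.
    INACCURATE: more than 15 meters of error.
    WRONG BUILDING: estimate falls outside the current building's footprint.
    """
    if not inside_building:
        return "WRONG BUILDING"
    if error_m <= 5:
        return "ACCURATE"
    if error_m <= 15:
        return "CONTINUOUS"
    return "INACCURATE"
```

    Framed this way, the experiment's independent variable is simply the sequence of category labels presented to the participant, while the dependent variables are the two trust ratings collected at each location.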