Differentiation of online text-based advertising and the effect on users' click behavior
Online syndicated text-based advertising is ubiquitous on news sites, blogs, personal websites, and search result pages. Until recently, a common distinguishing feature of these text-based advertisements has been their background color. Following intervention by the Federal Trade Commission (FTC), the design and presentation of these advertisements have undergone a subtle change. Using three large-scale experiments (N1 = 101, N2 = 84, N3 = 176), we investigate the effect of industry-standard advertising practices on click rates and demonstrate changes in user behavior when this familiar differentiator is modified. We find that displaying advertisement and content results with a differentiated background results in significantly lower click rates. Our results demonstrate the strong link between background color differentiation and advertising, and reveal how alternative differentiation techniques influence user behavior.

This work was supported by a studentship from the Engineering and Physical Sciences Research Council. This is the final published version; it first appeared at http://www.sciencedirect.com/science/article/pii/S0747563215003180#. Additional data related to this publication is available at the University of Cambridge data repository: http://www.repository.cam.ac.uk/handle/1810/247391
LIMEADE: A General Framework for Explanation-Based Human Tuning of Opaque Machine Learners
Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow humans to tune a model in response to the explanations are similarly useful. While both capabilities are well-developed for transparent learning models (e.g., linear models and GA2Ms), and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, no method for tuning opaque models in response to explanations has been user-tested to date. This paper introduces LIMEADE, a general framework for tuning an arbitrary machine learning model based on an explanation of the model's prediction. We demonstrate the generality of our approach with two case studies. First, we successfully utilize LIMEADE for the human tuning of opaque image classifiers. Second, we apply our framework to a neural recommender system for scientific papers on a public website and report on a user study showing that our framework leads to significantly higher perceived user control, trust, and satisfaction. Analyzing 300 user logs from our publicly-deployed website, we uncover a tradeoff between canonical greedy explanations and diverse explanations that better facilitate human tuning.

Comment: 16 pages, 7 figures
Power to the People: The Role of Humans in Interactive Machine Learning
Systems that can learn interactively from their end-users are quickly becoming widespread. Until recently, this progress has been fueled mostly by advances in machine learning; however, more and more researchers are realizing the importance of studying users of these systems. In this article we promote this approach and demonstrate how it can result in better user experiences and more effective learning systems. We present a number of case studies that demonstrate how interactivity results in a tight coupling between the system and the user, exemplify ways in which some existing systems fail to account for the user, and explore new ways for learning systems to interact with their users. After giving a glimpse of the progress that has been made thus far, we discuss some of the challenges we face in moving the field forward.

This is an author's peer-reviewed final manuscript, as accepted by the publisher. The published article is copyrighted by the American Association for Artificial Intelligence and can be found at: http://www.aaai.org/Magazine/magazine.php
A Review of User Interface Design for Interactive Machine Learning
Interactive Machine Learning (IML) seeks to complement human perception and intelligence by tightly integrating these strengths with the computational power and speed of computers. The interactive process is designed to involve input from the user but does not require the background knowledge or experience that might be necessary to work with more traditional machine learning techniques. Under the IML process, non-experts can apply their domain knowledge and insight over otherwise unwieldy datasets to find patterns of interest or develop complex data-driven applications. This process is co-adaptive in nature and relies on careful management of the interaction between human and machine. User interface design is fundamental to the success of this approach, yet there is a lack of consolidated principles on how such an interface should be implemented. This article presents a detailed review and characterisation of Interactive Machine Learning from an interactive systems perspective. We propose and describe a structural and behavioural model of a generalised IML system and identify solution principles for building effective interfaces for IML. Where possible, these emergent solution principles are contextualised by reference to the broader human-computer interaction literature. Finally, we identify strands of user interface research key to unlocking more efficient and productive non-expert interactive machine learning applications.
Interactive Natural Language Processing for Clinical Text
Free-text allows clinicians to capture rich information about patients in narratives and first-person stories. Care providers are likely to continue using free-text in Electronic Medical Records (EMRs) for the foreseeable future due to the convenience and utility it offers. However, this complicates information extraction tasks for big-data applications. Despite advances in Natural Language Processing (NLP) techniques, building models on clinical text is often expensive and time-consuming. Current approaches require a long collaboration between clinicians and data scientists: clinicians provide annotations and training data, while data scientists build the models. With these approaches, the domain experts - clinicians and clinical researchers - have no provisions to inspect the models or give direct feedback. This forms a barrier to NLP adoption and limits its power and utility for real-world clinical applications.
Interactive learning systems may allow clinicians without machine learning experience to build NLP models on their own. Interactive methods are particularly attractive for clinical text due to the diversity of tasks that need customized training data. Interactivity could enable end-users (clinicians) to review model outputs and provide feedback for model revisions within a closed feedback loop. This approach may make it feasible to extract understanding from unstructured text in patient records: classifying documents against clinical concepts, summarizing records, and performing other sophisticated NLP tasks, while reducing the need for prior annotations and training data upfront.
In my dissertation, I demonstrate this approach by building and evaluating prototype systems for both clinical care and research applications. I built NLPReViz as an interactive tool for clinicians to train and build binary NLP models on their own for retrospective review of colonoscopy procedure notes. Next, I extended this effort to design an intelligent signout tool to identify incidental findings in a clinical care setting. I followed a two-step evaluation with clinicians as study participants: a usability evaluation to demonstrate feasibility and overall usefulness of the tool, followed by an empirical evaluation to assess model correctness and utility. Lessons learned from the development and evaluation of these prototypes will provide insight into the generalized design of interactive NLP systems for wider clinical applications.
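The closed feedback loop described above can be sketched minimally: an interpretable trigger-term classifier that a clinician refines by reviewing the notes it gets wrong. All names and the term-edit protocol here are illustrative assumptions, not the actual implementation of NLPReViz or the signout tool.

```python
class KeywordModel:
    """Trivially interpretable 'model': flags a note if any trigger term appears."""
    def __init__(self, triggers=None):
        self.triggers = set(triggers or [])

    def predict(self, note):
        text = note.lower()
        return any(t in text for t in self.triggers)

def review_loop(model, notes, labels, get_feedback):
    """One pass of the closed loop: show each disagreement between the model
    and the expert's label, then apply the expert's term edits
    ('+term' adds a trigger, '-term' removes one)."""
    for note, label in zip(notes, labels):
        if model.predict(note) != label:
            for edit in get_feedback(note, label):
                if edit.startswith("+"):
                    model.triggers.add(edit[1:])
                else:
                    model.triggers.discard(edit[1:])
    return model
```

In a real system the expert feedback would come through a user interface and the model would generalize beyond literal term matching, but the loop structure is the same: predict, surface disagreements, revise, repeat.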