
    Differentiation of online text-based advertising and the effect on users' click behavior

    Online syndicated text-based advertising is ubiquitous on news sites, blogs, personal websites, and search result pages. Until recently, a common distinguishing feature of these text-based advertisements has been their background color. Following intervention by the Federal Trade Commission (FTC), these advertisements have undergone a subtle change in design and presentation. Using three large-scale empirical experiments (N1 = 101, N2 = 84, N3 = 176), we investigate the effect of industry-standard advertising practices on click rates and demonstrate changes in user behavior when this familiar differentiator is modified. We find that displaying advertisement and content results with a differentiated background leads to significantly lower click rates. Our results demonstrate the strong link between background color differentiation and advertising, and reveal how alternative differentiation techniques influence user behavior.
    This work was supported by a studentship from the Engineering and Physical Sciences Research Council. This is the final published version; it first appeared at http://www.sciencedirect.com/science/article/pii/S0747563215003180#. Additional data related to this publication is available at the University of Cambridge data repository: http://www.repository.cam.ac.uk/handle/1810/247391
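
    The experiments compare click rates between interface conditions, and a standard way to test whether such a difference in proportions is significant is a chi-square test on the click counts. The sketch below is purely illustrative; the counts are made up and are not the paper's data.

```python
# Illustrative significance test for a click-rate difference between two
# conditions; the counts are invented, NOT the study's data.
from scipy.stats import chi2_contingency

# rows: condition (differentiated ad background vs. undifferentiated)
# cols: [ad clicks, non-ad clicks]
table = [[12, 88],
         [27, 73]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")  # a small p suggests the conditions differ
```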

    LIMEADE: A General Framework for Explanation-Based Human Tuning of Opaque Machine Learners

    Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow humans to tune a model in response to the explanations are similarly useful. While both capabilities are well developed for transparent learning models (e.g., linear models and GA2Ms), and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, no method for tuning opaque models in response to explanations has been user-tested to date. This paper introduces LIMEADE, a general framework for tuning an arbitrary machine learning model based on an explanation of the model's prediction. We demonstrate the generality of our approach with two case studies. First, we successfully utilize LIMEADE for the human tuning of opaque image classifiers. Second, we apply our framework to a neural recommender system for scientific papers on a public website and report on a user study showing that our framework leads to significantly higher perceived user control, trust, and satisfaction. Analyzing 300 user logs from our publicly deployed website, we uncover a tradeoff between canonical greedy explanations and diverse explanations that better facilitate human tuning.
    Comment: 16 pages, 7 figures
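
    The core idea, tuning an opaque model through feedback on an explanation, can be sketched concretely. The following is a minimal, hypothetical illustration (not the authors' implementation): a LIME-style local surrogate yields per-feature influences for one prediction, a user vetoes the dominant feature, and pseudo-labeled perturbations nudge the opaque model to ignore that feature locally.

```python
# Hypothetical sketch of explanation-based tuning in the spirit of LIMEADE
# (one plausible realization, not the paper's algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def local_explanation(model, x, n_samples=500, scale=0.5):
    """LIME-style surrogate: weight random perturbations by proximity to x
    and fit a linear model to the black-box probabilities."""
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    p = model.predict_proba(Z)[:, 1]
    w = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2)   # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_                            # per-feature influence

x = X[0]
coefs = local_explanation(model, x)
vetoed = int(np.argmax(np.abs(coefs)))  # user: "do not rely on this feature"

# Tuning step: add pseudo-examples where the vetoed feature is randomized
# but the label is kept, discouraging local reliance on it, then retrain.
X_aug = np.repeat(x[None, :], 50, axis=0)
X_aug[:, vetoed] = rng.normal(size=50)
y_aug = np.full(50, model.predict(x[None, :])[0])
model.fit(np.vstack([X, X_aug]), np.concatenate([y, y_aug]))
```

    The same pattern generalizes to any classifier exposing predict_proba, which is what makes the framework model-agnostic.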

    Interactive Natural Language Processing for Clinical Text

    Free-text allows clinicians to capture rich information about patients in narratives and first-person stories. Care providers are likely to continue using free-text in Electronic Medical Records (EMRs) for the foreseeable future due to the convenience and utility it offers. However, this complicates information extraction tasks for big-data applications. Despite advances in Natural Language Processing (NLP) techniques, building models on clinical text is often expensive and time-consuming. Current approaches require a long collaboration between clinicians and data scientists: clinicians provide annotations and training data, while data scientists build the models. Under these approaches, the domain experts (clinicians and clinical researchers) have no provision to inspect the models or give direct feedback. This forms a barrier to NLP adoption and limits its power and utility for real-world clinical applications. Interactive learning systems may allow clinicians without machine learning experience to build NLP models on their own. Interactive methods are particularly attractive for clinical text due to the diversity of tasks that need customized training data. Interactivity could enable end-users (clinicians) to review model outputs and provide feedback for model revisions within a closed feedback loop. This approach may make it feasible to extract understanding from unstructured text in patient records: classifying documents against clinical concepts, summarizing records, and performing other sophisticated NLP tasks while reducing the need for prior annotations and training data upfront. In my dissertation, I demonstrate this approach by building and evaluating prototype systems for both clinical care and research applications. I built NLPReViz as an interactive tool for clinicians to train and build binary NLP models on their own for retrospective review of colonoscopy procedure notes. Next, I extended this effort to design an intelligent signout tool to identify incidental findings in a clinical care setting. I followed a two-step evaluation with clinicians as study participants: a usability evaluation to demonstrate feasibility and overall usefulness of the tool, followed by an empirical evaluation to assess model correctness and utility. Lessons learned from the development and evaluation of these prototypes will provide insight into the generalized design of interactive NLP systems for wider clinical applications.
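
    The closed feedback loop described here resembles uncertainty-based active learning: the clinician labels a few notes, a lightweight classifier is trained, the note the model is least sure about is surfaced for review, and the correction flows into the next round. Below is a minimal sketch with made-up colonoscopy snippets and a simulated clinician; it is an assumption-laden illustration, not the NLPReViz implementation.

```python
# Hypothetical sketch of an interactive clinical NLP loop
# (made-up notes and a simulated reviewer, NOT the NLPReViz tool).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

notes = [
    "polyp removed during colonoscopy, pathology pending",
    "screening colonoscopy, no abnormalities found",
    "small sessile polyp at 30 cm, biopsied",
    "normal exam, patient tolerated the procedure well",
    "pedunculated polyp identified and resected",
    "no polyps seen; recommend repeat in ten years",
]

def simulated_review(text):
    # Stand-in for the clinician's judgment (1 = polyp finding present).
    return int("polyp" in text and not text.startswith("no polyp"))

labels = {0: 1, 1: 0}            # clinician seeds one positive, one negative
X = TfidfVectorizer().fit_transform(notes)

for _ in range(3):               # a few interactive rounds
    idx = sorted(labels)
    clf = LogisticRegression().fit(X[idx], [labels[i] for i in idx])
    unlabeled = [i for i in range(len(notes)) if i not in labels]
    if not unlabeled:
        break
    # Surface the note the model is least certain about for review.
    probs = clf.predict_proba(X[unlabeled])[:, 1]
    query = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
    labels[query] = simulated_review(notes[query])

print(labels)  # labels accumulated through the feedback loop
```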