7,259 research outputs found

    Usability dimensions in collaborative GIS

    Collaborative GIS requires careful consideration of Human-Computer Interaction (HCI) and usability aspects, given the variety of users expected to use these systems and the need to ensure that users find the system effective, efficient, and enjoyable. The chapter explains the link between collaborative GIS and usability engineering/HCI studies. The integration of usability considerations into collaborative GIS is demonstrated in two case studies of Web-based GIS implementation. In the first, the process of digitising an area in a Web-based GIS is improved to enhance the user's experience and to allow interaction over narrowband Internet connections. In the second, server-side rendering of 3D scenes allows users who are not equipped with powerful computers to request sophisticated visualisations without needing to download complex software. The chapter concludes by emphasising the need to understand the users' context and conditions within any collaborative GIS project. © 2006, Idea Group Inc.
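    As an illustration of the second case study's architecture (not the chapter's actual implementation), the sketch below shows a minimal server-side rendering endpoint: the client sends only camera parameters and receives a small rendered image, so no 3D software or powerful hardware is needed client-side. The Flask route, terrain function, and parameter names are hypothetical.

```python
# Hypothetical sketch: server-side rendering of a 3D view for thin clients.
import io
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering on the server
import matplotlib.pyplot as plt
from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/render")
def render_scene():
    # Client sends only camera parameters; server returns a small PNG.
    azim = float(request.args.get("azim", 45))
    elev = float(request.args.get("elev", 30))
    x, y = np.meshgrid(np.linspace(-3, 3, 80), np.linspace(-3, 3, 80))
    z = np.exp(-(x**2 + y**2) / 4)  # placeholder terrain, not real GIS data
    fig = plt.figure(figsize=(4, 3), dpi=72)
    ax = fig.add_subplot(projection="3d")
    ax.plot_surface(x, y, z, cmap="terrain")
    ax.view_init(elev=elev, azim=azim)
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    buf.seek(0)
    return send_file(buf, mimetype="image/png")

if __name__ == "__main__":
    app.run()
```

    A narrowband client can then fetch, say, `/render?azim=60&elev=20` and display the returned PNG, keeping all heavy computation on the server.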

    In pursuit of rigour and accountability in participatory design

    The field of Participatory Design (PD) has greatly diversified, and a broad spectrum of approaches and methodologies is emerging. However, to foster its role in designing future interactive technologies, a discussion about accountability and rigour across this spectrum is needed. Rejecting the traditional, positivistic framework, we take inspiration from related fields such as Design Research and Action Research to develop interpretations of these concepts that are rooted in PD's own belief system. We argue that, unlike in other fields, accountability and rigour are nuanced concepts that are delivered through debate, critique and reflection. A key prerequisite for having such debates is the availability of a language that allows designers, researchers and practitioners to construct solid arguments about the appropriateness of their stances, choices and judgements. To this end, we propose a "tool-to-think-with" that provides such a language by guiding designers, researchers and practitioners through a process of systematic reflection and critical analysis. The tool proposes four lenses to critically reflect on the nature of a PD effort: epistemology, values, stakeholders and outcomes. In a subsequent step, the coherence between the revealed features is analysed, showing whether they pull the project in the same direction or work against each other. Regardless of the flavour of PD, we argue that this coherence of features indicates the level of internal rigour of PD work, and that the process of reflection and analysis provides the language to argue for it. We envision our tool to be useful at all stages of PD work: in the planning phase, as part of a reflective practice during the work, and as a means to construct knowledge and advance the field after the fact. We ground our theoretical discussions in a specific PD experience, the ECHOES project, to motivate the tool and to illustrate its workings.
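    The paper proposes the four-lens tool as a conceptual "tool-to-think-with", not as software, but its structure can be made concrete. Below is a hypothetical Python encoding; the class and field names are illustrative assumptions, and the coherence judgement deliberately stays with the human analyst, as the paper's reflective stance requires.

```python
# Hypothetical encoding of the four-lens reflection tool described above.
from dataclasses import dataclass, field

LENSES = ("epistemology", "values", "stakeholders", "outcomes")

@dataclass
class LensReflection:
    lens: str        # one of LENSES
    stance: str      # e.g. "constructivist", "empowerment-led"
    rationale: str   # the argument for why this stance was chosen

@dataclass
class PDProject:
    name: str
    reflections: list[LensReflection] = field(default_factory=list)

    def missing_lenses(self) -> set[str]:
        """Lenses not yet reflected on -- gaps in the systematic analysis."""
        return set(LENSES) - {r.lens for r in self.reflections}

    def coherence_report(self) -> str:
        """List the recorded stances side by side so they can be debated;
        whether they pull in the same direction is a human judgement."""
        lines = [f"Project: {self.name}"]
        lines += [f"- {r.lens}: {r.stance} ({r.rationale})" for r in self.reflections]
        lines += [f"- {gap}: NOT YET REFLECTED ON" for gap in sorted(self.missing_lenses())]
        return "\n".join(lines)

project = PDProject("ECHOES-like effort")
project.reflections.append(
    LensReflection("values", "child empowerment", "children co-shape the design"))
print(project.coherence_report())
```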

    Survey and Systematization of Secure Device Pairing

    Secure Device Pairing (SDP) schemes have been developed to facilitate secure communications among smart devices, both personal mobile devices and Internet of Things (IoT) devices. Comparison and assessment of SDP schemes is troublesome because each scheme makes different assumptions about out-of-band channels and adversary models, and is driven by its particular use cases. A conceptual model that facilitates meaningful comparison among SDP schemes is missing; we provide such a model. In this article, we survey and analyze a wide range of SDP schemes described in the literature, including a number that have been adopted as standards. On the foundation of this survey, we build a system model and consistent terminology for SDP schemes, which we then use to classify existing SDP schemes into a taxonomy that, for the first time, enables their meaningful comparison and analysis. Analyzing the existing SDP schemes with this model reveals common systemic security weaknesses that should become priority areas for future SDP research, such as improving the integration of privacy requirements into the design of SDP schemes. Our results allow SDP scheme designers to create schemes that are more easily comparable with one another, and help prevent the weaknesses common to the current generation of SDP schemes from persisting. Comment: 34 pages, 5 figures, 3 tables; accepted at IEEE Communications Surveys & Tutorials 2017 (Volume: PP, Issue: 99).
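    One recurring SDP pattern that such a system model captures is numeric comparison: keys are exchanged over the insecure in-band channel, and a short authentication string is compared by the users over the human out-of-band channel. Below is a minimal sketch assuming the Python 'cryptography' package; it is illustrative only, and real schemes (e.g. Bluetooth Secure Simple Pairing) add a commitment round so a man-in-the-middle cannot grind the short code.

```python
# Minimal numeric-comparison pairing sketch (illustrative, simplified).
import hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def short_auth_string(pub_a: bytes, pub_b: bytes, shared: bytes) -> str:
    """Derive a 6-digit code that both users compare out of band."""
    digest = hashlib.sha256(pub_a + pub_b + shared).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

# In-band (insecure) channel: exchange public keys and run ECDH.
a_priv, b_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
a_pub = a_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
b_pub = b_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
shared_a = a_priv.exchange(b_priv.public_key())
shared_b = b_priv.exchange(a_priv.public_key())

# Out-of-band (human) channel: both devices display the code; the users
# accept the pairing only if the two displays match.
assert short_auth_string(a_pub, b_pub, shared_a) == \
       short_auth_string(a_pub, b_pub, shared_b)
print("pairing code:", short_auth_string(a_pub, b_pub, shared_a))
```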

    To whom to explain and what? Systematic literature review on empirical studies on Explainable Artificial Intelligence (XAI)

    Expectations towards artificial intelligence (AI) have risen continuously as machine learning models have evolved. However, the models' decisions are often not intuitively understandable. For this reason, the field of Explainable AI (XAI) has emerged, which develops techniques to help users understand AI better. As the use of AI spreads more broadly in society, it becomes like a co-worker that people need to understand, making AI-human interaction a topic of broad and current interest. This thesis outlines the themes of the current empirical XAI research literature from the human-computer interaction (HCI) perspective. The method is an explorative, systematic literature review carried out following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. In total, 29 articles reporting empirical studies of XAI from the HCI perspective were included in the review. The material was collected through database searches and snowball sampling. The articles were analyzed based on their descriptive statistics, stakeholder groups, research questions, and theoretical approaches. This study aims to determine what factors made users consider XAI transparent, explainable, or trustworthy, and to whom the XAI research was addressed. Based on the analysis, three stakeholder groups at whom the current XAI literature is aimed emerged: end-users, domain experts, and developers. The findings show that domain experts' needs towards XAI vary greatly between domains, whereas developers need better tools to create XAI systems. End-users, for their part, considered case-based explanations unfair and wanted explanations that "speak their language". The results also indicate that the effect of current XAI solutions on users' trust in AI systems is small or even non-existent. Both the studies' direct theoretical contributions and the number of theoretical lenses used were found to be relatively low. The thesis's main contribution is a synthesis of the extant empirical XAI literature from the HCI perspective, which previous studies have rarely brought together. Building on this thesis, researchers can further investigate research avenues such as explanation quality methodologies, algorithm auditing methods, users' mental models, and prior conceptions about AI.

    First impressions: A survey on vision-based apparent personality trait analysis

    © 2019 IEEE. Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most studied cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures and behaviors, and to use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and the potential societal impact of such methods, this paper presents an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review the subjectivity involved in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field.
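    To make the task concrete: apparent-trait recognition is commonly framed as regressing continuous Big Five scores from face images, as in the ChaLearn First Impressions benchmark used in this field. Below is a minimal, illustrative PyTorch sketch, not a model from the survey; the architecture and input size are arbitrary assumptions.

```python
# Illustrative sketch: map a face crop to five apparent-trait scores in [0, 1].
import torch
import torch.nn as nn

class ApparentTraitNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling over the face
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32, 5),                  # one output per Big Five trait
            nn.Sigmoid(),                      # scores constrained to [0, 1]
        )

    def forward(self, face: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(face))

model = ApparentTraitNet()
scores = model(torch.randn(1, 3, 112, 112))    # one 112x112 RGB face crop
print(scores.shape)                            # torch.Size([1, 5])
```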

    Manual for starch gel electrophoresis: A method for the detection of genetic variation

    The procedure for conducting horizontal starch gel electrophoresis on enzymes is described in detail. Areas covered are (1) collection and storage of specimens, (2) preparation of tissues, (3) preparation of a starch gel, (4) application of enzyme extracts to a gel, (5) setting up a gel for electrophoresis, (6) slicing a gel, and (7) staining a gel. Recipes are also included for 47 enzyme stains and 3 selected gel buffers.

    Who Wants to Revise Privatization and Why? Evidence from 28 Post-Communist Countries

    A 2006 survey of 28,000 individuals in 28 post-communist countries reveals overwhelming public support for the revision of privatization in the region. A majority of respondents, however, favors a revision of privatization that ultimately leaves firms in private hands. We identify which factors influence individuals' support for revising privatization and explore whether respondents' views are driven by a preference for state property or by concern for the fairness of privatization. We find that human capital poorly suited for a market economy with private ownership, and a lack of privately owned assets, increase support for revising privatization, with the primary reason being a preference for state over private property. Economic hardships during transition and work in the state sector also increase support for revising privatization, but mainly because of the perceived unfairness of privatization. The effects of human capital and asset ownership on support for revising privatization are independent of a country's institutional environment. In contrast, good governance institutions amplify the impact of positive and negative transition experiences on attitudes toward revising privatization. In countries with low inequality, individuals with positive and negative transition experiences hold significantly different views about the superiority of private property, but this difference is much smaller in countries with high inequality.
    Keywords: privatization, revision, nationalization, property rights, demand for property rights, legitimacy of property rights, transition
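    The amplification finding above corresponds to an interaction term in a binary-choice model. Below is a hedged illustration on synthetic data, assuming statsmodels; the variable names, coefficients, and sample are invented for the sketch and are not the authors' specification.

```python
# Synthetic-data sketch of a logit with a hardship x governance interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "hardship": rng.integers(0, 2, n),    # negative transition experience (0/1)
    "governance": rng.normal(0, 1, n),    # standardized governance index
})
# Generate support so that governance amplifies the hardship effect.
logit_p = -0.5 + 0.8 * df.hardship + 0.4 * df.hardship * df.governance
df["support_revision"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# 'hardship * governance' expands to both main effects plus the interaction.
fit = smf.logit("support_revision ~ hardship * governance", data=df).fit()
print(fit.summary())
```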

    Interactive tools for reproducible science

    Reproducibility should be a cornerstone of science: it plays an essential role in research validation and reuse. In recent years, the scientific community and the general public have become increasingly aware of the reproducibility crisis, i.e. the widespread inability of researchers to reproduce published work, including their own. The reproducibility crisis has been identified in most branches of data-driven science, and the effort required to document, clean, preserve, and share experimental resources has been described as one of its core contributors. Documentation, preservation, and sharing are key reproducible research practices, but they are of little perceived value for scientists because they fall outside the traditional academic reputation economy, which is focused on novelty-driven scientific contributions.

    Scientific research is increasingly focused on the creation, observation, processing, and analysis of large data volumes. On the one hand, this transition towards computational and data-intensive science poses new challenges for research reproducibility and reuse. On the other hand, increased availability of and advances in computation and web technologies offer new opportunities to address the reproducibility crisis. A prominent example is the World Wide Web (WWW), which was developed in response to researchers' needs to quickly share research data and findings with the scientific community. The WWW was invented at the European Organization for Nuclear Research (CERN), a key laboratory in High Energy Physics (HEP), one of the most data-intensive scientific domains. This thesis reports on research conducted in the context of CERN Analysis Preservation (CAP), a Research Data Management (RDM) service tailored to CERN's major experiments. We use this scientific environment to study the role and requirements of interactive tools in facilitating reproducible research, and build a wider understanding of researchers' interactions with tools that support research documentation, preservation, and sharing. From an HCI perspective, the following aspects are fundamental: (1) characterize and map requirements and practices around research preservation and reuse; (2) understand the wider role and impact of RDM tools in scientific workflows; (3) design tools and interactions that promote, motivate, and acknowledge reproducible research practices.

    The research reported in this thesis represents the first systematic application of HCI methods to the study and design of interactive tools for reproducible science. We built an empirical understanding of reproducible research practices and the role of supportive tools through research in HEP and across a variety of scientific fields. We designed prototypes and implemented services that aim to create rewarding and motivating interactions, and conducted mixed-method evaluations to assess the UX of the designs, in particular their usefulness, suitability, and persuasiveness. We report on four empirical studies in which 42 researchers and data managers participated. In the first interview study, we asked HEP data analysts about RDM practices and invited them to explore and discuss CAP. Our findings show that tailored preservation services allow for introducing and promoting meaningful rewards and incentives that benefit contributors in their research work. Here, we introduce the term secondary usage forms of RDM tools: while not part of the core mission of the tools, secondary usage forms motivate contributions through meaningful rewards. We extended this research through a cross-domain interview study with data analysts and data stewards from a diverse set of scientific fields, and based on its findings we contribute a Stage-Based Model of Personal RDM Commitment Evolution that explains how and why scientists commit to open and reproducible science.

    To address the motivation challenge, we explored if and how gamification can motivate contributions and promote reproducible research practices. To this end, we designed two prototypes of a gamified preservation service inspired by CAP, each making use of different underlying mechanisms. HEP researchers found both implementations valuable, enjoyable, suitable, and persuasive; the gamification layer improves the visibility of scientists and research work and facilitates content navigation and discovery. Based on these findings, in our second gamification study we implemented six tailored science badges in CAP. The badges promote and reward high-quality documentation and special uses of preserved research. Findings from our evaluation with HEP researchers show that tailored science badges enable novel forms of research repository navigation and content discovery that benefit users and contributors. We discuss how the use of tailored science badges as an incentivizing element paves new ways for interaction with research repositories.

    Finally, we describe the role of HCI in supporting reproducible research practices. We stress that tailored RDM tools can improve content navigation and discovery, which is key in the design of secondary usage forms. Moreover, we argue that incentivizing elements like gamification may not only motivate contributions, but further promote secondary uses and enable new forms of interaction with preserved research. Based on our empirical research, we describe the roles of both HCI scholars and practitioners in building interactive tools for reproducible science, and we outline our vision to transform computational and data-driven research preservation through ubiquitous preservation strategies that integrate into research workflows and make use of automated knowledge recording. In conclusion, this thesis advocates the unique role of HCI in supporting, motivating, and transforming reproducible research practices through the design of tools that enable effective RDM.
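    To illustrate the badge mechanism described above, the sketch below shows how badge-awarding rules could be computed from record metadata. The badge names, thresholds, and fields are hypothetical; the six CAP badges themselves are not specified here.

```python
# Hypothetical badge-awarding rules over preserved-analysis metadata.
from dataclasses import dataclass, field

@dataclass
class PreservedAnalysis:
    title: str
    documented_steps: int          # analysis steps with documentation
    total_steps: int               # all analysis steps in the record
    reuse_count: int = 0           # times the record was reused by others
    badges: set[str] = field(default_factory=set)

def award_badges(record: PreservedAnalysis) -> set[str]:
    # Reward high-quality documentation.
    if record.total_steps and record.documented_steps == record.total_steps:
        record.badges.add("fully-documented")
    # Reward secondary usage of the preserved research.
    if record.reuse_count >= 5:
        record.badges.add("frequently-reused")
    return record.badges

analysis = PreservedAnalysis("example selection study", 12, 12, reuse_count=7)
print(award_badges(analysis))  # {'fully-documented', 'frequently-reused'}
```

    Badges computed this way can then be surfaced as repository filters, which is one way such incentivizing elements also improve content navigation and discovery.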
