16 research outputs found

    Electronic data safes as an infrastructure for transformational government? A case study

    This article introduces and explores the potential of an active electronic data safe (AEDS) serving as an infrastructure for achieving transformational government. An AEDS connects individuals and organizations from the private and the public sector to exchange information items related to business processes, following the user-managed access paradigm. To realize transformational government's vision of user-centricity, fundamental changes in the service provision and collaboration of public and private sector organizations are needed. Findings of a user study with a prototype of an AEDS are used to identify four barriers to the adoption of an AEDS in the light of transformational government: (1) offering citizens unfamiliar services that have the character of experience goods; (2) failing to fulfill common service expectations of customers; (3) failing to establish contextual integrity for data sharing; and (4) failing to establish and run an AEDS as a multi-sided platform with an attractive business model.
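    The abstract's core architectural idea is that the user, not the service provider, decides which organizations may read individual information items (user-managed access). The sketch below illustrates that idea only; the names (DataSafe, store, grant, revoke, fetch) and the per-item grant model are assumptions for illustration, not the prototype studied in the article.

# Minimal sketch of user-managed access in an electronic data safe.
# All names are hypothetical; the studied AEDS prototype is not described
# at this level of detail in the abstract.
from dataclasses import dataclass, field


@dataclass
class DataSafe:
    owner: str
    items: dict = field(default_factory=dict)    # item_id -> content
    grants: dict = field(default_factory=dict)   # item_id -> set of org ids

    def store(self, item_id: str, content: str) -> None:
        self.items[item_id] = content

    def grant(self, item_id: str, org: str) -> None:
        """Owner explicitly allows an organization to read one item."""
        self.grants.setdefault(item_id, set()).add(org)

    def revoke(self, item_id: str, org: str) -> None:
        self.grants.get(item_id, set()).discard(org)

    def fetch(self, item_id: str, org: str) -> str:
        """Organizations only see items the owner has shared with them."""
        if org not in self.grants.get(item_id, set()):
            raise PermissionError(f"{org} has no access to {item_id}")
        return self.items[item_id]


safe = DataSafe(owner="alice")
safe.store("income-statement-2023", "...")
safe.grant("income-statement-2023", "tax-office")
print(safe.fetch("income-statement-2023", "tax-office"))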

    Calendar.help: Designing a Workflow-Based Scheduling Agent with Humans in the Loop

    Although information workers may complain about meetings, they are an essential part of their work life. Consequently, busy people spend a significant amount of time scheduling meetings. We present Calendar.help, a system that provides fast, efficient scheduling through structured workflows. Users interact with the system via email, delegating their scheduling needs to the system as if it were a human personal assistant. Common scheduling scenarios are broken down using well-defined workflows and completed as a series of microtasks that are automated when possible and executed by a human otherwise. Unusual scenarios fall back to a trained human assistant who executes them as unstructured macrotasks. We describe the iterative approach we used to develop Calendar.help, and share the lessons learned from scheduling thousands of meetings during a year of real-world deployments. Our findings provide insight into how complex information tasks can be broken down into repeatable components that can be executed efficiently to improve productivity.
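    The abstract describes a three-tier design: well-defined scheduling steps become microtasks that are automated when possible, handed to a human worker otherwise, and anything outside the defined workflows is escalated to a trained assistant as an unstructured macrotask. The routing sketch below is illustrative only; the function names and the automation check are assumptions, not Calendar.help's implementation.

# Illustrative sketch of the tiered routing described in the abstract:
# automated microtask -> human microtask -> human macrotask fallback.
from typing import Callable, Optional


def try_automation(task: dict) -> Optional[str]:
    """Attempt to resolve a well-defined microtask automatically."""
    if task["type"] == "extract_times" and "email_body" in task:
        # e.g., run a date/time extractor over the email text
        return "Tue 3pm, Wed 10am"
    return None                       # automation not confident enough


def route(task: dict,
          human_microtask: Callable[[dict], str],
          human_macrotask: Callable[[dict], str]) -> str:
    if task.get("workflow_defined", False):
        result = try_automation(task)
        if result is not None:
            return result             # fully automated
        return human_microtask(task)  # small, well-scoped human step
    # unusual scenario: hand the whole request to a trained assistant
    return human_macrotask(task)


reply = route(
    {"type": "extract_times", "workflow_defined": True,
     "email_body": "Could we meet Tuesday afternoon or Wednesday morning?"},
    human_microtask=lambda t: "worker answer",
    human_macrotask=lambda t: "assistant answer",
)
print(reply)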

    Leveraging Mixed Expertise in Crowdsourcing

    Crowdsourcing systems promise to leverage the "wisdom of crowds" to help solve many kinds of problems that are difficult to solve using only computers. Although a crowd of people inherently represents a diversity of skill levels, knowledge, and opinions, crowdsourcing system designers typically view this diversity as noise and effectively cancel it out by aggregating responses. However, we believe that by embracing crowd workers' diverse expertise levels, system designers can better leverage that knowledge to increase the wisdom of crowds. In this thesis, we propose solutions to a limitation of current crowdsourcing approaches: not accounting for a range of expertise levels in the crowd. The current body of work in crowdsourcing does not systematically examine this, suggesting that researchers may not believe the benefits of using mixed expertise warrant the complexities of supporting it. This thesis presents two systems, Escalier and Kurator, to show that leveraging mixed expertise is a worthwhile endeavor because it materially benefits system performance, at scale, for various types of problems. We also demonstrate an effective technique, called expertise layering, to incorporate mixed expertise into crowdsourcing systems. Finally, we show that leveraging mixed expertise enables researchers to use crowdsourcing to address new types of problems.
    PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133307/1/afdavid_1.pd
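    The thesis names a general technique, expertise layering, for folding mixed expertise into a crowdsourcing pipeline. One plausible reading, sketched below, is to let novice workers handle the bulk of items and escalate only low-agreement items to experts; the agreement threshold and function names are assumptions, not the Escalier or Kurator systems themselves.

# Hypothetical sketch of expertise layering: novices answer first,
# items with low agreement are escalated to an expert layer.
from collections import Counter


def layered_label(item, novice_answers, expert_answer_fn, agreement=0.8):
    """Return a label, escalating to an expert when novices disagree."""
    counts = Counter(novice_answers)
    label, votes = counts.most_common(1)[0]
    if votes / len(novice_answers) >= agreement:
        return label, "novice-layer"
    # low agreement: spend scarcer expert effort only where it pays off
    return expert_answer_fn(item), "expert-layer"


label, layer = layered_label(
    item={"text": "Is this citation relevant?"},
    novice_answers=["yes", "no", "yes", "no", "yes"],
    expert_answer_fn=lambda item: "yes",
)
print(label, layer)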

    Wikum: Bridging Discussion Forums and Wikis Using Recursive Summarization

    Large-scale discussions between many participants abound on the internet today, on topics ranging from political arguments to group coordination. But as these discussions grow to tens of thousands of posts, they become ever more difficult for a reader to digest. In this article, we describe a workflow called recursive summarization, implemented in our Wikum prototype, that enables a large population of readers or editors to work in small doses to refine out the main points of the discussion. More than just a single summary, our workflow produces a summary tree that enables a reader to explore distinct subtopics at multiple levels of detail based on their interests. We describe lab evaluations showing that (i) Wikum can be used more effectively than a control to quickly construct a summary tree and (ii) the summary tree is more effective than the original discussion in helping readers identify and explore the main topics.
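    Recursive summarization, as described here, builds a tree: editors first summarize small clusters of posts, then summarize groups of those summaries, until a single root summary remains that readers can expand level by level. The compact sketch below shows that structure; the node fields, chunk size, and placeholder summarize() are illustrative assumptions rather than Wikum's actual data model (in Wikum the summaries are written by people, not generated).

# Illustrative summary-tree construction for recursive summarization.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    summary: str
    children: List["Node"] = field(default_factory=list)


def summarize(texts: List[str]) -> str:
    # placeholder for a human-written summary of the given texts
    return f"summary of {len(texts)} item(s)"


def build_summary_tree(posts: List[str], chunk: int = 3) -> Node:
    """Recursively fold posts into a tree of progressively broader summaries."""
    nodes = [Node(summary=p) for p in posts]
    while len(nodes) > 1:
        grouped = [nodes[i:i + chunk] for i in range(0, len(nodes), chunk)]
        nodes = [Node(summary=summarize([n.summary for n in g]), children=g)
                 for g in grouped]
    return nodes[0]


root = build_summary_tree([f"post {i}" for i in range(9)])
print(root.summary, len(root.children))   # root summary covering 3 subtopics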

    Systems for Managing Work-Related Transitions

    People's work lives have become ever-populated with transitions across tasks, devices, and environments. Despite their ubiquitous nature, managing transitions across these three domains has remained a significant challenge. Current systems and interfaces for managing transitions have explored approaches that allow users to track work-related information or automatically capture or infer context, but do little to support user autonomy at its fullest. In this dissertation, we present three studies that support the goal of designing and understanding systems for managing work-related transitions. Our inquiry is motivated by the notion that people lack the ability to continue or discontinue their work to the degree they wish to do so. We scope our research to information work settings, and we use our three studies to generate novel insights about how empowering people's ability to engage with their work can mitigate the challenges of managing work-related transitions. We first introduce and study Mercury, a system that mitigates programmers' challenges in transitioning across devices and environments by enabling them to continue work on the go. Mercury orchestrates programmers' work practices by providing them with a series of auto-generated microtasks on their mobile device based on the current state of their source code. Tasks in Mercury are designed so that they can be completed quickly without the need for additional context, making them suitable to address during brief moments of downtime. When users complete microtasks on the go, Mercury calculates file changes and integrates them into the user's codebase to support task resumption. We then introduce SwitchBot, a conversational system that mitigates the challenges in discontinuing work during the transition between home and the workplace. SwitchBot's design philosophy is centered on assisting information workers in detaching from and reattaching with their work through brief conversations before the start and end of the workday. By design, SwitchBot's detachment and reattachment dialogues inquire about users' task-related goals or users' emotion-related goals. We evaluated SwitchBot with an emphasis on understanding how the system and its two dialogues uniquely affected information workers' ability to detach from and later reattach with their work. Following our studies of Mercury and SwitchBot, we present findings from an interview study with crowdworkers aimed at understanding the work-related transitions they experience in their work practice from the perspective of tools. We characterize the tooling observed in crowdworkers' work practices and identify three types of "fragmentation" that are motivated by tooling in their practice. Our study highlights several distinctions between traditional and contemporary information work settings and lays a foundation for future systems that aid next-generation information workers in managing work-related transitions. We conclude by outlining this dissertation's contributions and future research directions.
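    Mercury is described as deriving self-contained microtasks from the current state of the user's source code and folding the completed edits back into the codebase. The abstract does not say how tasks are derived or merged, so the TODO-based heuristic and line-replacement merge in the rough sketch below are purely illustrative assumptions.

# Hypothetical sketch of a Mercury-style cycle: derive small, context-free
# microtasks from source files and merge completed edits back in.
import re
from typing import Dict, List


def generate_microtasks(files: Dict[str, str]) -> List[dict]:
    """Turn each TODO comment into a self-contained microtask (illustrative)."""
    tasks = []
    for path, source in files.items():
        for lineno, line in enumerate(source.splitlines(), start=1):
            if re.search(r"#\s*TODO", line):
                tasks.append({"file": path, "line": lineno, "prompt": line.strip()})
    return tasks


def integrate(files: Dict[str, str], task: dict, replacement: str) -> Dict[str, str]:
    """Apply a completed microtask's edit back to the codebase."""
    lines = files[task["file"]].splitlines()
    lines[task["line"] - 1] = replacement
    return {**files, task["file"]: "\n".join(lines)}


codebase = {"app.py": "def total(xs):\n    # TODO handle empty list\n    return sum(xs)"}
tasks = generate_microtasks(codebase)
codebase = integrate(codebase, tasks[0], "    if not xs: return 0")
print(codebase["app.py"])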

    Economic indicators used for EU projects, in other criteria of aggregation than national / regional

    Economic and social indicators are created and published at national and regional levels. Nowadays, local and territorial indicators are better able to describe the stage of social and economic development and to illustrate the impact of European programs and projects in fields such as long-lasting development, entrepreneurial development, scientific research development and strategies, education and learning resources, IT resources, and the dissemination of European culture. While the first part draws only on quantitative information provided by the National Institute of Statistics (NIS), the examples of useful economic and social indicators that follow offer a dynamic view of how objectives, methods, and implementation are defined. The need for a quantitative framework of local and territorial indicators therefore calls for an original statistical methodology.
    Keywords: gross domestic product; indicators in macro-, meso-, and microeconomics; weight of selected factors; representative methodology

    Leveraging human-computer interaction and crowdsourcing for scholarly knowledge graph creation

    The number of scholarly publications continues to grow each year, as does the number of journals and active researchers. Therefore, methods and tools to organize scholarly knowledge are becoming increasingly important. Without such tools, it becomes increasingly difficult to conduct research in an efficient and effective manner. One of the fundamental issues scholarly communication is facing relates to the format in which knowledge is shared. Scholarly communication relies primarily on narrative, document-based formats that are specifically designed for human consumption. Machines cannot easily access and interpret such knowledge, leaving them unable to provide powerful tools to organize scholarly knowledge effectively. In this thesis, we propose to leverage knowledge graphs to represent, curate, and use scholarly knowledge. The systematic knowledge representation leads to machine-actionable knowledge, which enables machines to process scholarly knowledge with minimal human intervention. To generate and curate the knowledge graph, we propose a machine-learning-assisted crowdsourcing approach, in particular one based on Natural Language Processing (NLP). Currently, NLP techniques are not able to satisfactorily extract high-quality scholarly knowledge in an autonomous manner. With our proposed approach, we intertwine human and machine intelligence, thus exploiting the strengths of both. First, we discuss structured scholarly knowledge, where we present the Open Research Knowledge Graph (ORKG). Specifically, we focus on the design and development of the ORKG user interface (i.e., the frontend). One of the key challenges is to provide an interface that is powerful enough to create rich knowledge descriptions yet intuitive enough for researchers without a technical background to create such descriptions. The ORKG serves as the technical foundation for the rest of the work. Second, we focus on comparable scholarly knowledge, where we introduce the concept of ORKG comparisons. ORKG comparisons provide machine-actionable overviews of related literature in tabular form. We also present a methodology to leverage existing literature reviews to populate ORKG comparisons via a human-in-the-loop approach. Additionally, we show how ORKG comparisons can be used to form ORKG SmartReviews. SmartReviews provide dynamic literature reviews in the form of living documents. They are an attempt to address the main weaknesses of current literature review practice and to outline what the future of review publishing could look like. Third, we focus on designing suitable tasks to generate scholarly knowledge in a crowdsourced setting. We present an intelligent user interface that enables researchers to annotate key sentences in scholarly publications with a set of discourse classes. During this process, researchers are assisted by suggestions coming from NLP tools. In addition, we present an approach to validate NLP-generated statements using microtasks in a crowdsourced setting. With this approach, we lower the barrier to entering data in the ORKG and transform content consumers into content creators. With the work presented, we strive to transform scholarly communication and improve the machine-actionability of scholarly knowledge. The approaches and tools are deployed in a production environment.
As a result, the majority of the presented approaches and tools are currently in active use by various research communities and already have an impact on scholarly communication.
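    The last contribution combines NLP suggestions with crowd microtasks: a tool extracts candidate statements from a publication, and researchers confirm or reject them, turning content consumers into content creators. The sketch below shows one way such a validation microtask could be aggregated; the (subject, predicate, object) statement shape and the majority-vote rule are assumptions for illustration, not the ORKG's actual pipeline.

# Illustrative aggregation of a crowdsourced validation microtask for an
# NLP-extracted statement. Thresholds and data shapes are assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class Statement:
    subject: str
    predicate: str
    obj: str


def accept_statement(statement: Statement, votes: List[bool],
                     min_votes: int = 3, threshold: float = 0.66) -> bool:
    """Accept the statement into the graph only with enough agreement."""
    if len(votes) < min_votes:
        return False                  # wait for more validations
    return sum(votes) / len(votes) >= threshold


s = Statement("BERT", "evaluated_on", "SQuAD 1.1")
print(accept_statement(s, votes=[True, True, False, True]))   # True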