
    Accurator: Nichesourcing for Cultural Heritage

    With more and more cultural heritage data being published online, its usefulness in this open context depends on the quality and diversity of descriptive metadata for collection objects. In many cases, existing metadata is not adequate for a variety of retrieval and research tasks, and more specific annotations are necessary. However, eliciting such annotations is a challenge, since it often requires domain-specific knowledge. While crowdsourcing can be used successfully to elicit simple annotations, identifying people with the required expertise can prove troublesome for tasks requiring more complex or domain-specific knowledge. Nichesourcing addresses this problem by tapping into the expert knowledge available in niche communities. This paper presents Accurator, a methodology for conducting nichesourcing campaigns for cultural heritage institutions by addressing communities, organizing events, and tailoring a web-based annotation tool to a domain of choice. The contribution of this paper is threefold: 1) a nichesourcing methodology, 2) an annotation tool for experts, and 3) validation of the methodology and tool in three case studies. The three domains of the case studies are birds on art, bible prints, and fashion images. We compare the quality and quantity of the annotations obtained in the three case studies, showing that the nichesourcing methodology, in combination with the image annotation tool, can be used to collect high-quality annotations in a variety of domains and annotation tasks. A user evaluation indicates the tool is suited and usable for domain-specific annotation tasks.
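
    The abstract compares annotation quality across case studies but does not spell out a metric here. As a minimal, purely illustrative sketch (the data model and the majority-vote agreement measure below are assumptions, not the authors' method), per-object agreement among niche experts could be estimated like this:

```python
from collections import Counter, defaultdict

# Hypothetical data model: the paper does not publish its annotation schema,
# so the field names below are illustrative only.
annotations = [
    {"object_id": "print-17", "annotator": "expert-a", "label": "common kingfisher"},
    {"object_id": "print-17", "annotator": "expert-b", "label": "common kingfisher"},
    {"object_id": "print-17", "annotator": "expert-c", "label": "eurasian jay"},
]

def majority_agreement(annotations):
    """Per object, the fraction of annotators agreeing with the majority label."""
    by_object = defaultdict(list)
    for a in annotations:
        by_object[a["object_id"]].append(a["label"])
    return {obj: Counter(labels).most_common(1)[0][1] / len(labels)
            for obj, labels in by_object.items()}

print(majority_agreement(annotations))  # {'print-17': 0.666...}
```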

    Crowdsourcing design guidance for contextual adaptation of text content in augmented reality

    Funding Information: This work was supported by EPSRC (grants EP/R004471/1 and EP/S027432/1). Supporting data for this publication is available at https://doi.org/10.17863/CAM.62931. Augmented Reality (AR) can deliver engaging user experiences that seamlessly meld virtual content with the physical environment. However, building such experiences is challenging due to the developer's inability to assess how uncontrolled deployment contexts may influence the user experience. To address this issue, we demonstrate a method for rapidly conducting AR experiments and real-world data collection in the user's own physical environment using a privacy-conscious mobile web application. The approach leverages the large number of distinct user contexts accessible through crowdsourcing to efficiently source diverse context and perceptual preference data. The insights gathered through this method complement emerging design guidance and sample-limited lab-based studies. The utility of the method is illustrated by reexamining the design challenge of adapting AR text content to the user's environment. Finally, we demonstrate how gathered design insight can be operationalized to provide adaptive text content functionality in an AR headset.
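
    The adaptation rules the authors derive come from their crowdsourced data, which is not reproduced in this listing. As a generic illustration of the design problem (not the paper's method), a WCAG-style contrast heuristic can pick a legible text color against a sampled background; all function names here are invented:

```python
def relative_luminance(rgb):
    """WCAG 2.1 relative luminance from 8-bit sRGB components."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def pick_text_color(background_rgb):
    """Choose black or white text, whichever contrasts more with the sampled background."""
    black, white = (0, 0, 0), (255, 255, 255)
    return max((black, white), key=lambda fg: contrast_ratio(fg, background_rgb))

print(pick_text_color((40, 90, 60)))  # -> (255, 255, 255) against a dark green wall
```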

    Hybrid human-AI driven open personalized education

    Attaining the skills that match labor market demand is becoming increasingly complicated, as prerequisite knowledge, skills, and abilities evolve dynamically through an uncontrollable and seemingly unpredictable process. Furthermore, people's interest in gaining knowledge pertaining to their personal lives (e.g., hobbies and life-hacks) has also increased dramatically in recent decades. In this situation, anticipating and addressing learning needs are fundamental challenges for twenty-first-century education. The need for such technologies has escalated due to the COVID-19 pandemic, during which online education became a key player in all types of training programs. The burgeoning availability of data, not only on the demand side but also on the supply side (in the form of open/free educational resources), coupled with smart technologies, may provide fertile ground for addressing this challenge. Therefore, this thesis aims to contribute to the literature on the utilization of (open and free online) educational resources toward goal-driven personalized informal learning by developing a novel human-AI based system called eDoer. In this thesis, we discuss all the new knowledge that was created in order to complete the system development, which includes 1) prototype development and qualitative user validation, 2) decomposing the preliminary requirements into meaningful components, 3) implementation and validation of each component, and 4) a final requirement analysis followed by combining the implemented components in order to develop and validate the planned system (eDoer). All in all, our proposed system 1) derives the skill requirements for a wide range of occupations (as skills and jobs are typical goals in informal learning) through an analysis of online job vacancy announcements, 2) decomposes skills into learning topics, 3) collects a variety of open/free online educational resources that address those topics, 4) checks the quality of those resources and their topic relevance using our intelligent prediction models, 5) helps learners set their learning goals, 6) recommends personalized learning pathways and learning content based on individual learning goals, and 7) provides assessment services for learners to monitor their progress towards their desired learning objectives. Accordingly, we created a learning dashboard focusing on three data-science-related jobs and conducted an initial validation of eDoer through a randomized experiment. Controlling for the effects of prior knowledge as assessed by the pretest, the randomized experiment provided tentative support for the hypothesis that learners who engaged with personal eDoer recommendations attain higher scores on the posttest than those who did not. The hypothesis that learners who received content personalized in terms of format, length, level of detail, and content type would achieve higher scores than those receiving non-personalized content was not supported by a statistically significant result.
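
    To make the pathway-recommendation step (step 6) concrete: a minimal sketch, assuming invented data structures and scores, since eDoer's actual schemas and ranking models are not given in the abstract. It picks one resource per topic, preferring the learner's format and ranking by a quality-times-relevance score:

```python
# Illustrative sketch only: topic lists, resource records, and scores are assumptions.
skill_topics = {
    "data-analyst": ["statistics-basics", "sql"],
}

resources = [
    {"topic": "sql", "url": "https://example.org/sql-intro", "format": "video",
     "quality": 0.9, "relevance": 0.8},
    {"topic": "sql", "url": "https://example.org/sql-text", "format": "text",
     "quality": 0.7, "relevance": 0.9},
    {"topic": "statistics-basics", "url": "https://example.org/stats", "format": "text",
     "quality": 0.8, "relevance": 0.9},
]

def recommend_pathway(goal, preferred_format, skill_topics, resources):
    """One resource per topic: prefer the learner's format, rank by quality * relevance."""
    pathway = []
    for topic in skill_topics[goal]:
        candidates = [r for r in resources if r["topic"] == topic]
        preferred = [r for r in candidates if r["format"] == preferred_format] or candidates
        if preferred:
            pathway.append(max(preferred, key=lambda r: r["quality"] * r["relevance"]))
    return pathway

print([r["url"] for r in recommend_pathway("data-analyst", "video", skill_topics, resources)])
# ['https://example.org/stats', 'https://example.org/sql-intro']
```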

    Components and Functions of Crowdsourcing Systems – A Systematic Literature Review

    Many organizations are now starting to introduce crowdsourcing as a new business model to outsource tasks that are traditionally performed by a small group of people to an undefined, large workforce. While the utilization of crowdsourcing offers many advantages, the development of the required system carries some risks, which can be reduced by establishing a profound theoretical foundation. Thus, this article strives to gain a better understanding of what crowdsourcing systems are and what typical design aspects are considered in the development of such systems. In this paper, the author conducted a systematic literature review in the domain of crowdsourcing systems. As a result, 17 definitions of crowdsourcing systems were found and categorized into four perspectives: the organizational, the technical, the functional, and the human-centric. In the second part of the results, the author derived and presented components and functions that are implemented in a crowdsourcing system.
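
    The review's own component taxonomy is in the paper itself; as a loose, assumed illustration of the kind of functions such systems commonly implement (task management, worker-facing delivery, contribution handling, quality control), a minimal sketch might look like this:

```python
from collections import Counter
from dataclasses import dataclass, field

# Illustrative decomposition only; it does not reproduce the review's component list.
@dataclass
class Task:
    task_id: str
    payload: dict
    answers: list = field(default_factory=list)

class CrowdsourcingSystem:
    """Minimal functional core: publish tasks, hand them to workers, aggregate answers."""

    def __init__(self):
        self.tasks = {}

    def publish(self, task):                    # task management
        self.tasks[task.task_id] = task

    def fetch(self, task_id):                   # worker-facing task delivery
        return self.tasks[task_id].payload

    def submit(self, task_id, worker, answer):  # contribution handling
        self.tasks[task_id].answers.append((worker, answer))

    def aggregate(self, task_id):               # quality control via majority vote
        answers = [a for _, a in self.tasks[task_id].answers]
        return Counter(answers).most_common(1)[0][0] if answers else None

cs = CrowdsourcingSystem()
cs.publish(Task("t1", {"question": "Is this image a cat?"}))
for worker, answer in [("w1", "yes"), ("w2", "yes"), ("w3", "no")]:
    cs.submit("t1", worker, answer)
print(cs.aggregate("t1"))  # 'yes'
```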

    Hybrid human-machine information systems for data classification

    Over the last decade, we have seen intense development of machine learning approaches for solving various tasks in diverse domains. Despite the remarkable advancements in this field, there are still task categories where machine learning models fall short of the required accuracy. This is the case with tasks that require human cognitive skills, such as sentiment analysis and emotional or contextual understanding. On the other hand, human-based computation approaches, such as crowdsourcing, are popular for solving such tasks. Crowdsourcing enables access to a vast number of groups with different expertise and, if managed properly, generates high-quality results. However, crowdsourcing as a standalone approach is not scalable due to the latency and cost it introduces. Addressing the challenges and limitations that human- and machine-based approaches have individually requires bridging the two fields into a hybrid intelligence, seen as a promising approach to solving critical and complex real-world tasks. This thesis focuses on hybrid human-machine information systems, combining machine and human intelligence and leveraging their complementary strengths: the data processing efficiency of machine learning and the data quality generated by crowdsourcing. In this thesis, we present hybrid human-machine models that address challenges along three dimensions: accuracy, latency, and cost. Solving data classification tasks in different domains imposes different requirements concerning accuracy, latency, and cost. Motivated by this fact, we introduce a master component that evaluates these criteria to find a suitable model as a trade-off solution. In hybrid human-machine information systems, incorporating human judgments is expected to improve the accuracy of the system. To ensure this, we focus on the human intelligence component, integrating profile-aware crowdsourcing for task assignment and data quality control mechanisms in the hybrid pipelines. The proposed conceptual hybrid human-machine models materialize in the conducted experiments. Motivated by challenging scenarios and using real-world datasets, we implement the hybrid models in three experiments. Evaluations show that the implemented hybrid human-machine architectures for data classification tasks lead to better results compared to each of the two approaches individually, improving the overall accuracy at an acceptable cost and latency.
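
    A common way to realize such a hybrid classification pipeline, sketched here under assumed names (`model`, `crowd`, and the threshold are stand-ins, not the thesis's actual components), is confidence-based routing: the machine handles what it is sure about, and uncertain items go to the crowd. The threshold plays the role of a trade-off knob of the kind the master component evaluates: raising it routes more items to the crowd, improving accuracy at higher cost and latency.

```python
def hybrid_classify(item, model, crowd, threshold=0.85):
    """Route items the model is unsure about to the crowd.

    `model` and `crowd` are stand-ins for a trained classifier and a
    crowdsourcing backend; neither name comes from the thesis.
    """
    label, confidence = model(item)
    if confidence >= threshold:
        return label, "machine"  # cheap, low-latency path
    votes = crowd(item)          # expensive, high-quality path
    return max(set(votes), key=votes.count), "crowd"

# Toy stand-ins for demonstration.
model = lambda text: ("positive", 0.6) if "?" in text else ("positive", 0.95)
crowd = lambda text: ["negative", "negative", "positive"]
print(hybrid_classify("Great product!", model, crowd))  # ('positive', 'machine')
print(hybrid_classify("Is this good?", model, crowd))   # ('negative', 'crowd')
```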

    Geoinformatics in Citizen Science

    The book features contributions that report original research on the theoretical, technological, and social aspects of geoinformation methods as applied to supporting citizen science. Specifically, the book focuses on the technological aspects of the field and their application to the recruitment of volunteers and the collection, management, and analysis of geotagged information to support volunteer involvement in scientific projects. Internationally renowned research groups share research in three areas: first, the key methods of geoinformatics within citizen science initiatives to support scientists in discovering new knowledge in specific application domains or in performing relevant activities, such as reliable geodata filtering, management, analysis, synthesis, sharing, and visualization; second, the critical aspects of citizen science initiatives that call for emerging or novel approaches of geoinformatics to acquire and handle geoinformation; and third, novel geoinformatics research that could serve in support of citizen science.

    Modeling, enacting, and integrating custom crowdsourcing processes

    Crowdsourcing (CS) is the outsourcing of a unit of work to a crowd of people via an open call for contributions. Thanks to the availability of online CS platforms, such as Amazon Mechanical Turk or CrowdFlower, the practice has experienced tremendous growth over the past few years and demonstrated its viability in a variety of fields, such as data collection and analysis or human computation. Yet it is also increasingly struggling with the inherent limitations of these platforms: each platform has its own logic of how to crowdsource work (e.g., marketplace or contest), there is only very little support for structured work (work that requires the coordination of multiple tasks), and it is hard to integrate crowdsourced tasks into state-of-the-art business process management (BPM) or information systems. We attack these three shortcomings by (1) developing a flexible CS platform (we call it Crowd Computer, or CC) that allows one to program custom CS logics for individual and structured tasks, (2) devising a BPMN-based modeling language that allows one to program the CC intuitively, (3) equipping the language with a dedicated visual editor, and (4) implementing the CC on top of standard BPM technology so that it can easily be integrated into existing software and processes. We demonstrate the effectiveness of the approach with a case study on the crowd-based mining of mashup model patterns.
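
    The Crowd Computer itself is programmed through a BPMN-based language, not Python; the sketch below only mimics the core idea of pluggable crowdsourcing logics (marketplace vs. contest) as interchangeable strategies, with all names invented for illustration:

```python
# Illustrative only: the platform stays fixed while the CS logic is a parameter.
def marketplace(submissions, reward):
    """Marketplace logic: pay every accepted contribution a fixed price."""
    return {s["worker"]: reward for s in submissions if s["accepted"]}

def contest(submissions, reward):
    """Contest logic: pay only the best-scored contribution the full prize."""
    best = max(submissions, key=lambda s: s["score"])
    return {best["worker"]: reward}

def run_task(task, submissions, logic, reward):
    """Execute a crowdsourced task under a chosen, pluggable CS logic."""
    return {"task": task, "payouts": logic(submissions, reward)}

subs = [{"worker": "w1", "score": 0.7, "accepted": True},
        {"worker": "w2", "score": 0.9, "accepted": True}]
print(run_task("label-images", subs, marketplace, 2))  # both workers paid 2
print(run_task("design-logo", subs, contest, 50))      # only w2 paid 50
```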

    Data-Driven Usability Refactoring: Tools and Challenges

    Usability has long been recognized as an important software quality attribute, and it has become essential in web application development and maintenance. However, it is still hard to integrate usability evaluation and improvement practices into the software development process. Moreover, these practices are usually unaffordable for small to medium-sized companies. In this position paper we propose an approach and tools that allow the crowd of web users to participate in the process of usability evaluation and repair. Since we use the refactoring technique for usability improvement, we introduce the notion of "data-driven refactoring": using data from the mass of users to learn about refactoring opportunities, as well as about refactoring effectiveness. This creates an improvement cycle in which some refactorings may be discarded and others introduced, depending on their evaluated success. The paper also discusses some of the challenges that we foresee ahead.
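
    The keep-or-discard step of that improvement cycle can be sketched as a simple threshold decision over usage metrics gathered from the crowd of users. This is a hypothetical illustration: the metric names, numbers, and threshold below are assumptions, not the paper's evaluation design.

```python
# Hypothetical per-refactoring metrics gathered from real users (A/B-style).
refactoring_metrics = {
    "enlarge-click-targets":   {"baseline_completion": 0.72, "variant_completion": 0.81},
    "inline-form-validation":  {"baseline_completion": 0.72, "variant_completion": 0.70},
}

def evaluate_refactorings(metrics, min_lift=0.02):
    """Keep a refactoring only if it lifts task completion beyond a threshold."""
    decisions = {}
    for name, m in metrics.items():
        lift = m["variant_completion"] - m["baseline_completion"]
        decisions[name] = "keep" if lift >= min_lift else "discard"
    return decisions

print(evaluate_refactorings(refactoring_metrics))
# {'enlarge-click-targets': 'keep', 'inline-form-validation': 'discard'}
```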