
    Enhancing Automation and Interoperability in Enterprise Crowdsourcing Environments

    The last couple of years have seen a fascinating evolution. While the early Web predominantly focused on human consumption of Web content, the widespread dissemination of social software and Web 2.0 technologies has enabled new forms of collaborative content creation and problem solving. These new forms often utilize the principles of collective intelligence, a phenomenon that emerges from a group of people who either cooperate or compete with each other to create a result that is better or more intelligent than any individual result (Leimeister, 2010; Malone, Laubacher, & Dellarocas, 2010). Crowdsourcing has recently gained attention as one of the mechanisms that tap into the power of web-enabled collective intelligence (Howe, 2008). Brabham (2013) defines it as “an online, distributed problem-solving and production model that leverages the collective intelligence of online communities to serve specific organizational goals” (p. xix). Well-known examples of crowdsourcing platforms include Wikipedia, Amazon Mechanical Turk, and InnoCentive. Since the emergence of the term crowdsourcing in 2006, one popular misconception has been that crowdsourcing relies largely on an amateur crowd rather than a pool of professionally skilled workers (Brabham, 2013). While this may be true for tasks with low cognitive demands, such as tagging a picture or rating a product, it is often not true for complex problem-solving and creative tasks, such as developing a new computer algorithm or creating an impressive product design. This raises the question of how to efficiently allocate an enterprise crowdsourcing task to appropriate members of the crowd. The sheer number of crowdsourcing tasks available at crowdsourcing intermediaries makes it especially challenging for workers to identify a task that matches their skills, experiences, and knowledge (Schall, 2012, p. 2).
An explanation of why the identification of appropriate expert knowledge plays a major role in crowdsourcing is partly given by Condorcet’s jury theorem (Sunstein, 2008, p. 25). The theorem states that if the average participant in a binary decision process is more likely to be correct than incorrect, then as the number of participants increases, so does the probability that the aggregate arrives at the right answer. Assuming that a suitable participant for a task is more likely to give a correct answer or solution than an unsuitable one, efficient task recommendation becomes crucial for improving the aggregated results of crowdsourcing processes. Although some assumptions of the theorem, such as independent votes, binary decisions, and homogeneous groups, are often unrealistic in practice, it illustrates the importance of optimized task allocation and group formation that consider the task requirements and workers’ characteristics. Ontologies are widely applied to support semantic search and recommendation mechanisms (Middleton, De Roure, & Shadbolt, 2009). However, little research has investigated the potential and the design of an ontology for the domain of enterprise crowdsourcing. The author of this thesis argues in favor of enhancing the automation and interoperability of an enterprise crowdsourcing environment through the introduction of a semantic vocabulary in the form of an expressive but easy-to-use ontology. The deployment of a semantic vocabulary for enterprise crowdsourcing is likely to provide several technical and economic benefits for an enterprise. These benefits were the main drivers of the efforts made during the research project of this thesis: 1. Task allocation: By utilizing the semantics, requesters are able to form smaller task-specific crowds that perform tasks at lower cost and in less time than larger crowds.
A standardized and controlled vocabulary allows requesters to communicate specific details about a crowdsourcing activity within a web page along with other existing displayed information. This has advantages for both contributors and requesters. On the one hand, contributors can easily and precisely search for tasks that correspond to their interests, experiences, skills, knowledge, and availability. On the other hand, crowdsourcing systems and intermediaries can proactively recommend crowdsourcing tasks to potential contributors (e.g., based on their social network profiles). 2. Quality control: Capturing and storing crowdsourcing data increases the overall transparency of the entire crowdsourcing activity and thus allows for more sophisticated quality control. Requesters are able to check the consistency of crowdsourcing data and receive appropriate support to verify and validate it according to defined data types and value ranges. Before involving potential workers in a crowdsourcing task, requesters can also judge their trustworthiness based on previously accomplished tasks and hence improve the recruitment process. 3. Task definition: A standardized set of semantic entities supports the configuration of a crowdsourcing task. Requesters can evaluate historical crowdsourcing data to get suggestions for identical or similar crowdsourcing tasks, for example, which incentive or evaluation mechanism to use. They may also decrease the time needed to configure a crowdsourcing task by reusing well-established task specifications of a particular type. 4.
Data integration and exchange: Applying a semantic vocabulary as a standard format for describing enterprise crowdsourcing activities allows not only crowdsourcing systems inside the company but also crowdsourcing intermediaries outside it to extract crowdsourcing data from other business applications, such as project management, enterprise resource planning, or social software, and use it for further processing without retyping and copying the data. Additionally, enterprise or web search engines may exploit the structured data and provide enhanced search, browsing, and navigation capabilities, for example, clustering similar crowdsourcing tasks according to the required qualifications or the offered incentives.

Summary: Hetmank, L. (2014). Enhancing Automation and Interoperability in Enterprise Crowdsourcing Environments (Summary).
Article 1: Hetmank, L. (2013). Components and Functions of Crowdsourcing Systems – A Systematic Literature Review. In 11th International Conference on Wirtschaftsinformatik (WI). Leipzig.
Article 2: Hetmank, L. (2014). A Synopsis of Enterprise Crowdsourcing Literature. In 22nd European Conference on Information Systems (ECIS). Tel Aviv.
Article 3: Hetmank, L. (2013). Towards a Semantic Standard for Enterprise Crowdsourcing – A Scenario-based Evaluation of a Conceptual Prototype. In 21st European Conference on Information Systems (ECIS). Utrecht.
Article 4: Hetmank, L. (2014). Developing an Ontology for Enterprise Crowdsourcing. In Multikonferenz Wirtschaftsinformatik (MKWI). Paderborn.
Article 5: Hetmank, L. (2014). An Ontology for Enhancing Automation and Interoperability in Enterprise Crowdsourcing Environments (Technical Report). Retrieved from http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-155187
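The role of Condorcet's jury theorem in the argument above can be made concrete with a short calculation. The sketch below computes the exact probability that a simple majority of n independent voters answers a binary question correctly; the competence values (0.55, 0.6, 0.8) are hypothetical, chosen only to illustrate the task-allocation point:

```python
from math import comb

def majority_correct(p: float, n: int) -> float:
    """Probability that a majority of n independent voters (n odd),
    each correct with probability p, reaches the right answer in a
    binary decision -- the setting of Condorcet's jury theorem."""
    smallest_majority = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(smallest_majority, n + 1))

# If p > 0.5, adding voters raises the probability of a correct
# aggregate answer ...
for n in (1, 11, 101):
    print(n, round(majority_correct(0.6, n), 3))

# ... yet a small, well-matched crowd (p = 0.8, n = 11) outperforms a
# much larger, loosely matched one (p = 0.55, n = 101) -- the case
# for semantic task allocation made in the thesis.
print(majority_correct(0.8, 11) > majority_correct(0.55, 101))  # prints True
```

This also shows why the homogeneity and independence assumptions matter: the formula treats every voter as interchangeable, which real crowds are not.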

    Web 3.0 and Crowdservicing

    The World Wide Web (WWW) has undergone significant evolution in the past decade. The emerging Web 3.0 is characterized by the vision of achieving a balanced integration of services provided by machines and human agents. This is also the logic of ‘crowdservicing’, which has led to the creation of platforms on which new applications and even enterprises can be created, and on which complex, web-scale problem-solving endeavors can be undertaken by flexibly connecting billions of loosely coupled computational agents or web services as well as human service-provider agents. In this paper, we build on research and development in the growing area of crowdsourcing to develop the concept of crowdservicing. We also present a novel crowdservicing application prototype, OntoAssist, to facilitate ontology evolution as an illustration of the concept. OntoAssist integrates the computational features of an existing search engine with the human computation provided by the crowd of users to find desirable search results.

    Validation and Evaluation

    In this technical report, we present prototypical implementations of innovative tools and methods for personalized and contextualized (multimedia) search, collaborative ontology evolution, ontology evaluation and cost models, and dynamic access and trends in distributed (semantic) knowledge, developed according to the working plan outlined in Technical Report TR-B-12-04. The prototypes complete the next milestone on the path to an integral Corporate Semantic Web architecture based on the three pillars Corporate Ontology Engineering, Corporate Semantic Collaboration, and Corporate Semantic Search, as envisioned in TR-B-08-09.

    Task Recommendation in Crowdsourcing Platforms

    Task distribution platforms, such as micro-task markets, project assignment portals, and job search engines, support the assignment of tasks to workers. Public crowdsourcing platforms support the assignment of tasks in micro-task markets to help task requesters complete their tasks and allow workers to earn money. Enterprise crowdsourcing platforms provide a marketplace within enterprises for the internal placement of tasks from employers to employees. Most task distribution platforms of both types rely on the workers' selection capabilities or provide only simple filtering steps to reduce the number of tasks a worker can choose from. This self-selection mechanism unfortunately allows tasks to be performed by under- or over-qualified workers. Supporting workers by introducing a task recommender system helps to remedy such deficits of existing task distribution platforms. In this thesis, the requirements for task recommendation in task distribution platforms are gathered with a focus on the worker's perspective, the design of appropriate assignment strategies is described, and innovative methods to recommend tasks based on their textual descriptions are provided. Different viewpoints are taken into account by analyzing the domains of micro-tasks, project assignments, and job postings. The requirements of enterprise crowdsourcing platforms are compiled based on the literature and a qualitative study, providing a conceptual design of task assignment strategies. The demands of workers and their perception of task similarity on public crowdsourcing platforms are identified, leading to the design and implementation of additional methods to determine the similarity of micro-tasks. The textual descriptions of micro-tasks, projects, and job postings are analyzed in order to provide innovative methods for task recommendation in these domains.
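The kind of description-based task similarity this abstract refers to can be illustrated with a deliberately simple bag-of-words baseline (a sketch under obvious simplifications: whitespace tokenization, raw TF-IDF, no stemming; the example task texts are invented, and this is not one of the methods developed in the thesis):

```python
import math
from collections import Counter

def tfidf_vectors(descriptions):
    """Turn task descriptions into sparse TF-IDF weighted bags of words."""
    tokenized = [d.lower().split() for d in descriptions]
    n = len(tokenized)
    df = Counter(term for tokens in tokenized for term in set(tokens))
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(tokens).items()}
            for tokens in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = (math.sqrt(sum(w * w for w in u.values()))
            * math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

tasks = [
    "transcribe a short audio clip into english text",  # worker's last task
    "transcribe a scanned receipt into plain text",     # candidate task
    "design a logo for a new coffee brand",             # candidate task
]
v = tfidf_vectors(tasks)
# The second transcription task scores higher than the design task,
# so it would be recommended to this worker:
print(cosine(v[0], v[1]), cosine(v[0], v[2]))
```

Terms occurring in every description (here, "a") get an IDF of zero, so they contribute nothing to the similarity, which is the usual rationale for IDF weighting.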

    Collaborative framework in computer aided innovation 2.0: Application to process system engineering

    In today's economy, innovation is generally a social act; it requires the management of knowledge, along with the techniques and methodologies to drive it. Innovation is not the product of one isolated intelligence; instead, it is the result of a multi-disciplinary workgroup led by a process or a methodology. Conceptual design, which occurs in the first stages of the innovation process, represents one of the most important challenges in industry today. One of the main challenges faced by chemical industries in the conceptual design phase is to provide the means, in the form of methods and computational tools, for solving problems systematically, while benefiting from the collective efforts of the individual intelligences involved. Hence, the main objective of this work is to provide a solution that improves the creative capacity of a team involved in the innovation process, in particular the preliminary (critical) phase of conceptual design. Consequently, it is important to understand the techniques, methods, and tools that best support the generation of novel ideas and creative solutions. In addition, it is necessary to study the contribution of information and communication technologies as a means to support collaboration. Web technologies are considered complementary tools for implementing methods and techniques in collaborative design, particularly in the conceptual design stage. These technologies allow setting up distributed collaborative environments that bring together the resources and the experts who can relate existing pieces of knowledge to new contexts. It is the synergy created in this kind of environment that produces valuable concepts and ideas in the form of Collective Intelligence. Nevertheless, most existing solutions for collective intelligence or crowdsourcing environments do not report the use of a particular methodology to improve the participants' creativity.
The solution in this work describes a social network service that enables users to cooperatively solve problems oriented toward (but not limited to) the phase of conceptual design. In this work we propose that the use of Collective Intelligence in combination with the TRIZ-CBR model could guide the creative efforts of a team toward developing innovative solutions. With this work we seek to connect experts from a particular field, TRIZ practitioners, and stakeholders with the objective of solving problems collaboratively, unleashing the collective intelligence to improve creativity. This work builds on the concept named "Open CAI 2.0" to propose a solution in the form of a theoretical framework. The contributions seek to move the field of Computer Aided Innovation a step forward.

    Crowdsourcing a Word-Emotion Association Lexicon

    Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word-emotion and word-polarity association lexicon quickly and inexpensively. We enumerate the challenges of emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about the emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotion-annotation questions, and show that asking whether a term is associated with an emotion leads to markedly higher inter-annotator agreement than asking whether a term evokes an emotion.
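The quality-control idea described above, embedding a word-choice question whose correct answer is known, can be sketched as a filter-then-vote aggregation (the data shapes, gold answers, and labels here are illustrative stand-ins, not the paper's actual annotation format):

```python
from collections import Counter

def aggregate_annotations(annotations, gold_choices):
    """Build a term -> emotion lexicon by majority vote, discarding any
    submission whose word-choice answer disagrees with the gold answer
    (the annotator is likely unfamiliar with the term, or entering
    data maliciously)."""
    votes = {}
    for term, word_choice, emotion in annotations:
        if word_choice != gold_choices[term]:
            continue  # reject the whole annotation for this term
        votes.setdefault(term, Counter())[emotion] += 1
    return {term: counts.most_common(1)[0][0]
            for term, counts in votes.items()}

gold_choices = {"abandon": "leave behind"}
annotations = [
    ("abandon", "leave behind", "sadness"),
    ("abandon", "pick up", "joy"),          # fails the word-choice check
    ("abandon", "leave behind", "sadness"),
    ("abandon", "leave behind", "fear"),
]
print(aggregate_annotations(annotations, gold_choices))
# -> {'abandon': 'sadness'}
```

The rejected submission never reaches the vote, so a single careless or malicious annotator cannot tilt the majority label.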

    Hybrid human-AI driven open personalized education

    Attaining the skills that match labor market demand is getting increasingly complicated, as prerequisite knowledge, skills, and abilities evolve dynamically through an uncontrollable and seemingly unpredictable process. Furthermore, people's interest in gaining knowledge pertaining to their personal lives (e.g., hobbies and life-hacks) has also been increasing dramatically in recent decades. In this situation, anticipating and addressing learning needs are fundamental challenges for twenty-first century education. The need for such technologies has escalated due to the COVID-19 pandemic, during which online education became a key player in all types of training programs. The burgeoning availability of data, not only on the demand side but also on the supply side (in the form of open/free educational resources), coupled with smart technologies, may provide fertile ground for addressing this challenge. Therefore, this thesis aims to contribute to the literature on the utilization of (open and free online) educational resources toward goal-driven personalized informal learning, by developing a novel human-AI based system called eDoer. In this thesis, we discuss all the new knowledge that was created in order to complete the system development, which includes 1) prototype development and qualitative user validation, 2) decomposing the preliminary requirements into meaningful components, 3) implementation and validation of each component, and 4) a final requirement analysis followed by combining the implemented components in order to develop and validate the planned system (eDoer).
All in all, our proposed system 1) derives the skill requirements for a wide range of occupations (as skills and jobs are typical goals in informal learning) through an analysis of online job vacancy announcements, 2) decomposes skills into learning topics, 3) collects a variety of open/free online educational resources that address those topics, 4) checks the quality of those resources and their topic relevance using our developed intelligent prediction models, 5) helps learners to set their learning goals, 6) recommends personalized learning pathways and learning content based on individual learning goals, and 7) provides assessment services for learners to monitor their progress towards their desired learning objectives. Accordingly, we created a learning dashboard focusing on three data science related jobs and conducted an initial validation of eDoer through a randomized experiment. Controlling for the effects of prior knowledge as assessed by the pretest, the randomized experiment provided tentative support for the hypothesis that learners who engaged with personal eDoer recommendations attain higher scores on the posttest than those who did not. The hypothesis that learners who received personalized content in terms of format, length, level of detail, and content type would achieve higher scores than those receiving non-personalized content was not supported by a statistically significant result.
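The pathway-recommendation step (point 6 above) can be illustrated with a toy prerequisite graph: order the topics a learner still needs so that every prerequisite precedes its dependents, skipping topics already mastered. The topic graph and names below are hypothetical, not eDoer's actual model, and the sketch assumes the graph is acyclic:

```python
def learning_pathway(goal, prerequisites, known):
    """Depth-first ordering of the topics required for a goal skill,
    skipping topics the learner already masters. Assumes the
    prerequisite graph contains no cycles."""
    path, seen = [], set(known)

    def visit(topic):
        if topic in seen:
            return
        seen.add(topic)
        for dep in prerequisites.get(topic, ()):
            visit(dep)  # prerequisites are scheduled first
        path.append(topic)

    visit(goal)
    return path

prereqs = {
    "regression": ["linear algebra", "statistics"],
    "statistics": ["probability"],
}
# A learner who already knows probability gets a three-step pathway:
print(learning_pathway("regression", prereqs, known={"probability"}))
# -> ['linear algebra', 'statistics', 'regression']
```

Personalization here amounts to subtracting the learner's known topics from the graph before ordering; content selection per topic would sit on top of this skeleton.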

    ECSCW 2013 Adjunct Proceedings: The 13th European Conference on Computer Supported Cooperative Work, 21-25 September 2013, Paphos, Cyprus

    This volume presents the adjunct proceedings of ECSCW 2013. While the proceedings published by Springer Verlag contain the core of the technical program, namely the full papers, the adjunct proceedings include contributions on work in progress, workshops and master classes, demos and videos, the doctoral colloquium, and keynotes, thus indicating what our field may become in the future.

    Using the Collective Intelligence for inventive problem solving: A contribution for Open Computer Aided Innovation

    In the industrial context, there is interest in the collective resolution of creative problems during the conceptual design phase. In this work we introduce an information-based software framework useful for collaborating on inventive problem solving. This framework proposes the implementation of techniques from the Collective Intelligence (CI) research field in combination with the systematic methods provided by the TRIZ theory. Both approaches are centered on the human aspect of the innovation process, and they are complementary: while CI focuses on the intelligent behavior that emerges in collaborative work, the TRIZ theory is centered on the individual capacity to solve problems systematically. The framework's objective is to enhance the individual creativity supported by the TRIZ method and tools with the value created by collective contributions. This work aims to contribute by formulating the basis for extending the research field of Computer Aided Innovation to the next evolutionary step, called “Open CAI 2.0”.