
    A robust consistency model of crowd workers in text labeling tasks

    Crowdsourcing is a popular human-based model for acquiring labeled data. Despite its ability to generate huge amounts of labeled data at moderate cost, it is susceptible to low-quality labels, which can arise from unintentional or intentional errors by crowd workers. Consistency is an important attribute of reliability: a practical metric that evaluates a crowd worker's reliability based on their ability to agree with themselves by yielding the same output when repeatedly given a particular input. Consistency has not yet been sufficiently explored in the literature. In this work, we propose a novel consistency model based on the pairwise comparisons method and apply it to unpaid workers. We measure the workers' consistency on tasks of labeling political text-based claims and study how different characteristics of duplicate tasks affect that consistency. Our results show that the proposed model outperforms the current state-of-the-art models in terms of accuracy. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0
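
    The abstract does not spell out the model itself, so the following is only a minimal sketch of the underlying idea: a worker's self-consistency estimated as the agreement rate across pairs of repeated answers to duplicated tasks. The data layout and function name are hypothetical, not taken from the paper.

    # Minimal sketch (not the paper's exact model): a worker's self-consistency
    # as the fraction of agreeing pairs among repeated answers to duplicate tasks.
    from collections import defaultdict

    def consistency_score(responses):
        """responses: list of (task_id, label) pairs from one worker,
        where duplicated tasks share the same task_id (hypothetical layout)."""
        by_task = defaultdict(list)
        for task_id, label in responses:
            by_task[task_id].append(label)

        agree, total = 0, 0
        for labels in by_task.values():
            # compare every pair of repeated answers to the same task
            for i in range(len(labels)):
                for j in range(i + 1, len(labels)):
                    total += 1
                    agree += labels[i] == labels[j]
        return agree / total if total else 1.0

    # Example: the worker flips once on task "t2".
    worker = [("t1", "true"), ("t1", "true"),
              ("t2", "false"), ("t2", "true"), ("t2", "false")]
    print(consistency_score(worker))  # 0.5: two agreeing pairs out of four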

    A framework for cloud-based healthcare services to monitor noncommunicable diseases patient

    Monitoring patients who have noncommunicable diseases is a major challenge. These illnesses require continuous monitoring, which leads to high healthcare costs for patients. Several solutions have been proposed to reduce the economic impact of these diseases while preserving the quality of service. One of the best solutions is mobile healthcare, where patients do not need to be hospitalized under the supervision of caregivers. This paper presents a new hybrid framework based on a mobile multimedia cloud that is scalable and efficient and provides a cost-effective monitoring solution for noncommunicable disease patients. To validate the effectiveness of the framework, we also propose a novel evaluation model based on the Analytical Hierarchy Process (AHP), which incorporates criteria from multiple decision makers in the context of healthcare monitoring applications. Using the proposed evaluation model, we analyzed three possible frameworks (the proposed hybrid framework, a mobile framework, and a multimedia framework) in terms of their applicability in a real healthcare environment.
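
    The paper's actual AHP criteria and judgments are not given in the abstract; the sketch below only illustrates the standard AHP step of deriving priority weights and a consistency ratio from a pairwise comparison matrix, using made-up criteria (cost, scalability, quality of service) and values.

    # Illustrative AHP step (criteria and judgments are hypothetical):
    # derive priority weights from a pairwise comparison matrix and
    # check judgment consistency via the consistency ratio (CR).
    import numpy as np

    # A[i, j] = relative importance of criterion i over criterion j
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    # Approximate priority vector: normalize columns, then average rows.
    weights = (A / A.sum(axis=0)).mean(axis=1)

    # Consistency ratio against the random index for a 3x3 matrix (RI = 0.58).
    lambda_max = float((A @ weights / weights).mean())
    ci = (lambda_max - len(A)) / (len(A) - 1)
    cr = ci / 0.58
    print(weights.round(3), round(cr, 3))  # CR < 0.1 means acceptably consistent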

    Credibility in Online Social Networks: A Survey

    The importance of information credibility in society cannot be overstated, given that it is at the heart of all decision-making. Generally, more information is better; however, knowing the value of this information is essential for decision-making processes. Information credibility is a measure of the fitness of information for consumption. It can also be defined in terms of reliability, which denotes the probability that a data source will appear credible to its users. A challenge in this topic is the great deal of literature that has developed different credibility dimensions. In addition, information science dealing with online social networks has grown in complexity, attracting interest from researchers in information science, psychology, human–computer interaction, communication studies, and management studies, all of whom have studied the topic from different perspectives. This work provides an overall review of the credibility assessment literature over the period 2006–2017 as applied to the context of the microblogging platform Twitter. The known interpretations of credibility are examined, particularly as they relate to the Twitter environment. In addition, we investigate levels of credibility assessment features. We then discuss recent works, presenting a new taxonomy of credibility analysis and assessment techniques. Finally, we cross-reference the literature and suggest new topics for future studies of credibility assessment in a social media context.

    Classification of ankle joint movements based on surface electromyography signals for rehabilitation robot applications

    Electromyography (EMG)-based control is at the core of prostheses, orthoses, and other rehabilitation devices in recent research. Nonetheless, EMG is difficult to use as a control signal given its complex nature. To overcome this problem, researchers employ pattern recognition techniques. EMG pattern recognition mainly involves four stages: signal detection and preprocessing, feature extraction, dimensionality reduction, and classification. In particular, the success of any pattern recognition technique depends on the feature extraction stage. In this study, a modified time-domain feature set, the logarithmically transformed time-domain (LTD) features, was evaluated and compared with the traditional time-domain feature set (TTD). Three classifiers were employed to assess the two feature sets: linear discriminant analysis (LDA), k-nearest neighbors, and Naïve Bayes. The results indicate the superiority of the new LTD feature set over the conventional TTD features, with an average classification accuracy of 97.23%. In addition, the LDA classifier outperformed the other two classifiers considered in this study.
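
    As a rough illustration of this kind of pipeline (the paper's exact feature set, windowing, and preprocessing may differ), the sketch below computes a few classic time-domain EMG features, applies a logarithmic transform in the spirit of the LTD idea, and trains an LDA classifier on synthetic windows. All data here are randomly generated placeholders.

    # Sketch only: classic time-domain EMG features, a log-transformed
    # variant (the LTD idea), and an LDA classifier via scikit-learn.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def td_features(window):
        mav = np.mean(np.abs(window))               # mean absolute value
        wl = np.sum(np.abs(np.diff(window)))        # waveform length
        zc = np.sum(np.diff(np.sign(window)) != 0)  # zero crossings
        return np.array([mav, wl, zc], dtype=float)

    def ltd_features(window, eps=1e-8):
        # logarithmic transform of the time-domain features (eps avoids log(0))
        return np.log(td_features(window) + eps)

    # Hypothetical data: 200 windows of 256 EMG samples, 4 movement classes.
    rng = np.random.default_rng(0)
    windows = rng.standard_normal((200, 256))
    labels = rng.integers(0, 4, size=200)

    X = np.vstack([ltd_features(w) for w in windows])
    clf = LinearDiscriminantAnalysis().fit(X, labels)
    print(clf.score(X, labels))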

    Leveraging BERT Language Model for Arabic Long Document Classification

    Given the number of Arabic speakers worldwide and the notably large amount of web content today in fields such as law, medicine, and news, documents of considerable length are produced regularly. Classifying these documents with traditional learning models is often impractical, since the extended length of the documents increases computational requirements to an unsustainable level. Thus, it is necessary to customize these models specifically for long textual documents. In this paper, we propose two simple but effective models to classify long Arabic documents. We also fine-tune two different models, namely Longformer and RoBERT, for the same task and compare their results to our models. Both of our models outperform Longformer and RoBERT on this task over two different datasets.
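
    The abstract does not describe the two proposed models, so the snippet below shows only one common recipe for handling documents longer than BERT's 512-token limit: split the text into chunks, encode each chunk, and pool the chunk [CLS] vectors into a single document embedding for a downstream classifier. The checkpoint name (aubmindlab/bert-base-arabertv2) is an assumed Arabic BERT model, not necessarily the one used in the paper.

    # Sketch of a chunk-and-pool approach for long documents (not the paper's models).
    import torch
    from transformers import AutoTokenizer, AutoModel

    MODEL = "aubmindlab/bert-base-arabertv2"  # assumed checkpoint; swap as needed
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    encoder = AutoModel.from_pretrained(MODEL)

    def embed_long_document(text, chunk_tokens=510):
        ids = tokenizer(text, add_special_tokens=False)["input_ids"]
        cls_vectors = []
        with torch.no_grad():
            for i in range(0, len(ids), chunk_tokens):
                # wrap each chunk with [CLS] ... [SEP] and encode it separately
                chunk = [tokenizer.cls_token_id] + ids[i:i + chunk_tokens] + [tokenizer.sep_token_id]
                out = encoder(input_ids=torch.tensor([chunk])).last_hidden_state[:, 0]
                cls_vectors.append(out)
        # mean-pool chunk [CLS] vectors into one document embedding
        return torch.cat(cls_vectors).mean(dim=0)

    # The pooled vector can then be fed to any lightweight classification head.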

    Identification services for online social networks (OSNs) extended abstract

    Online Social Networks (OSNs) have dramatically changed how users connect, communicate, share content, and exchange goods and services. However, despite all the benefits and flexibility that OSNs provide, their users have become increasingly reliant on online identities, often with no means of knowing who is really behind an online profile. Indeed, to facilitate adoption and encourage people to join, identities in OSNs are very loose, in that little more than an email address is required to create an account and its related profile. The problem of fake accounts and identity-related attacks in OSNs has therefore attracted considerable interest from the research community and resulted in several proposals that mainly aim to detect malicious nodes following identified and formalized attack patterns. Without denying the importance of formalizing Sybil attacks and suggesting solutions for their detection, in this extended abstract we also consider the issue of identity validation from a user perspective, briefly discussing research proposals that aim to empower users with tools that help them assess the validity of the online accounts they interact with.

    A Time-Series-Based New Behavior Trace Model for Crowd Workers That Ensures Quality Annotation

    Crowdsourcing is a new mode of value creation in which organizations leverage numerous Internet users to accomplish tasks. However, because these workers have different backgrounds and intentions, crowdsourcing suffers from quality concerns. In the literature, tracing the behavior of workers is preferred over other methodologies such as consensus methods and gold-standard approaches. This paper proposes two novel models based on workers' behavior for task classification; both are novel in exploiting time-series features and characteristics. The first model uses multiple time-series features with a machine learning classifier. The second model converts the time series into images using the recurrence characteristic and applies a convolutional neural network classifier. The proposed models surpass the current state-of-the-art baselines in terms of performance: our feature-based model achieved 83.8% accuracy, whereas our convolutional neural network model achieved 76.6%.
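
    To make the second model's time-series-to-image step concrete, the sketch below builds a simple recurrence plot: pixel (i, j) marks whether the trace values at times i and j are close, yielding a binary image that a CNN could consume. The threshold, normalization, and the synthetic behavior trace are illustrative assumptions, not the paper's settings.

    # Illustrative recurrence-plot conversion of a behavior trace into an image.
    import numpy as np

    def recurrence_plot(series, threshold=0.1):
        x = np.asarray(series, dtype=float)
        x = (x - x.min()) / (x.max() - x.min() + 1e-12)  # normalize to [0, 1]
        dist = np.abs(x[:, None] - x[None, :])           # pairwise distances
        return (dist <= threshold).astype(np.uint8)      # binary recurrence image

    trace = np.sin(np.linspace(0, 6 * np.pi, 128))       # stand-in behavior trace
    image = recurrence_plot(trace)
    print(image.shape)  # (128, 128) image, ready as a CNN input channel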

    Healthcare knowledge graph construction: A systematic review of the state-of-the-art, open issues, and opportunities

    The incorporation of data analytics into the healthcare industry has made significant progress, driven by the demand for efficient and effective big data analytics solutions. Knowledge graphs (KGs) have proven their utility in this arena and are rooted in a number of healthcare applications, furnishing better data representation and knowledge inference. However, owing in part to the lack of a representative taxonomy of KG construction, several existing approaches in this domain are inadequate. This paper is the first to provide a comprehensive taxonomy and a bird's-eye view of healthcare KG construction. Additionally, a thorough examination of current state-of-the-art techniques drawn from academic works relevant to various healthcare contexts is carried out. These techniques are critically evaluated in terms of the methods used for knowledge extraction, the types of knowledge bases and sources, and the evaluation protocols they incorporate. Finally, several research findings and open issues in the literature are reported and discussed, opening horizons for future research in this vibrant area.