Predicting Fraud Apps Using Hybrid Learning Approach: A Survey
Nearly every person on the planet is a mobile phone user, and increasingly a smartphone user running Android applications. This popularity has driven rapid growth in mobile technology. At the same time, in data mining, extracting the required information from a particular application is very difficult, and combining these two tasks, detecting ranking fraud in the Android market and extracting the information needed to do so, is harder still. The mobile app market has grown at massive speed in recent years: as of March 2017, there were nearly 2.8 million apps on Google Play and 2.2 million on the Apple App Store. In addition, over 400,000 independent app developers are all competing for the attention of the same potential customers. Google Play alone saw 128,000 new business apps in 2014, and the mobile gaming category alone has competition to the tune of almost 300,000 apps. The usual approach to fraud detection is to examine only the top-ranked applications, roughly the top 30 to 40, or those that have recently entered the high-ranked lists; this, however, does not scale to the thousands of applications added every day. A wider examination is therefore needed, applying some procedure to every application to judge its ranking. Detecting ranking fraud for mobile applications requires a flawless, fraud-free result that ranks applications correctly, which in practice means actively searching for fraudulent apps. Fraudsters inflate an app's rank using methods such as human "water armies" and bot farms, downloading the application from many different devices and posting fake ratings and reviews. The approach is therefore to extract critical data about a particular application, such as its reviews (comments) and other metadata, and to mine it with algorithms that identify fakeness in the application's rank.
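One signal the abstract above alludes to, apps that are pushed into the high-ranked lists abnormally fast, can be illustrated with a minimal sketch. This is a hypothetical heuristic for exposition, not an algorithm from the surveyed systems; the function name, the rank-500 baseline, and the sample rank histories are all assumptions.

```python
# Illustrative sketch (not the surveyed systems' actual method):
# flag apps whose daily rank improves abnormally fast, one signal a
# ranking-fraud detector might combine with review and rating mining.

def suspicious_rank_jump(rank_history, top_k=40, max_days=3):
    """Return True if the app jumps into the top_k within max_days
    of appearing far outside it (here: below rank 500)."""
    for i, rank in enumerate(rank_history):
        if rank > 500:
            window = rank_history[i + 1 : i + 1 + max_days]
            if any(r <= top_k for r in window):
                return True
    return False

# Hypothetical daily ranks for two apps (lower = better).
organic = [480, 350, 260, 190, 140, 95, 60]   # steady climb
boosted = [620, 35, 22, 18, 15, 12, 10]       # overnight jump

print(suspicious_rank_jump(organic))  # False: gradual, plausible growth
print(suspicious_rank_jump(boosted))  # True: abrupt jump into the top list
```

A real detector would combine such rank evidence with rating- and review-based evidence, since any single signal is easy to evade.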
On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection
Humans are the final decision makers in critical tasks that involve ethical
and legal concerns, ranging from recidivism prediction, to medical diagnosis,
to fighting against fake news. Although machine learning models can sometimes
achieve impressive performance in these tasks, these tasks are not amenable to
full automation. To realize the potential of machine learning for improving
human decisions, it is important to understand how assistance from machine
learning models affects human performance and human agency.
In this paper, we use deception detection as a testbed and investigate how we
can harness explanations and predictions of machine learning models to improve
human performance while retaining human agency. We propose a spectrum between
full human agency and full automation, and develop varying levels of machine
assistance along the spectrum that gradually increase the influence of machine
predictions. We find that without showing predicted labels, explanations alone
slightly improve human performance in the end task. In comparison, human
performance is greatly improved by showing predicted labels (>20% relative
improvement) and can be further improved by explicitly suggesting strong
machine performance. Interestingly, when predicted labels are shown,
explanations of machine predictions induce a similar level of accuracy as an
explicit statement of strong machine performance. Our results demonstrate a
tradeoff between human performance and human agency and show that explanations
of machine predictions can moderate this tradeoff.
Comment: 17 pages, 19 figures, in Proceedings of ACM FAT* 2019; dataset & demo available at https://deception.machineintheloop.co
Identifying and Profiling Radical Reviewer Collectives in Digital Product Reviews
Ecommerce sites are flooded with spam reviews and opinions. People are hired to smear or promote particular brands by writing extremely negative or positive reviews, and they usually operate in groups. Various studies have been conducted to identify and screen such spam groups; however, a knowledge gap remains when it comes to detecting groups that target a brand rather than individual products. In this study, we conduct a systematic review of recent work on detecting extremist reviewer groups. Most researchers extract these groups with data-mining approaches that cluster users over brand similarities. This study surveys the detection models evaluated on various reviewer datasets and presents the conceptual models and algorithms proposed in previous studies to compute the spamming level of extremist reviewers on ecommerce sites and online marketplaces.
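The core intuition in the abstract above, that extremist reviewers operate in groups over the same brands, can be sketched with a toy co-review check. This is an illustrative proxy, not a model from the surveyed papers; the function name, the extreme-rating definition (1 or 5 stars), and the sample data are all assumptions.

```python
# Illustrative sketch (not a surveyed model): pair up reviewers who
# repeatedly give extreme ratings (1 or 5) to the same brands, a
# simple proxy for the "extremist reviewer group" signal.

from itertools import combinations

def extreme_coreview_pairs(reviews, min_shared=2):
    """reviews: list of (reviewer, brand, rating) with ratings 1-5.
    Returns reviewer pairs who share >= min_shared extreme ratings
    on the same brands."""
    extreme = {}
    for reviewer, brand, rating in reviews:
        if rating in (1, 5):
            extreme.setdefault(reviewer, set()).add(brand)
    pairs = []
    for a, b in combinations(sorted(extreme), 2):
        if len(extreme[a] & extreme[b]) >= min_shared:
            pairs.append((a, b))
    return pairs

# Hypothetical review log: u1 and u2 trash the same two brands.
reviews = [
    ("u1", "BrandX", 1), ("u2", "BrandX", 1),
    ("u1", "BrandY", 1), ("u2", "BrandY", 1),
    ("u3", "BrandX", 4),  # moderate rating, not counted as extreme
]
print(extreme_coreview_pairs(reviews))  # [('u1', 'u2')]
```

The studies summarized above cluster such co-reviewing users at the brand level rather than examining raw pairs, but the underlying similarity signal is the same.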