Understanding the Detection of View Fraud in Video Content Portals
While substantial effort has been devoted to understanding fraudulent activity
in traditional online advertising (search and banner), more recent forms such
as video ads have received little attention. Understanding and identifying
fraudulent activity (i.e., fake views) in video ads is complicated for
advertisers, as they rely exclusively on the detection mechanisms deployed by
video hosting portals. In this context, independent tools able to monitor and
audit the fidelity of these systems are missing today, yet they are needed by
both industry and regulators.
In this paper we present a first set of tools to serve this purpose. Using
our tools, we evaluate the performance of the audit systems of five major
online video portals. Our results reveal that YouTube's detection system
significantly outperforms all the others. Despite this, a systematic evaluation
indicates that it may still be susceptible to simple attacks. Furthermore, we
find that YouTube penalizes its videos' public and monetized view counters
differently, discounting views more aggressively from the public counter. This
means that views identified as fake and discounted from the public view counter
are still monetized. We speculate that even though YouTube makes a considerable
effort to compensate users after an attack is discovered, this practice places
the burden of the risk on the advertisers, who pay to get their ads displayed.

Comment: To appear in WWW 2016, Montréal, Québec, Canada. Please cite the conference version of this paper.
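The paper's counter-audit idea can be illustrated with a minimal sketch: inject a known number of fake views against a test video, then compare the counter's movement to a pre-injection baseline to estimate how many views the portal's detection system discounted. The function name and inputs here are illustrative, not the authors' actual tooling, and the toy model ignores organic views arriving during the experiment.

```python
def detection_rate(issued_fake_views, counter_after, counter_before):
    """Estimate the fraction of issued fake views the portal discounted.

    counter_before / counter_after are the public view counts observed
    immediately around the fake-view injection; any organic traffic in
    between would bias this estimate and is ignored in this sketch.
    """
    credited = counter_after - counter_before
    return 1 - credited / issued_fake_views

# e.g. 100 fake views issued, the public counter rose only by 40:
rate = detection_rate(100, 540, 500)  # 0.6 of the fake views discounted
```

Running the same experiment against the public and the monetized counters would expose the kind of discrepancy the paper reports.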
Not-a-Bot (NAB): Improving Service Availability in the Face of Botnet Attacks
A large fraction of email spam, distributed denial-of-service (DDoS) attacks, and click-fraud on web advertisements are caused by traffic sent from compromised machines that form botnets. This paper posits that by identifying human-generated traffic as such, one can service it with improved reliability or higher priority, mitigating the effects of botnet attacks.
The key challenge is to identify human-generated traffic in the absence of strong unique identities. We develop NAB ("Not-A-Bot"), a system to approximately identify and certify human-generated activity. NAB uses a small trusted software component called an attester, which runs on the client machine with an untrusted OS and applications. The attester tags each request with an attestation if the request is made within a small amount of time of legitimate keyboard or mouse activity. The remote entity serving the request sends the request and attestation to a verifier, which checks the attestation and implements an application-specific policy for attested requests.
Our implementation of the attester is within the Xen hypervisor. By analyzing traces of keyboard and mouse activity from 328 users at Intel, together with adversarial traces of spam, DDoS, and click-fraud activity, we estimate that NAB reduces the amount of spam that currently passes through a tuned spam filter by more than 92%, while not flagging any legitimate email as spam. NAB delivers similar benefits to legitimate requests under DDoS and click-fraud attacks.
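The attester's core check described above can be sketched as follows. The key handling, threshold, and function names are assumptions for illustration only; NAB's actual attester runs as a trusted component under the hypervisor rather than as ordinary application code.

```python
import hmac
import hashlib

ATTESTER_KEY = b"demo-key"  # illustrative; the real attester's key is protected from the untrusted OS
DELTA = 1.0  # assumed max seconds between human input and the request

def attest(request: bytes, last_input_time: float, now: float):
    """Return an attestation only if the request closely follows keyboard/mouse activity."""
    if now - last_input_time > DELTA:
        return None  # no recent human input: refuse to attest
    return hmac.new(ATTESTER_KEY, request, hashlib.sha256).hexdigest()

def verify(request: bytes, attestation) -> bool:
    """Verifier side: recompute the MAC and apply a policy to attested requests."""
    if attestation is None:
        return False
    expected = hmac.new(ATTESTER_KEY, request, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation)
```

Binding the attestation to the request contents (here via an HMAC) matters: it prevents a bot from replaying one human-triggered attestation across many forged requests.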
Covert Botnet Implementation and Defense Against Covert Botnets
The advent of the Internet and its benevolent use has benefited mankind in private and business use alike. However, like any other technology, the Internet is often used for malevolent purposes. One such malevolent purpose is to attack computers using botnets. Botnets are stealthy, and the victims are typically unaware of the malicious activities and the resultant havoc they can cause. Computer security experts seek to combat the botnet menace. However, attackers come up with new botnet designs that exploit the weaknesses in existing defense mechanisms and, thus, continue to evade detection. Therefore, it is necessary to analyze the weaknesses of existing defense mechanisms to find the lacunae in them and to design new models of bot infection before the attackers do so. It is also necessary to validate the analysis and the design of such a model by implementing the attack and fine-tuning the design. This thesis validates the weaknesses found in existing defense mechanisms against botnets by implementing a new model of botnet and carrying out experiments on it. To merely analyze and present the weaknesses of a defense would open the door for attackers and make their job easier. Thus, creating a defense mechanism against the new attack is equally important. This thesis proposes a design against the new model of bot infection and also implements the design. Experiments were conducted to validate and fine-tune the design and eliminate flaws in the new defense mechanism.
Real-Time Ad Click Fraud Detection
With the increase in Internet usage, it is now considered a very important platform for advertising and marketing. Digital marketing has become very important to the economy: some of the major Internet services available publicly to users are free, thanks to digital advertising. It has also allowed the publisher ecosystem to flourish, ensuring significant monetary incentives for creating quality public content and helping to usher in the information age. Digital advertising, however, comes with its own set of challenges. One of the biggest challenges is ad fraud. There is a proliferation of malicious parties and software seeking to undermine the ecosystem and cause monetary harm to digital advertisers and ad networks. Pay-per-click advertising, where each click is highly valuable, is especially susceptible to click fraud. This leads advertisers to lose money and ad networks to lose their credibility, hurting the overall ecosystem. Much of the fraud detection is done in offline data pipelines, which compute fraud/non-fraud labels on clicks long after they happened. This is because click fraud detection usually depends on complex machine learning models using a large number of features on huge datasets, which can be very costly to train and query. In this thesis, the existence of low-cost ad click fraud classifiers with reasonable precision and recall is hypothesized. A set of simple heuristics as well as basic machine learning models (with associated simplified feature spaces) are compared with complex machine learning models on performance and classification accuracy. Through research and experimentation, a performant classifier is discovered which can be deployed for real-time fraud detection.
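The low-cost heuristic classifiers the thesis hypothesizes might look like the following sketch. The feature names and thresholds are invented for illustration and are not the thesis's actual rules; the point is that each check is cheap enough to evaluate at click time, unlike a heavyweight offline model.

```python
def is_fraudulent_click(click: dict) -> bool:
    """Rule-based click classifier; all thresholds are illustrative only."""
    if click["clicks_from_ip_last_minute"] > 10:  # burst of clicks from one IP
        return True
    if click["seconds_on_landing_page"] < 1:      # instant bounce, no dwell time
        return True
    if not click["user_agent"]:                   # missing User-Agent string
        return True
    return False
```

Heuristics like these trade recall for latency: they only need a handful of cheap counters per click, so they can run inline in the ad-serving path rather than in an offline labeling pipeline.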
The White-hat Bot: A Novel Botnet Defense Strategy
Botnets are a threat to computer systems and users around the world. Botmasters can range from annoying spam email propagators to nefarious criminals. These criminals attempt to take down networks or web servers through distributed denial-of-service attacks, to steal corporate secrets, or to launder money from individuals or corporations. As the number and severity of successful botnet attacks rise, computer security experts need to develop better early-detection and removal techniques to protect computer networks and individual computer users from these very real threats. I will define botnets and describe some of their common purposes and current uses. Next, I will reveal some of the techniques currently used by software security professionals to combat this problem. Finally, I will provide a novel defensive strategy, the White-hat Bot (WHB), with documented experiments and results that may prove useful in the defense against botnets in the future.