31 research outputs found

    Keyword Targeting Optimization in Sponsored Search Advertising: Combining Selection and Matching

    In sponsored search advertising (SSA), advertisers need to select keywords and simultaneously determine matching types for the selected keywords, i.e., keyword targeting. An optimal keyword targeting strategy guarantees reaching the right population effectively. This paper addresses the keyword targeting problem, a challenging task because of the incomplete information in historical advertising performance indices and the high uncertainty of SSA environments. First, we construct a data distribution estimation model and apply a Markov chain Monte Carlo (MCMC) method to infer unobserved indices (i.e., impressions and click-through rate) over three keyword matching types (i.e., broad, phrase, and exact). Second, we formulate a stochastic keyword targeting model (BB-KSM) that combines keyword selection and keyword matching to maximize expected profit under a chance constraint on the budget, and develop a branch-and-bound algorithm incorporating a stochastic simulation process for this model. Finally, using a real-world dataset collected from field reports and logs of past SSA campaigns, we conduct computational experiments to evaluate the performance of our keyword targeting strategy. Experimental results show that (a) BB-KSM outperforms seven baselines in terms of profit; (b) BB-KSM's advantage grows as the budget increases, especially in situations with more keywords and keyword combinations; and (c) the proposed data distribution estimation approach effectively addresses the problem of incomplete performance indices over the three matching types and in turn significantly improves keyword targeting decisions. This research makes important contributions to the SSA literature, and the results offer critical insights into keyword management for SSA advertisers. Comment: 38 pages, 4 figures, 5 tables.
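The MCMC inference step is described only at a high level in this abstract. As a hedged illustration, a random-walk Metropolis-Hastings sampler for a single keyword's click-through rate (uniform prior, binomial likelihood) could look like the sketch below; the function name and all parameter values are hypothetical, not taken from the paper.

```python
import math
import random

def ctr_posterior_samples(clicks, impressions, n_samples=5000, seed=0):
    """Random-walk Metropolis-Hastings sampling from the Beta(clicks+1,
    impressions-clicks+1) posterior of a click-through rate under a
    uniform prior. Illustrative only, not the paper's actual model."""
    rng = random.Random(seed)
    a, b = clicks + 1, impressions - clicks + 1

    def log_post(p):
        # Unnormalized log posterior density of the CTR p
        if not 0.0 < p < 1.0:
            return float("-inf")
        return (a - 1) * math.log(p) + (b - 1) * math.log(1.0 - p)

    p, samples = 0.5, []
    for _ in range(n_samples):
        proposal = p + rng.gauss(0.0, 0.05)  # symmetric random-walk proposal
        if math.log(rng.random()) < log_post(proposal) - log_post(p):
            p = proposal  # accept the move
        samples.append(p)
    return samples

# Posterior-mean CTR estimate after discarding a burn-in period
samples = ctr_posterior_samples(clicks=30, impressions=1000)
ctr_estimate = sum(samples[1000:]) / len(samples[1000:])
```

For this conjugate toy case the posterior is available in closed form; MCMC becomes genuinely necessary once, as in the paper, the data distribution model couples impressions and CTR across the three matching types.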

    Improving customer generation by analysing website visitor behaviour

    This dissertation describes the creation of a new integrated Information Technology (IT) system that assisted in the collection of data about the behaviour of website visitors as well as sales and marketing data for those visitors who turned into customers. A key contribution to knowledge was the creation of a method to predict the outcome of visits to a website from visitors’ browsing behaviour. A new Online Tracking Module (OTM) was created that monitored visitors’ behaviour while they browsed websites. When a visitor converted into a customer, customer and marketing data as well as sales activity were saved in a new Customer Relationship Management (CRM) system that was implemented in this research. The research focused on service websites. The goal of these websites was to promote products and services online and turn enquiries into offline sales. The challenge faced by these websites was to convince as many visitors as possible to enquire. Most websites relied on Search Engine Optimisation (SEO) and Pay Per Click (PPC) advertising for traffic generation. This research used PPC advertising to generate traffic. An important aspect of PPC advertising was landing page optimisation, which aims to increase the number of visitors to a website who complete a specific action on the website. In the case of the websites investigated in this research, the action consisted of completing and sending an enquiry form from the websites. The research looked for meaningful commonalities in the data collected by the CRM system and the OTM and combined these with feedback from the collaborating company’s sales team to create two personas for website visitors who had enquired. Techniques for improving landing pages were identified and these led to changes to landing pages. Some of these changes were targeted at a particular visitor persona.
The effect of changes made to a landing page was measured by comparing its conversion rate and bounce rate before and after the changes. Behavioural data collected by the OTM was then analysed using a data mining engine to find models that could predict whether a user would convert based on their browsing behaviour. Models were found that could predict the outcome of a visit to a service website. EThOS - Electronic Theses Online Service, United Kingdom.

    Click fraud: how to spot it, how to stop it?

    Online search advertising is currently the greatest source of revenue for many Internet giants such as Google™, Yahoo!™, and Bing™. The increased number of specialized websites and modern profiling techniques have contributed to an explosion in ad brokers' income from online advertising. The single biggest threat to this growth, however, is click fraud. Trained botnets and even individuals are hired by click-fraud specialists to maximize the revenue certain users earn from the ads they publish on their websites, or to launch attacks between competing businesses. Most academics and consultants who study online advertising estimate that 15% to 35% of ads in pay-per-click (PPC) online advertising systems are not authentic. In the first two quarters of 2010, US marketers alone spent $5.7 billion on PPC ads, which account for between 45 and 50 percent of all online ad spending; on average, about $1.5 billion is wasted due to click fraud. These fraudulent clicks are believed to be initiated by users in poor countries, or by botnets, trained to click on specific ads. For example, according to a 2010 study from Information Warfare Monitor, the operators of Koobface, a program that installed malicious software to participate in click fraud, made over $2 million in just over a year. The process of making such illegitimate clicks to generate revenue is called click fraud. Search engines claim they filter out most questionable clicks and either do not charge for them or reimburse advertisers who have been wrongly billed. However, this is a hard task, despite claims that brokers' efforts are satisfactory. In the simplest scenario, a publisher continuously clicks on the ads displayed on his own website in order to make revenue. In a more complicated scenario,
a travel agent may hire a large, globally distributed botnet to click on its competitor's ads, hence depleting their daily budget. We analyzed these different types of click fraud methods and proposed new methodologies to detect and prevent them in real time. While traditional commercial approaches detect only some specific types of click fraud, the Collaborative Click Fraud Detection and Prevention (CCFDP) system, an architecture that we implemented based on the proposed methodologies, can detect and prevent all major types of click fraud. The proposed solution analyzes detailed user activity on both the server side and the client side collaboratively to better characterize the intention of the click. Data fusion techniques are developed to combine evidence from several data mining models and to obtain a better estimate of the quality of the click traffic. Our ideas are tested through the development of the CCFDP system. Experimental results show that the CCFDP system is better than an existing commercial click fraud solution in three major aspects: 1) it detects more click fraud, especially clicks generated by software; 2) it provides prevention ability; 3) it proposes the concept of a click quality score for click quality estimation. In the initial version of CCFDP, we analyzed the performance of the click fraud detection and prediction model using a rule-based algorithm, similar to most existing systems. We assigned a quality score to each click instead of classifying the click as fraudulent or genuine, because it is hard to get solid evidence of click fraud based only on the data collected, and it is difficult to determine the real intention of the users who make the clicks. Results from the initial version revealed that the diversity of click fraud attack types makes it hard for a single countermeasure to prevent click fraud.
Therefore, it is important to be able to combine multiple measures capable of effective protection from click fraud. In the improved version of CCFDP, we therefore provide the traffic quality score as a combination of evidence from several data mining algorithms. We tested the system with data from an actual ad campaign in 2007 and 2008 and compared the results with Google AdWords reports for the same campaign. Results show that a higher percentage of click fraud is present even with the most popular search engine. The multiple-model CCFDP always estimated less valid traffic compared to Google; sometimes the difference is as high as 53%. Fast and efficient duplicate detection is one of the most important requirements in any click fraud solution. Duplicate detection algorithms usually run in real time, so solution providers should use data structures that can be updated in real time while keeping space requirements to a minimum. In this dissertation, we also addressed the problem of detecting duplicate clicks in pay-per-click streams. We proposed a simple data structure, the Temporal Stateful Bloom Filter (TSBF), an extension of the regular Bloom filter and counting Bloom filter in which the bit vector is replaced with a status vector. Duplicate detection results of the TSBF method are compared with the Buffering, FPBuffering, and CBF methods. The false positive rate of TSBF is less than 1%, and it has no false negatives. The space requirement of TSBF is the smallest among these solutions. Even though Buffering has neither false positives nor false negatives, its space requirement increases exponentially with the size of the stream. When the false positive rate of FPBuffering is set to 1%, its false negative rate jumps to around 5%, which will not be tolerated by most streaming data applications. We also compared the TSBF results with CBF.
TSBF uses only half the space of a standard CBF, or less, with the same false positive probability. One of the biggest successes of CCFDP is the discovery of a new mercantile click bot, the Smart ClickBot. We presented a Bayesian approach for detecting Smart ClickBot-type clicks. The system combines evidence extracted from web server sessions to determine the final class of each click. Some of this evidence can be used alone, while some can be used in combination with other features for click bot detection. During training and testing we also addressed the class imbalance problem. Our best classifier shows a recall of 94% and a precision of 89%, with an F1 measure of 92%. The high accuracy of our system demonstrates the effectiveness of the proposed methodology. Since the Smart ClickBot is a sophisticated click bot that manipulates every possible parameter to go undetected, the techniques discussed here can lead to the detection of other types of software bots too. Despite the enormous capabilities of modern machine learning and data mining techniques in modeling complicated problems, most available click fraud detection systems are rule-based. Click fraud solution providers keep their rules as a secret weapon and bargain with others to prove their superiority. We proposed a validation framework that acquires another model of the click data that is not rule dependent, a model that learns the inherent statistical regularities of the data; the output of both models is then compared. Due to the uniqueness of the CCFDP system architecture, it is better than current commercial solutions and search engine/ISP solutions. The system protects pay-per-click advertisers from click fraud and improves their return on investment (ROI). The system can also provide an arbitration mechanism for advertisers and PPC publishers whenever a click fraud dispute arises.
Advertisers can gain confidence in PPC advertising by having a channel to dispute traffic quality with big search engine publishers. The results of this system will bolster the Internet economy by eliminating a shortcoming of the PPC business model, and general consumers will gain confidence in Internet business models as fraudulent activities, which are numerous in the current virtual Internet world, are reduced.
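The abstract describes the TSBF only in outline: a Bloom filter whose bit vector becomes a status vector. A minimal illustrative sketch, assuming the status vector holds last-seen timestamps and a click counts as a duplicate when all of its cells were touched within a time window, might look like this (the class layout and parameters are assumptions, not the dissertation's actual design):

```python
import hashlib
import time

class TemporalStatefulBloomFilter:
    """Bloom-filter variant whose bit vector is replaced by a status
    vector of last-seen timestamps. A key is reported as a duplicate
    only if every cell it hashes to was touched within the window."""

    def __init__(self, size=1024, hashes=4, window=60.0):
        self.size, self.hashes, self.window = size, hashes, window
        self.slots = [0.0] * size  # last-seen timestamp per cell

    def _positions(self, key):
        # Derive k cell indices from salted SHA-1 digests of the key
        for i in range(self.hashes):
            digest = hashlib.sha1(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def seen(self, key, now=None):
        """Return True if this key looks like a recent duplicate,
        then refresh its cells' timestamps."""
        now = time.time() if now is None else now
        positions = list(self._positions(key))
        duplicate = all(now - self.slots[p] <= self.window for p in positions)
        for p in positions:
            self.slots[p] = now
        return duplicate

tsbf = TemporalStatefulBloomFilter()
first = tsbf.seen("ip1:ad7", now=100.0)   # False: first sighting
repeat = tsbf.seen("ip1:ad7", now=130.0)  # True: repeat inside the window
```

As with any Bloom filter, a real deployment would size the vector and hash count from the expected click volume and the target false positive rate.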

    Essentials of Business Analytics


    Drei Studien zu Analyse und Management von Online-Konsumentenverhalten (Three Studies on the Analysis and Management of Online Consumer Behavior)

    Over the last two decades, the Internet has fundamentally changed the ways firms and consumers interact. The ongoing evolution of the Internet-enabled market environment entails new challenges for marketing research and practice, including the emergence of innovative business models, a proliferation of marketing channels, and an unknown wealth of data. This dissertation addresses these issues in three individual essays. Study 1 focuses on business models offering services for free, which have become increasingly prevalent in the online sector. Offering services for free raises new questions for service providers as well as marketing researchers: How do customers of free e-services contribute value without paying? What are the nature and dynamics of nonmonetary value contributions by nonpaying customers? Based on a literature review and in-depth interviews with senior executives of free e-service providers, Study 1 presents a comprehensive overview of nonmonetary value contributions in the free e-service sector, including not only word of mouth, co-production, and network effects but also attention and data as two new dimensions that have been disregarded in marketing research. By putting their findings in the context of the existing literature on customer value and customer engagement, the authors not only shed light on the complex processes of value creation in the emerging e-service industry but also advance marketing and service research in general. Studies 2 and 3 investigate the analysis of online multichannel consumer behavior in times of big data. Firms can choose from a plethora of channels to reach consumers on the Internet, such that consumers often use a number of different channels along the customer journey. While the unprecedented availability of individual-level data enables new insights into multichannel consumer behavior, it also makes high demands on the efficiency and scalability of research approaches.
Study 2 addresses the challenge of attributing credit to different channels along the customer journey. Because advertisers often do not know to what degree each channel actually contributes to their marketing success, this attribution challenge is of great managerial interest, yet academic approaches to it have not found wide application in practice. To increase practical acceptance, Study 2 introduces a graph-based framework to analyze multichannel online customer path data as first- and higher-order Markov walks. According to a comprehensive set of criteria for attribution models, embracing both scientific rigor and practical applicability, four model variations are evaluated on four large real-world data sets from different industries. Results indicate substantial differences from existing heuristics such as “last click wins” and demonstrate that insights into channel effectiveness cannot be generalized from single data sets. The proposed framework supports practitioners by facilitating objective budget allocation and improving team decisions, and allows for future applications such as real-time bidding. Study 3 investigates how channel usage along the customer journey facilitates inferences on underlying purchase decision processes. To handle increasing complexity and sparse data in online multichannel environments, the author presents a new categorization of online channels and tests the approach on two large clickstream data sets using a proportional hazard model with time-varying covariates. By categorizing channels along the dimensions of contact origin and branded versus generic usage, Study 3 finds meaningful interaction effects between contacts across channel types, corresponding to the theory of choice sets. Including interactions based on the proposed categorization significantly improves model fit and outperforms alternative specifications.
The results will help retailers gain a better understanding of customers’ decision-making processes in an online multichannel environment and help them develop individualized targeting approaches for real-time bidding. Using a variety of methods including qualitative interviews, Markov graphs, and survival models, this dissertation not only advances knowledge on analyzing and managing online consumer behavior but also adds new perspectives to marketing and service research in general.
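Study 2's graph-based framework is summarized here only at a high level. For illustration, first-order Markov "removal effect" attribution, the standard technique behind this class of models, can be sketched as follows; the channel names and example paths are hypothetical, not from the study's data sets.

```python
def build_transitions(paths):
    """Estimate first-order transition probabilities from customer paths.
    Each path is a (list_of_channels, converted_bool) pair."""
    counts = {}
    for channels, converted in paths:
        seq = ["START"] + channels + (["CONV"] if converted else ["NULL"])
        for a, b in zip(seq, seq[1:]):
            nxt = counts.setdefault(a, {})
            nxt[b] = nxt.get(b, 0) + 1
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}

def conversion_prob(trans, removed=None):
    """Absorption probability into CONV from START, with an optional
    channel removed (transitions into it are redirected to NULL)."""
    v = {s: 0.0 for s in trans}
    for _ in range(100):  # fixed-point iteration on the absorption equations
        for s in trans:
            v[s] = sum(p * (1.0 if t == "CONV"
                            else 0.0 if t in ("NULL", removed)
                            else v.get(t, 0.0))
                       for t, p in trans[s].items())
    return v.get("START", 0.0)

def removal_effect_attribution(paths):
    """Credit each channel by the drop in conversion probability when it
    is removed from the graph, normalized to sum to one."""
    trans = build_transitions(paths)
    base = conversion_prob(trans)
    channels = {c for chs, _ in paths for c in chs}
    effect = {c: base - conversion_prob(trans, removed=c) for c in channels}
    total = sum(effect.values()) or 1.0
    return {c: e / total for c, e in effect.items()}

# Hypothetical customer journeys: (channel sequence, converted?)
paths = [(["search", "display"], True), (["search"], False),
         (["display"], True), (["display"], False)]
attribution = removal_effect_attribution(paths)
```

On these toy paths, "display" earns the larger share because every conversion passes through it, which is exactly the kind of difference from "last click wins" heuristics the study reports.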

    Civil Good - A Platform For Sustainable and Inclusive Online Discussion

    Civil Good is a website concept proposed by Alan Mandel with the goal of enabling safe, anonymous, productive, and civil discourse without the disruptive behavior and language common to much of the Internet. The goal of Civil Good is to improve the critical thinking and discussion skills of its users while combating the effects of political polarization and misinformation in society. This paper analyzes Mandel's proposed concept, providing additional research to either support or refute the various features proposed, and recommendations to simplify user interactions. It also examines topics mentioned only briefly or not discussed by Mandel, such as data protection methods, the psychology of Web browsing, marketing, operational costs, legal issues, monetization options, and mobile presence.

    Commercial communication in the digital age : information or disinformation?

    In today’s digital age, online and mobile advertising are of growing importance, with advertising no longer bound to the traditional media industry. Although the advertising industry still has broader access to the different measures and channels, users and consumers today have more possibilities to publish, get informed, or communicate – to “co-create” – and to reach a bigger audience. There is thus a good chance that users and consumers are better informed about the objectives and persuasive tricks of the advertising industry than ever before. At the same time, advertisers can inform about products and services without the limitations of time and place faced by traditional mass media. But will there really be a time when advertisers and consumers have equal power, or does tracking users online and offline lead to a situation where advertisers have more information about consumers than ever before? The volume discusses these questions and related issues.

    CORPORATE SOCIAL RESPONSIBILITY IN ROMANIA

    The purpose of this paper is to identify the main opportunities and limitations of corporate social responsibility (CSR). The survey was designed to cover the highest possible number of relevant CSR topics and give the issue a more complete perspective, providing a basis for further comprehension and deeper analyses of specific CSR areas. The conditions determining the success of CSR in Romania are defined in the paper on the basis of previously accumulated knowledge as well as the results of various research studies. This paper provides knowledge which may be useful in programs promoting CSR. Keywords: corporate social responsibility, supportive policies, Romania.

    First steps in the study of cyber-psycho-cognitive operations

    Dissertação (mestrado)—Universidade de Brasília, Instituto de Relações Internacionais, Programa de Pós-Graduação em Relações Internacionais, 2019. This is an analysis of the ICT-based mechanisms involved in the articulation of lifeworlds that are strategically oriented to foster, prevent, or undermine the development of the psycho-cognitive conditions adequate for the construction or sustainability of an authority’s or a political action’s rational legitimacy. While grounding the theory in a historical context, the application of Foucauldian “archeological” instruments to the study of the political narratives engendering and springing from “Russiagate” also served to validate the premised convergence and incorporation of common agenda-setting trends and practices typical of traditional psychological operations.
However, the effects of both the commercial availability of deep-learning ICTs and the cognition-based structuration afforded by their ubiquity and economic centrality set this “dispositif” apart, deserving a unique conceptualization and research framework. This study is a contribution to that endeavor.

    Measuring for privacy: From tracking to cloaking

    We rely on various types of online services to access information for different uses, and we often provide sensitive information during interactions with these services. These online services are of different types, e.g., commercial websites (banking, education, news, shopping, dating, social media) and essential websites (e.g., government), and they are available through websites as well as mobile apps. The growth of websites, mobile devices, and the apps that run on those devices has resulted in a proliferation of online services. This whole ecosystem of online services has created an environment where everyone using it is being tracked. Several past studies have performed privacy measurements to assess the prevalence of tracking in online services. Most of these studies used institutional (i.e., non-residential) resources for their measurements and lacked a global perspective. Tracking on online services, and its impact on privacy, may differ across locations. Therefore, to fill this gap, we perform a privacy measurement study of popular commercial websites using residential networks at various locations. Unlike commercial online services, there are different categories (e.g., government, hospital, religion) of essential online services where users do not expect to be tracked. Users of these essential online services often supply information of an extremely personal and sensitive nature (e.g., social insurance numbers, health information, prayer requests/confessions made to a religious minister) when interacting with those services. However, contrary to users’ expectations, these essential services include user tracking capabilities. We built frameworks to perform privacy measurements of these online services (including both websites and Android apps) of different types (i.e., government, hospital, and religious services in jurisdictions around the world).
The instrumented tracking metrics (i.e., stateless, stateful, session replaying) from the privacy measurements of these online services are then analyzed. Malicious sites (e.g., phishing) mimic online services to deceive users, causing them harm. We found that 80% of the analyzed malicious sites are cloaked and not blocked by search engine crawlers; sensitive information collected from users through these sites is therefore exposed. In addition, the underlying Internet-connected infrastructure (e.g., networked devices such as routers and modems) used by online users can suffer from security issues due to the non-use of TLS or the use of weak SSL/TLS certificates. Such security issues (e.g., spying on a CCTV camera) can compromise data integrity, confidentiality, and user privacy. Overall, we found that tracking on commercial websites differs based on the location of the corresponding residential users. We also observed widespread use of tracking by commercial trackers, and of session replay services that expose sensitive information, on essential online services. Sensitive information is also exposed due to vulnerabilities in online services (e.g., cross-site scripting). Furthermore, a significant proportion of malicious sites evade detection by security/search engine crawlers, which may leave such sites readily available to users. We also detected weaknesses in the TLS ecosystem of the Internet-connected infrastructure that supports these online services. These observations call for more research on the privacy of online services, as well as on information exposure from malicious online services, to understand the significance of the privacy issues and to adopt appropriate mitigation strategies.
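As a hedged illustration of the kind of weak-certificate check the abstract alludes to (the dissertation's actual metrics are not detailed here), two simple tests on a certificate dict in the format returned by Python's `ssl.SSLSocket.getpeercert()` could look like this; the function name and thresholds are assumptions for the sketch.

```python
import ssl
import time

def cert_weaknesses(cert, now=None):
    """Flag two common TLS certificate weaknesses given a parsed
    certificate dict in ssl.SSLSocket.getpeercert() format:
    an expired validity period and a self-signed issuer."""
    now = time.time() if now is None else now
    issues = []
    # notAfter is formatted like "Jan  5 09:34:43 2018 GMT";
    # ssl.cert_time_to_seconds converts it to epoch seconds.
    if ssl.cert_time_to_seconds(cert["notAfter"]) < now:
        issues.append("expired")
    # Identical issuer and subject indicates a self-signed certificate.
    if cert.get("issuer") == cert.get("subject"):
        issues.append("self-signed")
    return issues

# Synthetic example certificates (not real measurement data)
old_cert = {"notAfter": "Jan  5 09:34:43 2018 GMT",
            "issuer": ((("commonName", "example"),),),
            "subject": ((("commonName", "example"),),)}
good_cert = {"notAfter": "Jan  5 09:34:43 2037 GMT",
             "issuer": ((("commonName", "SomeCA"),),),
             "subject": ((("commonName", "example"),),)}
```

A measurement framework would gather these dicts at scale over live connections and aggregate the flags per jurisdiction and service category.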