2,224 research outputs found

    Tracking User Behavior with Google Analytics Events on an Academic Library Website

    Get PDF
    The primary purpose of an academic library website is to serve as a portal to library-acquired content. The navigational design of a library website affects users' ability to find and access that content. At Albertsons Library, the goal of the website's navigational design is to mirror user behavior on the website, helping users access information and articles from over 300 different library vendors. Because content is spread across so many vendors, tracking the navigational flow of user behavior with Google Analytics is difficult. Using the events feature in Google Analytics, the team responsible for web design was able to track user flow and to quantify how many users were actual “drop-offs” versus clicks into library resources. Decisions made after acquiring this data resulted in a website with a bounce rate of 10 percent or less and a reduced number of clicks required for users to access the library's content.
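
    To make the event-based approach concrete, here is a minimal Python sketch of how exported event data might be tallied to separate true drop-offs from clicks into vendor resources. The CSV schema, field names, and category value are assumptions for illustration, not Albertsons Library's actual configuration; the point is only that exits via tracked vendor links count as engagement rather than bounces.

```python
# Hypothetical sketch: separating true "drop-offs" from clicks into
# vendor-hosted resources, using an export of Google Analytics events.
# The field names ("event_category") and category value are assumed,
# not the library's actual schema.
import csv
from collections import Counter

VENDOR_CATEGORY = "outbound-resource"  # assumed category for vendor links

def summarize_sessions(path):
    """Count session exits that were resource clicks vs. true drop-offs."""
    outcomes = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["event_category"] == VENDOR_CATEGORY:
                outcomes["resource_click"] += 1
            else:
                outcomes["drop_off"] += 1
    total = sum(outcomes.values())
    # Exits via a tracked resource click are engagement, not bounces,
    # so only untracked exits contribute to the bounce rate.
    bounce_rate = outcomes["drop_off"] / total if total else 0.0
    return outcomes, bounce_rate
```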

    Web Mining for Web Personalization

    Get PDF
    Web personalization is the process of customizing a Web site to the needs of specific users, taking advantage of knowledge acquired from the analysis of users' navigational behavior (usage data) in correlation with other information collected in the Web context, namely structure, content, and user profile data. Due to the explosive growth of the Web, the domain of Web personalization has gained great momentum in both research and commercial areas. In this article we present a survey of the use of Web mining for Web personalization. More specifically, we introduce the modules that comprise a Web personalization system, emphasizing the Web usage mining module. We review the most common methods used and the technical issues that arise, along with a brief overview of the most popular tools and applications available from software vendors. Moreover, the most important research initiatives in the Web usage mining and personalization areas are presented.
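
    As a rough illustration of the usage-mining module, the sketch below builds a page co-occurrence model from hypothetical user sessions and recommends the pages most often visited together. This is one of the simplest techniques in the family the survey covers, not a method from the article itself.

```python
# Minimal usage-mining sketch: count which pages co-occur within the
# same user session, then recommend the most frequent co-visits.
# The session data is invented for illustration.
from collections import defaultdict
from itertools import combinations

def cooccurrence(sessions):
    """Build symmetric co-visit counts from a list of page sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for pages in sessions:
        for a, b in combinations(set(pages), 2):
            counts[a][b] += 1
            counts[b][a] += 1
    return counts

def recommend(counts, current_page, k=3):
    """Return the k pages most often co-visited with current_page."""
    neighbors = counts.get(current_page, {})
    return sorted(neighbors, key=neighbors.get, reverse=True)[:k]

sessions = [
    ["/home", "/catalog", "/news"],
    ["/home", "/news", "/sports"],
    ["/catalog", "/news"],
]
model = cooccurrence(sessions)
print(recommend(model, "/news"))  # pages most often co-visited with /news
```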

    How does varying the number of personas affect user perceptions and behavior? Challenging the ‘small personas’ hypothesis!

    Get PDF
    Studies in human-computer interaction recommend creating fewer than ten personas, based on stakeholders' limited capacity to cognitively process and use personas. However, no existing studies offer empirical support for having fewer rather than more personas. Investigating this matter, thirty-seven participants interacted with five and with fifteen personas using an interactive persona system, choosing one persona to design for. Our results from eye-tracking and survey data suggest that when using interactive persona systems, the number of personas can be increased beyond the conventionally suggested ‘fewer than ten’ without significant negative effects on user perceptions or task performance, and with positive effects: greater engagement with the personas, a more diverse representation of the end-user population, and users accessing personas from more varied demographic groups for a design task. Using the interactive persona system, users adjusted their information processing style by spending less time on each persona when presented with fifteen personas, while still absorbing a similar amount of information as with five personas, implying that more efficient information processing strategies are applied with more personas. The results highlight the importance of designing interactive persona systems to support users' browsing of more personas. © 2022 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

    Analyzing the Tagging Quality of the Spanish OpenStreetMap

    Get PDF
    In this paper, a framework for the assessment of the quality of OpenStreetMap is presented, comprising a batch of methods to analyze the quality of entity tagging. The approach uses Taginfo as a reference base and analyzes quality measures such as completeness, compliance, consistency, granularity, richness, and trust. The framework has been used to analyze the quality of OpenStreetMap in Spain, comparing the country's main cities. A comparison between Spain and some major European cities has also been carried out. Additionally, a Web tool has been developed to facilitate the same kind of analysis in any area of the world.
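
    A hedged sketch of two of the named measures, completeness and compliance, is given below for a single tagged entity. The reference key sets and value vocabularies are invented placeholders standing in for the statistics the framework derives from Taginfo.

```python
# Illustrative sketch of two tagging-quality measures: completeness
# (does the entity carry the keys expected for its feature type?) and
# compliance (are tag values drawn from a reference vocabulary?).
# The reference data below is a placeholder, not Taginfo output.
REFERENCE_KEYS = {
    "restaurant": {"name", "cuisine", "opening_hours", "addr:street"},
}
VALID_VALUES = {
    "cuisine": {"spanish", "italian", "regional", "tapas"},
}

def completeness(entity_tags, feature_type):
    """Share of expected keys that the entity actually carries."""
    expected = REFERENCE_KEYS[feature_type]
    return len(expected & entity_tags.keys()) / len(expected)

def compliance(entity_tags):
    """Share of checked tags whose value is in the reference vocabulary."""
    checked = [k for k in entity_tags if k in VALID_VALUES]
    if not checked:
        return 1.0
    ok = sum(entity_tags[k] in VALID_VALUES[k] for k in checked)
    return ok / len(checked)

tags = {"name": "Casa Pepe", "cuisine": "tapas", "amenity": "restaurant"}
print(completeness(tags, "restaurant"), compliance(tags))  # 0.5 1.0
```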

    User Acquisition and Engagement in Digital News Media

    Get PDF
    Generating revenue has been a major issue for the news industry and journalism over the past decade. The vast availability of free online news sources has made user acquisition and engagement more pressing issues for online news media agencies than ever before. Although digital news media agencies seek sustainable relationships with their users, their current business models do not satisfy this demand. They need to understand and predict how much an article can engage a reader, as a crucial step in attracting readers, and then maximize that engagement through appropriate strategies. Moreover, news media companies need effective algorithmic tools to identify users who are prone to subscribe. Last but not least, online news agencies need to make smarter decisions about how they deliver articles to users in order to maximize the potential benefits. In this dissertation, we take the first steps towards achieving these goals and investigate these challenges from data mining/machine learning perspectives. First, we investigate the problem of understanding and predicting article engagement in terms of dwell time, one of the most important factors in digital news media. In particular, we design exploratory data models that study the textual elements (e.g., events, emotions) involved in article stories and find their relationships with engagement patterns. For the prediction task, we design a framework that predicts article dwell time using a deep neural network architecture, which exploits the interactions among important elements (i.e., augmented features) in the article content as well as the neural representation of the content to achieve better performance. In the second part of the dissertation, we address the problem of identifying valuable visitors who are likely to subscribe in the future. We suggest that the decision to subscribe is not a sudden, instantaneous action but an informed decision based on positive experience with the newspaper. As such, we propose effective engagement measures and show that they are effective in building a predictive model for subscription. We design a model that predicts not only the potential subscribers but also the time at which a user would subscribe. In the last part of this thesis, we consider the paywall problem in online newspapers. The traditional paywall method offers a non-subscribed reader a fixed number of free articles in a period of time (e.g., a month) and then directs the user to the subscription page for further reading. We argue that there is no direct relationship between the number of paywalls presented to readers and the number of subscriptions, and that this artificial barrier, if not used well, may disengage potential subscribers and thus fail to serve its purpose of increasing revenue. We propose an adaptive paywall mechanism that balances the benefit of showing an article against that of displaying the paywall (i.e., terminating the session). We first define notions of cost and utility that are used to define an objective function for optimal paywall decision making. We then model the problem as a stochastic sequential decision process, and finally propose an efficient policy function for paywall decision making. All the proposed models are evaluated on real datasets from The Globe and Mail, a major newspaper in Canada. However, the proposed techniques are not tied to any particular dataset or strict requirements; rather, they are designed around datasets and settings that are available and common to most newspapers. The models are therefore general and can be applied by any online newspaper to improve user engagement and acquisition.
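
    The adaptive paywall idea can be illustrated with a small Python sketch that compares the expected utility of serving an article against that of showing the paywall. The probabilities, values, and one-step decision rule are illustrative assumptions; the dissertation models the problem as a full stochastic sequential decision process rather than this single-step comparison.

```python
# Illustrative one-step version of the adaptive paywall decision: at each
# article request, compare the expected utility of showing the article
# (keeping the reader engaged) against showing the paywall (a chance of
# immediate subscription). All parameters are invented placeholders,
# not the dissertation's fitted model.
def paywall_decision(p_subscribe_now, p_future_subscribe_if_engaged,
                     subscription_value, serving_cost):
    # Expected utility of terminating the session with a paywall.
    u_paywall = p_subscribe_now * subscription_value
    # Expected utility of serving the article: pay the serving cost now
    # in exchange for a higher chance of a later subscription.
    u_article = (p_future_subscribe_if_engaged * subscription_value
                 - serving_cost)
    return "show_paywall" if u_paywall >= u_article else "show_article"

print(paywall_decision(0.02, 0.05, 120.0, 0.4))  # -> "show_article"
```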

    Generating Consumer Insights from Big Data Clickstream Information and the Link with Transaction-Related Shopping Behavior

    Get PDF
    E-commerce firms collect enormous amounts of information in their databases, yet only a fraction is used to improve business processes and decision-making, while many useful sources remain underexplored. We therefore propose a new, interdisciplinary method to identify consumers' goals and develop an online shopping typology. We use k-means clustering and non-parametric analysis-of-variance tests to categorize search patterns as Buying, Searching, Browsing, or Bouncing. Adding to purchase decision-making theory, we propose that the use of off-site clickstream data (the sequence of consumers' advertising channel clicks leading to a firm's website) can significantly enhance the understanding of shopping motivation and transaction-related behavior even before consumers enter the website. For our consumer data analytics we use a unique and extensive dataset from a large European apparel company with over 80 million clicks covering 11 online advertising channels. Our results show that consumers with higher goal-direction have significantly higher purchase propensities and, against our expectations, consumers with higher levels of shopping involvement show higher return rates. Our conceptual approach and insights contribute to theory and practice alike and may help improve real-time decision-making in marketing analytics to substantially enhance the online customer experience.
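
    A minimal sketch of the clustering step follows, assuming scikit-learn is available. The session features and the post-hoc mapping of clusters to Buying, Searching, Browsing, and Bouncing are illustrative choices, not the paper's actual feature set.

```python
# Rough sketch of the clustering step: group sessions by simple
# clickstream features with k-means and inspect the four clusters.
# Feature columns and data are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Columns: channel clicks, pages viewed, minutes between clicks.
sessions = np.array([
    [1, 1, 0.0],    # single click, immediate exit (Bouncing-like)
    [2, 3, 1.5],
    [6, 12, 4.0],
    [9, 20, 2.0],
    [1, 2, 0.5],
    [7, 15, 3.5],
    [3, 4, 1.0],
    [10, 25, 2.5],
])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(sessions)
# Cluster indices are interpreted post hoc, e.g. by comparing centroids
# to label clusters Buying, Searching, Browsing, or Bouncing.
print(kmeans.labels_)
print(kmeans.cluster_centers_)
```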

    Understanding the Privacy Risks of Popular Search Engine Advertising Systems

    Full text link
    We present the first extensive measurement of the privacy properties of the advertising systems used by privacy-focused search engines. We propose an automated methodology to study the impact of clicking on search ads on three popular private search engines that have advertising-based business models: StartPage, Qwant, and DuckDuckGo; we compare them to two dominant data-harvesting ones: Google and Bing. We investigate the possibility of third parties tracking users when they click on ads by analyzing first-party storage, redirection domain paths, and requests sent before, during, and after the clicks. Our results show that privacy-focused search engines fail to protect users' privacy when ads are clicked. Users' requests are sent through redirectors on 4% of ad clicks on Bing, 86% of ad clicks on Qwant, and 100% of ad clicks on Google, DuckDuckGo, and StartPage. Even worse, advertising systems collude with advertisers across all search engines by passing unique IDs to advertisers on most ad clicks. These IDs allow redirectors to aggregate users' activity on ads' destination websites in addition to the activity they record when users are redirected through them. Overall, we observe that both privacy-focused and traditional search engines engage in privacy-harming behaviors that allow cross-site tracking, even in privacy-enhanced browsers.
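
    A simplified sketch of the redirection-path analysis is shown below. It follows only HTTP-level redirects with the requests library and flags domains from a placeholder redirector list; the paper's full methodology also inspects first-party storage and the requests sent around each click, which an HTTP-only client cannot capture.

```python
# Simplified redirection-path analysis: follow an ad-click URL and
# record every intermediate domain, flagging known redirectors.
# The URL handling covers only HTTP redirects (not JavaScript ones),
# and the redirector list is an illustrative placeholder.
from urllib.parse import urlparse
import requests

KNOWN_REDIRECTORS = {"ad.doubleclick.net", "www.bing.com"}  # placeholder

def redirect_chain(ad_click_url):
    """Return the domain hops of an ad click and any flagged redirectors."""
    resp = requests.get(ad_click_url, allow_redirects=True, timeout=10)
    hops = [r.url for r in resp.history] + [resp.url]
    domains = [urlparse(u).netloc for u in hops]
    flagged = [d for d in domains if d in KNOWN_REDIRECTORS]
    return domains, flagged
```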