
    Why Do Cascade Sizes Follow a Power-Law?

    We introduce a random directed acyclic graph and use it to model the information diffusion network. We then analyze the cascade generation model (CGM) introduced by Leskovec et al. [19]. Until now, only empirical studies of this model have been done. In this paper, we present the first theoretical proof that the sizes of cascades generated by the CGM follow a power-law distribution, which is consistent with multiple empirical analyses of large social networks. We compared the assumptions of our model against the Twitter social network and tested the goodness of the approximation.
    Comment: 8 pages, 7 figures, accepted to WWW 201
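    The paper's result concerns the exact CGM on a modelled diffusion network; as rough intuition for why such cascade processes yield heavy tails, the sketch below simulates a critical branching-process caricature. The parameters p and fanout are illustrative assumptions, not taken from the paper; at criticality (mean offspring p * fanout = 1), cascade sizes are known to follow a power law with exponent about 3/2.

    import random
    from collections import Counter

    def cascade_size(p=0.2, fanout=5, max_nodes=10_000):
        # Each node in the frontier exposes `fanout` downstream neighbours;
        # each exposure succeeds independently with probability p.
        size, frontier = 1, 1
        while frontier and size < max_nodes:
            new = sum(1 for _ in range(frontier * fanout) if random.random() < p)
            size += new
            frontier = new
        return size

    # Tally sizes over many runs; plotting frequency against size on
    # log-log axes should show an approximately straight line.
    sizes = Counter(cascade_size() for _ in range(20_000))
    for s in sorted(sizes)[:10]:
        print(s, sizes[s])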

    Mining HCI Data for Theory of Mind Induction

    Human-computer interaction (HCI) produces enormous amounts of data bearing the potential for understanding a human user's intentions, goals, and desires. Knowing what users want and need is key to intelligent system assistance. The theory of mind concept, known from studies of animal behavior, is adopted and adapted for expressive user modeling. Theories of mind are hypothetical user models representing, to some extent, a human user's thoughts. A theory of mind may even reveal tacit knowledge. In this way, user modeling becomes knowledge discovery going beyond the human's knowledge and covering domain-specific insights. Theories of mind are induced by mining HCI data. Data mining turns out to be inductive modeling. Intelligent assistant systems that inductively model a human user's intentions, goals, and the like, as well as domain knowledge, are by nature learning systems. To cope with the risk of getting it wrong, learning systems are equipped with the skill of reflection.
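    As a hedged illustration of what inducing a "theory of mind" from HCI data might look like, the sketch below scores hypothetical goal signatures against an observed interaction log; all action and goal names are invented for illustration and do not come from the paper.

    # Hypothetical goal signatures (illustrative names, not from the paper):
    # observable actions that tend to indicate a latent user intention.
    GOAL_SIGNATURES = {
        "compare_products": {"open_item", "back", "open_item_again"},
        "checkout":         {"add_to_cart", "open_cart", "enter_payment"},
        "just_browsing":    {"scroll", "open_category"},
    }

    def induce_goal(events):
        """Induce a 'theory of mind' hypothesis from an interaction log:
        score each candidate goal by how much of its signature was observed."""
        seen = set(events)
        scores = {g: len(sig & seen) / len(sig) for g, sig in GOAL_SIGNATURES.items()}
        best = max(scores, key=scores.get)
        # Reflection: withhold the hypothesis when evidence is weak,
        # since an induced user model may simply be wrong.
        return best if scores[best] >= 0.5 else None

    print(induce_goal(["scroll", "open_item", "back", "open_item_again"]))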

    Bridging Offline-Online Evaluation with a Time-dependent and Popularity Bias-free Offline Metric for Recommenders

    The evaluation of recommendation systems is a complex task. Offline and online evaluation metrics for recommender systems are ambiguous about their true objectives. The majority of recently published papers benchmark their methods using an ill-posed offline evaluation methodology that often fails to predict true online performance. Because of this, the impact that academic research has on industry is reduced. The aim of our research is to investigate and compare the online performance of offline evaluation metrics. We show that penalizing popular items and considering the time of transactions during evaluation significantly improves our ability to choose the best recommendation model for a live recommender system. Our results, averaged over five large real-world datasets procured from live recommenders, aim to help the academic community better understand offline evaluation and optimization criteria that are more relevant to real applications of recommender systems.
    Comment: Accepted to evalRS 2023@KD
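    A minimal sketch of the two ideas the abstract highlights, popularity penalties and time-aware splits, is shown below; the weighting scheme and function signatures are assumptions for illustration, not the paper's actual metric.

    from collections import Counter

    def debiased_hit_rate(transactions, recommend, k=10):
        """Time-aware, popularity-debiased offline metric (illustrative).
        `transactions` is a list of (timestamp, user, item) tuples and
        `recommend(user)` returns a ranked list of items; both are
        hypothetical interfaces, not the paper's API."""
        transactions = sorted(transactions)          # respect transaction time
        split = int(len(transactions) * 0.8)         # past = train, future = test
        train, test = transactions[:split], transactions[split:]
        pop = Counter(item for _, _, item in train)  # training-time popularity
        score = weight = 0.0
        for _, user, item in test:
            w = 1.0 / (1 + pop[item])                # down-weight popular items
            weight += w
            if item in recommend(user)[:k]:
                score += w
        return score / weight if weight else 0.0

    Selecting models by a score like this, rather than by raw hit rate on a random split, is the kind of offline criterion the paper argues correlates better with live performance.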

    Detecting Deceptive Dark-Pattern Web Advertisements for Blind Screen-Reader Users

    Advertisements have become commonplace on modern websites. While ads are typically designed for visual consumption, it is unclear how they affect blind users who interact with them using a screen reader. Existing research on non-visual web interaction predominantly focuses on general web browsing; the specific impact of extraneous ad content on blind users' experience remains largely unexplored. To fill this gap, we conducted an interview study with 18 blind participants; we found that blind users are often deceived by ads that contextually blend in with the surrounding web page content. While ad blockers can address this problem via a blanket filtering operation, many websites increasingly deny access if an ad blocker is active. Moreover, ad blockers often do not filter out internal ads injected by the websites themselves. Therefore, we devised an algorithm to automatically identify contextually deceptive ads on a web page. Specifically, we built a detection model that leverages a multi-modal combination of handcrafted and automatically extracted features to determine whether a particular ad is contextually deceptive. Evaluations of the model on a representative test dataset and 'in-the-wild' random websites yielded F1 scores of 0.86 and 0.88, respectively.
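    As an illustrative sketch of a multi-modal detector combining handcrafted and automatically extracted features, the code below joins hypothetical page-context cues with TF-IDF text features and reports F1; the feature names and model choice are assumptions, not the authors' implementation.

    from scipy.sparse import csr_matrix, hstack
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score

    def handcrafted(ad):
        # Hypothetical handcrafted cues for whether an ad blends into the
        # host page (field names are invented for illustration).
        return [ad["same_font_as_page"], ad["inside_content_block"], ad["has_ad_label"]]

    def train_and_eval(train_ads, y_train, test_ads, y_test):
        vec = TfidfVectorizer(max_features=2000)
        X_train = hstack([vec.fit_transform([a["text"] for a in train_ads]),
                          csr_matrix([handcrafted(a) for a in train_ads], dtype=float)])
        X_test = hstack([vec.transform([a["text"] for a in test_ads]),
                         csr_matrix([handcrafted(a) for a in test_ads], dtype=float)])
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        return f1_score(y_test, clf.predict(X_test))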

    Review of Web Mapping: Eras, Trends and Directions

    Web mapping and the use of geospatial information online have evolved rapidly over the past few decades. Almost everyone in the world uses mapping information, whether or not they realize it. Almost every mobile phone now has location services, and every event and object on the earth has a location. The use of this geospatial location data has expanded rapidly thanks to the development of the Internet. Huge volumes of geospatial data are available and are captured online daily for use in web applications and maps for viewing, analysis, modeling and simulation. This paper reviews the development of web mapping, from the first static online map images to the current highly interactive, multi-sourced web mapping services that have increasingly moved to cloud computing platforms. The whole environment of web mapping captures the integration of, and interaction between, three components found online: geospatial information, people and functionality. In this paper, the trends and interactions among these components are identified and reviewed in relation to technological developments. The review concludes by exploring some of the opportunities and directions.

    Auditing Symposium XIII: Proceedings of the 1996 Deloitte & Touche/University of Kansas Symposium on Auditing Problems

    Meeting the challenge of technological change -- A standard setter's perspective / James M. Sylph, Gregory P. Shields
    Technological change -- A glass half empty or a glass half full: Discussion of "Meeting the challenge of technological change" and "Business and auditing impacts of new technologies" / Urton Anderson
    Opportunities for assurance services in the 21st century: A progress report of the Special Committee on Assurance Services / Richard Lea
    Model of errors and irregularities as a general framework for risk-based audit planning / Jere R. Francis, Richard A. Grimlund
    Discussion of "A model of errors and irregularities as a general framework for risk-based audit planning" / Timothy B. Bell
    Framing effects and output interference in a concurring partner review context: Theory and exploratory analysis / Karla M. Johnstone, Stanley F. Biggs, Jean C. Bedard
    Discussant's comments on "Framing effects and output interference in a concurring partner review context: Theory and exploratory analysis" / David Plumlee
    Implementation and acceptance of expert systems by auditors / Maureen McGowan
    Discussion of "Opportunities for assurance services in the 21st century: A progress report of the Special Committee on Assurance Services" / Katherine Schipper
    CPAS/CCM experiences: Perspectives for AI/ES research in accounting / Miklos A. Vasarhelyi
    Discussant comments on "The CPAS/CCM experiences: Perspectives for AI/ES research in accounting" / Eric Denna
    Digital analysis and the reduction of auditor litigation risk / Mark Nigrini
    Discussion of "Digital analysis and the reduction of auditor litigation risk" / James E. Searing
    Institute of Internal Auditors: Business and auditing impacts of new technologies / Charles H. Le Grand

    Pangea: An MLOps Tool for Automatically Generating Infrastructure and Deploying Analytic Pipelines in Edge, Fog and Cloud Layers

    Development and operations (DevOps), artificial intelligence (AI), big data and edge–fog–cloud computing are disruptive technologies that may produce a radical transformation of industry. Nevertheless, major challenges remain in applying them efficiently in order to optimise productivity. Some of them are addressed in this article, concretely with respect to the adequate management of information technology (IT) infrastructures for automated analysis processes in critical fields such as the mining industry. In this area, this paper presents a tool called Pangea aimed at automatically generating suitable execution environments for deploying analytic pipelines. These pipelines are decomposed into various steps so that each one executes in the most suitable environment (edge, fog, cloud or on-premise), minimising latency and optimising the use of both hardware and software resources. Pangea is focused on three distinct objectives: (1) generating the required infrastructure if it does not already exist; (2) provisioning it with the necessary requirements to run the pipelines (i.e., configuring each host's operating system and software, installing dependencies and downloading the code to execute); and (3) deploying the pipelines. In order to facilitate the use of the architecture, a representational state transfer application programming interface (REST API) is defined to interact with it; a web client is provided in turn. Finally, it is worth noting that in addition to the production mode, a local development environment can be generated for testing and benchmarking purposes. This research has been funded in the context of the IlluMINEation project, from the European Union's Horizon 2020 research and innovation program under grant agreement No. 869379.
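    A minimal sketch of the placement idea, deciding for each pipeline step whether it should run in the edge, fog or cloud layer, is given below; the step descriptors and policy thresholds are invented for illustration and are not Pangea's actual algorithm.

    # Illustrative pipeline description, loosely modelled on the idea of
    # splitting an analytic pipeline across layers (names are hypothetical).
    PIPELINE = [
        {"step": "sensor_ingest", "latency_ms": 10,   "needs_gpu": False},
        {"step": "preprocess",    "latency_ms": 100,  "needs_gpu": False},
        {"step": "train_model",   "latency_ms": None, "needs_gpu": True},
    ]

    def place(step):
        """Naive placement policy: latency-critical steps run at the edge,
        moderate ones in the fog, and heavy or offline ones in the cloud."""
        if step["needs_gpu"] or step["latency_ms"] is None:
            return "cloud"
        return "edge" if step["latency_ms"] <= 20 else "fog"

    for s in PIPELINE:
        print(f'{s["step"]:>14} -> {place(s)}')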