416 research outputs found

    Decoding the Real World: Tackling Virtual Ethnographic Challenges through Data-Driven Methods

    Get PDF

    International environmental cooperation and climate change laws: A quantitative analysis

    Get PDF
    The increasing number of IEAs has induced a complex web of interdependent relationships among countries. This thesis mainly studies the international environmental cooperation network created by IEAs and countries’ adoption of national climate change laws by combining theories and methods from network science, economics, political economy, and international relations. Specifically, I outline four projects concerned with IEAs and climate change laws. In the first project, I construct a statistically significant international environmental cooperation network among countries and study its emergence and evolution by investigating its structural properties. The results reveal that the popularity of environmental agreements led to the emergence of an environmental cooperation network and document how collaboration is accelerating. The second and third projects concern the meso-level organisation of international environmental cooperation. Specifically, the second project studies the community structure of the environmental cooperation network. Community detection is conducted, and the results show that environmental cooperation exhibits regionalisation. In the third project, I study the core-periphery structure of international environmental cooperation by investigating the nestedness and rich clubs arising from country-treaty relationships. Furthermore, cooperation complexity is analysed using methods from economic complexity to further assess country-treaty relationships. I develop a new measure to quantify the diversification of countries’ commitment to environmental treaties. Results show that European countries lie at the core of international environmental cooperation with the highest diversification of commitment. In addition, countries’ diversification of commitment is significantly correlated with environmental performance within countries. In the fourth project, I turn to national climate change laws to explore factors influencing the burst of countries’ adoption behaviours. I show that scientific consensus, COPs, and natural disasters are significantly and positively associated with the burst of countries’ adoption behaviours.
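
    The abstract does not define the diversification measure, so the sketch below is only a hypothetical illustration: it scores how widely a country's commitments are spread across treaty categories as the exponential of the Shannon entropy over categories, computed from an invented country-treaty membership vector.

```python
# Hypothetical sketch of a commitment-diversification score (not the thesis's
# actual measure): effective number of treaty categories a country ratifies.
import numpy as np

def commitment_diversification(membership_row, categories):
    """membership_row: 0/1 array, one entry per treaty (1 = ratified).
    categories: category label per treaty, aligned with membership_row."""
    cats = np.asarray(categories)[np.asarray(membership_row, dtype=bool)]
    if cats.size == 0:
        return 0.0
    _, counts = np.unique(cats, return_counts=True)
    p = counts / counts.sum()
    entropy = -(p * np.log(p)).sum()
    return float(np.exp(entropy))  # exp(H): effective category count

# Toy example: 6 treaties in 3 thematic categories (illustrative labels).
categories = ["climate", "climate", "biodiversity", "biodiversity", "marine", "marine"]
print(commitment_diversification([1, 1, 1, 0, 0, 0], categories))  # ~1.9 effective categories
print(commitment_diversification([1, 0, 1, 0, 1, 0], categories))  # 3.0 effective categories
```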

    Theoretical and Methodological Bases for Analysis of the Influence of Opinion Leaders in the Online Environment

    Full text link
    Received by the editors on 26 April 2023. The main purpose of the article is to explore how classical and post-classical models of social network analysis can be applied to leadership research. In today's environment, opinion leaders are strategically important: they can unite supporters, both locally and in dispersed form, to pursue socially significant goals, but they can also aggravate the risks of destructive antisocial consolidation. The novelty of the study lies in its comprehensive analytical conceptualization of the influence of social network opinion leaders, together with technologies for identifying them and exerting managerial influence in various spheres of modern digital society.

    Information consistency as response to pre-launch advertising communications: The case of YouTube trailers

    Get PDF
    Introduction: Pre-launch advertising communications are critical for the early adoption of experiential products. Often, companies release a variety of advertising messages for the same product, which results in a lack of information consistency. Research on the effect of advertising communications with different message content is scarce. Further, most studies on information consistency rely on experimental methods, leaving the actual effect of consumer response on product adoption unknown. Methods: Treating online comments to movie trailers as consumer response to advertising communication, we propose a natural language processing methodology to measure information consistency. We validate our measurement through an online experiment and test it on 1.3 million YouTube comments. Results: Our empirical results provide evidence that information consistency driven by trailer-viewing is a key driver of opening box office success. Discussion: Insights deriving from this study are important to marketing communications research, especially in contexts where early product adoption is critical.
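
    The paper's actual NLP methodology is not described in the abstract; as a rough stand-in, the sketch below compares the aggregated comments of two trailers for the same movie with TF-IDF cosine similarity as a proxy for information consistency. The comment strings are invented.

```python
# Minimal sketch (not the authors' pipeline): TF-IDF cosine similarity between
# the comment streams of two trailers as a proxy for information consistency.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def comment_consistency(comments_trailer_a, comments_trailer_b):
    """Cosine similarity between the aggregated comment texts of two trailers."""
    docs = [" ".join(comments_trailer_a), " ".join(comments_trailer_b)]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

# Illustrative comments, not real data.
teaser_comments = ["the villain looks terrifying", "dark tone, love the score"]
trailer_comments = ["so much comedy in this one", "looks like a fun family movie"]
print(comment_consistency(teaser_comments, trailer_comments))  # low value -> inconsistent response
```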

    Modeling pattern formation in communities by using information particles

    Full text link
    Understanding pattern formation in communities has been at the center of attention in various fields. Here we introduce a novel model, called an "information-particle model," which is based on the reaction-diffusion model and the distributed behavior model. Information particles drive competition or coordination among species. As a result, the traversal of information particles through a social system makes it possible to express four different classes of patterns ("stationary", "competitive-equilibrium", "chaotic", and "periodic"). Remarkably, "competitive equilibrium" captures complex dynamics that are equilibrium at the macroscopic level and non-equilibrium at the microscopic level; although this is a fundamental phenomenon of pattern formation in nature, it has not been reproduced by conventional models. Furthermore, the pattern transitions across classes depending only on the system's parameters, namely the number of species (vertices in the network) and the distance (edges) between species. This means that a single information-particle model can develop these patterns through in-situ computation under various environments. Comment: 12 pages and 6 figures
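
    The model itself is not specified in the abstract; the toy sketch below only illustrates the underlying intuition of particles traversing a network: random walkers ("information particles") move over a ring of species, and the resulting occupancy counts form a pattern. All parameters are illustrative.

```python
# Toy illustration (not the authors' model): random-walking particles on a
# small network; node occupancy after many steps hints at pattern formation.
import random
import networkx as nx

def simulate_particles(graph, n_particles=50, steps=100, seed=0):
    rng = random.Random(seed)
    positions = [rng.choice(list(graph.nodes)) for _ in range(n_particles)]
    for _ in range(steps):
        # Each particle hops to a uniformly chosen neighbour of its node.
        positions = [rng.choice(list(graph.neighbors(p))) for p in positions]
    occupancy = {node: 0 for node in graph.nodes}
    for p in positions:
        occupancy[p] += 1
    return occupancy

ring = nx.cycle_graph(10)  # 10 "species" arranged in a ring
print(simulate_particles(ring))  # occupancy pattern after diffusion
```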

    Reconstructing Graph Diffusion History from a Single Snapshot

    Full text link
    Diffusion on graphs is ubiquitous, with numerous high-impact applications. In these applications, complete diffusion histories play an essential role in identifying dynamical patterns, reflecting on precaution actions, and forecasting intervention effects. Despite their importance, complete diffusion histories are rarely available and are highly challenging to reconstruct due to ill-posedness, an explosive search space, and scarcity of training data. To date, few methods exist for diffusion history reconstruction; they are exclusively based on the maximum likelihood estimation (MLE) formulation and require knowledge of the true diffusion parameters. In this paper, we study an even harder problem, namely reconstructing Diffusion history from A single SnapsHot (DASH), where we seek to reconstruct the history from only the final snapshot without knowing the true diffusion parameters. We start with theoretical analyses that reveal a fundamental limitation of the MLE formulation. We prove that (a) the estimation error of diffusion parameters is unavoidable due to the NP-hardness of diffusion parameter estimation, and (b) the MLE formulation is sensitive to the estimation error of diffusion parameters. To overcome this inherent limitation of the MLE formulation, we propose a novel barycenter formulation: finding the barycenter of the posterior distribution of histories, which is provably stable against the estimation error of diffusion parameters. We further develop an effective solver named DIffusion hiTting Times with Optimal proposal (DITTO) by reducing the problem to estimating posterior expected hitting times via Metropolis-Hastings Markov chain Monte Carlo (M-H MCMC) and employing an unsupervised graph neural network to learn an optimal proposal that accelerates the convergence of M-H MCMC. We conduct extensive experiments to demonstrate the efficacy of the proposed method. Comment: Full version of the KDD 2023 paper. Our code is available at https://github.com/q-rz/KDD23-DITT
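
    DITTO pairs M-H MCMC with a learned proposal; as a hedged illustration of only the generic building block the abstract mentions, the sketch below runs a plain Metropolis-Hastings sampler with a symmetric random-walk proposal to estimate the posterior expectation of a toy integer "hitting time". The posterior and proposal are invented for illustration.

```python
# Generic Metropolis-Hastings skeleton (illustrative, not DITTO itself):
# estimate a posterior expected "hitting time" over integers.
import math
import random

def metropolis_hastings(log_post, propose, x0, n_samples=5000, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        y = propose(x, rng)
        log_ratio = log_post(y) - log_post(x)  # symmetric proposal assumed
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = y  # accept
        samples.append(x)
    return samples

# Toy posterior over an integer hitting time t in {0, ..., 20}, peaked at t = 5.
def log_post(t):
    return -((t - 5) ** 2) / 4.0 if 0 <= t <= 20 else float("-inf")

def propose(t, rng):
    return t + rng.choice([-1, 1])  # symmetric random-walk proposal

samples = metropolis_hastings(log_post, propose, x0=10)
print(sum(samples[1000:]) / len(samples[1000:]))  # posterior mean hitting time, roughly 5
```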

    SAILOR: Structural Augmentation Based Tail Node Representation Learning

    Full text link
    Graph Neural Networks (GNNs) have recently achieved state-of-the-art performance in representation learning for graphs. However, the effectiveness of GNNs, which capitalize on the key operation of message propagation, highly depends on the quality of the topology structure. Most graphs in real-world scenarios follow a long-tailed distribution of node degrees; that is, the vast majority of nodes in the graph are tail nodes with only a few connected edges. GNNs produce inferior representations for tail nodes because such nodes lack structural information. To promote the expressiveness of GNNs for tail nodes, we explore how the deficiency of structural information deteriorates the performance of tail nodes and propose a general Structural Augmentation based taIL nOde Representation learning framework, dubbed SAILOR, which can jointly learn to augment the graph structure and extract more informative representations for tail nodes. Extensive experiments on public benchmark datasets demonstrate that SAILOR can significantly improve tail node representations and outperform the state-of-the-art baselines. Comment: Accepted by CIKM 2023; code is available at https://github.com/Jie-Re/SAILO
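
    SAILOR learns its augmentation jointly with the GNN; the sketch below is only a minimal, hand-crafted illustration of the underlying idea: after one round of mean-neighbour aggregation, a degree-1 tail node's representation depends on a single neighbour, and adding one augmented edge immediately enriches it. The graph and features are toy data.

```python
# Illustrative sketch (not the SAILOR framework): mean-neighbour aggregation
# for a tail node, before and after adding one augmented edge.
import numpy as np

def mean_aggregate(adj, features):
    """One message-passing layer: each node averages its neighbours' features."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1  # avoid division by zero for isolated nodes
    return adj @ features / deg

# 4-node toy graph; node 3 is a tail node attached only to node 2.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
features = np.eye(4)

print(mean_aggregate(adj, features)[3])      # tail node sees only node 2

adj_aug = adj.copy()
adj_aug[3, 0] = adj_aug[0, 3] = 1            # structural augmentation: add edge (3, 0)
print(mean_aggregate(adj_aug, features)[3])  # richer neighbourhood after augmentation
```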

    Vorhersage der Aktualisierungen auf Social Media Plattformen (Predicting Updates on Social Media Platforms)

    Get PDF
    Social media platforms such as Facebook, Twitter, and YouTube have been very popular for years, not only with end users but also with companies. Companies use these platforms in particular for marketing purposes, so conventional marketing instruments are increasingly taking a back seat. Besides companies, political parties, universities, research institutions, and many other organisations also use social media for their own purposes. The strong interest of end users and institutions in social media makes it relevant for many applications in industry and research. To conduct market monitoring and research on social media, data are needed that are usually collected and analysed with dedicated tools, subject to the restrictions of the platforms' available technical interfaces. For selected research questions, aspects such as the volume and timeliness of the data are of particular importance. With the means available today, updates can only be retrieved from social media platforms via polling, and statistical models are frequently used to compute the polling intervals. The goal of this thesis is to determine suitable points in time at which to poll given feeds on social media platforms so that new posts can be retrieved and processed promptly. Computing suitable polling times serves to optimise resource usage and to reduce processing delay, which benefits many applications. The thesis makes several contributions towards this goal. First, existing work on social media and related data sources in the World Wide Web that estimates change rates or predicts updates is transferred to the problem at hand. Furthermore, the suitability of the prediction algorithms from existing approaches is assessed through quantitative measurements: the approaches are applied to real data from Facebook, Twitter, and YouTube and evaluated with appropriate metrics. The findings show that prediction quality depends substantially on the choice of algorithm. This reveals a research gap regarding the selection of suitable algorithms, which so far is typically done only manually or by static rules. A new prediction approach forms the core of the thesis: it draws on the individual update patterns of existing social media feeds in order to select, for new feeds, suitable prediction algorithms with appropriate parametrisation. According to the evaluation, this achieves higher prediction quality than the state of the art while reducing the selection effort.
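
    The thesis's adaptive algorithm-selection approach is not reproduced here; as a minimal baseline sketch of the polling idea described above, the following code schedules the next poll of a feed from a moving average of recent inter-post intervals. Function names, parameters, and timestamps are illustrative.

```python
# Baseline sketch (not the thesis's method): schedule the next feed poll from
# the moving average of the most recent inter-post intervals.
from datetime import datetime, timedelta

def next_poll_time(post_times, window=5, min_interval=timedelta(minutes=5)):
    """Poll again after the average gap of the last `window` posts."""
    post_times = sorted(post_times)
    gaps = [b - a for a, b in zip(post_times[:-1], post_times[1:])][-window:]
    if not gaps:
        return post_times[-1] + min_interval
    avg_gap = sum(gaps, timedelta()) / len(gaps)
    return post_times[-1] + max(avg_gap, min_interval)

# Illustrative post timestamps for one feed.
posts = [datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 1, 12, 0), datetime(2023, 5, 1, 18, 0)]
print(next_poll_time(posts))  # poll again ~4.5 hours after the latest post
```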

    Information Retention in the Multi-platform Sharing of Science

    Full text link
    The public interest in accurate scientific communication, underscored by recent public health crises, highlights how content often loses critical pieces of information as it spreads online. However, multi-platform analyses of this phenomenon remain limited due to challenges in data collection. Collecting mentions of research tracked by Altmetric LLC, we examine information retention in over 4 million online posts referencing 9,765 of the most-mentioned scientific articles across blog sites, Facebook, news sites, Twitter, and Wikipedia. To do so, we present a burst-based framework for examining online discussions about science over time and across different platforms. To measure information retention, we develop a keyword-based computational measure comparing an online post to the scientific article's abstract, and we evaluate it using ground-truth data labeled by experts within the field. We highlight three main findings. First, we find a strong tendency towards low levels of information retention, following a distinct trajectory of loss except when bursts of attention begin on social media. Second, platforms show significant differences in information retention. Third, sequences involving more platforms tend to be associated with higher information retention. These findings highlight a strong tendency towards information loss over time, posing a critical concern for researchers, policymakers, and citizens alike, but suggest that multi-platform discussions may improve information retention overall. Comment: 12 pages, 8 figures, accepted at the International AAAI Conference on Web and Social Media (ICWSM 2023)
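
    The paper's validated measure is not given in detail in the abstract; the sketch below is a simplified stand-in that scores retention as the share of an abstract's most frequent non-stopword keywords that reappear in an online post. The keyword extraction, stopword list, and example texts are illustrative.

```python
# Simplified keyword-retention sketch (not the paper's validated measure):
# fraction of the abstract's top keywords preserved by an online post.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "are", "that", "for", "we", "on"}

def keywords(text, top_k=10):
    """Naive keyword extraction: most frequent non-stopword tokens."""
    tokens = [t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOPWORDS]
    return {w for w, _ in Counter(tokens).most_common(top_k)}

def retention(abstract, post):
    """Share of the abstract's top keywords that also appear in the post."""
    kw = keywords(abstract)
    post_tokens = set(re.findall(r"[a-z]+", post.lower()))
    return len(kw & post_tokens) / len(kw) if kw else 0.0

# Illustrative texts, not real data.
abstract = "Vaccination reduces severe outcomes; the cohort study covered three variants."
tweet = "New study: vaccination cuts severe outcomes across variants."
print(retention(abstract, tweet))  # fraction of abstract keywords retained by the post
```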

    Distances and component sizes in scale-free random graphs

    Get PDF