Mining Butterflies in Streaming Graphs
This thesis introduces two main-memory systems sGrapp and sGradd for performing the fundamental analytic tasks of biclique counting and concept drift detection over a streaming graph. A data-driven heuristic is used to architect the systems. To this end, initially, the growth patterns of bipartite streaming graphs are mined and the emergence principles of streaming motifs are discovered. Next, the discovered principles are (a) explained by a graph generator called sGrow; and (b) utilized to establish the requirements for efficient, effective, explainable, and interpretable management and processing of streams. sGrow is used to benchmark stream analytics, particularly in the case of concept drift detection.
sGrow displays robust realization of streaming growth patterns independent of initial conditions, scale and temporal characteristics, and model configurations. Extensive evaluations confirm that sGrapp and sGradd are simultaneously effective and efficient. sGrapp achieves a mean absolute percentage error of up to 0.05/0.14 for the cumulative butterfly count in streaming graphs with uniform/non-uniform temporal distribution, and a processing throughput of 1.5 million data records per second. The throughput of sGrapp is up to 160x higher than that of baselines, and its estimation error is 0.02x that of baselines. sGradd demonstrates improving performance over time, achieves zero false detection rates both when no drift is present and after a drift has already been detected, and detects sequential drifts within zero to a few seconds of their occurrence, regardless of drift intervals.
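For context, a butterfly is a (2,2)-biclique (a four-vertex cycle across the two partitions) in a bipartite graph — the motif whose cumulative count sGrapp approximates. A minimal exact counter via common-neighbor (wedge) counting — a generic textbook method, not sGrapp's streaming approximation — can be sketched as:

```python
from collections import defaultdict
from itertools import combinations
from math import comb

def count_butterflies(edges):
    """Exact butterfly ((2,2)-biclique) count in a bipartite graph.

    edges: iterable of (u, v) pairs, u from the left partition,
    v from the right partition.
    """
    adj = defaultdict(set)  # left vertex -> set of right neighbors
    for u, v in edges:
        adj[u].add(v)

    # Every pair of left vertices with c common right neighbors
    # contributes C(c, 2) butterflies.
    total = 0
    for u, w in combinations(adj, 2):
        c = len(adj[u] & adj[w])
        total += comb(c, 2)
    return total

# A complete 2x2 biclique is exactly one butterfly.
print(count_butterflies([("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]))  # 1
```

This exact pass over vertex pairs is quadratic in the left partition size, which is precisely why streaming systems such as sGrapp resort to approximation.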
SAR-to-Optical Image Translation via Thermodynamics-inspired Network
Synthetic aperture radar (SAR) is prevalent in the remote sensing field but
is difficult for human visual perception to interpret. Recently, SAR-to-optical
(S2O) image conversion methods have provided a prospective solution for
interpretation. However, since there is a huge domain gap between
optical and SAR images, the produced optical images suffer from low quality
and geometric distortion. Motivated by the analogy between
pixels during the S2O image translation and molecules in a heat field,
Thermodynamics-inspired Network for SAR-to-Optical Image Translation (S2O-TDN)
is proposed in this paper. Specifically, we design a Third-order Finite
Difference (TFD) residual structure in light of the TFD equation of
thermodynamics, which allows us to efficiently extract inter-domain invariant
features and facilitate the learning of the nonlinear translation mapping. In
addition, we exploit the first law of thermodynamics (FLT) to devise an
FLT-guided branch that promotes the state transition of the feature values from
the unstable diffusion state to the stable one, aiming to regularize the
feature diffusion and preserve image structures during S2O image translation.
S2O-TDN follows an explicit design principle derived from thermodynamic theory
and enjoys the advantage of explainability. Experiments on the public SEN1-2
dataset show the advantages of the proposed S2O-TDN over current methods,
yielding finer textures and better quantitative results.
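The TFD residual structure is motivated by a third-order finite difference equation from thermodynamics. As a purely illustrative sketch of the underlying numerical idea (not the paper's network architecture), the central third-order difference approximates a third derivative as follows:

```python
def third_order_fd(f, x, h=1e-2):
    """Central finite-difference approximation of the third derivative:

        f'''(x) ~= [f(x+2h) - 2f(x+h) + 2f(x-h) - f(x-2h)] / (2 h^3)

    This stencil is exact for polynomials up to degree four.
    """
    return (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2 * h**3)

# For f(x) = x^3 the third derivative is exactly 6 everywhere.
print(round(third_order_fd(lambda x: x**3, 1.0), 6))  # 6.0
```

In a network, residual branches built from such stencils combine features at several offsets with fixed sign patterns, which is the analogy the TFD residual structure draws on.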
Artificial Intelligence and International Conflict in Cyberspace
This edited volume explores how artificial intelligence (AI) is transforming international conflict in cyberspace. Over the past three decades, cyberspace developed into a crucial frontier and issue of international conflict. However, scholarly work on the relationship between AI and conflict in cyberspace has been produced along somewhat rigid disciplinary boundaries and an even more rigid sociotechnical divide – wherein technical and social scholarship are seldom brought into conversation. This is the first volume to address these themes through a comprehensive and cross-disciplinary approach. With the intent of exploring the question ‘what is at stake with the use of automation in international conflict in cyberspace through AI?’, the chapters in the volume focus on three broad themes, namely: (1) technical and operational, (2) strategic and geopolitical and (3) normative and legal. These also constitute the three parts in which the chapters of this volume are organised, although these thematic sections should not be considered as an analytical or a disciplinary demarcation.
Meta-optimized Contrastive Learning for Sequential Recommendation
Contrastive Learning (CL) has emerged as a promising approach to address the
challenge of sparse and noisy recommendation data. Although they have achieved
promising results, most existing CL methods apply only hand-crafted data
augmentation or model augmentation to generate contrastive pairs, and finding a
proper augmentation operation for different datasets is difficult, which makes
such models hard to generalize. Additionally, since insufficient input data may
lead the encoder to learn collapsed embeddings, these CL methods require a
relatively large amount of training data (e.g., a large batch size or memory
bank) to contrast. However, not all contrastive pairs are informative and
discriminative enough for training. Therefore, a more general CL-based recommendation
model called Meta-optimized Contrastive Learning for sequential Recommendation
(MCLRec) is proposed in this work. By applying both data augmentation and
learnable model augmentation operations, this work innovates the standard CL
framework by contrasting data and model augmented views for adaptively
capturing the informative features hidden in stochastic data augmentation.
Moreover, MCLRec utilizes a meta-learning manner to guide the updating of the
model augmenters, which helps to improve the quality of contrastive pairs
without enlarging the amount of input data. Finally, a contrastive
regularization term is considered to encourage the augmentation model to
generate more informative augmented views and avoid too similar contrastive
pairs within the meta updating. The experimental results on commonly used
datasets validate the effectiveness of MCLRec.
Comment: 11 pages, 8 figures
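The standard CL framework that MCLRec extends contrasts two augmented views of the same batch with an InfoNCE-style loss. A minimal numpy sketch of that generic loss (not MCLRec's meta-optimized variant) is:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss over a batch of paired views.

    z1, z2: (batch, dim) embeddings of two augmented views; row i of z1
    and row i of z2 form the positive pair, all other rows in the batch
    act as in-batch negatives. tau is the softmax temperature.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                        # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
print(info_nce(z, z))  # identical views -> near-minimal loss for this batch
```

MCLRec's contribution sits on top of such a loss: the augmented views are produced by learnable augmenters that are themselves updated in a meta-learning loop, rather than by fixed hand-crafted operations.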
A GPT-Based Approach for Scientometric Analysis: Exploring the Landscape of Artificial Intelligence Research
This study presents a comprehensive approach that addresses the challenges of
scientometric analysis in the rapidly evolving field of Artificial Intelligence
(AI). By combining search terms related to AI with the advanced language
processing capabilities of generative pre-trained transformers (GPT), we
developed a highly accurate method for identifying and analyzing AI-related
articles in the Web of Science (WoS) database. Our multi-step approach included
filtering articles based on WoS citation topics, category, keyword screening,
and GPT classification. We evaluated the effectiveness of our method through
precision and recall calculations, finding that our combined approach captured
around 94% of AI-related articles in the entire WoS corpus with a precision of
90%. Following this, we analyzed the publication volume trends, revealing a
continuous growth pattern from 2013 to 2022 and an increasing degree of
interdisciplinarity. We conducted citation analysis on the top countries and
institutions and identified common research themes using keyword analysis and
GPT. This study demonstrates the potential of our approach to facilitate
accurate scientometric analysis, by providing insights into the growth,
interdisciplinary nature, and key players in the field.
Comment: 29 pages, 10 figures, 5 tables
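The precision and recall figures reported above follow the standard set-based definitions. As a minimal sketch, with hypothetical article ids rather than the actual WoS data:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of a retrieved set against a gold-standard set."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)                 # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# Toy example: 5 articles retrieved, 4 truly AI-related, 3 overlap.
p, r = precision_recall(retrieved={1, 2, 3, 4, 5}, relevant={2, 3, 4, 6})
print(p, r)  # 0.6 0.75
```

In the study's setting, "retrieved" corresponds to articles flagged by the combined filtering-plus-GPT pipeline and "relevant" to a manually validated gold set.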
Toward Sustainable Recommendation Systems
Recommendation systems are ubiquitous, acting as an essential component in online platforms to help users discover items of interest. For example, streaming services rely on recommendation systems to serve high-quality informational and entertaining content to their users, and e-commerce platforms recommend interesting items to assist customers in making shopping decisions. Furthermore, the algorithms and frameworks driving recommendation systems provide the foundation for new personalized machine learning methods that have wide-ranging impacts.
While successful, many current recommendation systems are fundamentally not sustainable: they focus on short-lived engagement objectives, requiring constant fine-tuning to adapt to the dynamics of evolving systems, or are subject to performance degradation as users and items churn in the system. In this dissertation research, we seek to lay the foundations for a new class of sustainable recommendation systems. By sustainable, we mean a recommendation system should be fundamentally long-lived, while enhancing both current and future potential to connect users with interesting content. By building such sustainable recommendation systems, we can continuously improve the user experience and provide a long-lived foundation for ongoing engagement. Building on a large body of work in recommendation systems, with the advance in graph neural networks, and with recent success in meta-learning for ML-based models, this dissertation focuses on sustainability in recommendation systems from the following three perspectives with corresponding contributions:
• Adaptivity: The first contribution lies in capturing the temporal effects from the instant shifting of users’ preferences to the lifelong evolution of users and items in real-world scenarios, leading to models which are highly adaptive to the temporal dynamics present in online platforms and provide improved item recommendation at different timestamps.
• Resilience: Secondly, we seek to identify the elite users who act as the “backbone” of recommendation systems and shape the opinions of other users via their public activities. By investigating the correlation between users’ preferences in item consumption and their connections to the “backbone”, we enable recommendation models to be resilient to dramatic changes, including churn of new items and users and frequently updated connections between users in online communities.
• Robustness: Finally, we explore the design of a novel framework for “learning-to-adapt” to imperfect test cases in recommendation systems, ranging from cold-start users with few interactions to casual users with low activity levels. Such a model is robust to the imperfections of real-world environments, resulting in reliable recommendations that meet user needs and aspirations.
Understanding User Intent Modeling for Conversational Recommender Systems: A Systematic Literature Review
Context: User intent modeling is a crucial process in Natural Language
Processing that aims to identify the underlying purpose behind a user's
request, enabling personalized responses. With a vast array of approaches
introduced in the literature (over 13,000 papers in the last decade),
understanding the related concepts and commonly used models in AI-based systems
is essential. Method: We conducted a systematic literature review to gather
data on models typically employed in designing conversational recommender
systems. From the collected data, we developed a decision model to assist
researchers in selecting the most suitable models for their systems.
Additionally, we performed two case studies to evaluate the effectiveness of
our proposed decision model. Results: Our study analyzed 59 distinct models and
identified 74 commonly used features. We provided insights into potential model
combinations, trends in model selection, quality concerns, evaluation measures,
and frequently used datasets for training and evaluating these models.
Contribution: Our study contributes practical insights and a comprehensive
understanding of user intent modeling, empowering the development of more
effective and personalized conversational recommender systems. With our
decision model, researchers can perform a more systematic and efficient
assessment of fitting intent modeling frameworks.
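A decision model of this kind typically scores candidate models against a researcher's weighted requirements. The following sketch is a generic weighted-criteria ranking, with hypothetical feature and model names for illustration only (the paper's actual 74 features and 59 models are not reproduced here):

```python
def rank_models(models, weights):
    """Rank candidate intent-modeling approaches by weighted feature support.

    models:  {name: {feature: support score in [0, 1]}}
    weights: {feature: importance}  -- the researcher's requirements
    """
    scores = {
        name: sum(weights.get(f, 0.0) * s for f, s in feats.items())
        for name, feats in models.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical candidates and requirements, for illustration only.
candidates = {
    "intent-classifier": {"multi-turn": 0.3, "explainability": 0.9},
    "llm-prompting":     {"multi-turn": 0.9, "explainability": 0.4},
}
needs = {"multi-turn": 0.8, "explainability": 0.2}
print(rank_models(candidates, needs)[0][0])  # llm-prompting
```

The point of such a model is to make the selection rationale explicit and repeatable, which is what the two case studies in the review evaluate.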
Ensemble deep clustering analysis for time window determination of event-related potentials
Data availability:
Data will be made available on request. Copyright © 2023 The Authors.
Objective:
Cluster analysis of spatio-temporal event-related potential (ERP) data is a promising tool for exploring the measurement time window of ERPs. However, even after preprocessing, the remaining noise can result in uncertain cluster maps followed by unreliable time windows while clustering via conventional clustering methods.
Methods:
We designed an ensemble deep clustering pipeline to determine a reliable time window for the ERP of interest from temporal concatenated grand average ERP data. The proposed pipeline includes semi-supervised deep clustering methods initialized by consensus clustering and unsupervised deep clustering methods with end-to-end architectures. Ensemble clustering from those deep clusterings was used by the designed adaptive time window determination to estimate the time window.
Results:
When applied to simulated and real ERP data, our method successfully obtained the time window for identifying the P3 component (the ERP of interest in both studies), even when additional white Gaussian noise (from 20 dB down to −5 dB SNR) was added to the prepared data.
Conclusion:
Compared to state-of-the-art clustering methods, our method yielded superior clustering performance on both ERP datasets. Furthermore, it elicited more stable and precise time windows as the noise increased.
Significance:
Our study provides an understanding of identifying cognitive processes using deep clustering analysis that complements existing studies. Our findings suggest that deep clustering can be used to identify the ERP of interest when the data remain imperfect after preprocessing.
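One standard way to combine multiple clusterings into an ensemble, as the pipeline above does, is a co-association (consensus) matrix. The sketch below shows that generic technique only, not the paper's specific deep clustering ensemble:

```python
import numpy as np

def co_association(labelings):
    """Consensus (co-association) matrix from multiple base clusterings.

    labelings: list of 1-D integer label arrays, one per base clustering,
    all over the same n samples. Entry (i, j) is the fraction of base
    clusterings that assign samples i and j to the same cluster; this
    matrix can then seed a final consensus clustering.
    """
    labelings = [np.asarray(labels) for labels in labelings]
    n = len(labelings[0])
    m = np.zeros((n, n))
    for labels in labelings:
        m += labels[:, None] == labels[None, :]   # 1 where co-clustered
    return m / len(labelings)

# Three base clusterings of four samples; samples 0 and 1 always co-cluster.
M = co_association([[0, 0, 1, 1], [0, 0, 0, 1], [1, 1, 0, 0]])
print(M[0, 1], M[0, 3])  # 1.0 0.0
```

High co-association values indicate sample pairs (here, time points) that base clusterings consistently group together, which is the kind of stability signal an adaptive time-window determination can exploit.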