173 research outputs found
Collaborative Recommendation Model Based on Multi-modal Multi-view Attention Network: Movie and literature cases
The existing collaborative recommendation models that use multi-modal
information emphasize the representation of users' preferences but tend to
ignore the representation of users' dislikes. Modelling users' dislikes,
however, enables a more comprehensive characterization of user profiles. Thus,
the representation of users' dislikes should be integrated into user modelling
when constructing a collaborative recommendation model. In this paper, we
propose a novel Collaborative Recommendation Model based on Multi-modal
multi-view Attention Network (CRMMAN), in which the users are represented from
both preference and dislike views. Specifically, the users' historical
interactions are divided into positive and negative interactions, used to model
the user's preference and dislike views, respectively. Furthermore, the
semantic and structural information extracted from the scene is employed to
enrich the item representation. We validate CRMMAN through comparison
experiments on two benchmark datasets, MovieLens-1M and Book-Crossing, which
contain about one million and about 300,000 ratings, respectively. Compared
with state-of-the-art knowledge-graph-based and multi-modal recommendation
methods, CRMMAN improves AUC, NDCG@5 and NDCG@10 by 2.08%, 2.20% and 2.26% on
average across the two datasets. We also conduct controlled experiments to
explore the effects of the multi-modal information and the multi-view
mechanism; the results show that both enhance the model's performance.
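As an illustrative sketch (not the CRMMAN architecture itself), the first step the abstract describes is dividing a user's historical interactions into positive and negative sets to feed the preference and dislike views. The rating threshold of 4 on a 1-5 scale is an assumption for illustration:

```python
def split_interactions(ratings, threshold=4):
    """Partition (item, rating) pairs into preference and dislike views.

    Ratings at or above the threshold model the preference view;
    the rest model the dislike view.
    """
    preference = [item for item, r in ratings if r >= threshold]
    dislike = [item for item, r in ratings if r < threshold]
    return preference, dislike

history = [("Inception", 5), ("Cats", 1), ("Dune", 4), ("Gigli", 2)]
pref, disl = split_interactions(history)
print(pref)  # items feeding the preference view
print(disl)  # items feeding the dislike view
```

In the full model, each view would then be encoded by its own attention network over multi-modal item features; only the partitioning step is shown here.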
Unifying Gradients to Improve Real-world Robustness for Deep Networks
The wide application of deep neural networks (DNNs) demands increasing
attention to their real-world robustness, i.e., whether a DNN resists
black-box adversarial attacks. Among these, score-based query attacks (SQAs)
are the most threatening, since they can effectively hurt a victim network
with access only to model outputs. Defending against SQAs requires a slight
but artful variation of those outputs, because legitimate users rely on the
same output information as the attackers. In this paper, we propose a
real-world defense that Unifies the Gradients (UniG) of different data so that
SQAs can only probe a much weaker attack direction that is similar across
samples. Since such universal attack perturbations have been validated as less
aggressive than input-specific perturbations, UniG protects real-world DNNs by
presenting attackers with a twisted and less informative attack direction. We
implement UniG efficiently with a plug-and-play Hadamard product module.
According to
extensive experiments on 5 SQAs, 2 adaptive attacks and 7 defense baselines,
UniG significantly improves real-world robustness without hurting clean
accuracy on CIFAR10 and ImageNet. For instance, on CIFAR10, UniG maintains
77.80% accuracy under a 2500-query Square attack, while the state-of-the-art
adversarially trained model achieves only 67.34%. Simultaneously, UniG
outperforms all compared baselines in terms of clean accuracy and achieves the
smallest modification of the model output. The code is released at
https://github.com/snowien/UniG-pytorch
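A minimal sketch of the idea behind the plug-and-play Hadamard product module (see the released code for the real implementation): every sample's outputs are reweighted elementwise by a single shared parameter vector, so the output variation, and hence the gradient direction a query attacker can probe, looks similar across samples. The vector values below are assumptions for illustration:

```python
def hadamard_module(logits, shared_w):
    """Apply the same elementwise (Hadamard) reweighting to every sample.

    Because shared_w is common to all inputs, the modification it induces
    is universal rather than input-specific.
    """
    return [[x * w for x, w in zip(row, shared_w)] for row in logits]

batch_logits = [[2.0, -1.0, 0.5], [1.0, 3.0, -0.5]]
shared_w = [1.1, 0.9, 1.0]  # one vector shared by the whole batch
print(hadamard_module(batch_logits, shared_w))
```

The plug-and-play property comes from the module acting only on existing outputs, so it can wrap any trained classifier without retraining.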
The emerging landscape of Social Media Data Collection: anticipating trends and addressing future challenges
Social media has become a powerful tool for creating and sharing user-generated content across the internet. Its widespread use has generated an enormous amount of information, presenting a great opportunity for digital marketing. Through social media, companies can reach millions of potential consumers and capture valuable consumer data, which can be used to optimize marketing strategies and actions. The potential benefits and challenges of using social media for digital marketing are also attracting growing interest in the academic community. While social media offers companies the opportunity to reach a large audience and collect valuable consumer data, the volume of information generated can lead to unfocused marketing and negative consequences such as social overload. To make the most of social media marketing, companies need to collect reliable data for specific purposes, such as selling products, raising brand awareness, or fostering engagement, and to predict consumers' future behaviour. The availability of quality data can help build brand loyalty, but consumers' willingness to share information depends on their level of trust in the company or brand requesting it. This thesis therefore aims to address this research gap through a bibliometric analysis of the field, a mixed-methods analysis of the profiles and motivations of users who provide their data on social media, and a comparison of supervised and unsupervised algorithms for clustering consumers. This research draws on a database of more than 5.5 million data collections over a 10-year period.
Technological advances now enable sophisticated analysis and reliable predictions based on the captured data, which is especially useful for digital marketing. Several studies have explored digital marketing through social media, some focusing on a specific field while others adopt a multidisciplinary approach. However, given the rapidly evolving nature of the discipline, a bibliometric approach is required to capture and synthesize the most up-to-date information and add further value to studies in the field. The contributions of this thesis are therefore as follows. First, it provides a comprehensive review of the literature on methods for collecting consumers' personal data from social media for digital marketing and establishes the most relevant trends through the analysis of significant articles, keywords, authors, institutions, and countries. Second, this thesis identifies which user profiles lie the most and why. Specifically, this research shows that some user profiles are more inclined to make mistakes, while others provide false information intentionally. The study also shows that the main motivations behind providing false information include fun and a lack of trust in data privacy and security measures. Finally, this thesis aims to fill the gap in the literature on which algorithm, supervised or unsupervised, can best cluster consumers who provide their data on social media in order to predict their future behaviour.
Modeling Events and Interactions through Temporal Processes -- A Survey
In real-world scenarios, many phenomena produce a collection of events that
occur in continuous time. Point processes provide a natural mathematical
framework for modeling these sequences of events. In this survey, we
investigate probabilistic models for modeling event sequences through temporal
processes. We revisit the notion of event modeling and provide the mathematical
foundations that characterize the literature on the topic. We define an
ontology to categorize the existing approaches in terms of three families:
simple, marked, and spatio-temporal point processes. For each family, we
systematically review the existing approaches based on deep learning.
Finally, we analyze the scenarios where the proposed techniques can be used for
addressing prediction and modeling aspects.
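The simplest member of the survey's first family is the homogeneous Poisson process, whose inter-event times are i.i.d. exponential with the process rate. As a concrete illustration (not taken from the survey; rate and horizon are arbitrary), one event sequence can be sampled like this:

```python
import random

def sample_poisson_process(rate, horizon, rng):
    """Return event times of a homogeneous Poisson process on [0, horizon).

    Inter-event gaps are drawn i.i.d. from Exponential(rate), so on
    average the sequence contains rate * horizon events.
    """
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)  # exponential inter-event gap
        if t >= horizon:
            return times
        times.append(t)

rng = random.Random(0)
events = sample_poisson_process(rate=2.0, horizon=10.0, rng=rng)
print(len(events))  # roughly rate * horizon = 20 events in expectation
```

Marked and spatio-temporal processes extend this by attaching a label or a location to each event time, and the deep-learning approaches surveyed replace the constant rate with a learned history-dependent intensity.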
Six Human-Centered Artificial Intelligence Grand Challenges
Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood. Negative unintended consequences abound, including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making. We present six grand challenges for the scientific community to create AI technologies that are human-centered, that is, ethical, fair, and enhancing of the human condition. These grand challenges are the result of an international collaboration across academia, industry and government and represent the consensus views of a group of 26 experts in the field of human-centered artificial intelligence (HCAI). In essence, these challenges advocate for a human-centered approach to AI that (1) is centered on human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting humans' cognitive capacities. We hope that these challenges and their associated research directions serve as a call to action to conduct research and development in AI that acts as a force multiplier towards fairer, more equitable and sustainable societies.
Representation Learning for Texts and Graphs: A Unified Perspective on Efficiency, Multimodality, and Adaptability
[...] This thesis is situated between natural language processing and graph representation learning and investigates selected connections. First, we introduce matrix embeddings as an efficient text representation sensitive to word order. [...] Experiments with ten linguistic probing tasks, 11 supervised, and five unsupervised downstream tasks reveal that vector and matrix embeddings have complementary strengths and that a jointly trained hybrid model outperforms both. Second, a popular pretrained language model, BERT, is distilled into matrix embeddings. [...] The results on the GLUE benchmark show that these models are competitive with other recent contextualized language models while being more efficient in time and space. Third, we compare three model types for text classification: bag-of-words, sequence-, and graph-based models. Experiments on five datasets show that, surprisingly, a wide multilayer perceptron on top of a bag-of-words representation is competitive with recent graph-based approaches, questioning the necessity of graphs synthesized from the text. [...] Fourth, we investigate the connection between text and graph data in document-based recommender systems for citations and subject labels. Experiments on six datasets show that the title as side information improves the performance of autoencoder models. [...] We find that the meaning of item co-occurrence is crucial for the choice of input modalities and an appropriate model. Fifth, we introduce a generic framework for lifelong learning on evolving graphs in which new nodes, edges, and classes appear over time. [...] The results show that by reusing previous parameters in incremental training, it is possible to employ smaller history sizes with only a slight decrease in accuracy compared to training with complete history. Moreover, weighting the binary cross-entropy loss function is crucial to mitigate the problem of class imbalance when detecting newly emerging classes. [...
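One of the thesis's findings is that a wide multilayer perceptron over a plain bag-of-words representation is competitive with graph-based text classifiers. As a hedged illustration of only the representation that finding rests on (documents and tokenization are invented for the example):

```python
from collections import Counter

def bag_of_words(docs):
    """Map each document to a count vector over the shared vocabulary.

    Word order is discarded entirely, which is exactly what the
    bag-of-words baseline trades away against sequence and graph models.
    """
    vocab = sorted({tok for doc in docs for tok in doc.split()})
    index = {tok: i for i, tok in enumerate(vocab)}
    vectors = []
    for doc in docs:
        vec = [0] * len(vocab)
        for tok, n in Counter(doc.split()).items():
            vec[index[tok]] = n
        vectors.append(vec)
    return vocab, vectors

vocab, X = bag_of_words(["graph neural nets", "bag of words beats graph"])
print(vocab)
print(X)
```

A classifier would then consume these fixed-length vectors directly; no graph synthesized from the text is needed at any point.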
Learning Disentangled Representations in Signed Directed Graphs without Social Assumptions
Signed graphs are complex systems that represent trust relationships or
preferences in various domains. Learning node representations in such graphs is
crucial for many mining tasks. Although real-world signed relationships can be
influenced by multiple latent factors, most existing methods often oversimplify
the modeling of signed relationships by relying on social theories and treating
them as simplistic factors. This limits their expressiveness and their ability
to capture the diverse factors that shape these relationships. In this paper,
we propose DINES, a novel method for learning disentangled node representations
in signed directed graphs without social assumptions. We adopt a disentangled
framework that separates each embedding into distinct factors, allowing for
capturing multiple latent factors. We also explore lightweight graph
convolutions that focus solely on sign and direction, without depending on
social theories. Additionally, we propose a decoder that effectively classifies
an edge's sign by considering correlations between the factors. To further
enhance disentanglement, we jointly train a self-supervised factor
discriminator with our encoder and decoder. Throughout extensive experiments on
real-world signed directed graphs, we show that DINES effectively learns
disentangled node representations, and significantly outperforms its
competitors in the sign prediction task.Comment: 26 pages, 11 figure
Efficient Path Enumeration and Structural Clustering on Massive Graphs
Graph analysis plays a crucial role in understanding the relationships and structures within complex systems. This thesis focuses on addressing fundamental problems in graph analysis, including hop-constrained s-t simple path (HC-s-t path) enumeration, batch HC-s-t path query processing, and graph structural clustering (SCAN). The objective is to develop efficient and scalable distributed algorithms to tackle these challenges, particularly in the context of billion-scale graphs.
We first explore the problem of HC-s-t path enumeration. Existing solutions for this problem often suffer from inefficiency and scalability limitations, especially when dealing with billion-scale graphs. To overcome these drawbacks, we propose a novel hybrid search paradigm specifically tailored for HC-s-t path enumeration. This paradigm combines different search strategies to effectively explore the solution space. Building upon this paradigm, we devise a distributed enumeration algorithm that follows a divide-and-conquer strategy, incorporates fruitless exploration pruning, and optimizes memory consumption. Experimental evaluations on various datasets demonstrate that our algorithm achieves a significant speedup compared to existing solutions, even on datasets where they encounter out-of-memory issues.
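For reference, a minimal single-machine baseline for HC-s-t path enumeration, depth-first search that extends only simple paths and stops at the hop budget, looks as follows. The distributed hybrid-search algorithm in the thesis is far more involved; this sketch and its example graph are only meant to pin down the problem:

```python
def enumerate_hc_paths(graph, s, t, max_hops):
    """Return every simple path from s to t with at most max_hops edges."""
    results, path = [], [s]

    def dfs(u, hops_left):
        if u == t:
            results.append(list(path))
            return
        if hops_left == 0:  # hop budget exhausted: prune this branch
            return
        for v in graph.get(u, []):
            if v not in path:  # keep the path simple
                path.append(v)
                dfs(v, hops_left - 1)
                path.pop()

    dfs(s, max_hops)
    return results

graph = {"s": ["a", "b"], "a": ["t", "b"], "b": ["t"]}
print(enumerate_hc_paths(graph, "s", "t", max_hops=3))
```

On billion-scale graphs this naive search wastes effort on branches that can never reach t within the remaining budget, which is precisely the fruitless exploration the thesis's pruning targets.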
Secondly, we address the problem of batch HC-s-t path query processing. In real-world scenarios, it is common to issue multiple HC-s-t path queries simultaneously and process them as a batch. However, existing solutions often focus on optimizing the processing performance of individual queries, disregarding the benefits of processing queries concurrently. To bridge this gap, we propose the concept of HC-s path queries, which captures the common computation among different queries. We design a two-phase HC-s path query detection algorithm to identify the shared computation for a given set of HC-s-t path queries. Based on the detected HC-s path queries, we develop an efficient HC-s-t path enumeration algorithm that effectively shares the common computation. Extensive experiments on diverse datasets validate the efficiency and scalability of our algorithm for processing multiple HC-s-t path queries concurrently.
Thirdly, we investigate the problem of graph structural clustering (SCAN) in billion-scale graphs. Existing distributed solutions for SCAN often lack efficiency or suffer from high memory consumption, making them impractical for large-scale graphs. To overcome these challenges, we propose a fine-grained clustering framework specifically tailored for SCAN. This framework enables effective identification of cohesive subgroups within a graph. Building upon this framework, we devise a distributed SCAN algorithm that minimizes communication overhead and reduces memory consumption throughout the execution. We also incorporate an effective workload balance mechanism that dynamically adjusts to handle skewed workloads. Experimental evaluations on real-world graphs demonstrate the efficiency and scalability of our proposed algorithm.
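The core primitive underlying SCAN-style structural clustering is the structural similarity between adjacent nodes: the size of their common closed neighbourhood normalised by the geometric mean of the closed neighbourhood sizes. This single-machine sketch shows the measure only; the thesis's contribution is the distributed framework built on top of it, and the example graph is illustrative:

```python
import math

def structural_similarity(graph, u, v):
    """Cosine-style similarity of the closed neighbourhoods of u and v.

    Values near 1 mean u and v share most of their neighbours, so they
    likely belong to the same cohesive subgroup (cluster).
    """
    nu = set(graph[u]) | {u}  # closed neighbourhood of u
    nv = set(graph[v]) | {v}
    return len(nu & nv) / math.sqrt(len(nu) * len(nv))

graph = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(structural_similarity(graph, 1, 2))  # nodes in a triangle: 1.0
```

SCAN then thresholds this similarity to decide which edges connect cluster members, and labels the remaining nodes as hubs or outliers.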
Overall, this thesis contributes novel distributed algorithms for HC-s-t path enumeration, batch HC-s-t path query processing, and graph structural clustering. The proposed algorithms address the efficiency and scalability challenges in graph analysis, particularly on billion-scale graphs. Extensive experimental evaluations validate the superiority of our algorithms compared to existing solutions, enabling efficient and scalable graph analysis in complex systems.
Robust Multimodal Failure Detection for Microservice Systems
Proactive failure detection of instances is vitally essential to microservice
systems because an instance failure can propagate to the whole system and
degrade the system's performance. Over the years, many anomaly detection
methods based on single-modal data (i.e., metrics, logs, or traces) have been
proposed. However, they tend to miss a large number of failures and generate
numerous false alarms because they ignore the correlations among multimodal
data.
In this work, we propose AnoFusion, an unsupervised failure detection approach,
to proactively detect instance failures through multimodal data for
microservice systems. It applies a Graph Transformer Network (GTN) to learn the
correlation of the heterogeneous multimodal data and integrates a Graph
Attention Network (GAT) with Gated Recurrent Unit (GRU) to address the
challenges introduced by dynamically changing multimodal data. We evaluate the
performance of AnoFusion on two datasets, demonstrating that it achieves
F1-scores of 0.857 and 0.922, respectively, outperforming state-of-the-art
failure detection approaches.
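A toy illustration of why correlating modalities matters (this is not AnoFusion's GTN/GAT/GRU architecture, and the data are invented): a failure that looks only mildly unusual in each single modality can stand out clearly once per-modality anomaly scores are combined across metrics and logs:

```python
import statistics

def zscores(series):
    """Standardise a series by its mean and population standard deviation."""
    mu, sd = statistics.mean(series), statistics.pstdev(series)
    return [(x - mu) / sd for x in series]

def fused_scores(modalities):
    """Average the per-modality z-scores at each time step.

    A spike that is consistent across modalities survives the averaging,
    while single-modality noise is dampened.
    """
    per_modality = [zscores(s) for s in modalities.values()]
    return [sum(col) / len(col) for col in zip(*per_modality)]

metrics = [10, 11, 10, 14]  # e.g. latency samples per time step
logs = [2, 1, 2, 5]         # e.g. error-log counts per time step
scores = fused_scores({"metrics": metrics, "logs": logs})
print(max(range(len(scores)), key=scores.__getitem__))  # index of the spike
```

AnoFusion replaces this naive averaging with a learned graph over the heterogeneous modalities, which is what lets it handle their dynamic correlations.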
FreSh: A Lock-Free Data Series Index
We present FreSh, a lock-free data series index that exhibits good
performance while being robust. FreSh is based on Refresh, a generic approach
we have developed for supporting lock-freedom efficiently on top of any
locality-aware data series index. We believe Refresh is of independent
interest and can be used to obtain well-performing lock-free versions of other
locality-aware blocking data structures. For developing FreSh, we
first studied in depth the design decisions of current state-of-the-art data
series indexes, and the principles governing their performance. This led to a
theoretical framework, which enables the development and analysis of data
series indexes in a modular way. The framework allowed us to apply Refresh,
repeatedly, to get lock-free versions of the different phases of a family of
data series indexes. Experiments with several synthetic and real datasets
illustrate that FreSh achieves performance that is as good as that of the
state-of-the-art blocking in-memory data series index. This shows that the
helping mechanisms of FreSh are lightweight, respecting certain principles
that are crucial for performance in locality-aware data structures. This paper
was published in SRDS 2023 (Symposium on Reliable Distributed Systems).