
    Comparative Analysis of Functionality and Aspects for Hybrid Recommender Systems

    Recommender systems are gradually becoming the backbone of profitable businesses that interact with users, mainly on the web. These systems have access to large amounts of user interaction data that can be used to improve them. They apply machine learning and data mining techniques to determine which products and features to suggest to each user, an essential function since offering the right product at the right time can increase revenue. This paper focuses on the importance of different kinds of hybrid recommenders: it first explains the various types of recommenders in use, then motivates the need for hybrid systems and describes their multiple kinds, before giving a comparative analysis of each. Since content-based and collaborative filtering systems are widely used, the comparison pays particular attention to how these measure up against hybrid recommender systems.
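
    To make the comparison concrete, a weighted hybrid can be sketched as a blend of a content-based score and a collaborative score. The following is a minimal illustration, not taken from the paper; all function names, the cosine-similarity choices, and the blend weight `alpha` are assumptions.

```python
import numpy as np

def content_scores(item_features, user_profile):
    # Cosine similarity between each item's feature vector and the user profile.
    norms = np.linalg.norm(item_features, axis=1) * np.linalg.norm(user_profile)
    return item_features @ user_profile / np.where(norms == 0, 1, norms)

def collaborative_scores(ratings, user_idx):
    # User-based CF: weight other users' ratings by their cosine similarity
    # to the target user, then average.
    sims = ratings @ ratings[user_idx]
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user_idx])
    sims = sims / np.where(norms == 0, 1, norms)
    sims[user_idx] = 0.0  # exclude the target user themselves
    denom = np.abs(sims).sum() or 1.0
    return sims @ ratings / denom

def hybrid_scores(ratings, item_features, user_profile, user_idx, alpha=0.5):
    # Weighted hybrid: blend the collaborative and content-based score vectors.
    return (alpha * collaborative_scores(ratings, user_idx)
            + (1 - alpha) * content_scores(item_features, user_profile))
```

    Varying `alpha` moves the system between the two pure strategies, which is one axis along which such hybrids are typically compared.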

    Collaborative Decision-Making Using Spatiotemporal Graphs in Connected Autonomy

    Collaborative decision-making is an essential capability for multi-robot systems, such as connected vehicles, to collaboratively control autonomous vehicles in accident-prone scenarios. Under limited communication bandwidth, capturing comprehensive situational awareness by integrating connected agents' observations is very challenging. In this paper, we propose a novel collaborative decision-making method that efficiently and effectively integrates collaborators' representations to control the ego vehicle in accident-prone scenarios. Our approach formulates collaborative decision-making as a classification problem. We first represent sequences of raw observations as spatiotemporal graphs, which significantly reduces the size of the data packages shared among connected vehicles. Then we design a novel spatiotemporal graph neural network based on heterogeneous graph learning, which analyzes spatial and temporal connections of objects in a unified way for collaborative decision-making. We evaluate our approach using a high-fidelity simulator that models realistic traffic, communication bandwidth, and vehicle sensing among connected autonomous vehicles. The experimental results show that our representation achieves an over 100x reduction in shared data size while meeting the communication-bandwidth requirements of connected autonomous driving. In addition, our approach achieves over 30% improvement in driving safety.
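
    The compression step can be illustrated with a simple spatiotemporal graph construction: nodes are (frame, object) pairs, spatial edges connect nearby objects within a frame, and temporal edges link the same object across consecutive frames. This is a generic sketch of the idea, not the paper's exact graph construction; the distance threshold `radius` and the edge labels are assumptions.

```python
import numpy as np

def build_spatiotemporal_graph(tracks, radius=10.0):
    """tracks: (T, N, 2) array of object positions over T frames.
    Returns nodes as (frame, object) pairs and typed edges: 'spatial'
    edges within a frame (by distance), 'temporal' edges across frames."""
    T, N, _ = tracks.shape
    nodes = [(t, i) for t in range(T) for i in range(N)]
    edges = []
    for t in range(T):
        for i in range(N):
            for j in range(i + 1, N):
                if np.linalg.norm(tracks[t, i] - tracks[t, j]) <= radius:
                    edges.append(((t, i), (t, j), "spatial"))
            if t + 1 < T:
                edges.append(((t, i), (t + 1, i), "temporal"))
    return nodes, edges
```

    Sharing such a sparse graph instead of raw sensor frames is what makes large reductions in transmitted data size plausible.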

    Imitation Learning for Swarm Control using Variational Inference

    Swarms are groups of robots that can coordinate, cooperate, and communicate to achieve tasks that may be impossible for a single robot. These systems exhibit complex dynamical behavior, similar to that observed in physics, neuroscience, finance, biology, and social and communication networks. In biology, for instance, schools of fish, swarms of bacteria, and colonies of termites exhibit flocking behavior to achieve both simple and complex tasks. Modeling the dynamics of flocking in animals is challenging because we usually lack full knowledge of the system's dynamics and of how individual agents interact; the environment of swarms is also very noisy and chaotic, and we can typically observe only the individual trajectories of the agents. This work presents a technique to discover the underlying governing dynamics of these systems, and how the agents interact, from observation data alone, using variational inference in an unsupervised manner. This is done by modeling the observed system dynamics as graphs and reconstructing the dynamics using variational autoencoders with multiple message-passing operations in the encoder and decoder. With this understanding of the complex behavior of animal swarms, we can imitate flocking behavior in robotic systems and perform decentralized control of robotic swarms. The approach relies on data-driven model discovery to learn local decentralized controllers that mimic the motion constraints and policies of animal flocks. To verify and validate the technique, experiments were conducted on observations of schools of fish and on synthetic data from the boids model.
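
    The encoder side of such a graph-based variational model can be sketched as alternating node-to-edge and edge-to-node message passing that maps observed trajectories to a posterior over pairwise interaction types. The sketch below uses random weights in place of learned parameters and fixed sizes; it illustrates the message-passing shape only, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w, b):
    return np.tanh(x @ w + b)

def infer_interactions(traj, n_hidden=16, n_edge_types=2):
    """traj: (N, T, D) observed agent trajectories. Returns, for every
    directed edge, a posterior over interaction types, computed via
    node -> edge -> node -> edge message passing."""
    N, T, D = traj.shape
    pairs = [(i, j) for i in range(N) for j in range(N) if i != j]
    senders = np.array([i for i, _ in pairs])
    receivers = np.array([j for _, j in pairs])
    # Random weights stand in for learned parameters in this sketch.
    w1 = rng.normal(size=(T * D, n_hidden)); b1 = np.zeros(n_hidden)
    w2 = rng.normal(size=(2 * n_hidden, n_hidden)); b2 = np.zeros(n_hidden)
    w3 = rng.normal(size=(n_hidden, n_hidden)); b3 = np.zeros(n_hidden)
    w4 = rng.normal(size=(2 * n_hidden, n_edge_types)); b4 = np.zeros(n_edge_types)

    h = mlp(traj.reshape(N, T * D), w1, b1)                          # node embeddings
    m = mlp(np.concatenate([h[senders], h[receivers]], 1), w2, b2)   # node -> edge
    agg = np.zeros((N, n_hidden)); np.add.at(agg, receivers, m)      # edge -> node
    h2 = mlp(agg, w3, b3)
    logits = np.concatenate([h2[senders], h2[receivers]], 1) @ w4 + b4
    logits -= logits.max(1, keepdims=True)                           # stable softmax
    probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
    return senders, receivers, probs
```

    A decoder with the mirror-image message-passing structure would then roll the trajectories forward under the sampled interaction graph, giving the reconstruction term of the variational objective.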

    Diffusion-Stego: Training-free Diffusion Generative Steganography via Message Projection

    Generative steganography is the process of hiding secret messages in generated images instead of cover images. Existing studies on generative steganography use GAN or flow models to obtain high hiding capacity and anti-detection ability over cover images. However, they create relatively unrealistic stego images because of the inherent limitations of those generative models. We propose Diffusion-Stego, a generative steganography approach based on diffusion models, which outperform other generative models in image generation. Diffusion-Stego projects secret messages into the latent noise of diffusion models and generates stego images with an iterative denoising process. Since naively hiding secret messages in the noise degrades visual quality and decreases extracted-message accuracy, we introduce message projection, which hides messages in the noise space while addressing these issues. We suggest three options for message projection to adjust the trade-off between extracted-message accuracy, anti-detection ability, and image quality. Diffusion-Stego is a training-free approach, so it can be applied to pre-trained diffusion models that generate high-quality images, or even to large-scale text-to-image models such as Stable Diffusion. Diffusion-Stego achieves a high message capacity (3.0 bpp of binary messages with 98% accuracy, and 6.0 bpp with 90% accuracy) as well as high image quality (an FID score of 2.77 for 1.0 bpp on the FFHQ 64×64 dataset), making its stego images challenging to distinguish from real images in the PNG format.
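
    The core idea of projecting a message into the sampler's latent noise can be illustrated with a crude sign-based projection: each bit fixes the sign of one latent entry while the magnitude stays half-normal, so the result is still approximately standard Gaussian. This is only a simplistic stand-in for the paper's three message-projection options, which are designed to control the accuracy/quality trade-off more carefully.

```python
import numpy as np

def project_message(bits, rng=None):
    """Map binary bits into pseudo-Gaussian latent noise: bit 1 -> positive
    entry, bit 0 -> negative entry, magnitudes drawn from a half-normal.
    A diffusion sampler expecting standard-normal noise can start from this."""
    rng = rng or np.random.default_rng(0)
    mags = np.abs(rng.standard_normal(len(bits)))
    return np.where(np.asarray(bits) == 1, mags, -mags)

def extract_message(noise):
    # Recover bits from the signs of the (inverted/reconstructed) latent noise.
    return (noise > 0).astype(int)
```

    In the actual pipeline the stego image is produced by denoising this latent, and extraction requires inverting the deterministic sampler back to the noise before reading off the bits.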

    From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers

    In this paper we provide, to the best of our knowledge, the first comprehensive approach for incorporating various masking mechanisms into Transformer architectures in a scalable way. We show that recent results on linear causal attention (Choromanski et al., 2021) and log-linear RPE-attention (Luo et al., 2021) are special cases of this general mechanism. However, by casting the problem as a topological (graph-based) modulation of unmasked attention, we obtain several previously unknown results, including efficient d-dimensional RPE-masking and graph-kernel masking. We leverage many mathematical techniques, ranging from spectral analysis through dynamic programming and random walks to new algorithms for solving Markov processes on graphs. We provide a corresponding empirical evaluation. Comment: 20 pages, 12 figures.
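
    The linear causal attention special case can be sketched with prefix sums: out_t = φ(q_t)ᵀS_t / (φ(q_t)ᵀz_t), where S_t accumulates φ(k_s)v_sᵀ over s ≤ t. The sketch below uses a simple elementwise-exponential positive map as a stand-in for the random features of Choromanski et al. (2021); it shows the O(L) recurrence, not the paper's full masking framework.

```python
import numpy as np

def linear_causal_attention(Q, K, V):
    """Causal attention in linear time via running prefix sums.
    S accumulates phi(k) v^T and z accumulates phi(k) over past positions,
    so each output needs only the current query and the running sums."""
    phi = lambda x: np.exp(x - x.max())  # crude positive feature map
    L, d_v = Q.shape[0], V.shape[1]
    S = np.zeros((K.shape[1], d_v))
    z = np.zeros(K.shape[1])
    out = np.empty((L, d_v))
    for t in range(L):
        q, k = phi(Q[t]), phi(K[t])
        S += np.outer(k, V[t])
        z += k
        out[t] = (q @ S) / (q @ z + 1e-9)
    return out
```

    The first output reduces to V[0] (a token attending only to itself), which is a handy sanity check; the paper's contribution is showing how additional masks can modulate this recurrence without breaking scalability.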

    A class-specific metaheuristic technique for explainable relevant feature selection.

    A significant amount of previous research into feature selection has been aimed at developing methods that derive variables relevant to an entire dataset. Although these approaches have yielded substantial improvements in classification accuracy, they have failed to address the problem of explainability of outputs. This paper seeks to address the problem of identifying explainable features using a class-specific feature selection method based on genetic algorithms and the one-vs-all strategy. Our proposed method finds relevant features for each class in the dataset and uses these features to enable more accurate classification as well as interpretation of the outputs. The results of our experiments demonstrate that the proposed method provides descriptive insights into prediction outputs and also outperforms popular global feature selection techniques in the classification of high-dimensional and noisy datasets. Since there are no known challenging benchmark datasets for evaluating class-specific feature selection algorithms, this paper also recommends an approach for combining disparate datasets for this purpose.
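
    The one-vs-all, GA-per-class structure can be sketched as follows. The fitness function here (mean separation of the two class means on the selected features, with a size penalty) is a hypothetical stand-in for whatever classifier-based fitness the paper uses; population size, generations, and mutation rate are likewise assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y_bin):
    # Separation of the two class means on the selected features,
    # penalized by subset size (stand-in for a classifier-based fitness).
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    d = np.abs(Xs[y_bin == 1].mean(0) - Xs[y_bin == 0].mean(0))
    return d.mean() - 0.01 * mask.sum()

def ga_select(X, y_bin, pop=20, gens=30):
    # Minimal genetic algorithm over binary feature masks.
    n = X.shape[1]
    P = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(m, X, y_bin) for m in P])
        parents = P[np.argsort(scores)[-pop // 2:]]            # selection (elitist)
        cut = rng.integers(1, n, size=pop // 2)
        kids = np.array([np.concatenate([parents[i][:c],
                                         parents[(i + 1) % len(parents)][c:]])
                         for i, c in enumerate(cut)])           # one-point crossover
        flip = rng.random(kids.shape) < 0.05                    # bit-flip mutation
        kids = np.where(flip, 1 - kids, kids)
        P = np.vstack([parents, kids])
    scores = np.array([fitness(m, X, y_bin) for m in P])
    return P[scores.argmax()]

def class_specific_selection(X, y):
    # One-vs-all: run the GA once per class, yielding a per-class feature mask.
    return {c: ga_select(X, (y == c).astype(int)) for c in np.unique(y)}
```

    The per-class masks are what enable the interpretation step: each class's prediction can be explained in terms of its own small, relevant feature subset.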

    On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps

    Electroencephalography (EEG) signals can be analyzed in the temporal, spatial, or frequency domains. Noise and artifacts introduced during data acquisition contaminate these signals, complicating their analysis. Techniques such as Independent Component Analysis (ICA) require human intervention to remove noise and artifacts. Autoencoders have automated artifact detection and removal by representing inputs in a lower-dimensional latent space. However, little research is devoted to understanding the minimum dimension of such a latent space that still allows meaningful input reconstruction. Here, person-specific convolutional autoencoders are designed by manipulating the size of their latent space. An overlapping sliding-window technique is employed to segment the signals into windows of varied sizes, and five topographic head-maps are formed in the frequency domain for each window. The latent space of the autoencoders is assessed via input reconstruction capacity and classification utility. Findings indicate that the minimal latent space dimension for maximum reconstruction capacity and classification accuracy is 25% of the size of the topographic maps, achieved with a window length of at least 1 s and a shift of 125 ms at a 128 Hz sampling rate. This research contributes to the body of knowledge an architectural pipeline for eliminating redundant EEG data while preserving relevant features with deep autoencoders.
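
    The reported segmentation parameters translate directly into samples: at 128 Hz, a 1 s window is 128 samples and a 125 ms shift is a 16-sample hop. The following is a minimal sketch of that windowing step (function name and array layout are assumptions, not the paper's code).

```python
import numpy as np

def sliding_windows(signal, fs=128, win_s=1.0, shift_s=0.125):
    """Segment a (channels, samples) EEG array into overlapping windows.
    Defaults match the paper's best setting: 1 s windows (128 samples)
    shifted by 125 ms (16 samples) at a 128 Hz sampling rate."""
    win, hop = int(win_s * fs), int(shift_s * fs)
    n = signal.shape[-1]
    starts = range(0, n - win + 1, hop)
    return np.stack([signal[..., s:s + win] for s in starts])
```

    Each resulting window would then be converted into five frequency-domain topographic head-maps before being fed to the autoencoder.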

    Environment-Aware Dynamic Graph Learning for Out-of-Distribution Generalization

    Dynamic graph neural networks (DGNNs) are increasingly pervasive in exploiting spatio-temporal patterns on dynamic graphs. However, existing works fail to generalize under distribution shifts, which are common in real-world scenarios. As the generation of dynamic graphs is heavily influenced by latent environments, investigating their impact on out-of-distribution (OOD) generalization is critical. This remains unexplored, with two major challenges: (1) How to properly model and infer the complex environments on dynamic graphs with distribution shifts? (2) How to discover invariant patterns given inferred spatio-temporal environments? To solve these challenges, we propose a novel Environment-Aware dynamic Graph LEarning (EAGLE) framework for OOD generalization that models complex coupled environments and exploits spatio-temporal invariant patterns. Specifically, we first design the environment-aware EA-DGNN to model environments via multi-channel environment disentangling. Then, we propose an environment instantiation mechanism for environment diversification with inferred distributions. Finally, we discriminate spatio-temporal invariant patterns for out-of-distribution prediction with an invariant pattern recognition mechanism, and perform fine-grained causal interventions node-wise with a mixture of instantiated environment samples. Experiments on real-world and synthetic dynamic graph datasets demonstrate the superiority of our method over state-of-the-art baselines under distribution shifts. To the best of our knowledge, we are the first to study OOD generalization on dynamic graphs from the environment learning perspective. Comment: Accepted by the 37th Conference on Neural Information Processing Systems (NeurIPS 2023).
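
    As background for the invariance idea, one generic way to encourage predictors that rely on environment-stable patterns is to penalize the variance of the risk across inferred environments (V-REx style). This sketch shows only that generic objective shape; EAGLE's actual mechanism (environment disentangling, instantiation, and node-wise causal interventions) is considerably more elaborate.

```python
import numpy as np

def invariance_penalty(env_losses):
    # Variance of risks across inferred environments: low variance suggests
    # the predictor uses patterns that are stable across environments.
    return np.asarray(env_losses, dtype=float).var()

def total_objective(env_losses, beta=1.0):
    # Average risk plus the invariance penalty, weighted by beta.
    env_losses = np.asarray(env_losses, dtype=float)
    return env_losses.mean() + beta * invariance_penalty(env_losses)
```

    A predictor with identical loss in every environment incurs zero penalty, while one that exploits environment-specific shortcuts is penalized in proportion to how unevenly it performs.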