Granular-ball computing: an efficient, robust, and interpretable adaptive multi-granularity representation and computation method
Human cognition operates on a "Global-first" cognitive mechanism,
prioritizing the processing of coarse-grained, global information. This
mechanism inherently possesses an adaptive multi-granularity description
capacity, yielding computational traits such as efficiency, robustness, and
interpretability. In contrast, most existing computational methods rely on
analysis at the finest, single granularity, which makes them less efficient,
robust, and interpretable, and is an important reason for the current lack of
interpretability in neural networks. Multi-granularity granular-ball computing
employs granular-balls of varying sizes to adaptively represent and envelop the
sample space, facilitating learning based on these granular-balls. Because
the number of coarse-grained "granular-balls" is smaller than the number of
sample points,
granular-ball computing proves more efficient. Moreover, the inherent
coarse-grained nature of granular-balls reduces susceptibility to fine-grained
sample disturbances, enhancing robustness. The multi-granularity construct of
granular-balls generates topological structures and coarse-grained
descriptions, naturally augmenting interpretability. Granular-ball computing
has successfully ventured into diverse AI domains, fostering the development of
innovative theoretical methods, including granular-ball classifiers, clustering
techniques, neural networks, rough sets, and evolutionary computing. This has
notably improved the efficiency, noise robustness, and interpretability of
traditional methods. Overall, granular-ball computing is a rare and innovative
theoretical approach in AI that can adaptively and simultaneously enhance
efficiency, robustness, and interpretability. This article delves into the main
application landscapes for granular-ball computing, aiming to equip future
researchers with references and insights to refine and expand this promising
theory.
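To make the adaptive covering concrete, the following is a minimal Python sketch of the purity-driven splitting scheme commonly used in the granular-ball literature: a ball covering the whole dataset is recursively split with 2-means until every ball's label purity reaches a threshold. The function names, the threshold of 0.95, and the 2-means splitter are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal granular-ball generation sketch; assumes integer class labels in y.
import numpy as np
from sklearn.cluster import KMeans

def make_ball(X, y):
    """Summarize a point set as (center, radius, majority label)."""
    center = X.mean(axis=0)
    radius = np.linalg.norm(X - center, axis=1).max()
    return center, radius, np.bincount(y).argmax()

def purity(y):
    """Fraction of points carrying the majority label."""
    _, counts = np.unique(y, return_counts=True)
    return counts.max() / counts.sum()

def generate_granular_balls(X, y, purity_threshold=0.95, min_samples=4):
    """Recursively 2-means-split impure balls until all are pure enough."""
    if len(y) <= min_samples or purity(y) >= purity_threshold:
        return [make_ball(X, y)]
    split = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    if np.unique(split).size < 2:          # degenerate split: stop here
        return [make_ball(X, y)]
    return [ball
            for k in (0, 1)
            for ball in generate_granular_balls(X[split == k], y[split == k],
                                                purity_threshold, min_samples)]
```

Learning then operates on the returned (center, radius, label) triples instead of the raw points, which is where the efficiency and noise-robustness gains described above come from.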
A Survey on Feature Selection Algorithms
One major component of machine learning is feature analysis, which comprises two main processes: feature selection and feature extraction. Owing to its applications in several areas, including data mining, soft computing, and big data analysis, feature selection has gained considerable importance. This paper presents an introductory treatment of feature selection and its various inherent approaches. The paper surveys historic developments in feature selection for both supervised and unsupervised methods. Recent developments and the state of the art in ongoing feature selection algorithms, including their hybridizations, are also summarized.
DOI: 10.17762/ijritcc2321-8169.16043
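As a concrete instance of the filter-style selectors such surveys cover, here is a minimal sketch that scores each feature by mutual information with the label and keeps the top k; the dataset and scikit-learn utilities are illustrative choices, not drawn from the paper.

```python
# Filter-method feature selection: rank by mutual information, keep top k.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)

# Score features independently, then retain the 10 highest-scoring ones.
selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_reduced = selector.fit_transform(X, y)
print(X.shape, "->", X_reduced.shape)  # (569, 30) -> (569, 10)
```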
GraphVid: It Only Takes a Few Nodes to Understand a Video
We propose a concise representation of videos that encodes perceptually
meaningful features into graphs. With this representation, we aim to leverage
the large amount of redundancy in videos and save computation. First, we
construct superpixel-based graph representations of videos by treating
superpixels as graph nodes and creating spatial and temporal connections
between adjacent superpixels. Then, we leverage Graph Convolutional Networks to
process this representation and predict the desired output. As a result, we are
able to train models with far fewer parameters, which translates into short
training periods and a reduction in computation resource requirements. A comprehensive
experimental study on the publicly available datasets Kinetics-400 and Charades
shows that the proposed method is highly cost-effective and uses limited
commodity hardware during training and inference. It reduces the computational
requirements 10-fold while achieving results that are comparable to
state-of-the-art methods. We believe that the proposed approach is a promising
direction that could open the door to solving video understanding more
efficiently and enable more resource-limited users to thrive in this research
field.
Comment: Accepted to ECCV2022 (Oral)
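As a rough illustration of the spatial half of this construction (temporal edges between consecutive frames are omitted), the sketch below segments one frame into SLIC superpixels, uses per-superpixel mean color as node features, and connects touching superpixels. All names here are assumptions, not GraphVid's actual code.

```python
# Build a superpixel graph for a single frame.
import numpy as np
from skimage.segmentation import slic

def frame_to_graph(frame, n_segments=100):
    """frame: (H, W, 3) float image in [0, 1] -> (node_features, edges)."""
    labels = slic(frame, n_segments=n_segments, start_label=0)
    n = labels.max() + 1
    # Node features: mean RGB color of each superpixel.
    feats = np.stack([frame[labels == i].mean(axis=0) for i in range(n)])
    # Spatial edges: superpixels adjacent horizontally or vertically.
    edges = set()
    for shifted in (labels[:, 1:], labels[1:, :]):
        base = labels[:shifted.shape[0], :shifted.shape[1]]
        for a, b in zip(base.ravel(), shifted.ravel()):
            if a != b:
                edges.add((min(a, b), max(a, b)))
    return feats, edges

# Example on a random frame; real use would iterate over video frames.
feats, edges = frame_to_graph(np.random.default_rng(0).random((120, 160, 3)))
print(feats.shape, len(edges))
```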
gSuite: A Flexible and Framework Independent Benchmark Suite for Graph Neural Network Inference on GPUs
As interest in Graph Neural Networks (GNNs) grows, the importance of
benchmarking and performance characterization studies of GNNs is increasing. So
far, many studies have investigated and presented the performance and
computational efficiency of GNNs. However, this work has been carried out using
a few high-level GNN frameworks. Although these frameworks provide ease of use,
they carry many dependencies on other existing libraries. The layers of
implementation details and the dependencies complicate the performance analysis
of GNN models built on top of these frameworks, especially when using
architectural simulators. Furthermore, different approaches to GNN computation
are generally overlooked in prior characterization studies, and typically only
one of the common computational models is evaluated. Based on these observed
shortcomings and needs, we developed a benchmark suite that is
framework-independent, supports versatile computational models, is easily
configurable, and can be used with architectural simulators without additional
effort.
Our benchmark suite, which we call gSuite, makes use of only the hardware
vendor's libraries and is therefore independent of any other framework.
gSuite enables performing detailed performance characterization studies on GNN
Inference using both contemporary GPU profilers and architectural GPU
simulators. To illustrate the benefits of our new benchmark suite, we perform a
detailed characterization study with a set of well-known GNN models with
various datasets; running gSuite both on a real GPU card and a timing-detailed
GPU simulator. We also investigate the effect of computational models on
performance. We use several evaluation metrics to rigorously measure the
performance of GNN computation.
Comment: IEEE International Symposium on Workload Characterization (IISWC) 202
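For intuition, one GCN inference layer reduces to a dense GEMM (X @ W) followed by a sparse-dense SpMM (A_norm @ ...), exactly the kind of kernel a framework-independent suite can map directly onto vendor libraries such as cuBLAS and cuSPARSE. The NumPy/SciPy sketch below is an illustrative stand-in for those libraries, not gSuite's implementation.

```python
# One GCN inference layer expressed as GEMM + SpMM.
import numpy as np
import scipy.sparse as sp

def gcn_layer(adj, X, W):
    """H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W) for one GCN layer."""
    a_hat = adj + sp.identity(adj.shape[0], format="csr")   # self-loops
    deg = np.asarray(a_hat.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt                # normalize
    return np.maximum(a_norm @ (X @ W), 0.0)                # SpMM after GEMM

# Tiny usage example on a random undirected graph.
rng = np.random.default_rng(0)
A = sp.random(64, 64, density=0.05, format="csr", random_state=0)
A = ((A + A.T) > 0).astype(np.float64)                      # symmetrize
H = gcn_layer(A, rng.standard_normal((64, 16)), rng.standard_normal((16, 8)))
print(H.shape)  # (64, 8)
```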
An Efficient Anomaly Detection Through Optimized Navigation Using DLVQ-CDMA and H-DSO in Healthcare IoT Environment
An anomaly detection (AD) framework aims to discover irregular data and unusual activities in a system. In a healthcare system, abnormalities in the healthcare information are picked up by the AD framework, and the outcome is reported to the authority for evaluation of the data. Numerous researchers have developed AD methods, which suffer from data-loss issues and computational complexity. To overcome these disadvantages, this work presents an enhanced AD framework using the Deep Learning Vector Quantization-Correlation Distance Mayfly Algorithm (DLVQ-CDMA) and Hyper-sphere Dolphin Swarm Optimization (H-DSO). With the aid of Internet of Things (IoT)-connected systems, the proposed model gathers information about the patient and forwards it to the patient's healthcare application. Information from the healthcare application is then sent along the optimal path selected by the H-DSO method. The data is subsequently uploaded to the cloud server, from which it is retrieved and provided to the AD system, where it is pre-processed. After feature extraction, feature reduction is performed using the Entropy-Generalized Discriminant Analysis (E-GDA) scheme. Subsequently, the DLVQ-CDMA algorithm is applied to the reduced features, and the information is classified as either normal or anomalous. Attacked data is stored in a log file, while normal data undergoes further evaluation to identify the presence of a disease or disorder. After evaluation, the outcome is communicated to the patient. The experimental analysis indicates that the proposed DLVQ-CDMA methodology performs better than the prevailing methodologies.
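The DLVQ-CDMA pipeline is specific to this paper; as background on the learning-vector-quantization family it builds on, here is a minimal LVQ1 sketch in which per-class prototypes are pulled toward samples they classify correctly and pushed away otherwise. All names and the learning-rate schedule are illustrative assumptions.

```python
# Minimal LVQ1: prototype-based classification by nearest prototype.
import numpy as np

def train_lvq1(X, y, protos_per_class=2, lr=0.1, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    P, Py = [], []
    for c in np.unique(y):                         # init prototypes per class
        idx = rng.choice(np.flatnonzero(y == c), protos_per_class)
        P.append(X[idx]); Py += [c] * protos_per_class
    P, Py = np.vstack(P), np.array(Py)
    for epoch in range(epochs):
        step = lr * (1 - epoch / epochs)           # decaying learning rate
        for i in rng.permutation(len(y)):
            j = np.argmin(((P - X[i]) ** 2).sum(axis=1))   # nearest prototype
            sign = 1.0 if Py[j] == y[i] else -1.0
            P[j] += sign * step * (X[i] - P[j])    # attract or repel
    return P, Py

def predict_lvq(P, Py, X):
    d = ((np.asarray(X)[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
    return Py[d.argmin(axis=1)]

# Tiny demo on two Gaussian blobs standing in for normal/anomalous records.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
P, Py = train_lvq1(X, y)
print((predict_lvq(P, Py, X) == y).mean())         # training accuracy
```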
A Survey of Graph Pre-processing Methods: From Algorithmic to Hardware Perspectives
Graph-related applications have experienced significant growth in academia
and industry, driven by the powerful representation capabilities of graphs.
However, efficiently executing these applications faces various challenges,
such as load imbalance, random memory access, etc. To address these challenges,
researchers have proposed various acceleration systems, including software
frameworks and hardware accelerators, all of which incorporate graph
pre-processing (GPP). GPP serves as a preparatory step before the formal
execution of applications, involving techniques such as sampling, reordering, etc.
However, GPP execution often remains overlooked, as the primary focus is
directed towards enhancing graph applications themselves. This oversight is
concerning, especially considering the explosive growth of real-world graph
data, where GPP becomes essential and can even dominate overall system overhead.
Furthermore, GPP methods exhibit significant variations across devices and
applications due to high customization. Unfortunately, no comprehensive work
systematically summarizes GPP. To address this gap and foster a better
understanding of GPP, we present a comprehensive survey dedicated to this area.
We propose a double-level taxonomy of GPP, considering both algorithmic and
hardware perspectives. Through listing relevant works, we illustrate our
taxonomy and conduct a thorough analysis and summary of diverse GPP techniques.
Lastly, we discuss challenges in GPP and potential future directions.
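As one small, concrete example from the algorithmic side of such a taxonomy, the sketch below performs degree-based vertex reordering, a common GPP step that relabels vertices so high-degree ones receive nearby IDs, improving the locality of later traversals. The CSR-based code is illustrative and not tied to any particular system in the survey.

```python
# Degree-based vertex reordering as a graph pre-processing (GPP) step.
import numpy as np
import scipy.sparse as sp

def degree_reorder(adj):
    """Permute a CSR adjacency matrix into descending-degree vertex order."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    perm = np.argsort(-deg)                # high-degree vertices first
    return adj[perm][:, perm]

# Usage on a random sparse graph.
A = sp.random(1000, 1000, density=0.01, format="csr", random_state=42)
print(degree_reorder(A).shape)             # (1000, 1000), reordered
```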