Graph Fuzzy System: Concepts, Models and Algorithms
Fuzzy systems (FSs) have enjoyed wide application in various fields,
including pattern recognition, intelligent control, data mining and
bioinformatics, owing to their strong interpretability and learning
ability. In traditional application scenarios, FSs are mainly applied to model
data in Euclidean space and cannot handle graph data, which are non-Euclidean
in nature, such as social networks and traffic route maps. Therefore,
developing an FS modeling method that suits graph data while
retaining the advantages of traditional FSs is an important research problem. To meet this
challenge, a new type of FS for graph data modeling called Graph Fuzzy System
(GFS) is proposed in this paper, where the concepts, modeling framework and
construction algorithms are systematically developed. First, GFS related
concepts, including graph fuzzy rule base, graph fuzzy sets and graph
consequent processing unit (GCPU), are defined. A GFS modeling framework is
then constructed and the antecedents and consequents of the GFS are presented
and analyzed. Finally, a learning framework for the GFS is proposed, in which a
kernel K-prototype graph clustering algorithm (K2PGC) is developed to construct
the GFS antecedents, and a consequent-parameter learning algorithm based on
graph neural networks (GNNs) is then proposed. Specifically, three different
versions of the GFS implementation algorithm are developed and comprehensively
evaluated with experiments on various benchmark graph classification datasets.
The results demonstrate that the proposed GFS inherits the advantages of both
mainstream GNN methods and conventional FS methods while outperforming these counterparts.
Comment: This paper has been submitted to a journal
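The abstract's general recipe (cluster-derived antecedents plus learned linear consequents) can be illustrated in the Euclidean setting it generalizes. The sketch below is a minimal Takagi–Sugeno-style fuzzy system, not the paper's GFS: the prototype centers stand in for what a clustering step such as K2PGC would learn, and simple weighted least squares stands in for the GNN-based consequent learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: approximate y = sin(x) with a small rule base.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

# Antecedents: Gaussian memberships around fixed prototype centers
# (a stand-in for prototypes that a clustering step would learn).
centers = np.linspace(-3, 3, 5).reshape(-1, 1)
sigma = 1.0

def design(X):
    """Affine design matrix [x, 1] for the per-rule linear consequents."""
    return np.hstack([X, np.ones((len(X), 1))])

def firing_strengths(X):
    """Normalized rule activations from Gaussian membership functions."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)

# Consequents: one affine model per rule, fitted by weighted least squares.
W, Phi = firing_strengths(X), design(X)
theta = np.array([
    np.linalg.lstsq(Phi * np.sqrt(W[:, [k]]), y * np.sqrt(W[:, k]), rcond=None)[0]
    for k in range(len(centers))
])  # shape: (rules, 2)

def predict(X):
    """Fuzzy-weighted blend of the per-rule linear outputs."""
    return (firing_strengths(X) * (design(X) @ theta.T)).sum(axis=1)

rmse = float(np.sqrt(np.mean((predict(X) - y) ** 2)))
print(f"training RMSE: {rmse:.3f}")
```

The GFS replaces both halves of this pipeline with graph-aware components, but the division of labor — fuzzy antecedents that partition the input space, per-rule consequents blended by firing strength — is the same.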
Canonical tensor model through data analysis -- Dimensions, topologies, and geometries --
The canonical tensor model (CTM) is a tensor model in Hamilton formalism and
is studied as a model for gravity in both classical and quantum frameworks. Its
dynamical variables are a canonical conjugate pair of real symmetric
three-index tensors, and a question in this model was how to extract spacetime
pictures from the tensors. We give such an extraction procedure by using two
techniques widely known in data analysis. One is the tensor-rank (or CP, etc.)
decomposition, which is a certain generalization of the singular value
decomposition of a matrix and decomposes a tensor into a number of vectors. By
regarding the vectors as points forming a space, topological properties can be
extracted by using the other data analysis technique called persistent
homology, and geometries by virtual diffusion processes over points. Thus, time
evolutions of the tensors in the CTM can be interpreted as topological and
geometric evolutions of spaces. We have performed some initial investigations
of the classical equation of motion of the CTM in terms of these techniques for
a homogeneous fuzzy circle and homogeneous two- and three-dimensional fuzzy
spheres as spaces, and have obtained agreement with the general relativistic
system obtained previously in a formal continuum limit of the CTM. It is also
demonstrated by some concrete examples that the procedure is general for any
dimensions and topologies, showing the generality of the CTM.
Comment: 44 pages, 16 figures, minor correction
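The tensor-rank (CP) decomposition mentioned above can be sketched with a plain alternating-least-squares loop. The tensor below is illustrative toy data with the same real symmetric three-index structure, not an actual CTM configuration, and the ALS scheme is a generic CP solver rather than the paper's full extraction procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy real symmetric three-index tensor of exact rank 3:
# T_abc = sum_r v_r(a) v_r(b) v_r(c).
n, R = 6, 3
V = rng.standard_normal((R, n))
T = np.einsum('ra,rb,rc->abc', V, V, V)

def unfold(T, mode):
    """Mode-k matricization: move axis `mode` first, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(X, Y):
    """Column-wise Kronecker product of two (rows, R) factor matrices."""
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

# Alternating least squares for a rank-R CP decomposition
# T ≈ sum_r a_r ⊗ b_r ⊗ c_r, solving for one factor matrix at a time.
A, B, C = (rng.standard_normal((n, R)) for _ in range(3))
for _ in range(200):
    A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
    B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
    C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)

rel_err = np.linalg.norm(T - np.einsum('ar,br,cr->abc', A, B, C)) / np.linalg.norm(T)
print(f"relative reconstruction error: {rel_err:.2e}")
```

The recovered factor vectors are the "points" on which the abstract's topological (persistent homology) and geometric (diffusion) analyses would then operate.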
A methodology to compare dimensionality reduction algorithms in terms of loss of quality
Dimensionality Reduction (DR) is attracting more attention these days as a result of the increasing need to handle huge amounts of data effectively. DR methods allow the number of initial features to be reduced considerably until a set is found that preserves the original properties of the data. However, their use entails an inherent loss of quality that is likely to affect the understanding of the data in terms of data analysis. This loss of quality could be decisive when selecting a DR method, because of the nature of each method. In this paper, we propose a methodology that allows different DR methods to be analyzed and compared as regards the loss of quality they produce. This methodology makes use of the concept of preservation of geometry (quality assessment criteria) to assess the loss of quality. Experiments have been carried out using the best-known DR algorithms and quality assessment criteria from the literature, applied to 12 real-world datasets. Results obtained so far show that it is possible to establish a method for selecting the most appropriate DR algorithm in terms of minimum loss of quality. The experiments have also highlighted some interesting relationships between the quality assessment criteria. Finally, the methodology allows the appropriate target dimensionality for reducing data to be established, whilst giving rise to a minimum loss of quality.
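One widely used neighborhood-preservation criterion of the kind the abstract refers to is trustworthiness, available in scikit-learn. The sketch below compares two DR methods on a standard dataset; the specific methods, dataset, and neighborhood size are illustrative choices, not the paper's experimental setup.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness
from sklearn.random_projection import GaussianRandomProjection

X = load_digits().data[:500]  # 64-dimensional handwritten-digit images

# Two candidate DR methods, both reducing to 2 dimensions.
emb_pca = PCA(n_components=2, random_state=0).fit_transform(X)
emb_rp = GaussianRandomProjection(n_components=2, random_state=0).fit_transform(X)

# Trustworthiness lies in [0, 1]: a score of 1 means no point in a
# low-dimensional neighborhood was far away in the original space.
t_pca = trustworthiness(X, emb_pca, n_neighbors=10)
t_rp = trustworthiness(X, emb_rp, n_neighbors=10)
print(f"PCA: {t_pca:.3f}  random projection: {t_rp:.3f}")
```

Scoring several methods with several such criteria, across datasets and target dimensionalities, is essentially the comparison the proposed methodology systematizes.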
Diagnosis of liver disease by computer-assisted imaging techniques: A literature review
Copyright © 2022 The authors. Diagnosis of liver disease using computer-aided detection (CAD) systems is one of the most efficient and cost-effective methods of medical image diagnosis. Accurate disease detection from ultrasound images or other medical imaging modalities depends on the physician's experience and skill, and CAD systems play a critical role in helping experts make accurate assessments. There are different types of CAD systems for diagnosing different diseases; one application is liver disease diagnosis and detection, using intelligent algorithms to detect abnormalities. Machine learning and deep learning algorithms and models also play a major role in this area. In this article, we review the techniques utilized in the different stages of CAD systems, covering the methods used for preprocessing, feature extraction and selection, and classification. We also survey the different techniques used to segment and analyze liver ultrasound images, where how to apply these techniques, and their technical and clinical effectiveness, remains an open challenge.
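The staged structure the review describes — preprocessing, feature selection, classification — maps directly onto a standard machine-learning pipeline. The sketch below is a generic illustration of that structure, not any reviewed system: it uses a public tabular dataset as a stand-in, whereas real liver-CAD work would first extract features (e.g., texture statistics) from ultrasound images.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in tabular medical features; a liver-CAD system would compute
# its features from segmented ultrasound regions instead.
X, y = load_breast_cancer(return_X_y=True)

pipeline = Pipeline([
    ('preprocess', StandardScaler()),          # normalize feature ranges
    ('select', SelectKBest(f_classif, k=10)),  # keep the 10 most informative features
    ('classify', SVC(kernel='rbf', C=1.0)),    # final decision stage
])

scores = cross_val_score(pipeline, X, y, cv=5)
print(f"mean 5-fold accuracy: {scores.mean():.3f}")
```

Packaging the stages in one pipeline ensures that scaling and feature selection are refit inside each cross-validation fold, avoiding the data leakage that would otherwise bias the reported accuracy.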