340 research outputs found
High Dimensional Data Set Analysis Using a Large-Scale Manifold Learning Approach
Because of technological advances, data sets are growing in both size and dimensionality. Processing these large-scale data sets is challenging for conventional computers due to computational limitations. A framework for nonlinear dimensionality reduction on large databases is presented that alleviates the issue of large data sets through sampling, graph construction, manifold learning, and embedding. Neighborhood selection is a key step in this framework and a potential area of improvement. The standard approach to neighborhood selection is a fixed neighborhood: either a fixed number of neighbors or a fixed neighborhood size. Each of these has limitations due to variations in data density. A novel adaptive neighbor-selection algorithm is presented to enhance performance by incorporating sparse ℓ1-norm based optimization. These enhancements are applied to the graph construction and embedding modules of the original framework. As validation of the proposed ℓ1-based enhancement, experiments are conducted on these modules using publicly available benchmark data sets. The two approaches are then applied to a large-scale magnetic resonance imaging (MRI) data set for brain tumor progression prediction. Results showed that the proposed approach outperformed linear methods and other traditional manifold learning algorithms.
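The sparse ℓ1-based neighbor selection described above can be sketched as a self-representation problem: each point is reconstructed from the remaining points under an ℓ1 penalty, and the points that receive nonzero coefficients form its adaptively sized neighborhood. Below is a minimal NumPy illustration using plain ISTA iterations; the function name, penalty value, and toy data are our own assumptions, not the paper's implementation:

```python
import numpy as np

def l1_neighbors(X, i, lam=0.1, n_iter=500):
    """Select neighbors of point i by sparse self-representation:
    minimize 0.5*||x_i - D c||^2 + lam*||c||_1 over coefficients c,
    where D holds all other points as columns (ISTA iterations).
    Nonzero entries of c mark the adaptively chosen neighbors."""
    x = X[i]
    D = np.delete(X, i, axis=0).T            # (d, n-1) dictionary of other points
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - x)             # gradient of the quadratic term
        c = c - grad / L
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)  # soft-threshold
    idx = np.flatnonzero(np.abs(c) > 1e-6)
    return idx + (idx >= i)                  # map back to original indices

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
nbrs = l1_neighbors(X, 0)
```

Unlike a fixed-k rule, the number of selected neighbors here varies with the local data density, which is the behavior the abstract motivates.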
Lanczos spectrum for random operator growth
Krylov methods have reappeared recently, connecting physically sensible
notions of complexity with quantum chaos and quantum gravity. In these
developments, the Hamiltonian and the Liouvillian are tridiagonalized so that
Schrödinger/Heisenberg time evolution is expressed in the Krylov basis. In the
context of Schrödinger evolution, this tridiagonalization has been carried out
in Random Matrix Theory. We extend these developments to Heisenberg time
evolution, describing how the Liouvillian can be tridiagonalized as well until
the end of Krylov space. We numerically verify the analytical formulas both for
Gaussian and non-Gaussian matrix models. Comment: 20 pages, 5 figures; v2: typos corrected, references added
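The tridiagonalization the abstract refers to is the Lanczos recursion: starting from a seed vector (or operator, for the Liouvillian), the Hermitian generator is reduced to tridiagonal form in the Krylov basis, and the resulting coefficients a_n, b_n characterize the growth. A minimal numerical sketch on a GUE-like random matrix follows; the toy matrix size and seed are our own choices, not the paper's setup:

```python
import numpy as np

def lanczos(H, v0, m):
    """Tridiagonalize Hermitian H in the Krylov basis generated from v0,
    returning the diagonal a_n and off-diagonal b_n Lanczos coefficients.
    Full reorthogonalization keeps the three-term recursion stable."""
    n = H.shape[0]
    Q = np.zeros((n, m), dtype=complex)
    a = np.zeros(m)
    b = np.zeros(m - 1)
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = H @ Q[:, j]
        a[j] = np.real(np.vdot(Q[:, j], w))
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].conj().T @ w)  # reorthogonalize
        if j < m - 1:
            b[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / b[j]
    return a, b

# GUE-like random matrix: the b_n profile probes the Lanczos spectrum
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 50)) + 1j * rng.normal(size=(50, 50))
H = (A + A.conj().T) / 2
a, b = lanczos(H, rng.normal(size=50).astype(complex), 20)
```

The eigenvalues of the tridiagonal matrix built from (a, b) are Ritz values lying inside the spectrum of H, which is what makes the Krylov-basis description faithful.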
On the Rotar central limit theorem for sums of a random number of independent random variables
The Rotar central limit theorem is a remarkable theorem in the non-classical
version since it does not use the condition of asymptotic infinitesimality for
the independent individual summands, unlike the Lindeberg and Lindeberg-Feller
theorems in the classical version. The Rotar central limit theorem
generalizes the classical Lindeberg-Feller central limit theorem since the
Rotar condition is weaker than Lindeberg's.
The main aim of this paper is to introduce the Rotar central limit theorem
for sums of a random number of independent (not necessarily identically
distributed) random variables and the conditions for its validity. The order of
approximation in this theorem is also considered in this paper. Comment: 15 pages
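To make the comparison concrete, the two conditions can be stated side by side; these are the standard formulations in our own notation, not taken from this abstract. For a triangular array of centered independent summands with row variances summing to one, Lindeberg's condition reads

```latex
\lim_{n\to\infty} \sum_{k=1}^{n}
\mathbb{E}\!\left[\, X_{nk}^{2}\,\mathbf{1}\{|X_{nk}| > \varepsilon\}\,\right] = 0
\qquad \text{for every } \varepsilon > 0,
```

while Rotar's non-classical condition replaces the truncated second moments by the discrepancy between each summand's distribution function $F_{nk}$ and the normal distribution function $\Phi_{nk}$ with the same mean and variance:

```latex
\lim_{n\to\infty} \sum_{k=1}^{n}
\int_{|x| > \varepsilon} |x|\,
\bigl| F_{nk}(x) - \Phi_{nk}(x) \bigr| \, dx = 0
\qquad \text{for every } \varepsilon > 0 .
```

Lindeberg's condition implies Rotar's but not conversely, which is the sense in which the Rotar theorem generalizes Lindeberg-Feller without asymptotic infinitesimality of the summands.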
Analytical dipole moment functions for diatomic molecules
The dipole moment of the ground electronic state (X¹Σ⁺) of CO as a function of the internuclear distance is determined using experimentally deduced rotationless vibrational transition moments. For this purpose, the dipole moment function is expanded in series of powers of the variables u, y, and z, where u = r − r_e, y = 1 − exp(−au), and z = exp(au) − 1, and exact Morse matrix elements of these quantities are used in the computation. Using a standard factorization technique, we derive exact matrix elements of y, y^2, and y^3; for higher powers of y, we use matrix multiplication. The eigenfunctions of the perturbed Morse oscillator (PMO) are obtained by the method of matrix diagonalization. Morse and PMO cubic dipole moment functions in u, y, and z are then determined for CO. We require the y-series expansion to satisfy the condition that the infinite sum of its coefficients M_n vanishes. Then, expressing M_n as some function of the index n and several parameters, we fit this function to a few known transition moments and obtain an infinite y-series representation with the correct asymptotic behavior for the CO dipole moment. We found three functional forms for M_n that produce infinite series reducible to closed forms. These new forms are adjusted further by a corrective term so that they obtain the correct general behavior at both large r and small r. The various CO dipole moment functions are finally used to predict hot-band transition moments.
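The role of the vanishing coefficient sum can be seen directly: since y = 1 − exp(−a(r − r_e)) tends to 1 as r grows, the dipole moment tends to the sum of the series coefficients, so requiring that sum to vanish enforces a zero dipole at dissociation. A minimal sketch with illustrative placeholder coefficients (not fitted CO values; r_e and a below are only order-of-magnitude guesses):

```python
import numpy as np

def dipole_y_series(r, r_e, a, M):
    """mu(r) = sum_n M[n] * y(r)**n with the Morse variable
    y = 1 - exp(-a*(r - r_e)).  As r -> infinity, y -> 1, so
    mu -> sum(M); coefficients summing to zero give mu -> 0
    at dissociation, the correct asymptotic behavior."""
    y = 1.0 - np.exp(-a * (r - r_e))
    return sum(m * y**n for n, m in enumerate(M))

# illustrative, non-fitted coefficients chosen to sum to zero
r_e, a = 1.128, 2.3            # CO-like equilibrium distance and range parameter
M = [0.0, 0.5, -0.2, -0.3]     # sum(M) = 0
r = np.linspace(0.8, 6.0, 50)
mu = dipole_y_series(r, r_e, a, M)
```

Truncating the series breaks this limit slightly, which is one motivation for the closed-form infinite-series representations the abstract describes.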
Directed hypergraph neural network
To deal with irregular data structures, graph convolutional neural networks
have been developed by many researchers. However, these efforts have
concentrated primarily on deep neural network methods for undirected graphs.
In this paper, we present a novel neural network method for directed
hypergraphs: we develop not only a novel directed hypergraph neural network
method but also a novel directed hypergraph based semi-supervised learning
method. These methods are employed to solve the node classification task. The
two datasets used in the experiments are the Cora and CiteSeer datasets. Among
the classic directed graph based semi-supervised learning method, the novel
directed hypergraph based semi-supervised learning method, and the novel
directed hypergraph neural network method applied to this node classification
task, the novel directed hypergraph neural network achieves the highest
accuracies.
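The abstract does not spell out the layer, but a directed hypergraph convolution can be sketched in the HGNN style using separate tail and head incidence matrices, so that features flow from the tail vertices of each hyperedge to its head vertices. The function, normalization, and toy data below are a hypothetical illustration of the idea, not the paper's method:

```python
import numpy as np

def directed_hg_conv(X, H_tail, H_head, Theta):
    """One propagation step on a directed hypergraph.
    H_tail[v, e] = 1 if vertex v is a tail of hyperedge e;
    H_head[v, e] = 1 if v is a head of e.  Tail features are averaged
    into each hyperedge, then distributed to that edge's heads,
    followed by a linear map and ReLU."""
    d_e = H_tail.sum(axis=0)                          # tail degree of each edge
    d_v = H_head.sum(axis=1)                          # head degree of each vertex
    E = (H_tail / np.maximum(d_e, 1)).T @ X           # aggregate tails -> edges
    out = (H_head / np.maximum(d_v, 1)[:, None]) @ E  # distribute edges -> heads
    return np.maximum(out @ Theta, 0.0)               # linear map + ReLU

# toy example: 4 vertices, 2 directed hyperedges
H_tail = np.array([[1, 0], [1, 0], [0, 1], [0, 0]], float)
H_head = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
X = np.eye(4)       # one-hot node features
Theta = np.eye(4)   # identity weights for illustration
out = directed_hg_conv(X, H_tail, H_head, Theta)
```

A vertex that is never a head of any hyperedge receives no incoming signal, which is the directional behavior that distinguishes this from an undirected hypergraph layer.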