On Fractional Approach to Analysis of Linked Networks
In this paper, we present the outer product decomposition of a product of
compatible linked networks. It provides a foundation for the fractional
approach in network analysis. We discuss the standard and Newman's
normalization of networks. We propose some alternative fractional
bibliographic coupling measures.
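As a hedged illustration of the fractional idea (the matrix name `WR` and this particular row normalization are assumptions for the sketch, not the paper's exact definitions): in fractional counting each work distributes a total weight of 1 over its references before the coupling product is formed.

```python
import numpy as np

# Toy two-mode works x references matrix: WR[w, r] = 1 if work w cites r.
# (Illustrative data only; real bibliographic networks are large and sparse.)
WR = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
], dtype=float)

# Standard (whole) bibliographic coupling: Co = WR @ WR.T
Co = WR @ WR.T

# One fractional variant: normalize each row of WR so every work
# distributes a unit weight over its references, then couple.
row_sums = WR.sum(axis=1, keepdims=True)
N = WR / np.where(row_sums == 0, 1, row_sums)  # guard empty rows
CoF = N @ N.T  # fractional coupling; entries are bounded by 1
```

With this normalization, heavily citing works no longer dominate the coupling weights: work 0 and work 1 share one reference, giving `Co[0, 1] = 1` but only `CoF[0, 1] = 1/6`.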
Weighted degrees and truncated derived bibliographic networks
Large bibliographic networks are sparse -- the average node degree is small.
This is not necessarily true for their product -- in some cases, it can
``explode'' (it is no longer sparse, and its computation is demanding in both
time and space). An approach in such cases is to reduce the complexity of the
problem by limiting
our attention to a selected subset of important nodes and computing with
corresponding truncated networks. The nodes can be selected by different
criteria. An option is to consider the most important nodes in the derived
network -- nodes with the largest weighted degree. It turns out that the
weighted degrees in the derived network can be computed efficiently without
computing the derived network itself.
Inductive definition of two restricted classes of triangulations
The inductive definitions of (i) the class of all triangulations (of the sphere) without vertices of degree 3; and (ii) the class of all triangulations with all vertices of even degree are given. The dual rules give us (i) the class of all 3-connected planar cubic graphs without triangles; and (ii) the class of all 3-connected bipartite planar cubic graphs (related to Barnette's hamiltonicity conjecture).
An inductive definition of the class of 3-connected quadrangulations of the plane
An inductive definition of the class of all 3-connected quadrangulations of the plane is given. The dual inductive definition determines the class of all 3-connected 4-regular planar graphs.
Exactly mergeable summaries
In the analysis of large/big data sets, aggregation (replacing values of a
variable over a group by a single value) is a standard way of reducing the size
(complexity) of the data. Data analysis programs provide different aggregation
functions.
Recently some books dealing with the theoretical and algorithmic background
of traditional aggregation functions were published. A problem with traditional
aggregation is that often too much information is discarded thus reducing the
precision of the obtained results. A much better summarization of the original
data, preserving more information, can be achieved by representing aggregated
data using selected types of complex data.
In complex data analysis the measured values over a selected group are
aggregated into a complex object and not into a single value. Most
of the theory of aggregation functions does not apply directly. In our
contribution, we present an attempt to start building a theoretical background
of complex aggregation.
We introduce and discuss exactly mergeable summaries, for which, when merging
disjoint sets of units,
\[ \Sigma(A \cup B) = F(\Sigma(A), \Sigma(B)), \qquad \mbox{for } A \cap B = \emptyset . \]
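A minimal sketch of the property (the names `summary` and `merge` are illustrative, not from the paper): the tuple (count, sum, min, max) is exactly mergeable, because each component of the summary of a disjoint union can be computed from the components of the two partial summaries.

```python
def summary(xs):
    """Sigma: summarize a list of values as (count, sum, min, max)."""
    return (len(xs), sum(xs), min(xs), max(xs))

def merge(s, t):
    """F: combine two summaries of disjoint sets component-wise."""
    return (s[0] + t[0], s[1] + t[1], min(s[2], t[2]), max(s[3], t[3]))

A = [3, 1, 4]
B = [1, 5, 9, 2]

# Exact mergeability: Sigma(A ∪ B) == F(Sigma(A), Sigma(B)) for disjoint A, B.
assert merge(summary(A), summary(B)) == summary(A + B)
```

Note that the mean alone is not exactly mergeable (merging two means needs the counts), but the pair (count, sum) is, and the mean is recoverable from it -- a typical reason to store a slightly richer summary than the quantity of interest.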
- …