Harmonic Labeling of Graphs
Which graphs admit an integer-valued harmonic function that is injective and
surjective onto ℤ? Such a function, which we call a harmonic labeling, is
constructed when the graph is the square grid. It is shown that for any
finite graph containing at least one edge, there is no harmonic labeling of …
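The discrete mean-value property behind harmonic labelings can be sketched in a few lines; this is a minimal illustration, assuming the square grid ℤ², and the function name `is_harmonic_at` is ours, not the paper's:

```python
def is_harmonic_at(f, x, y):
    """Check the discrete mean-value property of f at grid point (x, y):
    4*f(x, y) must equal the sum of f over the four grid neighbours."""
    neighbours = [f(x + 1, y), f(x - 1, y), f(x, y + 1), f(x, y - 1)]
    return 4 * f(x, y) == sum(neighbours)

# The linear function f(x, y) = x is harmonic everywhere on the grid,
# but it is not injective; a harmonic labeling must additionally hit
# every integer exactly once.
print(is_harmonic_at(lambda x, y: x, 3, 5))
```

A harmonic labeling is then a function satisfying this property at every vertex while also being a bijection onto ℤ, which is the combination the abstract asks about.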
Harmonic Mean Cordial labeling of some graphs
All graphs considered in this article are simple and undirected. Let G = (V(G), E(G)) be a simple undirected graph. A function f : V(G) → {1, 2} is called Harmonic Mean Cordial if the induced function f* : E(G) → {1, 2} defined by f*(uv) = ⌊2f(u)f(v)/(f(u)+f(v))⌋ satisfies |vf(i) − vf(j)| ≤ 1 and |ef(i) − ef(j)| ≤ 1 for any i, j ∈ {1, 2}, where vf(x) and ef(x) denote the number of vertices and the number of edges with label x, respectively, and ⌊x⌋ denotes the greatest integer less than or equal to x. A graph G is called Harmonic Mean Cordial if it admits a Harmonic Mean Cordial labeling. In this article, we provide some graphs which are Harmonic Mean Cordial and some which are not.
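The definition above is mechanical enough to check directly. The following is a minimal sketch (helper names are ours, not the article's) that computes the induced edge labels and tests both balance conditions:

```python
def hmc_edge_label(fu, fv):
    """Induced edge label: floor of the harmonic mean 2*f(u)*f(v)/(f(u)+f(v)).
    For labels in {1, 2} this is always 1 unless both endpoints are 2."""
    return (2 * fu * fv) // (fu + fv)

def is_harmonic_mean_cordial(vertices, edges, f):
    """Check whether the vertex labeling f: V -> {1, 2} is Harmonic Mean
    Cordial: vertex labels and induced edge labels each differ by at most 1."""
    v_count = {1: 0, 2: 0}
    for v in vertices:
        v_count[f[v]] += 1
    e_count = {1: 0, 2: 0}
    for u, v in edges:
        e_count[hmc_edge_label(f[u], f[v])] += 1
    return (abs(v_count[1] - v_count[2]) <= 1
            and abs(e_count[1] - e_count[2]) <= 1)

# A single edge labeled (1, 2): vertex counts are balanced, and the one
# induced edge label is floor(4/3) = 1, so the labeling is HMC.
print(is_harmonic_mean_cordial(["u", "v"], [("u", "v")], {"u": 1, "v": 2}))
```

Note that with labels restricted to {1, 2}, an edge only receives label 2 when both endpoints are labeled 2, which is the structural fact driving the non-existence results the abstract mentions.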
ODD HARMONIOUS LABELING ON SOME STRING GRAPH CLASSES
A graph that admits an odd harmonious labeling is called an odd harmonious graph. The purpose of this research was to obtain odd harmonious labelings on the class of string graphs, using a qualitative research method. The definition and construction of a string graph, the union of string graphs, and the multiple string graph are obtained. Furthermore, it is proved that a string graph, the union of string graphs, and the multiple string graph are all odd harmonious graphs.
Hierarchical Subquery Evaluation for Active Learning on a Graph
To train good supervised and semi-supervised object classifiers, it is
critical that we not waste the time of the human experts who are providing the
training labels. Existing active learning strategies can have uneven
performance, being efficient on some datasets but wasteful on others, or
inconsistent just between runs on the same dataset. We propose perplexity-based
graph construction and a new hierarchical subquery evaluation algorithm to
combat this variability and to release the potential of Expected Error
Reduction.
Under some specific circumstances, Expected Error Reduction has been one of
the strongest-performing informativeness criteria for active learning. Until
now, it has also been prohibitively costly to compute for sizeable datasets. We
demonstrate our highly practical algorithm, comparing it to other active
learning measures on classification datasets that vary in sparsity,
dimensionality, and size. Our algorithm is consistent over multiple runs and
achieves high accuracy, while querying the human expert for labels at a
frequency that matches their desired time budget. Comment: CVPR 201…
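Expected Error Reduction itself is simple to state in code, even though it is expensive to compute. The following is a generic sketch of the criterion, not the paper's accelerated algorithm; the `retrain` helper is a hypothetical stand-in for refitting the model with one extra labeled point:

```python
import numpy as np

def expected_error_reduction(probs, candidate_idx, retrain):
    """Generic Expected Error Reduction score for one candidate query.

    probs:          current class probabilities on the unlabeled pool,
                    shape (n_unlabeled, n_classes)
    candidate_idx:  index of the unlabeled point being scored
    retrain(i, y):  hypothetical helper returning the model's probabilities
                    on the pool after adding point i with assumed label y

    Returns the expected remaining error after querying the candidate,
    averaged over its possible labels (lower is better to query).
    """
    expected_error = 0.0
    for y, p_y in enumerate(probs[candidate_idx]):
        new_probs = retrain(candidate_idx, y)
        # Proxy for expected 0/1 error: 1 - max class probability per point.
        expected_error += p_y * np.sum(1.0 - new_probs.max(axis=1))
    return expected_error
```

The cost is one model refit per candidate per possible label, which is why naive EER scales poorly; the paper's contribution is making this criterion practical on sizeable datasets.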