The Complexity of Rationalizing Network Formation
We study the complexity of rationalizing network formation. In this problem we fix an underlying model describing how selfish parties (the vertices) produce a graph by making individual decisions to form or not form incident edges. The model is equipped with a notion of stability (or equilibrium), and we observe a set of "snapshots" of graphs that are assumed to be stable. From this we would like to infer some unobserved data about the system: edge prices, or how much each vertex values short paths to each other vertex. We study two rationalization problems arising from the network formation model of Jackson and Wolinsky [14]. When the goal is to infer edge prices, we observe that the rationalization problem is easy. The problem remains easy even when rationalizing prices do not exist and we instead wish to find prices that maximize the stability of the system. In contrast, when the edge prices are given and the goal is instead to infer valuations of each vertex by each other vertex, we prove that the rationalization problem becomes NP-hard. Our proof exposes a close connection between rationalization problems and the Inequality-SAT (I-SAT) problem. Finally and most significantly, we prove that an approximation version of this NP-complete rationalization problem is NP-hard to approximate to within better than a 1/2 ratio. This shows that the trivial algorithm of setting everyone's valuations to infinity (which rationalizes all the edges present in the input graphs) or to zero (which rationalizes all the non-edges present in the input graphs) is the best possible assuming P ≠ NP. To do this we prove a tight (1/2 + δ)-approximation hardness for a variant of I-SAT in which all coefficients are non-negative.
This in turn follows from a tight hardness result for MAX-LIN_(R_+) (linear equations over the reals, with non-negative coefficients), which we prove by a (non-trivial) modification of the recent result of Guruswami and Raghavendra [10], which achieved tight hardness for this problem without the non-negativity constraint. Our technical contributions regarding the hardness of I-SAT and MAX-LIN_(R_+) may be of independent interest, given the generality of these problems.
Low Complexity Encoding for Network Codes
In this paper we consider the per-node run-time complexity of network multicast codes. We show that the randomized algebraic network code design algorithms described extensively in the literature result in codes that on average require a number of operations that scales quadratically with the blocklength m of the codes. We then propose an alternative type of linear network code whose complexity scales linearly in m and still enjoys the attractive properties of random algebraic network codes. We also show that these codes are optimal in the sense that any rate-optimal linear network code must have at least a linear scaling in run-time complexity.
Deep Neural Network and Data Augmentation Methodology for Off-Axis Iris Segmentation in Wearable Headsets
A data augmentation methodology is presented and applied to generate a large
dataset of off-axis iris regions and train a low-complexity deep neural
network. Although of low complexity, the resulting network achieves a high level
of accuracy in iris region segmentation for challenging off-axis eye-patches.
Interestingly, this network is also shown to achieve high levels of performance
for regular, frontal, segmentation of iris regions, comparing favorably with
state-of-the-art techniques of significantly higher complexity. Due to its
lower complexity, this network is well suited for deployment in embedded
applications such as augmented and mixed reality headsets.
Active Virtual Network Management Prediction: Complexity as a Framework for Prediction, Optimization, and Assurance
Research into active networking has provided the incentive to revisit what
have traditionally been classified as distinct properties and characteristics
of information transfer, such as protocol versus service; at a more
fundamental level, this paper considers the blending of computation and
communication by means of complexity. The specific service examined in this paper is network
self-prediction enabled by Active Virtual Network Management Prediction.
Computation/communication is analyzed via Kolmogorov Complexity. The result is
a mechanism to understand and improve the performance of active networking and
Active Virtual Network Management Prediction in particular. The Active Virtual
Network Management Prediction mechanism allows information, in various states
of algorithmic and static form, to be transported in the service of prediction
for network management. The results are generally applicable to algorithmic
transmission of information. Kolmogorov Complexity is used and experimentally
validated as a theory describing the relationship among algorithmic
compression, complexity, and prediction accuracy within an active network.
Finally, the paper concludes with a complexity-based framework for Information
Assurance that attempts to take a holistic view of vulnerability analysis.
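The relationship between algorithmic compression and complexity that this abstract describes can be illustrated with a standard proxy: Kolmogorov complexity itself is uncomputable, but the compressed size of a string gives a practical upper bound on its description length. The following minimal sketch is purely illustrative and is not the paper's actual experiment; the choice of zlib and the toy data are assumptions.

```python
# Compressed size as a practical upper bound on Kolmogorov complexity:
# an algorithmically simple stream compresses far better than a random one.
import random
import zlib

regular = ("0123456789" * 100).encode()  # 1000 bytes, algorithmically simple
random.seed(42)
noisy = bytes(random.randrange(256) for _ in range(1000))  # near-incompressible

c_regular = len(zlib.compress(regular, 9))
c_noisy = len(zlib.compress(noisy, 9))
print(c_regular, c_noisy)  # the regular stream compresses far more
```

The gap between the two compressed sizes is the kind of signal that, in the paper's framing, relates algorithmic compressibility to predictability: streams with short descriptions are the ones an active network can model and predict cheaply.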
Does network complexity help organize Babel's library?
In this work, we study properties of texts from the perspective of complex
network theory. Words in given texts are linked by co-occurrence and
transformed into networks, and we observe that these display topological
properties common to other complex systems. However, there are some properties
that seem to be exclusive to texts; many of these properties depend on the
frequency of words in the text, while others seem to be strictly determined by
the grammar. Notably, these properties allow texts to be categorized as
either meaningful, whether plainly written or encoded, or senseless.
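The co-occurrence construction described above can be sketched as follows. The window-of-one adjacency rule and the toy sentence are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch: turn a text into an undirected word co-occurrence network.
from collections import defaultdict

def cooccurrence_network(text, window=1):
    """Each distinct word is a node; two words are linked if they occur
    within `window` positions of each other in the text."""
    words = text.lower().split()
    adj = defaultdict(set)  # node -> set of neighboring words
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + 1 + window, len(words))):
            if w != words[j]:
                adj[w].add(words[j])
                adj[words[j]].add(w)
    return adj

# Toy example (illustrative only)
adj = cooccurrence_network("the cat sat on the mat the cat slept")
print(sorted(adj["the"]))  # → ['cat', 'mat', 'on']
```

On real corpora, the degree distribution and clustering of such graphs are the topological properties the abstract refers to; frequent function words like "the" become hubs, which is one way word frequency shapes the network's structure.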
Riemannian-geometric entropy for measuring network complexity
A central issue of the science of complex systems is the quantitative
characterization of complexity. In the present work we address this issue by
resorting to information geometry. Specifically, we propose a constructive way
to associate with a network (in principle, any network) a differentiable
object (a Riemannian manifold) whose volume is used to define an entropy. The
effectiveness of this entropy as a measure of network complexity is
demonstrated through its capability of detecting a classical phase transition
occurring in both random graphs and scale-free networks, as well as of
characterizing small Exponential random graphs, Configuration Models and real
networks.
Comment: 15 pages, 3 figures