Most primitive groups are full automorphism groups of edge-transitive hypergraphs
We prove that, for a primitive permutation group G acting on a set of size n,
other than the alternating group, the probability that Aut(X,Y^G) = G for a
random subset Y of X, tends to 1 as n tends to infinity. So the property of the
title holds for all primitive groups except the alternating groups and finitely
many others. This answers a question of M. Klin. Moreover, we give an upper
bound n^{1/2+\epsilon} for the minimum size of the edges in such a hypergraph.
This is essentially best possible.
Comment: To appear in a special issue of the Journal of Algebra in memory of Ákos Seress
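The object Aut(X, Y^G) in the abstract can be made concrete on a tiny instance. The sketch below (our own illustration, not taken from the paper) fixes a small primitive group G acting on X, forms the hypergraph whose edge set is the G-orbit of a subset Y, and brute-forces every permutation of X that preserves the edges. The choice of G = C_5 and Y = {0, 1} is ours; for this particular pair, Aut strictly contains G (it is the dihedral group D_5), which is exactly the exceptional behaviour the paper shows becomes rare as n grows.

```python
# Illustration (not from the paper): build the edge set Y^G for a small
# primitive group G acting on X, then brute-force Aut(X, Y^G).
from itertools import permutations

def orbit_of_subset(generators, Y, n):
    """Edge set Y^G: closure of frozenset(Y) under the given generators."""
    edges = {frozenset(Y)}
    frontier = list(edges)
    while frontier:
        e = frontier.pop()
        for g in generators:
            img = frozenset(g[x] for x in e)
            if img not in edges:
                edges.add(img)
                frontier.append(img)
    return edges

def hypergraph_automorphisms(edges, n):
    """All p in Sym(n) preserving the edge set (brute force, tiny n only)."""
    return [p for p in permutations(range(n))
            if {frozenset(p[x] for x in e) for e in edges} == edges]

# G = cyclic group C_5 acting on X = {0,...,4} (primitive, prime degree).
n = 5
rot = tuple((i + 1) % n for i in range(n))   # generator of C_5
edges = orbit_of_subset([rot], {0, 1}, n)    # Y = {0, 1}: edges form a 5-cycle
auts = hypergraph_automorphisms(edges, n)
# Here |Aut| = 10 (the dihedral group D_5) while |G| = 5, so Aut != G:
# a small instance of the exceptions the theorem allows.
print(len(edges), len(auts))
```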
On The Robustness of a Neural Network
With the development of neural networks based machine learning and their
usage in mission critical applications, voices are rising against the
\textit{black box} aspect of neural networks as it becomes crucial to
understand their limits and capabilities. With the rise of neuromorphic
hardware, it is even more critical to understand how a neural network, as a
distributed system, tolerates the failures of its computing nodes, neurons, and
its communication channels, synapses. Experimentally assessing the robustness
of neural networks involves the quixotic venture of testing all possible
failures on all possible inputs, which runs into a combinatorial explosion
for the former and the impossibility of gathering every possible input for
the latter.
In this paper, we prove an upper bound on the expected error of the output
when a subset of neurons crashes. This bound involves dependencies on the
network parameters that can be seen as being too pessimistic in the average
case. It involves a polynomial dependency on the Lipschitz coefficient of the
neurons' activation function, and an exponential dependency on the depth of the
layer where a failure occurs. We back up our theoretical results with
experiments illustrating the extent to which our prediction matches the
dependencies between the network parameters and robustness. Our results show
that the robustness of neural networks to the average crash can be estimated
without the need either to test the network on all failure configurations or
to access the training set used to train the network, both of which are
practically impossible requirements.
Comment: 36th IEEE International Symposium on Reliable Distributed Systems, 26-29 September 2017, Hong Kong, China
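The kind of experiment the abstract describes can be sketched in a few lines. The toy setup below is ours, not the paper's exact model: a tiny fully connected network with 1-Lipschitz tanh activations, where a "crash" forces one hidden neuron's output to 0, and the expected output error is estimated by Monte-Carlo sampling over random inputs instead of enumerating all failure configurations.

```python
# Hedged sketch (our own toy model): Monte-Carlo estimate of the expected
# output error E|f(x) - f_crashed(x)| when one hidden neuron crashes.
import math, random

random.seed(0)

def layer(weights, inputs, crashed=frozenset()):
    """Dense layer with 1-Lipschitz tanh activation; crashed units emit 0."""
    return [0.0 if j in crashed
            else math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for j, row in enumerate(weights)]

def forward(net, x, crash_layer=None, crashed=frozenset()):
    for i, W in enumerate(net):
        x = layer(W, x, crashed if i == crash_layer else frozenset())
    return x

# Random 3-layer net with widths 4 -> 4 -> 4 -> 1 (arbitrary choice).
widths = [4, 4, 4, 1]
net = [[[random.uniform(-1, 1) for _ in range(widths[i])]
        for _ in range(widths[i + 1])] for i in range(3)]

# Crash neuron 0 of the first hidden layer; average the error over inputs.
errs = []
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(4)]
    y = forward(net, x)[0]
    y_crash = forward(net, x, crash_layer=0, crashed={0})[0]
    errs.append(abs(y - y_crash))
print(sum(errs) / len(errs))
```

Because tanh outputs lie in (-1, 1), the per-sample error is bounded by 2; the empirical mean gives a cheap estimate of the quantity the paper bounds analytically.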
Angular behavior of the absorption limit in thin film silicon solar cells
We investigate the angular behavior of the upper bound of absorption provided
by the guided modes in thin film solar cells. We show that the 4n^2 limit can
be potentially exceeded in a wide angular and wavelength range using
two-dimensional periodic thin film structures. Two models are used to estimate
the absorption enhancement: in the first, we apply a periodicity condition
along the thickness of the thin film structure, while in the second we
consider imperfect confinement of the wave to the device. To extract the
guided modes, we use an automated procedure established in this
work. Through examples, we show that from the optical point of view, thin film
structures have a high potential to be improved by changing their shape. Also,
we discuss the nature of different optical resonances which can be potentially
used to enhance light trapping in the solar cell. We investigate the two
different polarization directions for one-dimensional gratings and we show that
the transverse magnetic polarization can provide higher values of absorption
enhancement. We also propose a way to reduce the angular dependence of the
solar cell efficiency by the appropriate choice of periodic pattern. Finally,
to obtain more practical values for the absorption enhancement, we consider the
effect of parasitic loss, which can significantly reduce the enhancement factor.
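The 4n^2 limit referenced in the abstract is the classical Lambertian (Yablonovitch) light-trapping bound on the absorption path-length enhancement in a medium of refractive index n. A one-line back-of-envelope check (ours, not the paper's):

```python
# Classical Lambertian light-trapping limit: enhancement factor 4 n^2.
def yablonovitch_limit(n_index):
    return 4.0 * n_index ** 2

# Crystalline silicon has n of roughly 3.5 in the near-infrared,
# giving the familiar enhancement factor of about 49.
print(yablonovitch_limit(3.5))  # prints 49.0
```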
Core Decomposition in Multilayer Networks: Theory, Algorithms, and Applications
Multilayer networks are a powerful paradigm to model complex systems, where
multiple relations occur between the same entities. Despite the keen interest
in a variety of tasks, algorithms, and analyses in this type of network, the
problem of extracting dense subgraphs has remained largely unexplored so far.
In this work we study the problem of core decomposition of a multilayer
network. The multilayer context is much more challenging as no total order exists
among multilayer cores; rather, they form a lattice whose size is exponential
in the number of layers. In this setting we devise three algorithms which
differ in the way they visit the core lattice and in their pruning techniques.
We then move a step forward and study the problem of extracting the
inner-most (also known as maximal) cores, i.e., the cores that are not
dominated by any other core in terms of their core index across all layers.
The inner-most cores are typically orders of magnitude fewer than the full
set of cores.
Motivated by this, we devise an algorithm that effectively exploits the
maximality property and extracts inner-most cores directly, without first
computing a complete decomposition.
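A multilayer core for a vector k = (k_1, ..., k_L) can be computed by the same peeling idea as in the single-layer case. The sketch below assumes the standard definition (the maximal node set in which every node has at least k_i neighbours inside the set, in every layer i); the data layout and the naive peeling loop are our own choices, not the paper's algorithms, which visit the whole core lattice with pruning.

```python
# Naive multilayer k-core by iterative peeling (assumed standard
# definition; the paper's algorithms are far more sophisticated).
def multilayer_core(layers, k):
    """layers: list of {node: set(neighbours)}; k: per-layer thresholds."""
    nodes = set().union(*(adj.keys() for adj in layers))
    changed = True
    while changed:
        changed = False
        for v in list(nodes):
            for ki, adj in zip(k, layers):
                # Degree of v restricted to the surviving node set.
                if len(adj.get(v, set()) & nodes) < ki:
                    nodes.discard(v)
                    changed = True
                    break
    return nodes

# Two layers over {a, b, c, d}: layer 0 is a triangle a-b-c plus a
# pendant edge c-d; layer 1 connects a-b, a-c, and c-d.
L0 = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
L1 = {"a": {"b", "c"}, "b": {"a"}, "c": {"a", "d"}, "d": {"c"}}
print(sorted(multilayer_core([L0, L1], (2, 1))))  # prints ['a', 'b', 'c']
```

Peeling removes d (degree 1 in layer 0) and keeps the triangle, which satisfies both thresholds; the fixed point is order-independent, since only nodes outside the unique maximal core can ever be violated.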
Finally, we showcase the multilayer core-decomposition tool in a variety of
scenarios and problems. We start by considering the problem of densest-subgraph
extraction in multilayer networks. We introduce a definition of multilayer
densest subgraph that trades off high density against the number of layers in
which the high density holds, and exploit multilayer core decomposition to
approximate this problem with quality guarantees. As further applications, we
show how to utilize multilayer core decomposition to speed-up the extraction of
frequent cross-graph quasi-cliques and to generalize the community-search
problem to the multilayer setting.
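An objective in the spirit of the trade-off described above can be written down for a toy instance. The exact objective below is our assumption, not quoted from the abstract: a subgraph scores the minimum, over a chosen subset of layers, of its edge density |E_l(S)|/|S|, rescaled by |layers|^beta so that holding density in more layers is rewarded; the brute-force search is for illustration only.

```python
# Hedged sketch of a multilayer density objective (assumed form):
# score(S, chosen) = min_{l in chosen} |E_l(S)|/|S| * |chosen|^beta.
from itertools import combinations

def edges_within(adj, S):
    """Number of undirected edges of one layer with both endpoints in S."""
    return sum(1 for u in S for v in adj.get(u, ()) if v in S) // 2

def multilayer_density(layers, S, chosen, beta=0.5):
    if not S or not chosen:
        return 0.0
    return (min(edges_within(layers[i], S) / len(S) for i in chosen)
            * len(chosen) ** beta)

def densest(layers, nodes, beta=0.5):
    """Brute force over all node and layer subsets (tiny instances only)."""
    best, best_S = 0.0, set()
    layer_ids = range(len(layers))
    for r in range(1, len(nodes) + 1):
        for S in combinations(nodes, r):
            for m in range(1, len(layers) + 1):
                for chosen in combinations(layer_ids, m):
                    d = multilayer_density(layers, set(S), chosen, beta)
                    if d > best:
                        best, best_S = d, set(S)
    return best, best_S

# Layer 0: triangle a-b-c plus pendant c-d; layer 1: triangle a-b-c.
L0 = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
L1 = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
best, best_S = densest([L0, L1], ["a", "b", "c", "d"])
print(best, sorted(best_S))
```

On this instance the triangle {a, b, c} wins with both layers chosen (density 1 in each, scaled by sqrt(2)), beating any single-layer choice: precisely the density-versus-layers trade-off the abstract describes.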