
    The Distribution of the Largest Non-trivial Eigenvalues in Families of Random Regular Graphs

    Recently, Friedman proved Alon's conjecture for many families of d-regular graphs, namely that given any epsilon > 0 `most' graphs have their largest non-trivial eigenvalue at most 2 sqrt{d-1}+epsilon in absolute value; if the absolute value of the largest non-trivial eigenvalue is at most 2 sqrt{d-1} then the graph is said to be Ramanujan. These graphs have important applications in communication network theory, allowing the construction of superconcentrators and nonblocking networks, coding theory and cryptography. As many of these applications depend on the size of the largest non-trivial positive and negative eigenvalues, it is natural to investigate their distributions. We show these are well-modeled by the beta=1 Tracy-Widom distribution for several families. If the observed growth rates of the mean and standard deviation as a function of the number of vertices hold in the limit, then in the limit approximately 52% of d-regular graphs from bipartite families should be Ramanujan, and about 27% from non-bipartite families (assuming the largest positive and negative eigenvalues are independent). Comment: 23 pages, version 2 (MAJOR correction: see footnote 7 on page 7: the eigenvalue program unknowingly assumed the eigenvalues of the matrix were symmetric, which is only true for bipartite graphs; thus the second largest positive eigenvalue was returned instead of the largest non-trivial eigenvalue). To appear in Experimental Mathematics.
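    The Ramanujan condition above is straightforward to check empirically. The following is a minimal sketch (not the authors' code, which had to correct a subtlety about bipartite spectra) that samples random d-regular graphs with networkx and tests whether the largest non-trivial eigenvalue is at most 2 sqrt{d-1} in absolute value; the graph size and number of trials are arbitrary choices.

```python
# Minimal sketch: sample random d-regular graphs and check the Ramanujan bound.
# Assumes the sampled graph is connected and non-bipartite (almost surely the
# case for this construction), so -d is not a trivial eigenvalue.
import numpy as np
import networkx as nx

def largest_nontrivial_eigenvalue(d, n, seed=None):
    G = nx.random_regular_graph(d, n, seed=seed)
    A = nx.to_numpy_array(G)
    eigs = np.sort(np.linalg.eigvalsh(A))            # real eigenvalues, ascending
    # The trivial eigenvalue of a connected d-regular graph is d (the largest);
    # the largest non-trivial eigenvalue in absolute value is then
    # max(|second largest|, |smallest|).
    return max(abs(eigs[-2]), abs(eigs[0]))

d, n, trials = 3, 500, 20
bound = 2 * np.sqrt(d - 1)
ramanujan = sum(largest_nontrivial_eigenvalue(d, n, seed=s) <= bound
                for s in range(trials))
print(f"{ramanujan}/{trials} sampled {d}-regular graphs on {n} vertices are Ramanujan")
```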

    Universality in a class of fragmentation-coalescence processes

    We introduce and analyse a class of fragmentation-coalescence processes defined on finite systems of particles organised into clusters. Coalescent events merge multiple clusters simultaneously to form a single larger cluster, while fragmentation breaks up a cluster into a collection of singletons. Under mild conditions on the coalescence rates, we show that the distribution of cluster sizes becomes non-random in the thermodynamic limit. Moreover, we discover that in the limit of small fragmentation rate these processes exhibit self-organised criticality in the cluster size distribution, with universal exponent 3/2. Comment: 17 pages, 1 figure.
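    The toy Monte Carlo sketch below illustrates the kind of dynamics described above. It is not the paper's model: it assumes pairwise coalescence at unit rate per pair and fragmentation of each cluster into singletons at rate lam, whereas the paper's class allows simultaneous multiple mergers under general rate conditions.

```python
# Toy fragmentation-coalescence simulation on n particles (illustrative only).
import random

def simulate(n=200, lam=0.05, steps=10_000, seed=0):
    rng = random.Random(seed)
    clusters = [1] * n                      # start from n singletons
    for _ in range(steps):
        k = len(clusters)
        coal_rate = k * (k - 1) / 2         # total pairwise coalescence rate
        frag_rate = lam * k                 # total fragmentation rate
        if rng.random() < frag_rate / (coal_rate + frag_rate):
            # fragmentation: a uniformly chosen cluster shatters into singletons
            size = clusters.pop(rng.randrange(k))
            clusters.extend([1] * size)
        else:
            # coalescence: merge two uniformly chosen distinct clusters
            i, j = rng.sample(range(k), 2)
            merged = clusters[i] + clusters[j]
            clusters = [c for idx, c in enumerate(clusters) if idx not in (i, j)]
            clusters.append(merged)
    return clusters

sizes = simulate()
print("ten largest cluster sizes:", sorted(sizes, reverse=True)[:10])
```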

    FedRR: a federated resource reservation algorithm for multimedia services

    The Internet is rapidly evolving towards a multimedia service delivery platform. However, existing Internet-based content delivery approaches have several disadvantages, such as the lack of Quality of Service (QoS) guarantees. Future Internet research has presented several promising ideas to solve the issues related to the current Internet, such as federations across network domains and end-to-end QoS reservations. This paper presents an architecture for the delivery of multimedia content across the Internet, based on these novel principles. It facilitates the collaboration between the stakeholders involved in the content delivery process, allowing them to set up loosely-coupled federations. More specifically, the Federated Resource Reservation (FedRR) algorithm is proposed. It identifies suitable federation partners, selects end-to-end paths between content providers and their customers, and optimally configures intermediary network and infrastructure resources in order to satisfy the requested QoS requirements and minimize delivery costs.
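    As a rough illustration of the path-selection step described above, the sketch below picks the cheapest end-to-end path across federated domains whose bandwidth and delay satisfy the requested QoS. The topology, link attributes and function names are hypothetical; this is not the paper's FedRR algorithm, which also covers partner identification and resource configuration.

```python
# Hypothetical QoS-constrained path selection across federated domains.
import heapq

def cheapest_qos_path(links, src, dst, min_bw, max_delay):
    """links: {node: [(neighbor, cost, bandwidth_mbps, delay_ms), ...]}."""
    # Uniform-cost search over simple paths, discarding edges below the
    # bandwidth requirement and partial paths that exceed the delay budget.
    heap = [(0, 0, src, [src])]             # (cost, delay, node, path)
    while heap:
        cost, delay, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, delay, path        # cheapest feasible path found
        for nxt, c, bw, d in links.get(node, []):
            if bw >= min_bw and delay + d <= max_delay and nxt not in path:
                heapq.heappush(heap, (cost + c, delay + d, nxt, path + [nxt]))
    return None                             # no feasible federation path

links = {
    "provider": [("domainA", 2, 100, 10), ("domainB", 1, 20, 5)],
    "domainA": [("customer", 3, 80, 15)],
    "domainB": [("customer", 1, 10, 5)],
}
print(cheapest_qos_path(links, "provider", "customer", min_bw=50, max_delay=40))
```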

    Private Matchings and Allocations

    We consider a private variant of the classical allocation problem: given k goods and n agents with individual, private valuation functions over bundles of goods, how can we partition the goods amongst the agents to maximize social welfare? An important special case is when each agent desires at most one good, and specifies her (private) value for each good: in this case, the problem is exactly the maximum-weight matching problem in a bipartite graph. Private matching and allocation problems have not been considered in the differential privacy literature, and for good reason: they are plainly impossible to solve under differential privacy. Informally, the allocation must match agents to their preferred goods in order to maximize social welfare, but this preference is exactly what agents wish to hide. Therefore, we consider the problem under the relaxed constraint of joint differential privacy: for any agent i, no coalition of agents excluding i should be able to learn about the valuation function of agent i. In this setting, the full allocation is no longer published---instead, each agent is told what good to get. We first show that with a small number of identical copies of each good, it is possible to efficiently and accurately solve the maximum-weight matching problem while guaranteeing joint differential privacy. We then consider the more general allocation problem, when bidder valuations satisfy the gross substitutes condition. Finally, we prove that the allocation problem cannot be solved to non-trivial accuracy under joint differential privacy without requiring multiple copies of each type of good. Comment: Journal version published in SIAM Journal on Computing; an extended abstract appeared in STOC 2014.
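    For concreteness, the special case mentioned above (each agent wants at most one good) reduces to maximum-weight bipartite matching. The snippet below computes only the non-private baseline with SciPy; it does not reproduce the paper's jointly differentially private mechanism.

```python
# Non-private baseline: maximum-weight matching of agents to goods.
import numpy as np
from scipy.optimize import linear_sum_assignment

valuations = np.array([            # valuations[i][j] = agent i's value for good j
    [3.0, 1.0, 0.0],
    [2.0, 2.5, 1.0],
    [0.5, 0.5, 4.0],
])
agents, goods = linear_sum_assignment(valuations, maximize=True)
print(list(zip(agents, goods)), "social welfare =", valuations[agents, goods].sum())
```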

    Mobile, collaborative augmented reality using cloudlets

    The evolution in mobile applications to support advanced interactivity and demanding multimedia features is still ongoing. Novel application concepts (e.g. mobile Augmented Reality (AR)) are however hindered by the inherently limited resources available on mobile platforms (notwithstanding the dramatic performance increases of mobile hardware). Offloading resource-intensive application components to the cloud, also known as "cyber foraging", has proven to be a valuable solution in a variety of scenarios. This offloading concept is also highly promising for collaborative scenarios, in which data and its processing are shared between multiple users. In this paper, we investigate the challenges posed by offloading collaborative mobile applications. We present a middleware platform capable of autonomously deploying software components to minimize average CPU load, while guaranteeing smooth collaboration. As a use case, we present and evaluate a collaborative AR application, offering interaction between users, with the physical environment, as well as with the virtual objects superimposed on this physical environment.
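    A hypothetical sketch of the kind of placement decision such a middleware has to make is shown below: components are assigned to the mobile device or a cloudlet so that relative CPU load stays balanced, while latency-critical components stay local. The component names, capacities and greedy strategy are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical greedy component placement between device and cloudlet.
def place_components(components, device_capacity, cloudlet_capacity):
    """components: list of (name, cpu_load, latency_critical)."""
    placement, load = {}, {"device": 0.0, "cloudlet": 0.0}
    capacity = {"device": device_capacity, "cloudlet": cloudlet_capacity}
    # Place heavy components first so the greedy choice balances load better.
    for name, cpu, critical in sorted(components, key=lambda c: -c[1]):
        if critical:
            target = "device"               # keep latency-critical work local
        else:
            # Pick the side with the lower relative utilisation after placement.
            target = min(("device", "cloudlet"),
                         key=lambda t: (load[t] + cpu) / capacity[t])
        placement[name] = target
        load[target] += cpu
    return placement, load

components = [("tracker", 0.6, True), ("renderer", 0.8, False),
              ("object_recognition", 1.5, False), ("world_model", 0.4, False)]
print(place_components(components, device_capacity=2.0, cloudlet_capacity=8.0))
```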

    What would a US policy of 'restraint' mean for the Warsaw NATO Summit?

    This review discusses how a decision by the United States to pursue the policy recommended by Barry Posen in his book ‘Restraint’ would play out in the NATO alliance. The review does so by examining what ‘restraint’ would mean for how the alliance faces the problems of European divisions (including a potential Brexit and unravelling or fragmentation of the European Union), continued low levels of European defence spending, European perceptions of US indifference, high-handedness and isolationist attitudes in its presidential race, the spectre of Russian aggression and involvement in the Middle East, and ongoing debates about the alliance’s purpose in the face of challenges ranging from traditional military threats through to the refugee crisis in the Mediterranean.

    Adaptive Hedge

    Most methods for decision-theoretic online learning are based on the Hedge algorithm, which takes a parameter called the learning rate. In most previous analyses the learning rate was carefully tuned to obtain optimal worst-case performance, leading to suboptimal performance on easy instances, for example when there exists an action that is significantly better than all others. We propose a new way of setting the learning rate, which adapts to the difficulty of the learning problem: in the worst case our procedure still guarantees optimal performance, but on easy instances it achieves much smaller regret. In particular, our adaptive method achieves constant regret in a probabilistic setting, when there exists an action that on average obtains strictly smaller loss than all other actions. We also provide a simulation study comparing our approach to existing methods. Comment: This is the full version of the paper with the same name that will appear in Advances in Neural Information Processing Systems 24 (NIPS 2011), 2012. The two papers are identical, except that this version contains an extra section of Additional Material.
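    As a reference point, here is a minimal sketch of the basic Hedge (exponential weights) algorithm with a standard time-varying learning rate eta_t = sqrt(8 ln(K) / t) for losses in [0, 1]. The paper's contribution is a different, data-dependent rule for setting the learning rate, which is not reproduced here.

```python
# Basic Hedge with a standard anytime learning-rate schedule (illustrative).
import numpy as np

def hedge(loss_matrix):
    """loss_matrix: T x K array of losses in [0, 1] for K actions over T rounds."""
    T, K = loss_matrix.shape
    cum_loss = np.zeros(K)
    total_alg_loss = 0.0
    for t in range(1, T + 1):
        eta = np.sqrt(8 * np.log(K) / t)
        w = np.exp(-eta * (cum_loss - cum_loss.min()))   # shift for stability
        p = w / w.sum()                                  # distribution over actions
        losses = loss_matrix[t - 1]
        total_alg_loss += p @ losses                     # expected loss this round
        cum_loss += losses
    return total_alg_loss, total_alg_loss - cum_loss.min()   # loss and regret

rng = np.random.default_rng(0)
losses = rng.random((1000, 5))
losses[:, 0] *= 0.5                                      # one clearly better action
print("algorithm loss, regret:", hedge(losses))
```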

    Resource-constrained classification using a cascade of neural network layers

    Deep neural networks are the state-of-the-art technique for a wide variety of classification problems. Although deeper networks are able to make more accurate classifications, the value brought by an additional hidden layer diminishes rapidly. Even shallow networks are able to achieve relatively good results on various classification problems. Only for a small subset of the samples do the deeper layers make a significant difference. We describe an architecture in which only the samples that cannot be classified with sufficient confidence by a shallow network have to be processed by the deeper layers. Instead of training a network with one output layer at the end of the network, we train several output layers, one for each hidden layer. When an output layer is sufficiently confident in its result, we stop propagating at that layer and the deeper layers need not be evaluated. The choice of a confidence threshold allows us to trade off accuracy and speed. Applied in the Internet-of-Things (IoT) context, this approach makes it possible to distribute the layers of a neural network between low-powered devices and powerful servers in the cloud. We only need the remote layers when the local layers are unable to make an accurate classification. Such an architecture adds the intelligence of a deep neural network to resource-constrained devices such as sensor nodes and various IoT devices. We evaluated our approach on the MNIST and CIFAR10 datasets. On the MNIST dataset, we retain the same accuracy at half the computational cost. On the more difficult CIFAR10 dataset, we were able to obtain a relative speed-up of 33% at a marginal increase in error rate from 15.3% to 15.8%.
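    A minimal PyTorch sketch of the early-exit idea described above is given below. The layer sizes, the confidence measure (maximum softmax probability) and the threshold value are illustrative assumptions rather than the paper's exact configuration.

```python
# Illustrative early-exit cascade: one output head per hidden layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, classes=10, n_layers=3):
        super().__init__()
        self.hidden_layers = nn.ModuleList()
        self.exits = nn.ModuleList()            # one output layer per hidden layer
        d = in_dim
        for _ in range(n_layers):
            self.hidden_layers.append(nn.Linear(d, hidden))
            self.exits.append(nn.Linear(hidden, classes))
            d = hidden

    def forward(self, x):
        # Training: return the logits of every exit so each output layer gets a loss.
        outs, h = [], x
        for layer, exit_head in zip(self.hidden_layers, self.exits):
            h = F.relu(layer(h))
            outs.append(exit_head(h))
        return outs

    @torch.no_grad()
    def predict(self, x, threshold=0.9):
        # Inference: stop at the first exit whose confidence clears the threshold,
        # so deeper (possibly remote) layers are only evaluated when needed.
        h = x
        for depth, (layer, exit_head) in enumerate(zip(self.hidden_layers, self.exits)):
            h = F.relu(layer(h))
            probs = F.softmax(exit_head(h), dim=-1)
            conf, pred = probs.max(dim=-1)
            if conf.item() >= threshold or depth == len(self.hidden_layers) - 1:
                return pred.item(), depth

model = EarlyExitNet()
label, exit_depth = model.predict(torch.randn(1, 784))
print(f"predicted class {label} at exit {exit_depth}")
```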