930 research outputs found

    Random graph models for wireless communication networks

    This thesis concerns mathematical models of wireless communication networks, in particular ad-hoc networks and 802.11 WLANs. In ad-hoc mode, each device may function as a sender, a relay or a receiver, and may only communicate with other devices within its transmission range. We use graph models for the relationship between any two devices: a node stands for a device, and an edge for a communication link, or sometimes an interference relationship. The number of edges incident on a node is the degree of that node. In geometric graphs, the coordinates of a node give its geographical position. One of the important properties of a communication graph is its connectedness: whether all nodes can reach all other nodes. We use the term connectivity, the probability that a graph is connected given the number of nodes and the transmission range, to measure the connectedness of a wireless network. Connectedness is an important prerequisite for any network that requires communication between nodes. This is especially true for wireless ad-hoc networks, where communication relies on contact between nodes and their neighbours. Another important property of an interference graph is its chromatic number: the minimum number of colours needed so that no adjacent nodes are assigned the same colour. Here adjacent nodes share an edge, adjacent edges share at least one node, and colours identify different frequencies. The chromatic number gives the minimum number of frequencies a network needs in order to attain zero interference. This can be posed as a deterministic optimization problem, but it is NP-hard, so finding good asymptotic approximations for this value becomes important.
    Random geometric graphs describe an ensemble of graphs which share common features. In this thesis, node positions follow a Poisson point process or a binomial point process. We use probability theory to study the connectedness of random graphs and random geometric graphs, measured as the fraction of connected graphs among many graph samples. This probability is closely related to the property of the minimum node degree being at least unity. The chromatic number is closely related to the maximum degree: as n → ∞, the chromatic number converges to the maximum degree when the graph is sparse. We test existing theorems and improve them where possible. This motivated us to study the degrees of random (geometric) graph models. We use deterministic methods to study some degree-related problems for Erdős–Rényi random graphs G(n, p) and random geometric graphs G(n, r), providing both theoretical analysis and accurate simulation results. The results lead to a study of dependence or non-dependence in the joint distribution of the degrees of neighbouring nodes. We study the probability of no node being isolated in G(n, p), that is, of the minimum node degree being at least unity. By assuming independence between node degrees, we derive two asymptotics for this probability. The probability of no node being isolated is an approximation to the probability of the graph being connected. By analogy with G(n, p), we study this problem for G(n, r), which is a more realistic model for wireless networks. Experiments show that this asymptotic result also works well for small graphs.
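To make the independence heuristic concrete, here is a minimal Python sketch (our illustration, not code from the thesis) that compares a Monte Carlo estimate of the probability that G(n, p) has no isolated node with the approximation obtained by treating the n node degrees as independent Binomial(n − 1, p) variables, namely (1 − (1 − p)^(n−1))^n.

```python
import random

def no_isolated_prob_mc(n, p, trials=2000):
    """Monte Carlo estimate of P(minimum degree >= 1) in G(n, p)."""
    hits = 0
    for _ in range(trials):
        deg = [0] * n
        for i in range(n):
            for j in range(i + 1, n):
                if random.random() < p:  # edge {i, j} present
                    deg[i] += 1
                    deg[j] += 1
        if min(deg) >= 1:
            hits += 1
    return hits / trials

def no_isolated_prob_indep(n, p):
    """Approximation treating the n degrees as independent:
    each node is non-isolated with probability 1 - (1 - p)**(n - 1)."""
    return (1 - (1 - p) ** (n - 1)) ** n

n, p = 50, 0.1
print(no_isolated_prob_mc(n, p), no_isolated_prob_indep(n, p))
```

For moderate n the two numbers are already close, consistent with the observation that the asymptotic also works well for small graphs.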
    We wish to find the relationship between these basic features and the two important problems of wireless networks described above: the probability of a network being connected, and the minimum number of channels a network needs in order to minimize interference. Inspired by the problem of maximum degree in random graphs, we study the maximum of a set of Poisson or binomial random variables, which leads to two accurate formulae for the mode of the maximum degree, one for general random geometric graphs and one for sparse random graphs. To our knowledge, these are the best results for sparse random geometric graphs in the literature so far. By approximating the node degrees as independent Poisson or binomial variables, we apply these results to the problem of maximum degree in general and sparse G(n, r), and derive much more accurate results than in the existing literature. Combining Penrose's limit theorem with our work, we provide good approximations for the mode of the clique number and chromatic number in sparse G(n, r); again these results are much more accurate than existing ones. This has implications for the interference minimization of WLANs. Finally, we apply our Poisson-based asymptotic result for the chromatic number of a random geometric graph to the interference minimization problem in IEEE 802.11b/g WLANs. Experiments based on the real planned positions of the APs in WLANs show that our asymptotic results accurately estimate the minimum number of channels needed. This also means that sparse random geometric graphs are good models for the interference minimization problem of WLANs. We discuss the interference minimization problem in single-radio and multi-radio wireless networking scenarios, and study branch-and-bound algorithms for these scenarios by selecting different constraint functions and objective functions.
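The connection between the chromatic number and channel assignment can be illustrated with a short sketch. The Python fragment below (an illustration under a simple disk interference model with hypothetical AP coordinates, not the thesis's branch-and-bound code) builds an interference graph and colours it greedily; the number of colours used is an upper bound on the number of channels needed.

```python
import math
import random

def greedy_channels(positions, r):
    """Greedy colouring of the interference graph: APs closer than r
    interfere and must use different channels.  Returns the number of
    channels used (an upper bound on the chromatic number)."""
    n = len(positions)
    adj = [[j for j in range(n)
            if j != i and math.dist(positions[i], positions[j]) < r]
           for i in range(n)]
    colour = {}
    # colour high-degree APs first (a common greedy heuristic)
    for i in sorted(range(n), key=lambda v: -len(adj[v])):
        used = {colour[j] for j in adj[i] if j in colour}
        colour[i] = next(c for c in range(n) if c not in used)
    return max(colour.values()) + 1

aps = [(random.random(), random.random()) for _ in range(30)]  # hypothetical AP layout
print(greedy_channels(aps, r=0.25))
```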

    Optimization for Networks and Object Recognition

    The present thesis explores two different application areas of combinatorial optimization; the work presented is twofold, dealing with two distinct problems, one related to data transfer in networks and the other to object recognition. Caching is an essential technique to improve throughput and latency in a vast variety of applications. The core idea is to duplicate content in memories distributed across the network, which can then be exploited to deliver requested content with less congestion and delay. In particular, it has been shown that the use of caching together with smart offloading strategies in a radio access network (RAN) composed of evolved NodeBs (eNBs), APs (e.g., WiFi), and UEs can significantly reduce the backhaul traffic and service latency. The traditional role of cache memories is to deliver the maximal amount of requested content locally rather than from a remote server. While this approach is optimal for single-cache systems, it has recently been shown to be, in general, significantly suboptimal for systems with multiple caches (i.e., cache networks), since it allows only an additive caching gain; instead, cache memories should be used to enable a multiplicative caching gain. Recent studies have shown that storing different portions of the content across the wireless network caches and capitalizing on the spatial reuse of device-to-device (D2D) communications, or exploiting globally cached information in order to multicast coded messages simultaneously useful to a large number of users, enables a global caching gain. We focus on the case of a single server (e.g., a base station) and multiple users, each of which caches segments of files from a finite library. Each user requests one (whole) file in the library and the server sends a common coded multicast message to satisfy all users at once. The problem consists of finding the smallest possible codeword length that satisfies such requests. To solve this problem we present two achievable caching and coded delivery schemes, and one correlation-aware caching scheme, each based on a heuristic polynomial-time coloring algorithm.
    Automatic object recognition has become, over the last decades, a central topic in artificial intelligence research, with a significant burst over the last few years with the advent of the deep learning paradigm. In this context, the objective of the work discussed in the last two chapters of this thesis is an attempt at improving the performance of a natural-image classifier by introducing into the loop knowledge coming from the real world, expressed in terms of probabilities of a set of spatial relations between the objects in the images. In other words, the framework presented in this work aims at integrating the output of standard classifiers on different image parts with some domain knowledge, encoded in a probabilistic ontology.
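The coloring-based delivery idea can be sketched briefly. In the toy Python fragment below (a simplified illustration with made-up packet names, not one of the thesis's actual schemes), two users' requests can share a single XOR-coded transmission when each user already caches what the other wants; greedily colouring the resulting conflict graph gives the number of multicast transmissions, a proxy for the codeword length being minimized.

```python
import itertools

def coded_multicast_length(requests, caches):
    """Greedy-colouring heuristic for coded delivery.

    requests: dict user -> requested packet id
    caches:   dict user -> set of packet ids stored at that user
    Requests in the same colour class can be XORed into one
    transmission, since every user in the class caches all the
    other packets it needs to cancel out.
    """
    def compatible(u, v):
        return requests[u] in caches[v] and requests[v] in caches[u]

    colour = {}
    for u in requests:
        # conflicting = already-coloured users we cannot share with
        used = {colour[v] for v in colour
                if requests[u] != requests[v] and not compatible(u, v)}
        colour[u] = next(c for c in itertools.count() if c not in used)
    return max(colour.values()) + 1  # number of multicast transmissions

requests = {1: 'A1', 2: 'B1', 3: 'C1'}                          # hypothetical demands
caches   = {1: {'B1', 'C1'}, 2: {'A1', 'C1'}, 3: {'A1', 'B1'}}  # hypothetical caches
print(coded_multicast_length(requests, caches))                  # -> 1 (one XOR serves all)
```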

    Overlapping and Robust Edge-Colored Clustering in Hypergraphs

    A recent trend in data mining has explored (hyper)graph clustering algorithms for data with categorical relationship types. Such algorithms have applications in the analysis of social, co-authorship, and protein interaction networks, to name a few. Many such applications naturally have some overlap between clusters, a nuance which is missing from current combinatorial models. Additionally, existing models lack a mechanism for handling noise in datasets. We address these concerns by generalizing Edge-Colored Clustering, a recent framework for categorical clustering of hypergraphs. Our generalizations allow for a budgeted number of either (a) overlapping cluster assignments or (b) node deletions. For each new model we present a greedy algorithm which approximately minimizes an edge mistake objective, as well as bicriteria approximations where the second approximation factor is on the budget. Additionally, we address the parameterized complexity of each problem, providing FPT algorithms and hardness results.
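As a point of reference for the edge mistake objective, here is a minimal Python sketch of the unbudgeted baseline (our illustration, not the paper's algorithm): each node receives the colour that appears most often among its incident hyperedges, and a "mistake" is any hyperedge containing a node of a different colour.

```python
from collections import Counter

def majority_ecc(hyperedges):
    """Baseline for Edge-Colored Clustering: give every node the colour
    that occurs most often among its incident hyperedges, then count
    mistakes (hyperedges with some node coloured differently).

    hyperedges: list of (colour, set_of_nodes) pairs.
    """
    votes = {}
    for colour, nodes in hyperedges:
        for v in nodes:
            votes.setdefault(v, Counter())[colour] += 1
    label = {v: c.most_common(1)[0][0] for v, c in votes.items()}
    mistakes = sum(1 for colour, nodes in hyperedges
                   if any(label[v] != colour for v in nodes))
    return label, mistakes

edges = [('red', {1, 2, 3}), ('red', {2, 3}), ('blue', {3, 4})]  # toy instance
label, mistakes = majority_ecc(edges)
print(label, mistakes)  # node 3 goes red, so the blue edge is a mistake
```

The generalizations in the paper then spend a budget, allowing node 3 above either a second (overlapping) colour or deletion, to remove such mistakes.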

    The role of graph entropy in fault localization and network evolution

    The design of a communication network has a critical impact on its effectiveness at delivering service to the users of a large-scale compute infrastructure. In particular, the reliability of such networks is increasingly vital in the modern world, as more and more of our commercial and social activity is conducted using digital platforms. Systems to assure service availability have existed since the emergence of mainframes, with the IBM System/360 in 1964, and although commercially widespread, the scientific understanding of them is not as deep as the problem warrants. The basic operating principle of most service assurance systems combines the gathering of status messages, which we term events, with algorithms to deduce from the events where potential failures may be occurring. The algorithms that identify which events are causal, known as root cause analysis or fault localization, usually rely upon a detailed understanding of the network structure in order to determine those events that are most helpful in diagnosing and remediating a service-threatening problem. The complex nature of root cause algorithms introduces scalability limits in terms of the number of events that can be processed per second. Unfortunately, as networks grow, the volume of events produced continues to increase, often dramatically. The dependence of root cause analysis algorithms on network structure presents a significant challenge as networks continue to grow in scale and complexity. As a consequence of this, and of the growing reliance upon networks as part of the key fabric of the modern economy, the commercial importance and the scale of the engineering challenges are increasing significantly.
    In this thesis I outline a novel approach to improving the scalability of event processing using a mathematical property of networks, graph entropy. In the first two papers described in this thesis, I apply an efficiently computable approximation of graph entropy to the problem of identifying important nodes in a network. In this context, importance is a measure of whether the failure of a node is likely to have a significant impact on the overall connectivity of the network, and therefore to lead to an interruption of service. I show that by ignoring events from unimportant network nodes it is possible to significantly reduce the event rate that a root cause algorithm needs to process. Further, I demonstrate that unimportant nodes produce very many events, but very few root causes. Consequently, although some events relating to root causes are missed, this is compensated for by the reduction in overall event rate. This leads to a significant reduction of the event-processing load on management systems, and therefore increases the effectiveness of current approaches to root cause analysis on large networks.
    Analysis of the topology data used in the first two papers revealed interesting anomalies in the degree distribution of the network nodes. This motivated the later focus of my research: investigating how graph entropy and network design considerations could be applied to the dynamical evolution of network structures, most commonly described using the preferential attachment model of Barabási and Albert. A common feature of a communication network is the presence of a constraint on the number of logical or physical connections a device can support.
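The filtering idea can be made concrete with a small sketch. The thesis's efficient entropy approximation is not reproduced here; the Python fragment below uses the Shannon entropy of the degree distribution as an illustrative stand-in, ranking nodes by how much their removal changes it, so that events from low-ranked nodes could be dropped before root cause analysis.

```python
import math

def degree_entropy(degrees):
    """Shannon entropy of the degree distribution (a cheap,
    illustrative stand-in for a graph entropy measure)."""
    total = sum(degrees)
    return -sum((d / total) * math.log2(d / total) for d in degrees if d)

def importance(adj, v):
    """Entropy change caused by removing node v and its edges."""
    before = degree_entropy([len(nbrs) for nbrs in adj.values()])
    after = degree_entropy([len(nbrs - {v})
                            for u, nbrs in adj.items() if u != v])
    return abs(before - after)

# hypothetical topology: a hub (node 0) plus a short chain
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3}}
ranked = sorted(adj, key=lambda v: importance(adj, v), reverse=True)
print(ranked)  # events from nodes late in this ranking could be ignored
```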
    In the last of the three papers in the thesis I develop and present a constrained model of network evolution, which demonstrates better quantitative agreement with real-world networks than the preferential attachment model. This model, developed using the continuum approach, still does not address a fundamental question of random networks as a model of network evolution: why should a node's degree influence the likelihood of it acquiring connections? In the same paper I attempt to answer that question by outlining a model that links vertex entropy to a node's attachment probability. The model successfully reproduces some of the characteristics of preferential attachment, and illustrates the potential for entropic arguments in network science. Put together, the two main bodies of work constitute a practical advance on the state of the art of fault localization, and a theoretical insight into the inner workings of dynamic networks. They open up a number of interesting avenues for further investigation.
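A degree-capped variant of preferential attachment is easy to simulate. The sketch below (an illustration of the constraint idea only, not the thesis's continuum model) grows a graph in which saturated nodes stop accepting links:

```python
import random

def constrained_ba(n, m, cap):
    """Barabasi-Albert-style growth with a hard degree cap: nodes
    that already have `cap` links no longer accept new ones."""
    # start from a small complete graph on m + 1 nodes
    adj = {i: {j for j in range(m + 1) if j != i} for i in range(m + 1)}
    for new in range(m + 1, n):
        adj[new] = set()
        # degree-weighted candidate pool, excluding saturated nodes
        pool = [v for v in adj if v != new and len(adj[v]) < cap
                for _ in range(len(adj[v]))]
        while len(adj[new]) < m and pool:
            v = random.choice(pool)
            adj[new].add(v)
            adj[v].add(new)
            pool = [u for u in pool if u != v]  # at most one link per pair
    return adj

g = constrained_ba(n=200, m=2, cap=10)
print(max(len(nbrs) for nbrs in g.values()))  # never exceeds the cap
```

Unlike pure preferential attachment, whose degree distribution has an unbounded tail, the cap truncates the distribution, which is one way such a model can agree better with real device-limited networks.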

    Advancements in multi-view processing for reconstruction, registration and visualization.

    The ever-increasing diffusion of digital cameras and the advancements in computer vision, image processing and storage capabilities have led, in recent years, to the wide diffusion of digital image collections. A set of digital images is usually referred to as a multi-view image set when the pictures cover different views of the same physical object or location. In multi-view datasets, correlations between images are exploited in many different ways to gather enhanced understanding and information of a scene. For example, a collection can be enhanced by leveraging the camera positions and orientations, or with information about the 3D structure of the scene. The range of applications of multi-view data is very wide, encompassing diverse fields such as image-based reconstruction, image-based localization, navigation of virtual environments, collective photographic retouching, computational photography, object recognition, etc. For all these reasons, the development of new algorithms to effectively create, process, and visualize this type of data is an active research trend. The thesis presents four advancements related to different aspects of multi-view data processing:
    - Image-based 3D reconstruction: we present a pre-processing algorithm, a special color-to-gray conversion, developed with the aim of improving the accuracy of image-based reconstruction algorithms. In particular, we show how different dense stereo matching results can be enhanced by applying a domain-separation approach that pre-computes a single optimized numerical value for each image location (a toy illustration of such a conversion is sketched after this abstract).
    - Image-based appearance reconstruction: we present a multi-view processing algorithm that can enhance the quality of the color transfer from multi-view images to a geo-referenced 3D model of a location of interest. The proposed approach computes virtual shadows and automatically segments shadowed regions in the input images, preventing those pixels from being used in subsequent texture synthesis.
    - 2D-to-3D registration: we present an unsupervised localization and registration system. The system can recognize a site framed in multi-view data and calibrate it against a pre-existing 3D representation. It has very high accuracy and can validate its results in a completely unsupervised manner; its accuracy is sufficient to view input images seamlessly and correctly superimposed on the 3D location of interest.
    - Visualization: we present PhotoCloud, a real-time client-server system for the interactive exploration of high-resolution 3D models and up to several thousand photographs aligned over this 3D data. PhotoCloud supports any 3D model that can be rendered in a depth-coherent way, together with arbitrary multi-view image collections. Moreover, it tolerates 2D-to-2D and 2D-to-3D misalignments, and provides scalable visualization of generic integrated 2D and 3D datasets by exploiting data duality. A set of effective 3D navigation controls, tightly integrated with innovative thumbnail bars, enhances user navigation.
    These advancements have been developed in tourism and cultural heritage application contexts, but they are not limited to these domains.
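The color-to-gray pre-processing in the first advancement can be illustrated with a toy sketch. The thesis's actual conversion is not reproduced here; the Python fragment below merely searches, per image, for the fixed channel weighting that maximizes grayscale variance, a crude stand-in for computing an optimized value at each image location before stereo matching.

```python
import numpy as np

def best_gray(rgb):
    """Pick, per image, the RGB channel weighting (a + b + c = 1)
    that maximises the variance of the resulting grayscale image --
    a toy stand-in for an optimised colour-to-gray conversion."""
    best, best_var = None, -1.0
    for a in np.linspace(0, 1, 11):
        for b in np.linspace(0, 1 - a, 11):
            c = 1 - a - b
            gray = a * rgb[..., 0] + b * rgb[..., 1] + c * rgb[..., 2]
            var = gray.var()
            if var > best_var:
                best, best_var = gray, var
    return best

rgb = np.random.rand(64, 64, 3)  # hypothetical input image
gray = best_gray(rgb)
print(gray.shape, gray.var())
```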