
    Using hypergraph theory to model coexistence management and coordinated spectrum allocation for heterogeneous wireless networks operating in shared spectrum

    Electromagnetic waves in the Radio Frequency (RF) spectrum are used to convey wireless transmissions from one radio antenna to another. The spectrum utilisation factor, which refers to how readily a given spectrum can be reused across space and time while maintaining an acceptable level of transmission errors, measures how efficiently a unit of frequency spectrum can be allocated to a specified number of users. The demand for wireless applications is increasing exponentially, hence there is a need for efficient management of the RF spectrum. However, spectrum usage studies have shown that the spectrum is under-utilised in space and time. A regulatory shift from static spectrum assignment to Dynamic Spectrum Access (DSA) is one way of addressing this. Licence exemption policy has also been advanced in DSA systems to spur wireless innovation and universal access to the internet. Furthermore, there is a shift from homogeneous to heterogeneous radio access and usage of the same spectrum band. These three shifts from traditional spectrum management have led to the challenge of coexistence among heterogeneous wireless networks that access the spectrum using DSA techniques. Cognitive radios are capable of spectrum agility based on spectrum conditions. However, in the presence of multiple heterogeneous networks and without spectrum coordination, switching between available channels to minimise interference and maximise spectrum allocation becomes a challenge. This thesis therefore focuses on the design of a framework for coexistence management and spectrum coordination, with the objective of maximising spectrum utilisation across geographical space and across time. The geographical coverage in which a frequency can be used is optimised through frequency reuse while ensuring that harmful interference is minimised. The time during which spectrum is occupied is increased through time-sharing of the same spectrum by two or more networks, while ensuring that spectrum is shared by networks that can coexist in the same spectrum and that the total channel load is not excessive, to prevent spectrum starvation. Conventionally, a graph is used to model relationships between entities, such as interference relationships among networks. However, the concept of an edge in a graph is not sufficient to model relationships that involve more than two entities, such as more than two networks that are able to share the same channel in the time domain, because an edge can only connect two entities. A hypergraph, on the other hand, is a generalisation of an undirected graph in which a hyperedge can connect more than two entities. Therefore, this thesis investigates the use of hypergraph theory to model the RF environment and the spectrum allocation scheme. The hypergraph model was applied to an algorithm for spectrum sharing among 100 heterogeneous wireless networks whose geo-locations were randomly and independently generated in a 50 km by 50 km area. Simulation results for spectrum utilisation performance show that the hypergraph-based model allocated channels, on average, to 8% more networks than the graph-based model. The results also show that, for the same RF environment, the hypergraph model requires up to 36% fewer channels than the graph model to achieve, on average, 100% operational networks. The running time of the hypergraph-based algorithm grows quadratically with the input size, the same as the graph-based algorithm; thus, the model achieved better performance at no additional time complexity.
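    The core idea, that a hyperedge can license a whole group of mutually compatible networks to time-share one channel where a plain graph edge can only relate two networks, can be illustrated with a short sketch. The following Python is a minimal illustration, not the thesis algorithm: the network loads, the `can_coexist` bound and the `interferes` relation are toy assumptions.

```python
# Minimal sketch (illustrative assumptions, not the thesis algorithm):
# greedily pack networks into hyperedges whose members can time-share
# one channel, then assign one channel per hyperedge.

networks = {"A": 0.3, "B": 0.4, "C": 0.2, "D": 0.5}   # network -> channel load

def can_coexist(group):
    """Placeholder: True if all networks in `group` tolerate time-sharing.
    The load bound prevents spectrum starvation on a shared channel."""
    return sum(networks[n] for n in group) <= 1.0

def interferes(n1, n2):
    """Placeholder: True if n1 and n2 cause harmful interference
    when operating on the same frequency."""
    return (n1, n2) in {("A", "B"), ("B", "A")}        # toy interference relation

# Build hyperedges greedily: add a network to the first group whose members
# it neither interferes with nor overloads; otherwise open a new group.
hyperedges = []
for n in networks:
    for group in hyperedges:
        if all(not interferes(n, m) for m in group) and can_coexist(group | {n}):
            group.add(n)
            break
    else:
        hyperedges.append({n})

# One channel per hyperedge: a hyperedge with k members reuses one channel
# k times, which is where the hypergraph model saves channels over a graph.
allocation = {ch: group for ch, group in enumerate(hyperedges, start=1)}
print(allocation)   # e.g. {1: {'A', 'C', 'D'}, 2: {'B'}}
```

    In this toy instance a pairwise (graph) model could not express that A, C and D may jointly share channel 1, which is the intuition behind the channel savings reported above.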

    Hypergraph Partitioning in the Cloud

    The thesis investigates the partitioning and load balancing problem, which has many applications in High Performance Computing (HPC). The application to be partitioned is described with a graph or hypergraph. The latter is of greater interest because hypergraphs, compared to graphs, have a more general structure and can be used to model more complex relationships between groups of objects, such as non-symmetric dependencies. Optimal graph and hypergraph partitioning is known to be NP-Hard, but good polynomial-time heuristic algorithms have been proposed. In this thesis, we propose two multi-level hypergraph partitioning algorithms based on rough set clustering techniques. The first, a serial algorithm, obtains high-quality partitionings and improves the partitioning cut by up to 71% compared to state-of-the-art serial hypergraph partitioning algorithms. Because the capacity of serial algorithms is limited by the rapid growth of problem sizes in distributed applications, we also propose a parallel hypergraph partitioning algorithm. Given the generality of the hypergraph model, designing a parallel algorithm is difficult, and the available parallel hypergraph algorithms offer less scalability than their graph counterparts. The issue is twofold: the parallel algorithm and the complexity of the hypergraph structure. Our parallel algorithm provides a trade-off between global and local vertex clustering decisions. By employing novel techniques and approaches, it achieves better scalability than the state-of-the-art parallel hypergraph partitioner in the Zoltan tool on a set of benchmarks, especially those with irregular structure. Furthermore, recent advances in cloud computing and the services it provides have led to a trend of moving HPC and large-scale distributed applications into the cloud. Despite its advantages, some aspects of the cloud, such as limited network resources, present a challenge to running communication-intensive applications and make them non-scalable in the cloud. While hypergraph partitioning is proposed as a solution for decreasing the communication overhead within parallel distributed applications, it can also offer advantages for running these applications in the cloud. The partitioning is usually done as a pre-processing step before running the parallel application. As parallel hypergraph partitioning is itself a communication-intensive operation, running it in the cloud is hard and suffers from poor scalability. The thesis therefore also investigates the scalability of parallel hypergraph partitioning algorithms in the cloud, the challenges they present, and proposes solutions to improve the cost/performance ratio of running the partitioning problem in the cloud. Our algorithms, known as FEHG and PFEHG, are implemented as a new hypergraph partitioning package within Zoltan, an open-source, Linux-based toolkit for parallel partitioning, load balancing and data management developed at Sandia National Labs.
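    The objective such a partitioner minimises can be stated compactly. Below is a minimal Python sketch of the standard (connectivity − 1) cut metric over a toy hypergraph; the hypergraph and partition are illustrative assumptions, and this is not Zoltan's API or the FEHG algorithm itself.

```python
# Hyperedges as vertex sets; a hyperedge may span any number of vertices,
# which is what lets hypergraphs capture group (non-pairwise) dependencies.
hyperedges = [{0, 1, 2}, {2, 3}, {1, 3, 4, 5}, {4, 5}]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}   # vertex -> block id

def connectivity_cut(hyperedges, part):
    """Sum over hyperedges of (number of blocks touched - 1).

    This models the communication volume a distributed application
    incurs when each block is placed on a different processor."""
    cut = 0
    for e in hyperedges:
        blocks = {part[v] for v in e}   # distinct blocks this hyperedge spans
        cut += len(blocks) - 1
    return cut

print(connectivity_cut(hyperedges, part))   # 2: {2,3} and {1,3,4,5} are cut
```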

    Recommender Systems

    The ongoing rapid expansion of the Internet greatly increases the necessity of effective recommender systems for filtering the abundant information. Extensive research on recommender systems is conducted by a broad range of communities, including social and computer scientists, physicists, and interdisciplinary researchers. Despite substantial theoretical and practical achievements, unification and comparison of different approaches are lacking, which impedes further advances. In this article, we review recent developments in recommender systems and discuss the major challenges. We compare and evaluate available algorithms and examine their roles in future developments. In addition to algorithms, physical aspects are described to illustrate macroscopic behavior of recommender systems. Potential impacts and future directions are discussed. We emphasize that recommendation has great scientific depth and combines diverse research fields, which makes it of interest to physicists as well as interdisciplinary researchers.
    Comment: 97 pages, 20 figures (to appear in Physics Reports)
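    As a concrete instance of the kind of algorithm such surveys compare, here is a minimal Python sketch of user-based collaborative filtering with cosine similarity; the rating data and names are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of user-based collaborative filtering (illustrative data).
import math

ratings = {                      # user -> {item: rating}
    "u1": {"a": 5, "b": 3, "c": 4},
    "u2": {"a": 4, "b": 2, "c": 5},
    "u3": {"b": 4, "c": 2, "d": 5},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = ratings[u].keys() & ratings[v].keys()
    if not common:
        return 0.0
    num = sum(ratings[u][i] * ratings[v][i] for i in common)
    den = (math.sqrt(sum(ratings[u][i] ** 2 for i in common))
           * math.sqrt(sum(ratings[v][i] ** 2 for i in common)))
    return num / den

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    peers = [(cosine(user, v), r[item]) for v, r in ratings.items()
             if v != user and item in r]
    norm = sum(abs(s) for s, _ in peers)
    return sum(s * r for s, r in peers) / norm if norm else None

print(predict("u1", "d"))        # u1's predicted rating for unseen item "d"
```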

    Proceedings of the ECCS 2005 satellite workshop: embracing complexity in design - Paris 17 November 2005

    Embracing complexity in design is one of the critical issues and challenges of the 21st century. As the realization grows that design activities and artefacts display properties associated with complex adaptive systems, so grows the need to use complexity concepts and methods to understand these properties and inform the design of better artefacts. It is a great challenge because complexity science represents an epistemological and methodological shift that promises a holistic approach to the understanding and operational support of design. But design is also a major contributor to complexity research. Design science is concerned with problems that are fundamental in the sciences in general and the complexity sciences in particular. For instance, design has been perceived and studied as a ubiquitous activity inherent in every human activity, as the art of generating hypotheses, as a type of experiment, or as a creative co-evolutionary process. Design science and its established approaches and practices can be a great source of advancement and innovation in complexity science. These proceedings are the result of a workshop organized as part of the activities of a UK government AHRB/EPSRC funded research cluster called Embracing Complexity in Design (www.complexityanddesign.net) and the European Conference in Complex Systems (complexsystems.lri.fr).

    Automatic Landmarking for Non-cooperative 3D Face Recognition

    This thesis describes a new framework for 3D surface landmarking and evaluates its performance for feature localisation on human faces. The framework has two main parts that can be designed and optimised independently. The first is a keypoint detection system that returns positions of interest for a given mesh surface using a learnt dictionary of local shapes. The second is a labelling system, using model-fitting approaches that establish a one-to-one correspondence between the set of unlabelled input points and a learnt representation of the class of object to detect. Our keypoint detection system returns local maxima over score maps that are generated from an arbitrarily large set of local shape descriptors. The distributions of these descriptors (scalars or histograms) are learnt for known landmark positions on a training dataset in order to generate a model. The similarity between the input descriptor value for a given vertex and a model shape is used as a descriptor-related score. Our labelling system can make use of both hypergraph matching techniques and rigid registration techniques to reduce the ambiguity attached to unlabelled input keypoints for which a list of model landmark candidates has been seeded. The soft matching techniques use multi-attributed hyperedges to reduce ambiguity, while the registration techniques use a scale-adapted rigid transformation computed from 3 or more points in order to obtain one-to-one correspondences. Our final system achieves results better than or comparable to the state of the art (depending on the metric) while being more generic: it does not require pre-processing such as cropping, spike removal and hole filling, and it is more robust to occlusion of salient local regions, such as those near the nose tip and inner eye corners. It is also fully pose-invariant and can be used with objects other than faces, provided that labelled training data is available.
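    The geometric core of the registration-based labelling step, a scale-adapted rigid transformation estimated from three or more point correspondences, can be sketched briefly. The following Umeyama-style estimate in Python is a minimal illustration on toy data, not the thesis implementation.

```python
# Minimal sketch: scale-adapted rigid (similarity) transform from 3+ point
# correspondences, Umeyama/Kabsch style. Toy data, not the thesis code.
import numpy as np

def similarity_transform(src, dst):
    """Return scale s, rotation R, translation t with dst ~= s * R @ src + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    X, Y = src - mu_s, dst - mu_d                  # centred point sets
    U, S, Vt = np.linalg.svd(Y.T @ X)              # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))             # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (X ** 2).sum()  # least-squares scale
    t = mu_d - s * R @ mu_s
    return s, R, t

# Seeded model landmarks vs. detected keypoints (3 correspondences suffice).
model = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
scene = [[1, 1, 0], [1, 3, 0], [-1, 1, 0]]         # model rotated 90°, scaled by 2
s, R, t = similarity_transform(model, scene)
print(round(s, 3))                                  # ~2.0
```

    Once such a transform is estimated from a few seeded candidates, projecting the remaining model landmarks into the scene disambiguates the rest of the labelling, which is the role this step plays alongside the hypergraph matching.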
