
    Applications of Soft Computing in Mobile and Wireless Communications

    Soft computing is a synergistic combination of artificial intelligence methodologies for modelling and solving real-world problems that are either impossible or too difficult to model mathematically. Conventional modelling techniques demand rigour, precision and certainty, which carry a computational cost; soft computing, on the other hand, uses computation, reasoning and inference to reduce that cost by exploiting tolerance for imprecision, uncertainty, partial truth and approximation. In addition to these computational savings, soft computing is an excellent platform for autonomic computing, owing to its roots in artificial intelligence. Wireless communication networks involve considerable uncertainty and imprecision due to a number of stochastic processes, such as an escalating number of access points, constantly changing propagation channels, sudden variations in network load and the random mobility of users. This reality has fuelled numerous applications of soft computing techniques in mobile and wireless communications. This paper reviews applications of the core soft computing methodologies in mobile and wireless communications.
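
    To make the idea of exploiting tolerance for imprecision concrete, the sketch below (an illustration of ours, not drawn from the paper) replaces a hard signal-strength threshold with a fuzzy membership grade, the kind of soft decision used in fuzzy handover schemes; the RSRP breakpoints are assumptions.

```python
# Minimal fuzzy-membership sketch (illustrative assumptions only):
# a crisp RSRP measurement in dBm is mapped to a degree of membership
# in the fuzzy set "good signal", instead of a hard threshold test.

def good_signal(rsrp_dbm: float, low: float = -110.0, high: float = -80.0) -> float:
    """Piecewise-linear membership: 0 below `low`, 1 above `high`."""
    if rsrp_dbm <= low:
        return 0.0
    if rsrp_dbm >= high:
        return 1.0
    return (rsrp_dbm - low) / (high - low)

if __name__ == "__main__":
    for rsrp in (-115.0, -100.0, -85.0, -75.0):
        print(f"RSRP {rsrp:6.1f} dBm -> membership {good_signal(rsrp):.2f}")
```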

    A Planning and Optimization Framework for Hybrid Ultra-Dense Network Topologies

    The deployment of small cells has been a critical upgrade in Fourth Generation (4G) mobile networks, as they offload macrocell traffic, improve spectrum reuse and reduce coverage holes. The need for small cells will be even more critical in Fifth Generation (5G) networks due to the introduction of higher spectrum bands, which necessitate denser network deployments to support larger traffic volumes per unit area. A network densification scenario envisioned for evolved fourth and fifth generation networks is the deployment of Ultra-Dense Networks (UDNs) with small cell site densities exceeding 90 sites/km² (or inter-site distances of less than 112 m). The careful planning and optimization of ultra-dense network topologies is known to significantly improve the achievable performance compared to completely random (unplanned) ultra-dense network deployments by various third-party stakeholders (e.g. home owners). However, such well-planned and optimized ultra-dense network deployments are difficult to realize in practice due to various constraints, such as limited or no access to the preferred optimum small cell site locations in a given service area. Hybrid ultra-dense network topologies provide an interesting trade-off, whereby an ultra-dense network combines operator-optimized small cell deployments with complementary random small cell deployments by third parties. In this study, an ultra-dense network multiobjective optimization framework and a post-deployment power optimization approach are developed for the realization and performance comparison of random, optimized and hybrid ultra-dense network topologies in a realistic urban case study area. The results of the case study demonstrate how simple transmit power optimization enables hybrid ultra-dense network topologies to achieve performance almost comparable to optimized topologies whilst also providing the convenience benefits of random small cell deployments.
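
    The quoted relation between site density and inter-site distance can be sanity-checked under a simple layout assumption. The sketch below is ours, not the paper's framework: it assumes an ideal hexagonal layout, under which 90 sites/km² gives an inter-site distance of roughly 113 m, close to the ~112 m boundary cited above.

```python
# Rough check of the density/ISD relation quoted in the abstract, assuming an
# ideal hexagonal layout where each site covers an area of (sqrt(3)/2) * ISD^2.
# The layout assumption is ours, not the paper's.
import math

def inter_site_distance_m(sites_per_km2: float) -> float:
    sites_per_m2 = sites_per_km2 / 1e6
    area_per_site_m2 = 1.0 / sites_per_m2
    return math.sqrt(2.0 * area_per_site_m2 / math.sqrt(3.0))

if __name__ == "__main__":
    print(f"{inter_site_distance_m(90):.0f} m")  # ~113 m for 90 sites/km^2
```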

    Intercell interference mitigation in long term evolution (LTE) and LTE-advanced

    University of Technology Sydney, Faculty of Engineering and Information Technology. Bandwidth is one of the limited resources in Long Term Evolution (LTE) and LTE-Advanced (LTE-A) networks. Therefore, resource allocation techniques such as frequency reuse are needed to increase capacity in LTE and LTE-A. However, using the same frequency in adjacent cells severely degrades system performance because it increases intercell interference, so intercell interference management is critical to improving the performance of cellular mobile networks. This thesis aims to mitigate intercell interference in downlink LTE and LTE-A networks. The first part of the thesis introduces a new intercell interference coordination scheme that mitigates downlink intercell interference in a macrocell-macrocell scenario based on user priority, using a fuzzy logic system (FLS). An FLS is an expert system that maps inputs to outputs using “IF...THEN” rules and an aggregation method; the final output is then obtained through a defuzzification step. Since the thesis targets interference in downlink LTE networks, the FLS inputs are selected from important metrics such as throughput and signal-to-interference-plus-noise ratio. Simulation results demonstrate the efficacy of the proposed scheme in improving cell throughput, cell-edge throughput and delay compared with a frequency reuse factor of one. Thereafter, heterogeneous networks (HetNets), which are used to increase system coverage and capacity, are studied. The next part of the thesis focuses on the picocell, one of the important low-power nodes in HetNets, which can efficiently improve overall system capacity and coverage. However, new intercell interference management challenges arise in the macrocell-picocell scenario. Three enhanced intercell interference coordination (eICIC) schemes are proposed in this thesis to mitigate the interference problem. In the first scheme, a dynamic cell range expansion (CRE) approach is combined with dynamic almost blank subframes (ABSs) using a fuzzy logic system. In the second scheme, a fuzzy Q-learning (FQL) approach is used to find the optimum ABS and CRE offset values for both full-buffer and video streaming traffic. In FQL, the FLS is combined with a Q-learning approach to select the best consequent part of each FLS rule. In the third proposed eICIC scheme, the best location of ABSs in each frame is determined using a genetic algorithm such that the requirements of video streaming traffic can be met. Simulation results show that the system performance can be improved through the proposed schemes. Finally, the optimum CRE offset value and the required number of ABSs are mathematically formulated based on the outage probability, ergodic rate and minimum required throughput of users using stochastic geometry tools. The result is an analytical formula that provides a good initial estimate and a simple means of analysing the impact of system parameters on the CRE offset value and the number of ABSs.
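
    As a minimal sketch of the FLS structure described above (IF...THEN rules, aggregation, then defuzzification), the following toy example computes a scheduling priority from SINR and throughput; the membership functions, rule base and output levels are illustrative assumptions and are not taken from the thesis.

```python
# Minimal fuzzy-logic-system sketch: IF...THEN rules over fuzzified inputs,
# aggregated and defuzzified by a weighted average (zero-order Sugeno style).
# Membership functions, rules and output levels are illustrative assumptions.

def low(x, lo, hi):   # membership in "low": 1 below lo, 0 above hi
    return max(0.0, min(1.0, (hi - x) / (hi - lo)))

def high(x, lo, hi):  # membership in "high": 0 below lo, 1 above hi
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def priority(sinr_db: float, throughput_mbps: float) -> float:
    """Scheduling priority in [0, 1] from two fuzzy IF...THEN rules."""
    # Rule 1: IF SINR is low AND throughput is low THEN priority is high (0.9)
    w1 = min(low(sinr_db, 0.0, 15.0), low(throughput_mbps, 1.0, 5.0))
    # Rule 2: IF SINR is high AND throughput is high THEN priority is low (0.1)
    w2 = min(high(sinr_db, 0.0, 15.0), high(throughput_mbps, 1.0, 5.0))
    if w1 + w2 == 0.0:
        return 0.5                                 # no rule fires: neutral priority
    return (w1 * 0.9 + w2 * 0.1) / (w1 + w2)       # weighted-average defuzzification

if __name__ == "__main__":
    print(priority(sinr_db=3.0, throughput_mbps=1.5))   # cell-edge user: high priority
    print(priority(sinr_db=20.0, throughput_mbps=8.0))  # cell-centre user: low priority
```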

    A survey of machine learning techniques applied to self organizing cellular networks

    This paper surveys the literature of the past fifteen years on Machine Learning (ML) algorithms applied to self-organizing cellular networks. For future networks to overcome current limitations and address the issues of today's cellular systems, it is clear that more intelligence needs to be deployed so that a fully autonomous and flexible network can be enabled. The paper focuses on the learning perspective of Self Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also a classification of each paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, the most commonly found ML algorithms are compared in terms of certain SON metrics, and general guidelines on when to choose each ML algorithm for each SON function are proposed. Lastly, this work also discusses future research directions and the new paradigms that more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.
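
    As a toy instance of one of the ML techniques most commonly encountered in the SON literature, the sketch below runs tabular Q-learning on a hypothetical one-parameter tuning task; the states, actions and reward are invented for illustration and do not correspond to any specific SON function surveyed in the paper.

```python
# Toy tabular Q-learning loop of the kind surveyed for SON parameter tuning.
# States, actions and the reward function are hypothetical illustrations.
import random

ACTIONS = (-1, 0, +1)          # decrease / keep / increase a tuning parameter
N_STATES = 5                   # e.g. discretised load levels
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Hypothetical environment: reward peaks at the middle state."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = -abs(next_state - N_STATES // 2)
    return next_state, reward

state = 0
for _ in range(2000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)                       # explore
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])    # exploit
    next_state, reward = step(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state

# Learned greedy action per state (should steer toward the middle state).
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```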

    A proposal of a minimal-state processing search algorithm for isochronous channel reuse problems in DQDB networks

    The IEEE 802.6 MAC standard protocol defines the distributed-queue dual bus (DQDB) for metropolitan area networks (MANs). The isochronous channel reuse problem (ICRP) has been studied for the efficient use of DQDB. Given a set of established connections and a set of connection requests, the goal of ICRP is to maximize the number of satisfied requests by finding a channel assignment such that no established connection is reassigned a different channel and no pair of active connections interferes with each other. We propose a minimal-state processing search algorithm for ICRP (MIPS_ICRP). The simulation results show that MIPS_ICRP always provides near-optimum solutions.
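
    To illustrate the ICRP constraints (established connections keep their channels; interfering active connections must not share one), here is a naive greedy baseline of ours; it is not the MIPS_ICRP heuristic itself.

```python
# Greedy illustration of the isochronous channel reuse problem (ICRP):
# established connections keep their channels, and each request is granted a
# channel only if no conflicting active connection already uses that channel.

def greedy_icrp(channels, established, requests, conflicts):
    """
    channels    : iterable of channel identifiers
    established : dict connection_id -> channel (fixed, never reassigned)
    requests    : list of new connection ids
    conflicts   : set of frozenset({a, b}) pairs that interfere (e.g. overlapping bus segments)
    """
    assignment = dict(established)
    satisfied = []
    for req in requests:
        for ch in channels:
            clash = any(assignment.get(other) == ch and frozenset({req, other}) in conflicts
                        for other in assignment)
            if not clash:
                assignment[req] = ch
                satisfied.append(req)
                break
    return assignment, satisfied

if __name__ == "__main__":
    established = {"c1": 0}
    conflicts = {frozenset({"c1", "r1"})}          # r1 overlaps c1 on the bus
    print(greedy_icrp(range(2), established, ["r1", "r2"], conflicts))
```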

    Optimal channel assignment and power control in wireless cellular networks

    Wireless mobile communication is a fast-growing field in the telecommunications industry. In a wireless cellular network, channel assignment is the mechanism that assigns channels to mobile users in order to establish communication between a mobile terminal and a base station. It is important to determine an optimal allocation of channels that makes effective use of the channels and minimizes the call-blocking and call-dropping probabilities. Another important issue, power control, is the problem of determining an optimal allocation of power levels to transmitters such that power consumption is minimized while signal quality is maintained. In wireless mobile networks, channels and transmitter powers are limited resources, so efficient utilization of both can significantly increase network capacity. In this thesis, we solve these optimization problems with the hybrid channel assignment (HCA) method using integer linear programming (ILP). Two novel sets of ILP formulations are proposed for two different cases: reuse-distance-based HCA without power control, and carrier-to-interference-ratio-based HCA combined with power control. For each of them, our experimental results show an improvement over several other approaches.
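
    A minimal sketch of a reuse-distance-based channel assignment expressed as an ILP is shown below, in the spirit of the formulations described above; the cell set, demands, reuse-distance conflicts and objective are illustrative assumptions, and the PuLP package is used only as a convenient solver front end.

```python
# Sketch of a reuse-distance-based channel assignment as a small ILP.
# The specific objective, demands and conflict pairs are illustrative.
# Requires the PuLP package: pip install pulp
import pulp

cells = [0, 1, 2]
channels = list(range(4))
demand = {0: 2, 1: 2, 2: 2}                 # channels requested per cell (assumed)
too_close = {(0, 1), (1, 2)}                # cell pairs closer than the reuse distance

prob = pulp.LpProblem("hybrid_channel_assignment", pulp.LpMaximize)
x = {(c, k): pulp.LpVariable(f"x_{c}_{k}", cat="Binary") for c in cells for k in channels}

# Objective: maximise the number of assigned channels (i.e. minimise blocking).
prob += pulp.lpSum(x.values())

# Each cell receives at most its demand.
for c in cells:
    prob += pulp.lpSum(x[(c, k)] for k in channels) <= demand[c]

# Co-channel reuse is forbidden between cells violating the reuse distance.
for (i, j) in too_close:
    for k in channels:
        prob += x[(i, k)] + x[(j, k)] <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for c in cells:
    print(c, [k for k in channels if x[(c, k)].value() == 1])
```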

    Multiuser MIMO-OFDM for Next-Generation Wireless Systems

    This overview portrays the 40-year evolution of orthogonal frequency division multiplexing (OFDM) research. The amalgamation of powerful multicarrier OFDM arrangements with multiple-input multiple-output (MIMO) systems has numerous benefits, which are detailed in this treatise. We continue by highlighting the limitations of conventional detection and channel estimation techniques designed for multiuser MIMO-OFDM systems in the so-called rank-deficient scenarios, where the number of users supported or the number of transmit antennas employed exceeds the number of receiver antennas. This is often encountered in practice, unless we limit the number of users granted access in the base station’s or radio port’s coverage area. Following a historical perspective on the associated design problems and their state-of-the-art solutions, the second half of this treatise details a range of classic multiuser detectors (MUDs) designed for MIMO-OFDM systems and characterizes their achievable performance. A further section identifies novel cutting-edge genetic algorithm (GA)-aided detector solutions, which have found numerous applications in wireless communications in recent years. In an effort to stimulate the cross-pollination of ideas across the machine learning, optimization, signal processing, and wireless communications research communities, we review the broadly applicable principles of various GA-assisted optimization techniques, which were recently proposed also for employment in multiuser MIMO-OFDM. In order to stimulate new research, we demonstrate that the family of GA-aided MUDs is capable of achieving near-optimum performance at a significantly lower computational complexity than that imposed by their optimum maximum-likelihood (ML) MUD counterparts. The paper concludes by outlining a range of future research options that may find their way into next-generation wireless systems.
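
    The following toy sketch shows how a GA can search the discrete symbol space that an optimum ML MUD would enumerate exhaustively, minimising the same Euclidean metric ||y - Hx||² for BPSK users in a rank-deficient setting; the GA operators, parameters and channel model are illustrative assumptions, not those of the detectors reviewed in the paper.

```python
# Toy GA-aided multiuser detection sketch: the GA searches over BPSK symbol
# vectors x in {-1,+1}^K to minimise ||y - Hx||^2, the metric an optimum ML
# detector evaluates exhaustively.  All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
K, N = 6, 4                                   # users, receive antennas (rank-deficient: K > N)
H = rng.standard_normal((N, K))               # assumed flat-fading channel matrix
x_true = rng.choice([-1.0, 1.0], size=K)
y = H @ x_true + 0.1 * rng.standard_normal(N)

def cost(x):
    return np.sum((y - H @ x) ** 2)

pop = rng.choice([-1.0, 1.0], size=(40, K))   # initial population of symbol vectors
for _ in range(60):
    costs = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(costs)[:20]]     # truncation selection
    cut = rng.integers(1, K, size=20)         # single-point crossover positions
    children = np.array([np.concatenate((parents[i][:cut[i]],
                                          parents[(i + 1) % 20][cut[i]:]))
                         for i in range(20)])
    flip = rng.random(children.shape) < 0.05  # mutation: flip a BPSK symbol
    children[flip] *= -1.0
    pop = np.vstack((parents, children))

best = pop[np.argmin([cost(ind) for ind in pop])]
print("detected:", best, "true:", x_true)
```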

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of inferring misuse by correlating individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is reviewed, covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems so that learning, generalisation and adaptation are more readily facilitated.
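
    As a minimal sketch of the rule-based event correlation discussed above, the following toy correlator flags possible misuse when two related events for the same subscriber fall within a time window; the event types, rule and threshold are invented for illustration and are not drawn from the report.

```python
# Minimal rule-based event-correlation sketch: a misuse alarm is raised when
# two related events for the same subscriber fall within a short time window.
from collections import defaultdict

WINDOW_S = 60.0  # correlation window in seconds (assumed)

def correlate(events):
    """events: iterable of (timestamp_s, subscriber_id, event_type), time-ordered."""
    recent_failures = defaultdict(list)
    alarms = []
    for t, sub, etype in events:
        if etype == "auth_failure":
            recent_failures[sub].append(t)
        elif etype == "high_value_call":
            # Rule: IF an auth failure preceded a high-value call within WINDOW_S
            #       for the same subscriber THEN flag possible misuse.
            if any(t - tf <= WINDOW_S for tf in recent_failures[sub]):
                alarms.append((t, sub))
    return alarms

if __name__ == "__main__":
    stream = [(0.0, "A", "auth_failure"),
              (30.0, "A", "high_value_call"),   # correlated -> alarm
              (500.0, "B", "high_value_call")]  # no preceding failure -> no alarm
    print(correlate(stream))
```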