
    Non-linear echo cancellation - a Bayesian approach

    The echo cancellation literature is reviewed; a Bayesian model is then introduced, and it is shown how it can be used to model and fit nonlinear channels. An algorithm for cancellation of echo over a nonlinear channel is developed and tested. It is shown that this nonlinear algorithm converges for both linear and nonlinear channels and is superior to linear echo cancellation for canceling an echo through a nonlinear echo-path channel.
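    The paper's Bayesian model is not reproduced in this abstract, so the sketch below is only a generic illustration of nonlinear echo cancellation: a second-order polynomial (Volterra-style) expansion of the far-end signal with an NLMS weight update. All function names, parameters, and the toy echo path are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def nonlinear_echo_canceller(far_end, mic, taps=8, mu=0.1, eps=1e-6):
    """Illustrative nonlinear echo canceller (not the paper's Bayesian method).

    Expands each far-end tap with its square (a minimal 2nd-order Volterra
    expansion) and adapts the weights with NLMS to cancel the echo in `mic`.
    """
    n = len(mic)
    w = np.zeros(2 * taps)             # weights for linear + squared taps
    out = np.zeros(n)                  # echo-cancelled (residual) signal
    for i in range(taps, n):
        x = far_end[i - taps:i][::-1]          # most recent far-end samples
        phi = np.concatenate([x, x ** 2])      # nonlinear feature vector
        echo_hat = w @ phi                     # estimated echo
        e = mic[i] - echo_hat                  # residual after cancellation
        w += mu * e * phi / (phi @ phi + eps)  # NLMS weight update
        out[i] = e
    return out

# Toy usage: the echo path is a mild memoryless nonlinearity of the far-end signal.
rng = np.random.default_rng(0)
far = rng.standard_normal(4000)
echo = 0.5 * np.roll(far, 3) + 0.2 * np.roll(far, 3) ** 2
near = echo + 0.01 * rng.standard_normal(4000)
residual = nonlinear_echo_canceller(far, near)
print("echo power before/after:", np.mean(near ** 2), np.mean(residual[500:] ** 2))
```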

    Content Placement in Cache-Enabled Sub-6 GHz and Millimeter-Wave Multi-antenna Dense Small Cell Networks

    This paper studies the performance of cache-enabled dense small cell networks consisting of multi-antenna sub-6 GHz and millimeter-wave (mmWave) base stations. In contrast to existing works, which consider only a single antenna at each base station, the optimal content placement is unknown when the base stations have multiple antennas. We first derive the successful content delivery probability by accounting for the key channel features at sub-6 GHz and mmWave frequencies. The maximization of the successful content delivery probability is a challenging problem. To tackle it, we first propose a constrained cross-entropy algorithm which achieves a near-optimal solution with moderate complexity. We then develop another simple yet effective heuristic probabilistic content placement scheme, termed the two-stair algorithm, which strikes a balance between caching the most popular contents and achieving content diversity. Numerical results demonstrate the superior performance of the constrained cross-entropy method and show that the two-stair algorithm yields significantly better performance than caching only the most popular contents. The comparisons between the sub-6 GHz and mmWave systems reveal an interesting tradeoff between caching capacity and density for the mmWave system to achieve performance similar to that of the sub-6 GHz system. Comment: 14 pages; accepted to appear in IEEE Transactions on Wireless Communications.
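    The abstract does not specify the two-stair placement rule, so the following is only a hedged sketch of what a two-level ("two-stair") probabilistic placement over popularity-ranked contents might look like: a high caching probability for the most popular contents and a uniform residual probability for the rest, constrained so that the expected cache occupancy equals the cache size. The function name, parameters, and Zipf popularity model are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def two_level_placement(num_contents, cache_size, split, p_high):
    """Hypothetical two-level ("two-stair") caching probability profile.

    Contents are assumed sorted by popularity. The first `split` contents are
    cached with probability `p_high`; the remaining probability budget is
    spread uniformly over the rest so that the expected number of cached
    contents equals `cache_size` (a probabilistic placement constraint).
    """
    probs = np.zeros(num_contents)
    probs[:split] = p_high
    leftover = cache_size - p_high * split        # remaining expected slots
    if leftover < 0 or split > num_contents:
        raise ValueError("infeasible stair parameters")
    probs[split:] = min(1.0, leftover / max(num_contents - split, 1))
    return probs

# Illustrative check: expected cache occupancy matches the cache size, while
# less popular contents still get a nonzero caching probability (diversity).
N, C = 100, 10
zipf = 1.0 / np.arange(1, N + 1) ** 0.8
zipf /= zipf.sum()                                # toy Zipf popularity profile
probs = two_level_placement(N, C, split=5, p_high=1.0)
print("expected cached contents:", probs.sum())
print("hit probability of a random request:", float(zipf @ probs))
```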

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures.

    Efficient Approximation Algorithms for Multi-Antennae Largest Weight Data Retrieval

    In a mobile network, wireless data broadcast over $m$ channels (frequencies) is a powerful means for distributed dissemination of data to clients who access the channels through multiple antennae equipped on their mobile devices. The $\delta$-antennae largest weight data retrieval ($\delta$ALWDR) problem is to compute a schedule for downloading a subset of data items that has maximum total weight using $\delta$ antennae in a given time interval. In this paper, we propose a ratio $1-\frac{1}{e}-\epsilon$ approximation algorithm for the $\delta$ALWDR problem that has the same ratio as the known result but a significantly improved time complexity of $O(2^{\frac{1}{\epsilon}}\frac{1}{\epsilon}m^{7}T^{3.5}L)$, down from $O(\epsilon^{3.5}m^{\frac{3.5}{\epsilon}}T^{3.5}L)$ when $\delta=1$ \cite{lu2014data}. To our knowledge, our algorithm represents the first ratio $1-\frac{1}{e}-\epsilon$ approximation solution to $\delta$ALWDR for the general case of arbitrary $\delta$. To achieve this, we first give a ratio $1-\frac{1}{e}$ algorithm for the $\gamma$-separated $\delta$ALWDR ($\delta$A$\gamma$LWDR) with runtime $O(m^{7}T^{3.5}L)$, under the assumption that every data item appears at most once in each segment of $\delta$A$\gamma$LWDR, for any input of maximum length $L$ on $m$ channels in $T$ time slots. Then, we show that we can retain the same ratio for $\delta$A$\gamma$LWDR without this assumption at the cost of increasing the time complexity to $O(2^{\gamma}m^{7}T^{3.5}L)$. This result immediately yields an approximation solution of the same ratio and time complexity for $\delta$ALWDR, a significant improvement on the known time complexity of the ratio $1-\frac{1}{e}-\epsilon$ approximation to the problem.
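    The paper's $1-\frac{1}{e}-\epsilon$ approximation algorithm is not described in the abstract; purely to fix ideas about the underlying scheduling model (data items broadcast on channels over time slots, with $\delta$ antennae that can each tune to one channel at a time), here is a simple weight-greedy heuristic with no approximation guarantee. The tuple representation of broadcast occurrences is an assumption for illustration.

```python
from typing import List, Tuple

# Each occurrence: (item_id, channel, start_slot, end_slot, weight); an antenna
# must stay tuned to `channel` for slots start..end (inclusive) to retrieve it.
Occurrence = Tuple[int, int, int, int, float]

def greedy_retrieval(occurrences: List[Occurrence], num_antennae: int):
    """Weight-greedy heuristic (no approximation guarantee, unlike the paper).

    Picks occurrences in order of decreasing weight, assigning each one to the
    first antenna that is idle during the occurrence's time interval and has
    not already downloaded that item.
    """
    busy = [[] for _ in range(num_antennae)]   # per-antenna list of (start, end)
    downloaded = set()
    schedule, total = [], 0.0
    for item, ch, s, e, w in sorted(occurrences, key=lambda o: -o[4]):
        if item in downloaded:
            continue
        for a in range(num_antennae):
            if all(e < bs or s > be for bs, be in busy[a]):   # interval is free
                busy[a].append((s, e))
                downloaded.add(item)
                schedule.append((a, item, ch, s, e))
                total += w
                break
    return schedule, total

# Toy instance with 2 antennae and overlapping broadcasts on 3 channels.
occs = [(1, 0, 0, 2, 5.0), (2, 1, 1, 3, 4.0), (3, 2, 0, 1, 3.0), (1, 2, 4, 5, 5.0)]
print(greedy_retrieval(occs, num_antennae=2))
```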

    Resource Management in Multi-Access Edge Computing (MEC)

    This PhD thesis investigates effective ways of managing the resources of a Multi-Access Edge Computing (MEC) platform in 5th Generation Mobile Communication (5G) networks. The main characteristics of MEC include its distributed nature, proximity to users, and high availability, and the proposed solutions for effective resource management build on these key features. This research addresses two aspects of resource management in MEC: the computational resources and the caching resources, which correspond to the services provided by the MEC.

MEC is a new 5G enabling technology proposed to reduce latency by bringing cloud computing capability closer to end-user Internet of Things (IoT) and mobile devices. MEC would support latency-critical user applications such as driverless cars and e-health, which will depend on resources and services provided by the MEC. However, MEC has limited computational and storage resources compared to the cloud. It is therefore important to ensure reliable MEC network communication during resource provisioning by eliminating the chance of deadlock, which may occur when a huge number of devices contend for a limited amount of resources and adequate measures are not in place. Avoiding deadlock while scheduling and provisioning resources on MEC is crucial to achieving a highly reliable and readily available system that supports latency-critical applications. In this research, a deadlock-avoidance resource provisioning algorithm has been proposed for industrial IoT devices using MEC platforms to ensure higher reliability of network interactions. The proposed scheme incorporates Banker's resource-request algorithm and uses Software Defined Networking (SDN) to reduce communication overhead. Simulation and experimental results have shown that system deadlock can be prevented by applying the proposed algorithm, which ultimately leads to more reliable network interactions between mobile stations and MEC platforms.

Additionally, this research explores the use of MEC as a caching platform, as caching is proclaimed a key technology for reducing service processing delays in 5G networks. Caching on MEC decreases service latency and improves data access by allowing direct content delivery through the edge without fetching data from a remote server. Caching on MEC is also deemed an effective approach that guarantees greater reachability due to proximity to end-users. In this regard, a novel hybrid content caching algorithm has been proposed for MEC platforms to increase their caching efficiency. The proposed algorithm is a unification of a modified Belady's algorithm and a distributed cooperative caching algorithm to improve data access while reducing latency. A polynomial fit algorithm with Lagrange interpolation is employed to predict future request references for Belady's algorithm. Experimental results show that the proposed algorithm obtains 4% more cache hits than the case-study algorithms due to its selective caching approach. Results also show that the use of a cooperative algorithm can improve the total cache hits by up to 80%.

Furthermore, this thesis has explored another predictive caching scheme to further improve caching efficiency; the motivation was to investigate a second predictive caching approach as an improvement on the former. As a result, a Predictive Collaborative Replacement (PCR) caching framework has been proposed, consisting of three schemes, each of which addresses a particular problem. The proactive predictive scheme addresses the problem of continuous change in cache popularity trends, the collaborative scheme addresses the problem of cache redundancy in the collaborative space, and the replacement scheme evicts cold cache blocks to increase the hit ratio. Simulation experiments have shown that the replacement scheme achieves 3% more cache hits than existing replacement algorithms such as Least Recently Used, Multi Queue, and Frequency-Based Replacement. The PCR framework has been tested on a real dataset (the MovieLens 20M dataset) and compared with an existing contemporary predictive algorithm; results show that PCR performs better, with a 25% increase in hit ratio and a 10% CPU utilization overhead.
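    The thesis states that its provisioning scheme incorporates Banker's resource-request algorithm; as a reminder of how the classic safety check behind that algorithm works (independent of the SDN-based scheme proposed in the thesis), here is a minimal sketch. The example resource matrices are illustrative.

```python
import numpy as np

def is_safe(available, allocation, maximum):
    """Classic Banker's safety check: True if some order exists in which every
    process can obtain its maximum demand and then release its resources."""
    need = maximum - allocation
    work = available.copy()
    finished = np.zeros(len(allocation), dtype=bool)
    while True:
        progressed = False
        for p in range(len(allocation)):
            if not finished[p] and np.all(need[p] <= work):
                work += allocation[p]      # process p runs to completion and releases
                finished[p] = True
                progressed = True
        if not progressed:
            return bool(finished.all())

def request_resources(available, allocation, maximum, p, request):
    """Grant `request` for process p only if the resulting state stays safe."""
    if np.any(request > maximum[p] - allocation[p]) or np.any(request > available):
        return False
    trial_avail = available - request
    trial_alloc = allocation.copy()
    trial_alloc[p] = trial_alloc[p] + request
    return is_safe(trial_avail, trial_alloc, maximum)

# Illustrative 3-process, 2-resource-type state (e.g., compute and storage units).
available  = np.array([3, 3])
allocation = np.array([[1, 0], [2, 1], [1, 1]])
maximum    = np.array([[4, 2], [3, 2], [2, 2]])
print(is_safe(available, allocation, maximum))
print(request_resources(available, allocation, maximum, p=1, request=np.array([1, 0])))
```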

    Advanced Protocols for Peer-to-Peer Data Transmission in Wireless Gigabit Networks

    This thesis tackles problems at the IEEE 802.11 MAC layer, the network layer, and the application layer to further push the performance of wireless P2P applications in a holistic way. It contributes to a better understanding and utilization of two major IEEE 802.11 MAC features, frame aggregation and block acknowledgement, as well as to the design and implementation of opportunistic networks on off-the-shelf hardware, and it proposes a document exchange protocol that includes document recommendation. First, this thesis contributes a measurement study of the A-MPDU frame aggregation behavior of IEEE 802.11n in a real-world, multi-hop, indoor mesh testbed. Furthermore, this thesis presents MPDU payload adaptation (MPA) to utilize A-MPDU subframes to increase the overall throughput under bad channel conditions. MPA adapts the size of MAC protocol data units to the channel conditions, increasing throughput and lowering delay in error-prone channels. The results suggest that under erroneous conditions the throughput can be maximized by limiting the MPDU size. As the second major contribution, this thesis introduces Neighborhood-aware OPPortunistic networking on Smartphones (NOPPoS). NOPPoS creates an opportunistic, pocket-switched network using current-generation, off-the-shelf mobile devices. As its main novel feature, NOPPoS is highly responsive to node mobility due to periodic, low-energy scans of its environment using Bluetooth Low Energy advertisements. The last major contribution is the Neighborhood Document Sharing (NDS) protocol. NDS enables users to discover and retrieve arbitrary documents shared by other users in their proximity, i.e., in the communication range of their IEEE 802.11 interface. IEEE 802.11 connections are only used on demand, during file transfers and the indexing of files in the proximity of the user. Simulations show that NDS interconnects over 90% of all devices in communication range. Finally, NDS is extended by the content recommendation system User Preference-based Probability Spreading (UPPS), a graph-based approach. It integrates user-item scoring into a graph-based, tag-aware item recommender system. UPPS utilizes novel formulas for affinity and similarity scoring, taking into account user-item preference in the mass diffusion of the recommender system. The presented results show that UPPS is a significant improvement over previous approaches.
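    The abstract does not give UPPS's affinity and similarity formulas, so the sketch below only shows the standard two-step mass-diffusion (probability spreading) computation on a user-item bipartite graph that such graph-based recommenders build on; the toy interaction matrix and function name are assumptions.

```python
import numpy as np

def mass_diffusion_scores(R, user):
    """Standard two-step mass diffusion (ProbS) on a user-item bipartite graph.

    R is a binary user-item matrix. Each item collected by `user` starts with
    one unit of resource; resources spread to users (split by item degree),
    then back to items (split by user degree). Already-collected items are
    masked out so the ranking covers new items only.
    """
    item_deg = R.sum(axis=0)                    # how many users hold each item
    user_deg = R.sum(axis=1)                    # how many items each user holds
    f0 = R[user].astype(float)                  # initial resource on items
    # items -> users: each item shares its resource equally among its holders
    to_users = R @ (f0 / np.maximum(item_deg, 1))
    # users -> items: each user shares the received resource among their items
    f1 = R.T @ (to_users / np.maximum(user_deg, 1))
    f1[R[user] > 0] = -np.inf                   # do not recommend known items
    return f1

# Toy 4-user x 5-item interaction matrix.
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 0, 0, 1, 0],
])
scores = mass_diffusion_scores(R, user=0)
print("recommended item for user 0:", int(np.argmax(scores)))
```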

    Cognition-Based Networks: A New Perspective on Network Optimization Using Learning and Distributed Intelligence

    IEEE Access, vol. 3, pp. 1512-1530, 2015, article no. 7217798. Zorzi, M., Zanella, A., Testolin, A., De Filippo De Grazia, M., and Zorzi, M. (Department of Information Engineering and Department of General Psychology, University of Padua, Padua, Italy; IRCCS San Camillo Foundation, Venice-Lido, Italy).
    In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). The proposed approach develops around the systematic application of advanced machine learning techniques and, in particular, unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS, we propose to combine this learning architecture with the emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared with past and current research efforts in this area, the technical approach outlined in this paper is deeply interdisciplinary and more comprehensive, calling for the synergic combination of the expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.
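    COBANETS is an architectural concept rather than a specific algorithm, so no implementation is implied by the abstract; as a toy illustration of its unsupervised representation-learning ingredient (learning a compressed view of network measurements), here is a minimal single-hidden-layer autoencoder trained on synthetic telemetry. The data, dimensions, and hyperparameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for network telemetry: 1000 observations of 20 correlated KPIs
# (e.g., per-cell load and interference readings); purely synthetic data.
latent = rng.standard_normal((1000, 4))
X = np.tanh(latent @ rng.standard_normal((4, 20))) + 0.05 * rng.standard_normal((1000, 20))

# Minimal single-hidden-layer autoencoder trained with plain gradient descent,
# illustrating unsupervised learning of a compressed network-state representation.
d, h, lr = X.shape[1], 4, 0.05
W1, b1 = 0.1 * rng.standard_normal((d, h)), np.zeros(h)
W2, b2 = 0.1 * rng.standard_normal((h, d)), np.zeros(d)

for epoch in range(200):
    H = np.tanh(X @ W1 + b1)          # compressed representation
    X_hat = H @ W2 + b2               # reconstruction
    err = X_hat - X
    # Backpropagate the mean squared reconstruction error.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("reconstruction MSE:", float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - X) ** 2)))
```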