
    Mobile Multiuser Detection Technique

    Multiuser detection technology for mobile/cellular networks emerged in the early 1980s and has since developed into an important, full-fledged field of multiple-access communication. In a DS-CDMA system, the conventional single-user detector suffers from multiple-access interference (MAI) and the near-far effect, which limit capacity. The optimal multiuser detector (MUD), on the other hand, suffers from computational complexity that grows exponentially with the number of active users. Over the last two decades there has therefore been considerable interest in suboptimal multiuser detectors that are low in complexity yet deliver acceptable performance. This work highlights various such detection techniques. In a multiuser MIMO system, a base station (BS) equipped with multiple antennas serves a number of users. Conventionally, communication between the BS and the users is performed by orthogonalizing the channel, so that the BS communicates with each user in separate time-frequency resources. This is not optimal from an information-theoretic point of view; higher rates can be obtained if the BS communicates with several users in the same time-frequency resource. DOI: 10.17762/ijritcc2321-8169.15082
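    As an illustration of the low-complexity linear detectors this abstract alludes to, the following minimal NumPy sketch compares a conventional matched-filter (single-user) detector with a decorrelating and a linear MMSE multiuser detector on a toy synchronous DS-CDMA uplink. The spreading length, number of users, amplitudes, and noise level are illustrative assumptions, not values from the cited work.

```python
import numpy as np

# Minimal sketch of linear multiuser detection for a synchronous DS-CDMA uplink.
# All parameters (spreading length, number of users, amplitudes, noise level)
# are illustrative assumptions, not taken from the abstract above.

rng = np.random.default_rng(0)

N, K = 16, 4                                            # chips per bit, active users
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # random signature waveforms
A = np.diag([1.0, 2.0, 4.0, 8.0])                       # unequal amplitudes -> near-far scenario
b = rng.choice([-1.0, 1.0], size=K)                     # transmitted BPSK bits
sigma = 0.3                                             # noise standard deviation

# Received chip vector: r = S A b + n
r = S @ A @ b + sigma * rng.standard_normal(N)

# 1) Conventional single-user (matched-filter) detector: ignores MAI,
#    so weak users can be swamped by strong ones (near-far problem).
y_mf = S.T @ r
b_mf = np.sign(y_mf)

# 2) Decorrelating detector: inverts the code cross-correlation matrix R = S^T S,
#    removing MAI at the cost of noise enhancement.
R = S.T @ S
b_dec = np.sign(np.linalg.solve(R, y_mf))

# 3) Linear MMSE detector: trades off MAI suppression against noise enhancement.
b_mmse = np.sign(np.linalg.solve(R + (sigma**2) * np.linalg.inv(A @ A), y_mf))

print("true bits      :", b)
print("matched filter :", b_mf)
print("decorrelator   :", b_dec)
print("MMSE           :", b_mmse)
```

    With the unequal amplitudes above, the matched filter will often flip the weak users' bits, while the decorrelator and MMSE detector recover them at the cost of a K-by-K matrix inversion rather than the exponential search of the optimal MUD.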

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential for supporting a broad range of complex and compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.

    Digital Signal Processing Research Program

    Contains table of contents for Section 2, an introduction, reports on twenty research projects, and a list of publications. Lockheed Sanders, Inc. Contract BZ4962; U.S. Army Research Laboratory Grant QK-8819; U.S. Navy - Office of Naval Research Grant N00014-93-1-0686; National Science Foundation Grant MIP 95-02885; U.S. Navy - Office of Naval Research Grant N00014-95-1-0834; U.S. Navy - Office of Naval Research Grant N00014-96-1-0930; U.S. Navy - Office of Naval Research Grant N00014-95-1-0362; National Defense Science and Engineering Fellowship; U.S. Air Force - Office of Scientific Research Grant F49620-96-1-0072; National Science Foundation Graduate Research Fellowship Grant MIP 95-02885; Lockheed Sanders, Inc. Grant N00014-93-1-0686; National Science Foundation Graduate Fellowship; U.S. Army Research Laboratory/ARL Advanced Sensors Federated Lab Program Contract DAAL01-96-2-000

    Resource allocation technique for powerline network using a modified shuffled frog-leaping algorithm

    Resource allocation (RA) techniques should be efficient and optimized in order to enhance the QoS (power and bit loading, capacity, scalability) of high-speed networking data applications. This research attempts to push efficiency further towards near-optimal performance. The RA problem involves assigning subcarriers, power, and bit amounts to each user efficiently. Several studies conducted by the Federal Communications Commission have shown that conventional RA approaches are becoming insufficient for the rapidly growing demand in networking, resulting in spectrum underutilization, low capacity, and poor convergence; in addition, poor bit-error-rate performance, channel-feedback delay, weak scalability, and computational complexity make real-time solutions intractable. This is mainly due to sophisticated and restrictive constraints, multiple objectives, unfairness, channel noise, and the unrealistic assumption that perfect channel state information is available. The main goal of this work is to develop a conceptual framework and mathematical model for resource allocation using the Shuffled Frog-Leaping Algorithm (SFLA). Thus, a modified SFLA is introduced and integrated into an Orthogonal Frequency Division Multiplexing (OFDM) system. The SFLA generates a random population of solutions (power, bit), and the fitness of each solution is calculated and improved for each subcarrier and user. The solution is numerically validated and verified by simulation of a powerline channel. The system performance was compared with similar research works in terms of capacity, scalability, allocated rate/power, and convergence. The allocated resources are consistently optimized, and the capacity obtained is consistently higher compared with root-finding, linear, and hybrid evolutionary algorithms. The proposed algorithm offers the fastest convergence, requiring 75 iterations to come within 0.001% error of the global optimum, compared to 92 for conventional techniques. Finally, joint allocation models for selecting optimal resource values are introduced: adaptive power and bit allocators in an OFDM-based powerline system using modified SFLA-based TLBO and PSO are proposed.
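    The following is a minimal sketch, under stated assumptions, of the shuffled frog-leaping update applied to a toy OFDM power-allocation fitness (sum capacity under a total-power budget). The channel gains, power budget, memeplex sizes, and leap limits are illustrative assumptions; the thesis's actual multi-objective bit-and-power model, powerline channel, and TLBO/PSO hybrids are not reproduced here.

```python
import numpy as np

# Minimal sketch of a shuffled frog-leaping algorithm (SFLA) on a toy OFDM
# power-allocation problem: maximize sum capacity over N_SC subcarriers under
# a total-power budget. All parameters below are illustrative assumptions.

rng = np.random.default_rng(1)

N_SC = 16                              # subcarriers (assumed)
g = rng.uniform(0.2, 2.0, size=N_SC)   # channel gain-to-noise ratios (assumed)
P_TOTAL = 10.0                         # total power budget (assumed)

def fitness(p):
    """Sum capacity of a power vector after projecting onto the power budget."""
    p = np.clip(p, 0.0, None)
    s = p.sum()
    if s > 0:
        p = p * (P_TOTAL / s)          # enforce the total-power constraint
    return np.sum(np.log2(1.0 + g * p))

def sfla(n_frogs=30, n_memeplexes=5, n_local=8, n_shuffles=40, d_max=2.0):
    frogs = rng.uniform(0.0, P_TOTAL, size=(n_frogs, N_SC))
    for _ in range(n_shuffles):
        # Rank frogs by fitness (best first) and deal them into memeplexes.
        order = np.argsort([-fitness(f) for f in frogs])
        frogs = frogs[order]
        best_global = frogs[0]
        for m in range(n_memeplexes):
            idx = np.arange(m, n_frogs, n_memeplexes)   # every n_memeplexes-th frog
            for _ in range(n_local):
                sub = idx[np.argsort([-fitness(frogs[i]) for i in idx])]
                i_best, i_worst = sub[0], sub[-1]
                # Worst frog leaps toward the memeplex best ...
                step = rng.uniform(0, 1, N_SC) * (frogs[i_best] - frogs[i_worst])
                cand = frogs[i_worst] + np.clip(step, -d_max, d_max)
                if fitness(cand) <= fitness(frogs[i_worst]):
                    # ... otherwise toward the global best ...
                    step = rng.uniform(0, 1, N_SC) * (best_global - frogs[i_worst])
                    cand = frogs[i_worst] + np.clip(step, -d_max, d_max)
                if fitness(cand) <= fitness(frogs[i_worst]):
                    # ... otherwise it is replaced by a random frog (censoring).
                    cand = rng.uniform(0.0, P_TOTAL, size=N_SC)
                frogs[i_worst] = cand
        # Shuffling happens implicitly at the next ranking/partitioning step.
    best = max(frogs, key=fitness)
    return best, fitness(best)

p_opt, cap = sfla()
print(f"approx. sum capacity: {cap:.3f} bits/s/Hz")
```

    The memeplex structure is what distinguishes SFLA from plain PSO: local leaps exploit each memeplex, while the periodic re-ranking and repartitioning ("shuffling") spreads information across the whole population.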

    Progressive feature transmission for split classification at the wireless edge

    We consider the scenario of inference at the wireless edge, in which devices are connected to an edge server and ask the server to carry out remote classification, that is, classify data samples available at edge devices. This requires the edge devices to upload high-dimensional features of samples over resource-constrained wireless channels, which creates a communication bottleneck. The conventional feature pruning solution would require the device to have access to the inference model, which is not available in the current split inference scenario. To address this issue, we propose the progressive feature transmission (ProgressFTX) protocol, which minimizes the overhead by progressively transmitting features until a target confidence level is reached. A control policy is proposed to accelerate inference, comprising two key operations: importance-aware feature selection at the server and transmission-termination control. For the former, it is shown that selecting the most important features, characterized by the largest discriminant gains of the corresponding feature dimensions, achieves a sub-optimal performance. For the latter, the proposed policy is shown to exhibit a threshold structure. Specifically, the transmission is stopped when the incremental uncertainty reduction by further feature transmission is outweighed by its communication cost. The indices of the selected features and the transmission decision are fed back to the device in each slot. The control policy is first derived for the tractable case of linear classification, and then extended to the more complex case of classification using a convolutional neural network. Both Gaussian and fading channels are considered. Experimental results are obtained for both a statistical data model and a real dataset. It is shown that ProgressFTX can substantially reduce the communication latency compared to conventional feature pruning and random feature transmission strategies.
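    A minimal sketch of the progressive transmission loop described above, for the tractable linear (two-class Gaussian) case: features are ranked by an assumed per-dimension discriminant gain, a few indices are fed back per slot, and transmission stops when the incremental entropy reduction no longer outweighs an assumed per-slot communication cost. The data model, slot budget, and cost value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Minimal sketch of progressive feature transmission for a linear (two-class
# Gaussian) classifier. The discriminant-gain ranking, per-slot budget, and
# stopping cost are illustrative assumptions based on the abstract's description.

rng = np.random.default_rng(2)

D = 32                                               # feature dimension (assumed)
mu0, mu1 = rng.normal(0, 1, D), rng.normal(0, 1, D)  # class means (assumed unit variance)
x = mu1 + rng.normal(0, 1, D)                        # sample held at the edge device

# Server side: per-dimension discriminant gain (squared mean separation over
# unit variance), used for importance-aware feature selection.
gain = (mu1 - mu0) ** 2
order = np.argsort(-gain)                 # most discriminative dimensions first

FEATURES_PER_SLOT = 4                     # uplink budget per slot (assumed)
COMM_COST = 0.02                          # entropy-reduction "price" per slot (assumed)

def posterior_entropy(idx):
    """Binary entropy of the class posterior using only the received dimensions."""
    if len(idx) == 0:
        return 1.0                        # equal priors -> 1 bit of uncertainty
    llr = np.sum((mu1[idx] - mu0[idx]) * (x[idx] - 0.5 * (mu1[idx] + mu0[idx])))
    p = np.clip(1.0 / (1.0 + np.exp(-llr)), 1e-12, 1 - 1e-12)
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

received = []
H = posterior_entropy(received)
for slot in range((D + FEATURES_PER_SLOT - 1) // FEATURES_PER_SLOT):
    # Server feeds back the indices of the next most important features.
    nxt = order[len(received):len(received) + FEATURES_PER_SLOT]
    H_new = posterior_entropy(received + list(nxt))
    # Threshold-structured stopping rule: stop when the incremental uncertainty
    # reduction no longer outweighs the communication cost of another slot.
    if H - H_new < COMM_COST:
        break
    received += list(nxt)
    H = H_new

print(f"transmitted {len(received)}/{D} features, posterior entropy {H:.3f} bits")
```

    In this toy setup the loop typically stops well before all D dimensions are sent, which mirrors the latency savings the abstract reports over transmitting a fixed, pruned, or random feature set.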