109,032 research outputs found

    Fronthaul-Constrained Cloud Radio Access Networks: Insights and Challenges

    As a promising paradigm for fifth generation (5G) wireless communication systems, cloud radio access networks (C-RANs) have been shown to reduce both capital and operating expenditures, as well as to provide high spectral efficiency (SE) and energy efficiency (EE). The fronthaul in such networks, defined as the transmission link between a baseband unit (BBU) and a remote radio head (RRH), requires high capacity, but is often constrained. This article comprehensively surveys recent advances in fronthaul-constrained C-RANs, including system architectures and key techniques. In particular, techniques for alleviating the impact of constrained fronthaul on SE/EE and quality of service for users, including compression and quantization, large-scale coordinated processing and clustering, and resource allocation optimization, are discussed. Open issues in terms of software-defined networking, network function virtualization, and partial centralization are also identified. (Comment: 5 figures; accepted by IEEE Wireless Communications. arXiv admin note: text overlap with arXiv:1407.3855 by other authors.)
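
    As a rough illustration of the compression/quantization theme (not taken from the article), the sketch below uniformly quantizes baseband samples and reports the distortion incurred at different fronthaul bit budgets; the signal statistics and bit widths are hypothetical assumptions.

```python
# Illustrative only: uniform quantization of fronthaul baseband samples,
# showing the rate/fidelity trade-off that compression techniques must manage.
import numpy as np

def quantize(samples, bits, max_amp=1.0):
    """Uniformly quantize real samples to 2**bits levels in [-max_amp, max_amp]."""
    levels = 2 ** bits
    step = 2 * max_amp / levels
    clipped = np.clip(samples, -max_amp, max_amp - 1e-12)
    return (np.floor((clipped + max_amp) / step) + 0.5) * step - max_amp

rng = np.random.default_rng(0)
x = rng.normal(0, 0.3, size=100_000)        # hypothetical baseband samples
for bits in (4, 8, 12):
    mse = np.mean((x - quantize(x, bits)) ** 2)
    print(f"{bits:2d} bits/sample -> distortion (MSE) = {mse:.2e}")
```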

    On practical design for joint distributed source and network coding

    This paper considers the problem of communicating correlated information from multiple source nodes over a network of noiseless channels to multiple destination nodes, where each destination node wants to recover all sources. The problem involves a joint consideration of distributed compression and network information relaying. Although the optimal rate region has been theoretically characterized, it was not clear how to design practical communication schemes with low complexity. This work provides a partial solution to this problem by proposing a low-complexity scheme for the special case with two sources whose correlation is characterized by a binary symmetric channel. Our scheme is based on a careful combination of linear syndrome-based Slepian-Wolf coding and random linear mixing (network coding). It is in general suboptimal; however, its low complexity and robustness to network dynamics make it suitable for practical implementation.
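
    A toy sketch of the two ingredients the abstract names follows; the (7,4) Hamming code, the single-bit correlation model, and the packet sizes are illustrative assumptions, not the paper's actual design.

```python
# Toy sketch (illustrative only) of:
# (1) syndrome-based Slepian-Wolf coding of X given side information Y, where
#     X and Y differ by at most one bit flip, using a tiny parity-check code;
# (2) random linear mixing (network coding) of packets over GF(2).
import numpy as np

rng = np.random.default_rng(1)

# (7,4) Hamming parity-check matrix: distinguishes all single-bit differences.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def syndrome(word):
    return H @ word % 2

x = rng.integers(0, 2, 7, dtype=np.uint8)            # source word at node 1
e = np.zeros(7, dtype=np.uint8); e[rng.integers(7)] = 1
y = (x + e) % 2                                       # correlated side info (1-bit flip)

s = syndrome(x)                                       # only 3 bits sent instead of 7
# Decoder: find the lowest-weight pattern whose syndrome matches s + syndrome(y).
diff = (s + syndrome(y)) % 2
candidates = np.vstack([np.zeros(7, np.uint8), np.eye(7, dtype=np.uint8)])
err = next(c for c in candidates if np.array_equal(syndrome(c), diff))
x_hat = (y + err) % 2
assert np.array_equal(x_hat, x)

# Random linear network coding: an intermediate node forwards random GF(2) mixes.
packets = rng.integers(0, 2, (4, 7), dtype=np.uint8)  # 4 packets of 7 bits
coeffs = rng.integers(0, 2, (4, 4), dtype=np.uint8)   # random mixing matrix
mixed = coeffs @ packets % 2                          # what the relay sends
# A destination that collects 4 independent mixes inverts coeffs over GF(2).
```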

    Distributed Kernel Regression: An Algorithm for Training Collaboratively

    This paper addresses the problem of distributed learning under communication constraints, motivated by distributed signal processing in wireless sensor networks and data mining with distributed databases. After formalizing a general model for distributed learning, an algorithm for collaboratively training regularized kernel least-squares regression estimators is derived. Noting that the algorithm can be viewed as an application of successive orthogonal projection algorithms, its convergence properties are investigated and the statistical behavior of the estimator is discussed in a simplified theoretical setting. (Comment: to be presented at the 2006 IEEE Information Theory Workshop, Punta del Este, Uruguay, March 13-17, 2006.)
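
    The following is a loose illustration of collaborative kernel least-squares training in the spirit of the abstract; the ring schedule, the shared "message point" grid, and all parameters are assumptions of this sketch rather than the paper's algorithm.

```python
# Loose illustration only: each node refits a local regularized kernel
# least-squares estimator that balances its own data against predictions
# passed around a ring of sensors.
import numpy as np

def rbf(A, B, gamma=2.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def local_fit(X, y, Xmsg, ymsg, lam=0.1):
    """Kernel ridge fit on the node's data augmented with the shared messages."""
    Xall = np.vstack([X, Xmsg])
    yall = np.concatenate([y, ymsg])
    K = rbf(Xall, Xall)
    alpha = np.linalg.solve(K + lam * np.eye(len(yall)), yall)
    return lambda Z: rbf(Z, Xall) @ alpha

rng = np.random.default_rng(0)
f_true = lambda x: np.sin(3 * x[:, 0])
nodes = []
for _ in range(4):                             # four sensors, 20 local samples each
    X = rng.uniform(-1, 1, (20, 1))
    nodes.append((X, f_true(X) + 0.1 * rng.normal(size=20)))

Xmsg = np.linspace(-1, 1, 15)[:, None]         # shared "message point" grid
ymsg = np.zeros(15)                            # current collaborative predictions
for _ in range(5):                             # a few passes around the ring
    for X, y in nodes:
        predict = local_fit(X, y, Xmsg, ymsg)
        ymsg = predict(Xmsg)                   # hand updated predictions onward
print("mean error at message points:", np.abs(ymsg - f_true(Xmsg)).mean())
```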

    Enabling Factor Analysis on Thousand-Subject Neuroimaging Datasets

    The scale of functional magnetic resonance image data is rapidly increasing as large multi-subject datasets are becoming widely available and high-resolution scanners are adopted. The inherent low-dimensionality of the information in this data has led neuroscientists to consider factor analysis methods to extract and analyze the underlying brain activity. In this work, we consider two recent multi-subject factor analysis methods: the Shared Response Model and Hierarchical Topographic Factor Analysis. We perform analytical, algorithmic, and code optimization to enable multi-node parallel implementations to scale. Single-node improvements result in 99x and 1812x speedups on these two methods, and enable the processing of larger datasets. Our distributed implementations show strong scaling of 3.3x and 5.5x respectively with 20 nodes on real datasets. We also demonstrate weak scaling on a synthetic dataset with 1024 subjects, on up to 1024 nodes and 32,768 cores.
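
    As a simplified sketch of why these factor analysis methods parallelize across nodes, the alternating update below (my simplification of a shared-response-model style fit, not the authors' optimized code) keeps the per-subject step independent, so subjects can be distributed across workers.

```python
# Compact sketch (my simplification): per-subject orthogonal maps W_i and a
# shared time course S are refined in turn.  The per-subject W_i updates are
# independent of each other, which makes distribution across nodes natural.
import numpy as np

def srm_sketch(subject_data, k=10, iters=20, seed=0):
    """subject_data: list of (voxels_i x time) arrays; returns (W_list, S)."""
    rng = np.random.default_rng(seed)
    t = subject_data[0].shape[1]
    S = rng.normal(size=(k, t))
    W = [None] * len(subject_data)
    for _ in range(iters):
        for i, X in enumerate(subject_data):        # embarrassingly parallel step
            U, _, Vt = np.linalg.svd(X @ S.T, full_matrices=False)
            W[i] = U @ Vt                           # orthogonal Procrustes solution
        S = np.mean([Wi.T @ X for Wi, X in zip(W, subject_data)], axis=0)
    return W, S

rng = np.random.default_rng(1)
data = [rng.normal(size=(500, 200)) for _ in range(8)]   # toy fMRI-sized arrays
W, S = srm_sketch(data, k=10)
print(S.shape)   # (10, 200) shared response
```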

    Split Distributed Computing in Wireless Sensor Networks

    We propose a novel method to improve the performance of distributed computing in wireless sensor networks. The method is designed to increase the speed of distributed computation and reduce the number of messages required for the network to reach the desired result. In our analysis, we use the average consensus algorithm: the desired result is that every node obtains the average of all initial values in a reduced number of iterations. The method is based on fragmenting the network into small geographical structures that execute distributed calculations in parallel, which significantly improves performance.
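
    A minimal sketch of the idea, under assumptions of my own (a line-graph topology, equal-size fragments, and a synchronous update), runs standard average consensus inside each fragment in parallel and then averages the fragment estimates:

```python
# Illustrative sketch: average consensus run inside small clusters ("fragments")
# first, then across fragment representatives, to cut global iterations.
import numpy as np

def consensus(values, neighbors, eps=0.2, iters=50):
    """Synchronous average consensus: x_i <- x_i + eps * sum_j (x_j - x_i)."""
    x = np.array(values, dtype=float)
    for _ in range(iters):
        x = x + eps * np.array([sum(x[j] - x[i] for j in neighbors[i])
                                for i in range(len(x))])
    return x

# Toy network of 6 sensors split into two 3-node fragments on a line graph.
values = [3.0, 7.0, 2.0, 9.0, 4.0, 5.0]
frag_a = {0: [1], 1: [0, 2], 2: [1]}                 # nodes 0-2
frag_b = {0: [1], 1: [0, 2], 2: [1]}                 # nodes 3-5 (relabelled)

# Stage 1: fragments converge in parallel to their local averages.
xa = consensus(values[:3], frag_a)
xb = consensus(values[3:], frag_b)
# Stage 2: one representative per fragment averages the fragment estimates
# (exact here because the fragments are the same size).
global_avg = consensus([xa.mean(), xb.mean()], {0: [1], 1: [0]})
print(global_avg, "vs true mean", np.mean(values))
```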

    Game Theoretic Approaches to Massive Data Processing in Wireless Networks

    Wireless communication networks are becoming highly virtualized with two-layer hierarchies, in which controllers at the upper layer with tasks to achieve can ask a large number of agents at the lower layer to help realize computation, storage, and transmission functions. Through offloading data processing to the agents, the controllers can accomplish otherwise prohibitive big data processing. Incentive mechanisms are needed so that the agents perform the controllers' tasks while satisfying the objectives of both controllers and agents. In this article, a hierarchical game framework with fast convergence and scalability is proposed to meet the demand for real-time processing in such situations. Possible future research directions in this emerging area are also discussed.
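
    The abstract does not spell out the mechanism, but a generic leader-follower (Stackelberg-style) pricing example conveys the incentive idea; the quadratic agent costs, the reward sweep, and the task size below are purely hypothetical.

```python
# Generic leader-follower pricing sketch (not the article's specific mechanism):
# a controller offers a per-unit reward r for offloaded processing; each agent i
# best-responds by choosing effort maximizing r*e - c_i*e**2, and the controller
# searches for the cheapest reward that still gets its task completed.
import numpy as np

costs = np.array([0.5, 0.8, 1.2, 2.0])      # hypothetical agent cost coefficients
task = 4.0                                   # units of processing the controller needs

def agent_effort(r, c):
    return r / (2 * c)                       # argmax of r*e - c*e**2

best = None
for r in np.linspace(0.1, 5.0, 200):        # controller sweeps candidate rewards
    efforts = agent_effort(r, costs)
    if efforts.sum() >= task:                # task gets done at this reward level
        payment = r * efforts.sum()
        if best is None or payment < best[1]:
            best = (r, payment)

print("reward per unit:", round(best[0], 2), " total payment:", round(best[1], 2))
```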