3,435 research outputs found

    Fog-enabled Edge Learning for Cognitive Content-Centric Networking in 5G

    Full text link
    By caching content at network edges close to the users, content-centric networking (CCN) has been considered a way to enable efficient content retrieval and distribution in fifth-generation (5G) networks. Due to the volume, velocity, and variety of data generated by various 5G users, an urgent and strategic issue is how to elevate the cognitive ability of the CCN to realize context awareness, timely response, and traffic offloading for 5G applications. In this article, we envision that the fundamental work in designing a cognitive CCN (C-CCN) for the upcoming 5G is to exploit fog computing to associatively learn and control the states of edge devices (such as phones, vehicles, and base stations) and in-network resources (computing, networking, and caching). Moreover, we propose a fog-enabled edge learning (FEL) framework for C-CCN in 5G, which aggregates the idle computing resources of neighbouring edge devices into virtual fogs to handle heavy, delay-sensitive learning tasks. By leveraging artificial intelligence (AI) to jointly process sensed environmental data, handle massive content statistics, and enforce mobility control at the network edges, FEL makes it possible for mobile users to cognitively share their data over the C-CCN in 5G. To validate the feasibility of the proposed framework, we design two FEL-advanced cognitive services for C-CCN in 5G: 1) personalized network acceleration and 2) enhanced mobility management. We also present simulations showing FEL's efficiency in serving mobile users' delay-sensitive content retrieval and distribution in 5G.
    Comment: Submitted to IEEE Communications Magazine, under review, Feb. 09, 201
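
    As a rough illustration of the virtual-fog idea in this abstract, the Python sketch below greedily pools idle compute from nearby edge devices until a delay-sensitive learning task can finish within its deadline. The device parameters, the divisible-task assumption, and the latency model are all illustrative assumptions, not the paper's FEL algorithm.

```python
# Hypothetical sketch (not the paper's FEL algorithm): forming a "virtual fog"
# by greedily pooling idle compute of neighbouring edge devices until a
# delay-sensitive task's deadline can be met.
from dataclasses import dataclass

@dataclass
class EdgeDevice:
    name: str
    idle_gflops: float  # idle compute the device can contribute (assumed known)
    link_ms: float      # one-way latency to the task owner (assumed known)

def form_virtual_fog(task_gflop, deadline_ms, devices):
    """Pick low-latency neighbours until pooled compute meets the deadline.

    Simplifications for illustration: the task is perfectly divisible and
    transfer cost is dominated by the slowest selected link.
    """
    fog, pooled = [], 0.0
    for dev in sorted(devices, key=lambda d: d.link_ms):  # prefer close devices
        fog.append(dev)
        pooled += dev.idle_gflops
        latency = max(d.link_ms for d in fog) + task_gflop / pooled * 1e3
        if latency <= deadline_ms:
            return fog, latency
    return None, None  # deadline cannot be met with available neighbours

devices = [EdgeDevice("phone", 5, 2), EdgeDevice("base-station", 50, 5),
           EdgeDevice("vehicle", 20, 8)]
print(form_virtual_fog(task_gflop=1.0, deadline_ms=40.0, devices=devices))
```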

    A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures

    Full text link
    Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing paradigm and the Apache-Hadoop paradigm. We propose a basis, a common terminology, and functional factors upon which to analyze the two paradigms. We discuss the concept of "Big Data Ogres" and their facets as a means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms, and compare and contrast the two approaches. Specifically, we examine common implementations of these paradigms, shed light on the reasons for their current "architecture", and discuss some typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity. We discuss the potential integration of different implementations across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms to a semi-quantitative methodology. We use a simple and broadly used Ogre (K-means clustering) and characterize its performance on a range of representative platforms, covering several implementations from both paradigms. Our experiments provide insight into the relative strengths of the two paradigms. We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions.
    Comment: 8 pages, 2 figures
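
    Since K-means clustering is the Ogre used for the semi-quantitative comparison, a minimal self-contained version of the algorithm is sketched below; the cluster count, iteration budget, and synthetic data are illustrative and are not the paper's benchmark configuration.

```python
# Minimal K-means sketch of the clustering "Ogre"; settings are illustrative,
# not the paper's benchmark configuration.
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest centre per point (data-parallel "map").
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        # Update step: mean of each cluster (a "reduce"); keep the old
        # centre if a cluster happens to be empty.
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers, labels

pts = np.random.default_rng(1).normal(size=(1000, 2))
centers, labels = kmeans(pts, k=3)
print(centers)
```

    The two-step structure is one reason K-means works as a cross-paradigm benchmark: the assignment step is embarrassingly data-parallel (MPI ranks or Hadoop mappers), while the centre update is a reduction that both ecosystems must implement efficiently.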

    A Survey of Graph Pre-processing Methods: From Algorithmic to Hardware Perspectives

    Full text link
    Graph-related applications have experienced significant growth in academia and industry, driven by the powerful representation capabilities of graphs. However, efficiently executing these applications faces various challenges, such as load imbalance and random memory access. To address these challenges, researchers have proposed various acceleration systems, including software frameworks and hardware accelerators, all of which incorporate graph pre-processing (GPP). GPP serves as a preparatory step before the formal execution of applications, involving techniques such as sampling and reordering. However, GPP execution often remains overlooked, as the primary focus is directed towards enhancing graph applications themselves. This oversight is concerning, especially considering the explosive growth of real-world graph data, where GPP becomes essential and can even dominate system running overhead. Furthermore, GPP methods exhibit significant variations across devices and applications due to high customization. Unfortunately, no comprehensive work systematically summarizes GPP. To address this gap and foster a better understanding of GPP, we present a comprehensive survey dedicated to this area. We propose a double-level taxonomy of GPP, considering both algorithmic and hardware perspectives. By listing relevant works, we illustrate our taxonomy and conduct a thorough analysis and summary of diverse GPP techniques. Lastly, we discuss challenges in GPP and potential future directions.
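
    As a concrete example of one GPP technique named above, the sketch below performs a simple degree-based vertex reordering, renumbering high-degree vertices to adjacent IDs to improve memory locality; it is a generic illustration, not a method from any particular surveyed system.

```python
# Hypothetical GPP sketch: degree-based vertex reordering. Renumbering
# high-degree ("hub") vertices to adjacent IDs can improve cache locality
# for later graph processing; this is a generic illustration only.
from collections import defaultdict

def degree_reorder(edges):
    """Return an old_id -> new_id map, new IDs in descending degree order."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    ranked = sorted(degree, key=lambda vid: -degree[vid])
    return {old: new for new, old in enumerate(ranked)}

edges = [(0, 1), (0, 2), (0, 3), (2, 3), (4, 0)]
mapping = degree_reorder(edges)
print(mapping, [(mapping[u], mapping[v]) for u, v in edges])
```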