
    JALAD: Joint Accuracy- and Latency-Aware Deep Structure Decoupling for Edge-Cloud Execution

    Recent years have witnessed a rapid growth of deep-network based services and applications. A practical and critical problem has thus emerged: how to effectively deploy deep neural network models so that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data center servers, incurring large latency because a significant amount of data has to be transferred from the edge of the network to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework, which decouples a deep neural network so that one part runs at edge devices and the other part inside the conventional cloud, while only a minimal amount of data has to be transferred between them. Though the idea seems straightforward, we face several challenges: i) how to find the best partition of a deep structure; ii) how to deploy the component at an edge device that has only limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD, including 1) a normalization-based in-layer data compression strategy that jointly considers compression rate and model accuracy; 2) a latency-aware deep decoupling strategy to minimize the overall execution latency; and 3) an edge-cloud structure adaptation strategy that dynamically changes the decoupling under different network conditions. Experiments demonstrate that our solution significantly reduces execution latency: it speeds up overall inference while keeping model accuracy loss within a guaranteed bound.
    Comment: conference, copyright transferred to IEEE
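The latency-aware decoupling strategy described above can be illustrated with a minimal sketch (not the authors' code): enumerate every candidate split point and pick the one minimizing edge compute time plus edge-to-cloud transfer time plus cloud compute time. All per-layer timings, output sizes, and the bandwidth figure below are illustrative assumptions, not measurements from the paper.

```python
def best_split(edge_ms, cloud_ms, boundary_kb, bandwidth_kbps):
    """Pick the decoupling point s: layers [0, s) run on the edge device,
    layers [s, n) run in the cloud. boundary_kb[s] is the (possibly
    compressed) tensor that crosses the edge->cloud link for split s,
    with boundary_kb[0] being the raw input and boundary_kb[n] the final
    output (edge-only execution). Returns (split_index, latency_ms)."""
    n = len(edge_ms)
    best_s, best_lat = 0, float("inf")
    for s in range(n + 1):
        transfer_ms = boundary_kb[s] * 1000.0 / bandwidth_kbps
        lat = sum(edge_ms[:s]) + sum(cloud_ms[s:]) + transfer_ms
        if lat < best_lat:
            best_s, best_lat = s, lat
    return best_s, best_lat

# Illustrative 4-layer model: early layers are cheap on the edge and their
# outputs shrink quickly, so splitting after layer 2 wins here.
split, latency = best_split(
    edge_ms=[5, 8, 12, 40],
    cloud_ms=[1, 2, 3, 5],
    boundary_kb=[600, 300, 40, 40, 2],
    bandwidth_kbps=1000,
)
```

An adaptation strategy like the paper's third contribution would simply re-run this search whenever the measured bandwidth changes.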

    Curriculum Graph Machine Learning: A Survey

    Graph machine learning has been extensively studied in both academia and industry. However, most existing graph machine learning models in the literature are designed to train on data samples in a random order, which may yield suboptimal performance because it ignores how the importance of different graph data samples, and the order in which they are presented, affects the model's optimization status. To tackle this critical problem, curriculum graph machine learning (Graph CL), which integrates the strengths of graph machine learning and curriculum learning, has arisen and attracted increasing attention from the research community. In this paper, we therefore provide a comprehensive overview of Graph CL approaches and present a detailed survey of recent advances in this direction. Specifically, we first discuss the key challenges of Graph CL and provide its formal problem definition. Then, we categorize and summarize existing methods into three classes based on the three kinds of graph machine learning tasks, i.e., node-level, link-level, and graph-level tasks. Finally, we share our thoughts on future research directions. To the best of our knowledge, this paper is the first survey of curriculum graph machine learning.
    Comment: IJCAI 2023 Survey Track
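The core idea the survey covers, replacing random sample order with an easy-to-hard schedule, can be sketched generically (this is not any specific Graph CL method). The difficulty scorer and the linear pacing function below are hypothetical placeholders; in a graph setting the score might come from, e.g., node degree or neighborhood label consistency.

```python
def curriculum_stages(samples, difficulty, num_stages):
    """Yield the training pool for each stage of a curriculum: stage k
    trains on the easiest k/num_stages fraction of the data (a linear
    pacing function), instead of sampling the whole set at random."""
    ordered = sorted(samples, key=difficulty)  # easy first
    n = len(ordered)
    for k in range(1, num_stages + 1):
        yield ordered[: max(1, n * k // num_stages)]

# Toy example: three samples whose "difficulty" is just their value.
stages = list(curriculum_stages([3, 1, 2], difficulty=lambda x: x, num_stages=3))
```

A real Graph CL method would additionally decide *when* to advance stages (e.g., based on validation loss) rather than on a fixed schedule.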

    Research on SLM Algorithm for PAPR Reduction in MB-OFDM UWB Systems

    Multiband orthogonal frequency division multiplexing (MB-OFDM) is one of the key techniques of ultra-wideband (UWB) systems. A major drawback of the MB-OFDM technique is the high peak-to-average power ratio (PAPR) of the transmit signal. In this paper, a novel phase-sequence design for the selected mapping (SLM) algorithm, which removes the need for side information, is proposed to lower the PAPR of MB-OFDM UWB signals. It is also shown that PAPR reduction performance comparable to the original SLM algorithm can be achieved with only a small increase in signal power. Simulation results show that there is a trade-off between SLM computational complexity and PAPR performance. The objective of the new algorithm is to bring the PAPR close to that of the ordinary SLM technique while reducing computational complexity, with little performance degradation and better system resource utilization.
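The ordinary SLM baseline that the paper improves on, and the complexity/PAPR trade-off its simulations report, can be sketched as follows. This is a generic textbook SLM with random ±1 phase sequences, not the paper's proposed phase-sequence design; the naive inverse DFT stands in for the transmitter IFFT, and the subcarrier symbols are illustrative.

```python
import cmath
import math
import random

def idft(X):
    """Naive inverse DFT of the frequency-domain subcarrier symbols
    (stand-in for the IFFT in a real MB-OFDM transmitter)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    powers = [abs(v) ** 2 for v in x]
    return 10.0 * math.log10(max(powers) / (sum(powers) / len(powers)))

def slm(symbols, num_candidates, rng):
    """Ordinary SLM: multiply the frequency-domain symbols by U random
    +/-1 phase sequences, take each candidate to the time domain, and
    keep the one with the lowest PAPR.
    Returns (papr_db, time_signal, phase_sequence)."""
    best = None
    for _ in range(num_candidates):
        phases = [rng.choice((1, -1)) for _ in symbols]
        candidate = idft([s * p for s, p in zip(symbols, phases)])
        p = papr_db(candidate)
        if best is None or p < best[0]:
            best = (p, candidate, phases)
    return best
```

Raising `num_candidates` (U) lowers the achieved PAPR but costs U IFFTs per symbol, which is exactly the complexity/performance equilibrium the simulations explore; a conventional SLM receiver also needs the winning phase sequence as side information, which is what the paper's design avoids.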