    Game among Interdependent Networks: The Impact of Rationality on System Robustness

    Many real-world systems are composed of interdependent networks that rely on one another. Such networks are typically designed and operated by different entities, each aiming to maximize its own payoff, so a game arises among these entities when they design their networks. In this paper, we study this game, investigating how the rational behavior of entities affects system robustness. We first introduce a mathematical model to quantify the interacting payoffs among the entities. We then study the Nash equilibrium of the game and compare it with the optimal social welfare. We show that, in the continuous game, cooperation among the entities can maximize social welfare only when the average degree of each network is constant; in general, therefore, a large gap between the Nash equilibrium and the optimal social welfare exists. The rationality of entities makes the system inherently deficient and, in some cases, extremely vulnerable. We analyze our model for two concrete systems, one with a continuous strategy space and one with a discrete strategy space. Furthermore, we identify factors (such as weakening the coupling strength of the interdependent networks and designing a suitable topology dependency for the system) that help reduce the gap and the system's vulnerability.
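    The gap between the Nash equilibrium and the social optimum described above can be illustrated with a toy model (this is a hypothetical formulation, not the paper's actual payoff functions): each of two operators chooses an investment k_i in its own network's robustness, gains a*k_i from its own network and a spillover b*k_j from the coupled network (b is the coupling strength), and pays a quadratic cost c*k_i**2.

    ```python
    # Hypothetical two-operator game (illustrative only, not the paper's model).
    # payoff_i(k_i, k_j) = a*k_i + b*k_j - c*k_i**2

    def nash_investment(a, c):
        # Best response: d/dk_i (a*k_i + b*k_j - c*k_i**2) = a - 2*c*k_i = 0.
        # The spillover b*k_j does not depend on k_i, so a selfish operator ignores it.
        return a / (2 * c)

    def social_investment(a, b, c):
        # The social planner also counts the spillover b*k_i enjoyed by the
        # other operator, so the optimum internalises the coupling benefit.
        return (a + b) / (2 * c)

    def welfare(k, a, b, c):
        # Total welfare for a symmetric profile (k, k) over both operators.
        return 2 * ((a + b) * k - c * k ** 2)

    a, b, c = 1.0, 0.5, 1.0
    k_ne = nash_investment(a, c)        # selfish investment level
    k_so = social_investment(a, b, c)   # socially optimal investment level
    gap = welfare(k_so, a, b, c) - welfare(k_ne, a, b, c)
    print(k_ne, k_so, gap)
    ```

    With b > 0 the selfish investment falls short of the social optimum and the welfare gap is positive; setting b = 0 (no coupling) makes the two coincide, mirroring the abstract's observation that weakening the coupling strength reduces the gap.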

    Confidant: Customizing Transformer-based LLMs via Collaborative Edge Training

    Transformer-based large language models (LLMs) have demonstrated impressive capabilities in a variety of natural language processing (NLP) tasks. Nonetheless, it is challenging to deploy and fine-tune LLMs on mobile edge devices with limited computing, memory, and energy budgets. In this paper, we propose Confidant, a multi-backend collaborative training framework for customizing state-of-the-art LLMs on commodity mobile devices such as smartphones. Confidant partitions an LLM into several sub-models so that each fits into a mobile device's memory. A pipeline-parallel training mechanism is further developed to ensure fast and efficient distributed training. In addition, we propose a novel backend scheduler that allocates different attention heads to heterogeneous compute hardware, including mobile CPUs and GPUs, to maximize the compute resource utilization on each edge device. Our preliminary experimental results show that Confidant achieves up to 45.3% memory reduction and 8.03x inference speedup in practical settings. Comment: 6 pages, 7 figures; submitted to HotMobile 202
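    The partitioning step described above can be sketched as a simple greedy split (an illustrative sketch, not Confidant's actual algorithm): group consecutive transformer layers into sub-models whose combined parameter memory fits a single device's budget, so the resulting stages can be trained in a pipeline across devices.

    ```python
    # Illustrative greedy partitioner (assumed, not from the paper): split a
    # stack of transformer layers into consecutive stages, each fitting one
    # mobile device's memory budget, for pipeline-parallel training.

    def partition_layers(layer_mem_mb, budget_mb):
        """Group consecutive layers into memory-feasible pipeline stages."""
        stages, current, used = [], [], 0
        for i, mem in enumerate(layer_mem_mb):
            if mem > budget_mb:
                raise ValueError(f"layer {i} alone exceeds the device budget")
            if used + mem > budget_mb:   # current stage is full; start a new one
                stages.append(current)
                current, used = [], 0
            current.append(i)
            used += mem
        if current:
            stages.append(current)
        return stages

    # e.g. six layers of 300 MB each on phones with a 1 GB working budget:
    print(partition_layers([300] * 6, 1000))  # [[0, 1, 2], [3, 4, 5]]
    ```

    A greedy consecutive split keeps inter-device communication to a single activation tensor per stage boundary, which is why pipeline partitioners typically cut the layer stack rather than assigning arbitrary layer subsets.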