
    Optimal resource scheduling for energy-efficient next generation wireless networks

    Cellular networks can provide highly available and reliable communication links to Internet of Things (IoT) applications, giving the connected-Things paradigm more momentum than ever. In turn, the rich information collected from sensing-capable Things can guide the network operator in previously unforeseen directions, allowing the underlying cellular networks to be further optimized. In this regard, cellular networks and the IoT are conceived as key components of beyond-4G and future 5G networks. In this dissertation, we therefore study each of the two components in depth, focusing on how to optimize networking resources for quality of service and better energy efficiency. First, we study the heterogeneous cellular network architecture, a major enhancement to the current 4G network through base station (BS) densification and traffic offloading. In particular, densely deployed short-range, low-power small-cell base stations (SBSs) can significantly improve frequency reuse, throughput, and energy efficiency. We then study the heterogeneous cloud radio access network (C-RAN), one of the core enablers of next-generation 5G cellular networks. With the high availability provided by the long-range macro BS (MBS), the heterogeneous C-RAN (H-CRAN) can effectively improve overall resource utilization compared with conventional C-RANs. In each study, we propose an optimal resource scheduling and service provisioning scheme that delivers quality service to users in a resource-efficient manner. In addition, we carry out two studies on IoT networks operating with the IEEE 802.11ah standard. Specifically, we introduce energy-efficient device management algorithms for battery-operated, resource-constrained IoT sensor devices that prolong their lifetime by optimally scheduling their activation. The enhanced power saving mechanism and the optimal sensing algorithm proposed in these studies can effectively improve both the energy efficiency of the IoT devices and the lifetime of the entire network.
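
    As a rough illustration of the activation-scheduling idea the abstract mentions, the sketch below uses invented names and a toy energy model (it is not the dissertation's algorithm): each time slot, the devices with the most remaining battery are kept awake so that energy drain is balanced across the fleet.

```python
# Hypothetical illustration of energy-aware activation scheduling for
# battery-operated IoT sensors; not the dissertation's actual algorithm.
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    battery_mj: float           # remaining energy in millijoules (assumed unit)
    awake_cost_mj: float = 5.0  # assumed cost of staying awake for one slot

def schedule_activations(sensors, num_slots, awake_per_slot=1):
    """Greedy schedule: each slot, wake the sensors with the most remaining
    energy so battery drain is balanced across devices."""
    schedule = []
    for _ in range(num_slots):
        candidates = sorted(
            (s for s in sensors if s.battery_mj >= s.awake_cost_mj),
            key=lambda s: s.battery_mj,
            reverse=True,
        )
        active = candidates[:awake_per_slot]
        for s in active:
            s.battery_mj -= s.awake_cost_mj
        schedule.append([s.name for s in active])
    return schedule

if __name__ == "__main__":
    fleet = [Sensor("a", 20.0), Sensor("b", 12.0), Sensor("c", 8.0)]
    print(schedule_activations(fleet, num_slots=5))
```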

    Optimal Container Migration for Mobile Edge Computing: Algorithm, System Design and Implementation

    Edge computing is a promising alternative to cloud computing for offloading computationally heavy tasks from resource-constrained mobile user devices. Placed at the edge of the network, edge computing is particularly advantageous for delay-sensitive applications because of its short distance to end users. However, when a mobile user moves away from the service coverage of the associated edge server, this advantage gradually vanishes and response time increases. Although service migration has been studied to address this problem, with a focus on minimizing service downtime, both zero-downtime migration and the amount of traffic generated by migration need further study. In this paper, optimal live migration of containerized edge computing services is studied. The paper presents three zero-downtime migration techniques based on state duplication and state reproduction, and then proposes an optimal migration technique selection algorithm that jointly minimizes response time and network traffic during migration. For validation and performance comparison, the proposed migration techniques are implemented on off-the-shelf hardware running Linux. The evaluation results show that, compared with a naive migration, the optimal approach reduces response time and network load by at least 74.75% and 94.79%, respectively, under the considered scenarios.
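
    A minimal sketch of the selection idea described above, assuming hypothetical per-technique estimates of response time and migration traffic (the paper's actual cost models and technique details are not reproduced here): the algorithm simply picks the technique that minimizes a weighted sum of the two costs.

```python
# Hedged sketch: choosing a migration technique by jointly weighing
# estimated response time and migration traffic; estimates and weights
# here are placeholders, not the paper's cost models.
from dataclasses import dataclass

@dataclass
class Technique:
    name: str
    est_response_ms: float   # estimated post-migration response time
    est_traffic_mb: float    # estimated traffic generated by migration

def select_technique(techniques, w_time=1.0, w_traffic=0.5):
    """Return the technique with the lowest weighted joint cost."""
    return min(
        techniques,
        key=lambda t: w_time * t.est_response_ms + w_traffic * t.est_traffic_mb,
    )

if __name__ == "__main__":
    options = [
        Technique("duplication_full", est_response_ms=40, est_traffic_mb=300),
        Technique("duplication_incremental", est_response_ms=55, est_traffic_mb=120),
        Technique("reproduction", est_response_ms=80, est_traffic_mb=20),
    ]
    print(select_technique(options).name)
```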

    A Machine with Short-Term, Episodic, and Semantic Memory Systems

    Inspired by the cognitive science theory of explicit human memory systems, we have modeled an agent with short-term, episodic, and semantic memory systems, each of which is modeled with a knowledge graph. To evaluate this system and analyze the behavior of this agent, we designed and released our own reinforcement learning agent environment, "the Room", where an agent has to learn how to encode, store, and retrieve memories to maximize its return by answering questions. We show that our deep Q-learning-based agent successfully learns whether a short-term memory should be forgotten or rather stored in the episodic or semantic memory systems. Our experiments indicate that an agent with human-like memory systems can outperform an agent without this memory structure in the environment.
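
    To make the memory layout concrete, the toy sketch below (invented names, not the paper's implementation) stores each memory as a (head, relation, tail) triple and moves a short-term item into the episodic or semantic store, or drops it, according to a management decision such as one a learned policy might output.

```python
# Toy illustration of short-term/episodic/semantic stores holding
# (head, relation, tail) triples; this is not the paper's implementation.
class MemorySystems:
    def __init__(self, capacity=16):
        self.short_term = []   # recent observations awaiting a decision
        self.episodic = []     # "who/what/where/when" style memories
        self.semantic = []     # general facts distilled from observations
        self.capacity = capacity

    def observe(self, head, relation, tail):
        self.short_term.append((head, relation, tail))

    def manage(self, action):
        """Apply a decision ('forget', 'episodic', or 'semantic') to the
        oldest short-term memory, as a trained policy might choose."""
        if not self.short_term:
            return
        memory = self.short_term.pop(0)
        if action == "episodic" and len(self.episodic) < self.capacity:
            self.episodic.append(memory)
        elif action == "semantic" and len(self.semantic) < self.capacity:
            self.semantic.append(memory)
        # 'forget' (or a full store) simply drops the memory

mem = MemorySystems()
mem.observe("alice", "at_location", "kitchen")
mem.manage("episodic")
print(mem.episodic)
```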

    A Machine With Human-Like Memory Systems

    Inspired by cognitive science theory, we explicitly model an agent with both semantic and episodic memory systems, and show that this is better than having just one of the two memory systems. To show this, we have designed and released our own challenging environment, "the Room", compatible with OpenAI Gym, where an agent has to properly learn how to encode, store, and retrieve memories to maximize its rewards. The Room environment allows for a hybrid intelligence setup where machines and humans can collaborate. We show that two agents collaborating with each other perform better than one agent acting alone. We have open-sourced our code and models at https://github.com/tae898/explicit-memory. Comment: Submitted to Human-Centered Design of Symbiotic Hybrid Intelligence 2022 (https://ii.tudelft.nl/humancenteredsymbioticHI/).
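
    Since the abstract states that the Room environment is OpenAI Gym-compatible, a standard interaction loop along the following lines would apply; the environment id used here is a placeholder, not the repository's actual registration name.

```python
# Generic interaction loop for an OpenAI Gym-compatible environment.
# "Room-v0" is a placeholder id, and the classic 4-tuple step API is
# assumed (newer gym/gymnasium versions return extra values).
import gym

env = gym.make("Room-v0")               # hypothetical id for the Room environment
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()  # a trained policy would act here
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```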

    PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts

    PromptSource is a system for creating, sharing, and using natural language prompts. Prompts are functions that map an example from a dataset to a natural language input and target output. Using prompts to train and query language models is an emerging area in NLP that requires new tools that let users develop and refine these prompts collaboratively. PromptSource addresses the emergent challenges in this new setting with (1) a templating language for defining data-linked prompts, (2) an interface that lets users quickly iterate on prompt development by observing outputs of their prompts on many examples, and (3) a community-driven set of guidelines for contributing new prompts to a common pool. Over 2,000 prompts for roughly 170 datasets are already available in PromptSource. PromptSource is available at https://github.com/bigscience-workshop/promptsource. Comment: ACL 2022 Demo.
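
    To illustrate "prompts as functions", the snippet below hand-writes one such mapping in plain Python for a hypothetical topic-classification example; PromptSource itself expresses the same idea with its templating language rather than code.

```python
# A prompt maps a dataset example to (input text, target text).
# Hand-written here for illustration; PromptSource uses its own
# templating language to define such mappings declaratively.
def topic_prompt(example):
    input_text = (
        f"{example['text']}\n"
        "Which topic does the article above cover: world, sports, business, or science?"
    )
    labels = ["world", "sports", "business", "science"]
    target_text = labels[example["label"]]
    return input_text, target_text

example = {"text": "The central bank raised interest rates again.", "label": 2}
print(topic_prompt(example))
```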

    Multitask Prompted Training Enables Zero-Shot Task Generalization

    Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks (Brown et al., 2020). It has been hypothesized that this is a consequence of implicit multitask learning in language models' pretraining (Radford et al., 2019). Can zero-shot generalization instead be directly induced by explicit multitask learning? To test this question at scale, we develop a system for easily mapping any natural language task into a human-readable prompted form. We convert a large set of supervised datasets, each with multiple prompts using diverse wording. These prompted datasets allow for benchmarking the ability of a model to perform completely held-out tasks. We fine-tune a pre-trained encoder-decoder model (Raffel et al., 2020; Lester et al., 2021) on this multitask mixture covering a wide variety of tasks. The model attains strong zero-shot performance on several standard datasets, often outperforming models up to 16x its size. Further, our approach attains strong performance on a subset of tasks from the BIG-bench benchmark, outperforming models up to 6x its size. All trained models are available at https://github.com/bigscience-workshop/t-zero, and all prompts are available at https://github.com/bigscience-workshop/promptsource.
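
    The released checkpoints can be queried with prompted inputs through the Hugging Face transformers library; the sketch below assumes the bigscience/T0_3B checkpoint name and standard seq2seq generation, which may differ from the exact usage documented in the repository.

```python
# Hedged sketch: querying a released T0 checkpoint with a prompted input
# via Hugging Face transformers (checkpoint name assumed; download is large).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "bigscience/T0_3B"  # assumed released checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = "Is this review positive or negative? Review: the movie was a delight."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```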

    BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

    Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built through a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
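
    Since the models are publicly released, a causal-LM generation call along these lines should work; the sketch assumes the smaller bigscience/bloom-560m checkpoint to keep the download manageable, rather than the full 176B-parameter model.

```python
# Hedged sketch: text generation with a released BLOOM checkpoint via
# Hugging Face transformers; the smaller bloom-560m variant is assumed here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # assumed released checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The ROOTS corpus covers 46 natural languages, so BLOOM can"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```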

    EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa

    We present EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa, a simple yet expressive scheme for solving the ERC (emotion recognition in conversation) task. By simply prepending speaker names to utterances and inserting separation tokens between the utterances in a dialogue, EmoBERTa can learn intra- and inter-speaker states and context to predict the emotion of the current speaker in an end-to-end manner. Our experiments show that we reach a new state of the art on the two popular ERC datasets using a basic and straightforward approach. We have open-sourced our code and model.
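
    The input construction the abstract describes can be sketched roughly as follows; the exact prefixing and separator conventions of the released model may differ, and RoBERTa's "</s>" separator token is assumed here.

```python
# Rough sketch of building a speaker-aware input for RoBERTa-style models:
# prepend each speaker's name to their utterance and join turns with the
# separator token. The released EmoBERTa code may format inputs differently.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

dialogue = [
    ("Ross", "I can't believe you did that."),
    ("Rachel", "I'm so sorry, I didn't mean to!"),
]

sep = tokenizer.sep_token  # "</s>" for RoBERTa
text = f" {sep} ".join(f"{speaker}: {utterance}" for speaker, utterance in dialogue)
encoding = tokenizer(text, return_tensors="pt", truncation=True)
print(text)
print(encoding["input_ids"].shape)
```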