
    Reification of network resource control in multi-agent systems

    In multi-agent systems [1], coordinated resource sharing is indispensable for a set of autonomous agents running in the same execution space to accomplish their computational objectives. This research presents a new approach to network resource control in multi-agent systems based on the CyberOrgs [2] model. The approach aims to offer a mechanism that reifies network resource control in multi-agent systems and to realize this mechanism in a prototype system. To achieve these objectives, a uniform abstraction, the vLink (Virtual Link), is introduced to represent network resources, and a coherent mechanism for vLink creation, allocation and consumption is developed on top of it. This mechanism is enforced in the network by a fine-grained, flow-based scheduling scheme. In addition, the concerns of computations are separated from those of the resources required to complete them, which simplifies the engineering of network resource control: application programmers can focus on application development while separately declaring resource requests and defining resource control policies in a simplified way. Furthermore, network resources are bound to computations and controlled hierarchically to coordinate their usage. A computation and its sub-computations may not consume resources beyond their resource boundary; resources can, however, be traded between boundaries. The thesis also describes the design and implementation of a prototype system: a middleware architecture that can be used to build systems supporting network resource control. The architecture has a layered structure and pursues three goals: (1) providing an interface for programmers to express resource requests for their applications and to define resource control policies; (2) specializing the CyberOrgs model to control network resources; and (3) providing carefully designed mechanisms for routing, link sharing and packet scheduling that enforce the required resource allocation in the network.
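    The hierarchical resource boundaries and resource trading described above can be pictured with a small sketch. The Python fragment below is a hypothetical illustration only: the class and method names (ResourceBoundary, spawn, trade, consume) and the bandwidth units are assumptions, not the thesis' or the CyberOrgs API.

        # Sketch of hierarchical network-resource boundaries in the spirit of the
        # vLink abstraction described above. Names and units are illustrative.

        class ResourceBoundary:
            """A node in the resource hierarchy holding a bandwidth budget (kbps)."""

            def __init__(self, bandwidth, parent=None):
                self.bandwidth = bandwidth      # budget available to this computation
                self.parent = parent
                self.children = []

            def spawn(self, bandwidth):
                """Create a sub-computation; its budget is carved out of this boundary."""
                if bandwidth > self.bandwidth:
                    raise ValueError("sub-computation cannot exceed its parent's boundary")
                self.bandwidth -= bandwidth
                child = ResourceBoundary(bandwidth, parent=self)
                self.children.append(child)
                return child

            def trade(self, other, amount):
                """Move bandwidth between boundaries (resource trading)."""
                if amount > self.bandwidth:
                    raise ValueError("cannot trade more than the current budget")
                self.bandwidth -= amount
                other.bandwidth += amount

            def consume(self, amount):
                """Consume bandwidth for a vLink; never beyond the boundary."""
                if amount > self.bandwidth:
                    raise ValueError("request exceeds the resource boundary")
                self.bandwidth -= amount


        # Example: an application with 1000 kbps splits off a sub-task, then trades.
        app = ResourceBoundary(bandwidth=1000)
        sub = app.spawn(bandwidth=300)
        app.trade(sub, 100)   # sub now holds 400 kbps, app 600 kbps
        sub.consume(250)      # allowed: stays within sub's boundary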

    Fuse-N: Framework for unified simulation environment for network-on-chip

    Steady advancements in semiconductor technology over the past few decades have marked the incipience of Multi-Processor Systems-on-Chip (MPSoCs). Because traditional bus-based communication systems do not scale well with improving microchip technology, researchers have proposed the Network-on-Chip (NoC) as the on-chip communication model. Current uni-processor-centric modeling methodologies do not address the new design challenges introduced by MPSoCs, calling for efficient simulation frameworks capable of capturing the interplay between the application, the architecture, and the network. Addressing these challenges requires a framework that assists the designer at different abstraction levels of system design. This thesis develops a framework for a unified simulation environment for NoCs (fuse-N) that simplifies design space exploration by offering comprehensive simulation support. The framework synthesizes the network infrastructure and the communication model and optimizes application mapping for design constraints. The proposed framework is a hardware-software co-design implementation using SystemC 2.1 and C++. Simulation results show the architectural, network and resource allocation behavior and highlight the quantitative relationships between various design choices. In addition, a novel off-line, non-preemptive, static Traffic Aware Scheduling (TAS) policy is proposed for hard NoC platforms. The proposed scheduling policy maps the application onto the NoC architecture while keeping track of the network traffic generated by every resource and communication-path allocation. TAS has been evaluated for design metrics such as application completion time, resource utilization and task throughput. Simulation results show significant improvements over traditional approaches.
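    As a rough illustration of what an off-line, traffic-aware mapping can look like, the sketch below greedily places tasks on a 2-D mesh so that each new placement adds the least traffic (volume times hop count) towards already-placed communication partners. The greedy cost model, the XY hop metric and the data structures are assumptions for illustration, not the TAS algorithm from the thesis.

        # Illustrative greedy, traffic-aware task mapping on a 2-D mesh NoC.
        from itertools import product

        MESH = 4  # 4 x 4 mesh of tiles

        def hops(a, b):
            """XY-routing hop count between tiles a = (x1, y1) and b = (x2, y2)."""
            return abs(a[0] - b[0]) + abs(a[1] - b[1])

        def map_tasks(tasks, traffic):
            """Greedily place tasks on mesh tiles.

            tasks   -- task ids, assumed pre-ordered (e.g. by total traffic volume)
            traffic -- dict mapping (src_task, dst_task) to communication volume
            Returns a dict task -> tile that, at each step, minimizes the traffic
            (volume x hops) added towards already-placed communication partners."""
            placement = {}
            free = set(product(range(MESH), range(MESH)))
            for t in tasks:
                def added_traffic(tile):
                    cost = 0
                    for (s, d), vol in traffic.items():
                        other = d if s == t else (s if d == t else None)
                        if other in placement:
                            cost += vol * hops(tile, placement[other])
                    return cost
                best = min(free, key=added_traffic)
                placement[t] = best
                free.remove(best)
            return placement

        # Example: three communicating tasks on the 4x4 mesh.
        print(map_tasks(["t0", "t1", "t2"], {("t0", "t1"): 8, ("t1", "t2"): 2}))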

    Multi-user Resource Control with Deep Reinforcement Learning in IoT Edge Computing

    By leveraging the concept of mobile edge computing (MEC), the massive amount of data generated by a large number of Internet of Things (IoT) devices can be offloaded to an MEC server at the edge of the wireless network for further computationally intensive processing. However, due to the resource constraints of IoT devices and the wireless network, both communication and computation resources need to be allocated and scheduled efficiently for better system performance. In this paper, we propose a joint computation offloading and multi-user scheduling algorithm for an IoT edge computing system that minimizes the long-term average weighted sum of delay and power consumption under stochastic traffic arrivals. We formulate the dynamic optimization problem as an infinite-horizon, average-reward, continuous-time Markov decision process (CTMDP). One critical challenge in solving this MDP for multi-user resource control is the curse of dimensionality: the state space of the MDP model and the computational complexity grow exponentially with the number of users or IoT devices. To overcome this challenge, we use deep reinforcement learning (RL) techniques and propose a neural network architecture to approximate the value functions of the post-decision system states. The resulting algorithm for the CTMDP problem supports a semi-distributed, auction-based implementation, in which the IoT devices submit bids to the base station (BS), which makes the resource control decisions centrally. Simulation results show that the proposed algorithm provides significant performance improvements over the baseline algorithms and also outperforms RL algorithms based on other neural network architectures.
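    The following PyTorch sketch shows one plausible shape of a value-function approximator for post-decision states, together with a single temporal-difference style update on the weighted delay-plus-power cost. The layer sizes, state encoding and update rule are illustrative assumptions; the paper's actual network architecture and training procedure are not reproduced here.

        # Sketch: value-function approximation for post-decision states (assumptions throughout).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class PostDecisionValueNet(nn.Module):
            """Maps a post-decision system state (e.g. per-user queue lengths and
            channel states after an offloading/scheduling decision) to a scalar value."""

            def __init__(self, num_users, features_per_user=2, hidden=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(num_users * features_per_user, hidden),
                    nn.ReLU(),
                    nn.Linear(hidden, hidden),
                    nn.ReLU(),
                    nn.Linear(hidden, 1),
                )

            def forward(self, post_decision_state):
                return self.net(post_decision_state)

        # One TD-style update (sketch): the target mixes the observed one-step cost
        # (weighted delay + power) with the value of the next post-decision state;
        # discount / average-cost correction terms are omitted for brevity.
        num_users = 8
        value_net = PostDecisionValueNet(num_users)
        optimizer = torch.optim.Adam(value_net.parameters(), lr=1e-3)

        state = torch.randn(1, num_users * 2)        # current post-decision state
        next_state = torch.randn(1, num_users * 2)   # next post-decision state
        cost = torch.tensor([[0.7]])                 # observed weighted delay + power

        with torch.no_grad():
            target = cost + value_net(next_state)
        loss = F.mse_loss(value_net(state), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()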

    Prototyping the recursive internet architecture: the IRATI project approach

    In recent years, many new Internet architectures have been proposed to solve shortcomings in the current Internet. Many of these architectures merely extend the current TCP/IP architecture and hence do not address the fundamental cause of these problems. The Recursive InterNetwork Architecture (RINA) is a truly new network architecture, developed from scratch and building on lessons learned in the past. RINA prototyping efforts have been ongoing since 2010, but a prototype on which a commercial RINA implementation can be built has not yet been developed. The goal of the IRATI research project is to develop and evaluate such a prototype in the Linux OS. This article focuses on the software design required to implement a network stack in Linux, motivating the placement of, and communication between, the different software components in either kernel or user space. The first open source prototype of the IRATI implementation of RINA will be available in June 2014 for researchers, developers, and early adopters.

    Recursive internetwork architecture, investigating RINA as an alternative to TCP/IP (IRATI)

    Driven by the requirements of emerging applications and networks, the Internet has become an architectural patchwork of growing complexity that strains to cope with change. Moore's law prevented us from recognising that the problem does not lie in the high demands of today's applications but in the flaws of the Internet's original design. The Internet needs to move beyond TCP/IP to prosper in the long term; TCP/IP has outlived its usefulness. The Recursive InterNetwork Architecture (RINA) is a new internetwork architecture whose fundamental principle is that networking is only interprocess communication (IPC). RINA reconstructs the overall structure of the Internet as a model comprising a single repeating layer, the DIF (Distributed IPC Facility), which is the minimal set of components required to allow distributed IPC between application processes. RINA inherently supports mobility, multi-homing and Quality of Service without the need for extra mechanisms, provides a secure and configurable environment, encourages a more competitive marketplace and allows for seamless adoption. RINA is the best choice for next-generation networks due to its sound theory, its simplicity and the features it enables. IRATI's goal is to explore this new architecture further. IRATI will advance the state of the art of RINA towards an architecture reference model and specifications that are closer to enabling implementations deployable in production scenarios. The design and implementation of a RINA prototype on top of Ethernet will permit the experimentation and evaluation of RINA in comparison to TCP/IP. IRATI will use the OFELIA testbed to carry out its experimental activities. Both projects will benefit from the collaboration: IRATI will gain access to a large-scale testbed with a controlled network, while OFELIA will get a unique use case with which to validate the facility, the experimentation of a non-IP-based Internet.
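    The "single repeating layer" idea can be pictured with a toy sketch in which every layer is an instance of the same DIF type, built recursively on the layer below. The class and method names below are illustrative assumptions and do not follow the RINA specifications.

        # Toy sketch of RINA's recursive layering: one layer type, repeated.
        class DIF:
            """Distributed IPC Facility: the one layer type, repeated recursively."""

            def __init__(self, name, lower=None):
                self.name = name
                self.lower = lower  # the DIF (or shim over a medium) this DIF uses

            def allocate_flow(self, src, dst):
                """Allocate a flow between two application processes, recursively
                asking the lower DIF for the supporting flow."""
                if self.lower is not None:
                    self.lower.allocate_flow(f"{self.name}:{src}", f"{self.name}:{dst}")
                print(f"[{self.name}] flow {src} -> {dst} allocated")


        # The same layer type stacked three deep: a shim over Ethernet,
        # a backbone DIF, and an application-facing DIF.
        ethernet = DIF("shim-over-ethernet")
        backbone = DIF("backbone-dif", lower=ethernet)
        app_dif = DIF("tenant-dif", lower=backbone)
        app_dif.allocate_flow("clientA", "serverB")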

    An Overview of a Grid Architecture for Scientific Computing

    This document gives an overview of a Grid testbed architecture proposal for the NorduGrid project. The aim of the project is to establish an inter-Nordic testbed facility for the implementation of wide-area computing and data handling. The architecture is intended to define a Grid system suitable for solving data-intensive problems at the Large Hadron Collider at CERN. We present the various architecture components needed for such a system and then describe the system's dynamics by showing the task flow.

    Resource Requirements for Fault-Tolerant Quantum Simulation: The Transverse Ising Model Ground State

    We estimate the resource requirements, the total number of physical qubits and the computational time, required to compute the ground state energy of a 1-D quantum Transverse Ising Model (TIM) of N spin-1/2 particles, as a function of the system size and the numerical precision. The estimate is based on analyzing the impact of fault-tolerant quantum error correction in the context of the Quantum Logic Array (QLA) architecture. Our results show that, due to the exponential scaling of the computational time with the desired precision of the energy, a significant amount of error correction is required to implement the TIM problem. Comparison of our results with the resource requirements of a fault-tolerant implementation of Shor's quantum factoring algorithm reveals that the required logical qubit reliability is similar for the TIM and factoring problems.
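    For context, the 1-D transverse Ising model referred to above is conventionally written with a nearest-neighbour coupling J and a transverse field h (the paper's exact normalization and sign conventions may differ):

        H = -J \sum_{i=1}^{N-1} \sigma_i^z \sigma_{i+1}^z - h \sum_{i=1}^{N} \sigma_i^x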