
    Simulation of the Effect of Data Exchange Mode Analysis on Network Throughput

    The emergence of large-scale specialized networks with a large number of computers marked a new stage in the development of network infrastructure. Given the wide variety of modern network equipment used to build large-scale networks, and the growing complexity of such networks and their applications, a developer or system administrator can no longer rely mainly on intuitive decisions. Under these conditions, configuring a network optimally for specific tasks and deploying applications effectively is not possible without proper study using specialized simulation tools. Increases in bandwidth are accompanied by commensurate increases in the amount of traffic sent over the Internet, so optimizing the use and allocation of bandwidth remains an ongoing problem. We present a simulation model that addresses the technological challenge of increasing the efficiency of data exchange in computer networks.
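    The abstract does not spell out the simulation model itself, so the following is only a minimal illustrative sketch, in Python, of how the data exchange mode alone can change effective throughput: it compares a hypothetical stop-and-wait exchange against progressively larger sliding windows on an assumed 100 Mbit/s link with a 20 ms round-trip time. The throughput formula and all parameters are assumptions for illustration, not the paper's model.

# Illustrative sketch only: the paper's actual simulation model is not given here.
# This toy model compares hypothetical data exchange modes -- stop-and-wait
# (window = 1) versus sliding windows -- to show how the exchange mode alone
# changes effective throughput.

def throughput(payload_bits: float, rtt_s: float, bandwidth_bps: float,
               window: int = 1) -> float:
    """Effective throughput (bits/s) for a windowed exchange over one RTT."""
    serialization = payload_bits / bandwidth_bps        # time to push one frame
    cycle = rtt_s + serialization                       # one acknowledgement cycle
    bits_per_cycle = min(window * payload_bits,         # window-limited ...
                         bandwidth_bps * cycle)         # ... or link-limited
    return bits_per_cycle / cycle

if __name__ == "__main__":
    payload = 12_000          # 1500-byte frame, assumed
    rtt = 0.020               # 20 ms round trip, assumed
    link = 100e6              # 100 Mbit/s link, assumed
    for w in (1, 8, 64):      # stop-and-wait vs. progressively larger windows
        print(f"window={w:3d}  throughput={throughput(payload, rtt, link, w)/1e6:6.2f} Mbit/s")

    Running the sketch shows the small-window modes falling far short of the link capacity, which is the kind of exchange-mode effect such a simulation study would quantify.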

    Exploring Task Mappings on Heterogeneous MPSoCs using a Bias-Elitist Genetic Algorithm

    Exploration of task mappings plays a crucial role in achieving high performance on heterogeneous multi-processor system-on-chip (MPSoC) platforms. The problem of optimally mapping a set of tasks onto a set of given heterogeneous processors for maximal throughput is known, in general, to be NP-complete. The problem is further exacerbated when multiple applications (i.e., bigger task sets) and the communication between tasks are also considered. Previous research has shown that genetic algorithms (GAs) are typically a good choice for this problem when the solution space is relatively small. However, as the size of the problem space increases, classic genetic algorithms still suffer from long evolution times. To address this problem, this paper proposes a novel bias-elitist genetic algorithm that is guided by domain-specific heuristics to speed up the evolution process. Experimental results reveal that our proposed algorithm is able to handle large-scale task mapping problems and produces high-quality mapping solutions in only a short time. Comment: 9 pages, 11 figures, uses algorithm2e.sty
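    The authors' exact bias-elitist operators are not given in the abstract, so the sketch below is only a generic elitist GA for task-to-processor mapping with a heuristic initialization bias toward faster processors. The task costs, processor speeds, and makespan-style fitness are all assumptions for illustration, not the paper's model.

# A minimal, generic sketch of an elitist GA for task-to-processor mapping.
# It is NOT the authors' bias-elitist algorithm: the "bias" here is only a
# heuristic initialization favoring faster processors, and the fitness model
# (makespan of per-processor load) is assumed for illustration.
import random

TASK_COST = [4, 3, 7, 2, 5, 6, 1, 8]       # assumed per-task workloads
PROC_SPEED = [1.0, 1.5, 2.0]               # assumed heterogeneous processor speeds

def makespan(mapping):
    load = [0.0] * len(PROC_SPEED)
    for task, proc in enumerate(mapping):
        load[proc] += TASK_COST[task] / PROC_SPEED[proc]
    return max(load)                        # lower makespan -> higher throughput

def biased_individual():
    # Heuristic bias: faster processors are proportionally more likely to be chosen.
    return [random.choices(range(len(PROC_SPEED)), weights=PROC_SPEED)[0]
            for _ in TASK_COST]

def evolve(pop_size=40, generations=200, elite=4, mutation=0.1):
    pop = [biased_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        next_pop = pop[:elite]              # elitism: carry the best mappings forward
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)
            cut = random.randrange(1, len(TASK_COST))
            child = a[:cut] + b[cut:]       # one-point crossover
            if random.random() < mutation:  # mutate one task's placement
                child[random.randrange(len(child))] = random.randrange(len(PROC_SPEED))
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=makespan)

best = evolve()
print("best mapping:", best, "makespan:", round(makespan(best), 2))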

    Cloud for Gaming

    Cloud for Gaming refers to the use of cloud computing technologies to build large-scale gaming infrastructures, with the goal of improving scalability and responsiveness, improving the user's experience, and enabling new business models. Comment: Encyclopedia of Computer Graphics and Games. Newton Lee (Editor). Springer International Publishing, 2015, ISBN 978-3-319-08234-

    A network-aware framework for energy-efficient data acquisition in wireless sensor networks

    Wireless sensor networks enable users to monitor the physical world at extremely high fidelity. To collect the data generated by these tiny-scale devices, the data management community has proposed declarative data-acquisition frameworks. While these frameworks have facilitated the energy-efficient retrieval of data from the physical environment, they have been agnostic of the underlying network topology and have not supported advanced query processing semantics. In this paper we present KSpot+, a distributed network-aware framework that optimizes network efficiency by combining three components: (i) the tree balancing module, which balances the workload of each sensor node by constructing efficient network topologies; (ii) the workload balancing module, which minimizes data reception inefficiencies by synchronizing the sensor network activity intervals; and (iii) the query processing module, which supports advanced query processing semantics. To validate the efficiency of our approach, we have developed a prototype implementation of KSpot+ in nesC and Java. In our experimental evaluation, we thoroughly assess the performance of KSpot+ using real datasets and show that it yields significant energy reductions under a variety of conditions, thus considerably extending the lifetime of a WSN.
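    As a rough illustration of the tree-balancing idea only (KSpot+'s actual algorithm is not described in the abstract), the toy routine below attaches each sensor to the reachable candidate parent with the fewest children, which spreads forwarding workload across relays. The topology and node names are invented for the example.

# Illustrative sketch only: not KSpot+'s tree-balancing algorithm. Each new node
# joins the reachable candidate parent with the fewest children, so forwarding
# workload is spread more evenly across the collection tree.
from collections import defaultdict

def balanced_tree(nodes, reachable, sink="sink"):
    """nodes: ids ordered by hop distance; reachable[n]: candidate parents of n."""
    parent = {}
    children = defaultdict(list)
    for n in nodes:
        candidates = [p for p in reachable[n] if p == sink or p in parent]
        best = min(candidates, key=lambda p: len(children[p]))
        parent[n] = best
        children[best].append(n)
    return parent, children

# Tiny assumed topology: four leaf sensors, two relays, one sink.
reach = {"r1": ["sink"], "r2": ["sink"],
         "s1": ["r1", "r2"], "s2": ["r1", "r2"], "s3": ["r1", "r2"], "s4": ["r1", "r2"]}
parent, children = balanced_tree(["r1", "r2", "s1", "s2", "s3", "s4"], reach)
print(parent)   # each relay ends up with two leaves instead of one relay taking all four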

    A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning

    Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in cloud computing systems. However, a complete cloud resource allocation framework exhibits high dimensionality in its state and action spaces, which limits the usefulness of traditional RL techniques. In addition, high power consumption has become one of the critical concerns in the design and control of cloud computing systems, as it degrades system reliability and increases cooling cost. An effective dynamic power management (DPM) policy should minimize power consumption while keeping performance degradation within an acceptable level. Thus, a joint virtual machine (VM) resource allocation and power management framework is critical to the overall cloud computing system, and a novel solution framework is needed to address the even higher dimensionality of the resulting state and action spaces. In this paper, we propose a novel hierarchical framework for solving the overall resource allocation and power management problem in cloud computing systems. The proposed framework comprises a global tier for VM resource allocation to the servers and a local tier for distributed power management of local servers. The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with large state spaces, is adopted to solve the global tier problem; an autoencoder and a novel weight-sharing structure are adopted to handle the high-dimensional state space and accelerate convergence. The local tier of distributed server power management comprises an LSTM-based workload predictor and a model-free RL-based power manager, operating in a distributed manner. Comment: accepted by the 37th IEEE International Conference on Distributed Computing Systems (ICDCS 2017)
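    The following is a structural sketch of the two-tier control loop described above, with the learning components deliberately stubbed out: a greedy least-loaded placement stands in for the global DRL agent, and a moving-average-plus-threshold rule stands in for the LSTM predictor and local RL power manager. All class names, thresholds, and workload values are invented for illustration.

# Structural sketch only: shows the two-tier loop (global VM placement, per-server
# power decisions) with simple stand-ins where the DRL agent and LSTM predictor
# would sit in the actual framework.
import random

class LocalPowerManager:
    """Per-server tier: predict near-term load, then pick a power action."""
    def __init__(self):
        self.history = []

    def predict_load(self):
        # Stand-in for the LSTM workload predictor: simple moving average.
        recent = self.history[-5:]
        return sum(recent) / max(len(recent), 1)

    def step(self, current_load):
        self.history.append(current_load)
        # Stand-in for the model-free RL power manager: a threshold policy.
        return "sleep" if self.predict_load() < 0.1 else "active"

class GlobalAllocator:
    """Global tier: choose a server for each arriving VM request."""
    def __init__(self, n_servers):
        self.load = [0.0] * n_servers
        self.local = [LocalPowerManager() for _ in range(n_servers)]

    def place_vm(self, vm_load):
        # Stand-in for the DRL policy over the high-dimensional system state:
        # greedily pick the least-loaded server.
        s = min(range(len(self.load)), key=lambda i: self.load[i])
        self.load[s] += vm_load
        return s

    def tick(self):
        return [mgr.step(l) for mgr, l in zip(self.local, self.load)]

cloud = GlobalAllocator(n_servers=4)
for _ in range(10):                          # ten synthetic VM arrivals (assumed)
    cloud.place_vm(random.uniform(0.05, 0.3))
print("loads:", [round(l, 2) for l in cloud.load])
print("power states:", cloud.tick())

    In the full framework, both stand-in policies would be replaced by trained models that share the same interfaces: a state-in, action-out placement decision at the global tier and a predict-then-act loop at each server.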

    Developed Algorithm for Increasing the Efficiency of Data Exchange in a Computer Network

    This paper presents specialized means to analyze, model, and study data exchange in a large-scale corporate computer network. The extreme complexity of corporate intranets and the Internet makes it difficult to develop an analytical model; under these circumstances, simulation models have become a viable alternative for understanding the behavior of such complex networks during data exchange. This work examines the mode of data exchange, since refining it can in many cases yield a considerable improvement in network and network-application performance without substantial additional expenditure. Hence the need for the developed algorithm for increasing the efficiency of data exchange in a computer network, together with an appropriate topology for this case. Test results from the algorithm showed an average increase of 10 to 15%, and occasionally 60% or more, in data exchange efficiency without additional expenses.