
    Operating Task Redistribution in Hyperconverged Networks

    In this article, a search method for the rational distribution of tasks across the nodes of a hyperconverged network is presented, which distributes task sets so as to improve overall performance. By forming new subsets of nodes for distributed processing, the average packet delay can be minimized. Distribution quality is assessed with a dedicated objective function that imposes penalties for delays; the aim of this process is to create a balanced delivery system. The initial distribution is determined from the minimum penalty. After each redistribution cycle (iteration) aimed at an appropriate task distribution, a potential system is formed for functional optimization, and within each cycle a rule for optimizing the contour search is applied. The resulting task distribution, accounting for both failures and successes, is rational and reduces the average packet delay in hyperconverged networks. The effectiveness of the proposed method is evaluated on a model of the hyperconverged support system for university e-learning provided by V.N. Karazin Kharkiv National University. Simulation results based on this model confirm that the approach performs better than the classical approach to task distribution.
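    The abstract does not give the paper's objective function or contour-search rule, but the general idea of penalty-driven, cycle-by-cycle redistribution can be sketched as below. This is a minimal illustrative sketch, assuming a base-delay matrix and a load-dependent penalty; the function and variable names are assumptions, not the authors' method.

```python
from collections import Counter

def redistribute(base_delay, assignment, max_cycles=100):
    """Illustrative penalty-driven task redistribution (not the paper's algorithm).

    base_delay[t][n]: assumed base packet delay of task t on node n.
    assignment[t]:   current node of task t.
    The effective penalty grows with the number of tasks on a node, which
    pushes the search toward a balanced distribution with lower average delay.
    """
    tasks = range(len(base_delay))
    nodes = range(len(base_delay[0]))
    for _ in range(max_cycles):
        load = Counter(assignment)
        improved = False
        for t in tasks:
            current = assignment[t]

            def cost(n):
                # Occupancy after a hypothetical move of task t to node n.
                occupancy = load[n] - (1 if n == current else 0)
                return base_delay[t][n] * (1 + occupancy)

            best = min(nodes, key=cost)
            if cost(best) < cost(current):
                load[current] -= 1
                load[best] += 1
                assignment[t] = best
                improved = True
        if not improved:  # stop when a full redistribution cycle brings no gain
            break
    return assignment

# Example: 4 tasks, 2 nodes, all tasks initially on node 0.
base_delay = [[2, 3], [2, 5], [1, 4], [3, 1]]
print(redistribute(base_delay, [0, 0, 0, 0]))  # -> [1, 0, 0, 1]
```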

    Comparative analysis of LTE backbone transport techniques for efficient broadband penetration in a heterogeneous network morphology

    In a bid to solve the persistent problem of providing ubiquitous broadband access, the Next Generation Network (NGN), popularly referred to as the Long Term Evolution (LTE) network, combined with an appropriate network integration technique, is recommended as a solution. Currently, Internet Protocol/Multi-Protocol Label Switching (IP/MPLS) is the transport technique in the LTE backbone infrastructure. This technique, however, suffers significantly when an IP path fails, resulting in delay and packet-loss budgets across the network. The resultant effect is degradation of users' quality of service (QoS) experience with real-time services. A competitive alternative is Internet Protocol/Asynchronous Transfer Mode (IP/ATM). This transport technique allocates bandwidth dynamically and supports varying multimedia connection requests with diverse QoS requirements. This paper therefore evaluates the performance of these two transport techniques to establish the extent to which the latter ameliorates the challenges suffered by the former. Simulation results show that the IP/ATM transport scheme outperforms the IP/MPLS scheme in average bandwidth utilization, mean traffic drop, and mean traffic delay by margins of 9.8%, 8.7%, and 1.0%, respectively.

    Priority based energy efficient hybrid cluster routing protocol for underwater wireless sensor network

    A small change in the environment that goes unnoticed in an underwater communication network might lead to calamity, so even a minor alteration must be adequately analyzed in order to deal with a potential crisis. A priority-based routing protocol is required to ensure that the vital data sensed about such environmental changes is delivered promptly. The priority-based routing scheme guarantees that vital data packets are delivered at a faster rate to the destination or base station for further processing. In this work, we present a priority-based routing protocol built on the energy efficient hybrid cluster routing protocol (EEHRCP) algorithm. The proposed approach keeps two distinct queues for lower- and higher-priority data packets. All crucial sensed data passes through the higher-priority queue so that these packets reach their destination without information loss and at a faster rate. Test findings show that the proposed technique increases throughput and delivery percentage and reduces latency for the crucial data packets.
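    The two-queue forwarding idea described above can be sketched as follows. This is a minimal illustration only; the class and field names are assumptions, and the EEHRCP clustering and routing logic itself is not shown.

```python
from collections import deque

class PriorityForwarder:
    """Illustrative two-queue dispatcher: critical packets are always served first."""

    def __init__(self):
        self.high = deque()   # higher-priority (critical) sensed data
        self.low = deque()    # lower-priority (routine) sensed data

    def enqueue(self, packet, critical=False):
        (self.high if critical else self.low).append(packet)

    def dequeue(self):
        # Serve the high-priority queue first so critical packets see lower latency.
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None

# Example usage
fwd = PriorityForwarder()
fwd.enqueue("temperature=ok")
fwd.enqueue("pressure=CRITICAL", critical=True)
print(fwd.dequeue())  # -> pressure=CRITICAL
print(fwd.dequeue())  # -> temperature=ok
```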