4 research outputs found

    M2M communication performance for a noisy channel based on latency-aware source-based LTE network measurements

    No full text
    The phrase "Machine-to-Machine" (M2M) communication has gained widespread usage owing to the growing adoption of the Internet of Things. The number of M2M devices is anticipated to rise significantly in the coming years as M2M applications grow. Enhancing cellular networks to manage both M2M and human-to-human (H2H) communications is an ongoing endeavor. Because the number of M2M devices keeps increasing, and because M2M traffic has distinctive properties, integrating M2M communication alongside regular H2H communication is a key objective. H2H and M2M communications are expected to share the same LTE resources in order to maximize their efficiency, so an effective resource scheduler is required to manage an LTE network that serves both M2M devices and H2H users. Using a priority queuing model, this study develops analytical formulas to assess the proposed schemes' performance in terms of average waiting times, average system delays, and average numbers in the system for M2M and H2H users. The data analysis shows that it is feasible to maintain suitable quality of service (QoS) for H2H clients while still providing good QoS to M2M users. Demands in the M2M queue are served using a relaxation strategy. To protect the QoS of H2H users' delay-sensitive traffic, H2H traffic is classified into two basic levels, and the relaxation technique is applied only to Level 2 H2H traffic. The findings demonstrate that the proposed solutions improve M2M performance while efficiently managing QoS for H2H services.
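    The abstract does not reproduce the paper's analytical formulas, so the following is a minimal illustrative sketch, assuming a classical two-class non-preemptive M/M/1 priority queue (Cobham's formulas) with H2H as the high-priority class and M2M as the relaxed, low-priority class; the function name and parameter values are hypothetical.

    ```python
    # Hedged sketch: two-class non-preemptive priority M/M/1 queue, not the
    # paper's exact model. Class 1 = delay-sensitive H2H, class 2 = M2M.

    def priority_queue_metrics(lam_h2h, lam_m2m, mu):
        """Mean waiting time, system delay, and number in system per class."""
        rho1 = lam_h2h / mu                      # H2H offered load
        rho2 = lam_m2m / mu                      # M2M offered load
        assert rho1 + rho2 < 1, "queue must be stable"
        # Mean residual service time: R = 0.5 * sum(lam_i * E[S_i^2]);
        # for exponential service, E[S^2] = 2 / mu^2.
        R = (lam_h2h + lam_m2m) / mu ** 2
        w_h2h = R / (1 - rho1)                                # Cobham, class 1
        w_m2m = R / ((1 - rho1) * (1 - rho1 - rho2))          # Cobham, class 2
        metrics = {}
        for name, lam, w in (("H2H", lam_h2h, w_h2h), ("M2M", lam_m2m, w_m2m)):
            delay = w + 1 / mu                   # system delay = wait + service
            metrics[name] = {"wait": w, "delay": delay,
                             "in_system": lam * delay}        # Little's law
        return metrics

    print(priority_queue_metrics(lam_h2h=0.4, lam_m2m=0.3, mu=1.0))
    ```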

    Increasing efficiency for routing in internet of things using Binary Gray Wolf Optimization and fuzzy logic

    No full text
    In the field of information and communication technology, the Internet of Things is regarded as a new and important technology. New protocols are being introduced in this area because the devices in these networks have constrained resources and relatively low computing power. One of the most well-known routing protocols for low-power devices is the RPL protocol, but it cannot take all of the required routing goals into account at once. This article introduces a data-oriented RPL algorithm that divides data during routing according to its content. This can decrease the amount of duplicate data transferred through the network, shorten the communication system's delay, conserve the nodes' limited energy, and prolong the network's lifetime. The effectiveness of RPL is increased by selecting the best route using Binary Gray Wolf Optimization: during the tree construction phase, the best parent node is chosen using an objective function built from fuzzy logic and Binary Gray Wolf Optimization. Tests in MATLAB 2022a and the OMNeT++ environment have shown that the proposed method increases energy efficiency and reduces the instability period and end-to-end delay. The instability-period ratio of the proposed method is much lower than that of the other three methods: 57% for the proposed method, compared with 80% for the ORPL and QoS-RPL methods and 89% for the RPL method. This shows that the proposed method is more stable, or, in other words, it remains active for a longer period of time with the maximum number of nodes.
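    The exact objective function is not given in the abstract, so here is a hedged sketch: a fuzzy min-rule score over assumed parent metrics (residual energy, ETX, normalized delay), combined with a simplified single-leader Binary Gray Wolf Optimization that searches for a good parent subset. All metric names, normalization constants, and transfer-function parameters are assumptions for the example.

    ```python
    import math
    import random

    def fuzzy_score(energy, etx, delay):
        """Fuzzy AND (min-rule) over three membership-style scores in [0, 1]."""
        good_energy = energy                         # residual energy in [0, 1]
        good_link = max(0.0, 1.0 - etx / 10.0)       # ETX of 10+ treated as bad
        good_delay = max(0.0, 1.0 - delay)           # delay normalized to [0, 1]
        return min(good_energy, good_link, good_delay)

    def bgwo_select(nodes, iters=50):
        """Binary selection vector maximizing mean fuzzy score of chosen parents."""
        dim = len(nodes)
        fitness = lambda x: (sum(fuzzy_score(*nodes[i]) for i in range(dim) if x[i])
                             / max(1, sum(x)))
        wolves = [[random.randint(0, 1) for _ in range(dim)] for _ in range(8)]
        for t in range(iters):
            wolves.sort(key=fitness, reverse=True)
            alpha = wolves[0]                        # best wolf leads the pack
            a = 2 * (1 - t / iters)                  # exploration factor shrinks
            for w in wolves[1:]:
                for d in range(dim):
                    A = a * (2 * random.random() - 1)
                    step = abs(A) * abs(alpha[d] - w[d])
                    # Sigmoid transfer: large steps flip bits, small steps copy alpha.
                    prob = 1 / (1 + math.exp(-10 * (step - 0.5)))
                    w[d] = alpha[d] if random.random() > prob else 1 - w[d]
        return max(wolves, key=fitness)

    # Each candidate parent: (residual energy, ETX, normalized delay)
    nodes = [(0.9, 2.0, 0.1), (0.4, 1.5, 0.3), (0.8, 6.0, 0.2), (0.7, 3.0, 0.6)]
    print(bgwo_select(nodes))
    ```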

    An efficient approach for multi-label classification based on Advanced Kernel-Based Learning System

    No full text
    The importance of data quality and quantity cannot be overstated in automatic data analysis systems. An important factor to take into account is the capability to assign a data item to many classes. In Lithuania, there is currently no mechanism for classifying textual data that permits allocating a data item to multiple classes. Multi-label classification learning offers a multi-dimensional viewpoint for objects with several meanings and has recently emerged as a prominent area of study in machine learning. In the context of big data, it is imperative to develop a high-speed and effective algorithm for multi-label classification. This paper utilized the Machine Learning Advanced Kernel-Based Learning System for the Multi-Label Classification Problem (ML-AKLS) to eliminate the need for repetitive learning operations. Concurrently, a dynamic and self-adaptive thresholding function was developed to convert the ML-AKLS network's real-valued outputs into a binary multi-label vector. ML-AKLS obtains the optimal solution with the least squares method and requires fewer parameters to be set. It ensures stable execution, faster convergence, and superior generalization performance. Extensive multi-label classification experiments were conducted on datasets of varying scales. The comparative analysis reveals that ML-AKLS performs best when applied to extensive datasets characterized by high-dimensional sample features.
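    ML-AKLS's exact thresholding rule is not specified in the abstract, so the following is a minimal sketch of one common self-adaptive scheme: each instance's threshold is placed at the widest gap in its sorted real-valued label scores before binarization. The function name is hypothetical, and at least two labels per instance are assumed.

    ```python
    import numpy as np

    def adaptive_threshold(scores):
        """scores: (n_samples, n_labels) real outputs -> binary multi-label matrix."""
        scores = np.asarray(scores, dtype=float)
        binary = np.zeros_like(scores, dtype=int)
        for i, row in enumerate(scores):
            order = np.sort(row)
            gaps = np.diff(order)                  # gaps between adjacent scores
            k = int(np.argmax(gaps))               # widest gap splits on/off labels
            thr = (order[k] + order[k + 1]) / 2.0  # instance-specific threshold
            binary[i] = (row > thr).astype(int)
        return binary

    outputs = [[0.9, 0.1, 0.8, 0.2],   # two clearly relevant labels
               [0.3, 0.7, 0.6, 0.1]]
    print(adaptive_threshold(outputs))
    ```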

    DLJSF: Data-Locality Aware Job Scheduling IoT tasks in fog-cloud computing environments

    No full text
    Problem statement: Nowadays, devices generate copious quantities of high-speed data streams due to Internet of Things (IoT) applications. For the most part, cloud computing platforms handle and manage all of these data and requests. However, for certain applications, the delay incurred by transferring data from edge devices to the cloud can be unbearable. When many devices are connected to the internet, the public network becomes a bottleneck for data transfer. In this setting, power management, data storage, resource management, and service management all require more robust infrastructure and complex processes. Fog computing's “intelligent gateway” capability enables more efficient use of network and cloud resources. Methodology: Planning and managing resources is one of the most important factors affecting system performance (especially latency) in a fog-cloud environment, and scheduling in such an environment is an NP-hard problem. This paper addresses the optimisation problem of longevity for data-intensive job scheduling in fog- and cloud-based IoT systems. The issue is first expressed as an integer linear programming (ILP) optimisation model. Next, we provide a heuristic algorithm, DLJSF (Data-Locality Aware Job Scheduling in Fog-Cloud), based on the proposed formulation. Results: The tests showed that the proposed algorithm comes within an average of 87 % of the optimal ILP solution obtained from the solver, and is on average 99.16 % better than a baseline in which all data is processed locally. To check robustness, the simulation was repeated for tasks with different arrival rates and data of different sizes. Conclusion: According to the obtained results, the data-transfer approach can be valuable, and the proposed algorithm does not lose its performance under different conditions.
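    The ILP model and the DLJSF heuristic are not detailed in the abstract; the following is a hypothetical greedy sketch in the same spirit: each job is placed on the fog or cloud node minimizing its estimated finish time, where moving input data off its home node incurs a transfer cost. All classes, fields, and rates are illustrative assumptions.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        speed: float                 # instructions per second
        bandwidth: float             # bytes/s for data arriving from other nodes
        ready: float = 0.0           # time at which the node becomes free

    @dataclass
    class Job:
        name: str
        length: float                # instructions
        data_size: float             # bytes of input data
        data_home: str               # node currently holding the input data

    def schedule(jobs, nodes):
        """Greedy data-locality aware placement: minimize estimated finish time."""
        plan = []
        for job in sorted(jobs, key=lambda j: -j.data_size):  # big-data jobs first
            def finish(n):
                # No transfer cost when the job runs where its data already lives.
                transfer = (0.0 if n.name == job.data_home
                            else job.data_size / n.bandwidth)
                return n.ready + transfer + job.length / n.speed
            best = min(nodes, key=finish)
            best.ready = finish(best)
            plan.append((job.name, best.name, best.ready))
        return plan

    nodes = [Node("fog-1", speed=1e9, bandwidth=1e7),
             Node("cloud", speed=1e10, bandwidth=1e6)]
    jobs = [Job("j1", length=2e9, data_size=5e8, data_home="fog-1"),
            Job("j2", length=8e9, data_size=1e6, data_home="fog-1")]
    for row in schedule(jobs, nodes):
        print(row)   # j1 stays on fog-1 (data locality); j2 offloads to the cloud
    ```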