4 research outputs found
A Neural Radiance Field-Based Architecture for Intelligent Multilayered View Synthesis
A mobile ad hoc network (MANET) is made up of a number of wireless portable nodes that spontaneously come together to establish a transitory network without the need for any central management. It consists of a sizable and reasonably dense community of mobile nodes that travel across any terrain and rely solely on wireless interfaces for communication, not on any pre-existing centralized management. Routing should therefore offer a method for instantly delivering data between any two nodes in the network. The major issue, however, is finding the best packet route across the infrastructure. The proposed protocol's main goal is to identify the least-expensive route with adequate nominal capacity that assures delivery of realistic traffic and remains durable in the event of any node failure. This study suggests the Optimized Route Selection via Red Imported Fire Ants (RIFA) Strategy as a way to improve on-demand source routing systems. Predicted route failure and energy utilization are used to pick the path during the routing phase. The proposed work assesses the comparison results based on performance parameters such as energy usage, packet delivery rate (PDR), and end-to-end (E2E) delay. The outcome demonstrates that the proposed strategy is preferable and increases network lifetime while lowering node energy consumption and typical E2E delay under the majority of network performance measures and factors.
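The abstract does not give the paper's actual cost function, so the following is only a minimal Python sketch of the general idea it describes: ranking candidate routes by a cost that combines energy utilization with a predicted route-failure risk and picking the cheapest one. All names, fields and weights are hypothetical.

```python
# Illustrative sketch only: route selection combining residual energy and
# predicted failure risk. Names and weights are assumptions, not the paper's.
from dataclasses import dataclass
from typing import List

@dataclass
class Route:
    hops: List[str]            # node identifiers along the path
    residual_energy: float     # minimum residual energy on the path (J)
    failure_prob: float        # predicted probability the route breaks

def route_cost(route: Route, w_energy: float = 0.5, w_failure: float = 0.5) -> float:
    """Lower cost = more energy left on the path and lower predicted failure risk."""
    energy_term = 1.0 / max(route.residual_energy, 1e-9)  # penalise drained paths
    return w_energy * energy_term + w_failure * route.failure_prob

def select_route(candidates: List[Route]) -> Route:
    """Pick the least-expensive candidate route discovered on demand."""
    return min(candidates, key=route_cost)

if __name__ == "__main__":
    routes = [
        Route(["A", "C", "D"], residual_energy=0.8, failure_prob=0.30),
        Route(["A", "B", "E", "D"], residual_energy=1.5, failure_prob=0.10),
    ]
    best = select_route(routes)
    print("selected:", " -> ".join(best.hops))
```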
RGIM: An Integrated Approach to Improve QoS in AODV, DSR and DSDV Routing Protocols for FANETS Using the Chain Mobility Model
Flying ad hoc networks (FANETs) are collections of unmanned aerial vehicles that communicate without any predefined infrastructure. FANETs, being one of the most researched topics nowadays, find their scope in many complex applications, such as drones used for military applications, border surveillance systems, and civil applications such as traffic monitoring and disaster management. Quality of service (QoS) performance parameters for routing, e.g. delay, packet delivery ratio, jitter and throughput, are quite difficult to improve in FANETs. Mobility models play an important role in evaluating the performance of routing protocols. In this paper, the integration of two selected mobility models, i.e. the random waypoint and Gauss–Markov models, is implemented. As a result, the random Gauss integrated model (RGIM) is proposed for evaluating the performance of the AODV (ad hoc on-demand distance vector), DSR (dynamic source routing) and DSDV (destination-sequenced distance vector) routing protocols. The simulation is done with the NS2 simulator for various scenarios by varying the number of nodes and taking low and high node speeds of 50 and 500, respectively. The experimental results show that the proposed model improves the QoS performance parameters of the AODV, DSR and DSDV protocols.
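For context, the two mobility models the paper integrates can be sketched as follows: the Gauss–Markov update uses the standard correlated speed/direction formulation, and the random-waypoint helper draws a fresh destination and travel speed. How RGIM actually blends the two is not specified in this abstract, so the code below is only an illustrative sketch with assumed parameters.

```python
# Minimal sketch of the Gauss-Markov and random-waypoint mobility steps.
# Parameters (alpha, area, speed range) are illustrative assumptions.
import math
import random

def gauss_markov_step(speed, direction, mean_speed, mean_dir, alpha=0.75):
    """One Gauss-Markov update: new values are correlated with the previous ones."""
    speed = alpha * speed + (1 - alpha) * mean_speed \
        + math.sqrt(1 - alpha ** 2) * random.gauss(0, 1)
    direction = alpha * direction + (1 - alpha) * mean_dir \
        + math.sqrt(1 - alpha ** 2) * random.gauss(0, 0.5)
    return max(speed, 0.0), direction

def random_waypoint(area=(1000, 1000), speed_range=(50, 500)):
    """Draw a new waypoint and speed (low/high node speeds of 50 and 500 as in the paper)."""
    x = random.uniform(0, area[0])
    y = random.uniform(0, area[1])
    return (x, y), random.uniform(*speed_range)

if __name__ == "__main__":
    speed, direction = 100.0, 0.0
    for _ in range(5):
        speed, direction = gauss_markov_step(speed, direction, 100.0, 0.0)
        print(f"speed={speed:.1f}, direction={direction:.2f} rad")
    print("next waypoint:", random_waypoint())
```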
Lotus effect optimization algorithm (LEA): a lotus nature-inspired algorithm for engineering design optimization
Here we introduce a new evolutionary algorithm called the Lotus Effect Algorithm (LEA), which combines efficient operators from the dragonfly algorithm, such as the movement of dragonflies in flower pollination, for exploration, with the self-cleaning property of water on flower leaves, known as the lotus effect, for exploitation and local search operations. The authors compared this method to other improved versions of the dragonfly algorithm using standard benchmark functions, and it outperformed all other methods according to Friedman's test on 29 benchmark functions. The article also highlights the practical application of LEA in reducing energy consumption in IoT nodes through clustering, resulting in increased packet delivery ratio and network lifetime. Additionally, the performance of the proposed method was tested on real-world problems with multiple constraints, such as the welded beam design optimization problem and the speed-reducer problem applied in a gearbox, and the results showed that LEA performs better than other methods in terms of accuracy.
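The published LEA update equations are not reproduced in this abstract, so the sketch below only illustrates the exploration/exploitation split it describes: a pollination-style global move toward the best solution, followed by a small "lotus effect" local search around each candidate. The objective, parameters and update rules are generic placeholders, not the algorithm's actual operators.

```python
# Generic exploration/exploitation loop sketched after the abstract's description.
# Update rules and parameters are illustrative, not the published LEA equations.
import random

def sphere(x):  # simple benchmark objective (minimise)
    return sum(v * v for v in x)

def optimise(obj, dim=5, pop_size=20, iters=100, bounds=(-5.0, 5.0)):
    pop = [[random.uniform(*bounds) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=obj)
    for _ in range(iters):
        for i, x in enumerate(pop):
            # exploration: pollination-style move toward the best with random perturbation
            step = [0.5 * (b - v) + random.gauss(0, 0.3) for v, b in zip(x, best)]
            cand = [min(max(v + s, bounds[0]), bounds[1]) for v, s in zip(x, step)]
            # exploitation: small "self-cleaning" local search around the candidate
            local = [v + random.gauss(0, 0.05) for v in cand]
            pop[i] = min((cand, local), key=obj)
        best = min(pop + [best], key=obj)
    return best, obj(best)

if __name__ == "__main__":
    sol, val = optimise(sphere)
    print("best objective value:", round(val, 6))
```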
Neural network design for intelligent mobile network optimisation
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Mobile network users' demands for data services are increasing exponentially due to two main factors: first, the evolution of smartphones and their applications, and second, emerging technologies such as the Internet of Things and smart cities, which keep pumping more data into the network, even though most of the data routed in the current mobile network is non-live data. This increase in demand makes it necessary for mobile network operators to keep improving their networks, by adding hardware, increasing resources, or a combination of both. Radio resources are strictly limited by spectrum licensing and availability, so efficient spectrum utilization is a major goal for both network operators and developers. Simultaneous and multiple channel access, as well as adding more cells to the network, are ways to increase the data exchanged between network nodes. The current 4G mobile system is based on Orthogonal Frequency Division Multiple Access (OFDMA) for accessing the medium, and intercell interference degrades the link quality at the cell edge; with the introduction of the heterogeneity concept to LTE in Release 10 of the 3GPP specifications, the handover process became even more complex. To mitigate intercell interference at the cell edge, coordinated multipoint and carrier aggregation techniques are utilized for dual connectivity. This work focuses on designing and proposing features to improve network performance and sustainability; these features comprise distributing small cells for data-only transmission, evaluating the performance of handover schemes at the cell edge with dual connectivity, and applying Artificial Intelligence technology for load balancing and prediction. In the proposed model design, the data and control of the Small eNodeB (SeNodeB) are processed at the network edge using a Mobile Edge Computing (MEC) server, and the SeNodeBs are used to boost the services provided to users; the concept of caching data has also been investigated, with caching units implemented at different network levels. The proposed system and resource management are simulated using the OPNET modeller and evaluated through multiple scenarios with and without full load. The UE is reconfigured to accommodate dual connectivity and has two separate connections for uplink and downlink: the connection to the Macro cell is maintained via the uplink, while the downlink is dedicated to the small cells when content is requested from the cache. The results clearly show that the proposed system decreases latency while the total throughput delivered by the network improves greatly when SeNodeBs are deployed; rising throughput increases overall capacity, which leads to better services for existing users or allows more users to join and benefit from the network. Handover improvement is also considered in this work: with the help of two Artificial Intelligence (AI) entities, better handover performance is achieved.
Balanced load over the SeNodeBs results in less frequent handovers. The proposed load balancer is based on an artificial neural network clustering model with a self-organizing map as a hidden layer; it is trained to forecast the network condition and learns to reduce the number of handovers, especially for UEs at the cell edge, by performing only the necessary ones and avoiding handovers to the Macro cell in the downlink direction. The examined handovers concern the downlink when routing non-live video stored in the small cell's cache, and a reduction in frequent handovers was achieved when running the balancer. Staying with handover, another way to preserve and utilize network resources is to predict handovers before they occur and allocate the required data at the target SeNodeB. The predictor entity in the proposed system architecture combines the features of a Radial Basis Function Neural Network and the neural network time-series tool to create and update a prediction list from the system's collected data, learning to predict the next SeNodeB to associate with. The prediction entity is simulated using MATLAB, and the results show that the system was able to deliver up to 92% correct predictions for handovers, which led to an overall throughput improvement of 75%.
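The thesis abstract does not include implementation details, so the following is only a minimal numpy sketch of the kind of self-organizing-map (SOM) clustering step described for the load balancer: per-UE load features are mapped onto a small grid of neurons, and UEs falling in an overloaded cluster become handover candidates. The grid size, learning schedule and feature layout are assumptions made for illustration.

```python
# Minimal SOM training loop over per-UE load features (illustrative only;
# grid size, learning rate schedule and features are assumptions).
import numpy as np

def train_som(data, grid=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    weights = rng.random((grid[0], grid[1], n_features))
    coords = np.array([[i, j] for i in range(grid[0]) for j in range(grid[1])])
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3
        for x in data:
            # best matching unit (BMU): neuron whose weights are closest to x
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), grid)
            # Gaussian neighbourhood update centred on the BMU
            grid_d = np.linalg.norm(coords - np.array(bmu), axis=1).reshape(grid)
            h = np.exp(-(grid_d ** 2) / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
    return weights

if __name__ == "__main__":
    # toy features per UE: [serving-cell load, cell-edge signal quality, requested bitrate]
    rng = np.random.default_rng(1)
    ue_features = rng.random((200, 3))
    som = train_som(ue_features)
    print("SOM weight grid shape:", som.shape)
```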