426 research outputs found

    Intelligent Design for Real Time Networked Multi-Agent Systems

    The past decade has witnessed unprecedented growth in research on Unmanned Aerial Vehicles (UAVs) on both military and non-military fronts. They have become ubiquitous in almost every military operation, including domestic and overseas missions. With rapidly advancing technology, the open-source nature of flight controllers, and significantly lower costs than before, companies around the world are entering the UAV market as one of the upcoming lucrative investments. Companies like Amazon Inc. and Domino's Pizza Inc. have had successful test runs, which further solidifies the research opportunities. Delivery services and recreational uses have increased over the past 3-4 years, which has led the Federal Aviation Administration to update its rules and regulations. Mapping, surveying, and search-and-rescue missions are some of the most appealing applications of UAVs. Making these applications airborne cuts time and cost considerably, to affordable levels. Using UAVs for operations has advantages in both response time and manpower requirements compared to piloted aircraft. Obtaining prior information about a person or people in distress can be a deciding factor in a successful mission: it can help in making critical decisions such as which location or type of helicopter or vehicle to use for extraction, what equipment to bring, and how many crew members are needed. The idea here is to make this system of UAVs automated, coordinating with each other without human intervention (other than high-level commands like takeoff and land). Researchers and military experts have recognized the use of drones for search-and-rescue missions to be of utmost importance. The year 2016 saw a first-of-its-kind UAV search-and-rescue symposium held in Nevada; its objective was to give UAV enthusiasts and researchers a platform to share their experiences and concerns in using UAVs as first responders.
The biggest drawback of using an aerial vehicle for an inspection, search, or rescue mission is its airborne time. The batteries used are big and heavy, which increases the weight and decreases the flight time. One can address this issue by using a swarm of UAVs that inspects or searches a given area in less time, with advantages in both response time and reduced manpower. The main challenges for Multiple Drone Control (MDC) include: 1) addressing the periodic sampling frequency of asset information so as to maintain stability; 2) optimizing the communication channel while providing a minimum Quality of Service (QoS); 3) an optimal control strategy that accounts for non-linearity in the state-space model; 4) optimal control in the presence of uncertainties; and 5) admitting new agents dynamically in the Networked Multi-Agent System (MAS) scenario. This dissertation aims at building a hardware and software platform for communication among multiple UAVs, upon which additional control algorithms can be implemented. It starts with building a DJI S1000 octacopter from the ground up; the components used are specified in the following sections. The idea is to make a drone that can autonomously travel to a specified location, with safety features like geofencing and landing in emergency situations. The user provides the necessary commands, such as GPS locations and takeoff/land commands, via a Radio Controller (RC) remote. At any point in the flight, the UAV should be able to receive new commands from the ground control station (GCS). After successful implementation, the UAV would not be restricted to the range of the RC remote; it would be able to travel greater distances provided the GPS signal remains operational in the field. This is possible at a global scale, limited only by the batteries and flight time.
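The geofencing safety feature mentioned above can be illustrated with a minimal sketch. The function name, the circular-fence model, and the equirectangular distance approximation are all illustrative assumptions, not the dissertation's implementation:

```python
import math

def within_geofence(lat, lon, home_lat, home_lon, radius_m):
    """Return True if (lat, lon) lies inside a circular geofence of
    radius_m metres centred on the home position. Uses an
    equirectangular approximation, adequate for small fences."""
    R = 6371000.0  # mean Earth radius in metres
    dlat = math.radians(lat - home_lat)
    dlon = math.radians(lon - home_lon) * math.cos(math.radians(home_lat))
    dist = R * math.sqrt(dlat * dlat + dlon * dlon)
    return dist <= radius_m
```

A flight controller would evaluate such a check every position update and trigger a return-to-home or emergency landing when it fails.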

    Some aspects of traffic control and performance evaluation of ATM networks

    The emerging high-speed Asynchronous Transfer Mode (ATM) networks are expected to integrate, through statistical multiplexing, large numbers of traffic sources having a broad range of statistical characteristics and different Quality of Service (QOS) requirements. To achieve high utilisation of network resources while maintaining the QOS, efficient traffic management strategies have to be developed. This thesis considers the problem of traffic control for ATM networks. The thesis studies the application of neural networks to various ATM traffic control issues such as feedback congestion control, traffic characterization, bandwidth estimation, and Call Admission Control (CAC). A novel adaptive congestion control approach based on a neural network that uses reinforcement learning is developed. It is shown that the neural controller is very effective in providing general QOS control. A Finite Impulse Response (FIR) neural network is proposed to adaptively predict the traffic arrival process by learning the relationship between past and future traffic variations. On the basis of this prediction, a feedback flow control scheme at the input access nodes of the network is presented. Simulation results demonstrate significant performance improvement over conventional control mechanisms. In addition, an accurate yet computationally efficient approach to effective bandwidth estimation for multiplexed connections is investigated. In this method, a feed-forward neural network is employed to model the nonlinear relationship between the effective bandwidth, the traffic conditions, and a QOS measure. Applications of this approach to admission control, bandwidth allocation and dynamic routing are also discussed. A detailed investigation has indicated that CAC schemes based on effective bandwidth approximation can be very conservative and prevent optimal use of network resources.
A modified effective bandwidth CAC approach is therefore proposed to overcome the drawbacks of conventional methods. Considering statistical multiplexing between traffic sources, we directly calculate the effective bandwidth of the aggregate traffic, which is modelled by a two-state Markov modulated Poisson process, by matching four important statistics. We use the theory of large deviations to provide a unified description of effective bandwidths for various traffic sources and the associated ATM multiplexer queueing performance approximations, illustrating their strengths and limitations. In addition, a more accurate estimation method for ATM QOS parameters based on the Bahadur-Rao theorem is proposed; it is a refinement of the original effective bandwidth approximation and can lead to higher link utilisation.
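For a concrete sense of the effective-bandwidth quantity discussed above, the standard Kesidis-Walrand-Chang closed form for a two-state on-off Markov fluid source can be evaluated directly. This simple formula is an illustrative stand-in for the thesis's four-statistic MMPP matching, and the parameter names are assumptions:

```python
import math

def effective_bandwidth_onoff(peak, a, b, s):
    """Effective bandwidth of a two-state on-off Markov fluid source
    (Kesidis-Walrand-Chang formula).
    peak : transmission rate while 'on'
    a    : transition rate off -> on
    b    : transition rate on -> off
    s    : space parameter (larger s = stricter QOS target)"""
    d = peak * s - a - b
    return (d + math.sqrt(d * d + 4.0 * a * peak * s)) / (2.0 * s)
```

As s approaches 0 the value tends to the mean rate peak*a/(a+b), and as s grows it tends to the peak rate, which is why effective-bandwidth admission can be conservative for strict QOS targets.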

    A review of connection admission control algorithms for ATM networks

    The emergence of high-speed networks such as those based on ATM integrates large numbers of services with a wide range of characteristics. Admission control is a prime instrument for controlling congestion in the network. As part of connection services in an ATM system, the Connection Admission Control (CAC) algorithm decides whether another call or connection can be admitted to the broadband network. The main task of the CAC is to ensure that the broadband resources do not saturate, and that overflow occurs only with a very small probability. It limits the number of connections and guarantees the Quality of Service for the new connection. The algorithm for connection admission is crucial in determining bandwidth utilisation efficiency. With statistical multiplexing, more calls can be allocated on a network link while still maintaining the Quality of Service specified by the connection's traffic parameters and type of service. A number of algorithms for admission control for broadband services in ATM networks are described and compared for performance under different traffic loads. There is a general description of the ATM network as an introduction. Issues to do with source distributions and traffic models are explored in Chapter 2. Chapter 3 provides an extensive presentation of CAC algorithms for ATM broadband networks. Ideas about effective bandwidth are reviewed in Chapter 4, and a different approach to admission control using online measurement is presented in Chapter 5. Chapter 6 gives a numerical evaluation of four of the key algorithms, with simulations. Finally, Chapter 7 draws conclusions from the findings and explores some possibilities for further work.
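The core admission test described above, admitting a connection only while Quality of Service can still be guaranteed, can be sketched in a few lines under the common effective-bandwidth formulation; the function and parameter names are illustrative:

```python
def admit(new_eb, active_ebs, link_capacity):
    """Effective-bandwidth CAC: admit the new connection only if the
    sum of effective bandwidths of all active connections plus the
    newcomer stays within the link capacity.
    new_eb        : effective bandwidth of the arriving connection
    active_ebs    : effective bandwidths of connections in progress
    link_capacity : total link rate (same units as the bandwidths)"""
    return sum(active_ebs) + new_eb <= link_capacity
```

The effective bandwidths themselves would come from a traffic model or online measurement, which is exactly where the surveyed algorithms differ.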

    Fuzzy-Logic Based Call Admission Control in 5G Cloud Radio Access Networks with Pre-emption

    Fifth generation (5G) cellular networks will comprise millions of connected devices such as wearables, Android phones, iPhones, tablets and the Internet of Things (IoT), with a plethora of applications generating requests to the network. 5G cellular networks need to cope with such sky-rocketing traffic requests from these devices to avoid network congestion. As such, the cloud radio access network (C-RAN) has been considered as a paradigm shift for 5G, in which requests from mobile devices are processed in the cloud with shared baseband processing. Despite call admission control (CAC) being one of the radio resource management techniques for avoiding network congestion, it has recently been overlooked by the community. The CAC technique in 5G C-RAN has a direct impact on the quality of service (QoS) for individual connections and on overall system efficiency. In this paper, a novel fuzzy-logic based CAC scheme with pre-emption in C-RAN is proposed. In this scheme, a cloud bursting technique is used during congestion, where some delay-tolerant low-priority connections are pre-empted and outsourced to a public cloud with a penalty charge. Simulation results show that the proposed scheme has a low blocking probability (below 5%), high throughput, low energy consumption and up to 95% return on revenue.
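A minimal sketch of a fuzzy admission decision, assuming triangular membership functions over a normalised load in [0, 1]; the breakpoints, function names and the admit/pre-empt/reject rule are hypothetical stand-ins for the paper's actual fuzzy rule base:

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b over (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def cac_decision(load, priority_high):
    """Hypothetical fuzzy CAC: fuzzify the current load, then admit,
    pre-empt a low-priority connection, or reject the request."""
    low = triangular(load, -0.5, 0.0, 0.6)    # membership in "lightly loaded"
    high = triangular(load, 0.5, 1.0, 1.5)    # membership in "congested"
    if low >= high:
        return "admit"
    # Congested: pre-empt delay-tolerant low-priority traffic (in the
    # paper's scheme, outsourced to a public cloud) to make room for
    # high-priority requests; otherwise reject.
    return "preempt" if priority_high else "reject"
```

A full implementation would fuzzify several inputs (load, priority, delay tolerance) and defuzzify a rule base rather than compare two memberships.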

    A hybrid queueing model for fast broadband networking simulation

    This research focuses on the investigation of a fast simulation method for broadband telecommunication networks, such as ATM networks and IP networks. As a result of this research, a hybrid simulation model is proposed, which combines analytical modelling and event-driven simulation modelling to speed up the overall simulation. The division between foreground and background traffic, and the way these different types of traffic are handled to achieve an improvement in simulation time, is the major contribution reported in this thesis. Background traffic is present to ensure that proper buffering behaviour is included during the course of the simulation experiments, but, unlike traditional simulation techniques, only the foreground traffic of interest is simulated. Foreground and background traffic are dealt with differently. To avoid the need for extra events on the event list, and the processing overhead associated with the background traffic, the novel technique investigated in this research is to remove the background traffic completely, adjusting the service time of the queues to compensate (in most cases, the service time for the foreground traffic will increase). By removing the background traffic from the event-driven simulator, the number of cell processing events is reduced drastically. Validation of this approach shows that, overall, the method works well, but simulations using this method do show some differences compared with experimental results on a testbed. The reason for this is mainly the assumptions behind the analytical model that make the modelling tractable. Hence, the analytical model needs to be adjusted. This is done by training a neural network to learn the relationship between the input traffic parameters and the difference between the proposed model's output and the testbed.
Following this training, simulations can be run using the output of the neural network to adjust the analytical model for those particular traffic conditions. The approach is applied to cell-scale and burst-scale queueing to simulate an ATM switch, and it is also used to simulate an IP router. In all these applications, the method ensures a fast simulation as well as an accurate result.
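The service-time compensation idea can be sketched under a simple processor-sharing assumption: if background traffic consumes a fraction rho_b of the server, removing its events and stretching the foreground service time by 1/(1 - rho_b) preserves the foreground load. This first-order adjustment is an illustrative stand-in for the thesis's analytical model and its neural-network correction:

```python
def adjusted_service_time(base_service_time, background_load):
    """Compensate for background cells removed from the event list by
    stretching the foreground service time, so the queue behaves as if
    the background share of the server were still in use.
    background_load: fraction of server capacity (0 <= rho_b < 1)
    consumed by background traffic."""
    if not 0.0 <= background_load < 1.0:
        raise ValueError("background load must be in [0, 1)")
    return base_service_time / (1.0 - background_load)
```

With half the server consumed by background traffic, a foreground cell's service time doubles, while the background cells themselves generate no simulation events at all.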

    Modeling a domain in a tutorial-like system using learning automata

    The aim of this paper is to present a novel approach to model a knowledge domain for teaching material in a Tutorial-like system. In this approach, the Tutorial-like system is capable of presenting teaching material within a Socratic model of teaching. The corresponding questions are of a multiple-choice type, and the material increases in difficulty. This enables the Tutorial-like system to present the teaching material in different chapters, where each chapter represents a level of difficulty that is harder than the previous one. We attempt to achieve the entire learning process using the Learning Automata (LA) paradigm. In order for the Domain model to possess an increased difficulty for the teaching Environment, we propose to correspondingly reduce the range of the penalty probabilities of all actions by incorporating a scaling factor μ. We show that such a scaling renders it more difficult for the Student to infer the correct action within the LA paradigm. To the best of our knowledge, the concept of modeling teaching material with increasing difficulty using a LA paradigm is unique. The main results we have obtained are that increasing the difficulty of the teaching material can affect the learning of Normal and Below-Normal Students by resulting in an increased learning time, but it seems to have no effect on the learning behavior of Fast Students. The proposed representation has been tested for different benchmark Environments, and the results show that the difficulty of the Environments can be increased by decreasing the range of the penalty probabilities. For example, for some Environments, decreasing the range of the penalty probabilities by 50% results in increasing the difficulty of learning for Normal Students by more than 60%.
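The penalty-scaling idea, and a toy measurement of learning time, can be sketched with a minimal linear reward-inaction (L_RI) learner. The learning rate, convergence threshold and the L_RI choice are illustrative assumptions; the paper's Student model is richer:

```python
import random

def scale_penalties(penalties, mu):
    """Shrink all penalty probabilities by the scaling factor mu
    (0 < mu <= 1); this narrows their range, making the best and
    worst actions harder to tell apart."""
    return [mu * c for c in penalties]

def lri_trials_to_converge(penalties, lr=0.05, thresh=0.95, seed=1):
    """Count the interactions a linear reward-inaction (L_RI) learner
    needs before one action probability exceeds thresh."""
    rng = random.Random(seed)
    n = len(penalties)
    p = [1.0 / n] * n
    for t in range(1, 200001):
        i = rng.choices(range(n), weights=p)[0]
        if rng.random() >= penalties[i]:   # environment rewards action i
            p = [pj + lr * (1.0 - pj) if j == i else (1.0 - lr) * pj
                 for j, pj in enumerate(p)]
        if max(p) >= thresh:
            return t
    return None
```

Averaging the trial counts over many seeds, for the original and scaled environments, is one way to quantify how the scaling lengthens learning time.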

    Application of learning algorithms to traffic management in integrated services networks.

    SIGLE. Available from the British Library Document Supply Centre (BLDSC), DSC:DXN027131. United Kingdom.

    Learning algorithms for the control of routing in integrated service communication networks

    There is a high degree of uncertainty regarding the nature of traffic on future integrated service networks. This uncertainty motivates the use of adaptive resource allocation policies that can take advantage of the statistical fluctuations in the traffic demands. The adaptive control mechanisms must be 'lightweight', in terms of their overheads, and scale to potentially large networks with many traffic flows. Adaptive routing is one form of adaptive resource allocation, and this thesis considers the application of Stochastic Learning Automata (SLA) for distributed, lightweight adaptive routing in future integrated service communication networks. The thesis begins with a broad critical review of the use of Artificial Intelligence (AI) techniques applied to the control of communication networks. Detailed simulation models of integrated service networks are then constructed, and learning automata based routing is compared with traditional techniques on large scale networks. Learning automata are examined for the 'Quality-of-Service' (QoS) routing problem in realistic network topologies, where flows may be routed in the network subject to multiple QoS metrics, such as bandwidth and delay. It is found that learning automata based routing gives considerable blocking probability improvements over shortest path routing, despite only using local connectivity information and a simple probabilistic updating strategy. Furthermore, automata are considered for routing in more complex environments spanning issues such as multi-rate traffic, trunk reservation, routing over multiple domains, routing in high bandwidth-delay product networks and the use of learning automata as a background learning process. Automata are also examined for routing of both 'real-time' and 'non-real-time' traffics in an integrated traffic environment, where the non-real-time traffic has access to the bandwidth 'left over' by the real-time traffic. 
It is found that adopting learning automata for the routing of the real-time traffic may improve the performance for both real-time and non-real-time traffic under certain conditions. In addition, it is found that one set of learning automata may route both traffic types satisfactorily. Automata are considered for the routing of multicast connections in receiver-oriented, dynamic environments, where receivers may join and leave multicast sessions dynamically. Automata are shown to be able to minimise the average delay or the total cost of the resulting trees, using appropriate feedback from the environment. Automata provide a distributed solution to the dynamic multicast problem, requiring purely local connectivity information and a simple updating strategy. Finally, automata are considered for the routing of multicast connections that require QoS guarantees, again in receiver-oriented dynamic environments. It is found that the distributed application of learning automata leads to considerably lower blocking probabilities than a shortest path tree approach, due to a combination of load balancing and minimum cost behaviour.
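The route-selection mechanism described above can be sketched as one learning automaton per source node, updated with the linear reward-inaction (L_RI) rule on routing feedback. This is a minimal sketch; the class and route names are illustrative, and the thesis's schemes additionally handle QoS constraints:

```python
import random

class RouteAutomaton:
    """One stochastic learning automaton per source node, choosing
    among candidate routes by the linear reward-inaction (L_RI) rule."""

    def __init__(self, routes, lr=0.1):
        self.routes = list(routes)
        self.lr = lr
        self.p = [1.0 / len(self.routes)] * len(self.routes)

    def choose(self):
        """Pick a route index according to the action probabilities."""
        return random.choices(range(len(self.routes)), weights=self.p)[0]

    def feedback(self, idx, call_accepted):
        """Reward (call set up without blocking): reinforce the chosen
        route. Penalty (blocking): leave probabilities unchanged, as
        L_RI ignores penalties."""
        if call_accepted:
            self.p = [pj + self.lr * (1.0 - pj) if j == idx
                      else (1.0 - self.lr) * pj
                      for j, pj in enumerate(self.p)]
```

Routing a stream of calls and feeding back acceptance or blocking drives the probability mass toward the routes that block least, using only local information and a simple probabilistic update.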

    Reinforcement learning for resource allocation in LEO satellite networks

    No full text
    Published version
