Neural network based adaptive PID controller of nonlinear heat exchanger
This research presents the design and simulation of a nonlinear adaptive control system for the heating process of the shell-and-tube heat exchanger model BDT921. A shell-and-tube heat exchanger is a nonlinear process, and changes in the process dynamics render fixed PID controller parameters, i.e. proportional gain, integral time and derivative time, ineffective. Thus, the PID controller parameters need to be repeatedly retuned. In this study, a neural network approach was introduced to auto-tune the controller parameters. Dynamic data from the BDT921 plant were collected to formulate a mathematical model of the process using the MATLAB System Identification Toolbox, with a NARX model used to represent the heat exchanger. A neural network served as the adaptive mechanism for the PID controller. The network has 4 input variables and 4 output variables; a single-hidden-layer feedforward network with 20 neurons in the hidden layer was found to be the optimum topology. The effectiveness of the controller was evaluated on set-point tracking only. Simulation results showed that the adaptive PID controller tracked the set point more effectively than a conventional PID controller, with a faster settling time and little or no overshoot in the response.
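The scheme described above can be sketched as a conventional discrete PID law whose three gains are supplied at each step by a small feedforward network. This is an illustrative sketch only: the network weights below are random placeholders (the paper trains them on plant data), and the feature vector and helper names are assumptions, not taken from the BDT921 study.

```python
import numpy as np

class PID:
    """Discrete PID controller in standard (ideal) form with
    externally tunable gains Kp, Ti, Td."""
    def __init__(self, kp, ti, td, dt):
        self.kp, self.ti, self.td, self.dt = kp, ti, td, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # u = Kp * (e + (1/Ti) * integral(e) + Td * de/dt)
        return self.kp * (error + self.integral / self.ti + self.td * derivative)

def nn_gains(features, W1, b1, W2, b2):
    """Single-hidden-layer feedforward network mapping process
    features to the controller gains (Kp, Ti, Td)."""
    h = np.tanh(W1 @ features + b1)      # hidden layer (20 neurons)
    out = W2 @ h + b2
    return np.maximum(out, 1e-3)         # keep gains strictly positive

# Placeholder weights; in the paper these would come from training.
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.normal(size=(20, 4)), np.zeros(20)
W2, b2 = 0.1 * rng.normal(size=(3, 20)), np.ones(3)

# Hypothetical feature vector, e.g. (setpoint, output, error, d_error).
features = np.array([1.0, 0.5, 0.5, 0.1])
kp, ti, td = nn_gains(features, W1, b1, W2, b2)
pid = PID(kp, ti, td, dt=0.1)
u = pid.step(error=1.0)  # control action for the current error
```

At each sampling instant the network is evaluated first, the PID gains are overwritten, and only then is the control action computed, so the controller adapts as the plant operating point drifts.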
Bayesian topology identification of linear dynamic networks
In networks of dynamic systems, one challenge is to identify the
interconnection structure on the basis of measured signals. Inspired by a
Bayesian approach in [1], in this paper, we explore a Bayesian model selection
method for identifying the connectivity of networks of transfer functions,
without the need to estimate the dynamics. The algorithm employs a Bayesian
measure and a forward-backward search algorithm. To obtain the Bayesian
measure, the impulse responses of network modules are modeled as Gaussian
processes and the hyperparameters are estimated by marginal likelihood
maximization using the expectation-maximization algorithm. Numerical results
demonstrate the effectiveness of this method.
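The search procedure in this abstract can be illustrated with a simplified sketch: a forward-backward greedy search over candidate parent modules, scored by a Gaussian log marginal likelihood. For brevity the sketch uses Bayesian linear regression over FIR (lagged) regressors with fixed hyperparameters, rather than the paper's Gaussian-process impulse-response model with EM-tuned hyperparameters; all function names and the toy network are assumptions.

```python
import numpy as np

def lagged(x, L):
    """Matrix of L delayed copies of x (FIR regressor)."""
    T = len(x)
    X = np.zeros((T, L))
    for k in range(1, L + 1):
        X[k:, k - 1] = x[:T - k]
    return X

def log_evidence(y, X, alpha=1.0, sigma2=0.1):
    """Log marginal likelihood of y = X w + e with prior
    w ~ N(0, I/alpha) and noise e ~ N(0, sigma2 I)."""
    n = len(y)
    C = sigma2 * np.eye(n) + X @ X.T / alpha
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

def forward_backward(candidates, score):
    """Greedy forward selection of parent modules, then backward pruning."""
    selected = []
    while True:  # forward: add the module that most improves the score
        base = score(selected)
        gains = {j: score(selected + [j]) - base
                 for j in candidates if j not in selected}
        if not gains or max(gains.values()) <= 0:
            break
        selected.append(max(gains, key=gains.get))
    for j in list(selected):  # backward: drop modules that no longer help
        rest = [k for k in selected if k != j]
        if score(rest) >= score(selected):
            selected = rest
    return selected

# Toy network: node y is driven by signals 0 and 2 through short FIR filters.
rng = np.random.default_rng(1)
T, L = 200, 5
signals = rng.normal(size=(3, T))
y = (lagged(signals[0], L) @ np.array([1.0, 0.6, 0.3, 0.1, 0.05])
     + lagged(signals[2], L) @ np.array([-0.8, 0.4, 0.2, 0.1, 0.0])
     + 0.3 * rng.normal(size=T))

def score(subset):
    X = (np.hstack([lagged(signals[j], L) for j in subset])
         if subset else np.zeros((T, 0)))
    return log_evidence(y, X)

parents = forward_backward([0, 1, 2], score)  # recovered interconnections
```

Because the marginal likelihood integrates out the module dynamics, the search scores connectivity patterns directly, which is what lets the method identify the topology without first estimating the transfer functions.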
Cross-Layer Peer-to-Peer Traffic Identification and Optimization Based on Active Networking
P2P applications appear to emerge as ultimate killer applications due to their ability to construct highly dynamic overlay topologies with rapidly-varying and unpredictable traffic dynamics, which can constitute a serious challenge even for significantly over-provisioned IP networks. As a result, ISPs face new, severe network management problems that are not guaranteed to be addressed by statically deployed network engineering mechanisms. As a first step towards a more complete solution to these problems, this paper proposes a P2P measurement, identification and optimisation architecture designed to cope with the dynamicity and unpredictability of existing, well-known and future, unknown P2P systems. The purpose of this architecture is to provide ISPs with an effective and scalable approach to control and optimise the traffic produced by P2P applications in their networks. This can be achieved through a combination of different application-level and network-level programmable techniques, leading to a cross-layer identification and optimisation process. These techniques can be applied using Active Networking platforms, which are able to quickly and easily deploy architectural components on demand. This flexibility of the optimisation architecture is essential to address the rapid development of new P2P protocols and the variation of known protocols.
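The cross-layer idea can be sketched in miniature: match application-layer payload signatures for known protocols first, then fall back to a network-layer behavioural heuristic for unknown overlays. This is a toy sketch under stated assumptions; the signature bytes, flow-record shape and peer-count threshold are illustrative placeholders, not part of the proposed architecture.

```python
from collections import defaultdict

# Hypothetical signature table; a real deployment would maintain a
# larger, updatable set pushed out via the Active Networking platform.
SIGNATURES = {b"\x13BitTorrent protocol": "bittorrent"}

def identify(flows, peer_threshold=10):
    """Two-stage P2P identification: application-layer signature match
    first, then a network-layer heuristic that flags hosts contacting
    unusually many distinct peers (likely an unknown P2P overlay)."""
    labels = {}
    peers = defaultdict(set)
    for i, (src, dst, dport, payload) in enumerate(flows):
        peers[src].add((dst, dport))
        for sig, proto in SIGNATURES.items():
            if payload.startswith(sig):
                labels[i] = proto
    for i, (src, _, _, _) in enumerate(flows):
        if i not in labels and len(peers[src]) > peer_threshold:
            labels[i] = "p2p-unknown"
    return labels

# One signature hit, plus a host fanning out to 12 distinct peers.
flows = [("a", "b", 6881, b"\x13BitTorrent protocol...")]
flows += [("c", f"peer{k}", 40000 + k, b"") for k in range(12)]
result = identify(flows)
```

Combining the two stages is what makes the identification cross-layer: payload inspection catches well-known protocols precisely, while the behavioural fallback degrades gracefully for encrypted or not-yet-catalogued ones.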
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and take decisions pertaining to the proper
functioning of the networks from the network-generated data. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. Such complexity increase is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing new possible research directions.
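A minimal flavour of the kind of network-data analysis surveyed here can be sketched as a logistic-regression fault detector trained on signal quality indicators. The synthetic OSNR readings, the fault threshold and the from-scratch gradient-descent training below are all assumptions for illustration, not drawn from the surveyed literature.

```python
import numpy as np

# Synthetic monitoring data: healthy links around 22 dB OSNR,
# degraded links around 14 dB; label a link faulty below 18 dB.
rng = np.random.default_rng(0)
n = 400
osnr = np.where(rng.random(n) < 0.5,
                rng.normal(22, 2, n), rng.normal(14, 2, n))
faulty = (osnr < 18.0).astype(float)

# Logistic regression on the standardized indicator, trained by
# batch gradient descent on the average logistic loss.
X = np.column_stack([np.ones(n), (osnr - osnr.mean()) / osnr.std()])
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - faulty) / n

pred = (1 / (1 + np.exp(-X @ w)) > 0.5).astype(float)
accuracy = (pred == faulty).mean()
```

In practice such a classifier would ingest many indicators at once (BER, chromatic dispersion, amplifier gain, alarms) and feed its decisions into the automated self-configuration and fault-management loops the survey describes.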