
    Acta Cybernetica : Volume 10. Number 4.


    Addressing traffic congestion and throughput through optimization.

    Masters Degree. University of KwaZulu-Natal, Durban. Traffic congestion in port precincts has become prevalent in recent years, both in South Africa and internationally [1, 2, 3]. In addition to the environmental impact of air pollution, this challenge also weighs heavily on profit margins through added fuel costs and wasted time. Although many common factors contribute to congestion in port precincts and other areas, operational inefficiencies caused by slow productivity and a lack of handling equipment to service trucks in port areas are a major contributor [4, 5]. Several optimisation approaches can address traffic congestion, such as Queuing Theory [6], Genetic Algorithms [7], Ant Colony Optimisation [8] and Particle Swarm Optimisation [9]; because traffic congestion is modelled as congested queues, queuing theory is the most suitable for this problem. Queuing theory is a discipline of optimisation that studies the dynamics of queues to determine a more optimal route and reduce waiting times. The use of optimisation to address the root cause of port traffic congestion has been lacking, with several studies focused on specific traffic zones that only address the symptoms. In addition, research into traffic around port precincts has been limited to the road side, with proposed solutions focusing on scheduling and appointment systems [25, 56], or to the sea side, focusing on managing vessel traffic congestion [30, 31, 58]. The aim of this dissertation is to close this gap through the novel design and development of Caudus, a smart queue solution that addresses traffic congestion and throughput through optimization. The name “CAUDUS” is derived as an anagram with Latin origins meaning “remove truck congestion”. Caudus has three objective functions to address congestion in the port precinct and, by extension, congestion in warehousing and freight logistics environments, viz. 
Preventive, Reactive and Predictive. The preventive objective function employs Little's rule [14] to derive the algorithm for preventing congestion. Acknowledging that congestion is not always avoidable, the reactive objective function addresses the problem by leveraging Caudus' integration capability with Intelligent Transport Systems [65] in conjunction with other road-user network solutions. The predictive objective function aims to keep the environment incident-free and provides early-warning detection of exceptions in traffic situations that may lead to congestion. This is achieved using the algorithms derived in this study, which identify bottleneck symptoms in one traffic zone where the root cause exists in an adjoining traffic area. The Caudus Simulation was developed in this study to test the derived algorithms against the different congestion scenarios. The simulation uses HTML5 and JavaScript in the front-end GUI, with a SQL code base in the back-end. The entire simulation process is triggered by a series of multi-threaded batch programs that mimic the real world by ensuring process independence for the various simulation activities. The results from the simulation demonstrate a significant reduction in the duration of congestion experienced in the port precinct. They also show a reduction in the throughput time of the trucks serviced at the port, demonstrating Caudus' novel contribution to addressing traffic congestion and throughput through optimisation. These results were also published and presented at the International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD 2021) under the title “CAUDUS: An Optimisation Model to Reducing Port Traffic Congestion” [84].
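The preventive use of Little's rule can be sketched numerically. In the snippet below, the precinct capacity and the 0.9 safety margin are illustrative assumptions, not values from the dissertation:

```python
# Little's rule: L = lambda * W, relating mean number in system (L),
# arrival rate (lambda, trucks/hour) and mean time in system (W, hours).

def mean_trucks_in_precinct(arrival_rate: float, mean_time_in_system: float) -> float:
    """L = lambda * W (Little's rule)."""
    return arrival_rate * mean_time_in_system

def congestion_imminent(arrival_rate: float, mean_time_in_system: float,
                        capacity: float) -> bool:
    """Flag congestion when the predicted truck count nears precinct capacity.

    The 0.9 safety margin is an illustrative assumption."""
    return mean_trucks_in_precinct(arrival_rate, mean_time_in_system) >= 0.9 * capacity

# 40 trucks/hour, each spending 1.5 hours, means 60 trucks in the precinct on average.
print(mean_trucks_in_precinct(40, 1.5))           # 60.0
print(congestion_imminent(40, 1.5, capacity=65))  # True
```

A preventive controller built on this idea would throttle gate arrivals (lower lambda) or add handling capacity (lower W) before the predicted count crosses the threshold.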

    The use of microcomputers in the training of deck officers

    The changes in the maritime industry have led to major adjustments in the training of seafarers in general and deck officers in particular. There are innovations not only in the training programs, but also in the means to achieve them. The maritime community is seeking ways to categorize the high-cost simulators in order to make their training use compulsory. Obviously, not every maritime college will be able to take advantage of this valuable training tool. This paper advocates the use of microcomputers in the training of deck officers as a possible alternative to the costly simulators. It investigates the different methodologies that may be used by Computer-Assisted Instruction (CAI). It gives examples and illustrations of the possible use of CAI in addressing subjects, such as collision avoidance and the use of radar, that are important for deck officer training. Moreover, it looks into some existing instructional software and some application programs and highlights their specific training features in different discipline areas of deck officer training. The delicate question of program evaluation has also been given some attention, in line with the hardware prerequisites and the academic aspects of the problem. The author gives some insights into the training potential of a cargo handling program entitled Mariner. He shows how this PC-based program may be used to teach ship stability and cargo handling. In conclusion, the paper suggests some changes in the approach of the IMO model courses and recommends guidelines for the implementation of CAI in the ARSTM of Abidjan.

    Modelling and performability evaluation of Wireless Sensor Networks

    This thesis presents generic analytical models of homogeneous clustered Wireless Sensor Networks (WSNs) with a centrally located Cluster Head (CH) coordinating cluster communication with the sink, either directly or through other intermediate nodes. The focus is to integrate performance and availability studies of WSNs in the presence of sensor node and channel failures and repair/replacement. The main purpose is to improve WSN Quality of Service (QoS). Other research work considered in this thesis includes modelling of the packet arrival distribution at the CH and intermediate nodes, and modelling of energy consumption at the sensor nodes. An investigation and critical analysis of wireless sensor network architectures, energy conservation techniques and QoS requirements are performed in order to improve the performance and availability of the network. Existing techniques used for the performance evaluation of single- and multi-server systems with several operative states are investigated and analysed in detail. To begin with, existing approaches for independent (pure) performance modelling are critically analysed, with their merits and drawbacks highlighted. Similarly, pure availability modelling approaches are also analysed. Considering that pure performance models tend to be too optimistic and pure availability models too conservative, performability, the integration of performance and availability studies, is used for the evaluation of the WSN models developed in this study. Two-dimensional Markov state space representations of the systems are used for performability modelling. Following critical analysis of the existing solution techniques, the spectral expansion method and a system of simultaneous linear equations are developed and used to solve the proposed models. To validate the results obtained with the two techniques, a discrete event simulation tool is employed. 
In this research, open queuing networks are used to model the behaviour of the CH when subjected to streams of traffic from cluster nodes, in addition to the dynamics of operating in the various states. The research begins with a model of a CH with an infinite queue capacity subject to failures and repair/replacement. The model is developed progressively to consider bounded queue capacity systems, channel failures and sleep scheduling mechanisms for performability evaluation of WSNs. Using the developed models, various performance measures of the considered system, including mean queue length, throughput, response time and blocking probability, are evaluated. Finally, energy models considering mean power consumption in each of the possible operative states are developed. The resulting models are in turn employed for the evaluation of energy saving for the proposed case study model. Numerical solutions and discussions are presented for all the queuing models developed. Simulation is also performed in order to validate the accuracy of the results obtained. To address issues of performance and availability of WSNs, current research presents independent performance and availability studies. The concerns resulting from such studies have therefore remained unresolved over the years, hence persistently poor system performance. The novelty of this research is a proposed integrated performance and availability modelling approach for WSNs, meant to address the challenges of independent studies. In addition, a novel methodology for the modelling and evaluation of power consumption is also offered. The proposed model results provide a remarkable improvement in system performance and availability, in addition to providing tools for further optimisation studies. A significant power saving is also observed from the proposed model results. To improve QoS for WSNs, it is possible to extend the proposed models by incorporating priority queuing in a mixed traffic environment. 
A model of a multi-server system is also appropriate for addressing traffic routing. It is also possible to extend the proposed energy model to consider sleep scheduling mechanisms other than the on-demand mechanism proposed herein. Analysis and classification of the possible arrival distributions of WSN packets for various application environments would also enable further robust research.
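The thesis's performability models are two-dimensional and solved by spectral expansion; as a far simpler illustration of the kind of measures it evaluates (mean queue length, throughput, blocking probability), the sketch below uses the closed-form textbook M/M/1/K queue, not the models developed in the thesis:

```python
# Steady-state measures of an M/M/1/K queue: p_n = p_0 * rho^n for
# n = 0..K, from which mean queue length, throughput and blocking
# probability follow. This is a standard textbook model used here
# purely to illustrate the performance measures named in the abstract.

def mm1k_measures(lam: float, mu: float, K: int):
    """Return (mean queue length, throughput, blocking probability)."""
    rho = lam / mu
    if rho == 1.0:
        probs = [1.0 / (K + 1)] * (K + 1)   # uniform in the balanced case
    else:
        p0 = (1 - rho) / (1 - rho ** (K + 1))
        probs = [p0 * rho ** n for n in range(K + 1)]
    mean_queue_len = sum(n * p for n, p in enumerate(probs))
    blocking_prob = probs[K]                  # arrival finds the buffer full
    throughput = lam * (1 - blocking_prob)    # accepted (carried) traffic
    return mean_queue_len, throughput, blocking_prob

L, X, Pb = mm1k_measures(lam=0.8, mu=1.0, K=10)
```

A performability model would additionally weight such measures by the probability of each operative state (e.g. CH up, CH failed), which is where the two-dimensional Markov representation comes in.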

    Development of a standard framework for manufacturing simulators

    Discrete event simulation is now a well-established modelling and experimental technique for the analysis of manufacturing systems. Since it was first employed, much of the research and commercial development in the field has been concerned with easing the considerable task of model specification, in order to improve productivity and reduce the level of modelling and programming expertise required. The main areas of research have been the development of modelling structures to bring modularity to program development, the incorporation of such structures in simulation software systems to alleviate some of the programming burden, and the use of automatic programming systems to develop interfaces that raise model specification to a higher level of abstraction. A more recent development in the field has been the advent of a new generation of software, often referred to as manufacturing simulators, which incorporates extensive manufacturing-system domain knowledge in the model specification interface. Many manufacturing simulators are now commercially available, but their development has not been based on any common standard. This is evident in the differences between their interfaces, internal data representation methods and modelling capabilities. The lack of a standard makes it impossible to reuse any part of a model when a user finds it necessary to move from one simulator to another. In such cases, not only must a new modelling language be learnt, but the complete model has to be developed again, requiring considerable time and effort. The motivation for this research was the need for a standard to improve the reusability of models, as a first step towards the interchangeability of such models. A standard framework for manufacturing simulators has been developed. 
It consists of a data model that is independent of any simulator, and a translation module for converting model specification data into the internal data representation of manufacturing simulators; the translators are application-specific, but the methodology is common and is illustrated for three popular simulators. The data model provides a minimum common model data specification, based on an extensive analysis of existing simulators. It uses dialogues for the interface and the frame knowledge representation method for modular storage of data. The translation methodology uses production rules for data mapping.
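Rule-based data mapping of this kind can be sketched in miniature. The field names, record layout and target formats below are illustrative assumptions, not the framework's actual data model:

```python
# Toy illustration of production-rule data mapping: a simulator-independent
# machine record is translated into two hypothetical simulator-specific
# formats. All names here are invented for illustration.

GENERIC_MACHINE = {"name": "Lathe1", "cycle_time": 4.5, "capacity": 1}

# Each "production rule" maps a generic field name to a target-specific one.
RULES = {
    "simA": {"name": "MACHINE_ID", "cycle_time": "PROC_TIME", "capacity": "SLOTS"},
    "simB": {"name": "resource", "cycle_time": "service_time", "capacity": "units"},
}

def translate(record: dict, target: str) -> dict:
    """Apply the target simulator's mapping rules to a generic record."""
    return {RULES[target][field]: value for field, value in record.items()}

print(translate(GENERIC_MACHINE, "simA"))
# {'MACHINE_ID': 'Lathe1', 'PROC_TIME': 4.5, 'SLOTS': 1}
```

The point of the common data model is that `GENERIC_MACHINE` is written once; only the (application-specific) rule tables differ per simulator.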

    Extended Abstracts: PMCCS3: Third International Workshop on Performability Modeling of Computer and Communication Systems

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. The pages of the front matter that are missing from the PDF were blank.

    Modeling the Use of an Airborne Platform for Cellular Communications Following Disruptions

    In the wake of a disaster, infrastructure can be severely damaged, hampering telecommunications. An Airborne Communications Network (ACN) allows for the rapid and accurate information exchange that is essential during the disaster response period. Access to information is the start of survivors returning to self-sufficiency, regaining dignity, and maintaining hope. Real-world testing has proven that such a system can be built, leading to possible future expansion of the features and functionality of an emergency communications system. Currently, there are no airborne civilian communications systems designed to meet the demands of the public following a natural disaster. A system allowing even a limited amount of communications post-disaster is a great improvement on the current situation, where telecommunications are frequently not available. It is technically feasible to use an airborne, wireless, cellular system quickly deployable to disaster areas and configured to restore some of the functions of damaged terrestrial telecommunications networks. The system requirements were presented, leading to the next stage of the planned research, where a range of possible solutions was examined. The best solution was selected based on the earlier, predefined criteria. The system was modeled, and a test system built. The system was tested and redesigned when necessary to meet the requirements. The research has shown how a combination of technologies, especially recent miniaturization and the move to open source software for cellular network components, can allow sophisticated cellular networks to be implemented. The proposed ACN system could enable connectivity and reduce the communications problems that were experienced following Hurricanes Sandy and Katrina. Experience with both natural and man-made disasters highlights the fact that communications are useful only to the extent that they are accessible and usable by the population.

    The interaction network : a performance measurement and evaluation tool for loosely-coupled distributed systems

    Much of today's computing is done on loosely-coupled distributed systems. Performance issues for such systems usually involve interactive performance, that is, system responsiveness as perceived by the user. The goal of the work described in this thesis has been to develop and implement tools and techniques for the measurement and evaluation of interactive performance in loosely-coupled distributed systems. The author has developed the concept of the interaction network, an acyclic directed graph designed to represent the processing performed by a distributed system in response to a user input. The definition of an interaction network is based on a general model of a loosely-coupled distributed system and a general model of user interactions. The author shows that his distributed system model is a valid abstraction for a wide range of present-day systems. Performance monitors for traditional time-sharing systems reported performance information, such as overall resource utilisations and queue lengths, for the system as a whole. Diagnosing performance problems is now much more difficult, because systems are much more complex. Recent monitors designed specifically for distributed systems have tended to present performance information for the execution of a distributed program, for example the time spent in each of a program's procedures. In the work described in this thesis, performance information is reported for one or more user interactions, where a user interaction is defined to be a single user input and all of the processing performed by the system on receiving that input. A user interaction is quite different from a program execution: a user interaction includes the partial or total execution of one or more programs, and a program execution performs work as part of one or more user interactions. Several methods are then developed to show how performance information can be obtained from the analysis of interaction networks. 
One valuable type of performance information is a decomposition of response time into the times spent in each of a set of states, where each state might be defined in terms of the hardware and software resources used. Other performance information can be found from displays of interaction networks. The critical path through an interaction network is then defined as the set of activities such that at least one must be reduced in length if the response time of the interaction is to be reduced; the critical path is used both in response time decompositions and in displays of interaction networks. It was thought essential to demonstrate that interaction networks could be recorded for a working operating system. INMON, a prototype monitor based on the interaction network concept, has been constructed to operate in the SunOS environment. INMON consists of data collection and data analysis components; the data collection component, for example, involved adding 53 probes to the SunOS operating system kernel. To record interaction networks, a high-resolution global timebase is needed, so a clock synchronisation program has been written to provide INMON with such a timebase. It is suggested that this method incorporates a number of improvements over other clock synchronisation methods. Several experiments have been performed to show that INMON can produce very detailed performance information for both individual user interactions and groups of user interactions, with user input made through either character-based or graphical interfaces. The main conclusion reached in this thesis is that representing the processing component of a user interaction as an interaction network is a very valuable way of approaching the problem of measuring interactive performance in a loosely-coupled distributed system. 
An interaction network contains a very detailed record of the execution of an interaction and, from this record, a great deal of performance (and other) information can be derived. Construction of INMON has demonstrated that interaction networks can be identified, recorded, and analysed.
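Because an interaction network is an acyclic directed graph of activities, its critical path is the longest-duration chain from the user input to the final response. The activity graph and millisecond durations below are invented for illustration and are not taken from INMON:

```python
# Minimal sketch of finding the critical path through an acyclic directed
# graph of activities, as in an interaction network: the longest-duration
# chain bounds the interaction's response time. Graph and durations are
# illustrative assumptions.

from functools import lru_cache

# edges: activity -> successor activities; durations in milliseconds
EDGES = {"input": ["parse", "log"], "parse": ["exec"],
         "log": [], "exec": ["draw"], "draw": []}
DURATION = {"input": 1, "parse": 3, "log": 2, "exec": 10, "draw": 4}

@lru_cache(maxsize=None)
def longest_from(node: str):
    """Return (total duration, path) of the longest path starting at node."""
    best_len, best_path = 0, []
    for succ in EDGES[node]:
        length, path = longest_from(succ)
        if length > best_len:
            best_len, best_path = length, path
    return DURATION[node] + best_len, [node] + best_path

total, path = longest_from("input")
print(total, path)  # 18 ['input', 'parse', 'exec', 'draw']
```

Shortening any activity off the critical path (here, `log`) leaves the response time unchanged, which is exactly why the thesis uses the critical path in its response time decompositions.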