
    AN INVESTIGATION INTO AN EXPERT SYSTEM FOR TELECOMMUNICATION NETWORK DESIGN

    Many telephone companies, especially in Eastern Europe and the 'third world', are developing new telephone networks. In such situations the network design engineer needs computer-based tools that not only supplement his own knowledge but also help him to cope with situations where not all the information necessary for the design is available. Traditional network design tools are often somewhat removed from the practical world for which they were developed: they ignore the significant uncertain and statistical nature of the input data; they use data taken from a fixed point in time to solve a time-variable problem; and their cost formulae tend to be averages per line or port rather than figures for the specific case. Indeed, data are often unavailable or simply unreliable. The engineer has to rely on rules of thumb honed over many years of experience in designing networks, and must be able to cope with missing data. The complexity of telecommunication networks and the rarity of specialists in this area often make the network design process very difficult for a company. It is therefore an important area for the application of expert systems. Designs resulting from the use of expert systems will carry a measure of uncertainty in their solution, and adequate account must be made of the risk involved in implementing their recommendations.

    The thesis reviews the status of expert systems as used for telecommunication network design. It shows that such an expert system needs to reduce a large network problem into its component parts, use different modules to solve them, and then combine the results to create a total solution, and it shows how the various sub-problems are integrated to solve the general network design problem. The thesis then presents details of such an expert system and the databases necessary for network design: three new algorithms are invented for traffic analysis, node location and network design, and these produce results that correlate closely with designs taken from BT Consultancy archives.

    It was initially supposed that an efficient combination of existing techniques for dealing with uncertainty within expert systems would suffice as the basis of the new system. It soon became apparent, however, that to allow for the differing attributes of facts, rules and data, and the varying degrees of importance or rank within each area, a new and radically different method would be needed. Having investigated the existing treatment of uncertainty, a new, more rational method is believed to have been found. The work has involved the invention of the 'Uncertainty Window' technique and its testing on various aspects of network design, including demand forecasting, network dimensioning, and node and link system sizing, using a selection of networks designed by BT Consultancy staff. From the results of the analysis, modifications to the technique have been incorporated with the aim of optimising the heuristics and procedures, so that the structure gives an accurate solution as early as possible. The essence of the process is to associate the uncertainty windows with their relevant rules, data and facts, which provides the network designer with an insight into the uncertainties behind the overall system design: it indicates which sources of uncertainty and which assumptions were critical and merit further investigation to improve confidence in the overall design. The windowing technique works by virtue of its ability to retain the composition of the uncertainty and its associated values, assumptions, etc., and so allows better solutions to be attained.
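    The abstract does not spell out the internal representation of an Uncertainty Window, but the description suggests a record that keeps an uncertain value's bounds together with its source, rank and underlying assumptions. The following Python sketch is purely illustrative: the class name, fields and ranking heuristic are assumptions made for exposition, not the thesis's actual design.

        from dataclasses import dataclass, field

        @dataclass
        class UncertaintyWindow:
            """Hypothetical record tying uncertainty to a rule, datum or fact."""
            source: str                 # e.g. "demand forecast", "link cost"
            kind: str                   # "rule", "data" or "fact"
            low: float                  # lower bound of the uncertain value
            high: float                 # upper bound of the uncertain value
            rank: int = 0               # relative importance within its kind
            assumptions: list = field(default_factory=list)

        def critical_windows(windows, width_threshold):
            """Return the windows whose spread most threatens design
            confidence, widest and highest-ranked first."""
            wide = [w for w in windows if w.high - w.low > width_threshold]
            return sorted(wide,
                          key=lambda w: (w.high - w.low) * (w.rank + 1),
                          reverse=True)

    Retaining the composition of each window in some such form is what would let the designer trace which sources of uncertainty dominated the final design.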

    A review on economic and technical operation of active distribution systems

    Along with the advent of restructuring in power systems, considerable integration of renewable energy resources has motivated the transition of traditional distribution networks (DNs) toward new active ones. Meanwhile, rapid technological advances have provided great potential for future bulk utilization of generation units as well as energy storage (ES) systems in the distribution section. This paper presents a comprehensive review of recent advancements in the operation of active distribution systems (ADSs) from the viewpoint of an operational time-hierarchy consisting of two stages. At the first stage, four major economic factors by which the operation of traditional passive DNs evolves into new active DNs are described. The second stage of the time-hierarchy covers technical management and power quality correction of ADSs over static, dynamic and transient periods. Finally, some modeling and control developments required for the optimal operation of ADSs are discussed. As opposed to previous review papers, potential applications of devices in the ADS are investigated with respect to their operational time-intervals. Since some of the compensating devices, storage units and generating sources may have different applications depending on the time scale of their utilization, this paper considers real-scenario system operation in which components of the network are first scheduled for a specified period ahead; deviations of their operating status from the reference points are then corrected during three time-intervals covering the static, dynamic and transient periods.

    The Random Forest Algorithm with Application to Multispectral Image Analysis

    The need for computers to make educated decisions is growing. Various methods have been developed for decision making using observation vectors, among them supervised and unsupervised classifiers. Recently, there has been increased attention to ensemble learning: methods that generate many classifiers and aggregate their results. Breiman (2001) proposed Random Forests for classification and clustering. The Random Forest algorithm is ensemble learning using the decision tree principle: input vectors are used to grow decision trees and build a forest, and a classification decision is reached by sending an unknown input vector down each tree in the forest and taking the majority vote among all trees. The main focus of this research is to evaluate the effectiveness of Random Forest in classifying pixels in multispectral image data acquired from satellites. In this paper the effectiveness and accuracy of Random Forest, neural network, support vector machine and nearest neighbor classifiers are assessed by classifying multispectral images and comparing each classifier's results. As unsupervised classifiers are also widely used, this research further compares the accuracy of an unsupervised Random Forest classifier with the Mahalanobis distance, maximum likelihood and minimum distance classifiers on multispectral satellite data.
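    As a concrete illustration of the supervised case described above, the sketch below classifies multispectral pixels (one row per pixel, one column per spectral band) with scikit-learn's RandomForestClassifier. The array shapes, band count and class labels are invented for the example, and the random data merely stands in for real satellite imagery.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X_train = rng.random((1000, 7))        # 1000 labelled pixels, 7 bands
        y_train = rng.integers(0, 4, 1000)     # 4 land-cover classes

        # Grow a forest of decision trees from the labelled pixels.
        forest = RandomForestClassifier(n_estimators=100, random_state=0)
        forest.fit(X_train, y_train)

        # An unknown pixel is sent down every tree; the forest's output
        # is the majority vote across all trees.
        X_unknown = rng.random((5, 7))
        print(forest.predict(X_unknown))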

    VLSI Design

    This book presents some recent advances in the design of nanometer VLSI chips. The selected topics cover open problems and challenges in areas ranging from design tools, new post-silicon devices, GPU-based parallel computing and emerging 3D integration to antenna design. The book consists of two parts, with chapters such as: VLSI design for multi-sensor smart systems on a chip, Three-dimensional integrated circuits design for thousand-core processors, Parallel symbolic analysis of large analog circuits on GPU platforms, Algorithms for CAD tools VLSI design, and A multilevel memetic algorithm for large SAT-encoded problems.

    Circumventing the fuzzy type reduction for autonomous vehicle controller

    Fuzzy type-2 controllers can readily handle system nonlinearity and utilise human expertise to solve many complex control problems; they are also very good at processing uncertainty, which exists in many robotic systems such as autonomous vehicles. However, their computational cost is high, especially at the type reduction stage. This research aims to reduce the computational cost of type reduction, so as to achieve faster performance and increase the number of actions that can be run on one microprocessor. Proposed here are adaptive integration principles with a binary successive search technique that locates the straight or semi-straight segments of a fuzzy set and uses them to compute the weighted average faster. This computation is important because it runs frequently in many type reduction methods. A variable adaptation rate is suggested during the type reduction iterations to reduce the computational cost further. The influence of the proposed approaches on the type-2 controller's error has been analysed mathematically and then measured experimentally using a wall-following behaviour, the most important action for many autonomous vehicles. The resulting execution time-gain of the proposed technique reaches 200% with respect to the execution time of the original, unmodified type reduction procedure.

    This study also develops a new accelerated version of the enhanced Karnik-Mendel type reducer, using better initialisations and a better indexing scheme; its time-gain reaches 170% with respect to the original version. A further cut in type reduction time is achieved by a proposed One-Go type reduction procedure, which reduces multiple sets together in one pass and thus eliminates much of the redundant calculation needed to reduce them individually. All the proposed type reduction enhancements were evaluated for execution time-gain and performance error using every possible fuzzy firing-level combination. Tests were then performed on a real autonomous vehicle navigating a relatively complex arena with acute, right, obtuse and reflex-angled corners, to cover a wide variety of operating conditions. A simplified state-hold technique using Schmitt-trigger principles and dynamic sense-pattern control was suggested and implemented to keep the rule base small and to obtain a more accurate evaluation of the type reduction stages.
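    The accelerated reducers themselves are not reproduced in the abstract, but the baseline they improve upon, the Karnik-Mendel iteration whose repeated weighted-average computation dominates the type reduction cost, can be sketched as follows. This is a minimal Python illustration under the usual assumptions (sorted consequent centroids, at least one non-zero firing level); the function name and tolerance are our own choices.

        import numpy as np

        def km_endpoint(x, f_low, f_high, left=True, tol=1e-9):
            """One endpoint of the type-reduced interval via the
            Karnik-Mendel iteration.
            x: sorted rule consequent centroids;
            f_low, f_high: lower/upper firing levels of each rule."""
            w = (f_low + f_high) / 2.0
            y = np.dot(x, w) / np.sum(w)
            while True:
                k = np.searchsorted(x, y)       # switch point
                below = np.arange(len(x)) < k
                # The left endpoint weights centroids below the switch
                # point with the upper firing levels (and vice versa
                # for the right endpoint).
                if left:
                    w = np.where(below, f_high, f_low)
                else:
                    w = np.where(below, f_low, f_high)
                y_new = np.dot(x, w) / np.sum(w)
                if abs(y_new - y) < tol:
                    return y_new
                y = y_new

    Each pass through the loop recomputes a weighted average over all rules, which is precisely the step the adaptive integration and One-Go techniques above aim to make cheaper.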