
    New fault-tolerant routing algorithms for k-ary n-cube networks

    The interconnection network is one of the most crucial components in a multicomputer, as it greatly influences overall system performance. Networks belonging to the family of k-ary n-cubes (e.g., tori and hypercubes) have been widely adopted in practical machines owing to their desirable properties, including low diameter, symmetry, regularity, and the ability to exploit the communication locality found in many real-world parallel applications. A routing algorithm specifies how a message selects a path from source to destination, and it has a great impact on network performance. Routing in fault-free networks has been studied extensively. As network size scales up, the probability of processor and link failure also increases, so it is essential to design fault-tolerant routing algorithms that allow messages to reach their destinations even in the presence of faulty components (links and nodes). Although many fault-tolerant routing algorithms have been proposed for common multicomputer networks, e.g. hypercubes and meshes, little research has been devoted to fault-tolerant routing for well-known instances of k-ary n-cubes, such as 2- and 3-dimensional tori. Previous work on fault-tolerant routing has focused on algorithms with strict conditions imposed on the number of faulty components (nodes and links) or their locations in the network. Most existing fault-tolerant routing algorithms assume that a node knows either only the status of its neighbours (a local-information-based model) or the status of all nodes (a global-information-based model). The main challenge is to devise a simple and efficient representation of limited global fault information that allows optimal or near-optimal fault-tolerant routing.
    This thesis proposes two new limited-global-information-based fault-tolerant routing algorithms for k-ary n-cubes: the unsafety vectors and probability vectors algorithms. While the first algorithm uses a deterministic approach, which has been widely employed by existing algorithms, the second is the first to use probability-based fault-tolerant routing. These two algorithms have two important advantages over those already in the literature. Both ensure fault tolerance under relaxed assumptions regarding the number of faulty components and their locations in the network. Furthermore, the new algorithms are more general in that they can easily be adapted to different topologies, including those that belong to the family of k-ary n-cubes (e.g. tori and hypercubes) and those that do not (e.g. generalised hypercubes and meshes).
    Since very little work has considered fault-tolerant routing in k-ary n-cubes, this study compares the relative performance merits of the two proposed algorithms, the unsafety and probability vectors, on these networks. The results reveal that for practical numbers of faulty nodes, both algorithms achieve good performance levels; the probability vectors algorithm, however, has the advantage of being simpler to implement. Since previous research has focused mostly on the hypercube, this study also adapts the new algorithms to the hypercube in order to conduct a comparative study against the recently proposed safety vectors algorithm. Results from extensive simulation experiments demonstrate that our algorithms achieve superior performance to the safety vectors algorithm in terms of reachability (the chance of a message reaching its destination), deviation from optimality (the average difference between the minimum and the actual routing distance), and looping (the chance of a message circulating in the network without reaching its destination).
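    To make the probability-vectors idea concrete, below is a minimal Python sketch for a binary hypercube (the k = 2 member of the k-ary n-cube family). It is an illustration under stated assumptions, not the thesis's algorithm: each node estimates, for every distance d, the probability that a destination d hops away is reachable, filling its vector from its healthy neighbours' estimates for distance d - 1, and a greedy router prefers the minimal-path neighbour reporting the best probability for the remaining distance. The update rule and the hop-limit loop guard are illustrative assumptions.

```python
def build_hypercube(n, faulty):
    """Nodes are integers 0..2^n - 1; neighbours differ in exactly one bit.
    'faulty' is a set of failed node ids (link faults are omitted here)."""
    nodes = [v for v in range(2 ** n) if v not in faulty]
    nbrs = {v: [v ^ (1 << i) for i in range(n) if v ^ (1 << i) not in faulty]
            for v in nodes}
    return nodes, nbrs

def probability_vectors(n, nodes, nbrs):
    """p[v][d] estimates the probability that a destination d hops from v is
    reachable: p[v][1] is the fraction of healthy neighbours, and each longer
    distance is averaged from the neighbours' estimates for distance d - 1
    (an illustrative update rule)."""
    p = {v: [1.0] + [0.0] * n for v in nodes}      # p[v][0] = 1: v reaches itself
    for v in nodes:
        p[v][1] = len(nbrs[v]) / n
    for d in range(2, n + 1):                      # fill distances in increasing order
        for v in nodes:
            p[v][d] = sum(p[u][d - 1] for u in nbrs[v]) / n
    return p

def route(src, dst, nbrs, p, n):
    """Greedy fault-tolerant routing: step to the healthy neighbour on a
    shortest path with the best probability of covering the remaining
    distance; detour when no minimal step exists."""
    dist = lambda a, b: bin(a ^ b).count("1")      # Hamming distance
    path, v = [src], src
    while v != dst:
        cand = [u for u in nbrs[v] if dist(u, dst) < dist(v, dst)] or nbrs[v]
        v = max(cand, key=lambda u: p[u][dist(u, dst)])
        path.append(v)
        if len(path) > 4 * n:                      # crude stand-in for loop prevention
            raise RuntimeError("message is looping")
    return path

nodes, nbrs = build_hypercube(4, faulty={3, 9})
p = probability_vectors(4, nodes, nbrs)
print(route(0, 15, nbrs, p, 4))                    # a 4-hop route avoiding the faults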

    Time series prediction using supervised learning and tools from chaos theory

    A thesis submitted to the Faculty of Science and Computing, University of Luton, in partial fulfilment of the requirements for the degree of Doctor of Philosophy.
    In this work, methods for performing time series prediction on complex real-world time series are examined; in particular, series exhibiting non-linear or chaotic behaviour are selected for analysis. A range of methodologies based on Takens' embedding theorem are considered and compared with more conventional methods. A novel combination of methods for determining the optimal embedding parameters is employed and tested on multivariate financial time series data and on a complex series derived from an experiment in biotechnology. The results show that this combination of techniques provides accurate results while dramatically reducing the time required to produce predictions and analyses, and eliminating a range of parameters that had hitherto been fixed empirically. The architecture and methodology of the prediction software developed are described, along with design decisions and their justification. Sensitivity analyses are employed to justify the use of this combination of methods, and comparisons are made with more conventional predictive techniques and with trivial predictors, demonstrating the superiority of the results generated by the work detailed in this thesis.
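    As a concrete illustration of the Takens-style approach described above, here is a small Python/NumPy sketch: delay embedding followed by one-step-ahead prediction by the method of analogues (averaging the successors of the nearest historical delay vectors). The embedding dimension m, delay tau, and neighbour count k are fixed by hand and the signal is synthetic; the thesis's contribution is precisely a way of determining such parameters automatically, which this sketch does not reproduce.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Takens-style delay embedding: rows are the vectors
    (x[t], x[t+tau], ..., x[t+(m-1)*tau])."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[j * tau : j * tau + n] for j in range(m)])

def predict_next(x, m=3, tau=1, k=5):
    """One-step-ahead prediction by the method of analogues: find the k
    historical delay vectors closest to the current one and average the
    values that followed them."""
    x = np.asarray(x, dtype=float)
    Y = delay_embed(x, m, tau)
    query, history = Y[-1], Y[:-1]      # the last vector's successor is unknown
    succ = x[(m - 1) * tau + 1:]        # succ[t] is the value following history row t
    d = np.linalg.norm(history - query, axis=1)
    nearest = np.argsort(d)[:k]
    return succ[nearest].mean()

# Example on a synthetic two-frequency signal.
t = np.linspace(0, 60, 3000)
x = np.sin(t) + 0.5 * np.sin(2.3 * t)
print(predict_next(x, m=4, tau=10, k=8))
```

    For chaotic series the same local scheme applies, but prediction accuracy decays with the forecast horizon, which is one reason the choice of embedding parameters matters so much.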

    Literature Review For Networking And Communication Technology

    This report documents the results of a literature search performed in the area of networking and communication technology.

    Joint University Program for Air Transportation Research, 1989-1990

    Research conducted during the academic year 1989-90 under the NASA/FAA-sponsored Joint University Program for Air Transportation Research is discussed. Completed works, status reports, and annotated bibliographies are presented for research topics including navigation, guidance and control theory and practice, aircraft performance, human factors, and expert systems concepts applied to airport operations. An overview of the year's activities for each university is also presented.

    State-of-the-art Assessment For Simulated Forces

    A summary of the review of the state of the art in simulated forces, conducted to support the research objectives of Research and Development for Intelligent Simulated Forces.

    Parallel algorithms for three dimensional electrical impedance tomography

    This thesis is concerned with Electrical Impedance Tomography (EIT), an imaging technique in which pictures of the electrical impedance within a volume are formed from current and voltage measurements made on the surface of the volume. The focus of the thesis is the mathematical and numerical aspects of reconstructing the impedance image from the measured data (the reconstruction problem). The reconstruction problem is mathematically difficult, and most reconstruction algorithms are computationally intensive. Many of the potential applications of EIT in medical diagnosis and industrial process control depend upon rapid reconstruction of images, so the aim of this investigation is to find algorithms and numerical techniques that lead to fast reconstruction while respecting the real mathematical difficulties involved.
    A general framework for Newton-based reconstruction algorithms is developed which describes a large number of the reconstruction algorithms used by other investigators. Optimal experiments are defined in terms of current-drive and voltage-measurement patterns, and it is shown that adaptive current reconstruction algorithms are a special case of their use. This leads to a new reconstruction algorithm using optimal experiments which is considerably faster than other methods of the Newton type.
    A tomograph is tested to measure the magnitude of the major sources of error in the data used for image reconstruction, and an investigation into the numerical stability of reconstruction algorithms identifies the resulting uncertainty in the impedance image. A new data-collection strategy and a numerical forward model are developed which minimise the effects of what were previously the major sources of error. Finally, a reconstruction program is written for a range of Multiple Instruction Multiple Data (MIMD) distributed-memory parallel computers. These machines promise high computational power at low cost, and so look promising as components in medical tomographs. The performance of several reconstruction algorithms on these computers is analysed in detail.
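    The Newton-based framework mentioned above reduces, in its simplest regularised form, to repeating one damped Gauss-Newton update. The sketch below shows that step in Python/NumPy; `forward` and `jacobian` are hypothetical placeholders for a finite-element forward model and its sensitivity matrix, and the Tikhonov damping `lam` is an assumed regularisation choice rather than the thesis's specific scheme (which exploits optimal current and measurement patterns).

```python
import numpy as np

def gauss_newton_step(sigma, forward, jacobian, v_meas, lam=1e-2):
    """One regularised Gauss-Newton update for the EIT reconstruction problem:

        sigma_new = sigma + (J^T J + lam*I)^(-1) J^T (v_meas - F(sigma))

    'forward' maps a conductivity vector to predicted boundary voltages and
    'jacobian' returns dF/dsigma; both stand in for a real forward model."""
    r = v_meas - forward(sigma)             # misfit between measured and predicted data
    J = jacobian(sigma)
    H = J.T @ J + lam * np.eye(J.shape[1])  # damped (Tikhonov) normal matrix
    return sigma + np.linalg.solve(H, J.T @ r)

# Toy check with a linear "forward model": one step solves it (up to damping).
rng = np.random.default_rng(0)
A = rng.normal(size=(208, 64))              # e.g. 208 measurements, 64 image pixels
true = rng.uniform(1.0, 2.0, size=64)
est = gauss_newton_step(np.ones(64), lambda s: A @ s, lambda s: A,
                        A @ true, lam=1e-6)
print(np.max(np.abs(est - true)))           # near zero for this linear toy problem
```

    In a distributed-memory (MIMD) implementation, the forward solves behind `forward` and `jacobian` and the dense linear algebra of the update dominate the cost, which is why they are the natural targets for parallelisation.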