
    Parallel computation on sparse networks of processors

    SIGLE LD:D48226/84 / BLDSC - British Library Document Supply Centre, United Kingdom

    Randomised shuffle and applied misinformation: An enhanced model for contact-based smart-card serial data transfer

    Contact-based smart-cards, which comply with the International Standard ISO 7816, communicate with their associated read/write machines via a single bi-directional serial link. This link is easy to monitor with inexpensive equipment and resources, enabling captured data to be removed for later examination. In many contact-based smart-cards the logical abilities are provided by eight-bit microcontroller units (MCUs), which are slow at performing effective cryptographic functions. Consequently, for expediency, much data may be transferred in plain text across the vulnerable communications link, further easing an eavesdropper's task. Practitioners in military communications protect transmitted information by varying a link's carrier frequency in an apparently random sequence that is shared secretly between the sender and the authorised receiver. These multiplexing techniques, known as frequency- or channel-hopping, serve to increase the task complexity for and/or confuse potential eavesdroppers. The study seeks to ascertain the applicability and value of the protection provided by channel-hopping techniques when applied, with minimal additional overhead of microcontroller resources, to the contact-based smart-card communications link. The apparent randomised shuffling of data transferred by these techniques has the potential benefit of deterring those observers who lack the equipment and expertise to capture and decode the communicated message.
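    The channel-hopping idea above can be approximated in software as a keyed shuffle of the transmitted byte order. The sketch below is illustrative only and is not part of ISO 7816; the function names and the use of Python's seeded PRNG as the shared secret are assumptions for the example.

```python
import random

def shuffled_order(n_bytes, shared_key):
    """Byte-transmission order derived from a secret seed shared by
    card and reader, so the permutation never crosses the link."""
    rng = random.Random(shared_key)
    order = list(range(n_bytes))
    rng.shuffle(order)
    return order

def send(plaintext, shared_key):
    """Emit the bytes of `plaintext` in the shuffled order."""
    order = shuffled_order(len(plaintext), shared_key)
    return bytes(plaintext[i] for i in order)

def receive(wire_bytes, shared_key):
    """Invert the permutation on the receiving side."""
    order = shuffled_order(len(wire_bytes), shared_key)
    out = bytearray(len(wire_bytes))
    for wire_pos, original_pos in enumerate(order):
        out[original_pos] = wire_bytes[wire_pos]
    return bytes(out)

msg = b"PIN verified OK"
scrambled = send(msg, shared_key=0xC0FFEE)
assert scrambled != msg                      # an eavesdropper sees a permuted stream
assert receive(scrambled, 0xC0FFEE) == msg   # the authorised reader recovers it
```

    Note that, as the abstract implies, this deters only casual observers: the shuffle is not cryptographically strong protection.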

    Analyzing Traffic and Multicast Switch Issues in an ATM Network.

    This dissertation attempts to solve two problems related to an ATM network. First, we consider packetized voice and video sources as the incoming traffic to an ATM multiplexer and propose modeling methods for both individual and aggregated traffic sources. These methods are then used to analyze performance parameters such as buffer occupancy, cell loss probability, and cell delay. The results obtained for different buffer sizes and numbers of voice and video sources are analyzed and compared with those generated from existing techniques. Second, we study the priority handling feature for time-critical services in an ATM multicast switch. For this, we propose a non-blocking copy network and priority handling algorithms. We then analyze the copy network using an analytical method and simulation. The analysis covers both priority and non-priority cells for two different output reservation schemes. The performance parameters, based on cell delay, delay jitter, and cell loss probability, are studied for different buffer sizes and fan-outs under various input traffic loads. Our results show that the proposed copy network provides better performance for the priority cells, while the performance for the non-priority cells is only slightly inferior compared with the scenario in which the network does not consider priority handling. We also study the fault-tolerant behavior of the copy network, especially for the broadcast banyan network subsection, and present a routing scheme that preserves the non-blocking property under a specific pattern of connection assignments. The fault-tolerant characteristics can be quantified using the full access probability. The computation of the full access probability for a general network is known to be NP-hard. We therefore provide a new bounding technique utilizing the concept of minimal cuts to compute the full access probability of the copy network. Our study of fault-tolerant multi-stage interconnection networks having either an extra stage or chaining shows that the proposed technique provides tighter bounds than those given by existing approaches. We also apply our bounding method to compute the full access probability of the fault-tolerant copy network.
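    The minimal-cut idea can be illustrated with the classical Esary-Proschan-style lower bound for a coherent system with independent components: the system works whenever no minimal cut has all of its links failed. This generic sketch is not the dissertation's actual bounding technique, and the link names and reliabilities are invented for the example.

```python
def min_cut_lower_bound(min_cuts, p):
    """Esary-Proschan style lower bound on system (full access) reliability.

    min_cuts : list of minimal cut sets, each a list of link names
    p        : dict mapping link name -> probability the link works
    """
    bound = 1.0
    for cut in min_cuts:
        all_fail = 1.0
        for link in cut:
            all_fail *= 1.0 - p[link]   # probability every link in the cut fails
        bound *= 1.0 - all_fail         # probability this cut does not disconnect
    return bound

# Two links in series: each link alone is a minimal cut; the bound is exact.
assert abs(min_cut_lower_bound([["a"], ["b"]], {"a": 0.9, "b": 0.9}) - 0.81) < 1e-12
# Two links in parallel: one minimal cut containing both; again exact.
assert abs(min_cut_lower_bound([["a", "b"]], {"a": 0.9, "b": 0.9}) - 0.99) < 1e-12
```

    For general networks the cuts share links, so the product is a lower bound rather than the exact value, which is what makes it useful when exact computation is NP-hard.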

    Formal Generation of Executable Assertions for Application-Oriented Fault Tolerance

    Executable assertions embedded into a distributed computing system can provide run-time assurance by ensuring that the program state, in the actual run-time environment, is consistent with the logical state specified in the assertions; if not, an error has occurred, and this diagnostic information is communicated reliably to the system so that reconfiguration and recovery can take place. Application-oriented fault tolerance is a method that provides fault detection using executable assertions based on the natural constraints of the application. This paper focuses on giving application-oriented fault tolerance a theoretical foundation by providing a mathematical model for the generation of executable assertions which detect faults in the presence of arbitrary failures. The mathematical model of choice was axiomatic program verification. A method was developed that translates a concurrent verification proof outline into an error-detecting concurrent program. The paper shows the application of the developed method to several examples.
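    As a toy illustration of an executable assertion derived from a proof outline's postcondition, consider checking a sort routine at run time. The example is sequential rather than concurrent, and names such as FaultDetected are hypothetical, not from the paper.

```python
class FaultDetected(Exception):
    """Signals that an executable assertion found the run-time state
    inconsistent with the verified specification."""

def postcondition_holds(xs, out):
    # Executable assertion derived from a sort's proof outline:
    # the output is ordered AND is a permutation of the input.
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    permutation = sorted(xs) == sorted(out)
    return ordered and permutation

def checked_sort(xs, sort_impl):
    out = sort_impl(xs)
    if not postcondition_holds(xs, out):
        # In a distributed system this diagnostic would be reliably
        # communicated so that reconfiguration and recovery can start.
        raise FaultDetected(f"sort violated its specification: {out!r}")
    return out

assert checked_sort([3, 1, 2], sorted) == [1, 2, 3]
```

    A faulty implementation, e.g. one that drops an element, fails the permutation check and raises FaultDetected instead of silently propagating a corrupted state.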

    Double Loop Interconnection Networks With Minimal Transmission Delay.

    The interconnection network is a critical component in massively parallel architectures and in large communication networks. An important criterion in evaluating such networks is their transmission delay, which is determined to a large extent by the diameter of the underlying graph. The loop network is popular due to its simplicity, symmetry and expandability. By adding chords to the loop, the diameter and reliability are improved. In this work we deal with the problem of minimizing the diameter of double loop networks, which model various communication networks and also the Illiac-type Mesh Connected Computer. A double loop network (also known as a circulant), G(n,h), consists of a loop of n vertices where each vertex i is also joined by chords to the vertices i ± h (mod n). D*_n, the minimal diameter of G(n,h), is bounded below by k if n ∈ R(k) = {2k² − 2k + 2, ..., 2k² + 2k + 1}. An integer n, a hop h and a network G(n,h) are called optimal (suboptimal) if Diam G(n,h) = D*_n = k (k + 1). We determine new infinite families of optimal values of n, which considerably improve previously known results. These families are of several different types and cover more than 94% of all values of n up to ~8,000,000. We conjecture that all values of n are either optimal or suboptimal. Our analysis leads to the construction of an algorithm that detects optimal and suboptimal values of n. When run on a SUN workstation, it confirmed our conjecture within ~60 minutes for all values of n up to ~8,000,000. Optimal (suboptimal) hops, corresponding to optimal (suboptimal) values of n, are provided by a simple construction.
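    Since G(n,h) is vertex-transitive, its diameter can be computed with a single BFS from vertex 0, and an optimal hop found by exhaustive search. A minimal sketch (function names assumed, not from the dissertation):

```python
from collections import deque

def diameter(n, h):
    """Diameter of the double loop network G(n, h): vertex i is
    adjacent to i +/- 1 and i +/- h (mod n)."""
    dist = [-1] * n
    dist[0] = 0          # vertex-transitive, so BFS from vertex 0 suffices
    q = deque([0])
    while q:
        v = q.popleft()
        for w in ((v + 1) % n, (v - 1) % n, (v + h) % n, (v - h) % n):
            if dist[w] < 0:
                dist[w] = dist[v] + 1
                q.append(w)
    return max(dist)

def best_hop(n):
    """Exhaustive search for a hop that minimizes the diameter."""
    return min(range(2, n // 2 + 1), key=lambda h: diameter(n, h))

# n = 25 is the largest element of R(3) = {14, ..., 25}, so D*_25 >= 3;
# the hop h = 7 attains diameter 3, hence n = 25 is optimal.
assert diameter(25, 7) == 3
```

    Within d hops a vertex reaches at most 2d² + 2d + 1 others, which is exactly where the lower bound k for n ∈ R(k) comes from.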

    Multiscale Modeling and Gaussian Process Regression for Applications in Composite Materials

    An ongoing challenge in advanced materials design is the development of accurate multiscale models that consider uncertainty while establishing a link between knowledge about constituent materials and overall composite properties. Successful models can accurately predict composite properties, reducing the high financial and labor costs associated with experimental determination and accelerating material innovation. Whereas early pioneers in micromechanics developed simplistic theoretical models to map these relationships, modern advances in computer technology have enabled detailed simulators capable of accurately predicting complex and multiscale phenomena. This work advances domain knowledge via two means: firstly, through the development of high-fidelity, physics-based finite element (FE) models of composite microstructures that incorporate uncertainty in predictions, and secondly, through the development of a novel inverse analysis framework that enables the discovery of unknown or obscure constituent properties using literature data and Gaussian process (GP) surrogate models trained on FE model predictions. This work presents a generalizable approach to modeling a diverse array of composite subtypes, from a simple particulate system to a complex commercial composite. The inverse analysis framework was demonstrated for a thermoplastic composite reinforced by spherical fillers with unknown interphase properties. The framework leverages computer model simulations with easily obtainable macroscale elastic property measurements to infer interphase properties that are otherwise challenging to measure. The interphase modulus and thickness were determined for six different thermoplastic composites; four were reinforced by micron-scale particles and two by nano-scale particles.
    An alginate fiber embedded with a helically symmetric arrangement of cellulose nanocrystals (CNCs) was investigated using multiscale FE analysis to quantify microstructural uncertainty and its subsequent effect on macroscopic behavior. The macroscale uniaxial tensile simulation revealed that the microstructure induces internal stresses sufficient to rotate or twist the fiber about its axis. The reduction in axial elastic modulus with increasing CNC spiral angle was quantified in a sensitivity analysis using a GP surrogate modeling approach. A predictive model using GP regression was employed to investigate the link between input features and the mechanical properties of fiberglass-reinforced magnesium oxychloride (MOC) cement boards produced from a commercial process. The model evaluated the effect of formulation, crystalline phase compositions, and process control parameters on various mechanical performance metrics.
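    The surrogate idea can be sketched as the posterior mean of a zero-mean GP with an RBF kernel, trained on simulator outputs and then queried cheaply. The toy quadratic "simulator" below is an assumption standing in for an expensive FE model, and the hyperparameters are illustrative.

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=0.3, signal_var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    sq = (x1[:, None] - x2[None, :]) ** 2
    return signal_var * np.exp(-0.5 * sq / length_scale ** 2)

def gp_mean(x_train, y_train, x_test, noise_var=1e-6):
    """Posterior mean of a zero-mean GP surrogate at the test inputs."""
    K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_star = rbf_kernel(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# Stand-in for an expensive FE simulation (assumed toy response).
simulator = lambda x: x ** 2

x_train = np.linspace(0.0, 1.0, 15)     # 15 "simulation runs"
y_train = simulator(x_train)
pred = gp_mean(x_train, y_train, np.array([0.5]))[0]
assert abs(pred - 0.25) < 1e-3          # surrogate reproduces the simulator
```

    In an inverse analysis, a surrogate like this replaces the FE model inside the search loop, so candidate interphase properties can be evaluated thousands of times at negligible cost.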

    Parallel and Distributed Computing

    The 14 chapters presented in this book cover a wide variety of representative works ranging from hardware design to application development. In particular, the topics addressed are programmable and reconfigurable devices and systems, dependability of GPUs (graphics processing units), network topologies, cache coherence protocols, resource allocation, scheduling algorithms, peer-to-peer networks, large-scale network simulation, and parallel routines and algorithms. In this way, the articles included in this book constitute an excellent reference for engineers and researchers who have particular interests in each of these topics in parallel and distributed computing.

    Application of Genetic Algorithm in Multi-objective Optimization of an Indeterminate Structure with Discontinuous Space for Support Locations

    In this thesis, an indeterminate structure was developed with multiple competing objectives, including equalizing the load distribution among the supports while maximizing the stability of the structure. Two different coding algorithms, named the “Continuous Method” and the “Discretized Method”, were used to find the optimal support locations using Genetic Algorithms (GAs). In the continuous method, a continuous solution space was considered to find the optimal support locations. The failure of this method to converge to an acceptable optimal solution led to the development of the second method. The latter approach divided the solution space into rectangular grids, and the GA acted on the index numbers of the nodal points to converge to optimality. The average value of the objective function in the discretized method was found to be 0.147, almost one-third of that obtained by the continuous method. A comparison based on the individual components of the objective function also showed that the proposed method outperformed the continuous method. The discretized method also showed faster convergence to the optima. Three circular discontinuities were added to the structure to make it more realistic, and three different penalty functions, named flat, linear and non-linear, were used to handle the constraints. The performance of the two methods was observed with the penalty functions while increasing the radius of the circles by 25% and 50%, which showed no significant difference. Later, the discretized method was coded to eliminate the discontinuous area from the solution space, which made the application of the penalty functions redundant. A paired t-test (α = 5%) showed no statistical difference between these two methods. Finally, to make the proposed method compatible with irregularly shaped discontinuous areas, the “FEA Integrated Coded Discretized Method (FEAICDM)” was developed. The manual elimination of the infeasible areas from the candidate surface was replaced by the nodal points of the mesh generated by SolidWorks. A paired t-test (α = 5%) showed no statistical difference between these two methods. Though FEAICDM was applied only to one class of problem, it can be concluded that FEAICDM is more robust and efficient than the continuous method for this class of constrained optimization problems.
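    The key idea of the discretized method, letting the GA act on node indices of a grid from which the discontinuity has already been removed so that no penalty function is needed, can be sketched as follows. The grid size, the circular hole, and the objective are toy assumptions, not the thesis's actual structure.

```python
import random

# Toy candidate surface: a 20x20 grid of nodal points with one circular
# discontinuity removed up front (grid, hole and objective are assumed).
GRID = 20

def feasible(ix, iy):
    return (ix - 10) ** 2 + (iy - 10) ** 2 > 9   # exclude the circular hole

NODES = [(ix, iy) for ix in range(GRID) for iy in range(GRID) if feasible(ix, iy)]

def objective(node):
    ix, iy = node
    return (ix - 3) ** 2 + (iy - 17) ** 2   # stand-in for the load/stability score

def ga(pop_size=30, gens=60, seed=1):
    """Elitist GA whose chromosome is simply an index into NODES, so
    every candidate is feasible by construction (no penalty needed)."""
    rng = random.Random(seed)
    pop = [rng.randrange(len(NODES)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda i: objective(NODES[i]))
        elite = pop[:pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            child = rng.choice(elite)        # selection from the elite
            if rng.random() < 0.3:           # mutation: small jump in index space
                child = min(max(child + rng.randint(-10, 10), 0), len(NODES) - 1)
            children.append(child)
        pop = elite + children
    best = min(pop, key=lambda i: objective(NODES[i]))
    return NODES[best], objective(NODES[best])
```

    Because infeasible nodes never enter NODES, the GA cannot propose a support location inside a discontinuity, which is precisely what made the penalty functions redundant.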

    Hypercube-Based Topologies With Incremental Link Redundancy.

    Hypercube structures have received a great deal of attention due to the attractive properties inherent to their topology. Parallel algorithms targeted at this topology can be partitioned into many tasks, each of which runs on one node processor. A high degree of performance is achievable by running every task individually and concurrently on each node processor available in the hypercube. Nevertheless, performance can be greatly degraded if the node processors spend much time just communicating with one another. The goal in designing hypercubes is, therefore, to achieve a high ratio of computation time to communication time. This dissertation primarily addresses ways to enhance system performance by minimizing the communication time among processors. The need for improving the performance of hypercube networks is clearly explained, and three novel topologies related to hypercubes with improved performance are proposed and analyzed. Firstly, the Bridged Hypercube (BHC) is introduced. It is shown that this design is remarkably more efficient and cost-effective than the standard hypercube due to its low diameter. Basic routing algorithms such as one-to-one routing and broadcasting are developed for the BHC and proven optimal. Shortcomings of the BHC, such as its asymmetry and limited application, are discussed. Next, the Folded Hypercube (FHC), a symmetric network with low diameter and low node degree, is introduced. This new topology is shown to support highly efficient communication among the processors. For the FHC, optimal routing algorithms are developed and proven to be remarkably more efficient than those of the conventional hypercube. For both the BHC and the FHC, network parameters such as average distance, message traffic density, and communication delay are derived and comparatively analyzed. Lastly, to enhance the fault tolerance of the hypercube, a new design called the Fault Tolerant Hypercube (FTH) is proposed. The FTH is shown to exhibit graceful degradation in performance in the presence of faults. Probabilistic models based on Markov chains are employed to characterize the fault tolerance of the FTH, and the results are verified by Monte Carlo simulation. The most attractive feature of all the new topologies is the asymptotically zero overhead associated with them. The designs are simple and implementable, and they lend themselves to many parallel processing applications requiring a high degree of performance.
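    Assuming the standard definition of the folded hypercube (an n-dimensional hypercube plus one extra link from each node to its bitwise complement), the diameter drops from n to ⌈n/2⌉, which can be checked by a BFS from node 0 since both networks are vertex-transitive. A minimal sketch:

```python
from collections import deque

def diameter(n_bits, folded=False):
    """BFS diameter of the n_bits-dimensional hypercube, optionally with
    the folded hypercube's extra complementary link at every node."""
    n = 1 << n_bits
    mask = n - 1
    dist = {0: 0}            # vertex-transitive, so BFS from node 0 suffices
    q = deque([0])
    while q:
        v = q.popleft()
        nbrs = [v ^ (1 << b) for b in range(n_bits)]
        if folded:
            nbrs.append(v ^ mask)   # link to the bitwise complement
        for w in nbrs:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return max(dist.values())

assert diameter(6) == 6                 # ordinary hypercube: diameter n
assert diameter(6, folded=True) == 3    # folded hypercube: ceil(n / 2)
```

    Intuitively, whenever a destination differs in more than half the address bits, a message first crosses the complementary link and then corrects the remaining bits.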