
    Cooperative Modeling and Design History Tracking Using Design Tracking Matrix

    This thesis proposes a new framework for cooperative modeling that supports a concurrency design protocol with a design history tracking function. The proposed framework allows designers to work together while eliminating design conflicts and redundancies and preventing infeasible designs. It provides methods to track the optimal design path and redundant design history across the overall design process. The cooperative modeling architecture consists of a modeling server and a voxel-based multi-client design tool. Design changes between the server and multiple clients are executed using the proposed concurrency design protocol. Design steps are tracked and analyzed using the Design Tracking Graph and Design Tracking Matrix (DTM), which provide a design data exchange algorithm allowing seamless integration of design modifications between participating designers. The framework supports effective cooperative modeling and helps identify and eliminate conflicts and minimize delay. The proposed algorithm supports three cooperative design functions. First, it provides a method to obtain the optimal design path, which can be stored in a design library and reused in future designs. Second, it helps capture modeling patterns that can be used to analyze a designer's performance. Finally, the redundancies it identifies can be used to evaluate a designer's design efficiency.
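The redundancy-tracking idea above can be illustrated with a minimal sketch. This is not the thesis's actual DTM structure (the real framework also covers the concurrency protocol and the Design Tracking Graph); all class and method names here are hypothetical, assuming each design step is one edit by one designer to one voxel region.

```python
# Hedged sketch of design-history tracking: a step is "redundant" if a later
# step overwrites the same region, so it never reaches the final design.
# Names and data layout are illustrative assumptions, not the thesis's.

class DesignTrackingMatrix:
    def __init__(self):
        self.steps = []  # chronological list of (designer, region, value)

    def record(self, designer, region, value):
        self.steps.append((designer, region, value))

    def redundant_steps(self):
        # Index of the last write to each region; earlier writes are redundant.
        last_writer = {}
        for i, (_, region, _) in enumerate(self.steps):
            last_writer[region] = i
        return [i for i, (_, region, _) in enumerate(self.steps)
                if last_writer[region] != i]

    def optimal_path(self):
        # The surviving steps, in original order: a candidate "optimal path"
        # that could be stored in a design library for reuse.
        redundant = set(self.redundant_steps())
        return [s for i, s in enumerate(self.steps) if i not in redundant]

dtm = DesignTrackingMatrix()
dtm.record("alice", (0, 0), 1)
dtm.record("bob", (0, 0), 2)    # overwrites alice's edit -> step 0 redundant
dtm.record("alice", (1, 0), 3)
print(dtm.redundant_steps())    # [0]
print(dtm.optimal_path())
```

Counting redundant steps per designer is one way such a structure could feed the efficiency evaluation the abstract mentions.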

    An FT-CORBA-based infrastructure for the development of reliable distributed applications

    Advisor: Elias Procópio Duarte Jr. Master's dissertation, Universidade Federal do Paraná. Abstract: The CORBA standard (Common Object Request Broker Architecture) allows the construction of open distributed systems based on the object-oriented paradigm. The CORBA fault tolerance specification (FT-CORBA, Fault Tolerant CORBA) aims to support applications that require reliability. In this work we present the implementation of an FT-CORBA-based infrastructure that allows the construction of reliable distributed applications based on groups of replicated servers. Two approaches for monitoring the replicas were implemented.
    In the first approach, only the primary server of each group is periodically monitored; only if the primary replica fails are the other replicas monitored, and a new primary server is then elected for the object group. In the second approach, monitoring is performed periodically for all objects of all groups, and a faulty replica is simply excluded when a new primary member is elected for an object group. Experimental results show that the choice of monitoring method should follow an evaluation of the impact of each strategy, considering the total number of monitored replicas as well as the available network bandwidth.
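The trade-off between the two monitoring strategies can be sketched as a simple message count. This assumes ping-style liveness checks; FT-CORBA's actual FaultDetector/FaultNotifier machinery is richer, and the function names below are illustrative only.

```python
# Hedged sketch: rough monitoring-message counts for the two strategies.
# Strategy 1 pings only each group's primary; full-group scans happen only
# when a primary fails. Strategy 2 pings every replica every round.

def messages_primary_only(groups, rounds, failures=0):
    # One ping per group per round, plus one scan of the backups for each
    # group whose primary failed (here: the first `failures` groups).
    scans_on_failure = sum(len(g) - 1 for g in groups[:failures])
    return len(groups) * rounds + scans_on_failure

def messages_all_replicas(groups, rounds):
    # One ping per replica per round, across all groups.
    return sum(len(g) for g in groups) * rounds

groups = [["p1", "b1", "b2"], ["p2", "b3", "b4"]]  # primary + backups
print(messages_primary_only(groups, rounds=10))    # 20 pings, no failures
print(messages_all_replicas(groups, rounds=10))    # 60 pings
```

As the abstract concludes, which strategy wins depends on the total number of replicas and the available bandwidth: strategy 1 sends far fewer messages but pays extra scans (and failover latency) on each primary failure.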

    A distributed approach based on the Chinese Postman algorithm for the diagnosis of arbitrary-topology networks

    Advisor: Elias Procópio Duarte Jr. Master's dissertation, Universidade Federal do Paraná. Abstract: This work presents a new algorithm for the distributed diagnosis of arbitrary-topology networks based on the Chinese Postman algorithm. A mobile agent, i.e., a process that executes on and is transmitted between the nodes of the network, visits all links sequentially, following the path generated by the Chinese Postman algorithm.
    The agent, called the Chinese Agent, tests the links, detecting new events, and disseminates event information to the rest of the network. When all nodes of the system have received the information about an event, the diagnosis is complete. This work assumes that faults do not partition the network, and that a new event only occurs after the previous event has been diagnosed. Rigorous proofs of the best-case and worst-case latency of the algorithm, i.e., the time required to complete a diagnosis, are presented. Experimental results are also presented, obtained by simulating the algorithm on different topologies: hypercubes with 16, 64, and 128 nodes; the D1,2 graph with 9 nodes; a random graph with 50 nodes and link probability equal to 10%; and the Brazilian National Research Network (RNP) topology. A single link fault is simulated in each graph, and both the number of messages and the algorithm's latency are measured. The results show that the time needed to complete the diagnosis is, on average, smaller than the worst case, and a comparison with other similar algorithms shows that the number of messages generated by the proposed algorithm is frequently smaller than the number they require.
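The agent's test-and-disseminate loop can be sketched as follows, assuming the closed walk covering every link has already been computed (finding that walk is an Eulerian-circuit problem on an augmented graph and is omitted here). All names are illustrative; the real agent circulates continuously rather than for a fixed number of laps.

```python
# Hedged sketch of the Chinese Agent: follow the precomputed walk edge by
# edge, test each link, and hand every node the agent arrives at the set of
# link states observed so far. Two laps suffice here for one quiescent
# event set: lap 1 collects all events, lap 2 spreads them to every node.

def run_chinese_agent(walk, faulty_links):
    known = {}    # node -> link events the node has learned
    events = set()
    for _lap in range(2):
        for u, v in zip(walk, walk[1:]):
            link = tuple(sorted((u, v)))
            state = "faulty" if frozenset(link) in faulty_links else "ok"
            events.add((link, state))
            known.setdefault(v, set()).update(events)
    return known

# A closed walk covering all links of a triangle network 0-1-2.
walk = [0, 1, 2, 0]
diagnosis = run_chinese_agent(walk, faulty_links={frozenset((1, 2))})
# Diagnosis is complete once every node holds all three link events.
print(all(len(ev) == 3 for ev in diagnosis.values()))  # → True
```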

    New fault-tolerant routing algorithms for k-ary n-cube networks

    The interconnection network is one of the most crucial components in a multicomputer, as it greatly influences overall system performance. Networks belonging to the family of k-ary n-cubes (e.g., tori and hypercubes) have been widely adopted in practical machines due to their desirable properties, including a low diameter, symmetry, regularity, and the ability to exploit the communication locality found in many real-world parallel applications. A routing algorithm specifies how a message selects a path from source to destination, and has a great impact on network performance. Routing in fault-free networks has been extensively studied in the past. As the network size scales up, the probability of processor and link failure also increases. It is therefore essential to design fault-tolerant routing algorithms that allow messages to reach their destinations even in the presence of faulty components (links and nodes). Although many fault-tolerant routing algorithms have been proposed for common multicomputer networks, e.g. hypercubes and meshes, little research has been devoted to developing fault-tolerant routing for well-known versions of k-ary n-cubes, such as 2- and 3-dimensional tori. Previous work on fault-tolerant routing has focused on designing algorithms with strict conditions imposed on the number of faulty components (nodes and links) or their locations in the network. Most existing fault-tolerant routing algorithms have assumed that a node knows either only the status of its neighbours (a local-information-based model) or the status of all nodes (global-information-based). The main challenge is to devise a simple and efficient way of representing limited global fault information that allows optimal or near-optimal fault-tolerant routing. This thesis proposes two new limited-global-information-based fault-tolerant routing algorithms for k-ary n-cubes, namely the unsafety vectors and probability vectors algorithms.
    While the first algorithm uses a deterministic approach, which has been widely employed by other existing algorithms, the second is the first to use probability-based fault-tolerant routing. These two algorithms have two important advantages over those in the existing literature. Both ensure fault tolerance under relaxed assumptions regarding the number of faulty components and their locations in the network. Furthermore, the new algorithms are more general in that they can easily be adapted to different topologies, including those that belong to the family of k-ary n-cubes (e.g., tori and hypercubes) and those that do not (e.g., generalised hypercubes and meshes). Since very little work has considered fault-tolerant routing in k-ary n-cubes, this study compares the relative performance merits of the two proposed algorithms, the unsafety and probability vectors, on these networks. The results reveal that for practical numbers of faulty nodes, both algorithms achieve good performance levels. However, the probability vectors algorithm has the advantage of being simpler to implement. Since previous research has focused mostly on the hypercube, this study adapts the new algorithms to the hypercube in order to conduct a comparative study against the recently proposed safety vectors algorithm. Results from extensive simulation experiments demonstrate that our algorithms exhibit performance superior to the safety vectors algorithm in terms of reachability (the chance of a message reaching its destination), deviation from optimality (the average difference between the minimum distance and the actual routing distance), and looping (the chance of a message looping continuously in the network without reaching its destination).
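The flavour of fault-tolerant routing on a hypercube (the k=2 member of the family) can be sketched with a greatly simplified greedy stand-in: forward via the non-faulty neighbour that most reduces the remaining distance. This does not reproduce the thesis's unsafety or probability vectors; it only illustrates the routing problem those vectors refine, and all names are assumptions.

```python
# Hedged sketch: greedy fault-avoiding routing on an n-dimensional hypercube.
# Node addresses are n-bit integers; neighbours differ in exactly one bit.

def hamming(a, b):
    return bin(a ^ b).count("1")

def next_hop(node, dest, n, faulty):
    """Pick the non-faulty neighbour closest (in Hamming distance) to dest;
    the thesis's vectors would instead score neighbours by estimated
    reachability, tolerating faults beyond what greedy choice handles."""
    best, best_d = None, None
    for dim in range(n):
        nb = node ^ (1 << dim)
        if nb in faulty:
            continue
        d = hamming(nb, dest)
        if best_d is None or d < best_d:
            best, best_d = nb, d
    return best

def route(src, dest, n, faulty, limit=32):
    path = [src]
    while path[-1] != dest and len(path) <= limit:
        path.append(next_hop(path[-1], dest, n, faulty))
    return path

# 3-cube: route 000 -> 111 around faulty node 001.
print(route(0b000, 0b111, 3, faulty={0b001}))
```

Greedy choice like this can still dead-end or loop for unfortunate fault placements, which is exactly the gap that limited-global-information schemes such as the probability vectors aim to close.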

    Privacy-preserving information hiding and its applications

    The phenomenal advances in cloud computing technology have raised concerns about data privacy. Aided by modern cryptographic techniques such as homomorphic encryption, it has become possible to carry out computations in the encrypted domain and process data without compromising information privacy. In this thesis, we study various classes of privacy-preserving information hiding schemes and their real-world applications in cyber security, cloud computing, the Internet of things, etc. Data breach is recognised as one of the most dreadful cyber security threats, in which private data is copied, transmitted, viewed, stolen, or used by unauthorised parties. Although encryption can obfuscate private information against unauthorised viewing, it may not stop data from illegitimate exportation. Privacy-preserving information hiding can serve as a potential solution to this issue: a permission code is embedded into the encrypted data and can be detected when transmissions occur. Digital watermarking is a technique that has been used for a wide range of intriguing applications such as data authentication and ownership identification. However, some of the algorithms are proprietary intellectual property and thus their availability to the general public is rather limited. A possible solution is to outsource the task of watermarking to an authorised cloud service provider that has both the legitimate right to execute the algorithms and high computational capacity. Privacy-preserving information hiding is well suited to this scenario since it operates in the encrypted domain and hence prevents private data from being collected by the cloud. The Internet of things is a promising technology for the healthcare industry. A common framework consists of wearable devices for monitoring the health status of an individual, a local gateway device for aggregating the data, and a cloud server for storing and analysing the data.
    However, there are risks that an adversary may attempt to eavesdrop on the wireless communication, attack the gateway device, or even gain access to the cloud server. Hence, it is desirable to produce and encrypt the data simultaneously and to incorporate secret sharing schemes to realise access control. Privacy-preserving secret sharing is a novel line of research for fulfilling this function. In summary, this thesis presents novel schemes and algorithms, including:
    • two privacy-preserving reversible information hiding schemes based upon symmetric cryptography, using arithmetic of quadratic residues and lexicographic permutations, respectively;
    • two privacy-preserving reversible information hiding schemes based upon asymmetric cryptography, using multiplicative and additive privacy homomorphisms, respectively;
    • four predictive models for assisting the removal of distortions inflicted by information hiding, based respectively upon the projection theorem, image gradient, total variation denoising, and Bayesian inference;
    • three privacy-preserving secret sharing algorithms with different levels of generality.
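The secret-sharing building block mentioned above can be illustrated with the simplest possible scheme: XOR-based n-out-of-n sharing. This toy is not one of the thesis's algorithms (those support more general access structures and privacy-preserving operation); it only shows the access-control idea that all shares are required to recover the data.

```python
import secrets

# Hedged sketch: n-out-of-n secret sharing via XOR. Any n-1 shares are
# uniformly random and reveal nothing; all n together recover the secret.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret, n):
    # n-1 random shares; the last share is the secret XORed with all of
    # them, so the XOR of all n shares equals the secret.
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def reconstruct(shares):
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out

shares = split(b"sensor reading", 3)
print(reconstruct(shares) == b"sensor reading")  # → True
```

In the IoT setting sketched in the abstract, the shares could be held by the wearable, the gateway, and the cloud, so no single compromised party learns the data.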

    Faculty Publications & Presentations, 2008-2009


    Performance analysis of wormhole routing in multicomputer interconnection networks

    Perhaps the most critical component in determining the ultimate performance potential of a multicomputer is its interconnection network, the hardware fabric supporting communication among individual processors. The message latency and throughput of such a network are affected by many factors, of which topology, switching method, routing algorithm, and traffic load are the most significant. In this context, the present study focuses on a performance analysis of k-ary n-cube networks employing wormhole switching, virtual channels, and adaptive routing, a scenario of special interest to current research. This project aims to build upon earlier work in two main ways: constructing new analytical models for k-ary n-cubes, and comparing the performance merits of cubes of different dimensionality. To this end, some important topological properties of k-ary n-cubes are explored initially; in particular, expressions are derived to calculate the number of nodes at/within a given distance from a chosen centre. These results are important in their own right, but their primary significance here is to assist in the construction of new and more realistic analytical models of wormhole-routed k-ary n-cubes. An accurate analytical model for wormhole-routed k-ary n-cubes with adaptive routing and uniform traffic is then developed, incorporating the use of virtual channels and the effect of locality in the traffic pattern. New models are constructed for wormhole k-ary n-cubes, with the ability to simulate behaviour under adaptive routing and non-uniform communication workloads, such as hotspot traffic, matrix-transpose, and digit-reversal permutation patterns. The models are equally applicable to unidirectional and bidirectional k-ary n-cubes and are significantly more realistic than any in use up to now. With this level of accuracy, the effect of each important network parameter on the overall network performance can be investigated in a more comprehensive manner than before.
    Finally, k-ary n-cubes of different dimensionality are compared using the new models. The comparison takes account of various traffic patterns and implementation costs, using both pin-out and bisection bandwidth as metrics. Networks with both normal and pipelined channels are considered. While previous similar studies have only taken account of network channel costs, our model incorporates router costs as well, thus generating more realistic results. In fact, the results of this work differ markedly from those yielded by earlier studies which assumed deterministic routing and uniform traffic, illustrating the importance of using accurate models to conduct such analyses.
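The abstract mentions deriving expressions for the number of nodes at or within a given distance from a chosen centre. The quantity itself is easy to pin down by brute-force enumeration, which such closed-form expressions must reproduce; this sketch assumes bidirectional wraparound (torus) links so that distance along each dimension is the shorter way around a k-node ring.

```python
from itertools import product

# Hedged sketch: count the nodes of a k-ary n-cube at each distance from
# the node (0, ..., 0), using the torus (wraparound) distance per dimension.

def ring_distance(a, b, k):
    d = abs(a - b)
    return min(d, k - d)  # shorter way around a k-node ring

def nodes_at_distance(k, n):
    centre = (0,) * n
    counts = {}
    for node in product(range(k), repeat=n):
        d = sum(ring_distance(a, b, k) for a, b in zip(node, centre))
        counts[d] = counts.get(d, 0) + 1
    return counts

# 4-ary 2-cube (a 4x4 torus): 16 nodes, diameter 4.
print(nodes_at_distance(4, 2))  # {0: 1, 1: 4, 2: 6, 3: 4, 4: 1}
```

In an analytical latency model, these counts feed directly into the expected message distance and channel traffic under uniform load.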

    Efficient Passive Clustering and Gateway Selection in MANETs

    Passive clustering collects topological information in ad hoc networks without employing control packets. In our proposal, we avoid frequent changes in the cluster architecture caused by repeated election and re-election of cluster heads and gateways. Our primary objective is to make passive clustering more practical by employing an optimal number of gateways and reducing the number of rebroadcast packets.
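Why clustering reduces rebroadcasts can be shown with a small sketch: in plain flooding every node rebroadcasts a packet once, whereas with clustering only cluster heads and gateways forward it. The roles below are assigned by hand purely for illustration; passive clustering infers them from ongoing traffic without control packets, and all names are assumptions.

```python
# Hedged sketch: flooding where only designated forwarders (cluster heads
# and gateways) rebroadcast. Ordinary members receive but stay silent.

def flood(adj, src, forwarders):
    seen, queue, rebroadcasts = {src}, [src], 0
    while queue:
        node = queue.pop(0)
        if node != src and node not in forwarders:
            continue  # ordinary member: receives the packet, no rebroadcast
        rebroadcasts += 1
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return seen, rebroadcasts

# Two clusters bridged by a gateway: heads h1, h2; gateway g; members m*.
adj = {
    "h1": ["m1", "m2", "g"], "m1": ["h1"], "m2": ["h1"],
    "g": ["h1", "h2"],
    "h2": ["g", "m3", "m4"], "m3": ["h2"], "m4": ["h2"],
}
covered, tx = flood(adj, "m1", forwarders={"h1", "h2", "g"})
print(len(covered) == len(adj), tx)  # full coverage with 4 transmissions
```

Plain flooding on the same 7-node network would cost 7 transmissions; keeping the forwarder set small without losing coverage is exactly the gateway-selection problem the abstract targets.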