An Enhanced Bully Algorithm for Electing a Coordinator in Distributed Systems
In a distributed system, a large, complex task is divided into subtasks that are distributed among processes, and coordination among the processes is done via message passing. Proper coordination and functioning require a leader (coordinator) node that acts as a centralized control point. Leader election is one of the most challenging tasks in a distributed system because the leader is not always the same node: the current leader may crash or otherwise go out of service. Numerous algorithms have been proposed to elect a new leader, each using a different technique. The Bully election algorithm is one of the traditional approaches: the node with the highest ID is elected as the leader. However, the algorithm requires a large number of messages to elect a leader, which imposes heavy network traffic, complicates message passing, and increases election time. In this paper, we introduce a new approach that overcomes this drawback of the existing Bully election algorithm. Our proposed algorithm is an enhanced version of the Bully election algorithm, and our analytical results show that it is more efficient than the original Bully algorithm.
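For reference, below is a minimal sketch of the classic Bully election that the paper builds on; the node IDs, failure model, and in-memory message passing are illustrative assumptions, not the paper's enhanced algorithm.

```typescript
// Minimal in-memory simulation of the classic Bully election.
type NodeId = number;

class BullyNode {
  constructor(
    public readonly id: NodeId,
    public alive: boolean,
    private cluster: BullyNode[],
  ) {}

  // Start an election: challenge every live node with a higher id.
  // If none exists, this node wins and broadcasts COORDINATOR.
  startElection(): NodeId {
    const higher = this.cluster.filter((n) => n.id > this.id && n.alive);
    if (higher.length === 0) {
      this.cluster
        .filter((n) => n.alive)
        .forEach((n) => n.receiveCoordinator(this.id));
      return this.id;
    }
    // A live higher node takes over the election; in the worst case the
    // classic algorithm exchanges O(n^2) messages, which is the traffic
    // overhead the paper aims to reduce.
    return higher[0].startElection();
  }

  receiveCoordinator(leader: NodeId): void {
    console.log(`node ${this.id}: new coordinator is ${leader}`);
  }
}

// Usage: node 5 has crashed; node 2 notices and starts an election.
const cluster: BullyNode[] = [];
[1, 2, 3, 4, 5].forEach((id) =>
  cluster.push(new BullyNode(id, id !== 5, cluster)),
);
cluster[1].startElection(); // nodes 1..4 learn that 4 is the coordinator
```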
Evaluation of Basic Data Compression Algorithms in a Distributed Environment
Data compression methods aim at reducing data size in order to make data transfer more efficient. To accomplish data compression, basic algorithms such as Huffman, Lempel-Ziv (LZ), Shannon-Fano (SF) and Run-Length Encoding (RLE) are widely used, and most applications incorporate different variants of these algorithms. This paper analyzes the execution times, compression ratio and efficiency of these compression methods in a client-server distributed environment. The data from a client is distributed to multiple processors/servers, compressed by the servers at remote locations, and sent back to the client. Our experiments are carried out using the SimGrid framework. Our results show that the LZ algorithm attains better efficiency/scalability and compression ratio, although it runs slower than the other algorithms.
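As a point of reference for the simplest of the four methods, here is a textbook Run-Length Encoding sketch; it is a generic illustration, not the paper's SimGrid-based implementation.

```typescript
// Run-Length Encoding: a run of identical symbols becomes a
// (count, symbol) pair.
function rleEncode(input: string): Array<[number, string]> {
  const runs: Array<[number, string]> = [];
  for (const ch of input) {
    const last = runs[runs.length - 1];
    if (last && last[1] === ch) last[0]++; // extend the current run
    else runs.push([1, ch]);               // start a new run
  }
  return runs;
}

function rleDecode(runs: Array<[number, string]>): string {
  return runs.map(([count, ch]) => ch.repeat(count)).join("");
}

const encoded = rleEncode("aaaabbbcca"); // [[4,"a"],[3,"b"],[2,"c"],[1,"a"]]
console.log(rleDecode(encoded) === "aaaabbbcca"); // true
```

The compression ratio compared in the paper is original size over encoded size; RLE only wins when the data actually contains long runs, which is why the variants behave so differently across workloads.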
Fragmentation Analysis For Scalable Wireless Local Area Networks
Wireless networks are being deployed widely to provide network connectivity without requiring a web of physical wires. A collection of a small number of workstations connected over a wireless network forms a wireless local area network (WLAN) that follows the IEEE 802.11 standard. In a WLAN, communication takes place using packets whose sizes may vary and have a significant impact on the delay incurred during transmission. In this regard, fragmentation may play a vital role in reducing the delay for efficient transmission across the network. This paper analyzes the performance of WLANs with respect to packet fragmentation. We simulate three network scenarios having 4, 8 and 12 wireless workstations respectively. The scenarios are simulated using OPNET IT Guru Academic Edition v 9.1 while incorporating a peer-to-peer (P2P) communication model for each scenario. We compare the performance of non-fragmented and fragmented communication in terms of network delay and throughput. Our results show that fragmentation reduces delay and increases throughput, although its impact is highly dependent on the size of the underlying network.
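The sketch below illustrates the mechanism under analysis: 802.11-style fragmentation splits a packet into fragments no larger than a threshold, each sent and acknowledged individually. The 256-byte threshold and data structures are illustrative assumptions, not the OPNET configuration.

```typescript
// Sketch of 802.11-style fragmentation of an oversized packet.
interface Fragment {
  seq: number;             // fragment number within the packet
  moreFragments: boolean;  // mirrors the 802.11 "More Fragments" flag
  payload: Uint8Array;
}

function fragment(packet: Uint8Array, threshold = 256): Fragment[] {
  const fragments: Fragment[] = [];
  for (let off = 0, seq = 0; off < packet.length; off += threshold, seq++) {
    fragments.push({
      seq,
      moreFragments: off + threshold < packet.length,
      payload: packet.subarray(off, off + threshold),
    });
  }
  return fragments;
}

// A 1000-byte packet with a 256-byte threshold yields 4 fragments.
console.log(fragment(new Uint8Array(1000)).map((f) => f.payload.length));
// -> [256, 256, 256, 232]
```

On a noisy channel a single bit error now costs one small retransmission instead of the whole packet, which is where the delay reduction observed in the paper comes from; the per-fragment header and ACK overhead is the countervailing cost.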
A Quantitative Analysis of Firewall Impact on Critical Data Communication
Multimedia traffic accounts for a large share of the transmission taking place over the Internet, as most client applications communicating through the Internet incorporate video or audio transmission. Such transmission may, however, hinder the performance of other critical applications running on the network. For instance, clients connecting to a database may suffer large delays if the network bandwidth is being consumed by multimedia communication. In this regard, a firewall may be used to block non-critical and unnecessary communication. In this paper, we perform a quantitative analysis to record the impact of a firewall deployed in a network. We develop various network scenarios with voice and video data transmitted in parallel with queries from a database client. As the database application is critical for its clients, the unnecessary communication wasting bandwidth is blocked through a firewall, and we record the resulting improvement in the performance of the database application. We simulate all the scenarios using OPNET IT Guru v 9.1. Our results show that blocking the video transmission yields a significant improvement in the performance of the database application. We also find that the firewall itself has an overhead that depends mainly on the amount of communication taking place simultaneously and can also affect the performance of the critical application.
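The sketch below illustrates the kind of first-match, rule-based filtering such a firewall performs; the port numbers, rule shapes, and default policy are illustrative assumptions, not the OPNET configuration used in the paper.

```typescript
// Minimal first-match packet filter: block the bandwidth-hungry video
// stream, allow the critical database traffic through.
interface Packet { dstPort: number; protocol: "tcp" | "udp"; }
interface Rule { match: (p: Packet) => boolean; action: "allow" | "deny"; }

const rules: Rule[] = [
  // Deny the video application's streaming port (hypothetical choice).
  { match: (p) => p.protocol === "udp" && p.dstPort === 554, action: "deny" },
  // Allow database queries (hypothetical SQL server port).
  { match: (p) => p.protocol === "tcp" && p.dstPort === 1433, action: "allow" },
];

function filter(p: Packet): "allow" | "deny" {
  for (const rule of rules) if (rule.match(p)) return rule.action;
  return "allow"; // permissive default; a stricter setup would deny
}

console.log(filter({ dstPort: 554, protocol: "udp" }));  // "deny"
console.log(filter({ dstPort: 1433, protocol: "tcp" })); // "allow"
```

Evaluating every packet against the rule list is also where the firewall's own overhead comes from, which is why the paper finds it grows with the amount of simultaneous traffic.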
Code specialization strategies for high performance architectures
Many applications are unable to attain the peak performance offered by high performance architectures such as the Itanium or Pentium-IV, which makes code optimization of the utmost importance. Code specialization, which provides the compilers with information about the values of important parameters in the code, is considered one of the most effective optimizations. Static specialization of code, however, results in a large code size, also referred to as code explosion. Such large code causes cache misses and branch overhead, and also diminishes the effect of other optimizations. These drawbacks deteriorate application performance and make it necessary to specialize the code dynamically. Specialization is then performed by dynamic compilers and/or specializers that generate code at runtime, i.e. during the execution of the program. Runtime specialization is not always beneficial, since the runtime activities incur a large overhead during execution, and this overhead can only be amortized by multiple invocations of the same code. Aimed at improving application performance, this thesis provides different strategies for code specialization. By specializing code through static, dynamic and iterative compilation, we target the issues of code explosion and runtime overhead. Our Hybrid Specialization approach specializes the code and finds equivalent code versions. Instead of keeping all versions, any one of them can be used as a template whose instructions are modified at runtime to adapt it to the other versions. Performance improves because the code is specialized at static compile time, and runtime specialization is limited to modifying a small number of instructions. Different variants of these approaches address the selection of variables for specialization, minimizing the number of compilations, and reducing the frequency of runtime specialization. Our Iterative Specialization approach optimizes regular code by obtaining different optimization classes of code specialized at static compile time; the code is iteratively transformed to benefit from these optimization classes and evaluated in order to obtain the best version. These approaches are portable and were tested on high performance architectures such as IA-64 and Pentium-IV using different versions of the icc and gcc compilers. Using the hybrid and iterative specialization approaches, we obtain significant improvements on many complex benchmarks, including SPEC, FFTW and ATLAS.
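The sketch below illustrates the basic idea of runtime value specialization that the thesis builds on: once a hot parameter's value is known, a version of the function with that value baked in is generated, and the one-time code-generation cost is amortized over repeated invocations. The dot-product kernel and the use of JavaScript-style runtime code generation are illustrative assumptions; the thesis itself targets codes compiled with icc/gcc on IA-64 and Pentium-IV.

```typescript
// Generic version: `n` is a runtime variable on every call.
function dotGeneric(a: number[], b: number[], n: number): number {
  let s = 0;
  for (let i = 0; i < n; i++) s += a[i] * b[i];
  return s;
}

// Specializer: emits a new function with `n` embedded as a constant,
// fully unrolling the loop. The codegen cost is paid once and amortized
// over many invocations, the trade-off the thesis analyzes.
function specializeDot(n: number): (a: number[], b: number[]) => number {
  const body = Array.from({ length: n }, (_, i) => `a[${i}]*b[${i}]`)
    .join(" + ");
  return new Function("a", "b", `return ${body};`) as (
    a: number[],
    b: number[],
  ) => number;
}

const dot4 = specializeDot(4); // specialized once...
console.log(dot4([1, 2, 3, 4], [5, 6, 7, 8])); // ...reused many times: 70
```

Keeping one such specialized version as a template and patching only the embedded constants at runtime, instead of regenerating whole functions per value, is the intuition behind the hybrid approach described above.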
JSOPT: A Framework for Optimization of JavaScript on Web Browsers
In the current era, where multi-core processors are in common use, existing web browsers are unable to fully utilize their capability. Web browsers execute JavaScript code locally in order to keep web pages responsive. This responsiveness is limited, however, by the fact that JavaScript code is single-threaded; consequently, the efficiency of the code degrades when it involves a large number of computations. In this paper, we propose a framework called JSOPT (JavaScript Optimizer) which generates efficient JavaScript code to effectively utilize multi-core architectures. The framework uses a template containing constructs for communication and synchronization, and subsequently generates optimized code to be executed on the multi-core architectures. Multiple instances of the template are generated with different implementations of the code, and the best instance is selected for incorporation into the library. With the optimized code generated using JSOPT, our results show a significant performance improvement on several benchmarks involving computation-intensive matrix operations in the Mozilla Firefox web browser.
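The sketch below illustrates the general technique such generated code relies on: splitting a compute-heavy kernel across Web Workers, one per core, with a join as the synchronization point. The worker body and chunking scheme are our illustration of the idea, not JSOPT's actual template library.

```typescript
// Worker body: squares every element of its chunk (a stand-in for a
// matrix-operation kernel) and posts the partial result back.
const workerSrc = `
  self.onmessage = (e) => {
    postMessage(e.data.map((x) => x * x));
  };
`;
const workerUrl = URL.createObjectURL(
  new Blob([workerSrc], { type: "application/javascript" }),
);

function parallelSquare(data: number[], cores: number): Promise<number[]> {
  const size = Math.ceil(data.length / cores);
  const jobs = Array.from({ length: cores }, (_, k) => {
    const chunk = data.slice(k * size, (k + 1) * size);
    return new Promise<number[]>((resolve) => {
      const w = new Worker(workerUrl);
      w.onmessage = (e) => { resolve(e.data); w.terminate(); };
      w.postMessage(chunk); // communication: ship the chunk to a worker
    });
  });
  // Synchronization point: join all partial results in order.
  return Promise.all(jobs).then((parts) => parts.flat());
}

parallelSquare([1, 2, 3, 4, 5, 6, 7, 8], navigator.hardwareConcurrency || 4)
  .then((out) => console.log(out)); // [1, 4, 9, 16, 25, 36, 49, 64]
```

Generating several variants of such a template (different chunk sizes, worker counts, or message layouts) and keeping the fastest one mirrors the instance-selection step the abstract describes.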