Modular Workflow Engine for Distributed Services using Lightweight Java Clients
In this article we introduce the concept and the first implementation of a
lightweight client-server-framework as middleware for distributed computing. On
the client side an installation without administrative rights or privileged
ports can turn any computer into a worker node. Only a Java runtime environment
and the JAR files comprising the workflow client are needed. To connect all
clients to the engine one open server port is sufficient. The engine submits
data to the clients and orchestrates their work by workflow descriptions from a
central database. Clients request new task descriptions periodically, thus the
system is robust against network failures. In the basic set-up, data up- and
downloads are handled via HTTP communication with the server. The performance
of the modular system could additionally be improved using dedicated file
servers or distributed network file systems.
We demonstrate the design features of the proposed engine in real-world
applications from mechanical engineering. We have used this system on a compute
cluster in design-of-experiment studies, parameter optimisations and robustness
validations of finite element structures.
Comment: 14 pages, 8 figures
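The polling design described above can be sketched in a few lines. This is a hypothetical simplification, not the paper's Java implementation: `fetch_task` stands in for the HTTP request that asks the engine for a new task description, and `execute` stands in for running one workflow step. Because the client only ever *requests* work, a failed fetch is simply retried on the next period, which is what makes the system robust against network failures.

```python
import time

def run_worker(fetch_task, execute, max_polls, poll_interval=0.0):
    """Poll the engine up to `max_polls` times and execute each task.

    `fetch_task` returns the next task description, or None when the
    queue is empty or a transient network failure occurred; `execute`
    runs one task and returns its result.
    """
    results = []
    for _ in range(max_polls):
        task = fetch_task()
        if task is None:
            # No task available or network failure: wait and retry.
            time.sleep(poll_interval)
            continue
        results.append(execute(task))
    return results
```

In a real deployment `fetch_task` would be an HTTP GET against the engine's single open server port; here it can be any callable, which also makes the loop easy to test with a stub queue.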
Global memory management in client-server systems
Ankara: Department of Computer Engineering and Information Science and the Institute of Engineering and Science of Bilkent University, 1995. Thesis (Master's), Bilkent University, 1995. Includes bibliographical references, leaves 79-81.
This thesis presents two techniques to improve the performance of the global
memory management in client-server systems. The proposed memory management
techniques, called "Dropping Sent Pages" and "Forwarding Sent Pages",
extend the previously proposed techniques called "Forwarding", "Hate Hints",
and “Sending Dropped Pages”. The aim of all these techniques is to increase
the portion of the database available in the global memory, and thus to reduce
disk I/O. The performance of the proposed techniques is evaluated using
a basic page-server client-server simulation model. The results obtained under
different workloads show that the memory management algorithm applying the
proposed techniques can exhibit better performance than the algorithms that
are based on previous methods.
Türkan, Yasemin. M.S.
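The shared goal of these techniques can be illustrated with a toy page server. This is a hypothetical sketch, not the thesis's simulation model: it shows the two underlying ideas of keeping only one copy of a page in global memory (the server drops its copy after sending a page to a client) and of forwarding a request to a peer client's cache before resorting to disk I/O.

```python
class GlobalMemorySketch:
    """Toy page server: avoid duplicating pages in global memory and
    prefer peer-client caches over disk reads (illustrative only)."""

    def __init__(self):
        self.server_cache = {}    # page_id -> data
        self.client_caches = {}   # client_id -> {page_id: data}
        self.disk_reads = 0

    def read_from_disk(self, page_id):
        self.disk_reads += 1
        return f"data-{page_id}"

    def request(self, client_id, page_id):
        cache = self.client_caches.setdefault(client_id, {})
        if page_id in cache:
            return cache[page_id]
        if page_id in self.server_cache:
            # Drop the server's copy once the page is sent, so global
            # memory holds one copy of the page rather than two.
            data = self.server_cache.pop(page_id)
        else:
            # Forward the request to a client that caches the page
            # before falling back to a disk read.
            data = next((c[page_id] for c in self.client_caches.values()
                         if page_id in c), None)
            if data is None:
                data = self.read_from_disk(page_id)
        cache[page_id] = data
        return data
```

The metric the thesis optimizes is visible in `disk_reads`: the larger the portion of the database kept in the combined server and client caches, the fewer disk I/Os a stream of requests incurs.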
Client-based Logging: A New Paradigm of Distributed Transaction Management
The proliferation of inexpensive workstations and networks has created a new era in distributed computing. At the same time, non-traditional applications such as computer-aided design (CAD), computer-aided software engineering (CASE), geographic-information systems (GIS), and office-information systems (OIS) have placed increased demands for high-performance transaction processing on database systems. The combination of these factors gives rise to significant challenges in the design of modern database systems. In this thesis, we propose novel techniques whose aim is to improve the performance and scalability of these new database systems. These techniques exploit client resources through client-based transaction management.
Client-based transaction management is realized by providing logging facilities locally even when data is shared in a global environment. This thesis presents several recovery algorithms which utilize client disks for storing recovery related information (i.e., log records). Our algorithms work with both coarse and fine-granularity locking and they do not require the merging of client logs at any time. Moreover, our algorithms support fine-granularity locking with multiple clients permitted to concurrently update different portions of the same database page. The database state is recovered correctly when there is a complex crash as well as when the updates performed by different clients on a page are not present on the disk version of the page, even though some of the updating transactions have committed.
This thesis also presents the implementation of the proposed algorithms in a memory-mapped storage manager as well as a detailed performance study of these algorithms using the OO1 database benchmark. The performance results show that client-based logging is superior to traditional server-based logging. This is because client-based logging is an effective way to reduce dependencies on server CPU and disk resources and, thus, prevents the server from becoming a performance bottleneck as quickly when the number of clients accessing the database increases.
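The core idea of client-based logging can be sketched as follows. This is a hypothetical illustration, not the thesis's recovery algorithms: each client appends its own log records locally, and recovery replays every client's committed updates against the page without ever merging the logs into one stream. The sketch relies on the property stated above, that concurrent clients update *different portions* of the same page, so replay order across clients does not matter.

```python
class ClientLog:
    """Per-client log sketch: each client records its own updates
    on its local disk and tracks which transactions committed."""

    def __init__(self):
        self.records = []      # (txn, page_id, offset, new_value)
        self.committed = set()

    def log_update(self, txn, page_id, offset, value):
        self.records.append((txn, page_id, offset, value))

    def commit(self, txn):
        self.committed.add(txn)


def recover_page(page, page_id, client_logs):
    """Rebuild `page` by applying committed updates from each client's
    log independently -- no merging of logs is required, because each
    client updated a disjoint portion of the page."""
    page = list(page)
    for log in client_logs:
        for txn, pid, offset, value in log.records:
            if pid == page_id and txn in log.committed:
                page[offset] = value
    return page
```

Uncommitted records are simply skipped during replay, so a page whose disk version misses some committed updates still recovers to a correct state.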
New Method of Measuring TCP Performance of IP Network using Bio-computing
The performance of an Internet Protocol (IP) network can be measured via the
Transmission Control Protocol (TCP), because TCP guarantees that data sent from
one end of the connection actually reaches the other end, in the same order it
was sent; otherwise an error is reported. Several methods exist for measuring
TCP performance, among them genetic algorithms, neural networks, and data
mining, but all of them have weaknesses and cannot measure TCP performance
accurately. This paper proposes a new method of measuring TCP performance for a
real-time IP network using bio-computing, in particular molecular computation,
because it yields reliable results and can exploit the facilities of
phylogenetic analysis. The new method is applied in real time to a Biological
Kurdish Messenger (BIOKM) model designed to measure TCP performance for two
protocols, File Transfer Protocol (FTP) and Internet Relay Chat Daemon (IRCD).
The application gives TCP performance results very close to those obtained from
Little's law on the same model (BIOKM): the difference between the utilization
(busy time, or traffic intensity) and idle time obtained from the new
bio-computing method and those obtained from Little's law was (nearly) 0.13%.
KEYWORDS: Bio-computing, TCP performance, Phylogenetic tree, Hybridized Model
(Normalized), FTP, IRCD
Comment: 17 pages, 10 figures, 5 tables
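The Little's-law baseline the abstract compares against can be stated in two lines. This is a generic sketch of the utilization law (Little's law applied to the server alone), with illustrative numbers that are not taken from the paper: for arrival rate lambda and mean service time E[S], the fraction of time the server is busy is rho = lambda * E[S], and the idle fraction is 1 - rho.

```python
def utilization(arrival_rate, mean_service_time):
    """Utilization law: rho = lambda * E[S], the fraction of time
    the server is busy (valid for a stable single-server queue)."""
    return arrival_rate * mean_service_time

def idle_fraction(arrival_rate, mean_service_time):
    """Complementary idle time: 1 - rho."""
    return 1.0 - utilization(arrival_rate, mean_service_time)
```

With 2 requests per second and a mean service time of 0.3 s, for example, the server is busy 60% of the time and idle 40%; the paper's claim is that its bio-computing measurement of these two fractions differs from this law by only about 0.13%.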