96 research outputs found

    A proposal of an infrastructure for load-balancing transactions on electronic funds transfer systems

    This article presents the first ideas for a load-balancing framework called GetLB. In the electronic funds transfer (EFT) context, GetLB offers a new scheduling heuristic that optimizes the selection of Processing Machines to execute transactions in a processing center. Instead of using the typical Round-Robin approach, the proposal combines computation, network, memory and disk metrics into a unified scheduling index, denoted LL(i, j), which expresses the load level of executing a transaction of type i on a specific Processing Machine j. Furthermore, the load-balancing framework also enables notifications from Processing Machines to the Dispatcher, informing it about asynchronous events such as administrative tasks or transaction disposal. To evaluate GetLB, a simple prototype was developed using Java RMI. Preliminary tests revealed that the framework is feasible, yielding fewer queued transactions than the Round-Robin approach.
    Keywords: Electronic Funds Transfer, transactions, load balancing, remote method invocation
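The selection idea described above can be sketched in a few lines. This is a minimal illustration only: the metric names, weights, and cost table below are assumptions for the sketch, not GetLB's actual formula for LL(i, j).

```python
# Illustrative sketch: pick the Processing Machine j with the lowest combined
# load level LL(i, j) for a transaction of type i. Weights and metric names
# are hypothetical, not taken from the GetLB paper.

WEIGHTS = {"cpu": 0.4, "net": 0.2, "mem": 0.2, "disk": 0.2}  # assumed weights

def load_level(tx_type, machine):
    """Combine normalized resource metrics (0.0 = idle, 1.0 = saturated)
    into a single score for running tx_type on this machine."""
    cost = machine["cost_per_type"][tx_type]  # relative cost of this transaction type
    utilization = sum(w * machine[m] for m, w in WEIGHTS.items())
    return cost * (1.0 + utilization)

def dispatch(tx_type, machines):
    """Dispatcher: choose the machine with the minimal load level."""
    return min(machines, key=lambda m: load_level(tx_type, m))

machines = [
    {"name": "pm1", "cpu": 0.9, "net": 0.1, "mem": 0.5, "disk": 0.3,
     "cost_per_type": {"debit": 1.0}},
    {"name": "pm2", "cpu": 0.2, "net": 0.2, "mem": 0.3, "disk": 0.2,
     "cost_per_type": {"debit": 1.0}},
]
print(dispatch("debit", machines)["name"])  # → pm2 (the less loaded machine)
```

A Round-Robin dispatcher would alternate between the two machines regardless of load; the point of the unified index is that the heavily loaded pm1 is skipped.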

    MAINTENANCE POLICY AND ITS IMPACT ON THE PERFORMABILITY EVALUATION OF EFT SYSTEMS

    In the Electronic Funds Transfer (EFT) System

    DETERMINATION OF END-TO-END DELAYS OF SWITCHED ETHERNET LOCAL AREA NETWORKS

    The design of switched local area networks has in practice largely been based on heuristics and experience; indeed, in many situations no network design is carried out at all, only network installation (cabling and node/equipment placement). This has resulted in local area networks that are sluggish and that fail to satisfy their users in terms of upload and download speed when a user's computer is in a communication session with other computers or hosts attached to the local area network, or with switching devices that connect it to wide area networks. The need to provide deterministic guarantees on packet-flow delays when designing switched local area networks has therefore created a need for an analytic and formal basis for designing such networks: if the maximum packet delay between any two nodes of a network is not known, it is impossible to provide a deterministic guarantee on the worst-case response time of packet flows. This is the problem that this research work set out to solve. A model of a packet switch was developed with which the maximum delay for a packet to cross any N-port packet switch can be calculated. The maximum packet delay given by this model was compared, from the point of view of practical reality, with values obtained from the literature, and was found to be far more realistic. An algorithm was developed with which network design engineers can generate optimum network designs, in terms of installed network switches and the number of attached hosts, while respecting specified maximum end-to-end delay constraints. This work revealed that the widely held notion in the literature regarding the enumeration of origin-destination pairs of hosts for end-to-end delay computation appears to be wrong in the context of switched local area networks;
we show for the first time how this enumeration should be done. It is also shown empirically that the number of hosts that can be attached to any switched local area network is bounded by the number of ports in the switches of which the network is composed. Computed numerical values of maximum end-to-end delays using the developed model and algorithm further revealed that the predominant cause of delay (sluggishness) in switched local area networks is queuing delay, not the number of users (hosts) connected to the network: a switched local area network becomes slow as more users log on because of bursty traffic flows (uploading and downloading of high-bit-rate, bandwidth-consuming applications). The model and algorithms have also been implemented in an interactive, C-based switched local area network design application. Further studies were recommended on methods for determining the maximum amount of traffic that can arrive at a switch in a burst, on the introduction of weighting functions in the end-to-end delay computation models, and on the introduction of cost variables in determining the optimal Internet access device input and output rate specifications.
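The flavor of such a delay bound can be illustrated with a simple store-and-forward calculation. The per-switch formula below is an assumption for illustration, not the model developed in the thesis: it charges each N-port switch one serialization of our own maximum-size frame plus, in the worst case, one maximum-size frame queued from each of the other N-1 ports.

```python
# Illustrative worst-case end-to-end delay bound along a path of
# store-and-forward switches. Formula and constants are assumptions
# for the sketch, not the thesis's actual model.

FRAME_BITS = 1518 * 8   # maximum standard Ethernet frame, in bits
PROC_DELAY = 5e-6       # assumed per-switch processing latency (seconds)

def switch_delay(ports, link_rate_bps):
    """Worst-case delay contributed by one N-port switch (seconds)."""
    serialization = FRAME_BITS / link_rate_bps
    queuing = (ports - 1) * serialization   # one frame from each other port
    return serialization + queuing + PROC_DELAY

def end_to_end_delay(path):
    """Sum the worst-case delays of every switch on the path."""
    return sum(switch_delay(ports, rate) for (ports, rate) in path)

# Two 24-port switches on 100 Mbit/s links:
path = [(24, 100e6), (24, 100e6)]
print(f"{end_to_end_delay(path) * 1e3:.3f} ms")
```

Note how the queuing term dominates the result, consistent with the abstract's finding that queuing delay, not host count, is the predominant cause of sluggishness.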

    Performance of national research and education network while transmitting healthcare information data

    A National Research and Education Network (NREN) is a first step in the process of building a National Information Infrastructure (NII). The NII will ultimately lead to the Broadband Integrated Services Digital Network (B-ISDN), a network that will support global exchange of voice, data, images and video. NREN will evolve out of the present National Science Foundation Network (NSFNET), also known as the Interim NREN. At present NSFNET operates at 45 Mbit/s; there are plans to boost the data rate to 155 Mbit/s by 1996 and to 620 Mbit/s by the turn of the century. In this thesis, the present NSFNET is simulated using the software simulation tool NETWORK II.5. The simulation is carried out for the different data rates (45 Mbit/s, 155 Mbit/s and 620 Mbit/s) with increasing load, i.e. data rate traveling on the bus, and helps in predicting the evolution and behavior of NREN. Finally, NREN is modeled as transmitting healthcare information data, and the simulation is again carried out for the different data rates; in this model, NREN carries both healthcare data for different types of services and regular non-healthcare data. Thus, research on the evolution of the National Research and Education Network is carried out. The applications for such networks are expected to expand very rapidly once these networks are available.
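A back-of-the-envelope calculation shows why the planned rate upgrades matter for healthcare traffic. The 10 MB image size is an illustrative assumption (roughly the order of a digitized radiology image), and the figures ignore protocol overhead and queuing.

```python
# Ideal transfer time for a hypothetical 10-megabyte medical image
# at each planned backbone rate (overhead and queuing ignored).

IMAGE_BITS = 10 * 8 * 10**6    # 10 MB image, in bits

for rate_mbps in (45, 155, 620):
    seconds = IMAGE_BITS / (rate_mbps * 10**6)
    print(f"{rate_mbps:>3} Mbit/s -> {seconds:.2f} s")
```

Even under these ideal assumptions, the upgrade from 45 to 620 Mbit/s cuts the transfer time by more than a factor of ten, which is the kind of headroom a shared backbone needs once bulky healthcare data competes with regular traffic.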

    Revenue maximization problems in commercial data centers

    PhD Thesis. As IT systems become more important every day, one of the main concerns is that users may face major problems, and eventually incur major costs, if computing systems do not meet the expected performance requirements: customers expect reliability and performance guarantees, while underperforming systems lose revenue. Even with the adoption of data centers as the hub of IT organizations and provider of business efficiencies, the problems are not over, because it is extremely difficult for service providers to meet the promised performance guarantees in the face of unpredictable demand. One possible approach is the adoption of Service Level Agreements (SLAs): contracts that specify a level of performance that must be met, and compensations in case of failure. In this thesis I address some of the performance problems arising when IT companies sell the service of running 'jobs' subject to Quality of Service (QoS) constraints. In particular, the aim is to improve the efficiency of service provisioning systems by allowing them to adapt to changing demand conditions. First, I define the problem in terms of a utility function to maximize. Two different models are analyzed: one for single jobs, and another suited to session-based traffic. Then, I introduce an autonomic model for service provision. The architecture consists of a set of hosted applications that share a certain number of servers. The system collects demand and performance statistics and estimates traffic parameters; these estimates are used by management policies that implement dynamic resource allocation and admission algorithms. Results from a number of experiments show that the performance of these heuristics is close to optimal.
    QoSP (Quality of Service Provisioning), British Telecom
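The admission side of such a policy can be sketched as follows. This is a minimal illustration in the spirit of the abstract, not the thesis's actual model: the charge and penalty figures, and the crude miss-probability estimate based on current backlog, are assumptions.

```python
# Sketch of SLA-driven admission control: accept a job only if its
# expected contribution to revenue is positive. All figures and the
# miss-probability estimate are illustrative assumptions.

def p_miss(queue_len, service_rate, deadline):
    """Crude estimate of deadline-miss probability: the fraction of the
    deadline consumed by the backlog queued ahead of the new job."""
    backlog_time = (queue_len + 1) / service_rate   # jobs ahead + this job
    return min(1.0, backlog_time / deadline)

def expected_revenue(charge, penalty, queue_len, service_rate, deadline):
    """SLA payoff: charge if the deadline is met, penalty if it is missed."""
    p = p_miss(queue_len, service_rate, deadline)
    return (1 - p) * charge - p * penalty

def admit(charge, penalty, queue_len, service_rate, deadline):
    return expected_revenue(charge, penalty, queue_len, service_rate, deadline) > 0

# A lightly loaded server accepts; a saturated one declines the same contract.
print(admit(charge=10, penalty=50, queue_len=2, service_rate=5.0, deadline=4.0))   # True
print(admit(charge=10, penalty=50, queue_len=30, service_rate=5.0, deadline=4.0))  # False
```

The design point this captures is the one the abstract makes: admission must adapt to demand, because accepting every job under heavy load turns SLA penalties into a net loss.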

    Workload characterization and customer interaction at e-commerce web servers

    Electronic commerce servers have a significant presence in today's Internet. Corporations want to maintain high availability, sufficient capacity, and satisfactory performance for their E-commerce Web systems, and to provide satisfactory services to customers. Workload characterization and the analysis of customers' interactions with Web sites are the bases upon which to analyze server performance, plan system capacity, manage system resources, and personalize services at the Web site. To date, little empirical evidence has been reported that identifies the workload characteristics of E-commerce systems and the behaviours of their customers. This thesis analyzes the Web access logs of the public Web sites of three organizations: a car rental company, an IT company, and the Computer Science department of the University of Saskatchewan. In these case studies, the characteristics of Web workloads are explored at the request level, function level, resource level, and session level; customers' interactions with Web sites are analyzed by identifying and characterizing session groups. The main E-commerce Web workload characteristics and performance implications are: i) The requests for dynamic Web objects are an important part of the workload. These requests should be characterized separately since the system processes them differently; ii) Some popular image files, which are embedded in the same Web page, are always requested together. If these files are requested and sent in a bundle, a system will greatly reduce the overheads in processing requests for them; iii) The percentage of requests for each Web page category tends to be stable in the workload when the time scale is large enough.
This observation is helpful in forecasting workload composition; iv) The Secure Sockets Layer protocol (SSL) is heavily used, and most Web objects are requested either primarily through SSL or primarily not through SSL; and v) Session groups of different characteristics are identified in all logs. The analysis of session groups may be helpful in improving system performance, maximizing revenue throughput of the system, providing better services to customers, and managing and planning system resources. A hybrid clustering algorithm, a combination of the minimum spanning tree method and the k-means clustering algorithm, is proposed to identify session clusters. Session clusters obtained using the three session representations Pages Requested, Navigation Pattern, and Resource Usage are similar enough that different session representations can be used interchangeably to produce similar groupings. A grouping based on one session representation is believed to be sufficient to answer questions in server performance, resource management, capacity planning and Web site personalization that would previously have required multiple different groupings. Grouping by Pages Requested is recommended, since it is the simplest and data on Web pages requested is relatively easy to obtain from HTTP logs.
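One plausible reading of the hybrid algorithm is to cut the longest edges of the minimum spanning tree to obtain initial clusters, then refine their centroids with standard k-means. The sketch below shows this on one-dimensional session features (where the MST is simply the sorted chain of points); the thesis's exact seeding, distance measures, and feature representations may differ.

```python
# Hypothetical sketch of an MST + k-means hybrid on 1-D session features.
# For 1-D data the MST is the sorted chain, so cutting the k-1 largest
# gaps splits the points into k initial clusters (assumes k >= 2).

def mst_seed_clusters(points, k):
    """Cut the k-1 largest gaps of the sorted chain into k initial clusters."""
    pts = sorted(points)
    gaps = sorted(range(len(pts) - 1), key=lambda i: pts[i + 1] - pts[i])[-(k - 1):]
    cuts = sorted(i + 1 for i in gaps)
    clusters, start = [], 0
    for c in cuts + [len(pts)]:
        clusters.append(pts[start:c])
        start = c
    return clusters

def kmeans(points, centroids, iters=10):
    """Plain k-means refinement starting from the given centroids."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
            groups[j].append(p)
        centroids = [sum(g) / len(g) if g else c for g, c in zip(groups, centroids)]
    return centroids, groups

sessions = [1.0, 1.2, 1.1, 5.0, 5.3, 9.8, 10.1]   # e.g. pages requested per session
seeds = [sum(c) / len(c) for c in mst_seed_clusters(sessions, k=3)]
centroids, groups = kmeans(sessions, seeds)
print(sorted(round(c, 2) for c in centroids))     # → [1.1, 5.15, 9.95]
```

The motivation for this combination is that MST-based cutting finds well-separated groups without a lucky initial guess, while k-means then polishes the centroids cheaply; plain k-means with random seeds can converge to poor groupings.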

    Satellite Networks: Architectures, Applications, and Technologies

    Because global satellite networks are moving to the forefront in enhancing the national and global information infrastructures, owing to communication satellites' unique networking characteristics, a workshop was organized to assess the progress made to date and chart the future. The workshop provided a forum to assess the current state of the art, identify key issues, and highlight emerging trends in next-generation architectures, data protocol development, communication interoperability, and applications. Presentations covering overviews, the state of the art in research, development, deployment and applications, and future trends in satellite networks are assembled.

    Corporate influence and the academic computer science discipline. [4: CMU]

    Prosopographical work on the four major centers for computer research in the United States has now been conducted, resulting in big questions about the independence of so-called computer science.

    Telecommunications Networks

    This book guides readers from the basics of rapidly emerging networks to more advanced concepts and future expectations of telecommunications networks. It identifies and examines the most pressing research issues in telecommunications, and it contains chapters written by leading researchers, academics and industry professionals. Telecommunications Networks - Current Status and Future Trends covers surveys of recent publications that investigate key areas of interest such as IMS, eTOM, 3G/4G, optimization problems, modeling, simulation, and quality of service. The book, which is suitable for both PhD and master's students, is organized into six sections: New Generation Networks, Quality of Services, Sensor Networks, Telecommunications, Traffic Engineering, and Routing.