1,447 research outputs found
Application of Web Server Benchmark using Erlang/OTP R11 and Linux
As the web grows and the amount of traffic on a web server increases, performance problems begin to appear: the number of users that can access the server simultaneously, the number of requests the server can handle per second, bandwidth consumption, and hardware utilization such as memory and CPU. To provide better quality of service (QoS), web hosting providers, as well as the system and network administrators who manage the servers, need a benchmark application to measure the capabilities of their servers. The application is intended to run on Linux/Unix-like platforms and is built using Erlang/OTP R11 as a concurrency-oriented language under Fedora Core Linux 5.0. It is divided into two main parts: the controller section and the launcher section. The controller is the core of the application. It has several duties: reading the benchmark scenario file, configuring the program based on the scenario, initializing the launcher section, gathering the benchmark results from the local and remote Erlang nodes where the launchers run, and writing them to a log file (the log file is later used to generate a report page for the sysadmin). The controller also functions as a timer governing user inter-arrival times to the server. The launcher generates a number of users based on the scenario, initializes them, and starts the benchmark by sending requests to the web server. The clients also gather the benchmark results and send them to the controller.
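The controller/launcher split described above can be sketched in a few lines. This is a minimal illustration, not the paper's Erlang implementation: the names (`Scenario`, `controller`, `launcher`) and the pluggable `send_request` callable are assumptions for the sketch.

```python
# Minimal controller/launcher load-generator sketch (illustrative names).
import threading
import time
import queue

class Scenario:
    """Benchmark scenario: user count, requests per user, inter-arrival gap."""
    def __init__(self, num_users, requests_per_user, inter_arrival_s):
        self.num_users = num_users
        self.requests_per_user = requests_per_user
        self.inter_arrival_s = inter_arrival_s

def launcher(scenario, send_request, results):
    # Each simulated "user" is a thread that fires requests and reports timings.
    def user():
        for _ in range(scenario.requests_per_user):
            t0 = time.perf_counter()
            ok = send_request()
            results.put((ok, time.perf_counter() - t0))
    threads = []
    for _ in range(scenario.num_users):
        t = threading.Thread(target=user)
        t.start()
        threads.append(t)
        time.sleep(scenario.inter_arrival_s)  # controller-style user inter-arrival timing
    for t in threads:
        t.join()

def controller(scenario, send_request):
    # Core: drive the launcher, then gather and summarise the results.
    results = queue.Queue()
    start = time.perf_counter()
    launcher(scenario, send_request, results)
    elapsed = time.perf_counter() - start
    samples = [results.get() for _ in range(results.qsize())]
    ok = sum(1 for succeeded, _ in samples if succeeded)
    return {"requests": len(samples), "ok": ok,
            "requests_per_second": len(samples) / elapsed}
```

In a real benchmark `send_request` would issue an HTTP request to the target server; here it is left injectable so the control flow (scenario, launch, gather, summarise) stands on its own.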
Memory Management Support for Multi-Programmed Remote Direct Memory Access (RDMA) Systems
Current operating systems offer basic support for network interface controllers (NICs) supporting remote direct memory access (RDMA). Such support typically consists of a device driver responsible for configuring communication channels between the device and user-level processes but not involved in data transfer. Unlike standard NICs, RDMA-capable devices incorporate significant memory resources for address translation purposes. In a multi-programmed operating system (OS) environment, these memory resources must be efficiently shareable by multiple processes. For such sharing to occur in a fair manner, the OS and the device must cooperate to arbitrate access to NIC memory, similar to the way CPUs and OSes cooperate to arbitrate access to translation lookaside buffers (TLBs) or physical memory. A problem with this approach is that today's RDMA NICs are not integrated into the functions provided by OS memory management systems. As a result, RDMA NIC hardware resources are often monopolized by a single application. In this paper, I propose two practical mechanisms to address this problem: (a) use of RDMA only in kernel-resident I/O subsystems, transparent to user-level software; (b) an extended registration API and a kernel upcall mechanism delivering NIC TLB entry replacement notifications to user-level libraries. Both options are designed to reinstate the multiprogramming principles that are violated in early commercial RDMA systems.
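The second proposed mechanism, replacement notifications delivered to user-level libraries, can be modelled as a fixed-size translation cache with an eviction callback. This is a toy simulation under stated assumptions: `NicTlb`, `register`, and `on_evict` are illustrative names, not the paper's driver API.

```python
# Toy model of a NIC translation table with eviction upcalls (illustrative API).
from collections import OrderedDict

class NicTlb:
    def __init__(self, capacity, on_evict):
        self.capacity = capacity
        self.entries = OrderedDict()   # virtual page -> physical page, in LRU order
        self.on_evict = on_evict       # analogue of the kernel upcall to the user library

    def register(self, vpage, ppage):
        # Registering a page may displace the least-recently-used entry,
        # in which case the owning library is notified so it can re-register later.
        if vpage in self.entries:
            self.entries.move_to_end(vpage)
        elif len(self.entries) >= self.capacity:
            evicted, _ = self.entries.popitem(last=False)  # evict LRU entry
            self.on_evict(evicted)                          # deliver replacement notification
        self.entries[vpage] = ppage

    def translate(self, vpage):
        if vpage in self.entries:
            self.entries.move_to_end(vpage)
            return self.entries[vpage]
        return None                    # miss: the library must re-register the page
```

The point of the sketch is the contract: the device never blocks on a full table, and user-level software learns exactly which translations it has lost rather than monopolising NIC memory indefinitely.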
Transient Faults in Computer Systems
A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far has primarily considered software implementations. The error detection technique has the distinct advantage of provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The result is the foundation for an effective, low-overhead, software-based certification-trail approach to real-time detection of errors caused by transient fault phenomena.
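The certification-trail idea can be illustrated with sorting, a classic example of the approach: a primary execution produces both the output and a trail (here, the permutation applied), and a second, cheaper execution verifies the output against the trail, catching any transient fault that corrupts the visible result. The function names below are illustrative, not from the report.

```python
# Certification-trail sketch for sorting (illustrative names).
def primary_sort(data):
    # First execution: sort and emit a "trail" (the permutation applied).
    trail = sorted(range(len(data)), key=lambda i: data[i])
    output = [data[i] for i in trail]
    return output, trail

def certify(data, output, trail):
    # Second, cheaper execution: check the output using the trail.
    if sorted(trail) != list(range(len(data))):
        return False                      # trail is not a valid permutation
    if output != [data[i] for i in trail]:
        return False                      # output inconsistent with the trail
    # Finally, verify the claimed property (sortedness) directly.
    return all(output[i] <= output[i + 1] for i in range(len(output) - 1))
```

Checking is O(n) while sorting is O(n log n), which is the source of the "low-overhead" claim: the trail turns re-execution into a strictly cheaper verification pass.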
A Comprehensive Learning-Based Model for Power Load Forecasting in Smart Grid
In the big data era, learning-based techniques have attracted more and more attention in many industry areas such as smart grids and intelligent transportation. Power load forecasting is one of the most critical issues in smart grid data analysis. However, learning-based methods have not been widely used due to poor data quality and limited computational capacity. In this paper, we propose a comprehensive learning-based model to forecast heavy and over-load (HOL) accidents from the data of various information systems. First, we present a combined random under- and over-sampling technique for imbalanced electric data, and choose an optimal sampling rate through several experiments. Then, we reduce the attribute set to those attributes with a significant impact on the power load, using learning-based methods. Finally, we provide an algorithm based on the random forest method to prevent over-fitting. We evaluate the proposed model and algorithms on real-world data provided by China Grid. The experimental results show that our model works efficiently and achieves low error rates.
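The first step, combined random under- and over-sampling for an imbalanced binary dataset, can be sketched as follows. This is a hedged illustration of the general technique, not the paper's exact scheme: the `rate` parameter here controls how far the majority class is shrunk before the minority class is replicated up to match it.

```python
# Combined random under- and over-sampling sketch (illustrative parameters).
import random

def resample(samples, labels, rate, seed=0):
    """Under-sample the majority class (label 0) to about rate * its size,
    then over-sample the minority class (label 1) with replacement to match,
    yielding a balanced training set."""
    rng = random.Random(seed)
    major = [s for s, y in zip(samples, labels) if y == 0]
    minor = [s for s, y in zip(samples, labels) if y == 1]
    # Never under-sample below the minority size; assumes minority is smaller.
    k = max(len(minor), int(len(major) * rate))
    major_kept = rng.sample(major, min(k, len(major)))          # random under-sampling
    minor_grown = minor + [rng.choice(minor)                    # random over-sampling
                           for _ in range(len(major_kept) - len(minor))]
    combined = [(s, 0) for s in major_kept] + [(s, 1) for s in minor_grown]
    rng.shuffle(combined)
    return [s for s, _ in combined], [y for _, y in combined]
```

Sweeping `rate` over a grid and evaluating a downstream classifier at each value mirrors the paper's experimental choice of an optimal sampling rate.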
Data Replication and Its Alignment with Fault Management in the Cloud Environment
Nowadays, exponential data growth has become one of the major challenges all over the world. It can cause a series of negative impacts such as network overloading, high system complexity, and inadequate data security. Cloud computing was developed as a novel paradigm to alleviate massive data-processing challenges with its on-demand services and distributed architecture. Data replication has been proposed to strategically distribute the data access load by creating multiple copies of the data at multiple cloud data centres. A replica-applied cloud environment not only achieves decreased response time, increased data availability, and a more balanced resource load, but also protects the cloud environment against upcoming faults. A reactive fault tolerance strategy is also required to handle faults after they have occurred. As a result, data replication strategies should be aligned with reactive fault tolerance strategies to achieve a complete management chain in the cloud environment.
In this thesis, a data replication and fault management framework is proposed to establish decentralised, overarching management of the cloud environment. Three data replication strategies are first proposed based on this framework. A replica creation strategy is proposed to reduce the total cost by jointly considering data dependency and access frequency in the replica creation decision-making process. In addition, a cloud-map-oriented and cost-efficiency-driven replica creation strategy is proposed to achieve the optimal cost reduction per replica in the cloud environment. The local and remote data relationships are further analysed by introducing two novel data dependency types, Within-DataCentre Data Dependency and Between-DataCentre Data Dependency, according to the data location. Furthermore, a network-performance-based replica selection strategy is proposed to avoid potential network overloading problems and to increase the number of concurrently running instances at the same time.
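The replica-creation decision, jointly weighing access frequency against the two dependency types, can be sketched as a scoring function. All weights and field names below are illustrative assumptions, not the thesis's cost model; the only grounded idea is that between-datacentre dependencies are the expensive ones to leave unreplicated.

```python
# Hedged sketch of replica-creation scoring (illustrative weights and names).
def replica_score(access_freq, within_dc_deps, between_dc_deps,
                  w_freq=1.0, w_within=0.5, w_between=2.0):
    # Between-datacentre dependencies carry the largest weight: replicating
    # the item locally avoids the most expensive (remote) transfers.
    return (w_freq * access_freq
            + w_within * within_dc_deps
            + w_between * between_dc_deps)

def choose_replicas(candidates, budget):
    """candidates: list of (name, access_freq, within_deps, between_deps).
    Returns the `budget` best-scoring items to replicate."""
    ranked = sorted(candidates,
                    key=lambda c: replica_score(c[1], c[2], c[3]),
                    reverse=True)
    return [name for name, *_ in ranked[:budget]]
```

A cost-efficiency-driven variant would divide each score by the item's replication cost (storage plus transfer), so the ranking maximises cost reduction per replica rather than raw benefit.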
Interactive data analysis and its applications on multi-structured datasets
Ph.D. (Doctor of Philosophy)
Spartan Daily, March 10, 1972
Volume 59, Issue 79