12 research outputs found

    Building a model for automated and improved utilization of existing server resources

    Information technology is under constant pressure to innovate and to provide the highest level of data availability, i.e. the continuous operation of business systems. This pressure has driven the accelerated development of complex systems such as cloud computing. One of the tasks of such solutions is to ensure the high availability of complex systems and architectures, and for them to function properly, the high capital and operational costs of data centers are unavoidable. Numerous studies indicate that servers are the main cause of data centers' high costs, so the aim is to use servers, i.e. their resources, more efficiently. This paper presents examples from practice confirming that there are systems whose servers underutilize their resources yet must retain them because of their importance. A concrete example of this problem are Web farms, where more resources are provisioned than are actually needed in order to achieve greater system availability, as confirmed by server load measurement tools. This approach allows the system to withstand sudden load spikes, which increases availability; its negative effect is an increase in capital and operating costs caused by the larger amount of computing resources. These high costs and the inadequate utilization of existing computing resources are the main motivation for this research. Solving the problem requires a system that automatically allocates exactly as many computing resources as the system needs, depending on its load, while taking its availability and consistency into account.
A detailed study of current scientific research, as well as of practical solutions, found no effective solution to this problem, which served as additional motivation for carrying out this research. Existing solutions fall short because they do not address how to use existing resources more efficiently; instead, they add new virtual servers or migrate virtual servers to other physical servers in critical situations, which requires even more computing resources. A second approach is process prioritization, in which the servers with the greatest need for resources are given the highest priority in process execution; its disadvantage is that resources cannot be increased or decreased, only reprioritized, which still leaves unused resources. Another shortcoming of existing solutions is that computing resources (CPU and memory) cannot be added or removed without restarting the server, and a large number of solutions focus on either the CPU or memory, but not both. For all these reasons, a decision was made to build a new model for automated and improved utilization of existing computing resources. The model is verified by building an application, which also serves for validation on the example of a Web server, where this problem was recognized. The research paradigm used is the Design Science Research Methodology (DSRM), which offers specific guidelines for evaluation and iteration within research projects and is based on the creation of a new artifact; in this case, the artifact is a new model that addresses the complex problems described above. DSRM consists of six sequential process steps: identification of the problem and motivation, definition of goals, design and development, demonstration of the solution, evaluation, and communication. Throughout these steps, numerous methods and techniques were used, such as comparison, evaluation/validation, content analysis, experiment, modelling techniques (UML), diagram techniques (causal relationship diagrams), structural analysis of processes (decomposition diagrams, data flow diagrams, and block diagrams), and programming (pseudocode and the scripting languages BASH and PHP), among many others. In terms of scientific contributions, the research has resulted in a new model for automated and improved utilization of existing computing resources without the need to restart the server, together with clearly defined cases and constraints for the model's application. The research has shown that applying the new model enables more efficient utilization of existing computing resources (CPU and memory) without a server restart. It also provides recommendations for implementing the model in the selected programming language and for the process of evaluating the model in experiments. As for the social contribution, the whole solution is open source, which is one of the main goals of this research; this makes the solution easier to apply and the tests repeatable, facilitating further improvement and research on this topic.
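    The abstract does not reproduce the author's scripts, but the core mechanism it relies on, namely changing a Linux server's online CPU (and, analogously, memory) allocation without a reboot, is exposed through the standard sysfs hotplug interface. The BASH sketch below illustrates that mechanism only; the load thresholds and polling approach are illustrative assumptions, not the author's actual implementation.

    #!/bin/bash
    # Minimal sketch (assumed values): scale a server's online CPUs with
    # load, without a reboot, via the standard Linux hotplug interface.
    # Must run as root. Memory hotplug follows the same pattern through
    # /sys/devices/system/memory/memory*/online.

    HIGH=80   # assumed utilization (%) above which a core is brought online
    LOW=20    # assumed utilization (%) below which a core is taken offline

    # Current overall CPU utilization: 100 minus the idle percentage
    # reported by the second sample of top.
    util=$(top -bn2 -d1 | awk '/Cpu\(s\)/ {idle=$8} END {printf "%.0f", 100-idle}')

    if [ "$util" -gt "$HIGH" ]; then
        # Bring one offline core back online (cpu0 is not hot-pluggable).
        for cpu in /sys/devices/system/cpu/cpu[1-9]*; do
            if [ "$(cat "$cpu/online" 2>/dev/null)" = "0" ]; then
                echo 1 > "$cpu/online"
                break
            fi
        done
    elif [ "$util" -lt "$LOW" ]; then
        # Take one online core offline to release the resource.
        for cpu in $(ls -dr /sys/devices/system/cpu/cpu[1-9]*); do
            if [ "$(cat "$cpu/online" 2>/dev/null)" = "1" ]; then
                echo 0 > "$cpu/online"
                break
            fi
        done
    fi

    Run periodically (e.g. from cron), such a loop approximates the automated allocation the model describes: resources follow the measured load instead of being fixed at the provisioning maximum.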

    Enabling 5G Edge Native Applications


    NASA Tech Briefs, November/December 1987

    Topics include: NASA TU Services; New Product Ideas; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Fabrication Technology; Machinery; Mathematics and Information Sciences; Life Sciences

    Development of a simulation platform for the evaluation of PET neuroimaging protocols in epilepsy

    Monte Carlo simulation of PET studies is a reference tool for the evaluation and standardization of PET protocols. However, current Monte Carlo software codes require a high degree of knowledge in physics, mathematics, and programming, in addition to a high cost in time and computational resources. These drawbacks make their use difficult for a large part of the scientific community. To overcome these limitations, a free and efficient web-based platform was designed, implemented, and validated for the simulation of realistic brain PET studies, and specifically employed to generate a well-validated large database of brain FDG-PET studies of patients with refractory epilepsy.

    On Improving The Performance And Resource Utilization of Consolidated Virtual Machines: Measurement, Modeling, Analysis, and Prediction

    This dissertation addresses performance issues of consolidated Virtual Machines (VMs). Virtualization is an important technology for the Cloud and for data centers: essential data center features such as fault tolerance, high availability, and the pay-as-you-go service model are implemented with the help of VMs. The Cloud has become one of the significant innovations of the past decade, and research is ongoing into deploying newer and more diverse classes of applications, such as High-Performance Computing (HPC) and parallel applications, on the Cloud. The primary method for increasing server resource utilization is VM consolidation: running as many VMs as possible on a server is the key to improving utilization. On the other hand, consolidating too many VMs on a server can degrade the performance of all of them. It is therefore necessary to measure, analyze, and find ways to predict the performance variation of consolidated VMs. This dissertation investigates the causes of performance variation of consolidated VMs, the relationship between resource contention and consolidation performance, and ways to predict the performance variation. Experiments were conducted with real virtualized servers, without any simulation; all results presented are real system data. The dissertation introduces a methodology for experimenting with large numbers of tasks and VMs, called the Incremental Consolidation Benchmarking Method (ICBM). Experiments were carried out with different types of resource-intensive tasks, parallel workflows, and VMs. Furthermore, a scheduling framework was designed and implemented to experiment with a large number of VMs and collect the data. Experimental results demonstrate the efficiency of the ICBM and the framework.
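    The abstract names the ICBM without detailing its mechanics. One plausible reading, starting one more VM at each step and recording benchmark performance at every consolidation level, is sketched below; the virsh domain names, SSH access, and the sysbench command are illustrative assumptions, not the dissertation's actual framework.

    #!/bin/bash
    # Hypothetical sketch of an incremental consolidation loop in the
    # spirit of ICBM: add one VM at a time and record how long the
    # benchmark takes at each consolidation level. Domain names
    # (vm1..vm8) and the benchmark command are assumptions.

    BENCH="sysbench cpu --time=60 run"   # assumed guest benchmark
    RESULTS=icbm_results.csv
    echo "vms,seconds" > "$RESULTS"

    for n in $(seq 1 8); do
        virsh start "vm$n"               # consolidate one more VM
        sleep 30                         # let the guest boot and settle

        # Run the benchmark on every running VM and time until the
        # slowest guest finishes.
        start=$(date +%s)
        for i in $(seq 1 "$n"); do
            ssh "vm$i" "$BENCH" > /dev/null &
        done
        wait
        echo "$n,$(( $(date +%s) - start ))" >> "$RESULTS"
    done

    Plotting seconds against the number of co-located VMs then exposes the degradation point that the dissertation's measurements and models aim to characterize.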

    Raman spectroscopy as a tool for studying bacterial cell compounds

    Raman spectroscopy is an attractive tool for microbial analysis because of the very small sample volumes needed, the minimal sample preparation, and the speed of analysis. Moreover, Raman spectra provide information on all Raman-active molecules in the bacterial cell. The technique has already proven successful for bacterial identification at the species and strain level; for this purpose, Raman spectroscopy is used as a fingerprinting technique. However, these spectra also contain valuable information about the biochemical composition of the cells. Because bacterial Raman spectra are the sum of signals from all Raman-active cell compounds, they are complex. Although some bands have been assigned in the literature to specific biomolecules or groups of biomolecules, only very few published studies use Raman spectroscopy to study specific bacterial cell compounds. The aim of this work was therefore to develop methods for extracting information from these complex bacterial spectra.