4 research outputs found

    Integrated Optimization of IT Service Performance and Availability Using Performability Prediction Models

    Optimizing the performance and the availability of an IT service at the design stage is typically treated as two independent tasks. However, since the two aspects are related, these activities can be combined by applying performability models, in which both the performance and the availability of a service can be predicted more accurately. In this paper, a design optimization problem for IT services is defined and applied in two scenarios, one of which considers a mechanism in which redundant components are used both for failover and for handling overload situations. The results show that including such aspects, which affect both availability and performance, in prediction models can lead to more cost-effective service designs. Performability prediction models are therefore one way to combine performance and availability management for IT services.
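As an illustration of the kind of trade-off the abstract describes, the following minimal Python sketch evaluates a performability-style measure for a hypothetical design: expected delivered throughput is obtained by weighting the performance of each possible number of operational servers by its probability, so spare capacity counts both towards failover and towards absorbing load. The paper's actual models, mechanisms and parameters are not given in the abstract; every function name and figure below is invented for illustration only.

# Minimal performability sketch (illustrative only; the paper's models and
# figures are not given in the abstract). A design has n identical servers,
# each independently up with probability `avail`; servers that are up serve
# load, so redundancy helps with both failures and overload.
from math import comb

def performability(n, avail, per_server_tput, demand):
    """Expected delivered throughput: performance weighted by availability."""
    expected = 0.0
    for k in range(n + 1):                           # k servers operational
        p_k = comb(n, k) * avail**k * (1 - avail)**(n - k)
        expected += p_k * min(k * per_server_tput, demand)
    return expected

def total_cost(n, server_cost, penalty_per_unit, per_server_tput, demand, avail):
    """Design cost = hardware cost + penalty for expected undelivered demand."""
    shortfall = demand - performability(n, avail, per_server_tput, demand)
    return n * server_cost + penalty_per_unit * shortfall

# Compare two hypothetical designs: two large servers vs. three smaller ones.
for label, n, tput, unit_cost in [("2 large", 2, 600, 40), ("3 small", 3, 400, 30)]:
    cost = total_cost(n, unit_cost, 2.0, tput, demand=1000, avail=0.98)
    print(label, "expected total cost:", round(cost, 2))

A purely availability-driven analysis would size redundancy for failover alone; folding load handling into the same model, as sketched here, is what can shift the cost ranking of candidate designs.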

    Risk-based reliability allocation at component level in non-repairable systems by using evolutionary algorithm

    The approach for setting system reliability in the risk-based reliability allocation (RBRA) method is driven solely by the amount of 'total losses' (the sum of reliability investment and risk of failure) associated with a non-repairable system failure. For a system consisting of many components, reliability allocation by the RBRA method becomes a very complex combinatorial optimisation problem, particularly if a large number of alternatives, with different levels of reliability and associated cost, is considered for each component. Furthermore, the complexity of this problem is magnified when the relationship between cost and reliability is assumed to be nonlinear and non-monotone. An optimisation algorithm (OA) is therefore developed in this research to demonstrate the solution of such difficult problems. The core design of the OA originates from the fundamental concepts of basic evolutionary algorithms, which are well known for emulating the natural process of evolution when solving complex optimisation problems through computer simulation of key genetic operations such as 'reproduction', 'crossover' and 'mutation'. However, the OA has been designed with a significantly different model of evolution (for identifying valuable parent solutions and subsequently turning them into even better child solutions) compared to the classical genetic model, in order to ensure rapid and efficient convergence of the search towards an optimum solution. The vital features of this model are the generation of all populations (samples) with unique chromosomes (solutions), working exclusively with the elite chromosomes in each iteration, and the application of prudently designed genetic operators to the elite chromosomes with extra emphasis on the mutation operation. For each possible combination of alternatives, both the system reliability and the cost of failure are computed by means of a Monte Carlo simulation technique. For validation purposes, the optimisation algorithm is first applied to an already published reliability optimisation problem with a constraint on a target level of system reliability, which has to be achieved at minimum system cost. After successful validation, the viability of the OA is demonstrated by applying it to optimise four different non-repairable sample systems under the risk-based reliability allocation method. Each system is assumed to have a discrete choice of component data sets, showing a monotonically increasing relationship between cost and reliability among the alternatives, and a fixed amount associated with the cost of failure. While this optimisation process is the main objective of the research study, two variations are also introduced for the purpose of parametric studies. To study the effects of changes in the reliability investment on system reliability and total loss, the first variation uses a different discrete data set exhibiting a non-monotonically increasing relationship between cost and reliability among the alternatives. To study the effects of the risk of failure, the second variation introduces a different cost of failure amount associated with a given non-repairable system failure. The optimisation processes show very interesting relationships between system reliability and total loss.
For instance, it is observed that while maximum reliability is generally associated with high total loss and low risk of failure, the minimum observed value of the total loss is not always associated with minimum system reliability. The results therefore exhibit various levels of system reliability and total loss, with both values showing strong sensitivity to the selected combination of component alternatives. The first parametric study shows that the second data set (non-monotone) creates more opportunities for the optimisation process to produce better values of the loss function, since cheaper components with higher reliabilities can be selected with higher probability. The second parametric study shows that reducing the cost of failure amount reduces the risk of failure, which in turn increases the chances of using cheaper components with lower levels of reliability and hence produces lower values of the loss function. The research study concludes that the risk-based reliability allocation method, together with the optimisation algorithm, can be used as a powerful tool for highlighting the various levels of system reliability, with their associated total losses, for any given system under consideration. This notion can be further extended to selecting the optimal system configuration from various competing topologies. With such information to hand, reliability engineers can streamline complicated system designs in view of the required level of system reliability with the minimum associated total cost of premature failure. In all cases studied, the run time of the optimisation algorithm increases linearly with the complexity of the problem and, owing to its unique model of evolution, it appears to conduct a very detailed multi-directional search across the solution space in fewer generations - a very important attribute for solving the kind of problem studied in this research. Consequently, it converges rapidly towards the optimum solution, unlike the classical genetic algorithm, which reaches the optimum gradually, when successful. The research also identifies key areas for future development, with scope to expand in various other dimensions owing to its interdisciplinary applications.
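The abstract names the loss function and the main ingredients of the optimisation algorithm (discrete component alternatives, Monte Carlo evaluation of system reliability, elite-only selection, unique chromosomes, mutation-heavy variation) without giving its exact design. The Python sketch below assembles those ingredients for a hypothetical series system with made-up component alternatives; it is a simplified illustration under those assumptions, not the thesis's algorithm, and names such as ALTERNATIVES, total_loss and evolve are invented here.

# Hedged sketch of risk-based reliability allocation with an elitist
# evolutionary search, assembling the ingredients named in the abstract.
# It assumes a simple series system; the component alternatives, costs and
# algorithm parameters below are hypothetical.
import random

# Discrete alternatives per component: (reliability, cost). Hypothetical data.
ALTERNATIVES = [
    [(0.90, 10), (0.95, 18), (0.990, 40), (0.995, 70)],
    [(0.85, 8),  (0.93, 20), (0.970, 35), (0.990, 55)],
    [(0.92, 12), (0.96, 22), (0.985, 45), (0.995, 75)],
    [(0.88, 9),  (0.94, 19), (0.980, 38), (0.990, 60)],
]
COST_OF_FAILURE = 500.0   # fixed loss incurred if the non-repairable system fails

def system_reliability(chromosome, trials=5000):
    """Monte Carlo estimate of series-system reliability for one design."""
    survived = 0
    for _ in range(trials):
        if all(random.random() < ALTERNATIVES[i][g][0]
               for i, g in enumerate(chromosome)):
            survived += 1
    return survived / trials

def total_loss(chromosome):
    """RBRA loss: reliability investment plus risk of failure."""
    investment = sum(ALTERNATIVES[i][g][1] for i, g in enumerate(chromosome))
    risk = COST_OF_FAILURE * (1.0 - system_reliability(chromosome))
    return investment + risk

def evolve(pop_size=12, elites=4, generations=10, mutation_rate=0.4):
    n = len(ALTERNATIVES)
    fitness = {}                                    # chromosome -> evaluated loss

    def evaluate(c):
        if c not in fitness:
            fitness[c] = total_loss(c)
        return fitness[c]

    population = []
    while len(population) < pop_size:               # unique initial chromosomes
        c = tuple(random.randrange(len(a)) for a in ALTERNATIVES)
        if c not in fitness:
            evaluate(c)
            population.append(c)

    for _ in range(generations):
        elite = sorted(population, key=evaluate)[:elites]
        children, attempts = [], 0
        while len(children) < pop_size and attempts < 500:
            attempts += 1
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, n)
            child = list(p1[:cut] + p2[cut:])       # one-point crossover of elites
            for i in range(n):                      # mutation-heavy variation
                if random.random() < mutation_rate:
                    child[i] = random.randrange(len(ALTERNATIVES[i]))
            child = tuple(child)
            if child not in fitness:                # every chromosome stays unique
                evaluate(child)
                children.append(child)
        population = children if len(children) >= 2 else elite
    best = min(fitness, key=fitness.get)
    return best, fitness[best]

if __name__ == "__main__":
    design, loss = evolve()
    print("best design (alternative index per component):", design)
    print("estimated total loss:", round(loss, 2))

Because every evaluated chromosome is kept with its loss, the search naturally exposes the spread of reliability versus total loss that the abstract discusses, not just the single best design.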

    Medidas de productividad en los proyectos de desarrollo de software: una aproximación por puestos de trabajo (Productivity measures in software development projects: a job-position-level approach)

    Productivity is mainly an economic measure, created at the end of the eighteenth century. Since then, its original definition has been modified many times and adopted by a range of disciplines. Within Software Engineering (SE), productivity began to be studied in the late 1970s, almost in parallel with the emergence of the discipline itself and with related topics such as effort estimation. Productivity measurement in SE has been analysed extensively at the project and organisation levels, but far less at the level of individual job positions. In the few studies that address the job level, the measures used tend to be the same ones applied at higher levels of measurement: ratios between a measure of product size (e.g., lines of code or function points) and a measure of effort or time (e.g., man-hours or hours). Such measures are very specific and do not reflect the reality of the work performed across the whole development process, because they ignore the characteristics inherent to each job position. The effectiveness of these measures at this level is therefore questionable, and studies that propose new job-level productivity measures for SE are worthwhile.

This thesis analyses the current state of productivity measurement in SE at the job level with the goal of creating new measures. To that end, the state of the art was surveyed using a classical literature review together with a systematic literature review. From this analysis, a set of hypotheses about the construction of new productivity measures was stated. Hypothesis 1: the jobs involved in software development projects use inputs other than time and effort. Hypothesis 2: the inputs used differ between the jobs involved in software development projects. Hypothesis 3: the jobs involved in software development projects produce outputs other than lines of code and functionality. Hypothesis 4: the outputs produced differ between those jobs. Hypothesis 5: the productivity measures most commonly used at the job level in software development projects have limited effectiveness for measuring the real productivity of workers. Hypothesis 6: the productivity of jobs in software development projects can be measured more effectively with new measures that combine several elements: inputs, outputs and factors.

After the state of the art was analysed, a qualitative phase was carried out, based on interviews with SE workers and a subsequent content analysis, with two aims: (1) to examine the first four hypotheses with qualitative information, and (2) to build the data-collection instrument for the next phase of the research. Regarding the first aim, two hypotheses (H1 and H3) could be supported. In the second, quantitative phase, the first four hypotheses were tested using a questionnaire built from the results of the qualitative phase. The results indicate that the job positions analysed (programmer, analyst, consultant and project manager) use resources other than time and produce outputs other than source code and the functionality delivered to the client. Differences were also found in the degree to which inputs are used and outputs are produced, so using a single productivity measure for all the positions under study is, in principle, not justified.

To test the last two hypotheses, new productivity measures were built from these results. Specifically, Data Envelopment Analysis (DEA) was used as a customisable methodology for measuring productivity, and four case studies were carried out with it. The case studies indicate that DEA can measure the productivity of job positions involved in software development and maintenance projects more effectively than the most commonly used measures. In addition, DEA identifies the points of improvement that the least productive workers need to address in order to increase their productivity, which is a major advantage over other productivity measures if the goal of measuring is, as one would expect, to improve productivity rather than merely to evaluate it. The last two hypotheses are thus supported, and further studies comparing DEA with other productivity measures are encouraged, among other future lines of research.

Finally, it is concluded that measuring the productivity of job positions involved in software development and maintenance projects remains a challenge. To reduce this difficulty, this thesis provides a framework for analysing and proposing new productivity measures, both for these job positions and for others.

Chair: María Belén Ruiz Mezcua; Committee member: Rafael Valencia García; Secretary: Edmundo Tovar Car
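Data Envelopment Analysis, which the thesis uses to build the new job-level measures, is not specified in detail in the abstract. The Python sketch below shows one common DEA variant, an input-oriented CCR model in multiplier form solved as a linear programme with scipy.optimize.linprog: each worker's efficiency is the best achievable ratio of weighted outputs to weighted inputs, with the weights chosen in that worker's own favour. The inputs, outputs and figures are invented for illustration and are not the ones used in the four case studies.

# Hedged DEA sketch (input-oriented CCR, multiplier form). Workers are the
# decision-making units; the input and output data below are hypothetical.
import numpy as np
from scipy.optimize import linprog

# Inputs:  hours worked, meetings attended.
# Outputs: features delivered, documents produced.
X = np.array([[160.0, 12.0],    # worker A
              [150.0, 20.0],    # worker B
              [170.0,  8.0]])   # worker C
Y = np.array([[ 10.0,  4.0],
              [  7.0,  6.0],
              [ 12.0,  3.0]])

def ccr_efficiency(o):
    """Efficiency of unit o: max u.y_o  s.t.  v.x_o = 1 and u.y_j - v.x_j <= 0."""
    n, m = X.shape                                    # units, inputs
    s = Y.shape[1]                                    # outputs
    # Decision variables z = [u (output weights), v (input weights)].
    c = np.concatenate([-Y[o], np.zeros(m)])          # maximise u.y_o
    A_ub = np.hstack([Y, -X])                         # u.y_j - v.x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
    b_eq = [1.0]                                      # normalisation v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun                                   # efficiency score in (0, 1]

for i, name in enumerate(["A", "B", "C"]):
    print(f"worker {name}: efficiency = {ccr_efficiency(i):.3f}")

A feature of DEA that matches the abstract's point about improvement targets is that, in its envelopment (dual) form, the model also returns a reference set and slacks for each inefficient unit, i.e. concrete input reductions or output increases that would move that worker onto the efficient frontier.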
