2,622 research outputs found

    Lean Forecasting In Software Projects

    Get PDF
    When developing a software project, it is well recognized that accurate estimates of development effort play an important part in the successful management of the project. Despite its importance, developers and experts usually cannot accurately estimate the effort, time and cost of a project to be developed; this is inherent to the uncertainty that underlies their activity. After the first effort estimate is made, the project may, with some likelihood, need to adapt to evolving circumstances, which may lead to changes in its scope and consequently to managers putting pressure on the developers to meet delivery dates. In the end, the project's development will probably be delayed, and these delays affect not only the development team but also other parts of the company, such as staffing or marketing. In some situations this can cost the company time and, often, the trust of the stakeholder.
Even if the estimate is accurate enough for delivery dates to be met, methods that rely on human estimation are often time consuming, which can be a problem when teams spend precious time making estimates. To mitigate these problems, we will seek to identify the motivations and forces at play in producing an accurate estimate, and to determine which forecasting method provides the best accuracy with some degree of generalization, so as to cover the existing variety of software projects. We focus on forecasting methods because of their automatability, which will help reduce the time teams spend on estimation while still delivering accurate results. Such a method must also be easy to understand, implement and use, so the number of inputs it requires, and the difficulty of collecting them, should be low. Its output should carry an explicit level of uncertainty, in order to better represent the problem. To validate the method, a tool based on it will be developed, tested for effectiveness and accuracy against other existing methods, and integrated with software development management tools to validate its usability in real projects during their development phase. In line with this, the main goal of this dissertation is to help reduce the time spent on estimation while maintaining, or even improving, the accuracy of the predictions made and keeping the method easy to understand and use for the teams and developers applying it.
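The abstract above contrasts slow human estimation with automated, uncertainty-aware forecasting. One common lightweight technique in lean/agile settings (not necessarily the one the dissertation adopts) is a Monte Carlo forecast over historical weekly throughput; the history figures below are invented for illustration:

```python
import random

def forecast_completion(backlog_items, weekly_throughput, n_runs=10000, seed=42):
    """Monte Carlo forecast: how many weeks to finish `backlog_items`,
    resampling from observed (non-zero) weekly throughput."""
    rng = random.Random(seed)
    runs = []
    for _ in range(n_runs):
        remaining, weeks = backlog_items, 0
        while remaining > 0:
            remaining -= rng.choice(weekly_throughput)
            weeks += 1
        runs.append(weeks)
    runs.sort()
    # Report percentiles rather than a single number, so the forecast
    # carries the level of uncertainty the abstract calls for.
    return {
        "p50": runs[n_runs // 2],
        "p85": runs[int(n_runs * 0.85)],
        "p95": runs[int(n_runs * 0.95)],
    }

# Hypothetical history: items completed in each of the last 8 weeks.
history = [3, 5, 2, 4, 6, 3, 4, 5]
print(forecast_completion(30, history))
```

The only inputs are item count and past throughput, both cheap to collect, which matches the abstract's requirement of few, easily obtained inputs.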

    Modelling and planning of manpower supply and demand

    Master's thesis, Master of Engineering

    A possibilistic approach to latent structure analysis for symmetric fuzzy data.

    In many situations the available amount of data is huge and can be intractable. When the data set is single-valued, latent structure models are recognized techniques which provide a useful compression of the information. This is done by considering a regression model between observed and unobserved (latent) fuzzy variables. In this paper, an extension of latent structure analysis to deal with fuzzy data is proposed. Our extension follows the possibilistic approach, widely used in both the clustering and regression frameworks. In this case, the possibilistic approach involves the formulation of a latent structure analysis for fuzzy data by optimization. Specifically, a non-linear programming problem in which the fuzziness of the model is minimized is introduced. In order to show how our model works, the results of two applications are given.
    Keywords: latent structure analysis, symmetric fuzzy data set, possibilistic approach.
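The abstract represents observations as symmetric fuzzy numbers and minimizes the model's fuzziness. A minimal sketch of the two ingredients, with invented (center, spread) data: a triangular membership function for a symmetric fuzzy number, and a summed-spread quantity of the kind a possibilistic fit would minimize (the paper's actual objective is a non-linear program, not reproduced here):

```python
def membership(x, center, spread):
    """Triangular membership of a symmetric fuzzy number (c, s):
    1 at the center, falling linearly to 0 at c +/- s."""
    if spread == 0:
        return 1.0 if x == center else 0.0
    return max(0.0, 1.0 - abs(x - center) / spread)

def total_fuzziness(fuzzy_data):
    """Summed spreads of the fuzzy observations: the sort of term a
    possibilistic objective drives down."""
    return sum(s for _, s in fuzzy_data)

data = [(2.0, 0.5), (3.1, 0.8), (4.9, 0.3)]  # (center, spread) pairs
print(membership(2.25, 2.0, 0.5))            # 0.5
print(round(total_fuzziness(data), 2))       # 1.6
```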

    Composite Monte Carlo Decision Making under High Uncertainty of Novel Coronavirus Epidemic Using Hybridized Deep Learning and Fuzzy Rule Induction

    Full text link
    Since the advent of the novel coronavirus epidemic in December 2019, governments and authorities have been struggling to make critical decisions under high uncertainty to the best of their efforts. Composite Monte-Carlo (CMC) simulation is a forecasting method which extrapolates available data, broken down from multiple correlated/causal micro-data sources, into many possible future outcomes by drawing random samples from some probability distributions. For instance, the overall trend and propagation of the infected cases in China are influenced by the temporal-spatial data of the cities around Wuhan (where the virus originated), in terms of population density, travel mobility, medical resources such as hospital beds, the timeliness of quarantine control in each city, etc. Hence a CMC is reliable only up to how closely its underlying statistical distribution represents the behaviour of future events, and to the correctness of the composite data relationships. In this paper, a case study is presented in which a CMC enhanced by a deep learning network and fuzzy rule induction is used to gain better stochastic insight into the development of the epidemic. Instead of applying the simplistic and uniform assumptions common in MC practice, a deep learning-based CMC is used in conjunction with fuzzy rule induction techniques. As a result, decision makers benefit from better-fitted MC outputs complemented by min-max rules that foretell the extreme ranges of future possibilities with respect to the epidemic.
    Comment: 19 pages

    Composite Monte Carlo decision making under high uncertainty of novel coronavirus epidemic using hybridized deep learning and fuzzy rule induction

    Since the advent of the novel coronavirus epidemic in December 2019, governments and authorities have been struggling to make critical decisions under high uncertainty to the best of their efforts. In computer science, this represents a typical problem of machine learning over incomplete or limited data in an early epidemic. Composite Monte-Carlo (CMC) simulation is a forecasting method which extrapolates available data, broken down from multiple correlated/causal micro-data sources, into many possible future outcomes by drawing random samples from some probability distributions. For instance, the overall trend and propagation of the infected cases in China are influenced by the temporal-spatial data of the cities around Wuhan (where the virus originated), in terms of population density, travel mobility, medical resources such as hospital beds, the timeliness of quarantine control in each city, etc. Hence a CMC is reliable only up to how closely its underlying statistical distribution represents the behaviour of future events, and to the correctness of the composite data relationships. In this paper, a case study is presented in which a CMC enhanced by a deep learning network and fuzzy rule induction is used to gain better stochastic insight into the development of the epidemic. Instead of applying the simplistic and uniform assumptions common in MC practice, a deep learning-based CMC is used in conjunction with fuzzy rule induction techniques. As a result, decision makers benefit from better-fitted MC outputs complemented by min-max rules that foretell the extreme ranges of future possibilities with respect to the epidemic.
    Funding: University of Macau MYRG2016-00069-FST; FDCT Macau FDCT/126/2014/A3; 2018 Guangzhou Science and Technology Innovation and Development of Special Funds 201907010001; EF003/FST-FSJ/2019/GSTI
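The composite idea in the two abstracts above, sampling several correlated micro-inputs per run and summarizing the extreme ranges of the resulting ensemble, can be sketched in a few lines. All distributions, parameters and the composite relationship below are invented for illustration; the paper replaces such hand-picked distributions with deep-learning fits and derives the range summary via fuzzy rule induction:

```python
import random

def composite_mc(n_runs=20000, seed=1):
    """Composite Monte Carlo: each run draws micro-inputs (mobility,
    density, control delay) and combines them into one projected case
    count; the sorted ensemble of runs is the forecast."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_runs):
        mobility = rng.lognormvariate(0.0, 0.3)  # travel-mobility factor
        density = rng.gauss(1.0, 0.1)            # population-density factor
        delay = rng.uniform(0, 14)               # quarantine delay, in days
        # Invented composite relationship: growth compounds with delay.
        cases = 100 * mobility * max(density, 0.5) * (1.03 ** delay)
        outcomes.append(cases)
    outcomes.sort()
    # Min-max style summary of the extreme ranges of future possibilities.
    return {"min": outcomes[0],
            "p05": outcomes[int(0.05 * n_runs)],
            "p95": outcomes[int(0.95 * n_runs)],
            "max": outcomes[-1]}

print(composite_mc())
```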

    Finding the optimal combination of power plants alternatives: a multi response Taguchi-neural network using TOPSIS and fuzzy best-worst method

    With the increasing growth of electricity consumption in developed and developing countries, constructing and developing power plants is inevitable. There are two main resources for electricity generation, fossil and renewable energies, which differ in characteristics such as manufacturing technology, environmental issues and accessibility. In development plans, it is important to consider and address policy makers' indicators such as environmental, social, economic and technical criteria. In this paper, an integrated multi-response Taguchi-neural network-fuzzy best-worst method (FBWM)-TOPSIS approach is applied to find an optimal level for five different power plant types: gas, steam, combined cycle, wind and hydroelectric. The Taguchi method is used to design combinations and calculate some of the signal-to-noise (S/N) ratios. Then, a neural network is applied to estimate the remaining S/N ratios. Finally, the FBWM and TOPSIS methods are used for weighting sub-indicators and selecting the best combination, respectively. To illustrate the usefulness of the proposed approach, a case study on the development of power plants in Iran is considered and the results are discussed. According to the results, small power plants are generally preferable for fossil resources, whereas medium and larger power plants are preferable for renewable resources.
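Two of the pipeline stages named above have compact textbook forms: the Taguchi S/N ratio (here the larger-the-better variant) and the TOPSIS ranking. A minimal sketch with an invented 3-alternative, 2-criterion example (the paper's real criteria, weights and FBWM weighting step are not reproduced):

```python
import math

def sn_larger_the_better(ys):
    """Taguchi signal-to-noise ratio for a 'larger is better' response."""
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) by relative closeness to the ideal.
    benefit[j] is True if criterion j is larger-is-better."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    best = [max(v[i][j] for i in range(m)) if benefit[j]
            else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_best = math.sqrt(sum((v[i][j] - best[j]) ** 2 for j in range(n)))
        d_worst = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Invented example: 3 power-plant options x 2 criteria (cost, capacity).
print(topsis([[200, 50], [150, 40], [300, 80]],
             weights=[0.5, 0.5], benefit=[False, True]))
```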

    INTEGRATION OF FUZZY LOGIC METHOD AND COCOMO II ALGORITHM TO IMPROVE PREDICTION TIMELINESS AND SOFTWARE DEVELOPMENT COST

    This study discusses improving the prediction of the timeliness and cost of software development using the Constructive Cost Model II (COCOMO II) method combined with fuzzy logic. It aims to obtain accurate time and cost estimates for software development projects, in order to obtain the best possible cost results for a project. The study uses an adaptive fuzzy logic model to improve software development timeliness and cost estimates, exploiting the advantages of fuzzy set logic to produce accurate software attributes that improve the prediction of development time and cost. The fuzzy model uses a two-dimensional Gaussian membership function (2-D GMF) to represent software attributes in more detail across their range of values. The COCOMO I and NASA98 datasets, together with four project datasets from software companies in Indonesia, were used to evaluate the proposed fuzzy logic COCOMO II, known as FL-COCOMO II. Using the Mean Magnitude of Relative Error (MMRE) and the Pred evaluation technique, the results showed that FL-COCOMO II produced a lower MMRE than COCOMO I, and its Pred(25%) value was higher than COCOMO I's. In addition, FL-COCOMO II showed an 8.03% increase in prediction accuracy in terms of MMRE compared with the original COCOMO. Using the advantages of fuzzy logic, such as accurate prediction, adaptation and understandability, can improve the accuracy of software timeliness and cost estimates.
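The pieces named in the abstract are all standard and easy to sketch: a simplified COCOMO-style effort equation (the nominal COCOMO II.2000 constants are used below; the full model adds scale factors and cost drivers), a one-dimensional Gaussian membership function of the kind the paper's 2-D GMF combines over two inputs, and the MMRE metric. The dataset values are invented:

```python
import math

def cocomo_effort(kloc, a=2.94, b=0.91, eaf=1.0):
    """Simplified COCOMO II-style effort (person-months): a * KLOC^b * EAF.
    EAF aggregates the multiplicative cost drivers."""
    return a * kloc**b * eaf

def gaussian_membership(x, mean, sigma):
    """Gaussian membership function; the paper's 2-D GMF combines two
    of these over a pair of attribute axes."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma**2))

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error, the evaluation metric above."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Invented mini-evaluation: actual effort vs. model-predicted effort.
actual = [50.0, 120.0, 30.0]
pred = [cocomo_effort(k) for k in (18.0, 45.0, 11.0)]
print([round(p, 1) for p in pred], round(mmre(actual, pred), 3))
```

Pred(25%), the other metric cited, is simply the fraction of projects whose relative error falls below 0.25.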

    Prediction of Suspended Sediment Concentration in Kinta River Using Soft Computing Techniques

    The prediction of suspended sediment concentration in hyperconcentrated rivers is crucial in modeling and designing hydraulic structures such as dams and water intake inlets. In this study, suspended sediment concentration in the Kinta River is predicted using a soft computing technique, specifically a radial basis function (RBF) network. Suspended sediment concentration and stream discharge from 1992 to 1995, together with data from 2009, are used as input. The data are divided into three sections: training, testing and validation. 824 records are allocated for training, 313 for testing and 342 for validation. All data are normalized to reduce error. The input neurons are determined by correlation analysis, and the number of hidden neurons by trial and error. Only one output neuron is required: the predicted value of suspended sediment concentration. The results obtained from the RBF model are evaluated to assess its performance, measured using statistical parameters: root mean square error (RMSE), mean square error (MSE), coefficient of efficiency (CE) and coefficient of determination (R²). The RBF model performed well, producing R² values of 0.9856 and 0.9884 for the training and testing stages, respectively. However, its performance in predicting suspended sediment concentration for 2009 is poor, with an R² of 0.6934. Recommendations to improve prediction accuracy are to incorporate a wider data span and to include other hydrological parameters that may affect suspended sediment concentration.
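The core of the RBF approach above is a set of Gaussian basis functions whose weights are found by solving a linear system. A minimal one-dimensional sketch with invented stand-in data (the study's real network uses normalized multi-feature inputs and tuned hidden-layer sizes):

```python
import math

def rbf(r, width=1.0):
    """Gaussian radial basis function of the distance r."""
    return math.exp(-((r / width) ** 2))

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for square systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit_predict(xs, ys, query, width=1.0):
    """Exact RBF interpolation: one Gaussian centred on each training point."""
    A = [[rbf(abs(xi - xj), width) for xj in xs] for xi in xs]
    w = solve(A, ys)
    return sum(wi * rbf(abs(query - xi), width) for wi, xi in zip(w, xs))

# Invented stand-in for discharge -> sediment-concentration pairs.
xs, ys = [0.0, 1.0, 2.0, 3.0], [10.0, 14.0, 25.0, 60.0]
print(round(rbf_fit_predict(xs, ys, 1.5), 2))
```

Exact interpolation reproduces the training targets at the training points, which is why the study evaluates on held-out testing and validation data rather than on the training fit.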

    Conservative and aggressive rough SVR modeling

    Support vector regression provides an alternative to neural networks in modeling non-linear real-world patterns. Rough values, with a lower and an upper bound, are needed whenever the variables under consideration cannot be represented by a single value. This paper describes two approaches to modeling rough values with support vector regression (SVR). One approach, which attempts to ensure that the predicted high value is not greater than the upper bound and that the predicted low value is not less than the lower bound, is conservative in nature. By contrast, we also propose an aggressive approach, seeking a predicted high that is not less than the upper bound and a predicted low that is not greater than the lower bound. The proposal is shown to use ϵ-insensitivity to provide a more flexible version of lower and upper possibilistic regression models. The usefulness of our work is demonstrated by modeling the rough pattern of a stock market index, which can be taken advantage of by both conservative and aggressive traders.
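The conservative/aggressive distinction above amounts to making the ϵ-insensitive SVR loss one-sided. A minimal sketch of the loss shapes for the upper bound (the paper's full method trains an SVR under these penalties; only the pointwise losses are illustrated here):

```python
def eps_insensitive(residual, eps):
    """Standard SVR loss: errors within +/-eps cost nothing."""
    return max(0.0, abs(residual) - eps)

def conservative_upper_loss(pred_high, upper, eps):
    """Conservative fit of the upper bound: penalize only predictions
    that exceed the observed upper value (beyond the eps tube)."""
    return max(0.0, (pred_high - upper) - eps)

def aggressive_upper_loss(pred_high, upper, eps):
    """Aggressive fit: penalize predictions that fall short of the
    observed upper value instead."""
    return max(0.0, (upper - pred_high) - eps)

# Predicted high of 10 vs. observed upper bound of 9, eps = 0.5:
print(eps_insensitive(10 - 9, 0.5))         # 0.5
print(conservative_upper_loss(10, 9, 0.5))  # 0.5 (overshoot is punished)
print(aggressive_upper_loss(10, 9, 0.5))    # 0.0 (overshoot is fine)
```

The lower-bound losses mirror these with the inequality directions flipped.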