    COMPARATIVE ANALYSIS OF SOFTWARE EFFORT ESTIMATION USING DATA MINING TECHNIQUE AND FEATURE SELECTION

    Software development involves several interrelated factors that influence development effort and productivity. Improving the estimation techniques available to project managers would enable more effective time and budget control in software development, and software effort (or cost) estimation can help a software development company overcome the difficulties it experiences in estimating development effort. This study compares the machine learning methods Linear Regression (LR), Multilayer Perceptron (MLP), Radial Basis Function (RBF) and Decision Tree Random Forest (DTRF) for estimating software cost/effort. These approaches are tested on 10 software project datasets to establish which methods estimate software effort most accurately, and whether using Particle Swarm Optimization (PSO) for attribute selection improves accuracy compared with omitting it. Of the data mining algorithms tested, Linear Regression produces the most accurate effort estimates, with an average RMSE of 1603.024 over the 10 datasets; adding PSO feature selection reduces the average RMSE to 1552.999. The results indicate that, compared with the plain linear regression model, applying PSO feature selection reduces the error of software effort estimation by 3.12%.
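
    A minimal sketch, in Python with scikit-learn assumed, of the kind of pipeline this abstract describes: a binary particle swarm searches over feature subsets, scoring each subset by the cross-validated RMSE of a linear regression. The synthetic data, swarm parameters and helper names are all illustrative, not the study's.

    # PSO feature selection around linear regression (illustrative sketch).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 8))   # 60 projects, 8 candidate cost drivers (synthetic)
    y = 3 * X[:, 0] + 2 * X[:, 2] + rng.normal(scale=0.5, size=60)  # synthetic effort

    def rmse_for_mask(mask):
        """Cross-validated RMSE of linear regression on the selected features."""
        if not mask.any():
            return np.inf
        scores = cross_val_score(LinearRegression(), X[:, mask], y, cv=5,
                                 scoring="neg_root_mean_squared_error")
        return -scores.mean()

    # Binary PSO: each particle holds per-feature inclusion scores in [0, 1];
    # a feature counts as selected when its score exceeds 0.5.
    n_particles, n_iters, dim = 10, 30, X.shape[1]
    pos = rng.random((n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([rmse_for_mask(p > 0.5) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        for i, p in enumerate(pos):
            cost = rmse_for_mask(p > 0.5)
            if cost < pbest_cost[i]:
                pbest[i], pbest_cost[i] = p.copy(), cost
        gbest = pbest[pbest_cost.argmin()].copy()

    print("selected features:", np.flatnonzero(gbest > 0.5))
    print("cross-validated RMSE:", pbest_cost.min())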

    Effort estimation for object-oriented system using artificial intelligence techniques

    Software effort estimation is a vital task in software engineering. It becomes critical during the early stages of the software life cycle, when the details of the software have not yet been revealed, and the effort involved in developing a software product plays an important role in determining its success or failure. With the proliferation of software projects and the heterogeneity of their genres, efficient effort estimation techniques are needed to enable project managers to plan software life cycle activities properly. In the context of developing software with object-oriented methodologies, traditional methods and metrics have been extended to help managers with effort estimation. Several point-based approaches are available, such as Function Points, Use Case Points, Class Points and Object Points. The main goal of this thesis is to estimate the effort of various software projects using the Class Point approach. Its parameters are optimized with various artificial intelligence (AI) techniques, namely the Multi-Layer Perceptron (MLP), K-Nearest Neighbor regression (KNN) and the Radial Basis Function Network (RBFN), as well as fuzzy logic combined with clustering algorithms such as Fuzzy C-Means (FCM), K-means and Subtractive Clustering (SC), so as to achieve better accuracy. A comparative analysis of software effort estimation using these AI techniques is also provided. By estimating software projects accurately, we can deliver software of acceptable quality within budget and on schedule.
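
    As a rough illustration of a point-based size measure feeding an AI estimator, the sketch below (Python, scikit-learn assumed) maps a simplified class-point count to effort with k-nearest-neighbor regression. The complexity weights, class types and toy projects are invented for demonstration; the thesis's actual Class Point weights and datasets differ.

    # Simplified class-point count + KNN regression (illustrative sketch).
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    # Hypothetical weights per class type (problem domain, human interaction,
    # data management, task management), loosely in the spirit of Class Points.
    WEIGHTS = {"PDT": 6, "HIT": 5, "DMT": 8, "TMT": 7}

    def class_points(class_counts):
        """Weighted sum of class counts, e.g. {'PDT': 12, 'HIT': 4, ...}."""
        return sum(WEIGHTS[t] * n for t, n in class_counts.items())

    # Toy historical projects: class counts and known effort in person-months.
    projects = [
        ({"PDT": 10, "HIT": 3, "DMT": 2, "TMT": 1}, 14.0),
        ({"PDT": 25, "HIT": 8, "DMT": 5, "TMT": 3}, 38.0),
        ({"PDT": 40, "HIT": 12, "DMT": 9, "TMT": 6}, 61.0),
        ({"PDT": 18, "HIT": 5, "DMT": 4, "TMT": 2}, 25.0),
    ]
    X = np.array([[class_points(c)] for c, _ in projects])
    y = np.array([e for _, e in projects])

    knn = KNeighborsRegressor(n_neighbors=2).fit(X, y)
    new_project = {"PDT": 22, "HIT": 6, "DMT": 4, "TMT": 2}
    print("predicted effort:", knn.predict([[class_points(new_project)]]))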

    Prediction of River Discharge by Using Gaussian Basis Function

    River discharge data are vital for the design of water resources engineering projects such as hydraulic structures (dams, barrages and weirs). However, prediction of river discharge is complicated by variations in geometry and boundary roughness, and conventional estimation methods tend to be inaccurate because they are linear while river discharge is nonlinear. An alternative prediction method is therefore required. Soft computing techniques such as artificial neural networks (ANNs) are able to model nonlinear quantities such as river discharge. In this study, river discharge in the Pari River is predicted with a soft computing technique, specifically a Gaussian basis function network. Raw water level data from 2011 to 2012 serve as input. Of the 314 records, 200 were allocated for training and 100 for testing, and the models were run in Matlab. Three input variables were used, namely the current water level and the 1- and 2-step antecedent water levels; the output variable was river discharge. After a number of trials, a network with 19 hidden neurons and a spread value of 0.69106 produced the best model architecture. Performance was evaluated with the root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE) and coefficient of determination (R2). R2 for the training dataset was 0.983, showing that predicted discharge is highly correlated with observed discharge. Testing performance declined from the training stage, with an R2 of 0.775: the presence of outliers scattered the testing data and reduced accuracy, and fewer records were loaded into testing than into training. The RMSE and MAE recorded for training were much lower than for testing, indicating better model performance where the error is smaller. A comparison with other types of neural networks showed that the Gaussian basis function network is recommended for river discharge prediction in the Pari River.
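
    The sketch below shows one way to realize a Gaussian basis function network of the kind described: lagged water levels as inputs, 19 Gaussian hidden units with the reported spread of 0.69106, and linear output weights fitted by least squares. The water-level series is synthetic and the inputs are standardized; both are assumptions of this illustration, not details from the study.

    # Gaussian basis function network for discharge prediction (sketch).
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in for the 2011-2012 Pari River water-level series.
    level = 50.0 + np.cumsum(rng.normal(scale=0.2, size=320))
    discharge = 0.01 * level ** 2 + rng.normal(scale=0.5, size=level.size)

    # Inputs: current level plus the 1- and 2-step antecedent levels.
    X = np.column_stack([level[2:], level[1:-1], level[:-2]])
    y = discharge[2:]
    X = (X - X[:200].mean(axis=0)) / X[:200].std(axis=0)  # standardize (assumed)
    X_train, y_train, X_test, y_test = X[:200], y[:200], X[200:300], y[200:300]

    def rbf_design(X, centers, spread):
        """Gaussian activations exp(-||x - c||^2 / (2 * spread^2))."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * spread ** 2))

    # 19 hidden neurons, spread 0.69106 as reported; centers are taken here
    # as a random subset of the training points.
    centers = X_train[rng.choice(len(X_train), size=19, replace=False)]
    H = np.column_stack([rbf_design(X_train, centers, 0.69106), np.ones(200)])
    w, *_ = np.linalg.lstsq(H, y_train, rcond=None)  # linear output weights

    H_test = np.column_stack([rbf_design(X_test, centers, 0.69106), np.ones(100)])
    pred = H_test @ w
    rmse = np.sqrt(np.mean((pred - y_test) ** 2))
    r2 = 1.0 - np.sum((pred - y_test) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
    print(f"testing RMSE = {rmse:.3f}, R2 = {r2:.3f}")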

    Representations of molecules and materials for interpolation of quantum-mechanical simulations via machine learning

    Computational study of molecules and materials from first principles is a cornerstone of physics, chemistry and materials science, but limited by the cost of accurate and precise simulations. In settings involving many simulations, machine learning can reduce these costs, sometimes by orders of magnitude, by interpolating between reference simulations. This requires representations that describe any molecule or material and support interpolation. We review, discuss and benchmark state-of-the-art representations and relations between them, including smooth overlap of atomic positions, many-body tensor representation, and symmetry functions. For this, we use a unified mathematical framework based on many-body functions, group averaging and tensor products, and compare energy predictions for organic molecules, binary alloys and Al-Ga-In sesquioxides in numerical experiments controlled for data distribution, regression method and hyper-parameter optimization.
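
    As a toy illustration of the ingredients this review discusses (an invariant many-body representation plus a regression method), the sketch below builds a two-body radial descriptor that is invariant to permutation, rotation and translation, then interpolates a smooth synthetic energy with kernel ridge regression. It is far simpler than SOAP or MBTR, and the structures, energy function and hyper-parameters are invented.

    # Invariant 2-body descriptor + kernel ridge regression (toy sketch).
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(2)

    def pair_distances(positions):
        d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
        return d[np.triu_indices_from(d, k=1)]

    def radial_descriptor(positions, etas=(0.25, 0.5, 1.0, 2.0)):
        """Sums of Gaussians over interatomic distances: permutation-,
        rotation- and translation-invariant two-body features."""
        d = pair_distances(positions)
        return np.array([np.exp(-eta * d ** 2).sum() for eta in etas])

    def toy_energy(positions):
        """Smooth synthetic pairwise 'energy' used as the regression target."""
        return float(np.exp(-pair_distances(positions)).sum())

    structures = [rng.uniform(0.0, 3.0, size=(5, 3)) for _ in range(200)]
    X = np.array([radial_descriptor(p) for p in structures])
    y = np.array([toy_energy(p) for p in structures])

    model = KernelRidge(kernel="rbf", alpha=1e-8, gamma=0.1).fit(X[:150], y[:150])
    mae = np.abs(model.predict(X[150:]) - y[150:]).mean()
    print(f"test MAE: {mae:.4f}")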

    OPTIMIZATION TECHNIQUE FOR SOFTWARE COST ESTIMATION USING NEURAL NETWORK

    Although software effort estimation models have been developed over the last few decades, accurate estimates of the effort for a software project under development are still an unachievable goal. Recently, researchers have been working on the development of new models and the improvement of existing ones using artificial intelligence techniques. Designing an ANN (Artificial Neural Network) to model a complex set of relationships between the dependent variable (effort) and the independent variables (cost drivers) makes it a tool for estimation. This paper presents a performance analysis of multiple ANNs in effort estimation. We have simulated a back-propagation ANN created with the MATLAB Neural Network Toolbox using a NASA dataset.
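
    A hedged sketch of the approach: a small feed-forward network trained by back-propagation to map cost drivers to effort. The paper uses the MATLAB Neural Network Toolbox on a NASA dataset; here scikit-learn's MLPRegressor and synthetic COCOMO-style data stand in, so every number is illustrative.

    # Back-propagation network for effort estimation (illustrative sketch).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    kloc = rng.uniform(5, 400, size=80)      # project size in KLOC (synthetic)
    em = rng.uniform(0.8, 1.4, size=80)      # combined effort multiplier
    effort = 2.8 * kloc ** 1.05 * em         # COCOMO-like ground truth

    X = np.column_stack([kloc, em])
    scaler = StandardScaler().fit(X[:60])
    mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    mlp.fit(scaler.transform(X[:60]), np.log(effort[:60]))  # fit log effort

    pred = np.exp(mlp.predict(scaler.transform(X[60:])))
    mmre = np.mean(np.abs(pred - effort[60:]) / effort[60:])
    print(f"MMRE on held-out projects: {mmre:.3f}")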

    Software Size and Effort Estimation from Use Case Diagrams Using Regression and Soft Computing Models

    In this research, we propose a novel model to predict software size and effort from use case diagrams. The main advantage of our model is that it can be used in the early stages of the software life cycle, which helps project managers conduct cost estimation early and thus avoid project overestimation and late delivery, among other benefits. Software size, productivity, complexity and requirements stability are the inputs of the model. The model is composed of six independent sub-models: non-linear regression, linear regression with a logarithmic transformation, a Radial Basis Function Neural Network (RBFNN), a Multilayer Perceptron Neural Network (MLP), a General Regression Neural Network (GRNN) and a Treeboost model. Several experiments were conducted to train and test the model with varying numbers of training and testing data points. The neural network models were evaluated against the regression models, as well as against two other models that conduct software estimation from use case diagrams. Results show that our model outperforms the other relevant models on five evaluation criteria. While the performance of each of the six sub-models varies with the size of the project dataset used for evaluation, the non-linear regression model outperforms the linear regression model, and the GRNN model exceeds the other neural network models. Furthermore, the experiments demonstrated that the Treeboost model can be used efficiently to predict software effort.
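
    To make the contrast between two of the sub-models concrete, the sketch below fits effort as a power law of size in two ways: by linear regression after a logarithmic transformation, and by direct non-linear least squares. The size measure, the synthetic data and the power-law form are assumptions of this illustration, not the paper's actual inputs or sub-model details.

    # Log-linear vs. non-linear regression for effort estimation (sketch).
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(4)
    size = rng.uniform(50, 500, size=40)     # use-case-style size (synthetic)
    effort = 1.7 * size ** 1.1 * rng.lognormal(sigma=0.1, size=40)

    # Sub-model 1: linear regression on log-log axes gives effort = a * size^b.
    b_lin, log_a = np.polyfit(np.log(size), np.log(effort), 1)
    print(f"log-linear fit: effort = {np.exp(log_a):.2f} * size^{b_lin:.2f}")

    # Sub-model 2: non-linear least squares fit on the original scale.
    (a_nl, b_nl), _ = curve_fit(lambda s, a, b: a * s ** b, size, effort, p0=(1, 1))
    print(f"non-linear fit: effort = {a_nl:.2f} * size^{b_nl:.2f}")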