Is One Hyperparameter Optimizer Enough?
Hyperparameter tuning is the black art of automatically finding a good combination of control parameters for a data miner. While widely applied in empirical software engineering, there has been little discussion of which hyperparameter tuner is best for software analytics. To address this gap in the literature, this paper applies a range of hyperparameter optimizers (grid search, random search, differential evolution, and Bayesian optimization) to the defect prediction problem. Surprisingly, no hyperparameter optimizer was observed to be 'best' and, for one of the two evaluation measures studied here (F-measure), hyperparameter optimization was no better than using default configurations in 50% of cases.
We conclude that hyperparameter optimization is more nuanced than previously
believed. While such optimization can certainly lead to large improvements in
the performance of classifiers used in software analytics, it remains to be
seen which specific optimizers should be applied to a new dataset.
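For a concrete sense of the comparison, here is a minimal sketch, not the paper's experimental setup, pitting two of the tuners studied (grid search and random search) against a default configuration of a decision-tree defect predictor; the synthetic dataset and the parameter ranges are illustrative assumptions.

import numpy as np
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

# Stand-in for a defect dataset: rows are modules, the label marks defective ones.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {"max_depth": [3, 6, 12, None], "min_samples_leaf": [1, 2, 4, 8]}
param_dist = {"max_depth": randint(2, 20), "min_samples_leaf": randint(1, 10)}

default = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
grid = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid,
                    scoring="f1", cv=5).fit(X_train, y_train)
rand = RandomizedSearchCV(DecisionTreeClassifier(random_state=0), param_dist,
                          n_iter=16, scoring="f1", cv=5,
                          random_state=0).fit(X_train, y_train)

# Mirroring the paper's finding: tuning does not always beat defaults on F-measure.
for name, model in [("default", default), ("grid", grid), ("random", rand)]:
    print(name, round(f1_score(y_test, model.predict(X_test)), 3))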
Numerical Simulation and Design of Ensemble Learning Based Improved Software Development Effort Estimation System
This paper proposes a novel approach to improving software development effort estimation by integrating ensemble learning algorithms with numerical simulation techniques. The objective is to design an ensemble learning-based software development effort estimation system that leverages the strengths of multiple algorithms to enhance accuracy and reliability. The proposed system combines ensemble learning, which aggregates predictions from multiple models, with numerical simulation techniques that enable the modelling and analysis of complex software development processes. A diverse set of software development projects is collected, encompassing various domains, sizes, and complexities. The ensemble learning algorithms Random Forest, Gradient Boosting, Bagging, and AdaBoost are selected for their ability to capture different aspects of the data and produce robust predictions. The system architecture is presented, illustrating the flow of data between components, and a model training and evaluation pipeline is developed that integrates the ensemble learning and numerical simulation modules. The system combines the predictions generated by the ensemble models with the simulation results to produce more accurate and reliable effort estimates. The experimental evaluation uses a real-world dataset of historical project data, with performance metrics including Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). The results demonstrate that the ensemble learning-based effort estimation system outperforms traditional techniques, showing its potential to enhance project planning and resource allocation.
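As a rough illustration of the estimation pipeline, the following sketch fits the four named ensemble regressors and scores them with MAE and RMSE. It is an assumption-laden stand-in: synthetic project data replaces the historical dataset, and plain prediction averaging replaces the paper's simulation-integrated combination.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import (AdaBoostRegressor, BaggingRegressor,
                              GradientBoostingRegressor, RandomForestRegressor)
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

# Hypothetical features: project size, team size, complexity, and so on.
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = [RandomForestRegressor(random_state=0),
          GradientBoostingRegressor(random_state=0),
          BaggingRegressor(random_state=0),
          AdaBoostRegressor(random_state=0)]

# Simple averaging of the four models' effort predictions.
preds = np.mean([m.fit(X_train, y_train).predict(X_test) for m in models], axis=0)

print("MAE :", round(mean_absolute_error(y_test, preds), 2))
print("RMSE:", round(np.sqrt(mean_squared_error(y_test, preds)), 2))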
Data Mining and Machine Learning for Software Engineering
Software engineering is one of the most fruitful application areas for data mining. Developers have attempted to improve software quality by mining and analyzing software data. Every phase of the software development life cycle (SDLC) produces a huge amount of data, and design, security, or other software problems may arise along the way. Analyzing software data in the early phases of development helps to handle these problems and leads to more accurate and timely delivery of software projects. Various data mining and machine learning studies have been conducted to deal with software engineering tasks such as defect prediction and effort estimation. This study presents the open issues in applying data mining and machine learning techniques to software engineering, along with related solutions and recommendations.
Multilevel Weighted Support Vector Machine for Classification on Healthcare Data with Missing Values
This work is motivated by the needs of predictive analytics on healthcare data as represented by Electronic Medical Records. Such data is invariably problematic: noisy, with missing entries, and with imbalance in the classes of interest, leading to serious bias in predictive modeling. Since standard data mining methods often produce poor performance measures, we argue for the development of specialized techniques for data preprocessing and classification. In this paper, we propose a new method to simultaneously classify large datasets and reduce the effects of missing values. It is based on a multilevel framework of the cost-sensitive SVM and the expectation-maximization (EM) imputation method for missing values, which relies on iterated regression analyses. We compare the classification results of multilevel SVM-based algorithms on public benchmark datasets with imbalanced classes and missing values, as well as on real data from health applications, and show that our multilevel SVM-based method produces faster, more accurate, and more robust classification results.
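The two ingredients named in the abstract can be sketched as follows. This is not the authors' multilevel framework: scikit-learn's IterativeImputer (regression-based iterative imputation) stands in for their EM-style imputer, and the synthetic imbalanced data with injected missingness is an illustrative assumption.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Imbalanced classes (roughly 90/10), as in the EMR setting the paper targets.
X, y = make_classification(n_samples=1000, n_features=10, weights=[0.9], random_state=0)

# Knock out 10% of entries at random to simulate missing values.
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.1] = np.nan

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Iterated-regression imputation, then an SVM whose misclassification cost
# is weighted inversely to class frequency (the cost-sensitive ingredient).
clf = make_pipeline(IterativeImputer(random_state=0),
                    SVC(class_weight="balanced"))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))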
A Study of Text Mining Framework for Automated Classification of Software Requirements in Enterprise Systems
Text classification is a rapidly evolving area of data mining, while requirements engineering is a less-explored area of software engineering that deals with the process of defining, documenting, and maintaining a software system's requirements. When researchers decided to blend these two streams, research emerged on automating the classification of software requirement statements into categories easily comprehensible to developers, enabling faster development and delivery; until now this was mostly done manually by software engineers, indeed a tedious job. However, most of that research focused on the classification of non-functional requirements pertaining to intangible features such as security, reliability, and quality. It is a genuinely challenging task to automatically classify functional requirements, those pertaining to how the system will function, especially ones belonging to different and large enterprise systems; this requires exploiting text mining capabilities. This thesis investigates the results of text classification applied to functional software requirements by creating a framework in R and making use of algorithms such as k-nearest neighbors and support vector machines, together with boosting, bagging, maximum entropy, neural networks, and random forests in an ensemble approach. The study was conducted by collecting and visualizing relevant enterprise data that had previously been classified manually and was subsequently used to train the model. Key factors in training included the frequency of terms in the documents and the cleanliness of the data. The model was applied to test data and validated by studying and comparing metrics such as precision, recall, and accuracy.
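The thesis builds its framework in R; purely for illustration, and consistent with the other sketches here, the underlying task can be expressed in Python as TF-IDF features feeding a linear SVM that routes functional requirement statements to categories. The tiny labeled corpus and the module names are hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical functional requirement statements, each labeled with an
# enterprise module (the classification target).
requirements = [
    "The system shall generate a monthly payroll report for each employee",
    "Users shall be able to reset their password via a registered email",
    "The system shall record every inventory withdrawal in the stock ledger",
    "An administrator shall be able to deactivate a user account",
]
labels = ["payroll", "accounts", "inventory", "accounts"]

# Unigram and bigram term frequencies, as in the "frequency of terms" factor
# the abstract highlights, feeding a linear support vector classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(requirements, labels)

print(clf.predict(["The system shall email a password reset link to the user"]))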