41 research outputs found

    Software Reliability Prediction Using Support Vector Regression and Model Mining

    Get PDF
    Software reliability is defined as the probability of failure-free software operation over a given period of time. One way to model software reliability is to use software failure data to predict future failures. A common architecture for such models uses only the last few consecutive observations to predict the next value, even though software failures may also depend on earlier data, as shown in prior work that applied model mining to select which past data to use for prediction. In this research, we propose Binary Particle Swarm Optimization (BPSO) as a model-mining method for software reliability prediction with Support Vector Regression (SVR) as the predictor. In the structure of the BPSO particle, each past observation is marked with a "1" if it is used or a "0" if it is not. The proposed method is tested on six data sets from real software projects, called FC1, FC2, FC3, TBF1, TBF2, and TBF3. The accuracy of the proposed model is compared with that of a predictor without model mining by computing the Mean Squared Error (MSE) and the Average Relative Prediction Error (AE). The proposed SVR-BPSO method proves able to predict more accurately, especially on the FC1, FC2, and FC3 data, which are more stable in nature. The TBF data sets prove unsuitable for the proposed method, yielding poor predictions on TBF1, TBF2, and TBF3; this likely stems from their nature differing from that of the FC data, since the time between failures in these data does not depend on a particular failure order. The method used to choose the SVR parameters also affects prediction accuracy, which leaves room for improvement in future research. Overall, the proposed method predicts software reliability well, and model mining proves to be of real benefit for producing more accurate predictions from software failure data.
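    As a rough illustration of the SVR-BPSO idea described above, the hypothetical Python sketch below lets a binary mask over the previous observations decide which lags feed a Support Vector Regression predictor, and a minimal binary PSO searches for the mask with the lowest prediction error. The window size, swarm settings, SVR parameters, and the synthetic failure series are illustrative assumptions rather than values from the paper, and the AE metric is omitted for brevity.

        # Hedged sketch (not the authors' code): BPSO selects which past
        # observations feed an SVR predictor, mirroring the "1"/"0" particle
        # encoding described in the abstract. All settings are illustrative.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.metrics import mean_squared_error

        def build_dataset(series, mask):
            """Build (X, y) pairs; the binary mask picks which of the
            previous len(mask) observations are used as SVR inputs."""
            lags = len(mask)
            idx = [i for i, bit in enumerate(mask) if bit == 1]
            X, y = [], []
            for t in range(lags, len(series)):
                window = series[t - lags:t]
                X.append([window[i] for i in idx])
                y.append(series[t])
            return np.array(X), np.array(y)

        def fitness(series, mask):
            """Fitness of a mask = held-out MSE of an SVR on the selected lags."""
            if not any(mask):
                return np.inf
            X, y = build_dataset(series, mask)
            split = int(0.8 * len(X))
            model = SVR(C=10.0, epsilon=0.01)   # illustrative parameters
            model.fit(X[:split], y[:split])
            return mean_squared_error(y[split:], model.predict(X[split:]))

        def bpso(series, lags=5, particles=10, iters=30, seed=0):
            """Minimal binary PSO: velocities pass through a sigmoid giving
            the probability that each lag bit is set to 1."""
            rng = np.random.default_rng(seed)
            pos = rng.integers(0, 2, size=(particles, lags))
            vel = rng.normal(0, 1, size=(particles, lags))
            pbest = pos.copy()
            pbest_fit = np.array([fitness(series, p) for p in pos])
            gbest = pbest[np.argmin(pbest_fit)].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, particles, lags))
                vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
                pos = (rng.random((particles, lags)) < 1 / (1 + np.exp(-vel))).astype(int)
                fit = np.array([fitness(series, p) for p in pos])
                improved = fit < pbest_fit
                pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
                gbest = pbest[np.argmin(pbest_fit)].copy()
            return gbest

        # Illustrative cumulative-failure series standing in for FC-style data.
        series = np.cumsum(np.abs(np.random.default_rng(1).normal(3, 1, 60)))
        print("selected lag mask:", bpso(series))

    In this scheme a mask such as [1, 0, 0, 1, 1] means the predictor uses the first, fourth, and fifth of the last five observations, which is the selection role that model mining plays in the abstract above.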

    Towards Comprehensive Foundations of Computational Intelligence

    Full text link
    Although computational intelligence (CI) covers a vast variety of different methods, it still lacks an integrative theory. Several proposals for CI foundations are discussed: computing and cognition as compression, meta-learning as search in the space of data models, (dis)similarity-based methods providing a framework for such meta-learning, and a more general approach based on chains of transformations. Many useful transformations that extract information from features are discussed. Heterogeneous adaptive systems are presented as a particular example of transformation-based systems, and the goal of learning is redefined to facilitate the creation of simpler data models. The need to understand data structures leads to techniques for logical and prototype-based rule extraction and to the generation of multiple alternative models, while the need to increase the predictive power of adaptive models leads to committees of competent models. Learning from partial observations is a natural extension towards reasoning based on perceptions, and an approach to intuitive solving of such problems is presented. Throughout the paper neurocognitive inspirations are frequently used and are especially important in modeling the higher cognitive functions. Promising directions such as liquid and laminar computing are identified and many open problems are presented.
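    One concrete way to read "meta-learning as search in the space of data models" together with "chains of transformations" is as a search over composed feature transformations followed by a simple (dis)similarity-based learner. The hypothetical Python sketch below illustrates that reading with scikit-learn pipelines; the candidate transformations, the nearest-neighbour learner, and the iris data set are illustrative assumptions, not material from the paper.

        # Hedged illustration: enumerate short transformation chains and keep
        # the chain whose cross-validated accuracy is best, a toy stand-in for
        # meta-learning as search in the space of data models.
        from itertools import product
        from sklearn.datasets import load_iris
        from sklearn.decomposition import PCA
        from sklearn.kernel_approximation import RBFSampler
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        X, y = load_iris(return_X_y=True)

        # Candidate building blocks for the transformation chain.
        scalers = [None, StandardScaler()]
        extractors = [None, PCA(n_components=2), RBFSampler(gamma=1.0, random_state=0)]

        best_score, best_chain = -1.0, None
        for scaler, extractor in product(scalers, extractors):
            steps = [s for s in (scaler, extractor) if s is not None]
            # A nearest-neighbour classifier as the (dis)similarity-based learner.
            chain = make_pipeline(*steps, KNeighborsClassifier())
            score = cross_val_score(chain, X, y, cv=5).mean()
            if score > best_score:
                best_score, best_chain = score, chain

        print(f"best chain: {best_chain}  accuracy: {best_score:.3f}")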

    Efficient Learning Machines

    Get PDF
    Computer science

    Nonlinear Parametric and Neural Network Modelling for Medical Image Classification

    Get PDF
    System identification and artificial neural networks (ANN) are families of algorithms, used in systems engineering and machine learning respectively, that rely on structure detection and learning strategies to build models of complex systems from input-output data. These models play an essential role in science and engineering because they fill the gap in cases where we know the input-output behaviour of a system but lack a mathematical model to understand and predict its future changes, or even to prevent threats. In this context, nonlinear approximation of systems is now very popular since it better describes complex instances. Digital image processing, in turn, is an area of systems engineering that is expanding the level of analysis in a variety of real-life problems while becoming more attractive and affordable over time. Medicine has made the most of it by supporting important human decision-making processes through computer-aided diagnosis (CAD) systems. This thesis presents three different frameworks for breast cancer detection, with approaches ranging from nonlinear system identification, through nonlinear system identification coupled with simple neural networks, to multilayer neural networks. In particular, the nonlinear system identification approaches, namely the Nonlinear AutoRegressive with eXogenous inputs (NARX) model and the MultiScales Radial Basis Function (MSRBF) neural networks, appear for the first time in image processing. Alongside these contributions, the Multilayer-Fuzzy Extreme Learning Machine (ML-FELM) neural network is presented for faster training and more accurate image classification. A central research aim is to take advantage of nonlinear system identification and multilayer neural networks to enhance the feature extraction process and thereby strengthen classification in CAD systems. In the multilayer neural networks, extraction is carried out through stacked autoencoders, a bottleneck network architecture that promotes a data transformation between layers. In the nonlinear system identification approaches, the goal is to add flexible models capable of capturing distinctive features of digital images that simpler approaches may fall short of recognising. Detecting nonlinearities in digital images is complementary to linear modelling, since the aim is to extract features in greater depth, capturing both linear and nonlinear elements. This matters because, according to previous work cited in the first chapter, not all spatial relationships in digital images can be explained appropriately with linear dependencies. Experimental results show that the methodologies based on system identification produced reliable image models with customised mathematical structure. The models included nonlinearities in different proportions, depending on the case under examination, and the information about nonlinearity and model structure was used as part of the whole image model. In some instances, the models from different clinical classes in the breast cancer detection problem presented a particular structure; for example, NARX models of the malignant class showed a higher nonlinearity percentage and depended more on exogenous inputs than those of other classes. Regarding classification performance, comparisons of the three new CAD systems with existing methods gave variable results. The NARX model performed better in three cases but was surpassed in two, although this comparison must be taken with caution since different databases were used. The MSRBF model was better in 5 out of 6 cases and had superior specificity in all instances, surpassing the closest competing model by 3.5% on that measure. The ML-FELM model was the best in 6 out of 6 cases, although it was outperformed in accuracy by 0.6% in one case and in specificity by 0.22% in another.
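    As a rough, hypothetical illustration of the stacked-autoencoder feature extraction described above, the Python sketch below trains two bottleneck layers and feeds the resulting codes to a plain classifier. The digits data set, layer sizes, and logistic-regression classifier are illustrative stand-ins, not the mammography data or the CAD classifiers used in the thesis.

        # Hedged sketch: each autoencoder layer is trained to reconstruct its
        # own input, and only its hidden (bottleneck) activations are passed
        # on, illustrating the layer-by-layer data transformation described above.
        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        def train_autoencoder(X, hidden_units, seed=0):
            """Train one autoencoder layer that reconstructs its own input."""
            ae = MLPRegressor(hidden_layer_sizes=(hidden_units,), activation="relu",
                              max_iter=1000, random_state=seed)
            ae.fit(X, X)
            return ae

        def bottleneck(ae, X):
            """Forward pass up to the hidden bottleneck layer: ReLU(X W1 + b1)."""
            return np.maximum(0.0, X @ ae.coefs_[0] + ae.intercepts_[0])

        X, y = load_digits(return_X_y=True)
        X = X / 16.0                                  # scale pixel values to [0, 1]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Stack two autoencoder layers with shrinking bottlenecks: 64 -> 32 -> 16.
        codes_tr, codes_te = X_tr, X_te
        for units in (32, 16):
            ae = train_autoencoder(codes_tr, units)
            codes_tr, codes_te = bottleneck(ae, codes_tr), bottleneck(ae, codes_te)

        # A plain classifier on the learned codes stands in for the CAD classifier.
        clf = LogisticRegression(max_iter=2000).fit(codes_tr, y_tr)
        print(f"test accuracy on stacked-autoencoder features: {clf.score(codes_te, y_te):.3f}")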

    The Shallow and the Deep: A biased introduction to neural networks and old school machine learning

    Get PDF
    The Shallow and the Deep is a collection of lecture notes that offers an accessible introduction to neural networks and machine learning in general. However, it was clear from the beginning that these notes would not be able to cover this rapidly changing and growing field in its entirety. The focus lies on classical machine learning techniques, with a bias towards classification and regression. Other learning paradigms and many recent developments in, for instance, Deep Learning are not addressed or only briefly touched upon. Biehl argues that having a solid knowledge of the foundations of the field is essential, especially for anyone who wants to explore the world of machine learning with an ambition that goes beyond the application of some software package to some data set. Therefore, The Shallow and the Deep places emphasis on fundamental concepts and theoretical background. This also involves delving into the history and pre-history of neural networks, where the foundations for most of the recent developments were laid. These notes aim to demystify machine learning and neural networks without losing the appreciation for their impressive power and versatility.