
    Optimized Naïve Bayesian Algorithm for Efficient Performance

    The naïve Bayesian algorithm is a data mining algorithm that describes relationships between data objects using a probabilistic method. Classification with the Bayesian algorithm is usually done by finding the class that has the highest probability value. Data mining is a popular research area that consists of algorithm development and pattern extraction from databases using different algorithms. Classification is one of the major tasks of data mining, aimed at building a model (classifier) that can be used to predict unknown class labels. There are many classification algorithms, such as decision tree classifiers, neural networks, rule induction and naïve Bayesian. This paper focuses on the naïve Bayesian algorithm, a classical algorithm for classifying categorical data, which easily converges to local optima. The Particle Swarm Optimization (PSO) algorithm has gained recognition in many fields of human endeavour and has been applied to enhance efficiency and accuracy in different problem domains. This paper proposes an optimized naïve Bayesian classifier that uses particle swarm optimization to overcome the problem of premature convergence and to improve the efficiency of the naïve Bayesian algorithm. The classification results from the optimized naïve Bayesian classifier showed better performance when compared with the traditional algorithm. Keywords: Data Mining, Classification, Particle Swarm Optimization, Naïve Bayesian
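
    To make the combination concrete, the following is a minimal, hypothetical Python sketch of a categorical naïve Bayes classifier whose per-feature weights are tuned by a basic particle swarm. The weighted log-likelihood formulation, the PSO coefficients, and the use of training accuracy as the fitness function are illustrative assumptions, not the paper's actual method.

    # Hypothetical sketch: categorical naive Bayes with per-feature weights w_j,
    # where a basic PSO searches for weights that maximize classification accuracy.
    # Illustration of the general idea only, not the paper's algorithm.
    import numpy as np

    class WeightedNaiveBayes:
        def fit(self, X, y):
            self.classes = np.unique(y)
            self.n_features = X.shape[1]
            self.priors = {c: np.mean(y == c) for c in self.classes}
            self.cond = {}
            for c in self.classes:
                Xc = X[y == c]
                for j in range(self.n_features):
                    vals, counts = np.unique(Xc[:, j], return_counts=True)
                    total = len(Xc) + len(np.unique(X[:, j]))   # Laplace smoothing
                    self.cond[(c, j)] = {v: (n + 1) / total for v, n in zip(vals, counts)}
            return self

        def predict(self, X, w=None):
            w = np.ones(self.n_features) if w is None else w
            preds = []
            for x in X:
                scores = {}
                for c in self.classes:
                    logp = np.log(self.priors[c])
                    for j, v in enumerate(x):
                        p = self.cond[(c, j)].get(v, 1e-6)      # fallback for unseen values
                        logp += w[j] * np.log(p)                # weighted log-likelihood
                    scores[c] = logp
                preds.append(max(scores, key=scores.get))
            return np.array(preds)

    def pso_weights(model, X, y, n_particles=10, n_iter=30, seed=0):
        """Basic particle swarm over feature weights in [0, 2]."""
        rng = np.random.default_rng(seed)
        d = model.n_features
        fitness = lambda w: np.mean(model.predict(X, w) == y)   # ideally held-out data
        pos = rng.uniform(0, 2, (n_particles, d))
        vel = np.zeros((n_particles, d))
        pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[np.argmax(pbest_fit)].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, d))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0, 2)
            fit = np.array([fitness(p) for p in pos])
            better = fit > pbest_fit
            pbest[better], pbest_fit[better] = pos[better], fit[better]
            gbest = pbest[np.argmax(pbest_fit)].copy()
        return gbest

    # Example usage on a tiny categorical toy dataset:
    X = np.array([["sunny", "hot"], ["rainy", "cool"], ["sunny", "cool"], ["rainy", "hot"]])
    y = np.array(["no", "yes", "yes", "no"])
    nb = WeightedNaiveBayes().fit(X, y)
    print(nb.predict(X, pso_weights(nb, X, y)))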

    Forecasting day-ahead electricity prices in Europe: the importance of considering market integration

    Motivated by the increasing integration among electricity markets, in this paper we propose two different methods to incorporate market integration in electricity price forecasting and to improve the predictive performance. First, we propose a deep neural network that considers features from connected markets to improve the predictive accuracy in a local market. To measure the importance of these features, we propose a novel feature selection algorithm that, by using Bayesian optimization and functional analysis of variance, evaluates the effect of the features on the algorithm performance. In addition, using market integration, we propose a second model that, by simultaneously predicting prices from two markets, improves the forecasting accuracy even further. As a case study, we consider the electricity market in Belgium and the improvements in forecasting accuracy when using various French electricity features. We show that the two proposed models lead to improvements that are statistically significant. In particular, due to market integration, the predictive accuracy is improved from 15.7% to 12.5% sMAPE (symmetric mean absolute percentage error). In addition, we show that the proposed feature selection algorithm is able to perform a correct assessment, i.e., to discard the irrelevant features.
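
    For reference, the short sketch below computes the sMAPE figure quoted above, using one common definition; the authors' normalization may differ slightly, and the price numbers are purely illustrative.

    # sMAPE, the error metric quoted above (one common definition; the paper's
    # normalization may differ). Prices and forecasts are made-up numbers.
    import numpy as np

    def smape(actual, forecast):
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        denom = (np.abs(actual) + np.abs(forecast)) / 2.0
        return 100.0 * np.mean(np.abs(forecast - actual) / denom)

    day_ahead_prices = [45.2, 50.1, 48.7, 52.3]     # EUR/MWh, illustrative
    forecasts        = [44.0, 51.5, 47.9, 55.0]
    print(f"sMAPE: {smape(day_ahead_prices, forecasts):.2f}%")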

    BOCK: Bayesian Optimization with Cylindrical Kernels

    A major challenge in Bayesian optimization is the boundary issue (Swersky, 2017), where an algorithm spends too many evaluations near the boundary of its search space. In this paper, we propose BOCK, Bayesian Optimization with Cylindrical Kernels, whose basic idea is to transform the ball geometry of the search space using a cylindrical transformation. Because of the transformed geometry, the Gaussian-process-based surrogate model spends less budget searching near the boundary, while concentrating its efforts relatively more near the center of the search region, where we expect the solution to be located. We evaluate BOCK extensively, showing that it is not only more accurate and efficient, but also scales successfully to problems with a dimensionality as high as 500. We show that the better accuracy and scalability of BOCK even allow optimizing modestly sized neural network layers, as well as neural network hyperparameters. Comment: 10 pages, 5 figures, 5 tables, 1 algorithm
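
    The rough sketch below illustrates the cylindrical idea only: a point in the ball is re-expressed as a radius plus a unit direction, and the kernel factorizes over the two components. The specific radius and angular kernels used here are simplifying assumptions, not the kernels proposed in BOCK.

    # Rough sketch of the cylindrical transformation idea: a point x in the ball
    # becomes (radius, unit direction) and the GP kernel factorizes into a 1-D
    # radius kernel times an angular kernel. The kernels below are simple
    # placeholders, not the ones used in BOCK.
    import numpy as np

    def to_cylindrical(x, eps=1e-12):
        r = np.linalg.norm(x)
        return r, x / (r + eps)

    def cylindrical_kernel(x1, x2, len_r=0.3):
        r1, a1 = to_cylindrical(x1)
        r2, a2 = to_cylindrical(x2)
        k_radius = np.exp(-0.5 * ((r1 - r2) / len_r) ** 2)   # RBF on the radius
        k_angle = 0.5 * (1.0 + np.dot(a1, a2))               # similarity of directions
        return k_radius * k_angle

    x1 = np.array([0.10, 0.20, -0.05])
    x2 = np.array([0.40, -0.10, 0.30])
    print(cylindrical_kernel(x1, x2))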

    HMBO-LDC: A Hybrid Model Employing Reinforcement Learning with Bayesian Optimization for Long Document Classification

    With the emergence of distributed computing platforms and the cloud big-data ecosystem, there has been increased growth in textual documents stored in cloud infrastructure, and most of these documents happen to be lengthy. Automatic classification of such documents is made possible with deep learning models. However, deep learning models like CNNs and their variants have many hyperparameters that must be optimized in order to improve classification performance. Existing optimization methods based on random search are found to perform suboptimally when compared with Bayesian Optimization (BO). However, BO has issues pertaining to the choice of covariance function, time consumption and support for multi-core parallelism. To address these limitations, we propose an algorithm named Enhanced Bayesian Optimization (EBO) designed to improve hyperparameter tuning. We also propose an algorithm known as Hybrid Model with Bayesian Optimization for Long Document Classification (HMBO-LDC). The latter invokes the former in order to improve parameter optimization of the proposed hybrid model prior to performing long document classification. HMBO-LDC is evaluated and compared against existing models such as the CNN feature aggregation method, CNN with LSTM and CNN with a recurrent attention model. Experimental results reveal that HMBO-LDC outperforms the other methods with the highest classification accuracy of 98.76%.
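
    Since the abstract does not spell out EBO, the sketch below shows only a generic Gaussian-process Bayesian optimization loop with an expected-improvement acquisition over two hypothetical CNN hyperparameters (log learning rate and filter count); the objective function is a stand-in for "train the long-document CNN and return its validation error".

    # Bare-bones Bayesian optimization loop in the spirit described above
    # (generic GP + expected improvement, not the paper's EBO algorithm).
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def objective(params):                     # stand-in for "train CNN, return error"
        log_lr, n_filters = params
        return (log_lr + 3.0) ** 2 + 0.001 * (n_filters - 128) ** 2

    bounds = np.array([[-5.0, -1.0],           # log10 learning rate
                       [32.0, 256.0]])         # number of convolutional filters

    rng = np.random.default_rng(0)
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))   # initial random designs
    y = np.array([objective(x) for x in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(20):
        gp.fit(X, y)
        # Expected improvement over a random candidate pool (simplest acquisition).
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(500, 2))
        mu, sigma = gp.predict(cand, return_std=True)
        best = y.min()
        z = (best - mu) / np.maximum(sigma, 1e-9)
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))

    print("best hyperparameters:", X[np.argmin(y)], "objective:", y.min())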

    A Semi-parametric Technique for the Quantitative Analysis of Dynamic Contrast-enhanced MR Images Based on Bayesian P-splines

    Dynamic Contrast-enhanced Magnetic Resonance Imaging (DCE-MRI) is an important tool for detecting subtle kinetic changes in cancerous tissue. Quantitative analysis of DCE-MRI typically involves the convolution of an arterial input function (AIF) with a nonlinear pharmacokinetic model of the contrast agent concentration. Parameters of the kinetic model are biologically meaningful, but the optimization of the non-linear model has significant computational issues. In practice, convergence of the optimization algorithm is not guaranteed and the accuracy of the model fitting may be compromised. To overcome these problems, this paper proposes a semi-parametric penalized spline smoothing approach, in which the AIF is convolved with a set of B-splines to produce a design matrix, using locally adaptive smoothing parameters based on Bayesian penalized spline models (P-splines). It is shown that kinetic parameter estimates can be obtained from the resulting deconvolved response function, which also includes the onset of contrast enhancement. Detailed validation of the method, with both simulated and in vivo data, is provided.
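
    A generic, non-Bayesian illustration of the design-matrix construction described above is sketched below: each cubic B-spline basis function is convolved with a toy AIF and the response is fit by penalized least squares with a second-order difference penalty. The AIF shape, knot placement and fixed smoothing parameter are placeholder assumptions; the paper's locally adaptive Bayesian P-spline machinery is not reproduced.

    # Generic P-spline deconvolution sketch (not the authors' Bayesian model).
    import numpy as np
    from scipy.interpolate import BSpline

    t = np.linspace(0.0, 5.0, 100)                    # acquisition times (minutes)
    aif = (t ** 2) * np.exp(-2.0 * t)                 # toy arterial input function

    k, n_basis = 3, 12                                # cubic B-splines
    knots = np.concatenate([[0.0] * k, np.linspace(0.0, 5.0, n_basis - k + 1), [5.0] * k])
    basis = BSpline(knots, np.identity(n_basis), k)(t)        # shape (len(t), n_basis)

    dt = t[1] - t[0]
    design = np.column_stack([np.convolve(aif, basis[:, j])[:len(t)] * dt
                              for j in range(n_basis)])       # AIF convolved with basis

    rng = np.random.default_rng(1)
    y = design @ rng.normal(size=n_basis) + 0.01 * rng.normal(size=len(t))  # toy signal

    # Penalized least squares: minimize ||y - X c||^2 + lam * ||D2 c||^2
    D2 = np.diff(np.identity(n_basis), n=2, axis=0)
    lam = 1.0
    coef = np.linalg.solve(design.T @ design + lam * (D2.T @ D2), design.T @ y)
    response = basis @ coef                           # estimated (deconvolved) response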

    Comparison of CNN Classification Model using Machine Learning with Bayesian Optimizer

    One of the best-known and most frequently used areas of deep learning in image processing is the Convolutional Neural Network (CNN), with architectural designs such as InceptionV3, DenseNet201, ResNet50 and MobileNet used in image classification and pattern recognition. The CNN extracts features from the image according to the designed architecture and performs classification through the fully connected layer, which executes the Machine Learning (ML) algorithm tasks. Commonly used ML algorithms include Naive Bayes (NB), k-Nearest Neighbor (k-NN), Support Vector Machine (SVM) and Decision Tree (DT). This research was motivated by the need for AI model development and a system that can diagnose COVID-19 quickly and accurately. The aim was to combine the aforementioned CNN models with ML algorithms and compare the models' accuracy before and after Bayesian optimization, using a total of 2,000 chest X-ray (CXR) lung images. The data were split into 80% for training and 20% for testing, and the CNN-extracted features were assigned to four different ML models for classification, with Bayesian optimization used to obtain the best accuracy. The best classification was generated by the MobileNetV2-SVM structure with an accuracy of 93%; the accuracy obtained using the SVM algorithm is therefore higher than that of the other three ML algorithms. Doi: 10.28991/HIJ-2023-04-03-05
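
    A compressed sketch of that pipeline (a frozen MobileNetV2 backbone as feature extractor, an 80/20 split, and an SVM classifier) is given below. The random arrays stand in for the 2,000 CXR images, and the fixed SVM parameters mark where the Bayesian optimization step would go.

    # Sketch of the CNN-feature + SVM pipeline described above; CXR data and the
    # Bayesian hyperparameter search over the SVM are stubbed out.
    import numpy as np
    import tensorflow as tf
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    # Stand-in for the CXR images (normally loaded and resized to 224x224x3);
    # only 200 random samples here to keep the sketch fast.
    images = np.random.rand(200, 224, 224, 3).astype("float32")
    labels = np.random.randint(0, 2, size=200)          # 0 = normal, 1 = COVID-19

    # Frozen MobileNetV2 backbone produces a 1280-D feature vector per image.
    backbone = tf.keras.applications.MobileNetV2(
        weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3))
    features = backbone.predict(
        tf.keras.applications.mobilenet_v2.preprocess_input(images * 255.0), verbose=0)

    # 80/20 train/test split, then an SVM on the extracted features.
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, random_state=42, stratify=labels)
    clf = SVC(C=1.0, kernel="rbf")   # C and gamma are what Bayesian optimization would tune
    clf.fit(X_tr, y_tr)
    print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))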