
    Combining group method of data handling models using artificial bee colony algorithm for time series forecasting

    Get PDF
    Time series forecasting, which uses models to predict future values from historical data, is an important area of forecasting and has gained the attention of researchers from various related fields of study. In line with its popularity, various models have been introduced for producing accurate time series forecasts. However, producing an accurate forecast is not an easy feat, especially when dealing with nonlinear data. In this study, a model for accurate time series forecasting was developed based on the Artificial Bee Colony (ABC) algorithm and Group Method of Data Handling (GMDH) models with different transfer functions, namely polynomial, sigmoid, radial basis function and tangent. First, each GMDH model was used to forecast the time series data; the ABC algorithm then produced a weight for each forecast before aggregating the forecasts into a combined prediction. To evaluate the performance of the developed GMDH-ABC model, tourism arrival data (Singapore and Indonesia) and airline passenger data were processed by the model to produce forecasts. To validate the evaluation, the model was compared against benchmark models: the individual GMDH models, an Artificial Neural Network (ANN) model and a combined GMDH using simple averaging (GMDH-SA). Experimental results showed that the GMDH-ABC model had the highest accuracy of all the models, reducing the Root Mean Square Error (RMSE) of the conventional GMDH model by 15.78% for the Singapore data, 28.2% for the Indonesia data and 30.89% for the airline data. In conclusion, these results demonstrate the reliability of the GMDH-ABC model in time series forecasting and its superiority over the other existing models.
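The weighted-combination step described above can be sketched in a few lines. This is a minimal illustration with invented toy data, not the paper's implementation: a simple random search over the weight simplex stands in for the ABC optimiser, and the three "GMDH variant" forecasts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: an actual series and forecasts from three
# hypothetical GMDH variants (polynomial, sigmoid, RBF).
actual = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])
forecasts = np.array([
    [110.0, 120.0, 130.0, 131.0, 119.0, 137.0],
    [115.0, 116.0, 128.0, 127.0, 124.0, 133.0],
    [108.0, 121.0, 135.0, 130.0, 120.0, 136.0],
])

def rmse(weights):
    combined = weights @ forecasts  # weighted sum of individual forecasts
    return np.sqrt(np.mean((actual - combined) ** 2))

# Start from simple averaging (the GMDH-SA baseline), then let a random
# search over the weight simplex stand in for the ABC optimiser.
best_w = np.ones(len(forecasts)) / len(forecasts)
best_err = rmse(best_w)
for _ in range(5000):
    w = rng.dirichlet(np.ones(len(forecasts)))  # candidate weights, sum to 1
    err = rmse(w)
    if err < best_err:
        best_w, best_err = w, err

print(best_w, best_err)
```

By construction the searched weights can only match or improve on the simple-averaging baseline, which mirrors the paper's finding that GMDH-ABC outperforms GMDH-SA.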

    EEG-based brain-computer interfaces using motor-imagery: techniques and challenges.

    Get PDF
    Electroencephalography (EEG)-based brain-computer interfaces (BCIs), particularly those using motor-imagery (MI) data, have the potential to become groundbreaking technologies in both clinical and entertainment settings. MI data are generated when a subject imagines the movement of a limb. This paper reviews state-of-the-art signal processing techniques for MI EEG-based BCIs, with a particular focus on the feature extraction, feature selection and classification techniques used. It also summarizes the main applications of EEG-based BCIs, particularly those based on MI data, and finally presents a detailed discussion of the most prevalent challenges impeding the development and commercialization of EEG-based BCIs.

    Pathological Brain Detection by a Novel Image Feature—Fractional Fourier Entropy

    Full text link
    Aim: Detecting pathological brain conditions early is a core procedure for patients, so that there is enough time for treatment. Traditional manual detection is cumbersome, expensive, or time-consuming. In this paper, we aim to offer a system that can automatically identify pathological brain images. Method: We propose a novel image feature, viz., the Fractional Fourier Entropy (FRFE), which is based on the combination of the Fractional Fourier Transform (FRFT) and Shannon entropy. Welch’s t-test (WTT) and the Mahalanobis distance (MD) were then harnessed to select distinguishing features. Finally, we introduced an advanced classifier: the twin support vector machine (TSVM). Results: A 10 × K-fold stratified cross-validation test showed that the proposed “FRFE + WTT + TSVM” method yielded accuracies of 100.00%, 100.00%, and 99.57% on datasets that contained 66, 160, and 255 brain images, respectively. Conclusions: The proposed “FRFE + WTT + TSVM” method is superior to 20 state-of-the-art methods.
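The entropy half of the FRFE feature can be illustrated briefly. The sketch below is an assumption-laden stand-in, not the paper's code: it uses the ordinary FFT (the FRFT at order 1) on an invented 1-D signal, and a common histogram-based estimate of Shannon entropy; the full FRFE would repeat this over several fractional orders of the FRFT on image data.

```python
import numpy as np

def shannon_entropy(values, bins=32):
    """Shannon entropy (in bits) of a histogram of coefficient magnitudes."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins (0 * log 0 is taken as 0)
    return -np.sum(p * np.log2(p))

# Hypothetical 1-D signal standing in for an image row; the standard FFT
# (FRFT of order 1) illustrates the transform-then-entropy idea.
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
spectrum = np.abs(np.fft.fft(signal))
entropy = shannon_entropy(spectrum)
print(entropy)
```

One such entropy value per fractional order gives a very compact feature vector, which is why inexpensive selectors (WTT, MD) and a light classifier suffice downstream.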

    Challenges and opportunities of deep learning models for machinery fault detection and diagnosis: a review

    Get PDF
    In the age of Industry 4.0, deep learning has attracted increasing interest across various research applications. In recent years, deep learning models have been extensively implemented in machinery fault detection and diagnosis (FDD) systems. The deep architecture's automated feature learning offers great potential to solve the problems of traditional fault detection and diagnosis (TFDD) systems, which rely on manual feature selection, requiring prior knowledge of the data and considerable time. However, the high performance of deep learning comes with challenges and costs. This paper presents a review of deep learning challenges related to machinery fault detection and diagnosis systems. The potential for future work on deep learning implementation in FDD systems is briefly discussed.

    Improved Texture Feature Extraction and Selection Methods for Image Classification Applications

    Get PDF
    Classification is an important process in image processing applications, and image texture is the preferred source of information in image classification, especially in the context of real-world applications. However, the output of a typical texture feature descriptor often does not represent a wide range of different texture characteristics. Many research studies have contributed different descriptors to improve the extraction of features from texture. Among the various descriptors, the Local Binary Patterns (LBP) descriptor produces powerful information from texture by a simple comparison between a central pixel and its neighbouring pixels. In addition, to obtain sufficient information from texture, many research studies have proposed solutions based on combining complementary features. Although feature-level fusion produces satisfactory results for certain applications, it suffers from an inherent and well-known problem called “the curse of dimensionality”. Feature selection deals with this problem effectively by reducing the feature dimensions and selecting only the relevant features. However, large feature spaces often make the process of seeking optimum features complicated. This research introduces improved feature extraction methods by adopting a new approach based on new texture descriptors called Local Zone Binary Patterns (LZBP) and Local Multiple Patterns (LMP), which are both based on the LBP descriptor. The produced feature descriptors are combined with other complementary features to yield a unified vector. Furthermore, the combined features are processed by a new hybrid selection approach based on the Artificial Bee Colony and Neighbourhood Rough Set (ABC-NRS) to efficiently reduce the dimensionality of the features resulting from the feature fusion stage.
Comprehensive experimental testing and evaluation are carried out for the different components of the proposed approach, and the novelty and limitations of the proposed approach are demonstrated. The results of the evaluation prove the ability of the LZBP and LMP texture descriptors to improve feature extraction compared to the conventional LBP descriptor. In addition, the use of the hybrid ABC-NRS selection method on the proposed combined features is shown to improve classification performance while achieving the shortest feature length. The overall proposed approach is demonstrated to provide improved texture-based image classification performance compared to previous methods on benchmarks based on outdoor scene images. These research contributions thus represent significant advances in the field of texture-based image classification.
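The basic LBP comparison that LZBP and LMP build on is easy to show concretely. This is a minimal sketch of the classic 3×3 LBP code for a single pixel; the bit ordering chosen here (clockwise from the top-left) is one common convention, and the sample patch is invented.

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 LBP: compare the 8 neighbours to the centre pixel and
    pack the resulting bits, clockwise from the top-left, into one byte."""
    center = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= center:      # neighbour at least as bright as the centre -> 1
            code |= 1 << bit
    return code

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 6, 8]])
print(lbp_code(patch))   # one 8-bit texture code for this pixel
```

A histogram of these codes over all pixels forms the texture feature vector; descriptors such as LZBP and LMP modify how the neighbourhood comparisons are defined while keeping this overall scheme.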

    Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

    Get PDF
    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of irregularly shaped food products based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some volume measurement methods based on it have low accuracy. Another approach measures the volume of an object using the Monte Carlo method, which works with random points: it only requires knowing whether each random point falls inside or outside the object, and does not require a 3D reconstruction. This paper proposes volume measurement for irregularly shaped food products using a computer vision system, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of each food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was then performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
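The core Monte Carlo idea above can be sketched with a shape whose volume is known in closed form. In the paper's setting the inside/outside test for each random point comes from the multi-view binary images; here a unit sphere stands in for the food product so the estimate can be checked against 4π/3.

```python
import numpy as np

rng = np.random.default_rng(42)

# Estimate the volume of a unit sphere inside the bounding cube [-1, 1]^3.
# The sphere's closed-form inside/outside test stands in for the
# image-based test used in the paper.
n = 200_000
points = rng.uniform(-1.0, 1.0, size=(n, 3))
inside = np.sum(points ** 2, axis=1) <= 1.0   # does the point fall inside the object?
cube_volume = 2.0 ** 3
estimate = cube_volume * inside.mean()        # fraction inside x bounding volume
print(estimate)   # ~ 4*pi/3 ~ 4.189
```

The estimate's error shrinks as 1/√n, which is why the paper's heuristic adjustment matters: it sharpens accuracy without simply throwing more random points at the problem.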