
    Lettuce life stage classification from texture attributes using machine learning estimators and feature selection processes

    Classification of lettuce life or growth stages is an effective tool for measuring the performance of an aquaponics system. It reflects the balance of water nutrients, adequate temperature and lighting, other environmental factors, and the system’s productivity in sustaining cultivars. This paper proposes a classification of the life stages of lettuce planted in an aquaponics system. The classification was done using texture features of the leaves derived from machine vision algorithms. The attributes underwent three feature selection processes, namely Univariate Selection (US), Recursive Feature Elimination (RFE), and Feature Importance (FI), to determine the four most significant features from the original eight attributes. The selected features were used to train four estimators: Decision Tree Classifier (DTC), Gaussian Naïve Bayes (GNB), Stochastic Gradient Descent (SGD), and Linear Discriminant Analysis (LDA). The models trained using DTC and SGD were then optimized, as they have hyperparameters available for tuning. A comparative analysis among the Machine Learning (ML) algorithms was conducted to identify the best-performing model for the given application. The best features were derived from US and FI, which selected the same top four features, used with the DTC estimator optimized with a max depth of 5, criterion set to 'gini', and splitter set to 'best'. The accuracy obtained from cross-validation was 87.92%. Considering consistency with hold-out validation, however, LDA outperformed the optimized DTC despite its lower accuracy of 86.67%, as it generalized better to the test data when classifying lettuce growth stages. © 2020, Universitas Ahmad Dahlan. All rights reserved.
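    As a rough illustration of the workflow described above, the sketch below wires up the three feature selection processes and the tuned DTC/LDA comparison in scikit-learn. The CSV file name, column names, and split settings are assumptions for illustration, not the authors' dataset or exact configuration.

    import pandas as pd
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.feature_selection import RFE, SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical table: eight leaf texture attributes plus a growth-stage label.
    df = pd.read_csv("lettuce_texture_features.csv")  # assumed file
    X, y = df.drop(columns=["stage"]), df["stage"]

    # Univariate Selection: keep the four highest-scoring attributes.
    us = SelectKBest(score_func=f_classif, k=4).fit(X, y)
    # Feature Importance: rank attributes by a fitted tree's importances.
    fi = DecisionTreeClassifier(random_state=0).fit(X, y).feature_importances_
    # Recursive Feature Elimination down to four attributes.
    rfe = RFE(DecisionTreeClassifier(random_state=0), n_features_to_select=4).fit(X, y)
    print("RFE selection mask:", rfe.support_, "| tree importances:", fi)

    # The abstract reports US and FI agreeing on the same top four features.
    X4 = us.transform(X)

    # DTC with the reported hyperparameters, scored by cross-validation.
    dtc = DecisionTreeClassifier(max_depth=5, criterion="gini", splitter="best")
    print("DTC cross-validation accuracy:", cross_val_score(dtc, X4, y, cv=10).mean())

    # LDA compared on a hold-out split for consistency.
    X_tr, X_te, y_tr, y_te = train_test_split(X4, y, test_size=0.2, random_state=0)
    lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
    print("LDA hold-out accuracy:", lda.score(X_te, y_te))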

    Lettuce growth stage identification based on phytomorphological variations using coupled color superpixels and multifold watershed transformation

    Get PDF
    Identifying a plant's developmental growth stages from the seed leaf onward is crucial for a deep understanding of plant science and cultivation management. An efficient vision-based system for plant growth monitoring requires optimal segmentation and classification algorithms. This study presents coupled color-based superpixels and multifold watershed transformation for segmenting lettuce plants from the complicated background of images taken from a smart farm aquaponic system, together with machine learning models for classifying lettuce growth as vegetative, head development, or for harvest based on the phytomorphological profile. Morphological computations were employed by extracting the number of leaves, biomass area and perimeter, convex area, convex hull area and perimeter, major and minor axis lengths of the dominant leaf, and length of the plant skeleton. Phytomorphological variations of biomass compactness, convexity, solidity, plant skeleton, and perimeter ratio were included as inputs to the classification network. The Lab color space information extracted from the training image set was overlaid with 1,000 superpixel regions generated by K-means clustering on each pixel class. A six-level watershed transformation with distance transformation and minima imposition was employed to separate the lettuce plant from other pixel objects. The accuracies of correctly classifying the vegetative, head development, and harvest growth stages are 88.89%, 86.67%, and 79.63%, respectively. The experiment shows that the test accuracies of the machine learning models were 60% for LDA, 85% for ANN, and 88.33% for QSVM. Comparative analysis showed that QSVM bested the optimized LDA and ANN in classifying lettuce growth stages. This research developed a seamless model for segmenting vegetation pixels and predicting lettuce growth stage, which is essential for plant computational phenotyping and the optimization of agricultural practice.
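    A minimal sketch of the segmentation front end described above is given below, using scikit-image: SLIC superpixels (k-means over Lab colour plus position) followed by a marker-based watershed on a distance transform. The input file name, the greenness threshold, and the marker settings are assumptions, and the simple distance-transform watershed stands in for the paper's multifold transformation with minima imposition.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage import color, feature, filters, io, segmentation

    rgb = io.imread("lettuce_bed.jpg")  # assumed image from the aquaponic setup
    lab = color.rgb2lab(rgb)

    # 1,000 colour-based superpixels; SLIC performs k-means over Lab + spatial features.
    superpixels = segmentation.slic(rgb, n_segments=1000, compactness=10, start_label=1)

    # Coarse vegetation mask from the a* channel (green pixels have low a*).
    veg_mask = lab[..., 1] < filters.threshold_otsu(lab[..., 1])

    # Watershed on the distance transform, with local maxima as plant markers.
    distance = ndi.distance_transform_edt(veg_mask)
    peaks = feature.peak_local_max(distance, labels=veg_mask, min_distance=20)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    plants = segmentation.watershed(-distance, markers, mask=veg_mask)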

    Development of a multifunctional farm-to-table business intelligence system for smart aquaponics using computational intelligence

    No full text
    The Philippines has one of the largest populations of internet users in the world, which has led different sectors to shift to digital and online platforms. One area that has experienced tremendous change and development is the business sector: the extensive use of social media has paved the way for marketing goods and services to their intended audiences, making business transactions effective and efficient. To cater to this, e-commerce has been developed: the exchange of products and services through an online platform, with the convenience of having them delivered to the consumer’s doorstep. Predictions show that e-commerce will continue to grow over the following years due to government support, a growing middle class, the steady reduction of poverty, and access to technology. However, this platform and support have not been extended to the agricultural sector and its products. Farmers struggle to sell their harvests, resulting in food wastage and a volatile market. Research shows that this is due to poor Food Supply Chain Management (FSCM), the process of harvesting, storing, and delivering products to consumers. One-third of farm produce is wasted annually, even though food security is one of the most pressing matters today because of the exponential growth of the population. Efforts are being made to increase product yield and quality through smart farming, but they can only do so much if products do not reach the tables of consumers. The technology and resources already exist; this study proposes to integrate e-commerce and smart farming to improve the FSCM through Business Intelligence (BI) models, the Internet of Things (IoT), and open-source mobile applications to address the disjunction between producers and consumers.

    Classification of landcover from combined LiDAR and orthophotos using support vector machine

    No full text
    © 2019 IEEE. The study is based on landcover classification from combined light detection and ranging (LiDAR) data and orthophotos. Five land classes were extracted, namely: barren, built-up, low vegetation, mango, and non-agricultural trees. Support vector machine (SVM) was the algorithm used for the classification. Different LiDAR derivatives and the orthophoto were used as inputs, namely intensity, digital terrain model (DTM), digital surface model (DSM), normalized digital surface model (nDSM), and the RGB bands of the orthophotos. The applied algorithm attained 100% accuracy based on the confusion matrix, which indicates that SVM is a good algorithm for classifying landcover from combined LiDAR and orthophotos, provided the right LiDAR derivatives are used.
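    The sketch below illustrates how such a per-pixel SVM classification can be set up in scikit-learn, with each labelled sample carrying the LiDAR derivatives (intensity, DTM, DSM, nDSM) and the orthophoto R, G, B values. The sample CSV, column names, and SVM hyperparameters are assumptions, not the study's actual configuration.

    import pandas as pd
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    samples = pd.read_csv("landcover_pixel_samples.csv")  # assumed labelled pixel samples
    bands = ["intensity", "dtm", "dsm", "ndsm", "red", "green", "blue"]
    X, y = samples[bands], samples["landclass"]  # barren, built-up, low vegetation, mango, trees

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.3, random_state=0)
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
    svm.fit(X_tr, y_tr)
    print(confusion_matrix(y_te, svm.predict(X_te)))  # accuracy is read off this matrix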

    Utilization of genetic algorithm in classifying Filipino and Korean music through distinct windowing and perceptual features

    No full text
    Classification of songs or music in terms of genre, era, and other categories has become one of the most common yet significant research fields in digital signal processing. Usually, the aim of distinguishing musical patterns is limited to one general type (e.g., American music). The objective of this study is to perceive the differences and similarities between two general categories, namely OPM (Original Pilipino Music), the apparent representative music of the Philippines, and K-POP, a general term for contemporary Korean music and one of the fastest-growing music industries. Through the features acquired from jAudio and the aid of a genetic algorithm model constructed in Python with the TPOT library, this research successfully classifies the music under various settings and desired outputs. © 2019 IEEE.
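    The sketch below shows how the genetic-algorithm pipeline search described above can be run with the TPOT library the study cites, on a table of windowed and perceptual features exported from jAudio. The feature file and the TPOT settings are assumptions for illustration.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from tpot import TPOTClassifier

    features = pd.read_csv("jaudio_features.csv")  # assumed jAudio export
    X = features.drop(columns=["label"])           # windowed + perceptual features
    y = features["label"]                          # "OPM" or "K-POP"

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # TPOT evolves scikit-learn pipelines with a genetic algorithm.
    tpot = TPOTClassifier(generations=10, population_size=50, cv=5, random_state=0, verbosity=2)
    tpot.fit(X_tr, y_tr)
    print("Test accuracy:", tpot.score(X_te, y_te))
    tpot.export("opm_vs_kpop_pipeline.py")         # best evolved pipeline written out as code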

    Detection and classification of public security threats in the Philippines using neural networks

    No full text
    Putting life in jeopardy when in public has always been a concern of Filipinos. While laws are enforced and common practices are taught, these are no more than band-aid solutions to the problem. Immediate detection and classification of common public security threats from CCTV video feeds would be an immense help in protecting Filipinos. In this study, the use of a pre-trained R-CNN Inception V2 model, alongside tools for the other phases such as annotation, training, and testing, is discussed, and the process through which the study attained the goal of the system is highlighted. © 2020 IEEE.
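    A hedged sketch of running such a pre-trained detector over CCTV frames is shown below, in the style of the TensorFlow 1.x Object Detection API (whose exported frozen graphs expose the tensor names used here). The graph path, video source, and confidence threshold are assumptions, not the study's exact setup.

    import cv2
    import numpy as np
    import tensorflow.compat.v1 as tf

    graph_def = tf.GraphDef()
    with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:  # assumed exported model
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")

    with tf.Session(graph=graph) as sess:
        cap = cv2.VideoCapture("cctv_feed.mp4")  # assumed CCTV source
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # OpenCV frames are BGR; the detector expects RGB batches.
            batch = np.expand_dims(frame[..., ::-1], axis=0)
            boxes, scores, classes = sess.run(
                ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
                feed_dict={"image_tensor:0": batch})
            threats = classes[0][scores[0] > 0.6]  # detections above an assumed threshold
        cap.release()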

    Android application for chest x-ray health classification from a CNN deep learning TensorFlow model

    No full text
    © 2020 IEEE. One of the problems in the medical field is incorrect diagnosis, particularly over-diagnosis and under-diagnosis. One of the illnesses currently being researched is pneumonia, and several methodologies are employed to validate its diagnosis. Often, medical experts rely on an x-ray image. In this study, the basis is still x-ray images, with the incorporation of image processing and machine learning. MobileNetV2 is utilized as the convolutional neural network model. The produced frozen graph is imported into Android Studio to build an Android mobile application that serves as a diagnostic tool. The mobile application has high accuracy and is considered reliable based on testing and validation results. This study generally aims to provide a reliable, low-cost aid for medical professionals in diagnosing pneumonia.
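    The sketch below outlines a comparable pipeline in current TensorFlow: MobileNetV2 as the convolutional backbone, fine-tuned on folders of normal and pneumonia x-rays, then exported for a mobile app. The directory layout and training settings are assumptions, and the TensorFlow Lite export stands in for the paper's frozen-graph step.

    import tensorflow as tf

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "chest_xray/train", image_size=(224, 224), batch_size=32)  # assumed folder layout

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # transfer learning: keep ImageNet features frozen

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),      # normal vs pneumonia
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=10)

    # Convert for the Android app (a modern counterpart of embedding a frozen graph).
    tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
    open("xray_classifier.tflite", "wb").write(tflite_model)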

    Faster R-CNN model with momentum optimizer for RBC and WBC variants classification

    No full text
    Since many diseases and infections depend on the count and type of Red Blood Cells (RBCs) and White Blood Cells (WBCs) present in the bloodstream, their detection and classification are necessary and relevant. Based on the existing related literature, ordinary neural networks are usually employed, and RBC types are the main focus of existing research. Hence, after observing these research gaps, a Faster Region-based Convolutional Neural Network (Faster R-CNN) was utilized for this study, focusing not only on RBCs but also on the variants of WBCs. The aim is a fast and reliable system that aids the medical field in the classification of RBCs and WBCs. © 2020 IEEE.
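    As an illustrative sketch only, the snippet below fine-tunes a Faster R-CNN detector with an SGD momentum optimizer for blood-cell classes; torchvision is used as a convenient stand-in, and the class count, data loader, and hyperparameters are assumptions rather than the study's configuration.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    NUM_CLASSES = 1 + 6  # background + RBC and five WBC variants (assumed label map)

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

    # Momentum optimizer, as named in the title.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9, weight_decay=5e-4)

    model.train()
    # blood_smear_loader is an assumed DataLoader yielding (list of image tensors,
    # list of target dicts with "boxes" and "labels") for annotated smear images.
    for images, targets in blood_smear_loader:
        loss_dict = model(images, targets)  # classification + box-regression losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()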

    A machine learning approach of lattice infill pattern for increasing material efficiency in additive manufacturing processes

    No full text
    Additive Manufacturing (AM) has become ubiquitous in manufacturing three-dimensional objects through 3D printing. Traditional analytical models are still widely utilized for low-cost 3D printing, but they are deficient in capturing the process-structure-property-performance relationship of AM. This paper introduces a new infill pattern, the lattice infill, to increase the material efficiency of 3D prints, coupled with a Machine Learning (ML) technique to address geometric corrections in modelling the shape deviations of AM. Among ML algorithms, a neural network (NN) is used to handle the large dataset of the system. The 3D coordinates of the proposed infill pattern are extracted as the input of the NN model. The scaled conjugate gradient (SCG) optimization technique is used to train the feedforward network, and a sigmoidal function is used as the activation of the output neurons. Network training achieved a cross-entropy (CE) performance of 0.00776625 and an accuracy of 98.8%. The trained network is applied to the STL file for geometric correction of the lattice infill pattern, which is then prepared in 3D printer slicing software. Conventional designs such as the cubic and grid infill patterns were also made for comparison. Engineering simulation software was used to simulate all three infill patterns and to measure approximate product weight, stress performance, and displacement under an applied external force. Comparisons showed that the new infill pattern is more efficient than the conventional infill patterns, saving up to 61.3% of material and essentially increasing the number of prints produced per spool by 2.5 times. The structure of the proposed design can also resist up to 1.6 kN of compressive load before breaking. © 2020 by the authors.
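    A minimal sketch of the shape-deviation network is given below: a small feedforward model that maps 3D lattice-infill coordinates to a sigmoidal output trained with cross-entropy. Scaled conjugate gradient is not available in Keras, so Adam is used as a stand-in optimizer, and the data file and layer sizes are assumptions.

    import numpy as np
    import tensorflow as tf

    # Assumed file: columns x, y, z, and a binary correction label per lattice point.
    data = np.loadtxt("lattice_infill_points.csv", delimiter=",")
    X, y = data[:, :3], data[:, 3]

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="tanh", input_shape=(3,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # sigmoidal output neuron
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=200, validation_split=0.15, verbose=0)
    print(model.evaluate(X, y, verbose=0))  # [cross-entropy, accuracy]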