
    Non-contact measuring device for determining the mass of poultry eggs

    This article investigates the feasibility of a non-contact method for predicting poultry egg weight over a 21-day storage period. The predictive assessment was based on shape features, capacitance, resistance and conductance of the eggs. Feature vectors were selected, and the data were reduced using principal component regression (PCR) and partial least squares regression (PLSR). A non-contact device comprising a video sensor and a measurement cell was proposed and developed to determine egg weight, and a software application was designed to acquire data for post-processing. The developed algorithms and procedures were applied to determine egg weight with the lowest relative error. The survey results show that the proposed contactless measurement system can predict egg weight with an accuracy of 94-98%.
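    The abstract names PLSR on reduced feature vectors as the prediction core. Below is a minimal sketch of that idea using scikit-learn's PLSRegression; the synthetic data generator, the four feature columns (shape index, capacitance, resistance, conductance), and the accuracy metric are illustrative assumptions, not the authors' dataset or exact procedure.

```python
# Hedged sketch: PLS regression predicting egg weight from four features,
# mirroring the feature types named in the abstract. Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: shape index, capacitance, resistance, conductance.
X = rng.normal(size=(n, 4))
weight = 60 + X @ np.array([3.0, 1.5, -1.0, 0.8]) + rng.normal(scale=1.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, weight, random_state=0)
pls = PLSRegression(n_components=2)      # reduced latent components
pls.fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rel_err = np.mean(np.abs(y_hat - y_te) / y_te)   # mean relative error
print(f"prediction accuracy ~ {100 * (1 - rel_err):.1f}%")
```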

    Egg weight estimation using an artificial neural network based on the geometric properties of digital images

    Egg weight plays an important role in the classification of eggs sold on the market. According to the Indonesian National Standard, chicken eggs for consumption are classified by shell colour and weight. Egg weight is usually measured with a digital scale to obtain accurate results, but scales cannot be applied to egg classification systems in large-scale industry because they are not time-efficient. Computer vision systems offer an accurate and efficient alternative for measuring egg weight from digital images. This paper proposes a method for estimating egg weight using an artificial neural network based on geometric properties of the egg extracted from a digital image. Egg images are captured against a black background with a digital camera and then processed to obtain a binary image. The geometric properties of the egg, consisting of length, width, area and perimeter, are extracted from the egg object in the image and used as input variables of the artificial neural network to estimate the weight. Experimental results show that the proposed method estimates egg weight with good accuracy, with a mean absolute percentage error of 2.27%. Statistical tests also show that the estimates of the proposed method do not differ significantly from weight measurements made with a digital scale. In terms of time, the proposed method is efficient, with an average computation time of less than 0.1 seconds per egg.
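    A minimal sketch of the described pipeline follows, using OpenCV for the binary image and contour features and scikit-learn's MLPRegressor as the neural network. The threshold value, network size, and the commented-out training data are assumptions, not the paper's exact settings.

```python
# Sketch: binarize an egg image on a black background, extract length,
# width, area and perimeter from the largest contour, feed them to an MLP.
import cv2
import numpy as np
from sklearn.neural_network import MLPRegressor

def egg_geometry(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Black background, so a simple global threshold suffices (assumed value).
    _, binary = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    egg = max(contours, key=cv2.contourArea)       # largest object = egg
    x, y, w, h = cv2.boundingRect(egg)
    return [max(w, h), min(w, h),                  # length, width (pixels)
            cv2.contourArea(egg), cv2.arcLength(egg, True)]  # area, perimeter

# Hypothetical training data: one feature row per image, scale weights as y.
# X = np.array([egg_geometry(cv2.imread(p)) for p in image_paths])
# y = np.array(scale_weights_grams)
# model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000).fit(X, y)
# print(model.predict(X[:1]))                      # estimated weight (g)
```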

    Data mining using intelligent systems: an optimized weighted fuzzy decision tree approach

    Data mining aims to analyze observational datasets to find relationships and to present the data in ways that are both understandable and useful. In this thesis, existing intelligent systems techniques such as the Self-Organizing Map, Fuzzy C-Means and decision trees are used to analyze several datasets. These techniques provide flexible information processing capability for handling real-life situations. The thesis is concerned with the design, implementation, testing and application of these techniques to those datasets. It also introduces a hybrid intelligent systems technique, the Optimized Weighted Fuzzy Decision Tree (OWFDT), with the aim of improving Fuzzy Decision Trees (FDTs) and solving practical problems. The thesis first proposes an optimized weighted fuzzy decision tree that uses Fuzzy C-Means to fuzzify the input instances while keeping the expected labels crisp. This leads to a different output layer activation function and different weight connections in the neural network (NN) structure obtained by mapping the FDT to an NN. A momentum term was also introduced into the learning process used to train the weight connections, to avoid oscillation or divergence. A new reasoning mechanism has also been proposed to combine the constructed tree with the weights optimized in the learning process. The thesis also compares the OWFDT with two benchmark algorithms, Fuzzy ID3 and the weighted FDT. Six datasets ranging from material science to medical and civil engineering were introduced as case study applications. These datasets involve the classification of composite material failure mechanisms, classification of electrocorticography (ECoG)/electroencephalogram (EEG) signals, eye bacteria prediction and wave overtopping prediction. Different intelligent systems techniques were used to cluster the patterns and predict the classes, although OWFDT was used to design classifiers for all the datasets. In the material dataset, the Self-Organizing Map and Fuzzy C-Means were used to cluster the acoustic event signals and assign them to different failure mechanisms; after this clustering, OWFDT was used to design a classifier for the acoustic event signals. For the eye bacteria dataset, the bagging technique was used to improve the classification accuracy of Multilayer Perceptrons and Decision Trees. Applying bootstrap aggregating (bagging) to Decision Trees also helped to select the most important sensors (features) so that the dimension of the data could be reduced. The most important features were used to grow the OWFDT, and the curse of dimensionality could be mitigated using this approach. The last dataset, concerned with wave overtopping, was used to benchmark OWFDT against other intelligent systems techniques, such as the Adaptive Neuro-Fuzzy Inference System (ANFIS), Evolving Fuzzy Neural Network (EFuNN), Genetic Neural Mathematical Method (GNMM) and Fuzzy ARTMAP. Through the analysis of these datasets, it has been shown that patterns can be found and classes classified by combining these techniques. OWFDT has also demonstrated its efficiency and effectiveness compared with a conventional fuzzy Decision Tree and a weighted fuzzy Decision Tree.
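    The fuzzification step the abstract describes (Fuzzy C-Means on the inputs, crisp labels retained) can be illustrated with the standard FCM update equations; the sketch below is a plain numpy implementation of textbook FCM, not the thesis's OWFDT code, and the fuzzifier m and cluster count are assumed defaults.

```python
# Minimal numpy sketch of standard Fuzzy C-Means, the step used to fuzzify
# input instances before growing the fuzzy decision tree.
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        ratio = d[:, :, None] / d[:, None, :]
        U = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)
    return U, centers

# U, _ = fcm(X)   # soft memberships become the fuzzified tree inputs;
#                 # the class labels stay crisp, as in the thesis.
```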

    Slantlet transform-based segmentation and α-shape theory-based 3D visualization and volume calculation methods for MRI brain tumour

    Magnetic Resonance Imaging (MRI) is the foremost component of medical diagnosis, and it requires careful, efficient, precise and reliable image analyses for brain tumour detection, segmentation, visualisation and volume calculation. The inherently varying nature of tumour shapes, locations and image intensities makes brain tumour detection greatly intricate, so a perfect result for detection and segmentation is clearly advantageous. Despite several available methods, tumour detection and segmentation are far from being resolved, and progress on 3D visualisation and volume calculation of brain tumours is very limited due to the absence of ground truth. This study therefore proposes four new methods: abnormal MRI slice detection, brain tumour segmentation based on the Slantlet Transform (SLT), and 3D visualisation and volume calculation of brain tumours based on Alpha (α) shape theory. In addition, two new datasets with ground truth are created to validate the shape and volume of the brain tumour. The methodology involves three main phases. The first phase begins with cerebral tissue extraction, followed by abnormal block detection and its fine-tuning mechanism, and ends with abnormal slice detection based on the detected abnormal blocks. The second phase involves brain tumour segmentation, which covers three processes: the abnormal slice is first decomposed using the SLT, its significant coefficients are selected using the Donoho universal threshold, and the resultant image is composed using the inverse SLT to obtain the tumour region. Finally, the third phase proposes four original ideas to visualise the tumour and calculate its volume. The first is the determination of an optimal α value using a new formula. The second is to merge all tumour points across all abnormal slices using the α value to form a set of tetrahedrons. The third is to select the most relevant tetrahedrons using the α value as the threshold. The fourth is to calculate the volume of the tumour based on the selected tetrahedrons. To evaluate the performance of the proposed methods, a series of experiments was conducted using three standard datasets comprising 4567 MRI slices from 35 patients. The methods were evaluated using standard practices and benchmarked against the best and most up-to-date techniques. Based on the experiments, the proposed methods produced very encouraging results, with an accuracy rate of 96% for abnormal slice detection and sensitivity and specificity of 99% for brain tumour segmentation. A perfect result for the 3D visualisation and volume calculation of the brain tumour was also attained. These results suggest that the proposed methods may constitute a basis for reliable MRI brain tumour diagnosis and treatment.
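    The third phase (tetrahedron selection and volume summation) can be sketched with standard computational geometry: a Delaunay triangulation over the stacked tumour points, a circumradius-below-α filter (a common α-shape criterion), and the determinant formula for tetrahedron volume. This is a hedged illustration under those assumptions; the thesis's optimal-α formula is not reproduced here.

```python
# Sketch: alpha-shape style tetrahedron selection and tumour volume.
import numpy as np
from scipy.spatial import Delaunay

def tet_volume(p):                  # p: (4, 3) vertices of one tetrahedron
    return abs(np.linalg.det(p[1:] - p[0])) / 6.0

def circumradius(p):
    # Centre c of the sphere through the 4 vertices, from
    # 2(p_i - p_0) . c = |p_i|^2 - |p_0|^2 for i = 1..3.
    A = 2.0 * (p[1:] - p[0])
    b = (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
    c = np.linalg.solve(A, b)
    return np.linalg.norm(p[0] - c)

def alpha_shape_volume(points, alpha):
    tets = Delaunay(points).simplices            # (m, 4) vertex indices
    keep = [t for t in tets if circumradius(points[t]) < alpha]
    return sum(tet_volume(points[t]) for t in keep)

# volume_mm3 = alpha_shape_volume(tumour_points, alpha)  # hypothetical inputs
```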

    Egg classification system using a computer vision algorithm

    The objective is to develop an egg classification system using a computer vision algorithm. This work presents the development of an open-source computer vision algorithm that classifies chicken eggs by estimating their weight from a 2D image; the weight is calculated from only two geometric factors, the width and the height of the product. The algorithm begins with image acquisition, then processes the image, detects contours, analyzes the dimensions, and finally calculates the weight and classifies the product. The algorithm operates on a test scene, a controlled environment consisting of a Genius FaceCam 1000X webcam and a light source placed frontally to the base. The classification criteria are based on the Ecuadorian technical standard for egg products, NTE INEN 1973. To validate the algorithm, tests were carried out with 21 randomly selected chicken eggs, obtaining 19 matches, for an efficiency of 90.47% in classification and 97.32% in weight estimation.
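    A minimal OpenCV sketch of that pipeline follows: contour detection, width/height measurement, a width-and-height weight estimate, and grading. The pixel-to-centimetre scale, the fit constant, and the weight bands are hypothetical placeholders, not the thesis's calibration or the NTE INEN 1973 values.

```python
# Sketch: contour -> width/height -> weight estimate -> grade.
import cv2

PX_PER_CM = 40.0        # assumed camera calibration, not from the thesis
K = 0.52                # assumed fit constant for the weight model

def classify_egg(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    width_cm, height_cm = w / PX_PER_CM, h / PX_PER_CM
    weight_g = K * width_cm ** 2 * height_cm   # ellipsoid-style approximation
    # Placeholder grade bands; substitute the NTE INEN 1973 thresholds.
    for grade, low in (("extra grande", 64), ("grande", 55), ("mediano", 50)):
        if weight_g >= low:
            return grade, weight_g
    return "pequeño", weight_g
```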

    Application of Novel Thermal Technology in Foods Processing

    Advanced and novel thermal technologies, such as ohmic heating, dielectric heating (e.g., microwave heating and radio frequency heating), and inductive heating, have been developed to improve the effectiveness of heat processing whilst guaranteeing food safety and eliminating undesirable impacts on the organoleptic and nutritional properties of foods. Novel thermal technologies rely on heat generation directly inside foods, which has implications for improving the overall energy efficiency of the heating process itself. The use of novel thermal technologies is dependent on the complexity and inherent properties of the food materials of interest (e.g., thermal conductivity, electrical resistance, water content, pH, rheological properties, food porosity, and presence of particulates). Moreover, there is a need to address the combined use of thermal processing with emerging technologies such as pulsed electric fields, high hydrostatic pressure, and ultrasound to complement the conventional thermal processing of fluid or solid foods. This Special Issue provides readers with an overview of the latest applications of various novel technologies in food processing. A total of eight cutting-edge original research papers and one comprehensive review paper discussing novel processing technologies from the perspectives of food safety, sustainability, process engineering, (bio)chemical changes, health, nutrition, sensory issues, and consumers are covered in this Special Issue

    Intelligent data mining using artificial neural networks and genetic algorithms: techniques and applications

    Data Mining (DM) refers to the analysis of observational datasets to find relationships and to summarize the data in ways that are both understandable and useful. Many DM techniques exist. Compared with other DM techniques, Intelligent Systems (IS) based approaches, which include Artificial Neural Networks (ANNs), fuzzy set theory, approximate reasoning, and derivative-free optimization methods such as Genetic Algorithms (GAs), are tolerant of imprecision, uncertainty, partial truth, and approximation. They provide flexible information processing capability for handling real-life situations. This thesis is concerned with the ideas behind the design, implementation, testing and application of a novel IS-based DM technique. The unique contribution of this thesis is the implementation of a hybrid IS DM technique (the Genetic Neural Mathematical Method, GNMM) for solving novel practical problems, the detailed description of this technique, and illustrations of several applications solved by it. GNMM consists of three steps: (1) GA-based input variable selection, (2) Multi-Layer Perceptron (MLP) modelling, and (3) mathematical programming based rule extraction. In the first step, GAs are used to evolve an optimal set of MLP inputs. An adaptive method based on the average fitness of successive generations is used to adjust the mutation rate, and hence the exploration/exploitation balance. In addition, GNMM uses the elite group and appearance percentage to minimize the randomness associated with GAs. In the second step, MLP modelling serves as the core DM engine for classification/prediction tasks. An Independent Component Analysis (ICA) based weight initialization algorithm is used to determine optimal weights before training begins, and the Levenberg-Marquardt (LM) algorithm is used to achieve a second-order speedup compared with conventional Back-Propagation (BP) training. In the third step, mathematical programming based rule extraction is used not only to identify the premises of multivariate polynomial rules, but also to explore features from the extracted rules based on the data samples associated with each rule. The methodology can therefore provide regression rules and features not only in polyhedrons with data instances, but also in polyhedrons without them. A total of six datasets from environmental and medical disciplines were used as case study applications. These datasets involve the prediction of the longitudinal dispersion coefficient, classification of electrocorticography (ECoG)/electroencephalogram (EEG) data, eye bacteria Multisensor Data Fusion (MDF), and diabetes classification (denoted Data I through Data VI). GNMM was applied to all six datasets to explore its effectiveness, with a different emphasis for each: Data I and II give a detailed illustration of how GNMM works; Data III and IV show how to deal with difficult classification problems; Data V illustrates the averaging effect of GNMM; and Data VI concerns GA parameter selection and benchmarking GNMM against other IS DM techniques such as the Adaptive Neuro-Fuzzy Inference System (ANFIS), Evolving Fuzzy Neural Network (EFuNN), Fuzzy ARTMAP, and Cartesian Genetic Programming (CGP). In addition, datasets obtained from published works (i.e. Data II and III) or public domains (i.e. Data VI), for which previous results exist in the literature, were used to benchmark GNMM's effectiveness. As a closely integrated system, GNMM has the merit that it needs little human interaction. With some predefined parameters, such as the GA's crossover probability and the shape of the ANNs' activation functions, GNMM is able to process raw data until human-interpretable rules are extracted. This is an important practical feature, as users of a DM system often have little or no need to fully understand its internal components. Through the case study applications, it has been shown that the GA-based variable selection stage is capable of filtering out irrelevant and noisy variables, improving the accuracy of the model, making the ANN structure less complex and easier to understand, and reducing the computational complexity and memory requirements. Furthermore, rule extraction ensures that the MLP training results are easily understandable and transferable.
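    GNMM's first step (GA-evolved input masks with a mutation rate adapted from the average fitness of successive generations, plus an elite group) can be sketched as below. The fitness function here is cross-validated accuracy of a small scikit-learn MLP, a stand-in for the thesis's ICA-initialized, LM-trained network; population size, generation count, and the mutation schedule are assumed values.

```python
# Sketch: GA feature selection with fitness-adaptive mutation and elitism.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def ga_select(X, y, pop=20, gens=15, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    masks = rng.integers(0, 2, size=(pop, n_features), dtype=bool)
    mut, prev_avg, best_fit, best_mask = 0.05, 0.0, -1.0, masks[0]
    for _ in range(gens):
        fit = np.array([
            cross_val_score(MLPClassifier(max_iter=500), X[:, m], y, cv=3).mean()
            if m.any() else 0.0 for m in masks])
        if fit.max() > best_fit:
            best_fit, best_mask = fit.max(), masks[np.argmax(fit)].copy()
        avg = fit.mean()
        # Adaptive mutation: widen exploration when average fitness stalls.
        mut = min(0.5, mut * 1.5) if avg <= prev_avg else max(0.01, mut * 0.5)
        prev_avg = avg
        elite = masks[np.argsort(fit)[-pop // 2:]]   # elite group survives
        kids = elite[rng.integers(0, len(elite), pop - len(elite))].copy()
        flip = rng.random(kids.shape) < mut          # bit-flip mutation
        kids[flip] = ~kids[flip]
        masks = np.vstack([elite, kids])
    return best_mask                                 # boolean column mask
```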