Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement
Volume measurement plays an important role in the production and processing of food products. Various methods based on 3D reconstruction have been proposed to measure the volume of irregularly shaped food products. However, 3D reconstruction carries a high computational cost, and some reconstruction-based volume measurement methods have low accuracy. An alternative is the Monte Carlo method, which measures volume using random points: it only requires knowing whether each random point falls inside or outside the object and does not need a 3D reconstruction. This paper proposes a computer vision system that measures the volume of irregularly shaped food products without 3D reconstruction, using the Monte Carlo method with a heuristic adjustment. Five images of each food product were captured with five cameras and processed into binary images. Monte Carlo integration with the heuristic adjustment was then performed on the information extracted from the binary images. The experimental results show that the proposed method achieves high accuracy and precision compared with the water displacement method. In addition, it is both more accurate and faster than the space carving method.
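The core idea of estimating volume from an inside/outside test alone can be sketched as below. This is a minimal illustration, not the paper's method: the paper decides inside/outside from five binary camera images with a heuristic adjustment, whereas here a simple analytic predicate (a unit sphere) stands in for that test, and the function name and interface are invented for the example.

```python
import random

def monte_carlo_volume(inside, bounds, n=100_000, seed=0):
    """Estimate the volume of a shape from an inside/outside test alone.

    inside -- predicate taking (x, y, z), True if the point is in the object
    bounds -- ((xmin, xmax), (ymin, ymax), (zmin, zmax)) bounding box
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        p = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        hits += inside(*p)                 # count points landing in the object
    box_volume = 1.0
    for lo, hi in bounds:
        box_volume *= hi - lo
    # Volume = bounding-box volume times the fraction of hits.
    return box_volume * hits / n

# Unit sphere in a [-1, 1]^3 box; true volume is 4*pi/3 ~= 4.19.
vol = monte_carlo_volume(lambda x, y, z: x*x + y*y + z*z <= 1.0,
                         ((-1, 1), (-1, 1), (-1, 1)))
```

With 100,000 samples the estimate typically lands within about 1% of the true value; accuracy improves as the square root of the sample count.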
Computational Optimizations for Machine Learning
The present book contains the 10 articles finally accepted for publication in the Special Issue “Computational Optimizations for Machine Learning” of the MDPI journal Mathematics, covering a wide range of topics connected to the theory and applications of machine learning, neural networks and artificial intelligence. These topics include, among others, various classes of machine learning, such as supervised, unsupervised and reinforcement learning, as well as deep neural networks, convolutional neural networks, GANs, decision trees, linear regression, SVMs, K-means clustering, Q-learning, temporal difference and deep adversarial networks. It is hoped that the book will be interesting and useful both to those developing mathematical algorithms and applications in the domain of artificial intelligence and machine learning, and to those with the appropriate mathematical background who wish to become familiar with recent advances in the computational optimization mathematics of machine learning, a field that has by now permeated almost all sectors of human life and activity.
A Novel Android Botnet Detection System Using Image-Based and Manifest File Features
Malicious botnet applications have become a serious threat and increasingly incorporate sophisticated detection-avoidance techniques, so more effective mitigation approaches are needed to combat the rise of Android botnets. Although the use of Machine Learning to detect botnets has been a focus of recent research, several challenges remain. To overcome the limitations of hand-crafted features for Machine-Learning-based detection, in this paper we propose a novel mobile botnet detection system based on features extracted from images and from the manifest file. The scheme employs a Histogram of Oriented Gradients and byte histograms obtained from images representing the app executable, and combines these with features derived from the manifest files. Feature selection is then applied to retain the best features for classification with Machine-Learning algorithms. The proposed system was evaluated on the ISCX botnet dataset, and the experimental results demonstrate its effectiveness, with F1 scores ranging from 0.923 to 0.96 across popular Machine-Learning algorithms. Furthermore, with the Extra Trees model, up to 97.5% overall accuracy was obtained using an 80:20 train–test split, and 96% overall accuracy using 10-fold cross-validation.
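Of the features named above, the byte histogram is the simplest to illustrate: it summarizes the raw bytes of the app executable as a fixed-length vector, independent of file size. The sketch below shows this one feature only (the HOG and manifest-derived features are omitted), and the function name and normalization choice are assumptions for the example, not the paper's exact pipeline.

```python
def byte_histogram(data: bytes, normalize=True):
    """256-bin histogram of byte values: a compact, size-independent
    content signature of an executable for a Machine-Learning classifier."""
    hist = [0] * 256
    for b in data:          # iterating over bytes yields ints 0..255
        hist[b] += 1
    if normalize and data:  # convert counts to frequencies
        total = len(data)
        hist = [c / total for c in hist]
    return hist

# Toy 4-byte "executable": two 0x00 bytes, one 0xff, one 0x10.
feats = byte_histogram(b"\x00\x00\xff\x10")
```

In practice the resulting 256-dimensional vectors would be stacked into a feature matrix and passed, after feature selection, to a classifier such as Extra Trees.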
An enhanced gated recurrent unit with auto-encoder for solving text classification problems
Classification has become an important task for automatically categorizing documents into their respective groups. The Gated Recurrent Unit (GRU) is a type of Recurrent Neural Network (RNN), a deep learning architecture that contains an update gate and a reset gate. It is considered one of the most efficient text classification techniques, particularly on sequential datasets. However, the GRU suffers from three major issues when applied to text classification problems. The first is its failure to reduce the dimensionality of the data, which leads to low-quality solutions. Second, the GRU is difficult to train because of redundancy between the update and reset gates; the reset gate adds complexity and requires high processing time. Third, the GRU loses informative features at each recurrence during the training phase and incurs high computational cost, because in its standard form it selects features from the datasets (or previous outputs) at random. Therefore, this research proposes a new model, the Encoder Simplified GRU (ES-GRU), which reduces the dimensionality of the data with an Auto-Encoder (AE). The reset gate is replaced by the update gate to remove the redundancy and complexity of the standard GRU. Finally, Batch Normalization is incorporated into both the GRU and the AE to improve the performance of the proposed ES-GRU model. The model was evaluated on seven benchmark text datasets and compared with six well-known baseline multiclass text classification approaches: the standard GRU, AE, Long Short-Term Memory, Convolutional Neural Network, Support Vector Machine, and Naïve Bayes. Across various performance evaluation metrics, a considerable improvement was observed over the other standard classification techniques, demonstrating the effectiveness and efficiency of the developed model.
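The key architectural change, removing the reset gate so that only an update gate remains, can be sketched as a single recurrence step. This is a minimal NumPy illustration of the gate simplification only; the weight names, shapes, and initialization are assumptions for the example, and the paper's full model additionally includes the Auto-Encoder and Batch Normalization, which are not shown here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simplified_gru_step(x, h, Wz, Uz, bz, Wh, Uh, bh):
    """One step of a GRU with the reset gate removed (update gate only),
    mirroring the idea of dropping the reset gate to cut redundancy.

    x  : input vector (d,)        h  : previous hidden state (k,)
    W* : (k, d) input weights     U* : (k, k) recurrent weights
    b* : (k,) biases
    """
    z = sigmoid(Wz @ x + Uz @ h + bz)        # update gate
    h_cand = np.tanh(Wh @ x + Uh @ h + bh)   # candidate state, no reset gate
    return (1.0 - z) * h + z * h_cand        # blend old and candidate state

rng = np.random.default_rng(0)
d, k = 4, 3
params = [rng.standard_normal(s) * 0.1 for s in
          [(k, d), (k, k), (k,), (k, d), (k, k), (k,)]]
h = np.zeros(k)
for x in rng.standard_normal((5, d)):        # run a length-5 input sequence
    h = simplified_gru_step(x, h, *params)
```

Because the candidate state is bounded by tanh and the new state interpolates between the old state and the candidate, the hidden activations stay in (-1, 1), which also makes the cell straightforward to combine with Batch Normalization.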
Advances in Image Processing, Analysis and Recognition Technology
For many decades, researchers have been trying to make computer analysis of images as effective as the human vision system. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life: they improve particular activities and provide handy tools which are sometimes only for entertainment but quite often significantly increase our safety. Indeed, the range of practical implementations of image processing algorithms is particularly wide. Moreover, the rapid growth of computing power has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, creating a need for novel approaches.
Free-text keystroke dynamics authentication with a reduced need for training and language independency
This research aims to overcome a drawback of free-text keystroke dynamics authentication: the large amount of training data it requires. To achieve this, a new key-pairing method based on the keyboard's key layout is proposed. The method extracts several timing features from specific key pairs. The level of similarity between a user's profile data and his or her test data is then used to decide whether the test data was provided by the genuine user. The key-pairing technique was designed to make the best possible use of the smallest amount of training data, reducing the need to type long text during the training stage. In addition, non-conventional features were defined and extracted from the user's input stream to capture more of the user's typing behaviour. These features compute the average frequency with which a user performs certain actions while typing a whole piece of text, helping the system form a better picture of the user's identity from a small amount of training data. Tests were conducted on the key-pair timing features and the non-conventional features separately: the timing features produced an FAR of 0.013 and an FRR of 0.384, and the non-conventional features an FAR of 0.0104 and an FRR of 0.25. The two feature sets were then fused to improve the error rates. Feature-level fusion reduced the error rates to an FAR of 0.00896 and an FRR of 0.215, while decision-level fusion achieved zero FAR and FRR. Keystroke dynamics research also suffers from the fact that almost all text included in such studies is typed in English. The key-pairing method, however, has the advantage of being language-independent, allowing it to be applied to text typed in other languages. In this research, the method was applied to text in Arabic, and the results were similar to those produced for English text. This demonstrates the applicability of the key-pairing method to a language other than English, even one with a completely different alphabet and characteristics. Moreover, the experiments with English and Arabic texts showed a direct relation between the users' familiarity with the language and the performance of the authentication system.
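The timing features above can be illustrated with a minimal digraph (key-pair) extractor: for each pair of consecutively pressed keys, it records the elapsed time and averages it into a per-user profile. This is a generic sketch under stated assumptions; the paper's contribution is to group pairs by their positions in the keyboard layout so that sparse data is used efficiently, and that layout-based grouping (along with the non-conventional features) is not reproduced here. The function name and event format are invented for the example.

```python
def digraph_timing_profile(events):
    """Build a per-user profile of average inter-key times.

    events -- list of (key, timestamp_ms) press events in typing order.
    Returns {(key1, key2): mean elapsed ms} for each observed ordered pair.
    The paper's key-pairing method would further group these pairs by
    keyboard layout; here each ordered pair is kept as-is.
    """
    samples = {}
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        samples.setdefault((k1, k2), []).append(t2 - t1)
    return {pair: sum(ts) / len(ts) for pair, ts in samples.items()}

# Toy stream: the user types "the" then "th" again, slightly slower.
profile = digraph_timing_profile([("t", 0), ("h", 120), ("e", 210),
                                  ("t", 500), ("h", 650)])
```

Authentication would then compare a test sample's profile against the stored one, for example by measuring the distance between the two dictionaries on their shared key pairs.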