Analysis Of Deep Learning Architecture In Classifying SNI Masks
In preparing for the new normal during COVID-19, every government agency, school and university is required to comply with new government regulations, which oblige everyone engaged in activities outside the home to wear a mask and practice physical distancing. This is one of the new habits the government began instilling in 2020, owing to the ease with which the Covid-19 virus spreads. The selection of a good mask is therefore important, namely a mask that follows the WHO recommendation of at least 3 layers. The purpose of this study was to classify SNI and non-SNI mask types, so that an SNI mask monitoring system could improve security at locations that require masks and ensure that the masks used function effectively to prevent the spread of Covid-19. The classification models used the InceptionV3, ResNet50, InceptionV2, AlexNet and DenseNet architectures. In the trials carried out, the InceptionV3 architecture achieved the most optimal results, with a loss value of 3.4889 and an accuracy of 0.9894 (98.94%)
Classification using semantic feature and machine learning: Land-use case application
Land cover classification has attracted recent work, especially for deforestation, urban area monitoring and agricultural land use. Traditional classification approaches have limited accuracy, especially for heterogeneous land cover, so using machine learning may improve classification accuracy. This paper deals with land-use scene recognition on very high-resolution remote sensing imagery. We propose a new framework based on semantic features, handcrafted features and the decisions of machine learning classifiers. The method starts with semantic feature extraction using a convolutional neural network. Handcrafted features are also extracted, based on color and multi-resolution characteristics. The classification stage is then carried out by three machine learning algorithms, and the final classification result is produced by a majority vote algorithm. The idea is to take advantage of both semantic and handcrafted features; the second aim is to use decision fusion to enhance the classification result. Experimental results show that the proposed method provides good accuracy and a reliable tool for land-use image identification
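The decision-fusion step described above (three classifiers combined by majority vote) can be sketched in a few lines of Python; the function names here are illustrative, not taken from the paper.

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse per-classifier label decisions for one image by majority vote.

    decisions: list of labels, one per classifier (e.g. three classifiers).
    Ties are broken in favor of the first label reaching the top count.
    """
    return Counter(decisions).most_common(1)[0][0]

def fuse(all_decisions):
    """Apply majority voting image by image.

    all_decisions: list of per-image decision lists.
    """
    return [majority_vote(d) for d in all_decisions]
```

With three classifiers, a label that two of them agree on always wins, which is exactly the robustness-to-single-classifier-error that decision fusion is meant to provide.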
Evaluation of CNN Models with Fashion MNIST Data
This work evaluates the performance of four CNNs on the Fashion MNIST dataset. Fashion MNIST consists of 70,000 28×28 grayscale images, each associated with a label from 10 classes. In this report, the accuracies of four popular CNN models, LeNet-5, AlexNet, VGG-16 and ResNet, in classifying Fashion-MNIST data revealed that ResNet was the best suited for the selected dataset. The training process was coded with TensorFlow. With the improved accuracy, the resulting model could help a fashion company classify clothing more accurately; moreover, one could build one's own online fashion closet
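The model comparison described above reduces to ranking networks by top-1 accuracy on a shared test set. A minimal sketch of that evaluation loop (the random scores below are placeholders; a real evaluation would use each trained network's softmax outputs):

```python
import numpy as np

def top1_accuracy(scores, labels):
    """Fraction of samples whose highest-scoring class equals the label."""
    return float(np.mean(np.argmax(scores, axis=1) == labels))

# Hypothetical class scores from each model on one shared test batch.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=256)
model_scores = {name: rng.normal(size=(256, 10))
                for name in ("LeNet-5", "AlexNet", "VGG-16", "ResNet")}

# Rank the four models from best to worst top-1 accuracy.
ranking = sorted(model_scores,
                 key=lambda m: top1_accuracy(model_scores[m], labels),
                 reverse=True)
```

Evaluating every model against the same `labels` array is what makes the ranking a fair comparison.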
A computationally efficient crack detection approach based on deep learning assisted by stockwell transform and linear discriminant analysis
This paper presents SpeedyNet, a computationally efficient crack detection method. Rather than using a computationally demanding convolutional neural network (CNN), this approach made use of a simple neural network with a shallow architecture augmented by a 2D Stockwell transform for feature transformation and linear discriminant analysis for feature reduction. The approach was employed to classify images with minute cracks under three simulated noisy conditions. Using time–frequency image transformation, feature conditioning and a fast deep learning-based classifier, this method performed better in terms of speed, accuracy and robustness compared to other image classifiers. The performance of SpeedyNet was compared to that of two popular pre-trained CNN models, Xception and GoogleNet, and the results demonstrated that SpeedyNet was superior in both classification accuracy and computational speed. A synthetic efficiency index was then defined for further assessment. Compared to GoogleNet and the Xception models, SpeedyNet enhanced classification efficiency at least sevenfold. Furthermore, SpeedyNet’s reliability was demonstrated by its robustness and stability when faced with network parameter and input image uncertainties including batch size, repeatability, data size and image dimensions
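SpeedyNet's LDA feature-reduction stage is not specified in detail in the abstract; for the two-class crack/no-crack case it can be illustrated with a classical Fisher discriminant. This is a generic NumPy sketch under assumed names, not the authors' implementation:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Fisher discriminant direction for two classes (e.g. crack vs. no-crack).

    Returns the unit vector w proportional to Sw^{-1} (m1 - m0), where Sw is
    the pooled within-class scatter matrix; projecting onto w maximizes
    between-class separation relative to within-class spread.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S0 = (X0 - m0).T @ (X0 - m0)
    S1 = (X1 - m1).T @ (X1 - m1)
    Sw = S0 + S1 + 1e-6 * np.eye(X0.shape[1])  # small ridge for stability
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

def reduce_features(X, w):
    """Project d-dimensional feature vectors onto the 1-D discriminant axis."""
    return X @ w
```

Reducing each transformed image to a single discriminant score is one way a shallow classifier can stay fast, which is consistent with the speed claims above.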
Pre-Trained AlexNet Architecture with Pyramid Pooling and Supervision for High Spatial Resolution Remote Sensing Image Scene Classification
The rapid development of high spatial resolution (HSR) remote sensing imagery techniques not only provides a considerable number of datasets for scene classification tasks but also requires an appropriate scene classification choice when faced with finite labeled samples. AlexNet, a relatively simple convolutional neural network (CNN) architecture, has achieved great success in scene classification tasks and has proven to be an excellent foundational hierarchical and automatic scene classification technique. However, current HSR remote sensing scene classification datasets are typically small and simply categorized, and the limited annotated samples easily cause non-convergence. For HSR remote sensing imagery, multi-scale information of the same scenes can represent the scene semantics to a certain extent but lacks an efficient fusion mechanism. Meanwhile, the current pre-trained AlexNet architecture lacks appropriate supervision for enhancing model performance, which easily causes overfitting. In this paper, an improved pre-trained AlexNet architecture named pre-trained AlexNet-SPP-SS is proposed, which incorporates scale pooling (spatial pyramid pooling, SPP) and side supervision (SS) to address these two issues. Extensive experiments on the UC Merced dataset and the Google Image dataset of SIRI-WHU demonstrate that the proposed pre-trained AlexNet-SPP-SS model is superior to the original AlexNet architecture as well as to traditional scene classification methods
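The spatial pyramid pooling component can be illustrated without any deep learning framework: pooling a convolutional feature map over a pyramid of grids yields a fixed-length vector regardless of input size. A minimal NumPy sketch (the grid levels 1, 2, 4 are an assumption, not necessarily the paper's configuration):

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool an (H, W, C) feature map over a pyramid of grids.

    Each level n splits the map into an n x n grid and keeps the per-channel
    max of every cell, giving a fixed-length vector of (1 + 4 + 16) * C
    values for levels (1, 2, 4), regardless of H and W.
    Assumes H and W are at least max(levels), so no grid cell is empty.
    """
    H, W, C = fmap.shape
    out = []
    for n in levels:
        hs = np.linspace(0, H, n + 1, dtype=int)  # row boundaries
        ws = np.linspace(0, W, n + 1, dtype=int)  # column boundaries
        for i in range(n):
            for j in range(n):
                cell = fmap[hs[i]:hs[i + 1], ws[j]:ws[j + 1], :]
                out.append(cell.max(axis=(0, 1)))
    return np.concatenate(out)
```

Because the output length depends only on the levels and the channel count, the fully connected layers that follow see a fixed-size input even when scene images of different scales are fed in.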
Identification of new particle formation events with deep learning
New particle formation (NPF) in the atmosphere is globally an
important source of climate relevant aerosol particles. Occurrence of NPF
events is typically analyzed by researchers manually from particle size
distribution data day by day, which is time consuming and the classification
of event types may be inconsistent. To get more reliable and consistent
results, the NPF event analysis should be automatized. We have developed an
automatic analysis method based on deep learning, a subarea of machine
learning, for NPF event identification. To our knowledge, this is the first
time that a deep learning method, i.e., transfer learning of a convolutional
neural network (CNN), has successfully been used to automatically classify
NPF events into different classes directly from particle size distribution
images, similarly to how the researchers carry out the manual classification. The
developed method is based on image analysis of particle size distributions
using a pretrained deep CNN, named AlexNet, which was transfer learned to
recognize NPF event classes (six different types). In transfer learning, a
partial set of particle size distribution images was used in the training
stage of the CNN and the rest of the images for testing the success of the
training. The method was utilized for a 15-year-long dataset measured at San
Pietro Capofiume (SPC) in Italy. We studied the performance of the training
with different training and testing of image number ratios as well as with
different regions of interest in the images. The results show that clear
event (i.e., classes 1 and 2) and nonevent days can be identified with an
accuracy of ca. 80 %, when the CNN classification is compared with that
of an expert, which is a good first result for automatic NPF event analysis.
In the event classification, the choice between different event classes is
not an easy task even for trained researchers, and thus overlapping or confusion
between different classes occurs. Hence, we cross-validated the learning
results of CNN with the expert-made classification. The results show that the
overlapping occurs, typically between the adjacent or similar type of classes,
e.g., a manually classified Class 1 is categorized mainly into classes 1 and
2 by CNN, indicating that the manual and CNN classifications are very
consistent
for most of the days. The classification would be more consistent, for both
the human and the CNN, if only two classes were used for event days instead
of three. Thus, we recommend that in future analyses,
event days should be categorized into classes of quantifiable (i.e., clear
events, classes 1 and 2) and nonquantifiable (i.e., weak events, Class
3). This would better describe the difference of those classes: both
formation and growth rates can be determined for quantifiable days but not
both for nonquantifiable days. Furthermore, we investigated more deeply the
days that are classified as clear events by experts and recognized as
nonevents by the CNN and vice versa. Clear misclassifications seem to occur
more commonly in manual analysis than in the CNN categorization, which is
mostly due to the inconsistency in the human-made classification or errors in
the booking of the event class. In general, the automatic CNN classifier has
a better reliability and repeatability in NPF event classification than
human-made classification and, thus, the transfer-learned pretrained CNNs
are powerful tools to analyze long-term datasets. The developed NPF event
classifier can be easily utilized to analyze any long-term datasets more
accurately and consistently, which helps us to understand in detail
aerosol–climate interactions and the long-term effects of climate change on
NPF in the atmosphere. We encourage researchers to use the model at other
sites. However, we suggest that the CNN be transfer learned again for new
site data, with a minimum of ca. 150 images per class, to obtain good
enough classification results, especially if the size distribution evolution
differs from the training data. In the future, we will utilize the method for
data from other sites, develop it to analyze more parameters and evaluate how
successfully the CNN could be trained with synthetic NPF event data.
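The transfer-learning setup above varies the training/testing image-number ratio per class. A minimal sketch of such a per-class split (the helper name and data layout are illustrative, not the authors' code):

```python
import random

def split_per_class(images_by_class, train_ratio=0.8, seed=0):
    """Split labeled images into training and testing sets, class by class.

    images_by_class: dict mapping class name -> list of image identifiers.
    Each class is shuffled and split independently, so the class balance of
    the training set mirrors that of the full dataset.
    """
    rng = random.Random(seed)
    train, test = {}, {}
    for cls, imgs in images_by_class.items():
        imgs = list(imgs)
        rng.shuffle(imgs)
        k = int(round(train_ratio * len(imgs)))
        train[cls], test[cls] = imgs[:k], imgs[k:]
    return train, test
```

Splitting within each class, rather than over the pooled dataset, keeps rare event classes represented in both halves, which matters when a class has only around the recommended 150 images.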
Development of an Intelligent System Based on Convolutional Neural Networks for Attendance Control in Private Organizations
The objective of this research was to demonstrate the effectiveness of an
intelligent system based on convolutional neural networks for controlling
student attendance in private organizations. The research was applied in
type, with a quantitative approach and a pre-experimental design with a
single measurement. Four indicators were considered: ease of registration,
average time, time lost, and accuracy. These indicators were evaluated over
the course of 24 calendar days by means of registration forms, which were
prepared and evaluated by the authors of this work and validated by experts.
The results obtained showed that the intelligent system significantly
improved the attendance-taking process. The models and algorithms used
yielded an average of 91.198% accuracy over the tests performed, and this
average ranges, at a 95% confidence level, between 88.81% and 93.58% accuracy
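The reported spread (91.198% average accuracy, 95% confidence interval from 88.81% to 93.58%) corresponds to a standard normal-approximation confidence interval over repeated measurements. A minimal sketch of that computation (the sample values in the usage below are hypothetical, not the study's data):

```python
import math

def mean_ci95(samples):
    """Normal-approximation 95% confidence interval for a sample mean.

    Returns (mean, lower, upper) as mean +/- 1.96 * s / sqrt(n), where s is
    the sample standard deviation. Requires at least two samples.
    """
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    half = 1.96 * math.sqrt(var / n)
    return mean, mean - half, mean + half
```

Fed with the 24 daily accuracy measurements, this kind of computation produces exactly the "mean with a 95% range" figure quoted in the abstract.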