SSM-Net for Plants Disease Identification in Low Data Regime
Plant disease detection is an essential factor in increasing agricultural
production. Due to the difficulty of disease detection, farmers spray various
pesticides on their crops to protect them, causing great harm to crop growth
and food standards. Deep learning can offer critical aid in detecting such
diseases. However, it is highly inconvenient to collect a large volume of data
on all forms of the diseases afflicting a specific plant species. In this
paper, we propose SSM-Net, a new metrics-based few-shot learning architecture
consisting of stacked siamese and matching network components, to address
the problem of disease detection in low-data regimes. We evaluate our
approach on two datasets: a mini-leaves diseases dataset and a sugarcane
diseases dataset. We show that SSM-Net can achieve better decision
boundaries, with an accuracy of 92.7% on the mini-leaves dataset and
94.3% on the sugarcane dataset. The accuracy increased by ~10% and ~5%
respectively, compared to the widely used VGG16 transfer learning approach.
Furthermore, we attained an F1 score of 0.90 using SSM-Net on the sugarcane
dataset and 0.91 on the mini-leaves dataset. Our code implementation is
available on GitHub: https://github.com/shruti-jadon/PlantsDiseaseDetection
Comment: 5 pages, 7 figures
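The abstract names the building blocks but not their mechanics. As a rough, hypothetical illustration of the metric-learning idea behind such architectures (not the paper's actual SSM-Net), a siamese branch with shared weights can score a query image against a small support set and assign the nearest label; all dimensions, weights, and class names below are invented for the sketch:

```python
import numpy as np

def embed(x, W):
    # Toy embedding branch: one linear layer with a tanh non-linearity.
    # Both siamese branches share the same weights W.
    return np.tanh(x @ W)

def siamese_distance(x1, x2, W):
    # L1 distance between the two shared-weight embeddings.
    return np.abs(embed(x1, W) - embed(x2, W)).sum()

def classify_one_shot(query, support, labels, W):
    # Matching-network-style decision: label of the nearest support example.
    dists = [siamese_distance(query, s, W) for s in support]
    return labels[int(np.argmin(dists))]

# Hypothetical setup: 8-dim inputs, 4-dim embeddings, 3 disease classes.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
support = [rng.standard_normal(8) for _ in range(3)]
labels = ["rust", "blight", "healthy"]
query = support[1] + 0.01 * rng.standard_normal(8)  # near the second class
print(classify_one_shot(query, support, labels, W))
```

In a real few-shot pipeline the embedding would be a trained convolutional network; the nearest-support decision rule stays the same.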
A Comprehensive Survey of Regression Based Loss Functions for Time Series Forecasting
Time Series Forecasting has been an active area of research due to its many
applications, ranging from network usage prediction and resource allocation
to anomaly detection and predictive maintenance. Numerous publications in the
last five years have proposed diverse sets of objective loss functions
to address cases such as biased data, long-term forecasting, multicollinear
features, etc. In this paper, we have summarized 14 well-known regression loss
functions commonly used for time series forecasting and listed the
circumstances in which their application can aid faster and better model
convergence. We have also demonstrated that certain categories of loss
functions perform well across all datasets and can serve as a baseline objective
function in circumstances where the distribution of the data is unknown. Our
code is available at GitHub:
https://github.com/aryan-jadon/Regression-Loss-Functions-in-Time-Series-Forecasting-Tensorflow
Comment: 13 pages, 23 figures
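Three of the best-known regression losses in the surveyed families, MSE, MAE, and Huber, can be written in a few lines; the sample targets and predictions below are illustrative only, not data from the paper:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: penalizes large errors quadratically.
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    # Mean absolute error: linear penalty, more robust to outliers.
    return np.mean(np.abs(y_true - y_pred))

def huber(y_true, y_pred, delta=1.0):
    # Huber loss: quadratic for errors within delta, linear beyond it --
    # a compromise between MSE and MAE.
    err = y_true - y_pred
    small = np.abs(err) <= delta
    return np.mean(np.where(small,
                            0.5 * err ** 2,
                            delta * (np.abs(err) - 0.5 * delta)))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 5.0])
print(mse(y_true, y_pred), mae(y_true, y_pred), huber(y_true, y_pred))
```

With the single outlier error of 2.0, the Huber value sits between MAE and MSE, which is exactly the behavior that makes it a common default when the error distribution is unknown.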
Unsupervised video summarization framework using keyframe extraction and video skimming
Video is one of the richest sources of information, and the consumption of
online and offline videos has reached an unprecedented level in the last few
years. A fundamental challenge of extracting information from videos is that
a viewer has to go through the complete video to understand the context, as
opposed to an image where the viewer can extract information from a single
frame. Beyond context understanding, it is almost impossible to create a
universal summarized video for everyone, as each viewer has their own bias
toward keyframes. For example, in a soccer game, a coach might consider the
frames that carry information on player placement and technique, whereas a
person with less knowledge of the game will focus more on frames showing
goals and the scoreboard. Therefore, tackling video summarization through a
supervised learning path would require extensive personalized labeling of
data. In this paper, we attempt to solve video
summarization through unsupervised learning by employing traditional
vision-based algorithmic methodologies for accurate feature extraction from
video frames. We have also proposed a deep learning-based feature extraction
pipeline followed by multiple clustering methods to find an effective way of
summarizing a video via interesting keyframe extraction. We have compared the
performance of these approaches on the SumMe dataset and showcased that deep
learning-based feature extraction performs better on dynamic-viewpoint videos.
Comment: 5 pages, 3 figures. Technical Report
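The paper's pipeline is not reproduced here, but the underlying cluster-then-pick idea can be sketched with a toy k-means over per-frame feature vectors: cluster the frames, then keep the frame nearest each cluster center as a keyframe. The random features below stand in for real CNN or hand-crafted descriptors:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    # Minimal k-means over per-frame feature vectors.
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest cluster center.
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = features[assign == j].mean(axis=0)
    return centers, assign

def keyframes(features, k):
    # One keyframe per cluster: the frame closest to its center.
    centers, assign = kmeans(features, k)
    picks = []
    for j in range(k):
        members = np.where(assign == j)[0]
        if len(members) == 0:
            continue  # skip clusters that ended up empty
        d = np.linalg.norm(features[members] - centers[j], axis=1)
        picks.append(int(members[d.argmin()]))
    return sorted(picks)

# Toy "video": two visually distinct scenes of 5 frames each, with noise.
rng = np.random.default_rng(1)
features = np.concatenate([np.zeros((5, 3)), 10 * np.ones((5, 3))])
features = features + 0.1 * rng.standard_normal((10, 3))
print(keyframes(features, 2))
```

Swapping the random vectors for per-frame embeddings from a pretrained network turns this sketch into the deep-feature variant the abstract favors for dynamic-viewpoint videos.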
Improving Siamese Networks for One Shot Learning using Kernel Based Activation functions
The lack of large amounts of training data has always been a constraining
factor in solving many machine learning problems, making One-Shot Learning
one of the field's most intriguing ideas. It aims to learn
information about object categories from one, or only a few training examples.
In deep learning, this is usually accomplished by a proper objective
function, i.e., the loss function, and by embedding extraction, i.e., the
architecture. In this paper, we discuss metrics-based deep learning
architectures for one-shot learning, such as Siamese neural networks, and
present a method to improve their accuracy using Kafnets (kernel-based
non-parametric activation functions for neural networks) by learning proper
embeddings in relatively fewer epochs. Using kernel activation
functions, we are able to achieve strong results that exceed those of
ReLU-based deep learning models in terms of embedding structure, loss
convergence, and accuracy.
Comment: 15 pages, 8 figures
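A kernel activation function of the Kafnet family models each neuron's activation as a learnable mixture of Gaussian kernels over a fixed dictionary of points, instead of a fixed shape like ReLU. A minimal sketch follows; the dictionary size and the mixing coefficients `alpha` are hypothetical illustrative choices, not values from the paper:

```python
import numpy as np

def kaf(x, alpha, dictionary, gamma=1.0):
    # KAF: the activation is a mixture of Gaussian kernels centered on a
    # fixed dictionary; alpha holds the learnable mixing coefficients.
    kernels = np.exp(-gamma * (x[..., None] - dictionary) ** 2)
    return kernels @ alpha

# Fixed dictionary of 9 evenly spaced points on [-2, 2]. This alpha is a
# hypothetical initialization that makes the KAF roughly ReLU-shaped.
dictionary = np.linspace(-2.0, 2.0, 9)
alpha = np.maximum(0.0, dictionary)
x = np.array([-1.0, 0.0, 1.0])
print(kaf(x, alpha, dictionary))
```

Because `alpha` is just a coefficient vector, it can be updated by backpropagation like any other weight, which is what lets the network learn the shape of its own activations rather than committing to ReLU.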
Schema Matching using Machine Learning
Schema Matching is a method of finding attributes that are either similar to
each other linguistically or represent the same information. In this project,
we take a hybrid approach to solving this problem by making use of both the
provided data and the schema name to perform one-to-one schema matching, and
we introduce a global dictionary to achieve one-to-many schema matching. We
experiment with two methods of one-to-one matching and compare them based on
their F-scores, precision, and recall. We also compare our method with
previously suggested ones and highlight the differences between them.
Comment: 7 pages, 2 figures, 2 tables
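The project's hybrid matcher is not shown here, but the linguistic half of one-to-one matching can be sketched with a generic string-similarity ratio and a greedy assignment. The threshold, function names, and sample schemas below are our own illustrative choices:

```python
import difflib

def name_similarity(a, b):
    # Linguistic similarity between attribute names, using the standard
    # library's SequenceMatcher ratio as a simple stand-in for more
    # elaborate string metrics.
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def one_to_one_match(schema_a, schema_b, threshold=0.5):
    # Greedy one-to-one matching: repeatedly pair the most similar
    # remaining attribute names whose score clears the threshold.
    pairs = sorted(((name_similarity(a, b), a, b)
                    for a in schema_a for b in schema_b), reverse=True)
    matched, used_a, used_b = {}, set(), set()
    for score, a, b in pairs:
        if score >= threshold and a not in used_a and b not in used_b:
            matched[a] = b
            used_a.add(a)
            used_b.add(b)
    return matched

print(one_to_one_match(["cust_name", "phone_no", "addr"],
                       ["customer_name", "address", "telephone"]))
```

A data-driven matcher would add a second similarity signal computed from the column values themselves and combine the two scores, which is the essence of the hybrid approach the abstract describes.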