A Review and Performance Analysis of Image Edge Detection Algorithms
Edge detection is a fundamental operation of digital image processing, applied in many fields such as industry, medicine, satellite imaging, and agriculture. Driven by this growth in edge detection applications, many researchers and scholars have developed edge detection algorithms using various techniques. This paper reviews which novel techniques are used for edge detection, which operators are most commonly employed, and how accurately their results compare with existing methods. It also discusses the performance analysis of the most commonly used edge detection operators, such as Canny, Laplacian of Gaussian (LoG), Sobel, Prewitt, and Roberts. Finally, the accuracy, PSNR (Peak Signal-to-Noise Ratio), and execution time are tabulated, and the most precise and fastest edge detection method is identified.
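The PSNR figure of merit tabulated in the review can be computed directly from the mean squared error between a reference image and a detector's output; a minimal sketch (function name and flat-list image representation are illustrative, not from the paper):

```python
import math

def psnr(reference, processed, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-sized images,
    represented here as flat lists of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, processed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR means the processed image deviates less from the reference; for 8-bit images `max_val` is 255.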
A Model for Collective Dynamics in Ant Raids
Ant raiding, the process of identifying and returning food to the nest or bivouac, is a fascinating example of collective motion in nature. During such raids ants lay pheromones to form trails for others to find a food source. In this work a coupled PDE/ODE model is introduced to study ant dynamics and pheromone concentration. The key idea is the introduction of two forms of ant dynamics: foraging and returning, each governed by different environmental and social cues. The model accounts for all aspects of the raiding cycle, including local collisional interactions, the laying of pheromone along a trail, and the transition from one class of ants to another. Through analysis of an order parameter measuring the orientational order in the system, the model shows self-organization into a collective state consisting of lanes of ants moving in opposite directions, as well as the transition back to the individual state once the food source is depleted, matching prior experimental results. This indicates that in the absence of direct communication ants naturally form an efficient method for transporting food to the nest/bivouac. The model exhibits a continuous kinetic phase transition in the order parameter as a function of certain system parameters. The associated critical exponents are found, shedding light on the behavior of the system near the transition. (Preprint version, 30 pp., 18 figures; complete version with supplementary movies to appear in Journal of Mathematical Biology, Springer.)
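The abstract does not write out its order parameter; one standard choice for measuring orientational order when groups move in opposite directions along shared lanes is the nematic order parameter, sketched here under that assumption:

```python
import cmath

def nematic_order(headings):
    """Nematic order parameter in [0, 1]: 1 when all headings lie along a
    single axis (parallel or anti-parallel, i.e. two opposing lanes),
    near 0 for uniformly spread headings."""
    z = sum(cmath.exp(2j * a) for a in headings) / len(headings)
    return abs(z)
```

Doubling the angle makes ants moving at θ and θ + π count as aligned, which matches the lane geometry described above: two counter-flowing lanes score as fully ordered.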
Automated Plant Disease Recognition using Tasmanian Devil Optimization with Deep Learning Model
Plant diseases have devastating effects on crop production, contributing to major economic losses and food scarcity. Timely and accurate recognition of plant ailments is vital to effective disease management and to preventing further spread. Plant disease classification using Deep Learning (DL) has gained significant attention recently because of its potential for accurate and efficient detection of plant diseases. DL approaches, particularly Convolutional Neural Networks (CNNs), have proven extremely effective at capturing intricate patterns and features in plant leaf images, allowing correct disease classification. In this article, a Tasmanian Devil Optimization with Deep Learning Enabled Plant Disease Recognition (TDODL-PDR) technique is proposed for effective crop management. The TDODL-PDR technique derives feature vectors using the Multi-Direction and Location Distribution of Pixels in Trend Structure (MDLDPTS) descriptor. Besides, the deep Bidirectional Long Short-Term Memory (BiLSTM) approach is exploited for plant disease recognition. Finally, the TDO method is executed to optimize the hyperparameters of the BiLSTM approach. The TDO method, inspired by the foraging behaviour of Tasmanian Devils (TDs), effectively explores the parameter space and improves the model's performance. The experimental values show that the TDODL-PDR model successfully distinguishes healthy plants from diseased ones and accurately classifies different disease types. The automated TDODL-PDR model offers a practical and reliable solution for early disease detection in crops, enabling farmers to take prompt action to mitigate the spread and minimize crop losses.
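The abstract does not spell out the TDO update equations, so as a hedged illustration of the general idea only (population-based search over BiLSTM hyperparameters), here is a plain random-search loop with a synthetic stand-in objective; the search space, the objective, and all names are assumptions for illustration, not the paper's method:

```python
import random

random.seed(0)

# Hypothetical BiLSTM hyperparameter space (illustrative, not from the paper).
SPACE = {
    "hidden_units": [32, 64, 128, 256],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "dropout": [0.1, 0.3, 0.5],
}

def sample_config():
    """Draw one candidate configuration uniformly from the space."""
    return {name: random.choice(values) for name, values in SPACE.items()}

def evaluate(cfg):
    """Stand-in score: in practice this would be the validation accuracy
    of a BiLSTM trained with cfg; here a synthetic surrogate."""
    return -abs(cfg["hidden_units"] - 128) / 128.0 - cfg["dropout"]

def optimize(iterations=60):
    """Keep the best-scoring configuration seen so far."""
    best_cfg, best_score = None, float("-inf")
    for _ in range(iterations):
        cfg = sample_config()
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = optimize()
```

A metaheuristic such as TDO replaces the uniform sampling with guided moves (e.g. toward better individuals in the population), but the outer evaluate-and-keep-best loop is the same.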
KFREAIN: Design of A Kernel-Level Forensic Layer for Improving Real-Time Evidence Analysis Performance in IoT Networks
An exponential increase in the number of attacks in IoT networks makes it essential to formulate attack-level mitigation strategies. This paper proposes the design of a scalable kernel-level forensic layer that improves real-time evidence analysis performance and assists in efficient pattern analysis of the collected data samples. It has an inbuilt Temporal Blockchain Cache (TBC), which is refreshed after analysis of every set of evidence. The model uses a multidomain feature extraction engine that combines lightweight Fourier, Wavelet, Convolutional, Gabor, and Cosine feature sets, which are selected by a stochastic Bacterial Foraging Optimizer (BFO) to identify high-variance features. The selected features are processed by an ensemble learning (EL) classifier that uses low-complexity classifiers, reducing the energy consumption during analysis by 8.3% when compared with application-level forensic models. The model also showed 3.5% higher accuracy, 4.9% higher precision, and 4.3% higher recall of attack-event identification when compared with standard forensic techniques. Due to kernel-level integration, the model is also able to reduce the delay needed for forensic analysis on different network types by 9.5%, making it useful for real-time and heterogeneous network scenarios.
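As a sketch of two of the lightweight feature sets named above (Fourier and Cosine) plus high-variance selection: the exact engine is not specified in the abstract, so the 1-D signal shapes and the variance-ranking stand-in for BFO are assumptions:

```python
import cmath
import math

def fourier_features(signal, k=4):
    """Magnitudes of the first k DFT coefficients: cheap spectral features."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n))) / n
            for f in range(k)]

def cosine_features(signal, k=4):
    """First k DCT-II coefficients: compact energy-based features."""
    n = len(signal)
    return [sum(signal[t] * math.cos(math.pi * f * (2 * t + 1) / (2 * n))
                for t in range(n))
            for f in range(k)]

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def select_high_variance(feature_matrix, keep=4):
    """Stand-in for the BFO step: keep the feature columns whose values
    vary most across samples (the high-variance criterion in the text)."""
    columns = list(zip(*feature_matrix))
    ranked = sorted(range(len(columns)),
                    key=lambda i: variance(columns[i]), reverse=True)
    return sorted(ranked[:keep])
```

Concatenating the per-domain features into one row per sample and then pruning low-variance columns keeps only descriptors that actually discriminate between samples.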
A survey of swarm intelligence for dynamic optimization: algorithms and applications
Swarm intelligence (SI) algorithms, including ant colony optimization, particle swarm optimization, bee-inspired algorithms, bacterial foraging optimization, firefly algorithms, fish swarm optimization and many more, have been proven to be good methods to address difficult optimization problems under stationary environments. Most SI algorithms have been developed to address stationary optimization problems and hence they can converge on the (near-) optimum solution efficiently. However, many real-world problems have a dynamic environment that changes over time. For such dynamic optimization problems (DOPs), it is difficult for a conventional SI algorithm to track the changing optimum once the algorithm has converged on a solution. In the last two decades, there has been a growing interest in addressing DOPs using SI algorithms due to their adaptation capabilities. This paper presents a broad review of SI dynamic optimization (SIDO) focused on several classes of problems, such as discrete, continuous, constrained, multi-objective and classification problems, and real-world applications. In addition, this paper focuses on the enhancement strategies integrated in SI algorithms to address dynamic changes, as well as the performance measurements and benchmark generators used in SIDO. Finally, some considerations about future directions in the subject are given.
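Particle swarm optimization, one of the SI algorithms surveyed, can be sketched in a few lines. This is a minimal static variant (parameter values are illustrative defaults); the dynamic-optimization extensions the survey covers add change detection and diversity maintenance on top of this loop:

```python
import random

random.seed(1)

def pso_minimize(objective, dims=2, n_particles=20, steps=60,
                 w=0.5, c1=1.5, c2=1.5):
    """Minimal PSO: each particle is pulled toward its own best position
    and the swarm's global best; returns the best position and value found."""
    pos = [[random.uniform(-5.0, 5.0) for _ in range(dims)]
           for _ in range(n_particles)]
    vel = [[0.0] * dims for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    for _ in range(steps):
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        for i in range(n_particles):
            for d in range(dims):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (pbest[g][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    return pbest[g], pbest_val[g]
```

On a DOP, the same loop is typically augmented by re-evaluating the stored personal bests when a change is detected, so stale memories do not anchor the swarm to the old optimum.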
Performance Analysis of Different Optimization Algorithms for Multi-Class Object Detection
Object recognition is a significant approach employed for recognizing suitable objects in an image. Various improvements, particularly in computer vision, make it possible to tackle highly difficult tasks with the assistance of local feature detection methodologies. Detecting multi-class objects is quite challenging, and many existing studies have worked to enhance the overall accuracy, but they suffer from certain limitations such as high network loss, degraded training ability, improper consideration of features, and poor convergence. The proposed research introduces a hybrid convolutional neural network (H-CNN) approach to overcome these drawbacks. The collected input images are first pre-processed through Gaussian filtering to eradicate noise and enhance image quality. Following image pre-processing, the objects present in the images are localized using Grid Guided Localization (GGL). Effective features are extracted from the localized objects using the AlexNet model. Different objects are classified by replacing the concluding softmax layer of AlexNet with a Support Vector Regression (SVR) model. The losses present in the network model are optimized using the Improved Grey Wolf (IGW) optimization procedure. The performance of the proposed model is analyzed using Python. Various datasets are employed, including MIT-67, PASCAL VOC2010, Microsoft (MS)-COCO, and MSRC. The performance is analyzed by varying the loss optimization algorithms, including improved Particle Swarm Optimization (IPSO), improved Genetic Algorithm (IGA), improved Dragonfly Algorithm (IDFA), improved Simulated Annealing Algorithm (ISAA), and improved Bacterial Foraging Algorithm (IBFA), to choose the best algorithm. The proposed method attains accuracies of 95.04% on PASCAL VOC2010, 96.02% on MIT-67, 97.37% on MSRC, and 94.53% on MS COCO.
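The Gaussian pre-filtering step in pipelines like the one above can be sketched with a separable kernel applied to rows and then columns (the radius and sigma are illustrative defaults, not values from the paper):

```python
import math

def gaussian_kernel(radius=1, sigma=1.0):
    """Normalized 1-D Gaussian kernel; applied twice (rows then columns)
    it performs the separable 2-D Gaussian blur used for denoising."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve1d(row, kernel):
    """Convolve one row with the kernel, clamping indices at the borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(row) - 1)
            acc += w * row[idx]
        out.append(acc)
    return out

def gaussian_blur(image, radius=1, sigma=1.0):
    """Blur a 2-D image (list of rows) by filtering rows, then columns."""
    k = gaussian_kernel(radius, sigma)
    rows = [convolve1d(r, k) for r in image]
    cols = [convolve1d(list(c), k) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

Because the kernel is normalized, flat regions are left unchanged while pixel-level noise is averaged away, which is the quality-enhancement effect the pipeline relies on before localization.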
Reliable and Automatic Recognition of Leaf Disease Detection using Optimal Monarch Ant Lion Recurrent Learning
Around 7.5 billion people worldwide depend on agricultural production for their livelihood, making it an essential component in keeping life alive on the planet. Negative impacts are being caused on the agroecosystem by the rapid increase in the use of chemicals, including fungicides, bactericides, and insecticides, to combat plant diseases. Both the quantity and the quality of the output are affected when diseases are prevalent in crops on a large scale. Plant diseases pose a significant obstacle for the agricultural industry, negatively impacting plant growth and crop output. The problem of early detection and diagnosis of diseases can be solved for the benefit of the farming community by employing a method that is both quick and reliable on a regular basis. This article proposes a model for the detection and diagnosis of leaf infection called the Automatic Optimal Monarch AntLion Recurrent Learning (MALRL) model, which attains greater authenticity. The proposed MALRL technique incorporates a hybrid of the Monarch Butterfly Optimization algorithm and the AntLion Optimization algorithm. In the leaf image, it is used to determine acceptable aspects of affected regions. After that, the optimal characteristics are used to aid a Long Short-Term Memory (LSTM) classifier to speed up leaf disease categorization. The experimental findings are analyzed and compared with those of ANN, CNN, and DNN. The proposed method achieved a high level of accuracy when detecting leaf disease in images of healthy and diseased leaves compared with other conventional methods.