2,246 research outputs found
Visual Eureka: Navigating Images Through Textual Queries
Progress in text extraction technology has been uneven: tools such as Google Lens proficiently extract text from images, yet software for the reciprocal task, searching images by their textual content, remains scarce. Our conceptual framework addresses this gap with a software solution engineered for image retrieval through text search. The core of the approach is the systematic use of metadata as a repository for textual data linked to images: text extraction algorithms, including robust optical character recognition (OCR) methods, decipher relevant textual information and store it in this metadata. This indexing supports an efficient search mechanism that lets users query images by specific text-related parameters. The user interface integrates these functions into an intuitive platform for entering text queries and retrieving matching images with high precision. Scalability and performance optimization measures keep the system adaptable to growing datasets, promising both a redefined image-search utility and greater user convenience and operational efficiency in visual data retrieval.
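The metadata-indexing idea described above can be sketched in a few lines. The `extract_text` function here is a stand-in stubbed with placeholder data; a real system would call an OCR engine such as Tesseract at that point.

```python
# Sketch: index OCR-extracted text per image, then search by keyword.
from collections import defaultdict

def extract_text(image_path):
    # Placeholder OCR results; a real system would run OCR on the file.
    fake_ocr = {
        "receipt.png": "total 42.00 grocery store",
        "poster.jpg": "concert tickets on sale friday",
    }
    return fake_ocr.get(image_path, "")

def build_index(image_paths):
    """Map each word found in an image's text to the images containing it."""
    index = defaultdict(set)
    for path in image_paths:
        for word in extract_text(path).lower().split():
            index[word].add(path)
    return index

def search(index, query):
    """Return images whose extracted text contains every query word."""
    words = query.lower().split()
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())
    return results

index = build_index(["receipt.png", "poster.jpg"])
print(search(index, "concert friday"))  # → {'poster.jpg'}
```

In practice the index would be persisted alongside the image metadata rather than rebuilt per query.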
Corn Leaf Disease Prediction Using Deep Learning
Corn is a widely cultivated crop, a cornerstone of food production and of industrial applications such as biofuels, and it plays a crucial role in the global economy. This study applies deep transfer learning to classify major corn diseases from leaf images, aiming to improve disease-management strategies for greater agricultural productivity and sustainability. The customized DenseNet-201 model achieved 95% prediction accuracy on an unseen dataset, with data augmentation raising accuracy from 91% to 95%; increasing the diversity of the training data in this way leads to better generalization. Experiments with four optimizers, namely Adagrad, SGD, AdaDelta, and Adam, yielded the same 95% accuracy, with AdaDelta and Adagrad among the best performers, showing that optimizer selection influences the accuracy achieved. The substantial gain from 91% to 95% with added data is evidence of the effectiveness of the proposed method in improving model performance and generalization. The optimized deep learning model's 95% accuracy in detecting and classifying corn leaf diseases can benefit farmers in disease identification.
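The augmentation step described above, generating transformed copies of each labelled image to diversify training data, can be sketched with plain-Python geometric transforms. This is illustrative only: images are plain 2D lists here, whereas real pipelines apply library transforms to tensors.

```python
# Sketch: simple geometric augmentations that multiply a labelled
# image dataset, as used to diversify training data.

def hflip(img):
    """Mirror each row (horizontal flip)."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(dataset):
    """Return the originals plus flipped and rotated copies."""
    out = []
    for img, label in dataset:
        out.append((img, label))
        out.append((hflip(img), label))
        out.append((rot90(img), label))
    return out

sample = [([[1, 2], [3, 4]], "blight")]
augmented = augment(sample)
print(len(augmented))  # → 3
```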
An Optimized Machine Learning and Deep Learning Framework for Facial and Masked Facial Recognition
In this study, we aimed to find an optimized approach to improving facial and masked facial recognition using machine learning and deep learning techniques. Prior studies only used a single machine learning model for classification and did not report optimal parameter values. In contrast, we utilized a grid search with hyperparameter tuning and nested cross-validation to achieve better results during the verification phase. We performed experiments on a large dataset of facial images with and without masks. Our findings showed that the SVM model with hyperparameter tuning had the highest accuracy compared to other models, achieving a recognition accuracy of 0.99912. The precision values for recognition without masks and with masks were 0.99925 and 0.98417, respectively. We tested our approach in real-life scenarios and found that it accurately identified masked individuals through facial recognition. Furthermore, our study stands out from others as it incorporates hyperparameter tuning and nested cross-validation during the verification phase to enhance the model's performance, generalization, and robustness while optimizing data utilization. Our optimized approach has potential implications for improving security systems in various domains, including public safety and healthcare. DOI: 10.28991/ESJ-2023-07-04-010
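Grid search wrapped in nested cross-validation, as described above, can be sketched as follows. The toy one-dimensional threshold "classifier", the folds, and the grid are invented for illustration; they stand in for the SVM and its real hyperparameters.

```python
# Sketch: grid search inside nested cross-validation. The inner loop
# picks the hyperparameter; the outer loop estimates generalization,
# so selection never sees the outer test fold.

def accuracy(threshold, data):
    """Fraction of (x, label) pairs the threshold rule gets right."""
    return sum((x > threshold) == y for x, y in data) / len(data)

def folds(data, k):
    """Split data into k interleaved folds."""
    return [data[i::k] for i in range(k)]

def nested_cv(data, grid, outer_k=3, inner_k=2):
    outer = folds(data, outer_k)
    scores = []
    for i, test in enumerate(outer):
        train = [x for j, f in enumerate(outer) if j != i for x in f]
        # Inner loop: choose the threshold by cross-validation on train.
        best = max(grid, key=lambda t: sum(
            accuracy(t, inner_fold)
            for inner_fold in folds(train, inner_k)) / inner_k)
        scores.append(accuracy(best, test))
    return sum(scores) / len(scores)

data = [(x, x > 5) for x in range(12)]
print(nested_cv(data, grid=[3, 5, 7]))  # → 1.0
```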
Optimizing Alzheimer's disease prediction using the nomadic people algorithm
The problem with using microarray technology to detect diseases is that not every measured gene is analytically necessary; non-essential gene data adds computational load to the detection method. The purpose of this study is therefore to reduce the size of the high-dimensional data by determining the genes most critical to Alzheimer's disease progression, and to predict patients using the subset of genes implicated in the disease. This paper combines feature selection by information gain (IG) with a novel metaheuristic optimization technique, a swarm algorithm derived from the behaviour of nomadic peoples (NPO). The method models these peoples' movements in search of new food sources and is essentially multi-swarm: several clans each seek the best foraging opportunities. After the informative genes are selected, prediction is carried out with a support vector machine (SVM), a model frequently used in a variety of prediction tasks. Prediction accuracy was used to evaluate the proposed system's performance; the results indicate that the NPO algorithm with the SVM model returns high accuracy on the gene subset obtained from the IG and NPO methods.
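The information-gain filtering step above ranks each gene by how much knowing its value reduces uncertainty about the class label. A minimal sketch, with gene values binarized for brevity (real pipelines discretize expression levels properly):

```python
# Sketch: ranking genes by information gain (IG) to shrink a
# high-dimensional expression matrix before classification.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label list."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    """Entropy reduction from splitting the labels on a binary feature."""
    total = entropy(labels)
    for value in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == value]
        total -= len(subset) / len(labels) * entropy(subset)
    return total

labels = [1, 1, 0, 0]
gene_a = [1, 1, 0, 0]   # perfectly predictive of the label
gene_b = [1, 0, 1, 0]   # uninformative
print(info_gain(gene_a, labels))  # → 1.0
print(info_gain(gene_b, labels))  # → 0.0
```

Genes with the highest IG form the candidate subset that the swarm search then refines before the SVM is trained.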
Application of expert systems in project management decision aiding
The feasibility of developing an expert systems-based project management decision aid to enhance the performance of NASA project managers was assessed. The research effort included extensive literature reviews in the areas of project management, project management decision aiding, expert systems technology, and human-computer interface engineering. Literature reviews were augmented by focused interviews with NASA managers. Time estimation for project scheduling was identified as the target activity for decision augmentation, and a design was developed for an Integrated NASA System for Intelligent Time Estimation (INSITE). The proposed INSITE design was judged feasible with a low level of risk. A partial proof-of-concept experiment was performed and was successful. Specific conclusions drawn from the research and analyses are included. The INSITE concept is potentially applicable in any management sphere, commercial or government, where time estimation is required for project scheduling. As project scheduling is a nearly universal management activity, the range of possibilities is considerable. The INSITE concept also holds potential for enhancing other management tasks, especially in areas such as cost estimation, where estimation-by-analogy is already a proven method
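Estimation-by-analogy, the method the INSITE concept builds on, derives a new task's duration from the most similar past projects. The sketch below uses invented features, weights, and durations purely for illustration; a real decision aid would draw on a curated project history.

```python
# Sketch: estimation-by-analogy. A new task's duration is estimated
# by averaging the durations of the k most similar past projects.

def similarity(a, b):
    """Negative squared distance over the shared numeric features."""
    return -sum((a[k] - b[k]) ** 2 for k in a)

def estimate_duration(new_task, history, k=2):
    """Average the durations of the k most analogous past projects."""
    ranked = sorted(history,
                    key=lambda p: similarity(new_task, p["features"]),
                    reverse=True)
    nearest = ranked[:k]
    return sum(p["weeks"] for p in nearest) / len(nearest)

history = [
    {"features": {"size": 3, "novelty": 1}, "weeks": 10},
    {"features": {"size": 8, "novelty": 4}, "weeks": 30},
    {"features": {"size": 4, "novelty": 2}, "weeks": 14},
]
print(estimate_duration({"size": 4, "novelty": 1}, history))  # → 12.0
```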
Establishing A Systematic Outline for Operational Excellence Model and Proposing a Comprehensive Model
Manufacturing organizations adopt operational excellence (OPEX) strategies to meet performance targets. Lean Manufacturing (LM) is widely used in OPEX and is supported by many industry case studies, but it faces two major challenges. First, there is no standard framework for implementing LM. Second, existing frameworks do not explicitly address employee engagement and quality of life in the continuous improvement process. This has led to low reported levels of sustainability for LM. People-Centric Operational Excellence (PCOM) has been presented as a response to these challenges. PCOM comprises four modules: problem definition, design of metrics, design of reliability-based solutions, and alignment of solutions with employee quality of life. However, PCOM lacks an implementation template, and case studies for the framework are not well documented in the literature. This paper compares PCOM with LM using literature-driven criteria, develops an implementation-ready template for PCOM, and documents PCOM case studies in the manufacturing sector. It also presents the conceptual basis for a new model, Comprehensive PCOM (CPCOM), that combines the strengths of LM and PCOM, and provides an initial roadmap for its implementation.
Enhancing Rice Plant Disease Recognition and Classification Using Modified Sand Cat Swarm Optimization with Deep Learning
Rice plant diseases pose a critical challenge to agricultural productivity and food safety, and their timely, accurate recognition and classification are vital for effective disease management. Deep Learning (DL), a subfield of artificial intelligence that trains multi-layer neural networks to learn complex patterns and representations from data, has emerged as a powerful approach to automated disease diagnosis in rice crops: DL methods can effectively extract meaningful features from images and classify them into different disease categories. This study therefore introduces a new Modified Sand Cat Swarm Optimization with Deep Learning based Rice Plant Disease Detection and Classification (MSCSO-DLRPDC) technique, whose main objective is the automated recognition and classification of rice plant diseases. The methodology applies two levels of pre-processing, median-filter-based noise removal and CLAHE-based contrast enhancement, then uses a Multi-Layer ShuffleNet with Depthwise Separable Convolution (MLS-DSC) model for feature extraction and a Multi-Head Attention-based Long Short-Term Memory (MHA-LSTM) model for disease detection. Finally, the MSCSO approach, inspired by the collective behaviour of sand cats and augmented with a mutation operator, tunes the parameters of the MHA-LSTM network. A broad set of simulations demonstrates that the MSCSO-DLRPDC method outperforms other methods. The proposed approach can assist farmers and agricultural stakeholders in managing rice plant diseases effectively, contributing to improved crop yield and sustainable agricultural practices.
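The first pre-processing step in the pipeline above, median-filter noise removal, can be sketched in plain Python on a toy grey-level grid (real systems operate on full images; border pixels are left unchanged here for simplicity):

```python
# Sketch: 3x3 median-filter denoising on a 2D list of grey levels.
from statistics import median

def median_filter(img):
    """Replace each interior pixel with the median of its 3x3 window."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out

noisy = [
    [10, 10, 10],
    [10, 255, 10],   # single salt-noise pixel
    [10, 10, 10],
]
print(median_filter(noisy)[1][1])  # → 10
```

The isolated bright pixel is removed because the median of its neighbourhood is the background value, which is why median filtering suits salt-and-pepper noise in leaf images.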
Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence
The evolution of cybersecurity has spurred the emergence of autonomous threat hunting as a pivotal paradigm in AI-driven threat intelligence. This review navigates the landscape of autonomous threat hunting, exploring its significance and its role in fortifying cyber defense mechanisms. Examining the combination of artificial intelligence (AI) with traditional threat intelligence methodologies, the paper delineates the necessity and evolution of autonomous approaches to combating contemporary cyber threats. Through a comprehensive exploration of foundational AI-driven threat intelligence, the review highlights the transformative influence of AI and machine learning on conventional threat intelligence practices. It elucidates the conceptual framework underpinning autonomous threat hunting, its components, and the integration of AI algorithms within threat hunting processes. Discussions of challenges, including scalability, interpretability, and ethical considerations in AI-driven models, enrich the discourse. Case studies and evaluations showcase real-world implementations, underscoring success stories and lessons learned by organizations adopting AI-driven threat intelligence. In conclusion, the review consolidates key insights, emphasizing the substantial implications of autonomous threat hunting for the future of cybersecurity and the importance of continued research and collaborative effort in harnessing AI-driven approaches to fortify cyber defenses against evolving threats.
Enhancing YouTube Spam Detection
This culminating experience project investigated various methods for enhancing spam detection on YouTube, a prevalent issue impacting user experience and platform integrity. The research questions addressed were: Q1) How do different spam detection methods compare regarding robustness, efficiency, and accuracy? Q2) What role do deep learning approaches like RNNs and CNNs play in improving spam comment identification? Q3) What are the unique benefits of using deep learning models for spam comment identification on YouTube? Q4) How can machine learning models be optimized for real-time spam detection on YouTube?
The study produced findings that addressed each research question. For Q1, while algorithms such as Naïve Bayes and Logistic Regression offered precision in identifying spam, these models proved ineffectual at adapting to new forms of spam and to the constant evolution of spam techniques; deep learning algorithms such as CNNs and RNNs offered higher accuracy through their robustness and their ability to extract features from text data independently. The results for Q2 indicate that RNNs and CNNs are critical to transforming spam detection, addressing semantic meaning and temporal relationships in comments and surpassing traditional methods. Concerning Q3, deep learning models proved the most accurate, scalable, and resistant to false negatives when identifying spam comments on videos hosted on YouTube, which helps regain users' trust and enhance the platform's security as traffic continues to grow. Q4 focused on adapting machine learning models for real-time processing, using methods such as model pruning and distribution.
The findings were as follows. Q1 found that although conventional approaches efficiently achieve accurate results, deep learning models are far more effective at handling changes in spam strategies. Q2 showed that RNNs and CNNs contribute immensely to discovering spam on social media platforms through their strength in NLP and pattern recognition. Q3 established that the accuracy, scalability, and adaptability of deep learning models, including CNNs and RNNs, make them beneficial for identifying spam on YouTube because they cope with ever-evolving spam tactics. Q4 showed that fine-tuning machine learning models is imperative for scaling these approaches, deploying advanced methodologies for real-time spam detection that can handle the flood of user-generated content on YouTube.
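The traditional Naïve Bayes baseline that the deep models are compared against can be sketched as a word-count classifier with add-one smoothing. The comments and labels below are toy examples invented for illustration:

```python
# Sketch: word-count Naive Bayes for spam-comment classification.
from collections import Counter
from math import log

def train(comments):
    """comments: list of (text, label). Returns word counts and priors."""
    counts = {"spam": Counter(), "ham": Counter()}
    priors = Counter()
    for text, label in comments:
        counts[label].update(text.lower().split())
        priors[label] += 1
    return counts, priors

def classify(text, counts, priors):
    """Pick the label maximizing log prior + smoothed log likelihoods."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best, best_score = None, float("-inf")
    for label in counts:
        total = sum(counts[label].values())
        score = log(priors[label] / sum(priors.values()))
        for word in text.lower().split():
            # Add-one smoothing avoids zero probability for unseen words.
            score += log((counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

data = [
    ("free subscribers click my channel", "spam"),
    ("win free gift card now", "spam"),
    ("great video thanks for sharing", "ham"),
    ("loved the editing in this one", "ham"),
]
counts, priors = train(data)
print(classify("click for free subscribers", counts, priors))  # → spam
```

Its fixed vocabulary is exactly the weakness the findings note: new spam phrasing falls outside the counts, whereas CNNs and RNNs learn features directly from the text.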
Areas for further study include combining other advanced natural language processing methods with classifiers for better spam identification, reducing the computational cost of multi-modal learning for spam-comment detection, and exploring federated learning for real-time spam identification on platforms such as YouTube. These research directions aim to improve existing spam detection technologies in information systems so that they are efficient, effective, and highly accurate, capable of coping with newly emerging spam techniques in flexible and transparent ways.