8 research outputs found

    Effect of Agricultural Phragmites, Rice Straw, Rice Husk, and Sugarcane Bagasse Ashes on the Properties and Microstructure of High-Strength Self-Compacted Self-Curing Concrete

    Each year, billions of tons of agricultural waste are generated globally. Egypt, being an agriculturally centered nation, faces significant challenges in disposing of this waste and in coping with self-germinating plants that negatively impact agriculture. The common practice among farmers is to burn the waste, which exacerbates environmental concerns. With the global shift towards eco-friendly concrete, this study explores the utilization of agricultural waste ashes, particularly those abundant in Egypt and in numerous other countries worldwide. The investigated waste ashes are Phragmites ash (PGA), sugarcane bagasse ash (SBA), rice husk ash (RHA), and rice straw ash (RSA). The study examines the impact of partially substituting cement with varying percentages of these ashes on the fresh and hardened properties and microstructure of high-strength self-compacting self-curing concrete (HSSCSCC). The findings indicate that these ashes are potentially applicable for producing HSSCSCC, with PGA in particular showing favorable results as a new type of natural ash suitable for the concrete industry.
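
    As a rough illustration of the replacement scheme described above, the sketch below computes the cement and ash shares of the binder for a few substitution levels; the 500 kg/m3 binder content and the percentage steps are assumed values for illustration, not figures from the study.

```python
# Illustrative only: binder split for partial cement replacement by ash.
# The 500 kg/m3 binder content and the replacement levels are assumed values.
BINDER_CONTENT = 500.0  # kg of binder per m^3 of concrete (assumed)

def binder_split(replacement_pct: float) -> tuple[float, float]:
    """Return (cement_kg, ash_kg) per m^3 for a given replacement percentage."""
    ash = BINDER_CONTENT * replacement_pct / 100.0
    return BINDER_CONTENT - ash, ash

for pct in (0, 5, 10, 15, 20):  # assumed replacement levels
    cement, ash = binder_split(pct)
    print(f"{pct:>2}% ash: cement = {cement:6.1f} kg/m^3, ash = {ash:5.1f} kg/m^3")
```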

    Metaheuristics Optimization with Deep Learning Enabled Automated Image Captioning System

    Image captioning is a popular topic in the domains of computer vision and natural language processing (NLP). Recent advancements in deep learning (DL) models have improved the overall performance of image captioning approaches. This study develops a metaheuristic optimization with deep learning-enabled automated image captioning technique (MODLE-AICT). The proposed MODLE-AICT model focuses on generating effective captions for input images using two components: an encoding unit and a decoding unit. In the encoding unit, the salp swarm algorithm (SSA) is combined with a HybridNet model to generate effective fixed-length vector representations of the input image, which constitutes the novelty of the work. The decoding unit uses a bidirectional gated recurrent unit (BiGRU) model to generate descriptive sentences. The inclusion of an SSA-based hyperparameter optimizer helps attain strong performance. To inspect the performance of the MODLE-AICT model, a series of simulations were carried out, and the results were examined from several aspects. The experimental values suggest that the MODLE-AICT model outperforms recent approaches.
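
    For readers unfamiliar with the salp swarm algorithm (SSA) used in the encoding unit, the sketch below is a minimal, generic NumPy implementation of SSA minimizing a toy sphere function as a stand-in for a validation-loss objective; the population size, bounds, and iteration count are illustrative assumptions and do not reproduce the paper's encoder or hyperparameter setup.

```python
# Minimal salp swarm algorithm (SSA) sketch. The objective is a toy sphere
# function standing in for a validation-loss measure; population size, bounds,
# and iteration count are illustrative assumptions.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def ssa(objective, dim=4, pop=20, iters=100, lb=-1.0, ub=1.0, seed=0):
    rng = np.random.default_rng(seed)
    salps = rng.uniform(lb, ub, size=(pop, dim))
    fitness = np.array([objective(s) for s in salps])
    food = salps[fitness.argmin()].copy()           # best solution found so far
    food_fit = fitness.min()
    for t in range(1, iters + 1):
        c1 = 2.0 * np.exp(-(4.0 * t / iters) ** 2)  # exploration/exploitation balance
        for i in range(pop):
            if i == 0:   # leader salp moves around the food source
                c2 = rng.uniform(0.0, 1.0, dim)
                c3 = rng.uniform(0.0, 1.0, dim)
                step = c1 * ((ub - lb) * c2 + lb)
                salps[i] = np.where(c3 >= 0.5, food + step, food - step)
            else:        # follower salps track the salp ahead of them
                salps[i] = 0.5 * (salps[i] + salps[i - 1])
        salps = np.clip(salps, lb, ub)
        for i in range(pop):
            f = objective(salps[i])
            if f < food_fit:
                food, food_fit = salps[i].copy(), f
    return food, food_fit

best, best_fit = ssa(sphere)
print("best candidate:", best, "objective:", best_fit)
```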

    Optimal and Efficient Deep Learning Model for Brain Tumor Magnetic Resonance Imaging Classification and Analysis

    A brain tumor (BT) is an abnormal growth of brain cells that causes damage to the nerves and blood vessels. An accurate and early diagnosis of BT is important to prevent future complications, and precise segmentation of the BT gives physicians a basis for surgical planning and treatment. Manual detection from MRI images is laborious, and because of significant variation in tumor structure and location, namely ambiguous boundaries and irregular shapes, computerized tumor diagnosis remains a challenging task. The application of a convolutional neural network (CNN) helps radiotherapists categorize the types of BT from magnetic resonance images (MRI). This study designs an evolutionary algorithm with a deep learning-driven brain tumor MRI image classification (EADL-BTMIC) model. The presented EADL-BTMIC model aims to accurately recognize and categorize MRI images to identify BT. The EADL-BTMIC model first applies bilateral filtering (BF)-based noise removal and skull stripping as a pre-processing stage. A morphological segmentation process is then carried out to determine the affected regions in the image. Next, sooty tern optimization (STO) with the Xception model is exploited for feature extraction, and an attention-based long short-term memory (ALSTM) technique is used to classify BT into distinct classes. To demonstrate the improved performance of the EADL-BTMIC model, a series of simulations were carried out on a benchmark dataset. The experimental outcomes highlight the improvements of the EADL-BTMIC model over recent models.
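
    The pre-processing stage described above (bilateral filtering followed by morphological segmentation) can be sketched as below on a synthetic 8-bit image; the filter and kernel parameters are assumptions, skull stripping is omitted, and Otsu thresholding is used only as a simple stand-in for the paper's segmentation step.

```python
# Sketch of a bilateral-filter + morphological segmentation pipeline on a
# synthetic image. Parameters are illustrative assumptions, not the paper's.
import cv2
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "slice": a bright blob (stand-in for a lesion) plus additive noise.
img = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(img, (64, 64), 20, 200, -1)
img = cv2.add(img, rng.integers(0, 40, img.shape, dtype=np.uint8))

# 1) Edge-preserving noise removal with a bilateral filter (d=9, sigmas=75).
denoised = cv2.bilateralFilter(img, 9, 75, 75)

# 2) Rough segmentation: Otsu threshold, then morphological opening/closing
#    to clean up the binary mask.
_, mask = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

print("segmented pixels:", int(np.count_nonzero(mask)))
```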

    Swarm Intelligence with Deep Transfer Learning Driven Aerial Image Classification Model on UAV Networks

    Nowadays, unmanned aerial vehicles (UAVs) have gradually attracted the attention of many academicians and researchers. UAVs have been found useful in a variety of applications, such as disaster management, intelligent transportation systems, wildlife monitoring, and surveillance. In UAV aerial images, learning an effective image representation is central to scene classification. Previous scene classification approaches depend on feature coding models with low-level handcrafted features or unsupervised feature learning. The emergence of convolutional neural networks (CNNs) has made image classification techniques more effective, but due to the limited resources on UAVs, it can be difficult to fine-tune hyperparameters and balance the trade-off between computational complexity and classification results. This article focuses on the design of a swarm intelligence with deep transfer learning driven aerial image classification (SIDTLD-AIC) model for UAV networks. The presented SIDTLD-AIC model involves the identification and classification of images into distinct classes. To accomplish this, the SIDTLD-AIC model uses a feature extraction module based on the RetinaNet model, in which the hyperparameter optimization process is performed by the salp swarm algorithm (SSA). In addition, a cascaded long short-term memory (CLSTM) model is used to classify the aerial images, and the seeker optimization algorithm (SOA) is applied as a hyperparameter optimizer of the CLSTM model, resulting in enhanced classification accuracy. To confirm the performance of the SIDTLD-AIC model, a wide range of simulations were implemented and the outcomes investigated from many aspects. The comparative study reports the better performance of the SIDTLD-AIC model over recent approaches.
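
    To make the CLSTM classification stage more concrete, the sketch below shows a minimal cascaded (two-stage) LSTM classifier over pre-extracted image features in PyTorch; the RetinaNet backbone is replaced by random feature tensors, and all layer sizes and class counts are illustrative assumptions rather than the paper's configuration.

```python
# Minimal cascaded LSTM classifier over pre-extracted image features.
# The feature extractor (RetinaNet in the paper) is replaced by random
# tensors; layer sizes and class counts are illustrative assumptions.
import torch
import torch.nn as nn

class CascadedLSTMClassifier(nn.Module):
    def __init__(self, feat_dim=256, hidden=128, num_classes=4):
        super().__init__()
        self.lstm1 = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.lstm2 = nn.LSTM(hidden, hidden, batch_first=True)  # second stage of the cascade
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, feats):            # feats: (batch, seq, feat_dim)
        out, _ = self.lstm1(feats)
        out, _ = self.lstm2(out)
        return self.head(out[:, -1, :])  # classify from the last time step

# Stand-in for backbone features of 8 images, each a sequence of 10 region vectors.
feats = torch.randn(8, 10, 256)
logits = CascadedLSTMClassifier()(feats)
print(logits.shape)  # torch.Size([8, 4])
```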

    DNA fingerprinting, biological and chemical investigation of certain Yucca species

    Yucca aloifolia, Y. aloifolia variegata, Y. elephantipes and Y. filamentosa were investigated. DNA sequencing was performed for the four plants, and a genomic DNA fingerprint was obtained and provided. The cytotoxic activities against four human cancer cell lines were investigated; the ethanolic extract of Y. aloifolia variegata leaves showed the strongest activity, especially against the liver cancer cell line HepG-2 and the breast cancer cell line MCF-7. In vivo assessment in rats also revealed the hepatoprotective potential of the ethanolic extracts of the four plants against CCl4-induced liver damage. Qualitative and quantitative analysis of the flavonoid and phenolic content of the promising species was performed using HPLC. The analysis identified and quantified 18 flavonoids and 19 phenolic acids in the different fractions of Y. aloifolia variegata, among which the major flavonoids were hesperidin and kaemp-3-(2-p-coumaroyl) glucose, and the major phenolic acids were gallic acid and protocatechuic acid.