4,203 research outputs found

    Raman microscopy reveals how cell inflammation activates glucose and lipid metabolism

    Metabolism of endothelial cells (ECs) depends on the availability of energy substrates. Since the endothelium is the first line of defence against inflammation in the cardiovascular system and its dysfunction can lead to the development of cardiovascular diseases, it is important to understand how glucose metabolism changes during inflammation. In this work, glucose uptake was studied in human microvascular endothelial cells (HMEC-1) under high glucose (HG), and additionally in an inflammatory state, using Raman imaging. The HG state was induced by incubating ECs with a deuterated glucose analogue, while EC inflammation was induced by TNF-α pre-treatment. Spontaneous and stimulated Raman scattering spectroscopy provided comprehensive information on biochemical changes induced by excess glucose in ECs, including lipid content and the extent of lipid unsaturation. We identified spectroscopic markers of metabolic changes in ECs: a strong increase in the intensity ratio of the lipids / (proteins + lipids) bands, an increase in the level of lipid unsaturation, and mitochondrial changes. Inflamed ECs treated with HG revealed enhanced glucose uptake and intensified lipid production, including unsaturated lipids. Additionally, an increased cytochrome c signal in the mitochondrial region indicated higher mitochondrial activity and biogenesis. Raman spectroscopy is a powerful method for determining the metabolic markers of endothelial dysfunction, which will better inform understanding of disease onset, development, and treatment.
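
    The key marker above is a band-intensity ratio. As a rough illustration of how such a ratio can be computed from a measured spectrum, here is a minimal Python sketch; the wavenumber windows and band assignments in it are illustrative assumptions, not values taken from the paper:

        import numpy as np

        def band_intensity(wavenumbers, spectrum, lo, hi):
            """Integrate the Raman signal over a wavenumber window [lo, hi] (cm^-1)."""
            mask = (wavenumbers >= lo) & (wavenumbers <= hi)
            return np.trapz(spectrum[mask], wavenumbers[mask])

        def lipid_band_ratio(wavenumbers, spectrum):
            # Band windows are assumptions: ~2850 cm^-1 is commonly assigned to
            # lipid CH2 stretching and ~2930 cm^-1 to a mixed protein/lipid CH3
            # band; the paper's exact band choices may differ.
            lipids = band_intensity(wavenumbers, spectrum, 2840, 2860)
            proteins_lipids = band_intensity(wavenumbers, spectrum, 2920, 2940)
            return lipids / proteins_lipids

        # Synthetic usage example: two Gaussian bands on a wavenumber axis.
        wn = np.linspace(2800, 3000, 400)
        spec = np.exp(-((wn - 2850) / 8) ** 2) + 0.8 * np.exp(-((wn - 2930) / 10) ** 2)
        print(lipid_band_ratio(wn, spec))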

    Empowering Sustainable Agriculture: An Enhanced Deep Learning Model for PD Detection in Agricultural Operation System

    A country’s economic growth is strongly influenced by its agricultural output. However, Plant Diseases (PD) pose a substantial obstacle to the cultivation and value of foodstuff. The timely detection of PDs is paramount for public wellness and the promotion of Sustainable Agriculture (SA). The conventional diagnostic procedure entails a pathologist’s visual evaluation of a particular plant through in-person visits. However, manual inspection of crop diseases is limited by its low accuracy and the limited availability of skilled workers. To address these concerns, there is a need to develop automated methodologies capable of effectively identifying and classifying a wide range of PDs. The precise detection and categorization of PDs is challenging due to various factors: the presence of low-intensity information in both the image background and foreground, the significant colour similarity between healthy and diseased plant regions, the presence of noise in the specimens, and variations in the position, chrominance, structure, and dimensions of plant leaves. This paper presents a novel approach for identifying and categorizing PDs using a Deep Convolutional Neural Network with Transfer Learning (DCNN-TL) technique in the Agricultural Operation System (AOS). The proposed method aims to enhance the capability of SA systems to accurately identify and categorize PDs. The improved Deep Learning (DL) methodology incorporates a TL technique based on a fine-tuned Visual Geometry Group 19 (VGG19) architecture. The revised system accurately detects and diagnoses five distinct PD categories. Among the evaluated methods, the proposed DCNN-TL shows outstanding precision, recall, and accuracy values of 0.996, 0.9994, and 0.9998, respectively.
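
    As a rough sketch of the transfer-learning recipe described above (a fine-tuned VGG19 with a new five-class head), assuming a Keras/TensorFlow setup; the head layout, layer sizes, and learning rates are illustrative assumptions, not the paper's configuration:

        import tensorflow as tf
        from tensorflow.keras import layers, models

        # Load VGG19 pre-trained on ImageNet, without its classification head.
        base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                           input_shape=(224, 224, 3))
        base.trainable = False  # stage 1: train only the new head

        model = models.Sequential([
            base,
            layers.Flatten(),
            layers.Dense(256, activation="relu"),   # head size is an assumption
            layers.Dropout(0.5),
            layers.Dense(5, activation="softmax"),  # five plant-disease categories
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="categorical_crossentropy", metrics=["accuracy"])

        # Stage 2 (fine-tuning): unfreeze the backbone and retrain at a
        # lower learning rate, e.g.:
        # base.trainable = True
        # model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
        #               loss="categorical_crossentropy", metrics=["accuracy"])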

    Integrated Generative Adversarial Networks and Deep Convolutional Neural Networks for Image Data Classification: A Case Study for COVID-19

    Convolutional Neural Networks (CNNs) have garnered significant utilisation within automated image classification systems. CNNs possess the ability to leverage the spatial and temporal correlations inherent in a dataset. This study delves into the use of cutting-edge deep learning for precise image data classification, focusing on overcoming the difficulties brought on by the COVID-19 pandemic. In order to improve the accuracy and robustness of COVID-19 image classification, the study introduces a novel methodology that combines the strengths of Deep Convolutional Neural Networks (DCNNs) and Generative Adversarial Networks (GANs). The proposed approach helps to mitigate the lack of labelled coronavirus (COVID-19) images, which has been a common limitation in related research, and improves the model’s ability to distinguish between COVID-19-related patterns and healthy lung images. The study presents a thorough case study based on a sizable dataset of chest X-ray images covering COVID-19 cases, other respiratory conditions, and healthy lungs. The integrated model outperforms conventional DCNN-based techniques in terms of classification accuracy after being trained on this dataset. To address the issue of an unbalanced dataset, the GAN produces synthetic images, while deep features are extracted from every image. The study also provides a thorough understanding of the model’s performance in real-world scenarios through a meticulous evaluation using a variety of metrics, including accuracy, precision, recall, and F1-score.
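
    As a minimal sketch of the GAN-based augmentation idea (assuming a DCGAN-style setup in Keras; the image size, latent dimension, and layer widths are illustrative assumptions, not the paper's architecture):

        import tensorflow as tf
        from tensorflow.keras import layers

        LATENT_DIM = 100  # size of the generator's noise input (an assumption)

        def build_generator():
            """Minimal DCGAN-style generator producing 64x64 grayscale X-ray-like images."""
            return tf.keras.Sequential([
                layers.Dense(8 * 8 * 128, input_shape=(LATENT_DIM,)),
                layers.Reshape((8, 8, 128)),
                layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
                layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu"),
                layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
            ])

        def build_discriminator():
            """Binary real-vs-synthetic critic used during adversarial training."""
            return tf.keras.Sequential([
                layers.Conv2D(32, 4, strides=2, padding="same", activation="relu",
                              input_shape=(64, 64, 1)),
                layers.Conv2D(64, 4, strides=2, padding="same", activation="relu"),
                layers.Flatten(),
                layers.Dense(1, activation="sigmoid"),
            ])

        # The adversarial training loop (alternating generator/discriminator
        # updates) is omitted. Once trained, the generator can oversample the
        # minority (COVID-19) class before training the DCNN classifier:
        generator = build_generator()
        noise = tf.random.normal((32, LATENT_DIM))
        synthetic_batch = generator(noise)  # shape (32, 64, 64, 1)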

    Fusing hyperspectral imaging and electronic nose data to predict moisture content in Penaeus vannamei during solar drying

    The control of moisture content (MC) is essential in the drying of shrimp, directly impacting its quality and shelf life. This study aimed to develop an accurate method for determining shrimp MC by integrating hyperspectral imaging (HSI) with electronic nose (E-nose) technology. We employed three different data fusion approaches: pixel-level, feature-level, and decision-level fusion, to combine HSI and E-nose data for the prediction of shrimp MC. We developed partial least squares regression (PLSR) models for each method and compared their performance in terms of prediction accuracy. The decision-fusion approach outperformed the other methods, producing the highest determination coefficients for both the calibration (0.9595) and validation (0.9448) sets. The corresponding root-mean-square errors were the lowest for the calibration set (0.0370) and validation set (0.0443), indicating high prediction precision. Additionally, this approach achieved a relative percent deviation of 3.94, the highest among the methods tested. The findings suggest that the decision fusion of HSI and E-nose data through a PLSR model is an effective, accurate, and efficient method for evaluating shrimp MC. The demonstrated capability of this approach makes it a valuable tool for quality control and market monitoring of dried shrimp products.
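
    A minimal sketch of decision-level fusion with PLSR, assuming scikit-learn; the array shapes, component counts, and the linear second-stage fuser are illustrative assumptions rather than the paper's exact pipeline:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.linear_model import LinearRegression

        # Hypothetical data: X_hsi (samples x wavelengths), X_enose
        # (samples x sensors), y (moisture content). Shapes are illustrative.
        rng = np.random.default_rng(0)
        X_hsi = rng.normal(size=(120, 200))
        X_enose = rng.normal(size=(120, 10))
        y = rng.uniform(0.1, 0.9, size=120)

        # Sensor-specific PLSR models (component counts are assumptions).
        pls_hsi = PLSRegression(n_components=8).fit(X_hsi, y)
        pls_enose = PLSRegression(n_components=4).fit(X_enose, y)

        # Decision-level fusion: combine the two models' predictions with a
        # simple second-stage regressor rather than concatenating raw data
        # (which would correspond to pixel- or feature-level fusion).
        preds = np.column_stack([pls_hsi.predict(X_hsi).ravel(),
                                 pls_enose.predict(X_enose).ravel()])
        fuser = LinearRegression().fit(preds, y)
        y_fused = fuser.predict(preds)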

    Assessing the advancement of artificial intelligence and drones’ integration in agriculture through a bibliometric study

    Integrating artificial intelligence (AI) with drones has emerged as a promising paradigm for advancing agriculture. This bibliometric analysis investigates the current state of research in this transformative domain by comprehensively reviewing 234 pertinent articles from the Scopus and Web of Science databases. The problem involves harnessing the potential of AI-driven drones to address agricultural challenges effectively. To address this, we conducted a bibliometric review examining critical components such as prominent journals, co-authorship patterns across countries, highly cited articles, and the co-citation network of keywords. Our findings underscore a growing interest in using AI-integrated drones to revolutionize various agricultural practices. Noteworthy applications include crop monitoring, precision agriculture, and environmental sensing, indicative of the field’s transformative capacity. This pioneering bibliometric study presents a comprehensive synthesis of the dynamic research landscape, representing the first extensive exploration of AI and drones in agriculture. The identified knowledge gaps point to future research opportunities, fostering the adoption and implementation of these technologies for sustainable farming practices and resource optimization. Our analysis provides essential insights for researchers and practitioners, laying the groundwork for steering agricultural advancements toward an era of enhanced efficiency and innovation.

    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
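
    As a rough illustration of the image-velocimetry core underlying LSPIV, here is a minimal Python sketch that estimates the displacement of surface features between two frames via FFT-based cross-correlation; window handling, sub-pixel refinement, and orthorectification are omitted, and the discharge step in the comment follows the generic velocity-area idea rather than the thesis's exact method:

        import numpy as np

        def window_displacement(win_a, win_b):
            """Pixel displacement of the pattern between two interrogation
            windows, via FFT-based cross-correlation (the core of LSPIV-style
            surface velocimetry)."""
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap indices so that displacements can be negative.
            dy, dx = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
            return dy, dx

        # Surface velocity follows from pixel displacement, ground sampling
        # distance (m/px), and frame interval (s); discharge can then be
        # approximated with a velocity-area method: Q ~ mean_velocity * area.
        frame_a = np.random.rand(64, 64)
        frame_b = np.roll(frame_a, (2, 5), axis=(0, 1))  # synthetic 2 px / 5 px shift
        print(window_displacement(frame_a, frame_b))     # -> (2, 5)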

    Convolutional neural network ensemble learning for hyperspectral imaging-based blackberry fruit ripeness detection in uncontrolled farm environment

    Fruit ripeness estimation models have for decades depended on spectral index features or colour-based features, such as mean, standard deviation, skewness, colour moments, and/or histograms, for learning traits of fruit ripeness. Recently, a few studies have explored the use of deep learning techniques to extract features from images of fruits with visible ripeness cues. However, the blackberry (Rubus fruticosus) fruit does not show obvious and reliable visible traits of ripeness when mature and therefore poses great difficulty to fruit pickers. The mature blackberry, to the human eye, is black before, during, and after ripening. To address this engineering application challenge, this paper proposes a novel multi-input convolutional neural network (CNN) ensemble classifier for detecting subtle traits of ripeness in blackberry fruits. The multi-input CNN was created from a pre-trained visual geometry group 16-layer deep convolutional network (VGG16) model trained on the ImageNet dataset. The fully connected layers were optimized for learning traits of ripeness of mature blackberry fruits. The resulting model served as the base for building homogeneous ensemble learners that were combined using the stacked generalization ensemble (SGE) framework. The inputs to the network are images acquired with a stereo sensor using visible and near-infrared (VIS-NIR) spectral filters at wavelengths of 700 nm and 770 nm. Through experiments, the proposed model achieved 95.1% accuracy on unseen data and 90.2% accuracy under in-field conditions. Further experiments reveal that machine sensory assessment is highly and positively correlated with human sensory assessment of blackberry fruit skin texture.
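
    A minimal sketch of a two-input CNN of the kind described above, assuming Keras/TensorFlow; a single shared VGG16 branch (rather than separate branches), the head sizes, and the binary ripeness output are simplifying assumptions, and the stacked-generalization step (training several such base learners plus a meta-learner on their predictions) is omitted:

        import tensorflow as tf
        from tensorflow.keras import layers, Model

        # Shared VGG16 feature extractor applied to both spectral inputs
        # (700 nm and 770 nm); sharing weights keeps the sketch compact.
        base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                           input_shape=(224, 224, 3))
        base.trainable = False

        in_700 = layers.Input(shape=(224, 224, 3), name="vis_700nm")
        in_770 = layers.Input(shape=(224, 224, 3), name="nir_770nm")

        feats = [layers.GlobalAveragePooling2D()(base(x)) for x in (in_700, in_770)]
        merged = layers.Concatenate()(feats)
        hidden = layers.Dense(256, activation="relu")(merged)  # size is an assumption
        out = layers.Dense(1, activation="sigmoid", name="ripe_vs_unripe")(hidden)

        model = Model([in_700, in_770], out)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])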

    Tinto: Multisensor Benchmark for 3-D Hyperspectral Point Cloud Segmentation in the Geosciences

    The increasing use of deep learning techniques has reduced interpretation time and, ideally, reduced interpreter bias by automatically deriving geological maps from digital outcrop models. However, accurate validation of these automated mapping approaches is a significant challenge due to the subjective nature of geological mapping and the difficulty of collecting quantitative validation data. Additionally, many state-of-the-art deep learning methods are limited to 2-D image data, which is insufficient for 3-D digital outcrops such as hyperclouds. To address these challenges, we present Tinto, a multisensor benchmark digital outcrop dataset designed to facilitate the development and validation of deep learning approaches for geological mapping, especially for unstructured 3-D data such as point clouds. Tinto comprises two complementary sets: 1) a real digital outcrop model from Corta Atalaya (Spain), with spectral attributes and ground-truth data, and 2) a synthetic twin that uses latent features in the original datasets to reconstruct realistic spectral data (including sensor noise and processing artifacts) from the ground truth. The point cloud is dense and contains 3,242,964 labeled points. We used these datasets to explore the abilities of different deep learning approaches for automated geological mapping. By making Tinto publicly available, we hope to foster the development and adaptation of new deep learning tools for 3-D applications in Earth sciences. The dataset can be accessed through this link: https://doi.org/10.14278/rodare.2256
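
    As an illustrative starting point for working with a labeled point cloud of this kind, here is a hypothetical per-point baseline assuming scikit-learn; the column layout and the synthetic data stand in for the actual Tinto files, whose format should be taken from the dataset documentation:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Hypothetical layout: one row per labeled point, with columns
        # x, y, z, spectral bands..., integer class label. In practice the
        # rows would be loaded from the published files, e.g.:
        # data = np.loadtxt("tinto_points.txt")   # placeholder filename
        rng = np.random.default_rng(0)
        data = np.hstack([rng.normal(size=(2000, 3)),       # x, y, z
                          rng.normal(size=(2000, 50)),      # spectral bands
                          rng.integers(0, 5, (2000, 1))])   # class labels
        xyz, spectra = data[:, :3], data[:, 3:-1]
        labels = data[:, -1].astype(int)

        X = np.hstack([xyz, spectra])
        X_train, X_test, y_train, y_test = train_test_split(
            X, labels, test_size=0.2, stratify=labels, random_state=0)

        # A per-point random forest is a simple non-deep baseline against
        # which the deep approaches discussed above can be compared.
        clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
        clf.fit(X_train, y_train)
        print("holdout accuracy:", clf.score(X_test, y_test))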

    CNN-LSTM framework to automatically detect anomalies in farmland using aerial images from UAVs

    Using aerial inspection techniques in farmland can yield vital data instrumental in mitigating various impediments to optimizing farming practices. Farmland anomalies (such as standing water and clusters of weeds) can impede farming practices, leading to the improper utilization of farmland and the disruption of agricultural development. Utilizing Unmanned Aerial Vehicles (UAVs) for remote sensing is a highly effective method for obtaining extensive imagery of farmland. Visual data analytics, in the context of automatic pattern recognition from collected data, is valuable for advancing Deep Learning (DL)-assisted farming models. This approach shows significant potential in enhancing agricultural productivity by effectively capturing crop patterns and identifying anomalies in farmland. Furthermore, it offers prospective solutions to address the inherent barriers farmers encounter. This study introduces a novel framework, the hybrid Convolutional Neural Network and Long Short-Term Memory (HCNN-LSTM) model, which aims to automatically detect anomalies in farmland using images obtained from UAVs. The system employs a Convolutional Neural Network (CNN) for deep feature extraction, while Long Short-Term Memory (LSTM) is utilized for the detection task, leveraging the extracted features. By integrating these two DL architectures, the system attains a comprehensive understanding of farm conditions, facilitating the timely identification of irregularities such as standing water, clusters of weeds, nutrient deficiency, and crop disease. The proposed methodology is trained and evaluated using the Agriculture-Vision challenge database. The experimental results demonstrate that the proposed system achieves a high level of accuracy, with a value of 99.7%, confirming the effectiveness of the proposed approach.
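
    A minimal sketch of the hybrid CNN-LSTM pattern, assuming Keras/TensorFlow; the sequence length, tile size, layer widths, and four anomaly classes are illustrative assumptions rather than the paper's configuration:

        import tensorflow as tf
        from tensorflow.keras import layers, models

        SEQ_LEN, H, W, C = 8, 128, 128, 3  # sequence of UAV image tiles (assumed)
        NUM_CLASSES = 4   # e.g., water, weed cluster, nutrient deficiency, disease

        # CNN applied to each tile for deep feature extraction.
        cnn = models.Sequential([
            layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, C)),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.GlobalAveragePooling2D(),
        ])

        model = models.Sequential([
            # Run the CNN over every tile in the sequence,
            layers.TimeDistributed(cnn, input_shape=(SEQ_LEN, H, W, C)),
            # then let the LSTM aggregate the extracted features for detection.
            layers.LSTM(64),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])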

    A tree species classification model based on improved YOLOv7 for shelterbelts

    Tree species classification within shelterbelts is crucial for shelterbelt management. Large-scale satellite-based and low-altitude drone-based approaches serve as powerful tools for forest monitoring, especially for tree species classification. However, these methods face challenges in distinguishing individual tree species within complex backgrounds. Additionally, the mixed growth of trees within shelterbelts results in similar crown sizes among different tree species, and the complex background of the shelterbelts negatively impacts the accuracy of tree species classification. The You Only Look Once (YOLO) algorithm is widely used in agriculture and forestry, e.g., for plant and fruit identification, pest and disease detection, and tree species classification. We proposed a YOLOv7-Kmeans++_CoordConv_CBAM (YOLOv7-KCC) model for tree species classification based on drone RGB remote sensing images. Firstly, we constructed a dataset of tree species in shelterbelts and adopted data augmentation methods to mitigate overfitting due to limited training data. Secondly, the K-means++ algorithm was employed to cluster anchor boxes for the dataset. Furthermore, to enhance the Efficient Layer Aggregation Network (ELAN) module of the YOLOv7 backbone, we replaced the ordinary 1×1 convolution with Coordinate Convolution (CoordConv). The Convolutional Block Attention Module (CBAM) was integrated into the Path Aggregation Network (PANet) structure to facilitate multiscale feature extraction and fusion, allowing the network to better capture and utilize crucial feature information. Experimental results showed that the YOLOv7-KCC model achieves a mean Average Precision (mAP@0.5) of 98.91%, outperforming the Faster RCNN-VGG16, Faster RCNN-Resnet50, SSD, YOLOv4, and YOLOv7 models by 5.71%, 11.75%, 5.97%, 7.86%, and 3.69%, respectively. The GFlops and parameter size of the YOLOv7-KCC model stand at 105.07 G and 143.7 MB, with an almost 5.6% increase in the F1 metric compared to YOLOv7. Therefore, the proposed YOLOv7-KCC model can effectively classify shelterbelt tree species, providing a scientific theoretical basis for shelterbelt management in Northwest China, particularly in Xinjiang.
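
    As an illustration of the anchor-clustering step, here is a minimal sketch using scikit-learn's k-means with k-means++ seeding on box width-height pairs; note that YOLO-family pipelines often cluster with an IoU-based distance rather than the Euclidean distance used here, and the box data below is synthetic:

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical (width, height) pairs of ground-truth boxes, normalised
        # to the network input size; in practice these come from label files.
        rng = np.random.default_rng(42)
        boxes_wh = rng.uniform(0.02, 0.6, size=(5000, 2))

        # YOLOv7 uses 9 anchors (3 per detection scale); k-means++ seeding
        # gives better-spread initial centroids than random initialisation.
        km = KMeans(n_clusters=9, init="k-means++", n_init=10, random_state=0)
        km.fit(boxes_wh)

        # Sort anchors by area so they can be assigned small-to-large scales.
        anchors = km.cluster_centers_[np.argsort(km.cluster_centers_.prod(axis=1))]
        print(np.round(anchors, 3))  # 9 anchors, 3 per scale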