268 research outputs found

    Perception of Groundnuts Leaf Disease by Neural Network with Progressive Re-Sizing

    India is the world's second-largest groundnut producer after China, and groundnut is a major oilseed crop. The crop is susceptible to various diseases, which has reduced its quality and yield and had a detrimental effect on the agricultural economy. More precise and reliable automated approaches are therefore needed to improve the identification of groundnut leaf diseases. This article proposes a deep learning approach based on a progressive resizing technique for the accurate classification and identification of groundnut leaf diseases. The study focuses on five groundnut leaf categories: leaf spot, armyworm damage, wilt, yellow leaf, and healthy leaf. The proposed model is trained with both progressive resizing and conventional techniques, and its performance is assessed using cross-entropy loss. A new dataset was carefully curated in the Saurashtra region of Gujarat state, India, for training and validation. Because the dataset's samples are unevenly distributed across disease categories, an extended focal loss function was used to correct the class imbalance. Several performance metrics are used to evaluate the proposed model, including accuracy, sensitivity, precision, and F1-score. Notably, the proposed model achieves a 96.12% success rate, a considerable improvement in disease identification accuracy. The model trained with progressive resizing outperforms the baseline neural network trained with cross-entropy loss alone, highlighting the effectiveness of the recommended approach.
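
    The two techniques highlighted in this abstract, progressive resizing of the training images and a focal-style loss for class imbalance, can be sketched roughly as follows. This is a minimal illustration assuming a PyTorch workflow; the dataset path, backbone, image sizes, and gamma value are hypothetical placeholders, not the authors' actual configuration.

        # Minimal sketch: focal loss + progressive resizing (assumed PyTorch setup;
        # hypothetical dataset path and hyperparameters).
        import torch
        import torch.nn.functional as F
        from torchvision import datasets, transforms, models

        class FocalLoss(torch.nn.Module):
            """Focal loss: down-weights easy examples to counter class imbalance."""
            def __init__(self, gamma=2.0):
                super().__init__()
                self.gamma = gamma

            def forward(self, logits, targets):
                ce = F.cross_entropy(logits, targets, reduction="none")
                pt = torch.exp(-ce)  # probability assigned to the true class
                return ((1.0 - pt) ** self.gamma * ce).mean()

        def train_at_resolution(model, size, epochs, loss_fn, data_dir="leaf_images"):
            """Train for a few epochs at one input resolution (progressive resizing)."""
            tfm = transforms.Compose([transforms.Resize((size, size)),
                                      transforms.ToTensor()])
            loader = torch.utils.data.DataLoader(
                datasets.ImageFolder(data_dir, transform=tfm),
                batch_size=32, shuffle=True)
            opt = torch.optim.Adam(model.parameters(), lr=1e-4)
            for _ in range(epochs):
                for images, labels in loader:
                    opt.zero_grad()
                    loss = loss_fn(model(images), labels)
                    loss.backward()
                    opt.step()

        model = models.resnet18(num_classes=5)   # five leaf categories
        loss_fn = FocalLoss(gamma=2.0)
        for size in (128, 224, 320):             # progressively larger inputs
            train_at_resolution(model, size, epochs=2, loss_fn=loss_fn)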

    Advanced Knowledge Application in Practice

    The integration and interdependency of the world economy lead to the creation of a global market that offers more opportunities, but is also more complex and competitive than ever before. Widespread research activity is therefore necessary to remain successful in this market. This book is the result of research and development activities by a number of researchers worldwide, covering concrete fields of research.

    Proceedings, MSVSCC 2018

    Proceedings of the 12th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 19, 2018 at VMASC in Suffolk, Virginia. 155 pp

    Intraoperative Quantification of Bone Perfusion in Lower Extremity Injury Surgery

    Orthopaedic surgery is one of the most common surgical categories. In particular, lower extremity injuries sustained from trauma can be complex and life-threatening and are addressed through orthopaedic trauma surgery. Timely evaluation and surgical debridement following lower extremity injury are essential, because devitalized bone and tissue result in high surgical site infection rates. However, the current clinical judgment of what constitutes "devitalized tissue" is subjective and depends on surgeon experience, so imaging techniques are needed to guide surgical debridement, control infection rates, and improve patient outcomes. In this thesis work, computational models of fluorescence-guided debridement in lower extremity injury surgery are developed by quantifying bone perfusion intraoperatively with a dynamic contrast-enhanced fluorescence imaging (DCE-FI) system. Perfusion is an important factor in tissue viability, and quantifying it is therefore essential for fluorescence-guided debridement. Chapters 3-7 of this thesis explore the performance of DCE-FI in quantifying perfusion from benchtop to translation: we proposed a modified fluorescent microsphere quantification technique using a cryomacrotome in an animal model, which can measure periosteal and endosteal bone perfusion separately and therefore validate bone perfusion measurements obtained by DCE-FI; we developed a pre-clinical rodent contaminated-fracture model to correlate DCE-FI with infection risk and compared it with multi-modality scanning; in clinical studies, we investigated first-pass kinetic parameters of DCE-FI and arterial input functions to characterize perfusion changes during lower limb amputation surgery; we conducted the first in-human use of dynamic contrast-enhanced texture analysis for orthopaedic trauma classification, showing that spatiotemporal features from DCE-FI can classify bone perfusion intraoperatively with high accuracy and sensitivity; and we established a clinical machine-learning infection-risk prediction model for open fracture surgery that will produce pixel-scale predictions of infection risk. In conclusion, pharmacokinetic and spatiotemporal patterns of dynamic contrast-enhanced imaging show great potential for quantifying bone perfusion and prognosticating bone infection. This thesis work will decrease surgical site infection risk and improve the success rate of lower extremity injury surgery.
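
    As a rough illustration of the kind of first-pass kinetic parameters mentioned above, the sketch below extracts peak intensity, time-to-peak, and an ingress slope from a single fluorescence time-intensity curve. The curve is synthetic and the parameter definitions are simplified assumptions for illustration; this is not the thesis's actual DCE-FI pipeline.

        # Sketch: simple first-pass kinetic parameters from a fluorescence
        # time-intensity curve (synthetic data; simplified, hypothetical definitions).
        import numpy as np

        def first_pass_parameters(curve, times):
            """Return peak intensity, time-to-peak, and ingress slope for one pixel."""
            peak_idx = int(np.argmax(curve))
            peak = float(curve[peak_idx])
            time_to_peak = float(times[peak_idx] - times[0])
            # Ingress slope: average rate of rise from arrival to peak.
            ingress_slope = (peak - float(curve[0])) / max(time_to_peak, 1e-6)
            return peak, time_to_peak, ingress_slope

        # Synthetic curve: gamma-variate-like uptake followed by washout.
        t = np.linspace(0, 120, 241)                 # seconds
        curve = (t / 20.0) * np.exp(1 - t / 20.0)    # peaks at t = 20 s
        print(first_pass_parameters(curve, t))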

    Text Similarity Between Concepts Extracted from Source Code and Documentation

    Context: Constant evolution in software systems often results in documentation losing sync with the content of the source code. The traceability research field has often helped in the past, aiming to recover links between code and documentation when the two fall out of sync. Objective: The aim of this paper is to compare the concepts contained within the source code of a system with those extracted from its documentation, in order to detect how similar these two sets are. If vastly different, the difference between the two sets might indicate considerable ageing of the documentation and a need to update it. Methods: In this paper we reduce the source code of 50 software systems to a set of key terms, each containing the concepts of one of the sampled systems. At the same time, we reduce the documentation of each system to another set of key terms. We then use four different approaches for set comparison to detect how similar the sets are. Results: Using the well-known Jaccard index as the benchmark for the comparisons, we found that the cosine distance has excellent comparative power, which depends on the pre-training of the machine learning model. In particular, the SpaCy and FastText embeddings offer similarity scores of up to 80% and 90%, respectively. Conclusion: For most of the sampled systems, the source code and the documentation tend to contain very similar concepts. Given the accuracy of one pre-trained model (e.g., FastText), it also becomes evident that a few systems show a measurable drift between the concepts contained in the documentation and in the source code.
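
    The two comparison measures named in this abstract can be illustrated with a small sketch: Jaccard similarity over the raw key-term sets, and cosine similarity over averaged word embeddings. The example terms below are placeholders rather than terms mined in the paper, and the spaCy model name is just one commonly available choice with word vectors.

        # Sketch: Jaccard similarity over term sets vs cosine similarity over
        # embeddings (hypothetical example terms; requires a spaCy model with vectors).
        import spacy

        def jaccard(a, b):
            """Jaccard index of two term sets: |A ∩ B| / |A ∪ B|."""
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 0.0

        # Example key-term sets (placeholders for terms mined from code and docs).
        code_terms = ["parser", "token", "grammar", "syntax", "tree"]
        doc_terms = ["parsing", "tokens", "grammar", "documentation", "tree"]

        nlp = spacy.load("en_core_web_md")   # medium English model ships word vectors
        code_doc = nlp(" ".join(code_terms))
        doc_doc = nlp(" ".join(doc_terms))

        print("Jaccard:", jaccard(code_terms, doc_terms))
        print("Cosine (spaCy vectors):", code_doc.similarity(doc_doc))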

    Capturing interpretational uncertainty of depositional environments with Artificial Intelligence

    Geological interpretations are always linked with interpretational and conceptual uncertainty, which is difficult to elicit and quantify, often creating unquantified risks for understanding the subsurface. The complexity and variability of geological systems may lead geologists to analyse the same data and arrive at different conclusions based on their subjective interpretations, personal expertise, or biases. In order to address the associated uncertainty, it is valuable to consider multiple plausible interpretations of outcrop data and acknowledge the degree of ambiguity associated with each interpretation. By examining a diverse range of outcrop analogues, it becomes possible to derive multiple potential geological interpretations and identify variations within and across depositional systems. This thesis proposes a new AI system that learns valuable geological information from surface data (outcrop images), transfers this knowledge to the fragmented data of the subsurface (core data), and finally links all the extracted information with the geological literature to produce plausible interpretations of the depositional environment based on a single outcrop image. To identify patterns and geological features within image data, three supervised learning computer vision techniques were employed: image classification, object detection, and instance segmentation. Natural language processing was utilised to extract geological features from textual information in heritage geological texts, complementing the analysis. Lastly, a custom neural network was deployed to assimilate the gathered information into meaningful sequences, apply geological constraints to these sequences, and generate multiple plausible interpretational scenarios, ranked in descending order of probability. The results of this study demonstrate that combining approaches from different areas of Artificial Intelligence within cross-disciplinary workflows, under the umbrella of a broader AI system, holds significant potential for subsurface characterization, better risk analysis, and potentially enhanced decision-making under uncertain conditions during subsurface exploration stages. Funded by Heriot-Watt University.
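
    At a very high level, the final ranking step described above, combining detected geological features with constraints to score candidate interpretations, might look something like the toy sketch below. The feature names, candidate environments, and weights are entirely hypothetical and only illustrate the idea of ranking interpretational scenarios by probability; they are not the thesis's actual model.

        # Toy sketch: rank candidate depositional environments from detected features
        # (hypothetical features, environments, and weights).
        FEATURE_WEIGHTS = {
            "fluvial":   {"cross_bedding": 0.6, "channel_geometry": 0.8, "mud_drapes": 0.1},
            "deltaic":   {"cross_bedding": 0.4, "channel_geometry": 0.3, "mud_drapes": 0.6},
            "turbidite": {"cross_bedding": 0.2, "channel_geometry": 0.2, "graded_beds": 0.9},
        }

        def rank_interpretations(detected):
            """Score each candidate environment by its detected features, then normalize."""
            scores = {env: sum(w.get(f, 0.0) * conf for f, conf in detected.items())
                      for env, w in FEATURE_WEIGHTS.items()}
            total = sum(scores.values()) or 1.0
            return sorted(((env, s / total) for env, s in scores.items()),
                          key=lambda x: x[1], reverse=True)

        # Detection confidences produced upstream by the computer-vision models.
        detected = {"cross_bedding": 0.9, "channel_geometry": 0.7, "graded_beds": 0.2}
        for env, p in rank_interpretations(detected):
            print(f"{env}: {p:.2f}")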

    Urban Informatics

    This open access book is the first to systematically introduce the principles of urban informatics and its application to every aspect of the city that involves its functioning, control, management, and future planning. It introduces new models and tools being developed to understand and implement these technologies that enable cities to function more efficiently – to become ‘smart’ and ‘sustainable’. The smart city has quickly emerged as computers have become ever smaller to the point where they can be embedded into the very fabric of the city, as well as being central to new ways in which the population can communicate and act. When cities are wired in this way, they have the potential to become sentient and responsive, generating massive streams of ‘big’ data in real time as well as providing immense opportunities for extracting new forms of urban data through crowdsourcing. This book offers a comprehensive review of the methods that form the core of urban informatics, from various kinds of urban remote sensing to new approaches to machine learning and statistical modelling. It provides a detailed technical introduction to the wide array of tools information scientists need to develop the key urban analytics that are fundamental to learning about the smart city, and it outlines ways in which these tools can be used to inform design and policy so that cities can become more efficient, with a greater concern for environment and equity.
