6,215 research outputs found

    Integrating expert-based objectivist and nonexpert-based subjectivist paradigms in landscape assessment

    This thesis explores the integration of objective and subjective measures of landscape aesthetics, particularly focusing on crowdsourced geo-information. It addresses the increasing importance of considering public perceptions in national landscape governance, in line with the European Landscape Convention's emphasis on public involvement. Despite this, national landscape assessments often remain expert-centric and top-down, facing challenges in resource constraints and limited public engagement. The thesis leverages Web 2.0 technologies and crowdsourced geographic information, examining correlations between expert-based metrics of landscape quality and public perceptions. The Scenic-Or-Not initiative for Great Britain, GIS-based Wildness spatial layers, and the LANDMAP dataset for Wales serve as key datasets for analysis. The research investigates the relationships between objective measures of landscape wildness quality and subjective measures of aesthetics. Multiscale geographically weighted regression (MGWR) reveals significant correlations, with different wildness components exhibiting varying degrees of association. The study suggests the feasibility of incorporating wildness and scenicness measures into formal landscape aesthetic assessments. Comparing expert and public perceptions, the research identifies preferences for water-related landforms and variations in upland and lowland typologies. The study emphasizes the agreement between experts and non-experts on extreme scenic perceptions but notes discrepancies in mid-spectrum landscapes. To overcome limitations in systematic landscape evaluations, an integrative approach is proposed. Utilizing XGBoost models, the research predicts spatial patterns of landscape aesthetics across Great Britain, based on the Scenic-Or-Not initiative, Wildness spatial layers, and LANDMAP data.
The models achieve comparable accuracy to traditional statistical models, offering insights for Landscape Character Assessment practices and policy decisions. While acknowledging data limitations and biases in crowdsourcing, the thesis discusses the necessity of an aggregation strategy to manage computational challenges. Methodological considerations include addressing the modifiable areal unit problem (MAUP) associated with aggregating point-based observations. The thesis comprises three studies published or submitted for publication, each contributing to the understanding of the relationship between objective and subjective measures of landscape aesthetics. The concluding chapter discusses the limitations of data and methods, providing a comprehensive overview of the research.
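The modifiable areal unit problem mentioned above can be illustrated with a minimal sketch: the correlation between two point-based attributes changes depending on the grid-cell size used to aggregate them. All data here is synthetic and the attribute names are purely illustrative; this is not the thesis's actual aggregation pipeline.

```python
import random
import math
from collections import defaultdict

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def aggregate(points, cell):
    """Average the two point attributes within square grid cells of the given size."""
    cells = defaultdict(list)
    for x, y, a, b in points:
        cells[(int(x // cell), int(y // cell))].append((a, b))
    means = [(sum(p[0] for p in v) / len(v), sum(p[1] for p in v) / len(v))
             for v in cells.values()]
    return [a for a, _ in means], [b for _, b in means]

random.seed(42)
# Synthetic point observations: a location plus two noisy, related attributes
points = []
for _ in range(2000):
    x, y = random.uniform(0, 100), random.uniform(0, 100)
    wildness = x / 100 + random.gauss(0, 0.3)
    scenicness = wildness + random.gauss(0, 0.5)
    points.append((x, y, wildness, scenicness))

# The same point data yields different correlations at different aggregation scales
results = {cell: pearson(*aggregate(points, cell)) for cell in (5, 25)}
for cell, r in results.items():
    print(f"cell size {cell}: r = {r:.3f}")
```

Coarser cells average away point-level noise, so the measured association strengthens with aggregation; the choice of areal unit thus directly shapes any reported expert-versus-public correlation.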

    Enhanced corn seed disease classification: leveraging MobileNetV2 with feature augmentation and transfer learning

    In the era of advancing artificial intelligence (AI), its application in agriculture has become increasingly pivotal. This study explores the integration of AI for the discriminative classification of corn diseases, addressing the need for efficient agricultural practices. Leveraging a comprehensive dataset, the study encompasses 21,662 images categorized into four classes: Broken, Discolored, Silk cut, and Pure. The proposed model, an enhanced iteration of MobileNetV2, strategically incorporates additional layers (Average Pooling, Flatten, Dense, Dropout, and softmax) to augment its feature extraction capabilities. Model tuning techniques, including data augmentation, adaptive learning rate, model checkpointing, dropout, and transfer learning, fortify the model's efficiency. Results showcase the proposed model's exceptional performance, achieving an accuracy of ~96% across the four classes. Precision, recall, and F1-score metrics underscore the model's proficiency, with precision values ranging from 0.949 to 0.975 and recall values from 0.957 to 0.963. In a comparative analysis with state-of-the-art (SOTA) models, the proposed model outperforms its counterparts in terms of precision, recall, F1-score, and accuracy. Notably, the proposed MobileNetV2-based architecture achieves the highest values, affirming its superiority in accurately classifying instances within the corn disease dataset. This study not only contributes to the growing body of AI applications in agriculture but also presents a novel and effective model for corn disease classification. The proposed model's robust performance, combined with its competitive edge against SOTA models, positions it as a promising solution for advancing precision agriculture and crop management.
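The per-class precision, recall, and F1 values reported above are standard derivations from a multi-class confusion matrix. A minimal sketch of that computation follows; the class names come from the abstract, but the confusion-matrix counts are entirely hypothetical, not the study's results.

```python
def per_class_metrics(confusion, labels):
    """Precision, recall and F1 per class from a square confusion matrix
    (rows = true class, columns = predicted class)."""
    metrics = {}
    n = len(labels)
    for i, label in enumerate(labels):
        tp = confusion[i][i]
        fp = sum(confusion[r][i] for r in range(n)) - tp  # predicted i, truly other
        fn = sum(confusion[i]) - tp                        # truly i, predicted other
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[label] = (precision, recall, f1)
    return metrics

# Hypothetical counts for the four classes named in the abstract
labels = ["Broken", "Discolored", "Silk cut", "Pure"]
confusion = [
    [480, 10, 5, 5],
    [12, 470, 10, 8],
    [6, 9, 478, 7],
    [4, 6, 8, 482],
]
m = per_class_metrics(confusion, labels)
for label, (p, r, f1) in m.items():
    print(f"{label}: precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```

Reporting the full per-class table, as the study does, guards against a high overall accuracy masking weak performance on a single minority class.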

    A rule-based machine learning model for financial fraud detection

    Financial fraud is a growing problem that poses a significant threat to the banking industry, the government sector, and the public. In response, financial institutions must continuously improve their fraud detection systems. Although preventive measures and security precautions are implemented to reduce financial fraud, criminals are constantly adapting and devising new ways to evade fraud prevention systems. The classification of transactions as legitimate or fraudulent poses a significant challenge for existing classification models due to highly imbalanced datasets. This research aims to develop rules to detect fraudulent transactions without involving any resampling technique. The effectiveness of the rule-based model (RBM) is assessed using a variety of metrics such as accuracy, specificity, precision, recall, confusion matrix, Matthews correlation coefficient (MCC), and receiver operating characteristic (ROC) values. The proposed rule-based model is compared to several existing machine learning models such as random forest (RF), decision tree (DT), multi-layer perceptron (MLP), k-nearest neighbor (KNN), naive Bayes (NB), and logistic regression (LR) using two benchmark datasets. The results of the experiment show that the proposed rule-based model outperformed the other methods, reaching an accuracy of 0.99 and a precision of 0.99.
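MCC is a sensible headline metric here precisely because fraud datasets are highly imbalanced: unlike accuracy, it only scores well when all four confusion-matrix cells are handled well. A minimal sketch, with a toy hand-written rule and invented transactions (the paper's actual rules and data are not shown in the abstract):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical rule: flag a transaction as fraud if the amount is large
# AND it occurs far from the account's usual location.
def rule_based_flag(txn):
    return txn["amount"] > 5000 and txn["distance_km"] > 500

transactions = [
    {"amount": 9000, "distance_km": 800, "fraud": True},
    {"amount": 7000, "distance_km": 600, "fraud": True},
    {"amount": 6000, "distance_km": 100, "fraud": False},
    {"amount": 120,  "distance_km": 20,  "fraud": False},
    {"amount": 45,   "distance_km": 5,   "fraud": False},
    {"amount": 8000, "distance_km": 900, "fraud": False},
    {"amount": 5500, "distance_km": 700, "fraud": True},
    {"amount": 60,   "distance_km": 10,  "fraud": False},
]

tp = tn = fp = fn = 0
for t in transactions:
    pred = rule_based_flag(t)
    if pred and t["fraud"]:
        tp += 1
    elif not pred and not t["fraud"]:
        tn += 1
    elif pred and not t["fraud"]:
        fp += 1
    else:
        fn += 1

mcc = matthews_corrcoef(tp, tn, fp, fn)
print(f"MCC = {mcc:.3f}")
```

Because no resampling is involved, the rule is evaluated directly on the skewed class distribution, which is exactly the setting where MCC is more informative than raw accuracy.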

    Domesticating AI in medical diagnosis

    We consider the anticipated adoption of Artificial Intelligence (AI) in medical diagnosis. We examine how seemingly compelling claims are tested as AI tools move into real-world settings and discuss how analysts can develop effective understandings in novel and rapidly changing settings. Four case studies highlight the challenges of utilising diagnostic AI tools at differing stages in their innovation journey: two 'upstream' cases seeking to demonstrate the practical applicability of AI, and two 'downstream' cases focusing on the roll-out and scaling of more established applications. We observed an unfolding, uncoordinated process of social learning capturing two key moments: i) experiments to create and establish the clinical potential of AI tools; and ii) attempts to verify their dependability in clinical settings while extending their scale and scope. Health professionals critically appraise tool performance, relying on tools selectively where their results can be demonstrably trusted, in a de facto model of responsible use. We note a shift from procuring stand-alone solutions to deploying suites of AI tools through platforms, which facilitates adoption and reduces the costs of procurement, implementation and evaluation that impede the viability of stand-alone solutions. New conceptual frameworks and methodological strategies are needed to address the rapid evolution of AI tools as they move from research settings and are deployed in real-world care across multiple settings. We observe how, in this process of deployment, AI tools become 'domesticated'. We propose longitudinal and multisite 'biographical' investigations of medical AI rather than snapshot studies of emerging technologies that fail to capture change and variation in performance across contexts.

    Deep generative models for network data synthesis and monitoring

    Measurement and monitoring are fundamental tasks in all networks, enabling the downstream management and optimization of the network. Although networks inherently generate abundant monitoring data, accessing and effectively measuring that data is another story. The challenges exist in many aspects. First, network monitoring data is often inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, it can be very expensive to carry out effective data collection covering a large-scale network system, given the growing size of networks, e.g., the number of cells in a radio network and the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in network elements to support measurement functions are too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behavior of the network becomes challenging due to its size and complex structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for the aforementioned challenges. However, the fidelity and efficiency of existing methods cannot yet meet current network requirements. The contributions made in this thesis significantly advance the state of the art in the domain of network measurement and monitoring techniques. Throughout the thesis, we leverage cutting-edge machine learning technology: deep generative modeling. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system built on a conditional generative model, which only requires open-source contextual data during inference (e.g., land use information and population distribution).
Second, we develop GENDT, an efficient drive testing system based on a generative model, which combines graph neural networks, conditional generation, and quantified model uncertainty to enhance the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system with latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for the Open RAN (Radio Access Network). The lessons learned through this research are summarized, and interesting topics are discussed for future work in this domain. All proposed solutions have been evaluated with real-world datasets and applied to support different applications in real systems.

    The Effectiveness of an Interactive WhatsApp Bot on Listening Skills

    The present paper attempted to measure the effectiveness of an interactive WhatsApp bot on the listening skills of Omani English as a Foreign Language (EFL) learners. For this purpose, 40 Omani intermediate EFL learners at a higher education institution were divided into two groups: a control group and an experimental group. A pretest was conducted to ensure the homogeneity of listening skills among all the participants. While both groups received instruction and exercises on listening in class, an interactive WhatsApp bot was designed for the experimental group to receive additional instruction and training without time and place limitations. Later, a posttest and a delayed posttest were conducted to compare learners' performance. The results showed steady progress in both groups' listening performance on the posttest and delayed posttest; however, the experimental group's performance was significantly higher. The findings are useful for teachers and learners.

    Financial revolution: a systemic analysis of artificial intelligence and machine learning in the banking sector

    This paper reviews the advances, challenges, and approaches of artificial intelligence (AI) and machine learning (ML) in the banking sector. The use of these technologies is accelerating in various industries, including banking. However, the literature on banking is scattered, making a global understanding difficult. This study reviewed the main approaches in terms of applications and algorithmic models, as well as the benefits and challenges associated with their implementation in banking. It also includes a bibliometric analysis of the distribution of publications and the most productive countries, together with an analysis of the co-occurrence and dynamics of keywords. Following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) framework, forty articles were selected for review. The results indicate that these technologies are used in the banking sector for customer segmentation, credit risk analysis, recommendation, and fraud detection. Credit analysis and fraud detection are the most widely implemented areas, using algorithms such as random forests (RF), decision trees (DT), support vector machines (SVM), and logistic regression (LR), among others. In addition, their use brings significant benefits for decision-making and for optimizing banking operations. However, handling substantial amounts of data with these technologies poses ethical challenges.

    The role of nursing in multimorbidity care

    Background Multimorbidity (the co-occurrence of two or more chronic conditions in the same person) affects around one in three persons, and it is strongly associated with a range of negative outcomes including worsening physical function, increased health care use, and premature death. Due to the way healthcare is provided to people with multimorbidity, treatment can become burdensome, fragmented and inefficient. In people with palliative conditions, multimorbidity is increasingly common. Better models of care are needed. Methods A mixed-methods programme of research designed to inform the development of a nurse-led intervention for people with multimorbidity and palliative conditions. A mixed-methods systematic review explored nurse-led interventions for multimorbidity and their effects on outcomes. A cross-sectional study of 63,328 emergency department attenders explored the association between multimorbidity, complex multimorbidity (≥3 conditions affecting ≥3 body systems), and disease-burden on healthcare use and inpatient mortality. A focussed ethnographic study of people with multimorbidity and life-limiting conditions and their carers (n=12) explored the concept of treatment burden. Findings Nurse-led interventions for people with multimorbidity generally focus on care coordination (i.e., case management or transitional care); patients view them positively, but they do not reliably reduce health care use or costs. Multimorbidity and complex multimorbidity were significantly associated with admission from the emergency department and reattendance within 30 and 90 days. The association was greater in those with more conditions. There was no association with inpatient mortality. People with multimorbidity and palliative conditions experienced treatment burden in a manner consistent with existing theoretical models. 
This thesis also noted the effect of uncertainty on the balance between capacity and workload and proposed a model of how these concepts relate to one another. Discussion This thesis addresses a gap in what is known about the role of nurses in providing care to the growing number of people with multimorbidity. A theory-based nurse-led intervention is proposed which prioritises managing treatment burden and uncertainty. Conclusions Nursing in an age of multimorbidity necessitates a perspective shift which conceptualises chronic conditions as multiple overlapping phenomena situated within an individual. The role of the nurse should be to help patients navigate the complexity of living with multiple chronic conditions.