
    Optimizing DUS testing for Chimonanthus praecox using feature selection based on a genetic algorithm

    Chimonanthus praecox (wintersweet) is a famous traditional flower in China with high ornamental value. It has numerous varieties, yet their classification is highly disorganized. The distinctness, uniformity, and stability (DUS) test enables the classification and nomenclature of varieties and can therefore be used to classify Chimonanthus varieties. In this study, flower traits were quantified with an automatic system based on pattern recognition, rather than traditional manual measurement, to improve the efficiency of DUS testing. A total of 42 features were quantified, including 28 features from the DUS guidelines and 14 new features proposed in this study. Eight algorithms were used to classify wintersweet, and the random forest (RF) algorithm performed best when all features were used. Among the different plant parts, features of the outer perianth yielded the highest classification accuracy. A genetic algorithm was then used for feature selection, producing a reduced core set of 22 features that improved both the accuracy and the efficiency of classification. Using the core feature set, the classification accuracy of the RF model improved to 99.13%. Finally, K-means clustering was used to construct a pedigree cluster tree of 23 wintersweet varieties; the varieties were clustered into classes, which can serve as a basis for further study of genetic relationships among varieties. This study provides a novel method for DUS testing, variety identification, and pedigree analysis.
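The genetic-algorithm feature-selection step can be sketched in a few lines: binary masks over the 42 flower features evolve under selection, crossover, and mutation. Everything below is an illustrative assumption, not the paper's implementation: the fitness function is a stand-in for RF classification accuracy, and the "informative" feature indices, population size, and mutation rate are invented.

```python
import random

random.seed(0)

N_FEATURES = 42          # 28 DUS-guideline features + 14 new features
POP, GENS, MUT = 20, 30, 0.05

def fitness(mask):
    # Stand-in for classifier accuracy: rewards a hypothetical set of
    # informative features while penalising subset size, as a GA
    # fitness for feature selection typically does.
    informative = set(range(0, N_FEATURES, 2))   # assumed "useful" indices
    hits = sum(1 for i, b in enumerate(mask) if b and i in informative)
    return hits - 0.1 * sum(mask)

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)        # single-point crossover
    return a[:cut] + b[cut:]

def mutate(mask):
    return [1 - b if random.random() < MUT else b for b in mask]

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                      # keep the fitter half
    children = [mutate(crossover(*random.sample(elite, 2)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
core = [i for i, b in enumerate(best) if b]      # selected core features
```

In the real pipeline, each fitness call would train and score an RF classifier on the wintersweet data for the candidate mask, which is where the reduction to a 22-feature core set comes from.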

    Land scene classification from remote sensing images using improved artificial bee colony optimization algorithm

    Images obtained from remote sensing contain complex backgrounds and visually similar objects, which make the classification of land scenes challenging. Land scenes are used in fields such as agriculture, urbanization, and disaster management to assess the condition of land surfaces and to judge their suitability for planting crops or constructing buildings. Existing methods classify land scenes from remotely sensed images, but background complexity and the presence of similar objects prevent them from achieving better results. To overcome these issues, an improved artificial bee colony optimization algorithm with a convolutional neural network (IABC-CNN) model is proposed to classify land scenes more accurately. Images are collected from the Aerial Image Dataset (AID), Northwestern Polytechnical University Remote Sensing Image Scene Classification 45 (NWPU-RESISC45), and University of California Merced (UCM) datasets. IABC selects the best features from those extracted with Visual Geometry Group-16 (VGG-16), and the selected features are then classified using a multiclass support vector machine (MSVM). The proposed IABC-CNN achieves a classification accuracy of 96.40% with an error rate of 3.6%.
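The bee-colony feature-selection idea can be illustrated with a simplified artificial bee colony searching over binary feature masks. This is not the paper's improved variant: the quality function is a stand-in for MSVM accuracy on VGG-16 features, and the colony size, abandonment limit, and "informative" indices are assumptions.

```python
import random

random.seed(1)

N_FEAT, N_BEES, LIMIT, CYCLES = 32, 10, 5, 40

def quality(mask):
    # Proxy for downstream MSVM accuracy: the "informative" indices
    # stand in for discriminative VGG-16 features (an assumption).
    informative = set(range(0, N_FEAT, 4))
    return sum(mask[i] for i in informative) - 0.05 * sum(mask)

def neighbour(mask):
    out = mask[:]
    j = random.randrange(N_FEAT)
    out[j] = 1 - out[j]               # flip one feature in or out
    return out

food = [[random.randint(0, 1) for _ in range(N_FEAT)] for _ in range(N_BEES)]
trials = [0] * N_BEES
best = max(food, key=quality)         # memorize the best source found so far

for _ in range(CYCLES):
    for i in range(N_BEES):           # employed/onlooker bees refine sources
        cand = neighbour(food[i])
        if quality(cand) > quality(food[i]):
            food[i], trials[i] = cand, 0
        else:
            trials[i] += 1
    for i in range(N_BEES):           # scout bees abandon exhausted sources
        if trials[i] > LIMIT:
            food[i] = [random.randint(0, 1) for _ in range(N_FEAT)]
            trials[i] = 0
    cur = max(food, key=quality)
    if quality(cur) > quality(best):
        best = cur

selected = [i for i, b in enumerate(best) if b]
```

Memorizing the global best outside the food sources matters: the scout phase re-randomizes stale sources, so without it the best solution found could be thrown away.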

    Strategy Tripod Perspective on the Determinants of Airline Efficiency in a Global Context: An Application of DEA and Tobit Analysis

    The airline industry is vital to contemporary civilization as a key player in the globalization process: it links regions, fosters global commerce, promotes tourism, and aids economic and social progress. However, there has been little study of the link between the operational environment and airline efficiency. Investigating the combination of institutions, organisations, and strategic decisions is critical to understanding how airlines operate efficiently. This research employs the strategy tripod perspective to investigate the efficiency of a global airline sample using a non-parametric linear programming method, data envelopment analysis (DEA). The bootstrapped DEA efficiency change scores are then regressed using a Tobit model to determine the drivers of efficiency. The strategy tripod is employed to assess the impact of institutions, industry, and resources on airline efficiency: institutions are measured by global indices of destination attractiveness; industry by factors including competition, jet fuel, and business model; and resources by variables such as the number of full-time employees, alliances, ownership, and connectivity. The first part of the study uses panel data from 35 major airlines, collected from their annual reports for the period 2011 to 2018, together with country attractiveness indices from global indicators. The second part involves a qualitative data collection approach, with semi-structured interviews of experts in the field to evaluate the impact of COVID-19 on the first part's significant findings. The main findings reveal that airlines operate at a highly competitive level regardless of their competition intensity or origin, and that the unpredictability of the environment complicates airline operations. The efficiency drivers of an airline are partially determined by its type of business model, its degree of cooperation, and how fuel cost is managed. Trade openness has a negative influence on airline efficiency. COVID-19 has upended the airline industry, forcing airlines to reconsider their business models and continuously increase cooperation. Human resources, sustainability, and alternative fuel sources are critical to airline survival. Finally, this study provides some evidence for the practicality of the strategy tripod and points to the need for a broader approach in the study of international strategies.
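The DEA stage can be illustrated with a minimal input-oriented CCR sketch solved as a linear program with `scipy.optimize.linprog`. The three-airline dataset, the single input (employees) and output (passenger-kilometres), and all numbers below are invented for illustration; they are not the thesis's data, and the bootstrap step is omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (assumed): one input (full-time employees, thousands) and
# one output (revenue passenger-km, bn) for three hypothetical airlines.
X = np.array([[2.0], [4.0], [4.0]])   # inputs,  shape (n_dmu, n_inputs)
Y = np.array([[2.0], [2.0], [4.0]])   # outputs, shape (n_dmu, n_outputs)
n = X.shape[0]

def ccr_efficiency(o):
    # Input-oriented CCR envelopment model for DMU o.
    # Variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j * x_j <= theta * x_o
    A_in = np.hstack([-X[o:o + 1].T, X.T])
    b_in = np.zeros(X.shape[1])
    # Outputs: sum_j lambda_j * y_j >= y_o  (negated for <= form)
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (1 + n))
    return res.fun                     # optimal theta = efficiency score

scores = [ccr_efficiency(o) for o in range(n)]
```

Here the first and third airlines sit on the efficient frontier (score 1.0), while the second, producing half the output from the same input as the third, scores 0.5; the Tobit regression would then take such scores as its dependent variable.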

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Construction cost prediction system based on Random Forest optimized by the Bird Swarm Algorithm

    Predicting construction costs often suffers from low prediction accuracy, poor generalizability, and unfavorable efficiency, owing to the complex composition of construction projects, large numbers of personnel, long working periods, and high levels of uncertainty. To address these concerns, a prediction index system and a prediction model were developed. First, the factors influencing construction cost were identified, a prediction index system comprising 14 secondary indexes was constructed, and the methods of obtaining the data were presented in detail. A prediction model based on the Random Forest (RF) algorithm was then constructed, and the Bird Swarm Algorithm (BSA) was used to optimize the RF parameters, thereby avoiding the effect of randomly selected RF parameters on prediction accuracy. Finally, engineering data from a construction company in Xinyu, China were selected as a case study. The case study showed that the maximum relative error of the proposed model was only 1.24%, which met the requirements of engineering practice. For the selected cases, the minimum prediction index system that met the accuracy requirement included 11 secondary indexes. Compared with classical metaheuristic optimization algorithms (Particle Swarm Optimization, Genetic Algorithms, Tabu Search, Simulated Annealing, Ant Colony Optimization, Differential Evolution, and Artificial Fish School), BSA determined the optimal combination of calculation parameters more quickly on average. Compared with classical and recent forecasting methods (Back Propagation Neural Network, Support Vector Machines, Stacked Auto-Encoders, and Extreme Learning Machine), the proposed model exhibited higher forecasting accuracy and efficiency. The proposed model can thus better support the prediction of construction cost, and its results provide a basis for optimizing the cost management of construction projects.
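The BSA-over-RF idea, a swarm searching hyperparameter space for the settings that minimize prediction error, can be sketched as follows. The `cv_error` surface is a made-up stand-in for cross-validated RF error on cost data, and the foraging/vigilance split is a simplification of the full Bird Swarm Algorithm.

```python
import random

random.seed(2)

def cv_error(n_trees, depth):
    # Invented error surface, minimised at 120 trees / depth 9.
    # A real system would train an RF on the construction-cost
    # data and return its cross-validated error here.
    return (n_trees - 120) ** 2 / 1e4 + (depth - 9) ** 2 / 10 + 1.0

BIRDS, STEPS = 12, 60
flock = [[random.uniform(10, 300), random.uniform(2, 20)]
         for _ in range(BIRDS)]

def clamp(p):
    p[0] = min(max(p[0], 10), 300)   # keep tree count in range
    p[1] = min(max(p[1], 2), 20)     # keep depth in range

best = min(flock, key=lambda p: cv_error(*p))[:]
for _ in range(STEPS):
    for p in flock:
        if random.random() < 0.8:    # foraging: move toward the best site
            p[0] += random.uniform(0, 1) * (best[0] - p[0])
            p[1] += random.uniform(0, 1) * (best[1] - p[1])
        else:                        # vigilance/flight: random exploration
            p[0] += random.gauss(0, 10)
            p[1] += random.gauss(0, 1)
        clamp(p)
    cand = min(flock, key=lambda p: cv_error(*p))
    if cv_error(*cand) < cv_error(*best):
        best = cand[:]               # copy, so later moves don't mutate it

n_trees, max_depth = best
```

Copying the incumbent best (rather than keeping a reference into the flock) is the detail that keeps the search monotone: the bird that produced it keeps moving, but the recorded optimum does not.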

    Enhancing the forensic comparison process of common trace materials through the development of practical and systematic methods

    Ongoing advances in forensic trace evidence have driven the development of new and objective methods for comparing various materials. While many standard guides have been published for use in trace laboratories, several areas still require a more comprehensive understanding of error rates, and there is an urgent need to harmonize methods of examination and interpretation. Two critical areas are the forensic examination of physical fits and the comparison of spectral data, both of which depend heavily on the examiner's judgment. The long-term goal of this study is to advance and modernize the comparative process of physical fit examinations and spectral interpretation. This goal is pursued through several avenues: 1) improvement of quantitative-based methods for various trace materials, 2) scrutiny of the methods through interlaboratory exercises, and 3) addressing fundamental aspects of the discipline using large experimental datasets, computational algorithms, and statistical analysis. A substantial new body of knowledge has been established by analyzing population sets of nearly 4,000 items representative of casework evidence. First, this research identifies material-specific relevant features for duct tapes and automotive polymers. It then develops reporting templates to facilitate thorough and systematic documentation of an analyst's decision-making process and to minimize risks of bias. It also establishes criteria for a quantitative edge similarity score (ESS) for tapes and automotive polymers that yields relatively high accuracy (85% to 100%) and, notably, no false positives. Finally, the practicality and performance of the ESS method for duct tape physical fits are evaluated by forensic practitioners through two interlaboratory exercises. Across these studies, accuracy using the ESS method ranges between 95% and 99%, and again no false positives are reported.
The practitioners' feedback demonstrates the method's potential to assist in training and to improve peer verifications. This research also develops and trains computational algorithms to support analysts making decisions on sample comparisons. The automated algorithms show the potential to provide objective and probabilistic support for determining a physical fit, with accuracy comparable to that of the analysts. Furthermore, additional models are developed to extract edge-feature information from the systematic comparison templates of tapes and textiles, providing insight into the relative importance of each comparison feature. A decision tree model is developed to assist physical fit examinations of duct tapes and textiles and demonstrates performance comparable to trained analysts. The computational tools also evaluate the suitability of partial sample comparisons that simulate situations where portions of an item are lost or damaged. Finally, an objective approach to interpreting complex spectral data is presented. A comparison metric consisting of spectral angle contrast ratios (SCAR) is used as a model to assess more than 94 different-source and 20 same-source electrical tape backings. The SCAR metric achieves a discrimination power of 96% and demonstrates the capacity to capture both the variability between different-source samples and the variability within same-source samples. Application of a random-forest model allows for the automatic detection of the primary differences between samples. The developed threshold could assist analysts in making decisions on the spectral comparison of chemically similar samples. This research provides the forensic science community with novel approaches to comparing materials commonly seen in forensic laboratories. The outcomes of this study are anticipated to offer forensic practitioners new and accessible tools for incorporation into current workflows, facilitating systematic and objective analysis and interpretation of forensic materials and supporting analysts' opinions.
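The spectral-angle core of a SCAR-style comparison can be illustrated as below: two spectra are treated as vectors and the angle between them measures shape difference, independent of overall intensity. The contrast-ratio step and the study's decision threshold are omitted, and the toy spectra are invented.

```python
import math

def spectral_angle(a, b):
    # Angle between two spectra treated as vectors: 0 for identical
    # shapes, larger for different shapes; insensitive to scaling.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

# Toy spectra (assumed): a same-source pair differs only by intensity
# scaling, a different-source pair differs in shape.
ref      = [1.0, 2.0, 3.0, 2.0, 1.0]
same_src = [2.0, 4.0, 6.0, 4.0, 2.0]   # scaled copy of ref
diff_src = [3.0, 1.0, 1.0, 2.0, 3.0]

a_same = spectral_angle(ref, same_src)
a_diff = spectral_angle(ref, diff_src)
```

Because the scaled copy points in the same direction as the reference, its angle is essentially zero, while the different-shape spectrum produces a clearly larger angle; a threshold on such angles (or ratios of them) is what separates same-source from different-source pairs.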

    Predict, diagnose, and treat chronic kidney disease with machine learning: a systematic literature review

    Objectives: In this systematic review, we aimed to assess how artificial intelligence (AI), including machine learning (ML) techniques, has been deployed to predict, diagnose, and treat chronic kidney disease (CKD). We systematically reviewed the available evidence on these innovative techniques to improve CKD diagnosis and patient management.
    Methods: We included English-language studies retrieved from PubMed. The review is therefore classified as a "rapid review", since it includes only one database and has language restrictions; the novelty and importance of the issue make missing relevant papers unlikely. We extracted 16 variables, including main aim, studied population, data source, sample size, problem type (regression, classification), predictors used, and performance metrics. We followed the Preferred Reporting Items for Systematic Reviews (PRISMA) approach; all main steps were done in duplicate. The review was registered on PROSPERO.
    Results: Of 648 studies initially retrieved, 68 articles met the inclusion criteria. Models, as reported by the authors, performed well, but the reported metrics were not homogeneous across articles, so direct comparison was not feasible. The most common aim was prediction of prognosis, followed by diagnosis of CKD. Algorithm generalizability and testing on diverse populations were rarely taken into account. We also examined the clinical evaluation and validation of the models/algorithms; only a fraction of the included studies, 6 out of 68, were performed in a clinical context.
    Conclusions: Machine learning is a promising tool for risk prediction, diagnosis, and therapy management for CKD patients. Nonetheless, future work is needed to address the interpretability, generalizability, and fairness of the models to ensure the safe application of such technologies in routine clinical practice.

    LEAP: Efficient and Automated Test Method for NLP Software

    The widespread adoption of DNNs in NLP software has highlighted the need for robustness testing, and researchers have proposed various automated techniques for generating adversarial test cases. However, existing methods suffer from two limitations: weak error-discovering capability, with success rates ranging from 0% to 24.6% for BERT-based NLP software, and time inefficiency, taking 177.8 s to 205.28 s per test case, which makes them impractical for time-constrained scenarios. To address these issues, this paper proposes LEAP, an automated test method that uses LEvy flight-based Adaptive Particle swarm optimization integrated with textual features to generate adversarial test cases. Specifically, Levy flight is adopted for population initialization to increase the diversity of generated test cases. An inertia-weight adaptive update operator improves the efficiency of LEAP's global optimization over high-dimensional text examples, and a mutation operator based on a greedy strategy reduces the search time. A series of experiments validates LEAP's ability to test NLP software: the average success rate of LEAP in generating adversarial test cases is 79.1%, which is 6.1% higher than the next best approach (PSOattack). While maintaining high success rates, LEAP reduces time overhead by up to 147.6 s compared with other heuristic-based methods. The experimental results also show that LEAP generates more transferable test cases and significantly enhances the robustness of DNN-based systems.
    Comment: Accepted at ASE 202
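The Levy-flight initialization that gives LEAP its name can be sketched with Mantegna's algorithm for generating heavy-tailed step lengths. The swarm size, dimensionality, and base point below are assumptions for illustration; in LEAP itself the search space consists of discrete word-substitution choices rather than real-valued coordinates.

```python
import math
import random

random.seed(3)

BETA = 1.5  # Levy exponent; 1 < beta <= 2 gives heavy-tailed steps

# Mantegna's algorithm: scale factor for the numerator Gaussian so
# that u / |v|^(1/beta) follows an approximate Levy distribution.
SIGMA = (math.gamma(1 + BETA) * math.sin(math.pi * BETA / 2) /
         (math.gamma((1 + BETA) / 2) * BETA * 2 ** ((BETA - 1) / 2))) ** (1 / BETA)

def levy_step():
    u = random.gauss(0, SIGMA)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / BETA)

# Initialise a swarm of 30 particles in a 5-dimensional search space
# around a base point, one Levy step per coordinate. Occasional very
# large steps spread the population more widely than Gaussian noise
# would, which is the diversity argument made in the abstract.
base = [0.0] * 5
swarm = [[b + levy_step() for b in base] for _ in range(30)]
```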