63 research outputs found

    Mastering the Production of Electric Vehicles as One of the Modern Instruments for the Development of the Iranian Automotive Industry

    The article analyzes the problems of introducing electric vehicles and their differences from cars with internal combustion engines (ICE). Cars have long been part of everyday life, yet in today's era of rapid technological progress it is clear that ICE vehicles are approaching the end of their existence. At present, developing the production of electric vehicles should be considered a promising direction for the Iranian automobile industry. This market in Iran is not yet occupied by foreign companies, so national companies have a chance to pursue a strategy of “growth together with the market”.

    An integrated neural network algorithm for optimum performance assessment of auto industry with multiple outputs and corrupted data and noise

    Encountering noisy and corrupted data is unavoidable in the real world. The auto industry sector (AIS), as one of the most significant industries, faces noisy and corrupted data owing to its rapid development, so performance assessment methods suited to this situation are valuable. Since Data Envelopment Analysis (DEA) cannot deal with noisy and corrupted data, alternative methods are needed, and artificial neural networks (ANNs) are a good alternative thanks to their flexibility and robustness in noisy situations. This study proposes a non-parametric efficiency frontier analysis method based on an adaptive neural network technique for measuring efficiency, as a complementary tool to the techniques commonly used in previous studies. The proposed computational method finds a stochastic frontier from a set of input-output observations and does not require explicit assumptions about the functional form of the frontier. To calculate efficiency scores for the auto industry in various countries, the algorithm uses an approach similar to econometric methods. Moreover, the effect of the AIS's returns to scale on its efficiency is included, and the unit used for the frontier correction is selected with regard to its scale (under the constant-returns-to-scale assumption). The algorithm can also calculate efficiency for multiple outputs. An example using real data is presented for illustrative purposes. In the application to auto industries, the neural network provides more robust results and identifies more efficient units than conventional methods, since better performance patterns are explored. To test the robustness of the efficiency results, the ability of the proposed ANN algorithm to deal with noisy and corrupted data is compared with DEA; the results show that the proposed algorithm is much more robust to noise and corruption in the input data.
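
    The following minimal sketch illustrates the general idea behind such an ANN frontier, not the paper's exact algorithm: an MLP is fitted to input-output data, the fitted surface is shifted up to envelop the observations (a COLS-style correction anchored on the best-performing unit), and each unit's efficiency is the ratio of its observed output to the frontier value. All data and dimensions are synthetic.

```python
# Illustrative sketch (assumptions: synthetic data, single output,
# COLS-style frontier correction); not the paper's exact algorithm.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(50, 2))           # inputs, e.g. labour, capital
true_output = 3.0 * X[:, 0] ** 0.5 * X[:, 1] ** 0.4
y = true_output * rng.uniform(0.6, 1.0, size=50)   # inefficiency + noise

# Fit a smooth production surface; averaging over the noise is what makes
# the ANN more tolerant of corrupted observations than DEA.
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(X, y)
fitted = np.clip(net.predict(X), 1e-6, None)

# Shift the surface up by the largest observed ratio so it envelops the
# data, then score each unit against this corrected frontier.
frontier = fitted * (y / fitted).max()
efficiency = y / frontier
print("most efficient unit:", int(np.argmax(efficiency)))
```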

    From Big Scholarly Data to Solution-Oriented Knowledge Repository

    The volume of scientific articles grows rapidly, producing a scientific basis for understanding and identifying research problems and state-of-the-art solutions. Despite the considerable significance of problem-solving information, existing scholarly recommender systems lack the ability to retrieve it from scientific articles for generating knowledge repositories and providing problem-solving recommendations. To address this issue, this paper proposes a novel framework to build solution-oriented knowledge repositories and provide recommendations for solving given research problems. The framework consists of three modules: a semantics-based information extraction module that mines research problems and solutions from massive academic papers; a knowledge assessment module based on a heterogeneous bibliometric graph and a ranking algorithm; and a knowledge repository generation module that produces solution-oriented maps with recommendations. Based on the framework, a prototype scholarly solution support system is implemented. A case study in the research field of intrusion detection demonstrates the effectiveness and efficiency of the proposed method.
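
    As a rough illustration of two of the three modules, the sketch below pairs naive cue-phrase extraction of problem and solution sentences with PageRank over a toy citation graph, standing in for the paper's semantics-based extraction and heterogeneous-graph ranking; all sentences, paper IDs, and edges are invented.

```python
# Illustrative sketch only: cue-phrase matching and plain PageRank stand in
# for the paper's semantics-based mining and heterogeneous-graph ranking.
import re
import networkx as nx

abstract = ("Intrusion detection suffers from high false-alarm rates. "
            "We propose an ensemble detector to address this problem.")

# Crude cue-phrase matching; the real module is semantics-based.
problems = re.findall(r"([^.]*suffers from[^.]*)\.", abstract)
solutions = re.findall(r"([^.]*[Ww]e propose[^.]*)\.", abstract)
print("problem: ", problems)
print("solution:", solutions)

# Toy citation graph: an edge u -> v means paper u cites paper v.
G = nx.DiGraph([("p1", "p2"), ("p3", "p2"), ("p3", "p1")])
scores = nx.pagerank(G, alpha=0.85)               # knowledge-assessment scores
print(sorted(scores, key=scores.get, reverse=True))
```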

    Analysing academic paper ranking algorithms using test data and benchmarks: an investigation

    Research on academic paper ranking has received great attention in recent years, and many algorithms have been proposed to automatically assess large numbers of papers. How to evaluate or analyse the performance of these ranking algorithms remains an open research question. In theory, evaluating an algorithm requires comparing its ranking against a ground-truth paper list. However, no such ground truth exists for scholarly ranking, because there is not, and will not be, an absolutely unbiased, objective, and unified standard for formulating the impact of papers. In practice, researchers therefore evaluate or analyse their proposed ranking algorithms by different methods, such as using domain-expert decisions (test data) or comparing against predefined ranking benchmarks. The question is whether different methods lead to different analysis results, and if so, how the performance of ranking algorithms should be analysed. To answer these questions, this study compares test data and different citation-based benchmarks, examining their relationships and assessing the effect of the method choice on the analysis results. Our experiments show that analysis results do differ when test data and different benchmarks are employed, and that relying exclusively on one benchmark or on test data may yield inadequate conclusions. In addition, a guideline is summarised for conducting a comprehensive analysis using multiple benchmarks from different perspectives, which can help provide a systematic understanding and profile of the analysed algorithms.
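
    The sketch below reproduces the core of the experiment in miniature: the same algorithmic ranking is correlated (Spearman and Kendall) against synthetic stand-ins for expert test data and two citation-based benchmarks, and the verdict visibly depends on which reference is chosen. All score vectors and noise levels are invented.

```python
# Minimal sketch: one ranking, three evaluation references, three verdicts.
import numpy as np
from scipy.stats import kendalltau, spearmanr

rng = np.random.default_rng(1)
n = 200
algo_scores = rng.random(n)                          # ranking algorithm output
expert      = algo_scores + rng.normal(0, 0.15, n)   # test data: expert judgements
citations   = algo_scores + rng.normal(0, 0.40, n)   # benchmark 1: citation counts
field_norm  = algo_scores + rng.normal(0, 0.25, n)   # benchmark 2: field-normalised

for name, bench in [("expert test data", expert),
                    ("citation counts", citations),
                    ("field-normalised", field_norm)]:
    rho, _ = spearmanr(algo_scores, bench)
    tau, _ = kendalltau(algo_scores, bench)
    print(f"{name:18s} Spearman={rho:.3f} Kendall={tau:.3f}")
# The spread across rows is why relying on a single benchmark is inadequate.
```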

    A data-driven decision support framework for DEA target setting: an explainable AI approach

    The intention of target setting for Decision-Making Units (DMUs) in Data Envelopment Analysis (DEA) is to perform better than their peers or to reach a reference efficiency level. Most of the time, however, the targets are derived from mathematical models and are not achievable in practice. Moreover, these models decrease inputs or increase outputs in ways that might not be feasible given a DMU's real-world potential. We propose a data-driven decision support framework that sets actionable and feasible targets based on the vital inputs and outputs. To do so, DMUs are first classified into their corresponding Efficiency Frontier (EF) levels using a multiple-EFs approach and a machine learning classifier. The vital inputs and outputs are then determined using an Explainable Artificial Intelligence (XAI) method. Finally, a Multi-Objective Counterfactual Explanation based on DEA (MOCE-DEA) is developed to lead a DMU toward the reference EF by adjusting actionable and feasible inputs and outputs. We studied Iranian hospitals to evaluate the proposed framework and present two cases to demonstrate its mechanism. The results show that the performance of the studied DMUs improves to reach the reference EF. A validation with the primal DEA model then shows the robust improvement of the DMUs after their original values are adjusted according to the solutions generated by the framework, demonstrating that the adjusted values also improve the DMUs' performance in the primal DEA model.
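
    For readers unfamiliar with the primal model used in the validation step, the following sketch solves the input-oriented CCR envelopment LP with scipy and derives a naive radial input target; the hospital data is synthetic, and the actionability constraints of MOCE-DEA are not modelled here.

```python
# Hedged sketch of the primal (input-oriented CCR) DEA model; the naive
# radial target below is what MOCE-DEA refines into feasible adjustments.
import numpy as np
from scipy.optimize import linprog

X = np.array([[5, 3], [8, 1], [4, 4], [9, 5]], float)  # inputs (e.g. beds, staff)
Y = np.array([[2], [3], [1], [4]], float)              # outputs (e.g. treated patients)

def ccr_efficiency(k):
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]              # minimise theta over (theta, lambdas)
    A_in = np.c_[-X[k], X.T]                 # sum_j lambda_j x_j <= theta * x_k
    A_out = np.c_[np.zeros(s), -Y.T]         # sum_j lambda_j y_j >= y_k
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

for k in range(len(X)):
    theta = ccr_efficiency(k)
    # Naive target: radially contract inputs onto the frontier.
    print(f"DMU {k}: efficiency={theta:.3f}, input target={theta * X[k]}")
```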

    An Investigation of Hepatitis B Virus Genome using Markov Models

    The human genome encodes a family of editing enzymes known as APOBEC3 (apolipoprotein B mRNA editing enzyme, catalytic polypeptide-like 3). Several family members, such as APOBEC3G, APOBEC3F, and APOBEC3H haplotype II, exhibit activity against viruses such as HIV. These enzymes induce C-to-U mutations in the negative strand of viral genomes, resulting in multiple G-to-A changes, commonly referred to as 'hypermutation.' Mutations catalyzed by these enzymes are sequence context-dependent in the HIV genome; for instance, APOBEC3G preferentially mutates G within GG, TGG, and TGGG contexts, while other members mutate G within GA, TGA, and TGAA contexts. However, the same sequence contexts have not been explored in relation to these enzymes and HBV. In this study, our objective is to identify the mutational footprint of APOBEC3 enzymes in the HBV genome. To achieve this, we employ a multivariable data analytics technique to investigate motif preferences and potential sequence hierarchies of mutation by APOBEC3 enzymes, using full-genome HBV sequences from a diverse range of naturally infected patients. This approach allows us to distinguish between normal and hypermutated sequences based on the representation of mono- to tetra-nucleotide motifs. Additionally, we aim to identify motifs associated with hypermutation induced by different APOBEC3 enzymes in HBV genomes. Our analyses reveal that either APOBEC3 enzymes are not active against HBV, or the induction of G-to-A mutations by these enzymes is not sequence context-dependent in the HBV genome.
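
    A small sketch of the motif-representation step: counting mono- to tetra-nucleotide frequencies gives the feature vector in which normal and hypermutated sequences can be compared. The sequence below is a made-up fragment, not a real HBV genome.

```python
# Sketch of k-mer (motif) frequency features, k = 1..4; toy sequence only.
from collections import Counter
from itertools import product

seq = "ATGGACATTGACCCTTATAAAGAATTTGGAGCT"  # toy stand-in for a full HBV genome

def kmer_freqs(seq, k):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    # Include zero counts so every sequence maps to the same feature vector.
    return {"".join(m): counts["".join(m)] / total
            for m in product("ACGT", repeat=k)}

features = {}
for k in range(1, 5):                       # mono- to tetra-nucleotide motifs
    features.update(kmer_freqs(seq, k))

# APOBEC3G-style (GG) vs APOBEC3F-style (GA) target contexts around G:
print("GG:", features["GG"], "GA:", features["GA"])
```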

    An explainable data-driven decision support framework for strategic customer development

    Financial institutions benefit from the advanced predictive performance of machine learning algorithms in automatic decision-making for credit scoring. However, two main challenges hamper the applicability of machine learning algorithms in practice: the complex, black-box nature of the algorithms, which hinders their understandability, and the inability to guide rejected customers toward a successful application. Since customer relationship management is one of the main responsibilities of financial institutions, they must clarify the decision-making process in order to guide customers; yet they are unwilling to disclose their decision-making procedure, to prevent potential risks from the side of customers or competitors. Hence, in this study, a decision support framework is proposed that clarifies the decision-making process and simultaneously models strategic decision-making to guide rejected customers. To do so, after customers are classified into their corresponding groups, the SHapley Additive exPlanations (SHAP) method is exploited to extract the features with the greatest impact on the prediction outcome, both globally and locally. Then, based on a benchmarking approach, an equivalent approved peer is found for each rejected customer as a target for modifying the application. To find the optimal modified values for a counterfactual prediction, a multi-objective game-based counterfactual explanation model is developed, using the prisoner's dilemma game as the constraint to simulate strategic decision-making. After optimization, the decision is reported to the customer with respect to their credential background. A public data set is used to elaborate on the proposed framework, which can successfully generate counterfactual predictions by modifying the respective features.
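
    The sketch below mirrors the framework's shape under simplifying assumptions: permutation importance stands in for SHAP, the nearest approved peer provides the benchmark target, and the game-based counterfactual optimisation is omitted; the application data is synthetic.

```python
# Illustrative sketch, not the paper's pipeline: permutation importance
# replaces SHAP, and the peer gap is only a naive modification target.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 3))                             # e.g. income, debt, history
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)   # 1 = approved

clf = RandomForestClassifier(random_state=0).fit(X, y)
imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
print("global feature impact:", imp.importances_mean.round(3))

# Benchmarking step: find the closest approved peer for a rejected
# applicant and report the feature gaps as a modification target.
rejected = X[y == 0][0]
approved = X[y == 1]
peer = approved[np.argmin(np.linalg.norm(approved - rejected, axis=1))]
print("feature gaps to close:", (peer - rejected).round(3))
```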

    Improved Estimation of SIR in Mobile CDMA Systems by Integration of Artificial Neural Network and Time Series Technique

    Abstract: This study presents an integrated Artificial Neural Network (ANN) and time series framework to estimate and predict the Signal-to-Interference Ratio (SIR) in Direct Sequence Code Division Multiple Access (DS/CDMA) systems. The uncertain behavior of SIR is difficult to model with a conventional ANN or time series model alone, and the integrated algorithm can be an ideal substitute in such cases. The proposed algorithm uses supervised multilayer perceptron (MLP) networks, and all MLP-ANN types are examined in the present study. Finally, the coefficient of determination (R²) is used to select the preferred model from among the constructed MLP-ANNs. A unique feature of the proposed algorithm is its use of the Autocorrelation Function (ACF) to define the input variables, whereas conventional methods rely on trial and error. This is the first study to integrate ANN and time series techniques for improved estimation of SIR in mobile CDMA systems.
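
    A hedged sketch of the integrated idea: lagged inputs are selected from the autocorrelation function instead of by trial and error, and an MLP is then fitted to predict the next SIR sample, with R² on a held-out tail as the selection criterion. The SIR trace is simulated, not a DS/CDMA measurement.

```python
# Sketch: ACF-based lag selection feeding an MLP regressor; synthetic SIR.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.arange(500)
sir = 10 + 2 * np.sin(0.2 * t) + rng.normal(0, 0.3, t.size)  # simulated SIR (dB)

def acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# Keep the lags with strong autocorrelation; these become the input variables.
lags = [L for L in range(1, 21) if abs(acf(sir, L)) > 0.5]
print("ACF-selected lags:", lags)

p = max(lags)
X = np.column_stack([sir[p - L:-L] for L in lags])   # lagged regressors
y = sir[p:]
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
net.fit(X[:-50], y[:-50])
print("R^2 on held-out tail:", round(net.score(X[-50:], y[-50:]), 3))
```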

    Few-shot Class-incremental Learning for 3D Point Cloud Objects

    Few-shot class-incremental learning (FSCIL) aims to incrementally fine-tune a model (trained on base classes) for a novel set of classes using a few examples, without forgetting the previous training. Recent efforts address this problem primarily on 2D images. However, due to the advancement of camera technology, 3D point cloud data has become more available than ever, which warrants considering FSCIL on 3D data. This paper addresses FSCIL in the 3D domain. In addition to the well-known issues of catastrophic forgetting of past knowledge and overfitting of few-shot data, 3D FSCIL brings new challenges. For example, base classes may contain many synthetic instances in a realistic scenario, while only a few real-scanned samples (from RGBD sensors) of novel classes are available in the incremental steps. Due to this variation from synthetic to real data, FSCIL endures additional challenges that degrade performance in later incremental steps. We attempt to solve this problem using Microshapes (orthogonal basis vectors), which describe any 3D object using a pre-defined set of rules, supporting incremental training with few-shot examples while minimizing the synthetic-to-real data variation. We propose new test protocols for 3D FSCIL using popular synthetic datasets (ModelNet and ShapeNet) and 3D real-scanned datasets (ScanObjectNN and CO3D). Comparisons with state-of-the-art methods establish the effectiveness of our approach in the 3D domain.
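
    Under our reading of the abstract, the sketch below caricatures the Microshape idea: a single pre-defined orthonormal basis describes both synthetic and real-scanned feature vectors, shrinking the synthetic-to-real gap; the features are random stand-ins, not the paper's actual point-cloud pipeline.

```python
# Rough sketch under stated assumptions: one shared orthonormal basis
# ("Microshapes") as a common description space for both domains.
import numpy as np

rng = np.random.default_rng(5)
d, n_basis = 64, 16

# A pre-defined orthonormal basis, fixed before any incremental training.
Q, _ = np.linalg.qr(rng.normal(size=(d, n_basis)))

synthetic_feat = rng.normal(size=d)                       # e.g. ModelNet embedding
real_feat = synthetic_feat + rng.normal(0, 0.2, size=d)   # noisy real scan

# Describing both domains by their coefficients in the same basis shrinks
# the synthetic-to-real gap relative to the raw feature space.
c_syn, c_real = Q.T @ synthetic_feat, Q.T @ real_feat
print("raw-space gap: ", np.linalg.norm(synthetic_feat - real_feat).round(3))
print("microshape gap:", np.linalg.norm(c_syn - c_real).round(3))
```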