
    TransCORALNet: A Two-Stream Transformer CORAL Networks for Supply Chain Credit Assessment Cold Start

    This paper proposes TransCORALNet, an interpretable two-stream transformer CORAL network for supply chain credit assessment under industry segmentation and the cold start problem. The model aims to provide accurate credit assessment predictions for new supply chain borrowers with limited historical data. A two-stream domain adaptation architecture with a correlation alignment (CORAL) loss serves as the core model and is equipped with a transformer, which provides insights into the learned features and allows efficient parallelization during training. Thanks to the domain adaptation capability of the proposed model, the domain shift between the source and target domains is minimized. The model therefore generalizes well when the source and target do not follow the same distribution and only a limited number of labeled target instances exist. Furthermore, Local Interpretable Model-agnostic Explanations (LIME) is employed to provide more insight into the model's predictions and to identify the key features contributing to supply chain credit assessment decisions. The proposed model addresses four significant supply chain credit assessment challenges: domain shift, cold start, class imbalance, and interpretability. Experimental results on a real-world data set demonstrate the superiority of TransCORALNet over a number of state-of-the-art baselines in terms of accuracy. The code is available on GitHub at https://github.com/JieJieNiu/TransCORALN . Comment: 13 pages, 7 figures.
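    For context, correlation alignment minimizes the distance between the second-order statistics (covariances) of source and target features. The sketch below is a minimal, generic PyTorch implementation of that loss; it follows the standard CORAL formulation rather than the authors' released code, and the names `f_src`, `f_tgt`, and `lambda_coral` in the usage comment are illustrative only.

    ```python
    # Minimal sketch of a correlation alignment (CORAL) loss for domain
    # adaptation. Generic formulation; not taken from the TransCORALNet repo.
    import torch

    def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        """Scaled squared Frobenius distance between feature covariances.

        source, target: (batch, d) feature matrices from the two streams.
        """
        d = source.size(1)

        def covariance(x: torch.Tensor) -> torch.Tensor:
            x = x - x.mean(dim=0, keepdim=True)    # center the features
            return (x.t() @ x) / (x.size(0) - 1)   # (d, d) covariance estimate

        c_s = covariance(source)
        c_t = covariance(target)
        return ((c_s - c_t) ** 2).sum() / (4 * d * d)

    # Typical usage (illustrative names): add the alignment term to the task loss.
    # total_loss = classification_loss + lambda_coral * coral_loss(f_src, f_tgt)
    ```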

    Deep Learning in Lane Marking Detection: A Survey

    Lane marking detection is a fundamental and crucial step in intelligent driving systems. It not only provides relevant road condition information to prevent lane departure but also assists vehicle positioning and front vehicle detection. However, lane marking detection faces many challenges, including extreme lighting, missing lane markings, and occlusion by obstacles. Recently, deep learning-based algorithms have drawn much attention in the intelligent driving community because of their excellent performance. In this paper, we review deep learning methods for lane marking detection, focusing on their network structures and optimization objectives, the two key determinants of their success. In addition, we summarize existing lane-related datasets, evaluation criteria, and common data processing techniques. We also compare the detection performance and running time of various methods, and conclude with current challenges and future trends for deep learning-based lane marking detection algorithms.

    Journey of Artificial Intelligence Frontier: A Comprehensive Overview

    The field of Artificial Intelligence (AI) is a transformational force with limitless promise in an age of rapid technological growth. This paper sets out on a thorough tour through the frontiers of AI, providing a detailed understanding of its complex landscape. It begins with the historical context and development of AI, tracing its beginnings and growth. Along the way, fundamental ideas such as Machine Learning, Neural Networks, and Natural Language Processing are explored. Ethical issues and societal repercussions take center stage, emphasising the significance of responsible AI application. The journey closes by looking ahead to AI's potential for human-AI collaboration, ground-breaking discoveries, and the difficult obstacles that lie ahead. By thoroughly navigating this terrain, the paper offers a well-informed view of AI's past, its present, and the unexplored regions it promises to explore.

    Generative AI in Supply Chain Management

    The integration of Generative Artificial Intelligence (AI) into supply chain management ushers in a new age of creativity and efficiency. This in-depth study examines the diverse effects of generative AI on supply chain operations, including risk management, inventory optimization, procurement, logistics, and more. Thanks to its predictive capacity, generative AI has reshaped traditional methods, enabling companies to anticipate demand, optimize inventory, and expedite procurement procedures with previously unseen accuracy. Its dynamic decision-making capabilities allow real-time adaptation, promoting resilience against disruptions and enabling proactive responses to changing market conditions. However, implementing generative AI in supply chains poses several challenges: skill gaps, ethical considerations, scalability issues, and data integration complexity are obstacles that require strategic navigation and organizational preparedness. Future directions for generative AI in supply networks are highly promising, with substantial improvements expected from advances in explainable AI, predictive analytics, seamless integration, and ethical frameworks. Autonomous supply chains, adaptive resilience to disturbances, and increased transparency in decision-making could help redefine supply chain models.

    Toward enhancement of deep learning techniques using fuzzy logic: a survey

    Deep learning has recently emerged as a branch of artificial intelligence (AI) and machine learning (ML) that imitates the way humans acquire particular kinds of knowledge. It is considered an essential element of data science, which comprises statistics and predictive modeling, and it makes collecting, interpreting, and analyzing big data easier and faster. Deep neural networks are a kind of ML model in which non-linear processing units are layered to extract particular features from the inputs. Training such networks is very expensive, however, and the outcome depends on the optimization method used, so optimal results may not be obtained. Deep learning techniques are also vulnerable to data noise. For these reasons, fuzzy systems are used to improve the performance of deep learning algorithms, especially in combination with neural networks, and to improve the representation accuracy of deep learning models. This survey reviews deep learning based fuzzy logic models and techniques presented in previous studies, in which fuzzy logic is used to improve deep learning performance. The approaches are divided into two categories based on how the two paradigms are combined. Furthermore, the practicality of the models in real-world settings is discussed.
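    To make the fusion idea concrete, the sketch below shows one common pattern from this literature: crisp inputs are first mapped to fuzzy membership degrees, which then feed an ordinary dense network. It is an illustrative toy example under assumed choices (Gaussian membership functions, arbitrary layer sizes), not an implementation of any specific model covered by the survey.

    ```python
    # Toy sketch of a fuzzy-deep hybrid: a fuzzification layer followed by
    # dense layers. Illustrative only; not from any surveyed paper.
    import torch
    import torch.nn as nn

    class FuzzyLayer(nn.Module):
        """Gaussian membership functions with learnable centers and widths."""
        def __init__(self, in_features: int, n_memberships: int):
            super().__init__()
            self.centers = nn.Parameter(torch.randn(in_features, n_memberships))
            self.log_sigma = nn.Parameter(torch.zeros(in_features, n_memberships))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, in_features) -> memberships: (batch, in_features * n_memberships)
            diff = x.unsqueeze(-1) - self.centers            # (batch, in, n_mem)
            mu = torch.exp(-0.5 * (diff / self.log_sigma.exp()) ** 2)
            return mu.flatten(start_dim=1)                   # membership degrees as features

    # Fuzzified inputs feed a small classifier (sizes chosen arbitrarily).
    model = nn.Sequential(
        FuzzyLayer(in_features=4, n_memberships=3),
        nn.Linear(4 * 3, 16),
        nn.ReLU(),
        nn.Linear(16, 2),
    )
    ```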

    Novel deep cross-domain framework for fault diagnosis of rotary machinery in prognostics and health management

    Improving the reliability of engineered systems is a crucial problem in many engineering fields, such as the aerospace, nuclear energy, and water desalination industries. This requires efficient and effective system health monitoring methods, including processing and analyzing massive amounts of machinery data to detect anomalies and performing diagnosis and prognosis. In recent years, deep learning has been a fast-growing field and has shown promising results for Prognostics and Health Management (PHM) in interpreting condition monitoring signals such as vibration, acoustic emission, and pressure, owing to its capacity to mine complex representations from raw data. This doctoral research provides a systematic review of state-of-the-art deep learning-based PHM frameworks, an empirical analysis of bearing fault diagnosis benchmarks, and a novel multi-source domain adaptation framework. It emphasizes the most recent trends within the field and presents the benefits and potential of state-of-the-art deep neural networks for system health management. The limitations and challenges of existing technologies are also discussed, pointing to opportunities for future research. The empirical study reports the evaluation results of existing models on bearing fault diagnosis benchmark datasets in terms of performance metrics such as accuracy and training time, providing a reference point for comparing and testing new models. A novel multi-source domain adaptation framework for fault diagnosis of rotary machinery is also proposed, which aligns the domains at both the feature level and the task level. The proposed framework transfers knowledge from multiple labeled source domains to a single unlabeled target domain by reducing the feature distribution discrepancy between the target domain and each source domain. The model can easily be reduced to a single-source domain adaptation problem and can readily be adapted to unsupervised domain adaptation problems in other fields such as image classification and image segmentation. Furthermore, the proposed model is extended with a novel conditional weighting mechanism that aligns the class-conditional probabilities of the domains and reduces the effect of irrelevant source domains, a critical issue in multi-source domain adaptation algorithms. Experimental verification shows the superiority of the proposed framework over state-of-the-art multi-source domain adaptation models.
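    As a rough illustration of the feature-level alignment described above, the sketch below combines, for each labeled source domain, a supervised task loss with a discrepancy term between that source's features and the unlabeled target's features, weighted per source. The CORAL-style discrepancy, the weighting scheme, and names such as `weights` and `lam` are assumptions standing in for the thesis's actual loss functions and conditional weighting mechanism.

    ```python
    # Illustrative multi-source domain adaptation objective (not the thesis code):
    # per-source task loss plus a source-vs-target feature discrepancy, weighted.
    import torch
    import torch.nn.functional as F

    def covariance_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        """A simple CORAL-style discrepancy between two (batch, d) feature sets."""
        def cov(x: torch.Tensor) -> torch.Tensor:
            x = x - x.mean(dim=0, keepdim=True)
            return (x.t() @ x) / (x.size(0) - 1)
        d = a.size(1)
        return ((cov(a) - cov(b)) ** 2).sum() / (4 * d * d)

    def multi_source_loss(source_feats, source_logits, source_labels,
                          target_feats, weights, lam=1.0):
        """source_* are lists with one entry per labeled source domain;
        target_feats come from the unlabeled target batch;
        weights are per-source weights (e.g., from a conditional weighting scheme)."""
        total = torch.zeros(())
        for w, f_s, logits, y in zip(weights, source_feats, source_logits, source_labels):
            task = F.cross_entropy(logits, y)                # supervised source loss
            align = covariance_distance(f_s, target_feats)   # feature-level alignment
            total = total + w * (task + lam * align)
        return total
    ```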

    Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework

    This paper examines the current landscape of AI regulations, highlighting the divergent approaches being taken, and proposes an alternative contextual, coherent, and commensurable (3C) framework. The EU, Canada, South Korea, and Brazil follow a horizontal or lateral approach that postulates the homogeneity of AI systems, seeks to identify common causes of harm, and demands uniform human interventions. In contrast, the U.K., Israel, Switzerland, Japan, and China have pursued a context-specific or modular approach, tailoring regulations to the specific use cases of AI systems. The U.S. is reevaluating its strategy, with growing support for controlling existential risks associated with AI. Addressing such fragmentation of AI regulations is crucial to ensure the interoperability of AI. The present degree of proportionality, granularity, and foreseeability of the EU AI Act is not sufficient to garner consensus. The context-specific approach holds greater promise but requires further development in terms of detail, coherency, and commensurability. To strike a balance, this paper proposes a hybrid 3C framework. To ensure contextuality, the framework categorizes AI into distinct types based on their usage and interaction with humans: autonomous, allocative, punitive, cognitive, and generative AI. To ensure coherency, each category is assigned specific regulatory objectives: safety for autonomous AI; fairness and explainability for allocative AI; accuracy and explainability for punitive AI; accuracy, robustness, and privacy for cognitive AI; and the mitigation of infringement and misuse for generative AI. To ensure commensurability, the framework promotes the adoption of international industry standards that convert principles into quantifiable metrics. In doing so, the framework is expected to foster international collaboration and standardization without imposing excessive compliance costs.