
    Generalized Completed Local Binary Patterns for Time-Efficient Steel Surface Defect Classification

    Efficient defect classification is one of the most important preconditions for online quality inspection of hot-rolled strip steel. It is extremely challenging owing to varied defect appearances, large intraclass variation, ambiguous interclass distances, and unstable gray values. In this paper, a generalized completed local binary patterns (GCLBP) framework is proposed. Two variants, improved completed local binary patterns (ICLBP) and improved completed noise-invariant local-structure patterns (ICNLP), are developed under the GCLBP framework for steel surface defect classification. Unlike conventional local binary pattern variants, descriptive information hidden in nonuniform patterns is mined for better defect representation. This paper focuses on the following aspects. First, a lightweight searching algorithm is established for exploiting the dominant nonuniform patterns (DNUPs). Second, a hybrid pattern code mapping mechanism is proposed to encode all the uniform patterns and DNUPs. Third, feature extraction is carried out under the GCLBP framework. Finally, histogram matching is efficiently accomplished by a simple nearest-neighbor classifier. Classification accuracy and time efficiency are verified on a widely recognized texture database (Outex) and a real-world steel surface defect database (Northeastern University, NEU). The experimental results indicate that the proposed method can be widely applied in online automatic optical inspection instruments for hot-rolled strip steel.
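
    A minimal sketch of the baseline this framework builds on, not the authors' GCLBP/ICLBP/ICNLP method itself: a conventional 8-neighbour uniform-LBP histogram matched with a chi-square nearest-neighbour rule. The authors' contribution replaces the single shared non-uniform bin with dominant non-uniform patterns and a hybrid code mapping; everything below (function names, bin counts, the distance choice) is illustrative and assumes only NumPy.

        import numpy as np

        def lbp_codes(img):
            """8-bit LBP code for each interior pixel of a 2-D grayscale array."""
            c = img[1:-1, 1:-1]
            # Clockwise neighbour offsets, starting at the top-left pixel.
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            codes = np.zeros(c.shape, dtype=np.int32)
            for bit, (dy, dx) in enumerate(offsets):
                nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
                codes += (nb >= c).astype(np.int32) << bit
            return codes

        def uniform_lut():
            """Map each 8-bit code to a bin: the 58 uniform patterns get their own
            bins, all non-uniform patterns share bin 58 (the part GCLBP refines)."""
            lut = np.full(256, 58, dtype=np.int32)
            next_bin = 0
            for code in range(256):
                bits = [(code >> i) & 1 for i in range(8)]
                if sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2:
                    lut[code] = next_bin
                    next_bin += 1
            return lut

        def lbp_histogram(img):
            """Normalised 59-bin uniform-LBP histogram used as the texture descriptor."""
            hist = np.bincount(uniform_lut()[lbp_codes(img)].ravel(), minlength=59)
            return hist / hist.sum()

        def classify(query_img, gallery):
            """Nearest-neighbour matching with the chi-square histogram distance.
            gallery is a list of (reference_histogram, defect_label) pairs."""
            q = lbp_histogram(query_img)
            chi2 = lambda h: np.sum((q - h) ** 2 / (q + h + 1e-10))
            return min(gallery, key=lambda item: chi2(item[0]))[1]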

    A review of the use of artificial intelligence methods in infrastructure systems

    The artificial intelligence (AI) revolution offers significant opportunities to capitalise on the growth of digitalisation and has the potential to enable the ‘system of systems’ approach required in increasingly complex infrastructure systems. This paper reviews the extent to which research in economic infrastructure sectors has engaged with fields of AI, to investigate the specific AI methods chosen and the purposes to which they have been applied both within and across sectors. Machine learning is found to dominate the research in this field, with methods such as artificial neural networks, support vector machines, and random forests among the most popular. The automated reasoning technique of fuzzy logic has also seen widespread use, due to its ability to incorporate uncertainties in input variables. Across the infrastructure sectors of energy, water and wastewater, transport, and telecommunications, the main purposes to which AI has been applied are network provision, forecasting, routing, maintenance and security, and network quality management. The data-driven nature of AI offers significant flexibility, and work has been conducted across a range of network sizes and at different temporal and geographic scales. However, there remains a lack of integration of planning and policy concerns, such as stakeholder engagement and quantitative feasibility assessment, and the majority of research focuses on a specific type of infrastructure, with an absence of work that looks beyond individual economic sectors. To enable solutions to be implemented in real-world infrastructure systems, research will need to move away from a siloed view and adopt a more interdisciplinary perspective that considers the increasing interconnectedness of these systems.
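
    As a hedged illustration of the class of method the review finds most popular, the sketch below fits a random forest to a purely synthetic short-term demand-forecasting task. The features, data, and error figure are hypothetical and not drawn from any reviewed study; it assumes NumPy and scikit-learn are available.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_absolute_error
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 2000
        # Hypothetical features: hour of day, day of week, temperature, previous-day demand.
        hour = rng.integers(0, 24, n)
        dow = rng.integers(0, 7, n)
        temp = rng.normal(15, 8, n)
        prev = rng.normal(100, 20, n)
        # Synthetic demand with daily and weekly structure plus noise (illustrative only).
        demand = (80 + 0.6 * prev + 10 * np.sin(2 * np.pi * hour / 24)
                  - 5 * (dow >= 5) + 0.8 * temp + rng.normal(0, 5, n))

        X = np.column_stack([hour, dow, temp, prev])
        X_tr, X_te, y_tr, y_te = train_test_split(X, demand, test_size=0.25, random_state=0)

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_tr, y_tr)
        print("MAE on held-out data:", mean_absolute_error(y_te, model.predict(X_te)))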

    Machine Learning Meets Communication Networks: Current Trends and Future Challenges

    The growing network density and unprecedented increase in network traffic, caused by the massively expanding number of connected devices and online services, require intelligent network operations. Machine Learning (ML) has been applied in this regard in different types of networks and networking technologies to meet the requirements of future communicating devices and services. In this article, we provide a detailed account of current research on the application of ML in communication networks and shed light on future research challenges. Research on the application of ML in communication networks is described across: i) the three layers, i.e., physical, access, and network layers; and ii) novel computing and networking concepts such as Multi-access Edge Computing (MEC), Software Defined Networking (SDN), and Network Functions Virtualization (NFV). A brief overview of ML-based network security is also provided. Important future research challenges are identified and presented to help spur further research in key areas in this direction.

    Performance Analysis Of Data-Driven Algorithms In Detecting Intrusions On Smart Grid

    The traditional power grid is no longer a practical solution for power delivery due to several shortcomings, including chronic blackouts, energy storage issues, high asset costs, and high carbon emissions. There is therefore a serious need for better, cheaper, and cleaner power grid technology that addresses these limitations. A smart grid is a holistic solution to these issues that consists of a variety of operational and energy measures. This technology can deliver energy to end-users through a two-way flow of communication and is expected to generate reliable, efficient, and clean power by integrating multiple technologies. It promises reliability, improved functionality, and economical means of power transmission and distribution, and it decreases greenhouse emissions by transferring clean, affordable, and efficient energy to users. The smart grid provides several benefits, such as increased grid resilience, self-healing, and improved system performance. Despite these benefits, the network has been the target of a number of cyber-attacks that violate its availability, integrity, confidentiality, and accountability. For instance, in 2021 a cyber-attack on a U.S. power system shut down the power grid, leaving approximately 100,000 people without power. Another threat to U.S. smart grids, in March 2018, targeted multiple nuclear power plants and water equipment. These incidents illustrate why strong security approaches are needed in smart grids to detect and mitigate sophisticated cyber-attacks. For this purpose, the US National Electric Sector Cybersecurity Organization and the Department of Energy have joined efforts with other federal agencies, including the Cybersecurity for Energy Delivery Systems and the Federal Energy Regulatory Commission, to investigate the security risks of smart grid networks. Their investigation shows that the smart grid requires reliable solutions to defend against and prevent cyber-attacks and vulnerability issues, and that with emerging technologies, including 5G and 6G, the smart grid may become more vulnerable to multistage cyber-attacks. A number of studies have been conducted to identify, detect, and investigate the vulnerabilities of smart grid networks. However, existing techniques have fundamental limitations, such as low detection rates, high false-positive rates, high misdetection rates, data poisoning, data quality and processing issues, lack of scalability, and difficulties handling huge volumes of data, and therefore cannot ensure safe, efficient, and dependable communication for smart grid networks.

    The goal of this dissertation is to investigate the efficiency of machine learning in detecting cyber-attacks on smart grids. The proposed methods are based on supervised and unsupervised machine learning, deep learning, reinforcement learning, and online learning models. These models must be trained, tested, and validated using a reliable dataset; in this dissertation, CICDDoS 2019 was used for this purpose. The results show that, among the supervised machine learning models, the ensemble models outperform the other traditional models. Among the deep learning models, the dense neural network family provides satisfactory results for detecting and classifying intrusions on the smart grid. Among the unsupervised models, the variational autoencoder provides the highest performance. In reinforcement learning, the proposed Capsule Q-learning provides higher detection and lower misdetection rates than the other models in the literature. In online learning, the Online Sequential Euclidean Distance Routing Capsule Network model provides significantly better results in detecting intrusion attacks on the smart grid than the other deep online models.
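
    A minimal sketch in the spirit of the supervised-ensemble result reported above, not the dissertation's actual pipeline: a random-forest ensemble is compared with a logistic-regression baseline on synthetic flow features standing in for a dataset such as CICDDoS 2019. Feature names, distributions, and labels are hypothetical; it assumes NumPy and scikit-learn.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        n = 5000
        # Hypothetical flow features: packets/s, bytes/s, mean inter-arrival time (s), duration (s).
        benign = rng.normal([50, 4e4, 0.02, 3.0], [15, 1e4, 0.01, 1.0], size=(n // 2, 4))
        attack = rng.normal([400, 3e5, 0.002, 0.5], [80, 5e4, 0.001, 0.2], size=(n // 2, 4))
        X = np.vstack([benign, attack])
        y = np.array([0] * (n // 2) + [1] * (n // 2))  # 0 = benign, 1 = DDoS-like

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

        baseline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
        ensemble = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_tr, y_tr)

        for name, model in [("logistic baseline", baseline), ("random-forest ensemble", ensemble)]:
            print(name)
            print(classification_report(y_te, model.predict(X_te), digits=3))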

    A review of artificial intelligence in prostate cancer detection on imaging

    A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis and their current limitations, including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
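
    As an illustrative aside not taken from the review: segmentation models like those surveyed for prostate gland delineation are commonly scored with the Dice similarity coefficient, sketched below on synthetic binary masks. Mask shapes and values are hypothetical; it assumes NumPy.

        import numpy as np

        def dice(pred, target, eps=1e-7):
            """Dice overlap between two binary masks (1 = gland, 0 = background)."""
            pred = pred.astype(bool)
            target = target.astype(bool)
            intersection = np.logical_and(pred, target).sum()
            return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

        # Toy example: two partially overlapping rectangular "gland" masks on a 128x128 slice.
        pred = np.zeros((128, 128), dtype=np.uint8)
        target = np.zeros((128, 128), dtype=np.uint8)
        pred[30:90, 40:100] = 1
        target[40:100, 40:100] = 1
        print(f"Dice = {dice(pred, target):.3f}")  # ~0.833 for this toy overlap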

    An overview on structural health monitoring: From the current state-of-the-art to new bio-inspired sensing paradigms

    In the last decades, the field of structural health monitoring (SHM) has grown exponentially. Yet several technical constraints persist that prevent the full realization of its potential. To upgrade current state-of-the-art technologies, researchers have started to look at nature’s creations, giving rise to a new field called ‘biomimetics’, which operates across the border between living and non-living systems. The highly optimised and time-tested performance of biological assemblies continues to inspire the development of bio-inspired artificial counterparts that can potentially outperform conventional systems. After a critical appraisal of the current status of SHM, this paper presents a review of selected works related to neural-, cochlea- and immune-inspired algorithms implemented in the field of SHM, including a brief survey of the advancements of bio-inspired sensor technology for the purpose of SHM. In parallel with this engineering progress, a more in-depth understanding of the biological patterns most suitable for transfer into multimodal SHM systems is fundamental to fostering new scientific breakthroughs. Hence, grounded in the dissection of three selected human biological systems, a framework for new bio-inspired sensing paradigms is outlined, aimed at guiding the identification of tailored attributes to transplant from nature to SHM.
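
    An illustrative aside, not taken from the paper: the sketch below implements a toy negative-selection scheme, one of the immune-inspired ideas this line of work draws on, to flag anomalous vibration features against a ‘self’ set recorded on the healthy structure. All feature values, radii, and detector counts are hypothetical; it assumes NumPy.

        import numpy as np

        rng = np.random.default_rng(2)

        # "Self" set: 2-D features (e.g. a natural frequency and a damping ratio)
        # observed on the healthy structure. Values are synthetic.
        self_set = rng.normal([1.0, 0.05], [0.05, 0.005], size=(200, 2))

        def generate_detectors(self_set, n_candidates=2000, self_radius=0.10):
            """Keep random candidate detectors that do NOT lie within self_radius
            of any healthy sample (the negative-selection step)."""
            low, high = self_set.min(axis=0) - 0.5, self_set.max(axis=0) + 0.5
            candidates = rng.uniform(low, high, size=(n_candidates, 2))
            dists = np.linalg.norm(candidates[:, None, :] - self_set[None, :, :], axis=2)
            return candidates[dists.min(axis=1) > self_radius]

        def is_anomalous(sample, detectors, detect_radius=0.06):
            """A new sample is flagged if any surviving detector lies close to it."""
            return bool(np.linalg.norm(detectors - sample, axis=1).min() <= detect_radius)

        detectors = generate_detectors(self_set)
        healthy_sample = np.array([1.01, 0.052])   # close to the healthy baseline
        damaged_sample = np.array([0.70, 0.15])    # shifted features, as damage might cause
        print("healthy flagged:", is_anomalous(healthy_sample, detectors))  # expected: False
        print("damaged flagged:", is_anomalous(damaged_sample, detectors))  # expected: True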