
Damage identification in structural health monitoring: a brief review from its implementation to the use of data-driven applications

The damage identification process provides relevant information about the current state of a structure under inspection, and it can be approached from two different points of view. The first approach uses data-driven algorithms, which are usually associated with the collection of data using sensors; the data are subsequently processed and analyzed. The second approach uses models to analyze information about the structure. In the latter case, the overall performance of the approach depends on the accuracy of the model and the information used to define it. Although both approaches are widely used, data-driven algorithms are preferred in most cases because they can analyze data acquired from sensors and provide a real-time solution for decision making; however, they require high-performance processors due to their high computational cost. As a contribution to researchers working with data-driven algorithms and applications, this work presents a brief review of data-driven algorithms for damage identification in structural health-monitoring applications. The review covers damage detection, localization, classification, extent, and prognosis, as well as the development of smart structures. The literature is systematically reviewed according to the natural steps of a structural health-monitoring system. The review also includes information on the types of sensors used and on the development of data-driven algorithms for damage identification.
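As a concrete illustration of the data-driven route described above, the following is a minimal sketch, not taken from the reviewed paper: a PCA-based novelty detector is trained on vibration features from the healthy structure, and new measurements are flagged when their reconstruction error exceeds an assumed 3-sigma baseline threshold. The feature layout, component count, and threshold are illustrative assumptions.

```python
# Minimal sketch of data-driven damage detection via PCA novelty detection.
# Assumptions: rows are sensor measurements, columns are spectral features;
# the 3-sigma threshold and component count are illustrative, not from the paper.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Baseline (healthy) sensor features and new measurements to inspect
X_healthy = rng.normal(size=(500, 64))
X_new = rng.normal(loc=0.8, size=(20, 64))   # shifted to mimic a damaged response

# Fit a low-dimensional model of the healthy condition
pca = PCA(n_components=10).fit(X_healthy)

def reconstruction_error(model, X):
    """Per-sample squared error between X and its PCA reconstruction."""
    X_hat = model.inverse_transform(model.transform(X))
    return np.sum((X - X_hat) ** 2, axis=1)

# Damage indicator: error well above what healthy data produces
baseline = reconstruction_error(pca, X_healthy)
threshold = baseline.mean() + 3.0 * baseline.std()
damaged = reconstruction_error(pca, X_new) > threshold
print(f"{damaged.sum()} of {len(damaged)} new measurements flagged as damaged")
```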

    Intelligent manipulation technique for multi-branch robotic systems

A new analytical development in kinematics planning is reported. The INtelligent KInematics Planner (INKIP) consists of kinematics spline theory and an adaptive logic annealing process. A novel framework for a robot learning mechanism is also introduced: the FUzzy LOgic Self Organized Neural Networks (FULOSONN) integrate fuzzy logic for commands, control, searching, and reasoning; an embedded expert system for implementing nominal robotics knowledge; and self-organized neural networks for the dynamic evolution of knowledge. Progress on the mechanical construction of the SRA Advanced Robotic System (SRAARS) and the real-time robot vision system is also reported. A decision was made to incorporate Local Area Network (LAN) technology into the overall communication system.

    Single-sensor and real-time ultrasonic imaging using an AI-driven disordered metasurface

Non-destructive testing and medical diagnostic techniques using ultrasound have become indispensable for evaluating the state of materials and for imaging the internal human body, respectively. To obtain spatially resolved, high-quality observations, sophisticated phased arrays are conventionally used at both the emitting and receiving ends of the setup. In comparison, single-sensor imaging techniques offer significant benefits, including compact physical dimensions and reduced manufacturing expense. However, recent advances such as compressive sensing have shown that this improvement comes at the cost of additional time-consuming dynamic spatial scanning or multi-mode mask switching, which severely hinders the quest for real-time imaging. Consequently, real-time single-sensor imaging at low cost and with a simple design remains a demanding and largely unresolved challenge to this day. Here, we endow an ultrasonic metasurface with both disorder and artificial intelligence (AI). The former ensures strong dispersion and highly complex scattering that encode the spatial information into the frequency spectrum at an arbitrary location, while the latter instantaneously decodes the amplitude and spectral content of the sample under investigation. Thanks to this symbiosis, we demonstrate that a single fixed sensor suffices to recognize complex ultrasonic objects from the randomly scattered field of an unpretentious metasurface, which enables real-time, low-cost imaging that is easily extendable to 3D.
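To make the encode/decode idea concrete, the sketch below is purely illustrative and not the paper's system: a fixed random scattering matrix stands in for the disordered metasurface, mapping spatial "objects" to single-sensor spectra, and a small neural network learns to decode the object class from the spectrum. The dimensions, noise level, and classifier choice are assumptions.

```python
# Illustrative sketch: a random scattering matrix encodes spatial objects into
# single-sensor spectra; a small neural network learns to decode them.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_pixels, n_freq_bins, n_classes = 32, 128, 4

# Fixed random operator: stands in for the disordered metasurface response
scattering = rng.normal(size=(n_freq_bins, n_pixels))

# Synthetic "objects": sparse pixel patterns, one pattern family per class
labels = rng.integers(0, n_classes, size=2000)
objects = np.zeros((2000, n_pixels))
for i, c in enumerate(labels):
    objects[i, c * 8:(c + 1) * 8] = rng.uniform(0.5, 1.0, size=8)

# Single-sensor measurement: spectrum = scattering @ object + noise
spectra = objects @ scattering.T + 0.05 * rng.normal(size=(2000, n_freq_bins))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, labels, random_state=0)
decoder = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_tr, y_tr)
print("decoding accuracy:", decoder.score(X_te, y_te))
```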

    Master of Science

Nondestructive evaluation (NDE) is a means of assessing the reliability and integrity of a structural component and provides information such as the presence, location, extent, and type of damage in the component. Structural health monitoring (SHM) is a subfield of NDE that focuses on continuous monitoring of a structure while it is in use. SHM has been applied to structures such as bridges, buildings, pipelines, and airplanes with the goal of detecting the presence of damage and thereby determining whether a structure needs maintenance. SHM can be posed as a modeling problem, where an accurate model allows for a more reliable prediction of structural behavior; more reliable predictions make it easier to determine if something is out of the ordinary with the structure. Structural models can be designed using analytical or empirical approaches. Most SHM applications use purely analytical models based on finite element analysis and fundamental wave propagation equations to construct behavioral predictions. Purely empirical models exist but are less common; these often use pattern recognition algorithms to recognize features that indicate damage. This thesis uses a method related to the k-means algorithm, known as dictionary learning, to train a wave propagation model from full wavefield data. These data are gathered from thin metal plates that exhibit complex wavefields dominated by multipath interference. We evaluate the model's ability to detect damage in structures on which it was not trained; these structures are similar to the training structure but vary in material type and thickness. This evaluation demonstrates how well learned dictionaries can detect damage in a complex wavefield with multipath interference and how well the learned model generalizes to structures with slight variations in properties. The damage detection and generalization results achieved with this empirical model are compared to similar results obtained with both an analytical model and a support vector machine model.
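A minimal sketch of the dictionary-learning idea, assuming sparse coding with scikit-learn rather than the thesis's actual implementation: a dictionary is learned from wavefield patches of an undamaged plate, and patches of a new scan are flagged when their sparse reconstruction error is unusually high. The patch size, sparsity level, and threshold are illustrative choices.

```python
# Sketch: learn a dictionary from healthy wavefield patches, then flag patches
# of a new scan whose sparse-coding reconstruction error is anomalously large.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(2)

# Rows = vectorized space-time patches extracted from a full wavefield scan
patches_healthy = rng.normal(size=(2000, 100))
patches_test = np.vstack([rng.normal(size=(180, 100)),
                          rng.normal(loc=1.5, size=(20, 100))])  # last 20 "damaged"

dico = MiniBatchDictionaryLearning(
    n_components=50,
    transform_algorithm="omp",        # sparse coding at transform time
    transform_n_nonzero_coefs=5,
    random_state=0,
).fit(patches_healthy)

def residual(model, X):
    """Per-patch reconstruction error under the learned sparse model."""
    codes = model.transform(X)
    return np.linalg.norm(X - codes @ model.components_, axis=1)

threshold = np.percentile(residual(dico, patches_healthy), 99)
flagged = residual(dico, patches_test) > threshold
print(f"{flagged.sum()} of {len(flagged)} test patches flagged as potential damage")
```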

    Navigation and Control of Automated Guided Vehicle using Fuzzy Inference System and Neural Network Technique

Automatic motion planning and navigation is the primary task of an Automated Guided Vehicle (AGV) or mobile robot. All such navigation systems consist of a data collection system, a decision-making system, and a hardware control system. Artificial-intelligence-based decision-making systems have become increasingly successful because they can handle large, complex calculations and perform well in unpredictable and imprecise environments. This research focuses on developing Fuzzy Logic and Neural Network based implementations for the navigation of an AGV, using the heading angle and obstacle distances as inputs to generate the velocity and steering angle as outputs. Fuzzy Inference Systems with Gaussian, Triangular, and Trapezoidal membership functions, as well as a feed-forward back-propagation neural network, were developed, modelled, and simulated in MATLAB. The research presents an evaluation of the four decision-making systems, and a study was conducted to compare their performances. The hardware control for an AGV should be robust and precise; for practical implementation, a prototype operating via DC servo motors and a gear system was constructed and installed on a commercial vehicle.
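As a rough illustration of the fuzzy approach, here is a minimal Python sketch rather than the MATLAB toolbox used in the work: obstacle distance and heading error are fuzzified with piecewise-linear membership functions and combined through a small rule base to produce a steering angle. The membership breakpoints, rules, and output levels are invented for the example.

```python
# Minimal fuzzy-inference sketch (zero-order Sugeno style) for AGV steering.
# Inputs: obstacle distance (m) and heading error (deg); output: steering angle (deg).
# Membership breakpoints, rules, and output levels are illustrative assumptions.
import numpy as np

def membership(x, points, values):
    """Piecewise-linear membership function (np.interp clamps at the ends)."""
    return float(np.interp(x, points, values))

def steering_angle(distance_m, heading_error_deg):
    """Weighted average of rule outputs, fired by min of input memberships."""
    # Fuzzify inputs
    near    = membership(distance_m, [0.5, 2.0], [1.0, 0.0])
    far     = membership(distance_m, [0.5, 2.0], [0.0, 1.0])
    left    = membership(heading_error_deg, [-30, 0], [1.0, 0.0])
    right   = membership(heading_error_deg, [0, 30], [0.0, 1.0])
    centred = membership(abs(heading_error_deg), [0, 30], [1.0, 0.0])

    # Rule base: (firing strength, crisp steering output in degrees)
    rules = [
        (min(near, left),  +25.0),   # obstacle near, heading left  -> steer right
        (min(near, right), -25.0),   # obstacle near, heading right -> steer left
        (min(far, centred),  0.0),   # clear ahead, on heading      -> go straight
        (min(far, left),   +10.0),   # gentle heading correction
        (min(far, right),  -10.0),
    ]
    weights = np.array([w for w, _ in rules])
    outputs = np.array([o for _, o in rules])
    return float((weights * outputs).sum() / max(weights.sum(), 1e-9))

print(steering_angle(distance_m=1.0, heading_error_deg=-15.0))
```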

    Multimodal machine learning for intelligent mobility

Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions. Autonomous vehicles are at the heart of the developments that propel intelligent mobility. Due to the high dimensionality and complexity of real-world environments, data-driven solutions need to become commonplace in intelligent mobility, as it is near impossible to manually program decision-making logic for every eventuality. While recent developments in data-driven solutions such as deep learning enable machines to learn effectively from large datasets, applications of these techniques within safety-critical systems such as driverless cars remain scarce.

    Autonomous vehicles need to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research is heavily focused on road or highway environments and has discounted pedestrianized areas and indoor environments. These unstructured environments tend to be more cluttered and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions are robust and reliable; only then will machines be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks of intelligent mobility. Specifically, it investigates multimodal sensor data fusion, machine learning, multimodal deep representation learning, and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive a driving policy and therefore make autonomous decisions.

    To facilitate the autonomous decisions needed to derive safe driving algorithms, we present algorithms for free-space detection and human activity recognition. Driving these decision-making algorithms are datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset. The datasets were collected using an autonomous platform designed and developed in-house as part of this research. The proposed framework for free-space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera) and uses an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments. The proposed free-space detection algorithm enables an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before. The results illustrate that the online learning mechanism is superior to one-off training of deep neural networks, which requires large datasets to generalize to unfamiliar surroundings.

    The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence; within intelligent mobility, it is imperative that an autonomous vehicle is aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging (LiDAR) sensors. The proposed algorithm leverages multimodality by using camera data to identify humans and segment the region of interest in the point-cloud data. The corresponding 3-dimensional data are converted to a Fisher Vector representation before being classified by a deep Convolutional Neural Network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3%; when compared to an alternative point-cloud classifier, PointNet [1], [2], the proposed framework outperformed it on all classes. The developed autonomous testbed for data collection and algorithm validation, as well as the multimodal data-driven solutions for driverless cars, are the major contributions of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by amplifying the development of intelligent driverless vehicles.
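To illustrate the uncertainty-driven online learning idea described for free-space detection, the sketch below is an assumption-laden stand-in, not the thesis's actual pipeline: an incremental classifier over concatenated camera and ultrasound features queries a label and updates itself only when its own prediction confidence is low. The feature layout, confidence band, and labelling "oracle" are illustrative.

```python
# Sketch: online free-space classification with uncertainty-gated updates.
# Features concatenate camera and ultrasound descriptors; all values,
# thresholds, and the label "oracle" are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
n_camera_feats, n_ultrasound_feats = 16, 4
n_feats = n_camera_feats + n_ultrasound_feats

clf = SGDClassifier(loss="log_loss")          # incremental logistic regression
classes = np.array([0, 1])                    # 0 = occupied, 1 = free space

# Seed the model with a small labelled batch
X0 = rng.normal(size=(50, n_feats))
y0 = (X0[:, :n_camera_feats].mean(axis=1) > 0).astype(int)
clf.partial_fit(X0, y0, classes=classes)

updates = 0
for _ in range(500):                          # simulated stream of frames
    x = rng.normal(size=(1, n_feats))
    p_free = clf.predict_proba(x)[0, 1]
    if 0.35 < p_free < 0.65:                  # low confidence -> query a label
        y = int(x[0, :n_camera_feats].mean() > 0)   # stand-in for a labelling oracle
        clf.partial_fit(x, [y])
        updates += 1
print(f"online updates triggered on {updates} of 500 frames")
```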