
    Transfer Learning-Based Crack Detection by Autonomous UAVs

    Unmanned Aerial Vehicles (UAVs) have recently shown great performance in collecting visual data through autonomous exploration and mapping for building inspection. Yet few studies consider the post-processing of these data and their integration with autonomous UAVs, both of which are essential steps toward fully automated building inspection. In this regard, this work presents a decision-making tool for revisiting tasks in visual building inspection by autonomous UAVs. The tool fine-tunes a pretrained Convolutional Neural Network (CNN) for surface crack detection and offers an optional mechanism for planning revisits to pinpointed locations during inspection. It is integrated into a quadrotor UAV system that can autonomously navigate in GPS-denied environments, equipped with onboard sensors and computers for autonomous localization, mapping and motion planning. The integrated system is tested through simulations and real-world experiments. The results show that the system achieves crack detection and autonomous navigation in GPS-denied environments for building inspection.
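The core idea of the abstract, fine-tuning a pretrained network by training only a new classification head on top of frozen features, can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: a fixed random projection plays the role of the frozen pretrained backbone, and the data, labels and sizes are all assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained CNN backbone: a fixed feature map.
# (The paper fine-tunes a real pretrained CNN; the network, dataset
# and dimensions here are illustrative assumptions.)
W_backbone = rng.normal(size=(64, 16))           # frozen weights

def backbone(x):
    return np.tanh(x @ W_backbone)               # frozen feature extractor

# Synthetic "crack" vs "no crack" patches, flattened to 64-d vectors,
# with labels made linearly separable in feature space (an assumption).
X = rng.normal(size=(200, 64))
w_true = rng.normal(size=16)
y = (backbone(X) @ w_true > 0).astype(float)

# Fine-tuning = training only the new binary classification head.
w, b = np.zeros(16), 0.0
feats = backbone(X)                              # backbone stays fixed
for _ in range(500):
    p = 1 / (1 + np.exp(-(feats @ w + b)))       # sigmoid head
    grad = p - y                                 # logistic-loss gradient
    w -= 0.1 * feats.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = ((1 / (1 + np.exp(-(feats @ w + b))) > 0.5) == y).mean()
print(f"training accuracy of the fine-tuned head: {acc:.2f}")
```

Only the head's parameters `w` and `b` are updated; the backbone weights never change, which is what makes fine-tuning cheap enough to pair with an onboard inspection pipeline.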

    Neural Network Based Pattern Recognition in Visual Inspection System for Integrated Circuit Mark Inspection

    Industrial visual machine inspection systems use template or feature matching methods to locate or inspect parts, or patterns on parts. These algorithms cannot compensate dynamically for change or variation in the inspected parts. Such a problem was faced by a multinational semiconductor manufacturer, so a study was conducted to introduce a new algorithm to inspect integrated circuit package markings. The main intent of the system was to verify that the marking can be read by humans. The algorithms used by the existing process, however, were not capable of handling mark variations introduced by the marking process. A neural network based pattern recognition system was implemented and tested on images resembling the part variations. Feature extraction was made simple by sectioning the region of interest (ROI) on the image into a user-specified number of sections. The ratio of object pixels to the entire area of each section is calculated and used as an input to a feedforward neural network. The error back-propagation algorithm was used to train the network. The objective was to test the robustness of the network in handling pattern variations as well as the feasibility of implementing it on the production floor in terms of execution speed. Two separate programme modules were written in C++: one for feature extraction and another for the neural network classifier. The feature extraction module was tested for its speed using various ROI sizes. The processing time was found to be almost linearly related to the ROI size and not at all affected by the number of sections. At the minimum ROI setting (200 x 200 pixels) it was considerably slower, at 55 ms, than the required 20 ms. The neural network classifier was very successful in classifying 13 different image patterns by learning from 4 training patterns. The classifier also clocked an average speed of 9.6 ms, which makes it feasible to implement on the production floor. In conclusion, with a careful survey of the choices of hardware and software and their appropriate combination, this system can be seriously considered for implementation on the semiconductor production floor.
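The feature extraction step described above, splitting the ROI into a user-specified grid and taking each section's ratio of object pixels to section area, can be sketched directly. The function name and the toy image are assumptions for illustration; the original modules were written in C++.

```python
import numpy as np

def section_features(roi: np.ndarray, n_rows: int, n_cols: int) -> np.ndarray:
    """Split a binary ROI into an n_rows x n_cols grid and return, for each
    section, the ratio of object (non-zero) pixels to that section's area --
    the feature vector fed to the feedforward network."""
    h, w = roi.shape
    feats = []
    for rows in np.array_split(np.arange(h), n_rows):
        for cols in np.array_split(np.arange(w), n_cols):
            block = roi[np.ix_(rows, cols)]
            feats.append(np.count_nonzero(block) / block.size)
    return np.array(feats)

# Example: a 4x4 binary image whose left half is "object" pixels.
img = np.zeros((4, 4), dtype=np.uint8)
img[:, :2] = 1
print(section_features(img, 2, 2))  # -> [1. 0. 1. 0.]
```

Because each feature is an area ratio, the vector length depends only on the number of sections, not the ROI size, which is consistent with the reported observation that processing time scales with ROI size rather than section count.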

    FPGA applications in signal and image processing

    The increasing demand for real-time and smart digital signal processing (DSP) systems calls for a better implementation platform. Most of these systems (e.g. digital image processing) are highly parallelisable and both memory- and processor-hungry, such that even the increasing performance of today's general-purpose microprocessors can no longer handle them. A highly parallel hardware architecture that offers sufficient memory resources provides an alternative for such DSP implementations.

    PCA-RECT: An Energy-efficient Object Detection Approach for Event Cameras

    We present the first purely event-based, energy-efficient approach for object detection and categorization using an event camera. Compared to traditional frame-based cameras, event cameras offer attractive properties: high temporal resolution (on the order of microseconds), low power consumption (a few hundred mW) and a wide dynamic range (120 dB). However, event-based object recognition systems are far behind their frame-based counterparts in terms of accuracy. To this end, this paper presents an event-based feature extraction method devised by accumulating local activity across the image frame and then applying principal component analysis (PCA) to the normalized neighborhood region. Subsequently, we propose a backtracking-free k-d tree mechanism for efficient feature matching by taking advantage of the low dimensionality of the feature representation. Additionally, the proposed k-d tree mechanism allows for feature selection to obtain a lower-dimensional dictionary representation when hardware resources are too limited to implement dimensionality reduction. Consequently, the proposed system can be realized on a field-programmable gate array (FPGA) device, leading to a high performance-to-resource ratio. The proposed system is tested on real-world event-based datasets for object categorization, showing superior classification performance relative to state-of-the-art algorithms. Additionally, we verified the object detection method and real-time FPGA performance in lab settings under non-controlled illumination conditions with limited training data and ground truth annotations. Comment: Accepted in ACCV 2018 Workshops, to appear.
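The descriptor pipeline described above, PCA applied to normalized local-activity neighborhoods followed by matching against a low-dimensional dictionary, can be sketched as follows. This is an illustrative assumption-laden stand-in: the patch size, descriptor dimension and dictionary are invented, and a brute-force nearest-neighbour search stands in for the paper's backtracking-free k-d tree.

```python
import numpy as np

rng = np.random.default_rng(1)

# Local activity patches accumulated around events (sizes assumed:
# 7x7 neighborhoods flattened to 49-d), L2-normalized as in the paper.
patches = rng.random((500, 49))
patches /= np.linalg.norm(patches, axis=1, keepdims=True)

# PCA: project the normalized neighborhoods onto the top-k principal axes
# to obtain a low-dimensional descriptor.
k = 8
mean = patches.mean(axis=0)
_, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
descriptors = (patches - mean) @ Vt[:k].T        # (500, 8) features

# Dictionary matching: the paper uses a backtracking-free k-d tree on
# FPGA; a brute-force nearest neighbour stands in for it here.
dictionary = descriptors[:50]                    # assumed codebook

def match(d):
    return int(np.argmin(np.linalg.norm(dictionary - d, axis=1)))

print(match(descriptors[3]))  # -> 3 (a dictionary entry matches itself)
```

The low dimensionality produced by the PCA step is exactly what makes a k-d tree attractive for the matching stage, since tree search degrades toward brute force as dimensionality grows.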