Fleet Prognosis with Physics-informed Recurrent Neural Networks
Services and warranties for large fleets of engineering assets are a very
profitable business. The success of companies in that area is often related to
predictive maintenance driven by advanced analytics. Therefore, accurate
modeling, as a way to understand how the complex interactions between operating
conditions and component capability define useful life, is key for services
profitability. Unfortunately, building prognosis models for large fleets is a
daunting task as factors such as duty cycle variation, harsh environments,
inadequate maintenance, and problems with mass production can lead to large
discrepancies between designed and observed useful lives. This paper introduces
a novel physics-informed neural network approach to prognosis by extending
recurrent neural networks to cumulative damage models. We propose a new
recurrent neural network cell designed to merge physics-informed and
data-driven layers. With that, engineers and scientists have the chance to use
physics-informed layers to model parts that are well understood (e.g., fatigue
crack growth) and use data-driven layers to model parts that are poorly
characterized (e.g., internal loads). A simple numerical experiment is used to
present the main features of the proposed physics-informed recurrent neural
network for damage accumulation. The test problem consists of predicting fatigue
crack length for a synthetic fleet of airplanes subject to different mission
mixes. The model is trained using full observation inputs (far-field loads) and
very limited observation of outputs (crack length at inspection for only a
portion of the fleet). The results demonstrate that our proposed hybrid
physics-informed recurrent neural network is able to accurately model fatigue
crack growth even when the observed distribution of crack length does not match
with the (unobservable) fleet distribution.

Comment: Data and code (including our implementation of the multi-layer
perceptron, the stress intensity and Paris law layers, the cumulative damage
cell, as well as Python driver scripts) used in this manuscript are publicly
available on GitHub at https://github.com/PML-UCF/pinn. The data and code are
released under the MIT License.
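The hybrid cell described above can be sketched in miniature: a recurrent step that applies Paris' law, da/dN = C(ΔK)^m, as the physics-informed layer. The constants, geometry factor, and load history below are illustrative placeholders, not values from the paper:

```python
import math

# Illustrative placeholder constants (not values from the paper):
C, M = 1.5e-11, 3.0    # Paris-law coefficients (lengths in m, stress in MPa)
GEOM = 1.0             # geometry factor in the stress intensity range

def delta_k(delta_sigma, a):
    """Stress intensity range for a far-field stress range and crack length a."""
    return GEOM * delta_sigma * math.sqrt(math.pi * a)

def cumulative_damage_cell(a, delta_sigma):
    """One recurrent step: a_{t+1} = a_t + C * (dK)^M  (Paris' law)."""
    return a + C * delta_k(delta_sigma, a) ** M

def propagate(a0, load_history):
    """Unroll the cell over a sequence of load cycles, like an RNN over time."""
    a = a0
    for ds in load_history:
        a = cumulative_damage_cell(a, ds)
    return a

# 5 mm initial crack, 1000 constant-amplitude cycles at a 100 MPa stress range
a_final = propagate(0.005, [100.0] * 1000)
```

In the paper's hybrid cell, a data-driven layer (e.g. a multilayer perceptron) would stand in for the poorly characterized parts of this chain, such as internal loads, and be trained end-to-end from sparse crack-length observations.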
Development Of An Optical Character Recognition Function System For Integrated Circuit Label Classification Using Neural Network
Presently, many Integrated Circuit (IC) manufacturers apply machine vision solutions to ensure the legibility of characters printed on the top surface of IC packages. With the template matching technique, about 10% of ICs are rejected due to very small defects in marking quality even though the characters are correct. The objective of this project is to develop an IC inspection system with an optical character recognition function using a neural network. A feedforward back-propagation neural network method is used in this task. The system developed is able to read 36 characters (A to Z and 0 to 9) printed on ICs. The recognition time with template matching is 650μs. With the neural network technique, feeding in Raw Data, Feature, and Hybrid (a combination of Raw Data and Feature) inputs clocks 18.22μs, 15.64μs and 19.32μs respectively. The recognition accuracy is 96.26% for the former and 98.25%, 98.83% and 99.61% for the latter. This is a solution to minimise rejection of ICs in the manufacturing process. The reduction of processing time in the manufacturing process contributes to increased productivity. Moreover, application of this technique offers a solution to avoid mismatch of parts (ICs) in manufacturing lots.
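The feedforward back-propagation setup described above can be illustrated with a minimal sketch. The data here is synthetic (random stand-ins for the Raw Data, Feature, and Hybrid encodings), and names such as `train_mlp` are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the three input encodings described above:
raw = rng.random((36, 64))          # e.g. flattened 8x8 glyph images ("Raw Data")
feats = rng.random((36, 10))        # e.g. hand-crafted features ("Feature")
hybrid = np.hstack([raw, feats])    # concatenation of both ("Hybrid")

labels = np.eye(36)                 # one-hot targets, one class per character

def train_mlp(X, Y, hidden=32, lr=0.5, epochs=500):
    """Single-hidden-layer feedforward net trained by error back-propagation
    on the mean squared error."""
    W1 = rng.normal(0.0, 0.1, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.1, (hidden, Y.shape[1]))
    losses = []
    for _ in range(epochs):
        h = np.tanh(X @ W1)                        # hidden activations
        out = 1.0 / (1.0 + np.exp(-(h @ W2)))      # sigmoid outputs
        losses.append(float(((out - Y) ** 2).mean()))
        d_out = (out - Y) * out * (1 - out) / len(X)        # output-layer delta
        W1 -= lr * X.T @ ((d_out @ W2.T) * (1 - h ** 2))    # backprop to W1
        W2 -= lr * h.T @ d_out                              # backprop to W2
    return W1, W2, losses

W1, W2, losses = train_mlp(hybrid, labels)
pred = np.argmax(np.tanh(hybrid @ W1) @ W2, axis=1)
```

The Hybrid scheme in the abstract is simply this kind of concatenation: the raw pixel vector and the extracted feature vector are stacked into one input before training.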
Deep learning based vision inspection system for remanufacturing application
Deep Learning has emerged as a state-of-the-art learning technique across a wide range of applications, including image recognition, localisation, natural language processing, prediction and forecasting systems. With significant applicability, Deep Learning is continually seeking new fronts of application for these techniques. This research is the first to apply a Deep Learning algorithm to inspection in remanufacturing. Inspection is a key process in remanufacturing, currently an expensive manual operation that in most cases depends on human operator expertise. This research further proposes an automation framework based on a Deep Learning algorithm for automating this inspection process. The proposed technique offers the potential to eliminate human factors in inspection, save cost, increase throughput and improve precision. This paper presents a novel vision-based inspection system built on a Deep Convolutional Neural Network (DCNN) for three types of defects, namely pitting, surface abrasion and cracks, by distinguishing between these classes of surface-defected parts. The material used for this feasibility study was 100cm x 150cm mild steel plate, purchased locally, and images were captured using a 0.3-megapixel USB webcam. The performance of this preliminary study indicates that the DCNN can classify with up to 100% accuracy on validation data and above 96% accuracy on a live video feed, using 80% of the sample dataset for training and the remaining 20% for testing. Therefore, in remanufacturing parts inspection, the DCNN approach has high potential as a method that could surpass current technologies, especially in accuracy and speed. This preliminary study demonstrates that Deep Learning techniques have the potential to revolutionise inspection in remanufacturing.
This research offers valuable insight into these opportunities, serving as a starting point for future applications of Deep Learning algorithms to remanufacturing
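As a minimal illustration of the DCNN building blocks this kind of study relies on (not the authors' actual network), the following sketch applies a convolution, ReLU, and max-pooling to a synthetic patch containing a crack-like intensity edge; the kernel and image are invented for the example:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution, the core operation of a DCNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling, halving spatial resolution."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A dark-to-bright vertical edge kernel responds to crack-like steps.
edge = np.array([[-1.0, 0.0, 1.0]] * 3)
img = np.zeros((8, 8))
img[:, 4:] = 1.0                     # synthetic patch: bright region on the right
feature_map = max_pool(relu(conv2d(img, edge)))
```

In a trained DCNN these kernels are learned from the 80/20 train/test split rather than hand-designed, and many such conv/ReLU/pool stages are stacked before a classification layer.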
Neural Network Based Pattern Recognition in Visual Inspection System for Integrated Circuit Mark Inspection
Industrial machine vision inspection systems use template or feature matching
methods to locate or inspect parts or patterns on parts. These algorithms cannot
dynamically compensate for changes or variations in the inspected parts. Such a
problem was faced by a multinational semiconductor manufacturer. Therefore, a
study was conducted to introduce a new algorithm to inspect integrated circuit
package markings. The main intent of the system was to verify that the marking
can be read by humans. The algorithms used by the current process, however, were
not capable of handling mark variations introduced by the marking process. A neural
network based pattern recognition system was implemented and tested on images
resembling the parts variations. Feature extraction was made simple by sectioning the region of interest (ROI)
on the image into a specified (by the user) number of sections. The ratio of object
pixels to the entire area of each section is calculated and used as an input into a
feedforward neural network. The error back-propagation algorithm was used to train the
network. The objective was to test the robustness of the network in handling pattern
variations as well as the feasibility of implementing it on the production floor in
terms of execution speed.
Two separate programme modules were written in C++: one for feature
extraction and another for the neural network classifier. The feature extraction module
was tested for its speed using various ROI sizes. The time taken for processing
was found to be almost linearly related to the ROI size and not at all affected
by the number of sections. The minimum ROI setting (200 x 200 pixels) was
considerably slower, at 55ms, compared to the required 20ms. The neural network
classifier was very successful in classifying 13 different image patterns by
learning from 4 training patterns. The classifier also clocked an average speed
of 9.6ms, which makes it feasible to implement on the production floor. Finally, it can
be concluded that, by carefully surveying the choices of hardware and software and their
appropriate combination, this system can be seriously considered for implementation
on the semiconductor production floor.
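The sectioning-based feature extraction described above can be sketched directly; the function name and the toy binary image below are our own illustration, not the thesis code:

```python
def section_features(roi, n_rows, n_cols):
    """Split a binary ROI into n_rows x n_cols sections and return, for each
    section, the ratio of object (non-zero) pixels to the section's area.
    The resulting vector is what gets fed to the feedforward network."""
    h, w = len(roi), len(roi[0])
    sh, sw = h // n_rows, w // n_cols
    features = []
    for r in range(n_rows):
        for c in range(n_cols):
            block = [roi[i][j]
                     for i in range(r * sh, (r + 1) * sh)
                     for j in range(c * sw, (c + 1) * sw)]
            features.append(sum(1 for p in block if p) / len(block))
    return features

# 4x4 binary image whose left half is "object" pixels
roi = [[1, 1, 0, 0]] * 4
print(section_features(roi, 2, 2))  # → [1.0, 0.0, 1.0, 0.0]
```

Because each feature is a per-section area ratio, the number of sections (chosen by the user) fixes the input dimensionality of the classifier while the ROI size affects only the extraction time, which matches the timing behaviour reported above.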
A Novel Optical/digital Processing System for Pattern Recognition
This paper describes two processing algorithms that can be implemented optically: the Radon transform and angular correlation. These two algorithms can be combined in one optical processor to extract all the basic geometric and amplitude features from objects embedded in video imagery. We show that the internal amplitude structure of objects is recovered by the Radon transform, which is a well-known result, but, in addition, we show simulation results that calculate angular correlation, a simple but unique algorithm that extracts object boundaries from suitably thresholded images, from which length, width, area, aspect ratio, and orientation can be derived. In addition to circumventing scale and rotation distortions, these simulations indicate that the features derived from the angular correlation algorithm are relatively insensitive to tracking shifts and image noise. Some optical architecture concepts, including one based on micro-optical lenslet arrays, have been developed to implement these algorithms. Simulation test and evaluation using simple synthetic object data will be described, including results of a study that uses object boundaries (derivable from angular correlation) to classify simple objects using a neural network.
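As a toy digital illustration of the Radon projections the paper computes optically (restricted here to axis-aligned angles, with an invented synthetic image), line integrals at 0° and 90° already recover an embedded object's length and width:

```python
import numpy as np

def radon_projection(img, angle_deg):
    """Radon projection of a 2D image at 0 or 90 degrees: line integrals,
    which for axis-aligned angles reduce to column or row sums."""
    if angle_deg == 0:
        return img.sum(axis=0)    # integrate along vertical lines
    if angle_deg == 90:
        return img.sum(axis=1)    # integrate along horizontal lines
    raise ValueError("only axis-aligned angles in this sketch")

# A 2x4 bright rectangle embedded in a dark 8x8 frame.
img = np.zeros((8, 8))
img[3:5, 2:6] = 1.0

p0, p90 = radon_projection(img, 0), radon_projection(img, 90)
length = int((p0 > 0).sum())     # object extent from the 0-degree projection
width = int((p90 > 0).sum())     # object extent from the 90-degree projection
```

A full Radon transform sweeps the projection angle continuously, which is what makes the derived features insensitive to rotation; the angular correlation step in the paper then works on the object boundary rather than on these amplitude projections.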