Effect of image compression using Fast Fourier Transformation and Discrete Wavelet Transformation on transfer learning wafer defect image classification
Automated inspection machines for wafer defects usually capture thousands of images at large scale to preserve the detail of defect features. However, most transfer learning architectures require smaller input images, so proper compression is needed to preserve the defect features while maintaining an acceptable classification accuracy. This paper reports on the effect of image compression using Fast Fourier Transformation and Discrete Wavelet Transformation on transfer learning wafer defect image classification. A total of 500 images across 5 classes (4 defect classes and 1 non-defect class) were split in a 60:20:20 ratio for training, validation, and testing using InceptionV3 and a Logistic Regression classifier. The input images were compressed using Fast Fourier Transformation and Discrete Wavelet Transformation with 4-level decomposition and the Daubechies 4 wavelet family, at compression levels of 50%, 75%, 90%, 95%, and 99%. The Fast Fourier Transformation compression showed an increase in classification accuracy from 89% to 94% up to 95% compression, while Discrete Wavelet Transformation showed consistent classification accuracy throughout despite diminishing image quality. From the experiment, it can be concluded that FFT and DWT image compression can be a reliable method of image compression for grayscale image classification: image memory space dropped by 56.1% while classification accuracy increased by 5.6% with 95% FFT compression, and memory space dropped by 55.6% while classification accuracy increased by 2.2% with 50% DWT compression.
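To make the compression step concrete, the following is a minimal Python sketch of coefficient thresholding for both transforms, assuming NumPy and the PyWavelets package are available; the retention fractions and the random stand-in image are illustrative choices, not the authors' exact pipeline.

import numpy as np
import pywt  # PyWavelets, assumed available

def fft_compress(image, keep=0.05):
    """Zero all but the largest `keep` fraction of FFT coefficients
    (keep=0.05 roughly corresponds to 95% compression)."""
    spectrum = np.fft.fft2(image.astype(float))
    threshold = np.quantile(np.abs(spectrum), 1.0 - keep)
    sparse = np.where(np.abs(spectrum) >= threshold, spectrum, 0)
    return np.real(np.fft.ifft2(sparse))

def dwt_compress(image, keep=0.5, wavelet="db4", level=4):
    """4-level Daubechies-4 decomposition; small detail coefficients
    are zeroed before reconstruction."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    details = np.concatenate([np.abs(band).ravel()
                              for bands in coeffs[1:] for band in bands])
    threshold = np.quantile(details, 1.0 - keep)
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(band, threshold, mode="hard") for band in bands)
        for bands in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

if __name__ == "__main__":
    demo = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in wafer image
    print(fft_compress(demo, keep=0.05).shape)
    print(dwt_compress(demo, keep=0.5).shape)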
Time-series classification of vegetables in detecting growth rate using machine learning
IoT-based innovative irrigation management systems can help attain optimum water-resource utilisation in the precision farming landscape. This paper presents an innovative system based on unsupervised learning clustering to forecast the irrigation requirements of a field using sensed ground parameters such as soil moisture, light intensity, temperature, and humidity. The entire system has been developed and deployed. The sensor node data is obtained through the serial monitor of the Arduino IDE software, collected directly, and saved on a computer. Orange and MATLAB software are used to apply machine learning for visualisation, and the decision support system delivers real-time information insights based on the analysis of the sensor data. The plants are categorised as either requiring water or not, taking weather conditions into account, to obtain various types of results. kNN reached 100.0% accuracy, SVM achieved 99.0%, while Naïve Bayes achieved 87.40%.
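As an illustration of the classifier comparison reported above, here is a minimal Python sketch using scikit-learn; the synthetic sensor readings, feature order, and labelling rule are assumptions standing in for the Arduino sensor log, not the study's dataset.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Hypothetical sensor readings: soil moisture, light, temperature, humidity,
# with a binary water / no-water label. Real data would come from the
# Arduino serial log described in the abstract.
rng = np.random.default_rng(0)
X = rng.uniform([0, 0, 15, 20], [100, 1000, 40, 95], size=(500, 4))
y = (X[:, 0] < 40).astype(int)  # toy rule: irrigate when soil moisture is low

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                    ("SVM", SVC(kernel="rbf")),
                    ("Naive Bayes", GaussianNB())]:
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))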
High growth rate using new type demand feeding system with image processing program and fish behavior.
A demand feeding system serves feed to fish when the fish switch on the feeders. Although demand feeding systems have advantages, they still have problems, e.g. the hierarchy problem of the fish school and the system learning period required by the fish. A new type of demand feeding system was developed to solve these problems using fish behaviour and an image processing system. First, a behaviour experiment was conducted using the image processing software RoboRealm to obtain the optimum parameters for the computer program. Through the behaviour experiment, two typical behaviour patterns were detected: when the fish were hungry, the fish group came to the water surface (H: parameter >63%), and when the fish were not hungry, they came to the bottom (L: <45%) of the fish tank. These two parameters were obtained and put into the computer program on the workstation. An HD Wi-Fi camera continuously recorded the real-time fish behaviour in the tank, and when the fish group rose above the "H" threshold, a command was sent from the workstation to a microcomputer, which ordered the feeding device to dispense feed. The results of the feeding experiment showed that this system could provide pellets to the fish equally during day and night, following fish behaviour; in other words, the system could provide pellets based on the fish's requirements. The growth rate was higher than with other feeding systems (a timer feeder and a demand feeder using an infrared light sensor).
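The two-threshold feeding decision can be sketched in Python as follows; the frame-analysis and feeder-command functions are hypothetical placeholders for the RoboRealm image processing and the workstation-to-microcomputer link.

import random
import time

H_THRESHOLD = 0.63  # fraction of fish near the surface -> hungry
L_THRESHOLD = 0.45  # fraction near the bottom zone -> not hungry

def surface_fraction_from_frame():
    """Placeholder for the image-processing step: in the real system the
    camera frame is segmented and the fraction of fish in the upper
    region of the tank is measured."""
    return random.random()

def send_feed_command():
    """Placeholder for the workstation -> microcomputer -> feeder command."""
    print("feed pellets")

def control_loop(cycles=10, period_s=1.0):
    feeding = False
    for _ in range(cycles):
        fraction = surface_fraction_from_frame()
        if fraction > H_THRESHOLD and not feeding:
            feeding = True
            send_feed_command()
        elif fraction < L_THRESHOLD:
            feeding = False  # fish returned to the bottom, stop feeding
        time.sleep(period_s)

if __name__ == "__main__":
    control_loop(cycles=5, period_s=0.1)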
Classification of plant health (Capsicum frutescens) by normalized difference vegetation index using image processing
The evolution of global food manufacturing contributes to the national income of a country. Agriculture has been part of everyone's life, as providing food is a building block of every human being. Malaysia has experienced a deteriorating agricultural contribution to Gross Domestic Product (GDP), with agriculture contributing 25.9%, alongside fishing (12%), rubber (3.0%), forestry & logging (6.3%), and livestock (15.3%). In line with the development of technology in the present century, many methods and techniques have been introduced to advance the agriculture sector by focusing on plant health. The aim of this study is to classify agricultural plant health through the Normalized Difference Vegetation Index (NDVI) using image processing. Image processing is a technique encompassing operations on and observation of an image. In this investigation, images of the plants are captured without an infra-red (IR) imaging filter. Several steps must be performed, including the use of multi-function software, to obtain the NDVI values of the plants. The main objective of this study is to classify plant health by computing the vegetation index of the plant and to identify the best machine learning approach to apply.
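The vegetation index underlying the classification is NDVI = (NIR − Red) / (NIR + Red). A minimal Python sketch follows; the band extraction, class thresholds, and sample values are assumptions for illustration only, not the study's calibration.

import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def classify_health(ndvi_map, healthy_threshold=0.5, stressed_threshold=0.2):
    """Toy thresholds (assumed, not from the paper) mapping NDVI to classes."""
    classes = np.full(ndvi_map.shape, "soil/other", dtype=object)
    classes[ndvi_map >= stressed_threshold] = "stressed"
    classes[ndvi_map >= healthy_threshold] = "healthy"
    return classes

if __name__ == "__main__":
    # Hypothetical NIR and red bands extracted from a no-IR-filter photo.
    nir = np.array([[200, 180, 90, 60]] * 4, dtype=np.uint8)
    red = np.array([[60, 70, 80, 90]] * 4, dtype=np.uint8)
    index = ndvi(nir, red)
    print(np.round(index, 2))
    print(classify_health(index)[0])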
Landmark Navigation in Low Illumination Using Omnidirectional Camera
Landmark-based vision navigation for a mobile robot is critically dependent on successful recognition of landmarks. Landmarks, artificial or natural, require sufficient illumination for successful recognition. Sufficient illumination is even more critical when the mobile robot is used indoors. In this paper, experiments were conducted to recognize artificial landmarks using omnidirectional vision under low illumination. The objective of this paper is to demonstrate that landmark navigation in low illumination can be conducted without an illumination invariance step and without image distortion correction. The landmark recognition performance thus demonstrates the robustness of the landmarks, especially under low-light conditions. The landmarks used were standard (ISO 15417) Code-128 barcodes. The barcodes were placed beside a turning machine, and the illuminance on each barcode was measured with a luxmeter.
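For illustration, a minimal Python sketch of decoding Code-128 landmarks from a camera frame with OpenCV and pyzbar is given below; the camera index and the lack of any preprocessing echo the no-illumination-invariance claim above, but the code is an assumed setup, not the authors' implementation.

import cv2
from pyzbar import pyzbar  # assumed available for Code-128 decoding

def decode_landmarks(frame):
    """Return the Code-128 payloads found in a (possibly dim) camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for symbol in pyzbar.decode(gray):
        if symbol.type == "CODE128":
            results.append((symbol.data.decode("utf-8"), symbol.rect))
    return results

if __name__ == "__main__":
    capture = cv2.VideoCapture(0)  # hypothetical camera index
    ok, frame = capture.read()
    if ok:
        for payload, rect in decode_landmarks(frame):
            print(payload, rect)
    capture.release()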
Landmark Tracking Using Unrectified Omnidirectional Image for an Automated Guided Vehicle
In this paper, a study on landmark tracking using unrectified omnidirectional images for an automated guided vehicle is presented. The omnidirectional image from a catadioptric camera may appear distorted according to the height of an object. However, for a flat object on the floor, the distortion is negligible, which is advantageous for an on-the-ground landmark. The landmark used in this study was a Code-128 standard barcode, modified to suit the detection process: the barcode adopted a cyan background instead of white and bears a red strip on top for orientation. The image processing can begin tracking landmarks directly since no distortion rectification of the image is required. We adopted a topological map approach in which the automated guided vehicle moves from landmark to landmark. Experiments were conducted on a small four-wheel-drive, four-wheel-steering automated guided vehicle. The results were measured by the number of successful consecutive tracks of the landmark.
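A minimal Python/OpenCV sketch of segmenting such a cyan-and-red landmark by HSV thresholding is shown below; the HSV ranges, centroid-based orientation estimate, and synthetic test image are illustrative assumptions, not the paper's calibrated parameters.

import cv2
import numpy as np

# Approximate HSV ranges (assumed; would need calibration on the real camera).
CYAN_LO, CYAN_HI = (80, 80, 80), (100, 255, 255)
RED_LO, RED_HI = (0, 120, 80), (10, 255, 255)

def find_landmark(frame_bgr):
    """Return the cyan body centroid and a heading toward the red strip."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    cyan = cv2.inRange(hsv, np.array(CYAN_LO), np.array(CYAN_HI))
    red = cv2.inRange(hsv, np.array(RED_LO), np.array(RED_HI))

    def centroid(mask):
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])

    c_cyan, c_red = centroid(cyan), centroid(red)
    if c_cyan is None or c_red is None:
        return None
    # Vector from the cyan body to the red strip gives the landmark orientation.
    heading = np.arctan2(c_red[1] - c_cyan[1], c_red[0] - c_cyan[0])
    return c_cyan, heading

if __name__ == "__main__":
    demo = np.zeros((200, 200, 3), dtype=np.uint8)
    demo[80:120, 80:160] = (255, 255, 0)   # cyan patch (BGR)
    demo[60:80, 80:160] = (0, 0, 255)      # red strip (BGR)
    print(find_landmark(demo))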
A Comparison of Two Approaches for Collision Avoidance of an Automated Guided Vehicle Using Monocular Vision
In this paper, a comparison of two approaches for collision avoidance of an automated guided vehicle (AGV) using monocular vision is presented. The first approach is floor sampling. The floor where the AGV operates is usually monotone; thus, by sampling the floor, the information can be used to search for similar pixels and establish the floor plane in its vision. Any other object is then considered an obstacle and should be avoided. The second approach employs the Canny edge detection method, which allows accurate detection, close to the real object, with minimal false detections caused by image noise. Using this method, every edge detected is considered to be part of an obstacle, and the approach tries to avoid the obstacle nearest in its vision. Experiments were conducted in a controlled environment. The monocular camera is mounted on an ERP-42 Unmanned Solution robot platform and is the sole sensor providing information to the robot about its environment.
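The two approaches can be sketched in Python/OpenCV as follows; the sampling window, colour tolerance, and Canny thresholds are illustrative values rather than the parameters used in the experiments.

import cv2
import numpy as np

def floor_mask(frame_bgr, tol=25):
    """Floor-sampling approach: sample a patch just in front of the AGV
    (bottom-centre of the image, an assumed location) and mark pixels with
    a similar colour as floor; everything else is a potential obstacle."""
    h, w = frame_bgr.shape[:2]
    patch = frame_bgr[h - 40:h, w // 2 - 40:w // 2 + 40].reshape(-1, 3)
    mean = patch.mean(axis=0)
    diff = np.linalg.norm(frame_bgr.astype(float) - mean, axis=2)
    return diff < tol  # True where the pixel looks like floor

def nearest_edge_row(frame_bgr, low=80, high=160):
    """Canny approach: treat every edge as part of an obstacle and return
    the row of the lowest (i.e. nearest) edge pixel in the image."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    rows = np.nonzero(edges)[0]
    return int(rows.max()) if rows.size else None

if __name__ == "__main__":
    demo = np.full((240, 320, 3), 120, dtype=np.uint8)  # plain "floor"
    demo[100:140, 140:180] = (0, 0, 0)                  # dark box as obstacle
    print(floor_mask(demo).mean(), nearest_edge_row(demo))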
Bar Code Detection Using Omnidirectional Vision for Automated Guided Vehicle Navigation
In this paper, a study on the detectability and readability of barcodes using an omnidirectional vision system for an automated guided vehicle is presented. Images from an omnidirectional camera are known to be distorted according to the height of the object. We present an algorithm for detecting and reading barcodes successfully without correcting the image distortion. Experiments were conducted both when the AGV was in motion and at rest. Three contributing factors were identified for successful barcode detection and reading.
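As one possible illustration of barcode localisation in an unrectified frame, the following Python/OpenCV sketch uses a generic gradient-texture heuristic; it is not the algorithm proposed in the paper, and all parameters and the synthetic test image are assumed.

import cv2
import numpy as np

def barcode_candidates(frame_bgr, min_area=500):
    """Flag regions with dense parallel-line texture, a generic barcode cue."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    grad_x = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    grad_y = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    # Bars produce strong x-gradients and comparatively weak y-gradients.
    response = cv2.convertScaleAbs(cv2.subtract(np.abs(grad_x), np.abs(grad_y)))
    _, binary = cv2.threshold(response, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]

if __name__ == "__main__":
    demo = np.full((240, 320), 255, dtype=np.uint8)
    demo[100:160, 60:200:4] = 0  # synthetic vertical bars as a stand-in barcode
    print(barcode_candidates(cv2.cvtColor(demo, cv2.COLOR_GRAY2BGR)))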
Landmark guided trajectory of an automated guided vehicle using omnidirectional vision
The omnidirectional camera is very useful for tracking a landmark for an automated guided vehicle (AGV). The omnidirectional camera can sense objects 360° around the AGV, thus eliminating the need for camera panning or robot reorientation. The image produced by an omnidirectional camera is usually highly distorted; however, the distortion depends only on the height of the object, so an object with negligible height has negligible image distortion. With this feature in mind, this research investigates the trajectory generated by an AGV towards an identified and recognized landmark using an omnidirectional camera, without rectifying the distortion into a perspective view. The research work involves landmark identification and recognition using image processing steps. The landmarks used were Code-128 barcodes with a cyan background and a red orientation marker, enlarged to four different sizes. The landmark identification and recognition are performed on the image captured by the omnidirectional camera, which was mounted on the AGV and remained the sole range sensor for the AGV to sense its environment. Three fundamental trajectories used in robotic navigation, namely straight, left turn, and right turn, were tested to present the trajectory of an AGV guided by a landmark. The AGV was modelled using the bicycle model, and its trajectory was simulated using MATLAB/Simulink. The simulation work was then validated with experimental work, in which a proportional control was applied for the AGV to move toward the landmark. All experiments were conducted in a laboratory environment with controlled illumination. The work thus demonstrates that the image captured using an omnidirectional camera can be used to identify and recognize a landmark without going through the typical unwarping of the omnidirectional image into a perspective view; the important navigational information for the vision-based AGV can be extracted directly from the camera feed.
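A minimal Python sketch of the bicycle model with a proportional steering law toward a landmark is given below (the published work used MATLAB/Simulink); the wheelbase, speed, gain, steering limit, and landmark position are illustrative values, not the experimental parameters.

import math

def simulate(landmark=(3.0, 1.0), wheelbase=0.5, speed=0.3,
             kp=1.5, dt=0.05, steps=400):
    """Kinematic bicycle model driven by a proportional steering law
    toward a fixed landmark (all values are illustrative)."""
    x, y, theta = 0.0, 0.0, 0.0
    path = []
    for _ in range(steps):
        bearing = math.atan2(landmark[1] - y, landmark[0] - x)
        error = math.atan2(math.sin(bearing - theta), math.cos(bearing - theta))
        delta = max(-0.6, min(0.6, kp * error))  # steering limit of about 34 degrees
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        theta += speed / wheelbase * math.tan(delta) * dt
        path.append((round(x, 3), round(y, 3)))
        if math.hypot(landmark[0] - x, landmark[1] - y) < 0.05:
            break  # landmark reached
    return path

if __name__ == "__main__":
    trajectory = simulate()
    print(len(trajectory), trajectory[-1])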