21 research outputs found

    Characterization and performance of rice husk as additive in green ceramic water filter fabricated by slip-casting

    Purpose The purpose of this study is to investigate the use of rice husk (RH), a green material derived from agricultural waste with the ability to absorb heavy metals, which has been used in wastewater treatment. In this research, a kaolin-based green ceramic water filter (CWF) incorporating two different additives (RH and zeolite-based RH ash [RHA]) was successfully fabricated. Design/methodology/approach The weight ratio of kaolin:additive was varied (90:10, 80:20 and 70:30) and the filters were fabricated via the slip-casting technique. The green CWFs were dried (60°C for 1 h), followed by sintering (1,200°C). Findings The green CWF of kaolin:RH with a weight ratio of 70:30 showed the best properties and satisfactory performance, with a porous cross-section microstructure, the highest porous area (4.58 ”m²), good structure, the lowest shrinkage (8.00%), the highest porosity (45.10%), the lowest density (1.79 g cm⁻³), the highest water absorption (55.50%) and a hardness of 241.40 HV. This green CWF also achieved good permeability (42.00 L m⁻² h⁻¹) and removal of the textile dye (27.88%). Satisfactory characterization and good textile dye removal performance (75.47%) were also achieved by the green CWF with kaolin:zeolite-based RHA at a weight ratio of 80:20. Research limitations/implications This research is focused on green CWFs with RH and zeolite-based RHA at specific amounts, using the specific characterization analysis methods described. Practical implications The use of low-cost materials from agricultural by-products/wastes to treat dye wastewater will promote the use of green materials. Social implications Using low-cost waste materials from agricultural by-products/wastes to treat dye wastewater avoids the waste sludge that can pollute the environment and create serious health issues. Originality/value All the kaolin-based green CWFs incorporating two different additives (RH and zeolite-based RHA), fabricated using a simple slip-casting technique, have shown the potential to be used as filters in wastewater treatment applications.

    Quality grading of soybean seeds using image analysis

    Image processing and machine learning techniques are applied to the quality grading of soybean seeds. Quality grading is a very important process for the soybean industry and soybean farmers, yet some critical problems still need to be overcome. The key contributions of this paper are, first, a method to eliminate shadow noise in order to segment soybean seeds with high quality; second, a novel color feature that is robust to illumination changes, reducing the problem of color differences; and third, an approach to discover a set of features and form a classifier model that strengthens the discrimination power for soybean classification. This study used background subtraction to reduce shadows appearing in the captured image, and proposed a method to extract a color feature robust to illumination changes, namely the H component of the HSI color model. We propose a classifier model that combines the color histogram of the H component in the HSI model with GLCM statistics, representing the color and texture features to strengthen the discrimination power of soybean grading and to handle the shape variance within each soybean seed class. SVM classifiers are trained to identify normal seeds, purple seeds, green seeds, wrinkled seeds, and other seed types. We conducted experiments on a dataset composed of 1,320 soybean seeds and 6,600 seed images with varying brightness levels. The experimental results achieved accuracies of 99.2%, 97.9%, 100%, 100%, 98.1%, and 100% for overall seeds, normal seeds, purple seeds, green seeds, wrinkled seeds, and other seeds, respectively.
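    A minimal sketch of the kind of feature pipeline described above, assuming OpenCV, scikit-image and scikit-learn: an illumination-robust hue histogram combined with GLCM statistics, feeding an SVM. HSV is used as a stand-in for the HSI model, and all parameter values (histogram bins, quantisation level, GLCM offsets, SVM kernel) are assumptions rather than the paper's settings.

    import cv2
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.svm import SVC

    def h_histogram(bgr, mask, bins=32):
        # Histogram of the hue channel over segmented seed pixels only;
        # hue is largely insensitive to brightness changes.
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        h = hsv[:, :, 0][mask > 0]
        hist, _ = np.histogram(h, bins=bins, range=(0, 180))
        return hist / max(hist.sum(), 1)

    def glcm_stats(gray, levels=8):
        # A few co-occurrence statistics as texture features.
        q = (gray // (256 // levels)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    def seed_features(bgr, mask):
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        return np.hstack([h_histogram(bgr, mask), glcm_stats(gray)])

    # Training: X stacks one feature vector per seed image, y holds the
    # grade labels (normal, purple, green, wrinkled, other).
    # clf = SVC(kernel="rbf").fit(X, y)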

    Automatic Classification of Seafloor Image Data by Geospatial Texture Descriptors

    A novel approach for automatic context-sensitive classification of spatially distributed image data is introduced. The proposed method targets applications of seafloor habitat mapping but is generally not limited to this domain or use case. Spatial context information is incorporated in a two-stage classification process, where in the second step a new descriptor for patterns of feature class occurrence, according to a generically defined classification scheme, is applied. The method is based on supervised machine learning, where numerous state-of-the-art approaches are applicable. The descriptor computation originates from texture analysis in digital image processing: patterns of feature class occurrence are perceived as a texture-like phenomenon, and the descriptors are therefore denoted Geospatial Texture Descriptors. The proposed method was extensively validated on a set of more than 4,000 georeferenced video mosaics acquired at the Haakon Mosby Mud Volcano north-west of Norway, recorded during cruise ARK XIX3b of the German research vessel Polarstern. The underlying classification scheme was derived from a scheme developed for manual annotation of the same dataset in the course of Jerosch [2006]. Features of interest are related to methane discharge at mud volcanoes, which are considered a significant source of methane emission. In the experimental evaluation, based on the prepared training and test data, the application of the proposed method achieved a major improvement in classification precision compared to local classification, as well as to classification based on the raw data from the local spatial context. The classification precision was particularly improved for rarely occurring classes. In a further comparison with annotated data available from Jerosch [2006], the regional setting of the investigation area obtained by the proposed method was found to be almost equivalent to the results of an experienced scientist.
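    A minimal sketch of the two-stage idea: after a first-stage classifier assigns a class label to every georeferenced mosaic cell, a texture-like descriptor of class co-occurrence in the spatial neighbourhood is computed and passed to a second-stage classifier. The window size and the co-occurrence formulation here are illustrative assumptions, not the exact Geospatial Texture Descriptor of the thesis.

    import numpy as np

    def class_cooccurrence_descriptor(label_map, row, col, radius=2, n_classes=5):
        # Co-occurrence counts of class labels (integers 0..n_classes-1)
        # for horizontally and vertically adjacent cells inside a
        # (2*radius+1)^2 window centred on (row, col).
        r0, r1 = max(0, row - radius), min(label_map.shape[0], row + radius + 1)
        c0, c1 = max(0, col - radius), min(label_map.shape[1], col + radius + 1)
        window = label_map[r0:r1, c0:c1]
        cooc = np.zeros((n_classes, n_classes), dtype=float)
        for a, b in zip(window[:, :-1].ravel(), window[:, 1:].ravel()):
            cooc[a, b] += 1          # horizontal neighbour pairs
        for a, b in zip(window[:-1, :].ravel(), window[1:, :].ravel()):
            cooc[a, b] += 1          # vertical neighbour pairs
        total = cooc.sum()
        return (cooc / total).ravel() if total else cooc.ravel()

    # Second stage: concatenate this descriptor with the local features of
    # each cell and retrain any standard supervised classifier.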

    A VISION-BASED QUALITY INSPECTION SYSTEM FOR FABRIC DEFECT DETECTION AND CLASSIFICATION

    Quality inspection of textile products is an important issue for fabric manufacturers. It is desirable to produce the highest quality goods in the shortest amount of time possible. Fabric faults or defects are responsible for nearly 85% of the defects found by the garment industry, and manufacturers recover only 45 to 65% of their profits from second- or off-quality goods. There is therefore a need for reliable automated woven fabric inspection methods in the textile industry. Numerous methods have been proposed for detecting defects in textiles. These methods are generally grouped into three main categories according to the techniques they use for texture feature extraction, namely statistical approaches, spectral approaches and model-based approaches. In this thesis, we study one method from each category and propose their combinations in order to obtain improved fabric defect detection and classification accuracy. The three chosen methods are the grey level co-occurrence matrix (GLCM) from the statistical category, the wavelet transform from the spectral category and the Markov random field (MRF) from the model-based category. We identify the most effective texture features for each of these methods and for different fabric types in order to combine them. Using the GLCM, we identify the optimal number of features, the optimal quantisation level of the original image and the optimal inter-sample distance to use, and we identify the optimal GLCM features for different types of fabrics and for three different classifiers. Using the wavelet transform, we compare the defect detection and classification performance of features derived from the undecimated discrete wavelet transform with that of features derived from the dual-tree complex wavelet transform, and we identify the best features for different types of fabrics. Using the Markov random field, we study the fabric defect detection and classification performance of features derived from different Gaussian Markov random field models of orders 1 through 9, and identify the best model order for each fabric type. Finally, we propose three combination schemes of the best features identified from the three methods and study their fabric defect detection and classification performance. They generally lead to improved performance compared to the individual methods, although two of them need further improvement.
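    As a rough illustration of the wavelet branch of this comparison, the sketch below computes subband-energy texture features with PyWavelets. The decimated 2-D DWT is used here for brevity, whereas the thesis evaluates the undecimated DWT and the dual-tree complex wavelet transform; the wavelet choice and level count are placeholders.

    import numpy as np
    import pywt

    def wavelet_energy_features(gray_window, wavelet="db4", level=3):
        # Mean energy of each detail subband of a 2-D DWT of a fabric window.
        coeffs = pywt.wavedec2(gray_window.astype(float), wavelet, level=level)
        feats = []
        for cH, cV, cD in coeffs[1:]:          # skip the approximation subband
            for sub in (cH, cV, cD):
                feats.append(float(np.mean(sub * sub)))
        return np.array(feats)

    # Defect detection: compare each window's feature vector against a model
    # learned from defect-free fabric; deviating windows are flagged and then
    # classified by defect type.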

    Flexible Hardware Architectures for Retinal Image Analysis

    Millions of people all around the world are affected by diabetes.
Several ocular complications such as diabetic retinopathy are caused by diabetes, which can lead to irreversible vision loss or even blindness if not treated. Regular comprehensive eye exams by eye doctors are required to detect the diseases at earlier stages and permit their treatment. As a preventive solution, a screening protocol involving the use of digital fundus images was adopted. This allows eye doctors to monitor changes in the retina and detect any presence of eye disease. This solution made regular examinations widely available, even to populations in remote and underserved areas. With the resulting large number of retinal images, automated techniques to process them are required. Automated eye disease detection techniques have been widely addressed by the research community and have now reached a high level of maturity, which allows the deployment of telemedicine solutions. In this thesis, we address the problem of processing a high volume of retinal images in a reasonable time. This is mandatory to allow the practical use of the developed techniques in a clinical context. We focus on two steps of the retinal image processing pipeline. The first step is retinal image quality assessment. The second step is retinal blood vessel segmentation. The evaluation of the quality of retinal images after acquisition is a primary task for the proper functioning of any automated retinal image processing system. The role of this step is to classify the acquired images according to their quality, which allows an automated system to request a new acquisition in case of a poor quality image. Several algorithms to evaluate the quality of retinal images have been proposed in the literature. However, even though the acceleration of this task is required, especially to allow the creation of mobile systems for capturing retinal images, it has not yet been addressed in the literature. In this thesis, we target an algorithm that computes image features to allow their classification as bad, medium or good quality. We identified the computation of image features as a repetitive task that necessitates acceleration. We were particularly interested in accelerating the Run-Length Matrix (RLM) algorithm. We proposed a first, fully software implementation in the form of an embedded system based on Xilinx's Zynq technology. To accelerate the features computation, we designed a co-processor able to compute the features in parallel, implemented on the programmable logic of the Zynq FPGA. We achieved an acceleration of 30.1× over the software implementation for the feature-computation part of the RLM algorithm. Retinal blood vessel segmentation is a key task in the retinal image processing pipeline. Blood vessels and their characteristics are good indicators of retina health. In addition, their segmentation can also help to segment the red lesions that are indicators of diabetic retinopathy. Several techniques have been proposed in the literature to segment retinal blood vessels, and hardware architectures have also been proposed to accelerate blood vessel segmentation. The existing architectures lack performance and programming flexibility, especially for high-resolution images. In this thesis, we targeted two techniques, matched filtering and line operators. The matched filtering technique was targeted mainly because of its popularity.
For this technique, we proposed two different architectures: a custom hardware architecture implemented on FPGA, and an Application-Specific Instruction-set Processor (ASIP) based architecture. The custom hardware architecture was optimized in terms of area and timing to achieve higher performance than existing implementations, and it outperforms all existing implementations in terms of throughput. For the ASIP-based architecture, we identified two bottlenecks related to data access and to the computational intensity of the algorithm, and we designed two specific instructions added to the processor datapath. The ASIP was made 7.7× faster in terms of execution time compared to its basic architecture. The second technique for blood vessel segmentation is the Multi-Scale Line Detector (MSLD) algorithm. The MSLD algorithm was selected because of its performance and its potential to detect small blood vessels. However, the algorithm works at multiple scales, which makes it memory-intensive. To solve this problem and allow the acceleration of its execution, we proposed a memory-efficient version of the algorithm, designed and implemented on FPGA. The proposed architecture drastically reduces the memory requirements of the algorithm by reusing computations and through SW/HW co-design. The two hardware architectures proposed for retinal blood vessel segmentation were made flexible enough to process both low and high resolution images. This was achieved by developing a specific compiler able to generate low-level HDL descriptions of the algorithm from a set of algorithm parameters. The compiler enabled us to optimize performance and development time. In this thesis, we introduce two novel architectures which are, to the best of our knowledge, the only ones able to process both low and high resolution images.
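    For reference, a plain-Python version of the run-length feature computation accelerated by the co-processor described above might look like the sketch below. The quantisation depth, the horizontal-runs-only formulation and the single Short Run Emphasis feature shown are simplifying assumptions; the thesis implementation's exact feature set and fixed-point details are not reproduced.

    import numpy as np

    def run_length_matrix(gray, levels=8):
        # rlm[g, r-1] counts horizontal runs of quantised grey level g
        # with length r. Each image row is independent, which is what
        # makes this computation amenable to parallel hardware.
        q = (gray.astype(np.uint16) * levels // 256).astype(np.uint8)
        rlm = np.zeros((levels, q.shape[1]), dtype=np.int64)
        for row in q:
            run, current = 1, row[0]
            for pix in row[1:]:
                if pix == current:
                    run += 1
                else:
                    rlm[current, run - 1] += 1
                    run, current = 1, pix
            rlm[current, run - 1] += 1
        return rlm

    def short_run_emphasis(rlm):
        # Classic SRE feature: emphasises fine texture (short runs).
        r = np.arange(1, rlm.shape[1] + 1)
        return (rlm / (r * r)).sum() / rlm.sum()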

    FPGA in image processing supported by IOPT-Flow

    Image processing is widely used in the most diverse industries, and the OpenCV library is one of the tools most commonly used to perform it. Although image processing algorithms can be implemented in software, they can also be implemented in hardware, and in some cases the execution time can be shorter than that achieved in software. The main goal of this work is to evaluate the use of VHDL, DS-Pnets, and IOPT-Flow to develop image processing systems in hardware, on FPGA-based platforms. To enable this, a validation platform was developed. During this work, a set of image processing algorithms was specified in VHDL and/or as DS-Pnets; these were validated using the IOPT-Flow validation tool and/or the Xilinx ISE Simulator. The automatic VHDL code generator from the IOPT-Flow framework was used to translate DS-Pnet models into implementation code. The FPGA-based implementations were compared with software implementations supported by the OpenCV library. The created DS-Pnet models were added to a folder of the IOPT-Flow editor, creating an image processing library. It was possible to conclude that the DS-Pnets and their associated IOPT-Flow tools support the development of image processing systems. These tools, which simplify the development of image processing systems, are available online at http://gres.uninova.pt/iopt-flow/
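    As an illustration of the software side of such a comparison, the snippet below times an OpenCV operation; Sobel filtering and the image file name are placeholders, since the actual algorithm set of this work is specified in VHDL/DS-Pnets. The measured time would be set against the FPGA execution time.

    import time
    import cv2

    img = cv2.imread("test_image.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
    assert img is not None, "image not found"

    start = time.perf_counter()
    grad = cv2.Sobel(img, cv2.CV_16S, 1, 0, ksize=3)  # horizontal gradient
    edges = cv2.convertScaleAbs(grad)                  # scale back to 8 bits
    elapsed = time.perf_counter() - start
    print(f"software execution time: {elapsed * 1e3:.2f} ms")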

    Ripeness Classification Of Oil Palm Fruit To Ensure Optimum Quantity Of Oil Using Image Processing Techniques

    The project involves detecting the optimum quantity and quality of oil based on the ripeness of oil palm fruit, using suitable digital image processing techniques. The objective of this project is to develop an easy and flexible system with which the user can classify the maturity of the fruit using a CCD camera and the MATLAB Image Processing Toolbox.
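    A colour-threshold ripeness check in this spirit could look like the sketch below. The project used the MATLAB Image Processing Toolbox; Python/OpenCV is used here for consistency with the other examples, and the hue/saturation thresholds are hypothetical values that would need tuning on real fruit images.

    import cv2

    def ripeness_score(bgr_image):
        # Ripe oil palm fruit tends toward orange-red, unripe fruit toward
        # dark purple/black; score = fraction of fruit pixels in the
        # (hypothetical) ripe hue band.
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        ripe_mask = cv2.inRange(hsv, (0, 80, 80), (25, 255, 255))
        fruit_mask = cv2.inRange(hsv, (0, 40, 20), (180, 255, 255))
        fruit_pixels = cv2.countNonZero(fruit_mask)
        return cv2.countNonZero(ripe_mask) / fruit_pixels if fruit_pixels else 0.0

    # A simple rule: classify as ripe when the score exceeds a tuned threshold.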

    Hybrid FPGA: Architecture and Interface

    Hybrid FPGAs (Field Programmable Gate Arrays) are composed of general-purpose logic resources with different granularities, together with domain-specific coarse-grained units. This thesis proposes a novel hybrid FPGA architecture with embedded coarse-grained Floating Point Units (FPUs) to improve the floating point capability of FPGAs. Based on the proposed hybrid FPGA architecture, we examine three aspects to optimise the speed and area for domain-specific applications. First, we examine the interface between large coarse-grained embedded blocks (EBs) and fine-grained elements in hybrid FPGAs. The interface includes parameters for varying: (1) the aspect ratio of EBs, (2) the position of the EBs in the FPGA, (3) the I/O pin arrangement of EBs, (4) the interconnect flexibility of EBs, and (5) the location of additional embedded elements such as memory. Second, we examine the interconnect structure for hybrid FPGAs. We investigate how large, high-density EBs affect the routing demand of hybrid FPGAs over a set of domain-specific applications. We then propose three routing optimisation methods to meet the additional routing demand introduced by large EBs: (1) identifying the best separation distance between EBs, (2) adding routing switches on EBs to increase routing flexibility, and (3) introducing a wider channel width near the edges of EBs. We study and compare the trade-offs in delay, area and routability of these three optimisation methods. Finally, we employ common subgraph extraction to determine the number of floating point adders/subtractors, multipliers and wordblocks in the FPUs. The wordblocks include registers and can implement fixed point operations. We study the area, speed and utilisation trade-offs of the selected FPU subgraphs in a set of floating point benchmark circuits. We develop an optimised coarse-grained FPU, taking into account both architectural and system-level issues. Furthermore, we investigate the trade-offs between granularities and performance by composing small FPUs into a large FPU. The results of this thesis will help in designing a domain-specific hybrid FPGA that meets user requirements by optimising for speed, area or a combination of speed and area.
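    As a toy illustration of the common subgraph extraction step, the sketch below counts producer-to-consumer operator pairs across floating point dataflow graphs; the most frequent pairs are candidates for fusing into a coarse-grained FPU datapath (e.g. multiply-add). The benchmark graphs shown are hypothetical, and the real flow considers larger subgraphs and wordblocks.

    from collections import Counter

    # Each benchmark dataflow graph: a list of (producer_op, consumer_op)
    # edges. These example graphs are made up for illustration.
    benchmarks = {
        "circuit_a": [("mul", "add"), ("mul", "add"), ("add", "sub")],
        "circuit_b": [("mul", "add"), ("add", "add"), ("mul", "mul")],
    }

    pattern_counts = Counter()
    for edges in benchmarks.values():
        pattern_counts.update(edges)

    # The most frequent producer->consumer pairs suggest which composite
    # FPU structures to harden in the embedded blocks.
    for (src, dst), n in pattern_counts.most_common():
        print(f"{src} -> {dst}: {n} occurrences")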