
    Analysis of Hardware Accelerated Deep Learning and the Effects of Degradation on Performance

    As convolutional neural networks become more prevalent in research and real-world applications, making them faster and more robust will be a constant battle. This thesis investigates the effect of degradation introduced to an image prior to object recognition with a convolutional neural network, and experiments with methods to reduce the degradation and improve performance. Gaussian smoothing and additive Gaussian noise are the two degradation models analysed within this thesis; they are reduced with Gaussian and Butterworth masks using unsharp masking and smoothing, respectively. The results show that each degradation is disruptive to the performance of YOLOv3, with Gaussian smoothing producing a mean average precision of less than 20% and Gaussian noise producing a mean average precision as low as 0%. Reduction methods applied to the data yield a 1%-21% increase in mean average precision over the baseline, varying with the degradation model. These methods are also applied to an 8-bit quantized implementation of YOLOv3 intended to run on a Xilinx ZCU104 FPGA, which proved to be as robust as the floating-point network, with results within 2% mean average precision of the floating-point network. With the ZCU104 able to process 416x416 images at 25 frames per second, which is comparable to an NVIDIA RTX 2080, FPGAs are a viable solution for computing object detection on the edge. In conclusion, this thesis shows that degradation causes a convolutional neural network (quantized or floating point) to lose accuracy to the point that it is unable to accurately predict objects. However, the degradation can be reduced, and in most cases the performance of the network can be raised by using computer vision techniques to reduce the noise within the image.
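The two degradation models and the reduction idea described above can be sketched in a few lines of NumPy/SciPy. This is an illustrative simplification, not the thesis's actual pipeline: the function names and parameters are assumptions, greyscale float images are assumed, and the Butterworth-mask variant mentioned in the abstract is omitted in favour of plain Gaussian filtering.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(image, sigma_blur=2.0, noise_std=0.1, rng=None):
    """Apply the two degradation models from the abstract in sequence:
    Gaussian smoothing, then additive Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(image, sigma=sigma_blur)
    return blurred + rng.normal(0.0, noise_std, size=image.shape)

def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Counter Gaussian smoothing by adding back the high-frequency
    residual (the image minus its Gaussian-blurred copy)."""
    low_pass = gaussian_filter(image, sigma=sigma)
    return image + amount * (image - low_pass)

def denoise(image, sigma=1.0):
    """Counter additive Gaussian noise with Gaussian smoothing."""
    return gaussian_filter(image, sigma=sigma)
```

In a pipeline like the one described, a degraded image would pass through the matching reduction step before being fed to the detector; the gain reported in the abstract (1%-21% mean average precision) comes from exactly this kind of pre-processing.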

    Pose-invariant, model-based object recognition, using linear combination of views and Bayesian statistics

    This thesis presents an in-depth study on the problem of object recognition, and in particular the detection of 3-D objects in 2-D intensity images which may be viewed from a variety of angles. A solution to this problem remains elusive to this day, since it involves dealing with variations in geometry, photometry and viewing angle, noise, occlusions and incomplete data. This work restricts its scope to a particular kind of extrinsic variation: variation of the image due to changes in the viewpoint from which the object is seen. A technique is proposed and developed to address this problem, which falls into the category of view-based approaches, that is, a method in which an object is represented as a collection of a small number of 2-D views, as opposed to the generation of a full 3-D model. This technique is based on the theoretical observation that the geometry of the set of possible images of an object undergoing 3-D rigid transformations and scaling may, under most imaging conditions, be represented by a linear combination of a small number of 2-D views of that object. It is therefore possible to synthesise a novel image of an object given at least two existing and dissimilar views of the object, and a set of linear coefficients that determine how these views are to be combined in order to synthesise the new image. The method works in conjunction with a powerful optimization algorithm, to search for and recover the optimal linear combination coefficients that will synthesise a novel image which is as similar as possible to the target scene view. If the similarity between the synthesised and the target images is above some threshold, then an object is determined to be present in the scene and its location and pose are defined, in part, by the coefficients. The key benefit of using this technique is that, because it works directly with pixel values, it avoids the need for problematic, low-level feature extraction and solution of the correspondence problem.
As a result, a linear combination of views (LCV) model is easy to construct and use, since it only requires a small number of stored, 2-D views of the object in question, and the selection of a few landmark points on the object, a process which is easily carried out during the offline, model-building stage. In addition, this method is general enough to be applied across a variety of recognition problems and different types of objects. The development and application of this method is initially explored looking at two-dimensional problems, and then extending the same principles to 3-D. Additionally, the method is evaluated across synthetic and real-image datasets, containing variations in the objects' identity and pose. Future work on possible extensions to incorporate a foreground/background model and lighting variations of the pixels is examined.
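The core LCV idea — that a novel view's geometry is a linear combination of a small number of stored views — can be sketched over landmark coordinates. Note the simplification: the thesis recovers coefficients with an optimisation over pixel values, whereas this sketch fits them by least squares over corresponding landmark points, and the function names and the particular affine formulation are assumptions, not the thesis's own.

```python
import numpy as np

def fit_lcv(view1, view2, target):
    """Recover linear-combination coefficients mapping two basis views
    onto a target view. Each view is an (N, 2) array of corresponding
    landmark points. In this formulation each target coordinate is an
    affine combination of the basis views' x and y coordinates."""
    n = view1.shape[0]
    # Design matrix: one row [x1, y1, x2, y2, 1] per landmark.
    A = np.column_stack([view1, view2, np.ones(n)])
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coeffs  # shape (5, 2): one column per target coordinate

def synthesise(view1, view2, coeffs):
    """Predict landmark positions for a novel view from the two basis
    views and the recovered coefficients."""
    n = view1.shape[0]
    A = np.column_stack([view1, view2, np.ones(n)])
    return A @ coeffs
```

Under orthographic projection of a rigid rotation, this linear model reproduces the novel view exactly, which is the theoretical observation the abstract refers to; in the full method, the synthesised view would then be compared against the scene image to decide whether the object is present.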

    Automatic analysis of malaria infected red blood cell digitized microscope images

    Malaria is one of the three most serious diseases worldwide, affecting millions each year, mainly in the tropics where the most serious illnesses are caused by Plasmodium falciparum. This thesis is concerned with the automatic analysis of images of microscope slides of Giemsa-stained thin films of such malaria-infected blood, so as to segment red blood cells (RBCs) from the background plasma, to accurately and reliably count the cells, to identify those infected with a parasite, and thus to determine the degree of infection or parasitemia. Unsupervised techniques were used throughout owing to the difficulty of obtaining large quantities of training data annotated by experts, in particular for total RBC counts. The first two aims were met by optimisation of Fisher discriminants. For RBC segmentation, a well-known iterative thresholding method due originally to Otsu (1979) was used for scalar features such as the image intensity, and a novel extension of the algorithm was developed for multi-dimensional, colour data. Performance of the algorithms was evaluated and compared via ROC analysis and their convergence properties studied. Ways of characterising the variability of the image data and, if necessary, of mitigating it were discussed in theory. The size distribution of the objects segmented in this way indicated that optimisation of a Fisher discriminant could be further used for classifying objects as small artefacts, singlet RBCs, doublets, or triplets etc. of adjoining cells, provided optimisation was via a global search. Application of constraints on the relationships between the sizes of singlet and multiplet RBCs led to a number of tests that enabled clusters of cells to be reliably identified and accurate total RBC counts to be made. Development of an application to make such counts could be very useful both in research laboratories and in improving treatment of malaria.
Unfortunately, the very small number of pixels belonging to parasite infections means that it is difficult to segment parasite objects, and thus to identify infected RBCs and determine the parasitemia. Preliminary attempts to do so by similar, unsupervised means using Fisher discriminants, even when applied in a hierarchical manner, remain inconclusive on the evidence currently available, though they suggest that it may ultimately be possible to develop such a system. Appendices give details of material from old texts no longer easily accessible.
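The scalar thresholding method the thesis builds on, Otsu (1979), can be sketched in its standard histogram form: choose the grey level that maximises the between-class variance of the two resulting classes. This is a minimal sketch of the classic algorithm only, assuming greyscale intensities; the thesis's iterative variant and its novel multi-dimensional colour extension are not reproduced here.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Classic Otsu threshold: pick the intensity that maximises the
    between-class variance of the histogram split it induces."""
    hist, bin_edges = np.histogram(image.ravel(), bins=nbins)
    centres = (bin_edges[:-1] + bin_edges[1:]) / 2
    w = hist / hist.sum()
    # Cumulative class probability and cumulative mean for every split.
    w0 = np.cumsum(w)
    w1 = 1.0 - w0
    mu = np.cumsum(w * centres)
    mu_total = mu[-1]
    # Between-class variance, guarding against empty classes.
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu_total * w0[valid] - mu[valid]) ** 2 / (
        w0[valid] * w1[valid])
    return centres[np.argmax(sigma_b)]
```

Applied to a stained blood-film image, pixels below the threshold would fall on one side of the cell/plasma split; the RBC segmentation described above replaces this histogram criterion with an optimised Fisher discriminant and extends it to colour features.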