M2U-net: Effective and efficient retinal vessel segmentation for real-world applications
In this paper, we present a novel neural network architecture for retinal vessel segmentation that improves over the state of the art on two benchmark datasets, is the first to run in real time on high-resolution images, and has memory and processing requirements small enough for deployment on mobile and embedded systems. The M2U-Net is a new encoder-decoder architecture inspired by the U-Net. It adds pretrained MobileNetV2 components in the encoder and novel contractive bottleneck blocks in the decoder that, combined with bilinear upsampling, drastically reduce the parameter count to 0.55M, compared to 31.03M in the original U-Net. We evaluated its performance against a wide body of previously published results on three public datasets; on two of them, the M2U-Net achieves new state-of-the-art performance by a considerable margin. When implemented on a GPU, our method is the first to achieve real-time inference speeds on high-resolution fundus images. We also implemented the proposed network on an ARM-based embedded system, where it segments images in between 0.6 and 15 sec, depending on the resolution. Thus, the M2U-Net enables a number of applications of retinal vessel structure extraction, such as early diagnosis of eye diseases, retinal biometric authentication systems, and robot-assisted microsurgery
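A key part of the parameter reduction described above is replacing learned upsampling with parameter-free bilinear upsampling in the decoder. The following is a minimal pure-Python sketch of 2x bilinear upsampling to illustrate the general technique; it is an assumption-free illustration of the standard operation, not the authors' implementation.

```python
def bilinear_upsample(img, scale=2):
    """Parameter-free bilinear upsampling of a 2D grid (list of lists).

    Unlike a transposed convolution, this adds no learnable weights,
    which is one reason bilinear decoders keep parameter counts low.
    """
    h, w = len(img), len(img[0])
    H, W = h * scale, w * scale
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            # Map each output pixel back to fractional source coordinates.
            y = i * (h - 1) / (H - 1) if H > 1 else 0.0
            x = j * (w - 1) / (W - 1) if W > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            # Weighted average of the four surrounding source pixels.
            out[i][j] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out

up = bilinear_upsample([[0.0, 1.0], [2.0, 3.0]])  # 2x2 -> 4x4
```

Because the interpolation weights are fixed by geometry rather than learned, every upsampling stage contributes zero parameters to the model.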
Embedded Machine-Learning For Variable-Rate Fertiliser Systems: A Model-Driven Approach To Precision Agriculture
Efficient use of fertilisers, in particular of Nitrogen (N), is one of the rate-limiting factors in meeting global food production requirements. While N is a key driver in increasing crop yields, overuse can also lead to negative environmental and health impacts. It has been suggested that Variable-Rate Fertiliser (VRF) techniques may help to reduce excessive N applications. VRF seeks to spatially vary fertiliser input based on estimated crop requirements; however, a major challenge in the operational deployment of VRF systems is the automated processing of large amounts of sensor data in real time. Machine Learning (ML) algorithms have shown promise in their ability to process these large, high-velocity data streams and to produce accurate predictions. The newly developed Fuzzy Boxes (FB) algorithm has been designed with VRF applications in mind; however, no publicly available software implementation currently exists. Therefore, development of a prototype implementation of FB forms a component of this work. This thesis also employs a Hardware-in-the-Loop (HWIL) testing methodology using a potential target device in order to simulate a real-world VRF deployment environment. Using this environment simulation, two existing ML algorithms (Artificial Neural Network (ANN) and Support Vector Machine (SVM)) are compared against the prototype implementation of FB for applicability to VRF applications. It is shown that all tested algorithms could potentially be suitable for high-speed VRF when measured on prediction time and various accuracy metrics. All algorithms achieved higher than 84.5% accuracy, with FB20 reaching 87.21%. Prediction times varied widely: the fastest average predictor was an ANN (16.64 μs), while the slowest was FB20 (502.77 μs). All average prediction times were fast enough to achieve a spatial resolution of 31 mm when operating at 60 m/s, making all tested algorithms fast enough predictors for VRF applications
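The spatial-resolution claim above follows directly from latency times travel speed: the distance covered during one prediction is the finest spacing at which the fertiliser rate can change. A quick check using the figures quoted in the abstract (502.77 μs for FB20, 16.64 μs for the ANN, 60 m/s):

```python
def spatial_resolution_mm(pred_time_s, speed_m_per_s):
    """Distance travelled during one prediction, in millimetres --
    the finest spacing at which per-location rates can change."""
    return pred_time_s * speed_m_per_s * 1000.0  # metres -> mm

# Figures quoted in the abstract (FB20 is the slowest predictor).
fb20_mm = spatial_resolution_mm(502.77e-6, 60.0)  # ~30.2 mm
ann_mm = spatial_resolution_mm(16.64e-6, 60.0)    # ~1.0 mm
```

The slowest predictor resolves roughly 30 mm at 60 m/s, consistent with the ~31 mm resolution quoted in the abstract.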
Land Cover Classification Implemented in FPGA
The main focus of this dissertation is Land Use/Land Cover classification implemented in FPGA, taking advantage of its parallelism to reduce the time between mathematical operations. The classifiers implemented are the Decision Tree and Minimum Distance classifiers reviewed in the State of the Art chapter. The results obtained are intended to contribute to fire prevention and firefighting, through the information they extract about the fields to which the implementation is applied.
The region of interest is the Sado estuary, with future application to Mação, Santarém, within the FORESTER project, an area that had much of its territory burnt in the 2017 fires. The data acquired from the implementation can also help to update the previous land classification of the region.
Image processing can be performed on a variety of platforms, such as CPUs, GPUs and FPGAs, each with different advantages and disadvantages. Image processing can be described as massive data processing in a visual context, due to the large amount of information per image.
Several studies have accelerated classification techniques in hardware, but few have been applied in the same context as this dissertation. The outcome of this work shows the advantages of high-throughput data processing in hardware, in terms of both time and accuracy.
This dissertation examines how the classifiers handle the region of study, how correctly they classify it, and the major advantages of accelerating parts of the classifier, or the full classifier, in hardware. The results of implementing the classifiers in hardware, on a Zynq UltraScale+ MPSoC board, are compared against the equivalent CPU implementation
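Of the two classifiers named in the abstract, the Minimum Distance classifier is the simpler to sketch: each land-cover class is represented by the centroid of its training pixels, and each pixel is assigned to the nearest centroid. The following is a minimal pure-Python illustration of the general technique (not the dissertation's FPGA implementation); the class names and band values are invented for the example.

```python
def train_centroids(samples):
    """samples: {class_label: [feature_vector, ...]} -> per-class centroids."""
    centroids = {}
    for label, vecs in samples.items():
        n = len(vecs)
        centroids[label] = [sum(v[d] for v in vecs) / n
                            for d in range(len(vecs[0]))]
    return centroids

def classify(pixel, centroids):
    """Assign the pixel to the class with the nearest centroid.

    Squared Euclidean distance is used: it is monotone in the true
    distance, so the sqrt can be skipped -- a simplification that also
    suits fixed-point hardware pipelines.
    """
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: sqdist(pixel, centroids[lbl]))

# Toy example with two spectral bands and two hypothetical classes.
cents = train_centroids({
    "water":  [[0.1, 0.2], [0.2, 0.1]],
    "forest": [[0.8, 0.9], [0.9, 0.8]],
})
classify([0.15, 0.1], cents)  # -> "water"
```

The per-pixel work is a handful of multiply-accumulates and comparisons with no data dependence between pixels, which is what makes this classifier a natural fit for the FPGA parallelism discussed above.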
Real time vision-based implementation of plant disease identification system on FPGA
Plant diseases have become a serious problem, as they can cause significant reductions in both the quality and quantity of agricultural products. To overcome this loss, we implemented a computer-vision-based real-time system that can identify the type of plant disease. Computer-vision-based applications are computationally intensive and time consuming, so an FPGA-based implementation is proposed to achieve real-time identification of plant diseases. In this paper, an image processing algorithm is proposed for identifying two types of disease in potato leaves. The proposed algorithm works well on images taken under different luminance conditions. The hardware/software implementation of the proposed algorithm is done on a Xilinx ZYNQ SoC FPGA. Results show that our proposed algorithm achieves an accuracy of up to 90%, while the hardware implementation takes 0.095 seconds, achieving a performance gain of 76.8 times compared to the software implementation
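The abstract gives the hardware time and the speedup but not the software baseline; the latter follows from the two reported figures. A quick check:

```python
hw_time_s = 0.095   # hardware implementation time, from the abstract
speedup = 76.8      # reported performance gain over software

# Implied software baseline: hardware time scaled by the speedup.
sw_time_s = hw_time_s * speedup  # ~7.3 s per image in software
```

At roughly 7.3 s per image, the software path would be far too slow for real-time use, which is the motivation for the FPGA implementation.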
Simulation and implementation of novel deep learning hardware architectures for resource constrained devices
Corey Lammie designed mixed-signal memristive-complementary metal-oxide-semiconductor (CMOS) and field-programmable gate array (FPGA) hardware architectures, which were used to reduce the power and resource requirements of Deep Learning (DL) systems, both during inference and training. Disruptive design methodologies, such as those explored in this thesis, can be used to facilitate the design of next-generation DL systems