
    Design and implementation of a convolutional neural network on an edge computing smartphone for human activity recognition

    Edge computing aims to integrate computing into everyday settings, enabling systems that are context-aware and private to the user. With the increasing success and popularity of deep learning methods, there is growing demand to apply these techniques in mobile and wearable computing scenarios. In this paper, we assess the memory and execution time requirements of a deep human activity recognition system implemented on mid-range smartphone hardware, along with the memory implications for embedded hardware. The paper presents the design of a convolutional neural network (CNN) for a human activity recognition scenario: the CNN layers automate feature learning, and we examine how hyper-parameters such as the number of filters and the filter size influence the CNN's performance. The proposed CNN was more robust and better able to detect activities with temporal dependence than models based on statistical machine learning techniques, and it obtained an accuracy of 96.4% in a five-class static and dynamic activity recognition scenario. We calculated the memory consumption and execution time required to run the proposed model on a mid-range smartphone. Post-training quantization of weights per channel and activations per layer to 8 bits of precision yields classification accuracy within 2% of the floating-point network for this dense convolutional architecture. Almost all of the size and execution time reduction in the optimized model came from weight quantization; quantizing to 8 bits shrank the model more than fourfold, giving a feasible model capable of fast on-device inference.
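As a rough illustration of the 8-bit post-training quantization workflow described above, the sketch below uses the standard TensorFlow Lite converter, which quantizes convolution weights per channel and activations per tensor. The model file, data file, and sample count are placeholders, not the authors' code.

```python
import numpy as np
import tensorflow as tf

# Hypothetical artifacts: a trained Keras HAR CNN and a set of sensor windows.
model = tf.keras.models.load_model("har_cnn.h5")
x_calib = np.load("har_windows.npy")  # shape: (N, window_len, channels)

def representative_dataset():
    # A few hundred samples suffice to calibrate activation ranges.
    for window in x_calib[:200]:
        yield [window[np.newaxis, ...].astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force int8 kernels end to end, matching the 8-bit scheme in the abstract.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("har_cnn_int8.tflite", "wb") as f:
    f.write(tflite_model)  # typically ~4x smaller than the float32 model
```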

    In-field wheat yield estimation combining a lightweight spike detection model with offline Android software development

    The number of spikes per unit area is a key yield component for cereal crops such as wheat and is widely used in wheat research for crop improvement. With the maturing of smartphone imaging hardware and recent advances in image processing and lightweight deep learning techniques, it is possible to acquire high-resolution images with a smartphone camera and then analyze wheat spikes per unit area using pre-trained artificial intelligence algorithms. By combining the detected spike number with variety-based spikelet number and grain weight, it is feasible to carry out near real-time estimation of yield potential for a given wheat variety in the field. This AI-driven approach becomes more powerful as more varieties are included in the training datasets, enabling an effective and valuable approach to yield-related studies in breeding, cultivation, and agricultural production. In this study, we present a novel smartphone-based software application that combines smartphone imaging and lightweight, embedded deep learning with yield prediction algorithms, and we applied the software to wheat cultivation experiments. This open-source Android application, called YieldQuant-Mobile (YQ-M), was developed to measure a key yield trait (spikes per unit area) and then estimate yield from that trait. Through YQ-M and smartphones, we standardized the in-field imaging of wheat plots and streamlined the detection of spikes per unit area and the prediction of yield, without requiring in-field WiFi or a mobile network. In this article, we introduce YQ-M in detail, including: 1) the data acquisition protocol, designed to standardize the collection of overhead wheat images using Android smartphones; 2) the pre-processing of acquired images to reduce the computational time of image analysis; 3) the extraction of wheat spike features through deep learning (YOLOv4) and transfer learning; 4) the use of TensorFlow Lite to transform the trained model into a lightweight MobileNetV2-YOLOv4 model, so that wheat spike detection can run on an Android smartphone; and 5) the establishment of an on-phone database, built with the Android SDK and SQLite, that incorporates historic datasets of key yield components collected from different wheat varieties into YQ-M. Additionally, to ensure our work reaches the broader research community, we developed a graphical user interface (GUI) for YQ-M that contains: 1) a spike detection module, which identifies the number of wheat spikes in a smartphone image; and 2) a yield prediction module, which produces a near real-time yield prediction from the detected spike number and related parameters such as wheat variety, place of production, accumulated temperature, and unit area. We tested YQ-M with 80 representative varieties (240 one-square-meter plots, three replicates) selected from the main wheat-producing areas in China. The computed accuracy, recall, average accuracy, and F1-score of the learning model are 84.43%, 91.05%, 91.96%, and 0.88, respectively. The coefficient of determination between YQ-M-predicted yield values and post-harvest manual yield measurements is 0.839 (n=80 varieties, P<0.05; root mean square error = 17.641 g/m²). These results suggest that YQ-M achieves high accuracy in detecting wheat spikes per unit area and produces consistent yield predictions for the selected wheat varieties under complex field conditions. Furthermore, YQ-M can easily be extended to incorporate new varieties and crop species, indicating the usability and extensibility of the software. Hence, we believe YQ-M can provide a step change in our ability to analyze yield-related components across wheat varieties, offering a low-cost, accessible, and reliable approach that can contribute to smart breeding, cultivation and, potentially, agricultural production.
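As a concrete illustration of step 5 above, the sketch below shows one plausible schema for the on-phone variety-parameter store. YQ-M itself uses the Android SDK's SQLite APIs; for brevity this sketch uses Python's sqlite3 module, and all table names, column names, and values are hypothetical.

```python
import sqlite3

# Hypothetical schema for the variety-parameter store described in step 5.
conn = sqlite3.connect("yqm_varieties.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS variety_params (
        variety          TEXT PRIMARY KEY,  -- wheat variety name
        region           TEXT,              -- place of production
        accum_temp_c     REAL,              -- accumulated temperature
        grains_per_spike REAL,              -- variety-specific average
        tgw_g            REAL               -- thousand-grain weight (g)
    )
""")
conn.execute(
    "INSERT OR REPLACE INTO variety_params VALUES (?, ?, ?, ?, ?)",
    ("ExampleVariety-1", "Huang-Huai", 2100.0, 32.0, 42.0),  # placeholder row
)
conn.commit()

# The yield prediction module can then look up parameters offline:
row = conn.execute(
    "SELECT grains_per_spike, tgw_g FROM variety_params WHERE variety = ?",
    ("ExampleVariety-1",),
).fetchone()
print(row)  # -> (32.0, 42.0)
```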
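The abstract states that yield is estimated by combining the detected spike number with variety-based spikelet number and grain weight. A minimal sketch of that kind of calculation follows; the exact formula, parameter names, and example values are assumptions, not taken from the paper.

```python
def estimate_yield_g_per_m2(spikes_per_m2: float,
                            grains_per_spike: float,
                            thousand_grain_weight_g: float) -> float:
    """Estimate grain yield (g/m^2) from detected spike density.

    spikes_per_m2: spike count from the detection module for a 1 m^2 plot.
    grains_per_spike: variety-specific average (spikelets x grains per spikelet).
    thousand_grain_weight_g: variety-specific thousand-grain weight in grams.
    """
    grains_per_m2 = spikes_per_m2 * grains_per_spike
    return grains_per_m2 * thousand_grain_weight_g / 1000.0

# Example with made-up numbers: 450 spikes/m^2, 32 grains per spike,
# and a 42 g thousand-grain weight give 604.8 g/m^2.
print(estimate_yield_g_per_m2(450, 32, 42))
```

Variety-specific parameters such as grains_per_spike and thousand_grain_weight_g would be looked up per variety from a store like the one sketched above.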