6 research outputs found

    Utilizing hierarchical extreme learning machine based reinforcement learning for object sorting

    Automatic, intelligent object sorting is an important task in which a robot arm carries each object from one location to another without human intervention. These objects vary in colour, shape, size and orientation. Many applications, such as fruit and vegetable grading, flower grading, and biopsy image grading, depend on sorting for structural arrangement. Traditional machine learning methods, which rely on handcrafted features, are used for this task; however, these features are sometimes not discriminative because of environmental factors such as lighting changes. In this study, a Hierarchical Extreme Learning Machine (HELM) is used as an unsupervised feature learner applied directly to the object observations, and HELM was found to be robust against external change. Reinforcement learning (RL) is used to find the optimal sorting policy that maps each object image to the object's location; RL is chosen because output labels are unavailable in this automatic task. Learning proceeds sequentially over many episodes, with sorting accuracy increasing from episode to episode until it reaches its maximum at the end of learning. The experimental results demonstrate that the proposed HELM-RL sorting achieves the same accuracy as the labelled, supervised HELM method after many episodes.
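    The episodic learning loop described in the abstract can be sketched as simple tabular Q-learning (a minimal sketch with hypothetical parameters: object features are assumed already reduced to discrete classes, actions are target bins, and reward is +1 for a correct placement; this is not the authors' exact HELM-RL implementation):

```python
import random

random.seed(0)

N_CLASSES = 4          # hypothetical number of object types (feature clusters)
N_BINS = 4             # target locations; the correct bin for class c is bin c
EPISODES = 2000
ALPHA, EPSILON = 0.5, 0.2

# Q[state][action]: expected reward for placing an object of a given
# class into a given bin (single-step episodes, so no discounting needed)
Q = [[0.0] * N_BINS for _ in range(N_CLASSES)]

for _ in range(EPISODES):
    cls = random.randrange(N_CLASSES)            # object observed this episode
    if random.random() < EPSILON:                # epsilon-greedy exploration
        action = random.randrange(N_BINS)
    else:
        action = max(range(N_BINS), key=lambda a: Q[cls][a])
    reward = 1.0 if action == cls else 0.0       # +1 when sorted correctly
    Q[cls][action] += ALPHA * (reward - Q[cls][action])

# Greedy policy after learning: which bin each object class is sent to
policy = [max(range(N_BINS), key=lambda a: Q[c][a]) for c in range(N_CLASSES)]
print(policy)
```

    Early episodes sort mostly at random; as Q-values accumulate, the greedy policy converges to the correct class-to-bin mapping, mirroring the accuracy growth over episodes described above.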

    Detection of Disease on Corn Plants Using Convolutional Neural Network Methods

    Deep learning remains an active and widely studied research area. In this study, deep learning was applied to the diagnosis of corn plant disease using the Convolutional Neural Network (CNN) method, with a dataset of 3,854 images of corn plant diseases covering three disease types: Common Rust, Gray Leaf Spot, and Northern Leaf Blight. The method detects disease in corn plants with an accuracy of 99%.
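    As a rough illustration of the convolution, activation, and pooling operations at the heart of such a CNN (a toy NumPy forward pass over a synthetic 6x6 image with a hand-picked edge kernel; not the paper's actual network or data):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trailing rows/cols are cropped."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy image: dark left half, bright right half (a sharp vertical edge)
img = np.zeros((6, 6))
img[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0]])   # responds to dark-to-bright transitions
fmap = max_pool(relu(conv2d(img, kernel)))
print(fmap.shape)
```

    A trained CNN learns many such kernels from the images themselves instead of using hand-picked ones, which is what makes it suited to distinguishing visually similar leaf diseases.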

    Unobtrusive hand gesture recognition using ultra-wide band radar and deep learning

    Hand function after stroke injuries is not regained rapidly and requires physical rehabilitation for at least 6 months. Due to the heavy burden on the healthcare system, assisted rehabilitation is prescribed for a limited time, after which so-called home rehabilitation is offered. It is therefore essential to develop robust solutions that facilitate monitoring while preserving the privacy of patients in a home-based setting. To meet these expectations, an unobtrusive solution based on radar sensing and deep learning is proposed. The multi-input multi-output convolutional eXtra trees (MIMO-CxT) is a new deep hybrid model used for hand gesture recognition (HGR) with impulse-radio ultra-wide band (IR-UWB) radars. It consists of a lightweight architecture based on a multi-input convolutional neural network (CNN) used in a hybrid configuration with extremely randomized trees (ETs). The model takes data from multiple sensors as input and processes them separately. The outputs of the CNN branches are concatenated before the prediction is made by the ETs. Moreover, the model uses depthwise separable convolution layers, which reduce computational cost and learning time while maintaining high performance. The model was evaluated on a publicly available dataset of gestures collected with three IR-UWB radars, achieving an average accuracy of 98.86%.
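    The computational saving from depthwise separable convolutions mentioned above can be illustrated with a parameter count (illustrative layer sizes, not the actual dimensions of MIMO-CxT):

```python
# Parameter counts for one convolutional layer, ignoring biases.
# Standard conv:       every output channel sees every input channel.
# Depthwise separable: a per-channel KxK depthwise conv followed by a
#                      1x1 pointwise conv that mixes channels.
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128  # hypothetical layer shape
std = standard_conv_params(k, c_in, c_out)        # 73,728 parameters
sep = depthwise_separable_params(k, c_in, c_out)  # 8,768 parameters
print(std, sep, round(std / sep, 1))              # roughly 8.4x fewer
```

    This roughly 8x reduction in parameters (and, correspondingly, in multiply-accumulate operations) is what makes such layers attractive for the lightweight architecture the abstract describes.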

    Data Collection and Machine Learning Methods for Automated Pedestrian Facility Detection and Mensuration

    Large-scale collection of pedestrian facility (crosswalks, sidewalks, etc.) presence data is vital to the success of efforts to improve pedestrian facility management, safety analysis, and road network planning. However, this kind of data is typically not available on a large scale due to the high labor and time costs that result from relying on manual data collection methods. Therefore, methods for automating this process using techniques such as machine learning are currently being explored by researchers. In our work, we mainly focus on machine learning methods for the detection of crosswalks and sidewalks from both aerial and street-view imagery. We test data from these two viewpoints individually and with an ensemble method that we refer to as our "dual-perspective prediction model". To obtain this data, we developed a data collection pipeline that combines crowdsourced pedestrian facility location data with aerial and street-view imagery from Bing Maps. In addition to the Convolutional Neural Network used to perform pedestrian facility detection with this data, we also trained a segmentation network to measure the length and width of crosswalks from aerial images. In our tests with a dual-perspective image dataset that was heavily occluded in the aerial view but relatively clear in the street view, our dual-perspective prediction model increased prediction accuracy, recall, and precision by 49%, 383%, and 15%, respectively, compared to a single-perspective model based on only aerial-view images. In our tests with satellite imagery provided by the Mississippi Department of Transportation, we achieved accuracies as high as 99.23%, 91.26%, and 93.7% for aerial crosswalk detection, aerial sidewalk detection, and aerial crosswalk mensuration, respectively.
    The final system that we developed packages all of our machine learning models into an easy-to-use system that enables users to process large batches of imagery or examine individual images in a directory using a graphical interface. Our data collection and filtering guidelines can also be used to guide future research in this area by establishing standards for data quality and labelling.
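    One simple way to fuse per-view predictions like this is to average the two views' confidence scores before thresholding (a hypothetical fusion rule for illustration; the paper's actual ensemble may combine the views differently):

```python
def fuse_views(p_aerial, p_street, threshold=0.5):
    """Average aerial and street-view crosswalk probabilities,
    then threshold the fused score into a presence decision."""
    fused = (p_aerial + p_street) / 2.0
    return fused, fused >= threshold

# Aerial view heavily occluded (low score), street view clear (high score):
score, present = fuse_views(0.2, 0.9)
print(score, present)  # the clear street view rescues the occluded detection
```

    Averaging lets a confident street-view score compensate for an occluded aerial view, which is consistent with the large recall gain the abstract reports on the heavily occluded dataset.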

    Multi-Input Convolutional Neural Network for Flower Grading

    No full text