4 research outputs found

    Spatial-temporal Multi-Task Learning for Within-field Cotton Yield Prediction

    Understanding and accurately predicting within-field spatial variability of crop yield plays a key role in site-specific management of crop inputs such as irrigation water and fertilizer for optimized crop production. However, this task is challenged by the complex interaction between crop growth and environmental and managerial factors such as climate, soil conditions, tillage, and irrigation. In this paper, we present a novel Spatial-temporal Multi-Task Learning algorithm for within-field crop yield prediction in west Texas from 2001 to 2003. The algorithm integrates multiple heterogeneous data sources to learn different features simultaneously, and aggregates spatial-temporal features by introducing a weighted regularizer into the loss function. Our comprehensive experimental results consistently outperform those of conventional methods and suggest a promising approach that advances crop yield prediction research.
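    Illustrative sketch only (not the authors' implementation): one way to realize a multi-task formulation in which each task (e.g. a field zone and year) gets its own regression head on a shared feature extractor, with a weighted regularizer that penalizes parameter differences between spatially and temporally close tasks. All names, layer sizes, and the adjacency weighting are assumptions.

    import torch
    import torch.nn as nn

    class MultiTaskYieldModel(nn.Module):
        def __init__(self, n_features: int, n_tasks: int, hidden: int = 64):
            super().__init__()
            self.shared = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
            self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

        def forward(self, x, task_id: int):
            return self.heads[task_id](self.shared(x))

    def mtl_loss(model, batches, adjacency, lam=1e-3):
        """batches: list of (x, y) per task; adjacency[i][j]: spatial-temporal closeness of tasks i and j."""
        mse = nn.MSELoss()
        loss = sum(mse(model(x, t).squeeze(-1), y) for t, (x, y) in enumerate(batches))
        # Weighted regularizer: heads of spatially/temporally adjacent tasks should stay similar.
        for i, head_i in enumerate(model.heads):
            for j, head_j in enumerate(model.heads):
                if j > i and adjacency[i][j] > 0:
                    loss = loss + lam * adjacency[i][j] * (head_i.weight - head_j.weight).pow(2).sum()
        return loss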

    Estimating crop yields with remote sensing and deep learning

    Increasing the accuracy of crop yield estimates may allow improvements across the whole crop production chain, allowing farmers to better plan for harvest and insurers to better understand production risks, to name a few advantages. To perform their predictions, most current machine learning models use NDVI data, which can be hard to use due to the presence of clouds and their shadows in acquired images and the absence of reliable crop masks for large areas, especially in developing countries. In this paper, we present a deep learning model able to perform pre-season and in-season predictions for five different crops. Our model uses crop calendars, easy-to-obtain remote sensing data, and weather forecast information to provide accurate yield estimates. Comment: 6 pages, 2 figures. Accepted for publication at the 2020 Latin American GRSS & ISPRS Remote Sensing Conference.
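    As a hedged illustration of how such heterogeneous inputs can be combined (this is not the paper's architecture; feature counts and layer sizes are assumptions), a time-series encoder over remote sensing and weather sequences can be fused with static crop-calendar features before a regression head:

    import torch
    import torch.nn as nn

    class YieldEstimator(nn.Module):
        def __init__(self, n_ts_features: int, n_static_features: int, hidden: int = 32):
            super().__init__()
            self.encoder = nn.GRU(n_ts_features, hidden, batch_first=True)
            self.head = nn.Sequential(
                nn.Linear(hidden + n_static_features, hidden), nn.ReLU(), nn.Linear(hidden, 1)
            )

        def forward(self, ts, static):
            # ts: (batch, time, n_ts_features) remote sensing + weather sequence
            # static: (batch, n_static_features) crop-calendar descriptors
            _, h = self.encoder(ts)
            return self.head(torch.cat([h[-1], static], dim=-1)).squeeze(-1)

    model = YieldEstimator(n_ts_features=8, n_static_features=4)
    pred = model(torch.randn(16, 30, 8), torch.randn(16, 4))  # one yield estimate per field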

    Self-boosted Time-series Forecasting with Multi-task and Multi-view Learning

    A robust model for time series forecasting is highly important in many domains, including but not limited to financial forecasting, air temperature, and electricity consumption. To improve forecasting performance, traditional approaches usually require additional feature sets. However, adding more feature sets from different data sources is not always feasible due to accessibility limitations. In this paper, we propose a novel self-boosted mechanism in which the original time series is decomposed into multiple time series. These time series play the role of additional features: the closely related group is fed into a multi-task learning model, while the loosely related group is fed into a multi-view learning component to exploit its complementary information. We use three real-world datasets to validate our model and show the superiority of our proposed method over existing state-of-the-art baseline methods.
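    A minimal sketch of the decomposition-and-grouping idea (not the authors' code; the moving-average decomposition and correlation threshold are assumptions): the original series is split into component series, which are then grouped by how closely they track the original.

    import numpy as np

    def self_boost_split(y: np.ndarray, windows=(3, 7, 30), corr_threshold=0.7):
        # Decompose the original series into smoothed components plus a residual.
        components = [np.convolve(y, np.ones(w) / w, mode="same") for w in windows]
        components.append(y - components[0])
        close, loose = [], []
        for c in components:
            corr = np.corrcoef(y, c)[0, 1]
            (close if corr >= corr_threshold else loose).append(c)
        # "close" would feed the multi-task part, "loose" the multi-view part.
        return close, loose

    y = np.sin(np.linspace(0, 20, 300)) + 0.3 * np.random.randn(300)
    close_group, loose_group = self_boost_split(y)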

    Contrastive Cross-site Learning with Redesigned Net for COVID-19 CT Classification

    The pandemic of coronavirus disease 2019 (COVID-19) has led to a global public health crisis spreading across hundreds of countries. With the continuous growth of new infections, developing automated tools for COVID-19 identification from CT images is highly desired to assist clinical diagnosis and reduce the tedious workload of image interpretation. To enlarge the datasets available for developing machine learning methods, it is helpful to aggregate cases from different medical systems in order to learn robust and generalizable models. This paper proposes a novel joint learning framework that performs accurate COVID-19 identification by effectively learning from heterogeneous datasets with distribution discrepancy. We build a powerful backbone by redesigning the recently proposed COVID-Net in terms of network architecture and learning strategy to improve prediction accuracy and learning efficiency. On top of our improved backbone, we further explicitly tackle cross-site domain shift by conducting separate feature normalization in the latent space. Moreover, we propose a contrastive training objective to enhance the domain invariance of semantic embeddings and boost classification performance on each dataset. We develop and evaluate our method with two public large-scale COVID-19 diagnosis datasets made up of CT images. Extensive experiments show that our approach consistently improves performance on both datasets, outperforming the original COVID-Net trained on each dataset by 12.16% and 14.23% in AUC, respectively, and also exceeding existing state-of-the-art multi-site learning methods. Comment: Published as a journal paper at IEEE J-BHI; code and dataset are available at https://github.com/med-air/Contrastive-COVIDNet
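    A hedged sketch of the two ingredients named above, separate per-site feature normalization and a contrastive objective on class embeddings (this is not the released code; the linear backbone stand-in, embedding size, and temperature are assumptions):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossSiteClassifier(nn.Module):
        def __init__(self, n_features: int, n_sites: int, emb_dim: int = 128):
            super().__init__()
            self.backbone = nn.Linear(n_features, emb_dim)   # stand-in for an image backbone
            self.site_norms = nn.ModuleList([nn.BatchNorm1d(emb_dim) for _ in range(n_sites)])
            self.classifier = nn.Linear(emb_dim, 2)

        def forward(self, x, site_id: int):
            z = self.site_norms[site_id](self.backbone(x))   # separate normalization per site
            return self.classifier(z), F.normalize(z, dim=-1)

    def contrastive_loss(emb, labels, temperature=0.1):
        # Pull same-class embeddings together across sites, push different classes apart.
        sim = emb @ emb.t() / temperature
        eye = torch.eye(len(labels))
        pos = (labels[:, None] == labels[None, :]).float() - eye
        log_prob = sim - torch.logsumexp(sim - 1e9 * eye, dim=1, keepdim=True)
        return -(pos * log_prob).sum() / pos.sum().clamp(min=1)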