
    Hillman Oaks Development Plan


    On-Farm Corn Fungicide Trials

    Fungicide application to corn has become a popular input with many farmers in Iowa. The effect of fungicide on corn yield, however, can vary from year to year. Environmental conditions, such as rainfall and temperature, are likely the main drivers of these differences because they influence both disease development and crop growth. Because environmental conditions vary from one year to the next, it is difficult to predict how and when to use a fungicide. Compiling trial data over many years could help identify factors associated with fungicide response in corn.

    On-Farm Soybean Seed Treatment Trials

    Seed treatments protect germinating seeds and developing seedlings from fungi, insects, and nematodes. All legumes require the appropriate rhizobium bacteria in the soil in order for nitrogen fixation to occur. Inoculating the seed can ensure the crop is able to take advantage of this nitrogen fixation.

    DIP-RL: Demonstration-Inferred Preference Learning in Minecraft

    In machine learning for sequential decision-making, an algorithmic agent learns to interact with an environment while receiving feedback in the form of a reward signal. However, in many unstructured real-world settings, such a reward signal is unknown and humans cannot reliably craft a reward signal that correctly captures desired behavior. To solve tasks in such unstructured and open-ended environments, we present Demonstration-Inferred Preference Reinforcement Learning (DIP-RL), an algorithm that leverages human demonstrations in three distinct ways: training an autoencoder, seeding reinforcement learning (RL) training batches with demonstration data, and inferring preferences over behaviors to learn a reward function to guide RL. We evaluate DIP-RL in a tree-chopping task in Minecraft. Results suggest that the method can guide an RL agent to learn a reward function that reflects human preferences and that DIP-RL performs competitively relative to baselines. DIP-RL is inspired by our previous work on combining demonstrations and pairwise preferences in Minecraft, which was awarded a research prize at the 2022 NeurIPS MineRL BASALT competition, Learning from Human Feedback in Minecraft. Example trajectory rollouts of DIP-RL and baselines are located at https://sites.google.com/view/dip-rl.
    Comment: Paper accepted at The Many Facets of Preference Learning Workshop at the International Conference on Machine Learning (ICML), Honolulu, Hawaii, USA, 2023
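    The third ingredient the abstract names, inferring preferences over behaviors to learn a reward function, is commonly implemented with a Bradley-Terry model over trajectory segments. A minimal sketch, assuming a linear reward over hand-made step features (the abstract's actual reward network, features, and training details are not specified here; the tree-chopping feature names below are purely illustrative):

    ```python
    import math
    import random

    def fit_reward(preferences, dim, lr=0.1, epochs=300):
        # Bradley-Terry preference model: P(segment a preferred over b)
        # = sigmoid(R(a) - R(b)), where R sums a linear reward over the
        # segment's steps. Gradient descent on the negative log-likelihood.
        params = [0.0] * dim
        for _ in range(epochs):
            for seg_a, seg_b, pref in preferences:
                # Difference in summed features between the two segments.
                d = [sum(s[i] for s in seg_a) - sum(s[i] for s in seg_b)
                     for i in range(dim)]
                z = max(-30.0, min(30.0, sum(p * di for p, di in zip(params, d))))
                p_a = 1.0 / (1.0 + math.exp(-z))
                # pref is 1 if segment a was preferred, 0 if segment b was.
                for i in range(dim):
                    params[i] -= lr * (p_a - pref) * d[i]
        return params

    # Synthetic data: the "true" reward is feature 0 (say, trees chopped);
    # feature 1 is a distractor. Each segment is 5 steps of 2 features.
    random.seed(0)
    prefs = []
    for _ in range(50):
        a = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(5)]
        b = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(5)]
        pref = 1 if sum(s[0] for s in a) > sum(s[0] for s in b) else 0
        prefs.append((a, b, pref))

    params = fit_reward(prefs, dim=2)
    ```

    After fitting, the learned weight on the preference-relevant feature dominates the distractor, so the reward model ranks new segments the way the (synthetic) human would.
    
    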

    Priority-Based Playbook™ Tasking for Unmanned System Teams

    We are developing real-time planning and control systems that allow a single human operator to control a team of unmanned aerial vehicles (UAVs). If the operator requests more tasks than can be immediately addressed by the available UAVs, our planning system must choose which goals to try to achieve, and which to postpone for later effort. To make this decision-making easily understandable and controllable, we allow the user to assign strict priorities to goals, ensuring that if a goal is assigned the highest priority, the system will use every resource available to try to build a successful plan to achieve that goal. In this paper we show how unique features of the SHOP2 hierarchical task network planner permit an elegant implementation of this priority queue behavior. Although this paper is primarily about the technique itself, rather than SHOP2’s performance, we assess the scalability of this priority queue approach and discuss potential directions for improvement, as well as more general forms of meta-control within SHOP2 domains.
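    The strict-priority semantics the abstract describes, where the highest-priority goal claims resources first and goals that cannot be fully resourced are postponed rather than partially served, can be sketched as a simple greedy admission loop. This is only an illustration of the priority-queue behavior, not SHOP2's actual HTN planning machinery; the goal names and UAV counts are hypothetical:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Goal:
        name: str
        priority: int     # lower number = higher priority (strict ordering)
        uavs_needed: int

    def plan(goals, uavs_available):
        # Serve goals in strict priority order: a higher-priority goal
        # always gets resources before any lower-priority goal is
        # considered; under-resourced goals are postponed, not split.
        assigned, postponed = [], []
        for goal in sorted(goals, key=lambda g: g.priority):
            if goal.uavs_needed <= uavs_available:
                uavs_available -= goal.uavs_needed
                assigned.append(goal.name)
            else:
                postponed.append(goal.name)
        return assigned, postponed

    # Example: 4 UAVs; the top-priority "track" goal consumes 3 of them,
    # so "survey" (needs 2) is postponed even though "relay" (needs 1) fits.
    assigned, postponed = plan(
        [Goal("track", 1, 3), Goal("survey", 2, 2), Goal("relay", 3, 1)],
        uavs_available=4,
    )
    ```

    Note that the greedy loop can skip over a blocked goal to serve a cheaper lower-priority one; the higher-priority goal is never starved of resources it could actually use, which matches the "every resource available" guarantee described above.
    
    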