18 research outputs found

    Scaling out Climate Smart Agriculture: Strategies and Guidelines for Smallholder Farming in Western Kenya

    BACKGROUND The GIZ Advisory Service for Agricultural Research and Development (BEAF), in cooperation with GIZ Western Kenya and the International Center for Tropical Agriculture (CIAT) in Nairobi, commissioned the Centre for Rural Development (SLE) to carry out this study. Kenya is a focus country of the German Federal Ministry for Economic Cooperation and Development (BMZ) SEWOH initiative (One World, No Hunger), with GIZ as one of the implementing partners. Two SEWOH components are implemented in Western Kenya: soil protection and rehabilitation for food security, and green innovation centres for the agricultural and food sector. Both projects show strong links to the concept of Climate Smart Agriculture (CSA). As part of the Consultative Group on International Agricultural Research (CGIAR), CIAT focuses on applied research on CSA. The study contributes to the development of strategies and guidelines to promote the adoption of CSA techniques by smallholders in Western Kenya, i.e., in the counties of Siaya and Kakamega.
    BASIC SOCIO-ECONOMIC DATA ON KENYA With a Human Development Index (HDI) of 0.548, Kenya ranks 145th in the world (UNDP, 2015). Approximately 65% of Kenya's population is employed in the agricultural sector, which underlines the significance of agriculture for key development issues: food security, poverty reduction, and sustainable livelihoods. Kenya is a youthful country, where roughly half the population is 18 years of age or younger. Young people are concentrated in rural areas, while their share of the urban population is significantly lower. Data from 2009 show that nearly half of the population (45.2%) lives below the poverty line defined by the World Bank. Of the 38 million people in Kenya, 4.7 million are primarily engaged in small-scale agriculture and pastoral activities. The Kenyan population is unevenly distributed, with densities substantially higher in the central region around Nairobi and in Western Kenya (Wiesmann et al., 2014).

    Climate Smart Agriculture (CSA): Water Harvesting

    Climate Smart Agriculture (CSA): Conservation Agriculture (CA)

    Climate Smart Agriculture (CSA): Farmyard Compost

    Climate Smart Agriculture (CSA): Climate Smart Agroforestry

    Climate Smart Agriculture (CSA): Improved Fodder Management

    Reinforcement Learning Based Power Grid Day-Ahead Planning and AI-Assisted Control

    The ongoing transition to renewable energy is increasing the share of fluctuating power sources like wind and solar, raising power grid volatility and making grid operation increasingly complex and costly. In our prior work, we introduced a congestion management approach consisting of a redispatching optimizer combined with a machine learning-based topology optimization agent. Compared to a typical redispatching-only agent, it was able to keep a simulated grid in operation longer while at the same time reducing operational cost. Our approach also ranked 1st in the L2RPN 2022 competition initiated by RTE, Europe's largest grid operator. The aim of this paper is to bring this promising technology closer to the real world of power grid operation. We deploy RL-based agents in two settings resembling established workflows, AI-assisted day-ahead planning and real-time control, in an attempt to show the benefits and caveats of this new technology. We then analyse congestion, redispatching, and switching profiles, and perform an elementary sensitivity analysis that provides a glimpse of operational robustness. While there is still a long way to go before reaching a real control room, we believe that this paper and the associated prototypes help to narrow the gap and pave the way for a safe deployment of RL agents in tomorrow's power grids.
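    To make the control setting concrete, the following is a minimal sketch of an AI-assisted topology loop, assuming the Grid2Op environment API used in the L2RPN competitions; the environment name, loading threshold, and candidate-action list are illustrative placeholders rather than the authors' configuration.

```python
# Hedged sketch: greedy topology-assist loop on a Grid2Op environment.
# The candidate list below contains only the do-nothing action; a real agent
# would propose bus-switching (topology) actions, e.g. from an RL policy.
import grid2op

env = grid2op.make("l2rpn_case14_sandbox")  # small sandbox grid, assumed available
obs = env.reset()
done = False
RHO_THRESHOLD = 0.95  # act only when some line nears its thermal limit

while not done:
    action = env.action_space({})  # default: do nothing
    if obs.rho.max() > RHO_THRESHOLD:
        candidates = [env.action_space({})]  # placeholder candidate actions
        best_rho = obs.rho.max()
        for cand in candidates:
            # one-step look-ahead simulation to screen candidates
            sim_obs, _, sim_done, _ = obs.simulate(cand)
            if not sim_done and sim_obs.rho.max() < best_rho:
                best_rho, action = sim_obs.rho.max(), cand
    obs, reward, done, info = env.step(action)
```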

    Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution

    Reinforcement Learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards. Complex tasks can often be hierarchically decomposed into sub-tasks. A step in the Q-function can be associated with solving a sub-task, where the expectation of the return increases. RUDDER has been introduced to identify these steps and then redistribute reward to them, thus immediately giving reward if sub-tasks are solved. Since the problem of delayed rewards is mitigated, learning is considerably sped up. However, for complex tasks, the exploration strategies currently deployed in RUDDER struggle to discover episodes with high rewards. We therefore assume that episodes with high rewards are given as demonstrations and do not have to be discovered by exploration. Since the number of demonstrations is typically small, RUDDER's LSTM model, being a deep learning method, does not learn well. Hence, we introduce Align-RUDDER, which is RUDDER with two major modifications. First, Align-RUDDER assumes that episodes with high rewards are given as demonstrations, replacing RUDDER's safe exploration and lessons replay buffer. Second, we replace RUDDER's LSTM model with a profile model obtained from multiple sequence alignment of the demonstrations. As known from bioinformatics, profile models can be constructed from as few as two demonstrations. Align-RUDDER inherits the concept of reward redistribution, which considerably reduces the delay of rewards and thus speeds up learning. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the Minecraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently. GitHub: https://github.com/ml-jku/align-rudder, YouTube: https://youtu.be/HO-_8ZUl-U
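    As a rough illustration of the reward-redistribution idea, the sketch below spreads a delayed episode return over time steps in proportion to increases in a per-step profile score; the score values are placeholders (the paper builds the profile via multiple sequence alignment of demonstrations), so this is a minimal sketch of the concept, not the authors' implementation.

```python
# Hedged sketch: redistribute a delayed episode return according to increases
# in an assumed per-step profile score (how well the partial episode matches
# a demonstration profile). The scores used in the example are toy values.
import numpy as np

def redistribute_reward(profile_scores, episode_return):
    """Give each step a share of the return proportional to its score gain,
    so steps that complete a sub-task receive immediate credit."""
    scores = np.asarray(profile_scores, dtype=float)
    gains = np.clip(np.diff(scores, prepend=scores[0]), 0.0, None)  # only reward progress
    if gains.sum() == 0.0:
        # no recognisable progress: keep all reward at the final step
        out = np.zeros_like(scores)
        out[-1] = episode_return
        return out
    return episode_return * gains / gains.sum()

# toy usage: the score jumps when sub-tasks (e.g. "collect wood", "craft pickaxe") are solved
print(redistribute_reward([0.0, 0.1, 0.1, 0.6, 0.6, 1.0], episode_return=10.0))
```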