    Using agriculture for development: Supply- and demand-side approaches

    For most of today's poor countries, using agriculture for development is widely recognized as a promising strategy. Yet, in these countries, investment in agriculture has mostly lagged relative to international norms and recommendations. Current wisdom on how to use agriculture for development is that it requires asset building for smallholder farmers, productivity growth in staple foods, an agricultural transformation (diversification of farming systems toward high-value crops), and a rural transformation (value addition through rural non-farm activities linked to agriculture). This sequence has too often been hampered by extensive market and government failures. We outline a theory of change in which the removal of market and government failures that impede this Agriculture for Development strategy can be addressed through two contrasting and complementary approaches. One is from the “supply side,” where public and social agents (governments, international and bilateral development agencies, NGOs, donors) intervene to help farmers overcome the major constraints to adoption: liquidity, risk, information, and access to markets. The other is from the “demand side,” where private agents (entrepreneurs, producer organizations) create incentives for smallholder farmers to modernize through contracting and vertical coordination in value chains. We review the extensive literature that has explored ways of using Agriculture for Development through these two approaches. We conclude by noting that the supply-side approach has benefited from extensive research but met with limited success. The demand-side approach has promise, but has received insufficient attention and is in need of additional rigorous research, which we outline.

    Performance Reproduction and Prediction of Selected Dynamic Loop Scheduling Experiments

    Scientific applications are complex, large, and often exhibit irregular and stochastic behavior. The use of efficient loop scheduling techniques in computationally-intensive applications is crucial for improving their performance on high-performance computing (HPC) platforms. A number of dynamic loop scheduling (DLS) techniques were proposed between the late 1980s and early 2000s and have been used efficiently in scientific applications. In most cases, the computing systems on which they were tested and validated are no longer available. This work is concerned with minimizing the sources of uncertainty in the implementation of DLS techniques to avoid unnecessary influences on the performance of scientific applications. It is therefore important to ensure that the DLS techniques employed in scientific applications today adhere to their original design goals and specifications. The goal of this work is to attain and increase trust in the implementation of DLS techniques in present studies. To achieve this goal, the performance of a selection of scheduling experiments from the original 1992 work that introduced factoring is reproduced and predicted via both simulative and native experimentation. The experiments show that the simulation reproduces the performance achieved on the past computing platform and accurately predicts the performance achieved on the present computing platform. The performance reproduction and prediction confirm that the present implementations of the DLS techniques considered, both in simulation and natively, adhere to their original description. These results contradict the expectation that reproducing experiments of identical scheduling scenarios on past and modern hardware leads to entirely different behavior.
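    The factoring technique referenced in this abstract schedules loop iterations in batches: in each batch, every processor receives one chunk sized from half of the remaining iterations, so chunks shrink geometrically and late-batch load imbalance stays small. A minimal sketch of the chunk-size computation follows (the function name is illustrative, and the fixed halving factor of 2 is the commonly used practical variant of factoring, not necessarily the exact parameterization of the 1992 experiments):

    ```python
    import math

    def factoring_chunks(n_iters, n_procs):
        """Chunk sizes produced by factoring self-scheduling.

        In each batch, the remaining iterations R are split so that each of
        the P processors gets one chunk of size ceil(R / (2 * P)); halving
        the per-batch share trades load balance against scheduling overhead.
        """
        chunks = []
        remaining = n_iters
        while remaining > 0:
            chunk = math.ceil(remaining / (2 * n_procs))
            for _ in range(n_procs):
                size = min(chunk, remaining)
                if size == 0:
                    break
                chunks.append(size)
                remaining -= size
        return chunks

    # 1000 iterations on 4 processors: the first batch yields four chunks
    # of 125, and chunk sizes decrease in later batches.
    print(factoring_chunks(1000, 4))
    ```

    Note the contrast with static chunking: early chunks are large (low scheduling overhead), while the final chunks are small enough to smooth out stochastic per-iteration costs.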

    The Fire and Smoke Model Evaluation Experiment—A Plan for Integrated, Large Fire–Atmosphere Field Campaigns

    The Fire and Smoke Model Evaluation Experiment (FASMEE) is designed to collect integrated observations from large wildland fires and provide evaluation datasets for new models and operational systems. Wildland fire, smoke dispersion, and atmospheric chemistry models have become more sophisticated, and next-generation operational models will require coordinated, comprehensive datasets for their evaluation and advancement. Integrated measurements are required, including ground-based observations of fuels and fire behavior, estimates of fire-emitted heat and emissions fluxes, and observations of near-source micrometeorology, plume properties, smoke dispersion, and atmospheric chemistry. To address these requirements, the FASMEE campaign design includes a study plan to guide the suite of required measurements in forested sites representative of many prescribed burning programs in the southeastern United States and of the increasingly common high-intensity fires in the western United States. Here we provide an overview of the proposed experiment and recommendations for key measurements. The FASMEE study provides a template for additional large-scale experimental campaigns to advance fire science and operational fire and smoke models.

    Applying autonomy to distributed satellite systems: Trends, challenges, and future prospects

    While monolithic satellite missions still offer significant advantages in terms of accuracy and operations, novel distributed architectures promise improved flexibility, responsiveness, and adaptability to structural and functional changes. Large satellite swarms, opportunistic satellite networks, or heterogeneous constellations hybridizing small-spacecraft nodes with high-performance satellites are becoming feasible and advantageous alternatives, requiring the adoption of new operation paradigms that enhance their autonomy. While autonomy is a notion that is gaining acceptance in monolithic satellite missions, it can also be deemed an integral characteristic of Distributed Satellite Systems (DSS). In this context, this paper focuses on the motivations for system-level autonomy in DSS and justifies its need as an enabler of system qualities. Autonomy is also presented as a necessary feature for new distributed Earth observation functions (which require coordination and collaboration mechanisms) and for novel structural functions (e.g., opportunistic coalitions, exchange of resources, or in-orbit data services). Mission Planning and Scheduling (MPS) frameworks are then presented as a key component for implementing autonomous operations in satellite missions. An exhaustive knowledge classification explores the design aspects of MPS for DSS and conceptually groups them into: components and organizational paradigms; problem modeling and representation; optimization techniques and metaheuristics; execution and runtime characteristics; and the notions of tasks, resources, and constraints. This paper concludes by proposing future strands of work devoted to studying the trade-offs of autonomy in large-scale, highly dynamic, and heterogeneous networks through frameworks that consider some of the limitations of small-spacecraft technologies.
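    The abstract's framing of MPS in terms of tasks, resources, and constraints can be illustrated with a deliberately simple greedy assignment sketch. Everything here (the function, the reward/energy task model, the satellite names) is a hypothetical toy, not from the paper; real MPS frameworks use the optimization techniques and metaheuristics the paper surveys.

    ```python
    def greedy_plan(tasks, satellites):
        """Greedily assign observation tasks to satellites.

        tasks: list of (name, reward, energy_cost) tuples.
        satellites: dict mapping satellite name -> remaining energy budget.
        Returns a plan as a list of (task_name, satellite_name) pairs.
        """
        plan = []
        # Consider the highest-reward tasks first.
        for name, reward, cost in sorted(tasks, key=lambda t: -t[1]):
            # Candidate: the satellite with the most remaining energy.
            best = max(satellites, key=satellites.get)
            if satellites[best] >= cost:        # resource constraint
                satellites[best] -= cost
                plan.append((name, best))
        return plan

    tasks = [("imgA", 5, 3), ("imgB", 8, 4), ("imgC", 2, 2)]
    sats = {"sat1": 5, "sat2": 4}
    print(greedy_plan(tasks, sats))  # [('imgB', 'sat1'), ('imgA', 'sat2')]
    ```

    In this toy run, the low-reward task imgC is dropped because neither satellite retains enough energy, which is exactly the kind of trade-off (coverage vs. resource budgets) that autonomous MPS must negotiate across a whole constellation.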

    Inefficiencies in Digital Advertising Markets

    Digital advertising markets are growing and attracting increased scrutiny. This article explores four market inefficiencies that remain poorly understood: ad effect measurement, frictions between and within advertising channel members, ad blocking, and ad fraud. Although these topics are not unique to digital advertising, each manifests in unique ways in markets for digital ads. The authors identify relevant findings in the academic literature, recent developments in practice, and promising topics for future research.

    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend line, it is important for the HEP community to develop an effective response to a series of expected challenges. To help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers: 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document and presented along with introductory material.