331 research outputs found

    Maize in Nepal: Production Systems, Constraints, and Priorities for Research

    Get PDF
    Crop Production/Industries, Research and Development/Tech Change/Emerging Technologies

    Parallelizing SuperFine

    Get PDF
    The estimation of the Tree of Life, a rooted binary tree representing how all extant species evolved from a common ancestor, is one of the grand challenges of modern biology. Research groups around the world are attempting to estimate evolutionary trees on particular sets of species (typically clades, or rooted subtrees), in the hope that a final "supertree" can be produced from these smaller estimated trees through the addition of a "scaffold" tree of randomly sampled taxa from the tree of life. However, supertree estimation is itself a computationally challenging problem, because the most accurate trees are produced by running heuristics for NP-hard problems. In this paper we report on a study in which we parallelize SuperFine, currently the most accurate and efficient supertree estimation method. We explore the performance of these parallel implementations on simulated datasets with 1,000 taxa and biological datasets with up to 2,228 taxa. Our study reveals aspects of SuperFine that limit the speed-ups that are possible through the type of outer-loop parallelism we exploit.
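    The outer-loop parallelism mentioned above lends itself to a simple worker-pool pattern. The sketch below is illustrative only and does not reproduce SuperFine's internals: `resolve_polytomy` is a hypothetical stand-in for the per-subproblem heuristic (in SuperFine, the refinement of each polytomy of the merged supertree), and its placeholder body merely keeps the example runnable.

    ```python
    # Illustrative sketch, not SuperFine's actual code: "outer-loop" parallelism
    # farms independent subproblems out to worker processes.
    from multiprocessing import Pool

    def resolve_polytomy(subproblem):
        # In SuperFine this step runs an expensive heuristic for an NP-hard
        # problem; the placeholder below keeps the sketch self-contained.
        return sorted(subproblem)

    def resolve_all(subproblems, workers=4):
        # Each subproblem is independent, so a pool can process them in parallel;
        # the speed-up is bounded by the largest subproblem, one way an outer
        # loop like this can hit the limits the study reports.
        with Pool(processes=workers) as pool:
            return pool.map(resolve_polytomy, subproblems)

    if __name__ == "__main__":
        print(resolve_all([["c", "a", "b"], ["e", "d"], ["h", "f", "g"]]))
    ```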

    Processor Allocation for Optimistic Parallelization of Irregular Programs

    Full text link
    Optimistic parallelization is a promising approach for the parallelization of irregular algorithms: potentially interfering tasks are launched dynamically, and the runtime system detects conflicts between concurrent activities, aborting and rolling back conflicting tasks. However, parallelism in irregular algorithms is very complex. In a regular algorithm like dense matrix multiplication, the amount of parallelism can usually be expressed as a function of the problem size, so it is reasonably straightforward to determine how many processors should be allocated to execute a regular algorithm of a certain size (this is called the processor allocation problem). In contrast, parallelism in irregular algorithms can be a function of input parameters, and the amount of parallelism can vary dramatically during the execution of the irregular algorithm. Therefore, the processor allocation problem for irregular algorithms is very difficult. In this paper, we describe the first systematic strategy for addressing this problem. Our approach is based on a construct called the conflict graph, which (i) provides insight into the amount of parallelism that can be extracted from an irregular algorithm, and (ii) can be used to address the processor allocation problem for irregular algorithms. We show that this problem is related to a generalization of the unfriendly seating problem and, by extending Turán's theorem, we obtain a worst-case class of problems for optimistic parallelization, which we use to derive a lower bound on the exploitable parallelism. Finally, using some theoretically derived properties and some experimental facts, we design a quick and stable control strategy for solving the processor allocation problem heuristically. Comment: 12 pages, 3 figures; extended version of a SPAA 2011 brief announcement.
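    To make the conflict-graph idea concrete, the sketch below (an illustration, not the paper's implementation or its generalized bound) treats tasks as vertices and interfering pairs as edges. The classical form of Turán's theorem guarantees an independent set, i.e. a set of tasks that can commit together without conflicts, of size at least n / (d_avg + 1); a greedy maximal independent set gives one concrete conflict-free batch.

    ```python
    # Illustrative sketch: tasks are vertices of a conflict graph, and an edge
    # joins two tasks that cannot run concurrently.
    from collections import defaultdict

    def turan_lower_bound(num_tasks, conflicts):
        # Turan-style bound: a graph with n vertices and average degree d_avg
        # contains an independent set of size at least n / (d_avg + 1).
        edges = {tuple(sorted(c)) for c in conflicts}
        avg_degree = 2 * len(edges) / num_tasks if num_tasks else 0
        return num_tasks / (avg_degree + 1)

    def greedy_independent_set(num_tasks, conflicts):
        # A maximal independent set found greedily (lowest degree first); its
        # size is one simple estimate of how many processors one round can use.
        adj = defaultdict(set)
        for u, v in conflicts:
            adj[u].add(v)
            adj[v].add(u)
        chosen, blocked = [], set()
        for t in sorted(range(num_tasks), key=lambda t: len(adj[t])):
            if t not in blocked:
                chosen.append(t)
                blocked |= adj[t]
        return chosen

    conflicts = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(turan_lower_bound(4, conflicts))       # 4 / (2 + 1) = 1.33...
    print(greedy_independent_set(4, conflicts))  # e.g. [0, 2]
    ```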

    Environmental Impacts of Productivity-Enhancing Crop Research: A Critical Review

    Get PDF
    A study by Drs. Mywish Maredia and Prabhu Pingali reviewing evidence of the possible negative impacts of productivity-enhancing technologies on the environment. Identifying "negative land savings" as a suitable measure of negative impact, the authors find salinity problems associated with irrigation to be the most complete available index of land savings lost and, together with less precise measures of the impacts of intensification and monocultures, estimate global land savings lost to be on the order of 90-100 million hectares. This is several hundred million hectares less than the positive land savings attributable to CGIAR research on eight mandated crops (see "Environmental Impacts of the CGIAR: An Assessment.") A treatment of efforts by the CGIAR and NARS to mitigate negative impacts on the environment follows, focusing on the development of pest-resistant varieties and integrated pest management practices which reduce the need for pesticides. While this was clearly identified as an area of significant advances, farmers' adoption of these varieties and practices was not matched by a concomitant reduction in pesticide use, which represented a major failure in disseminating the implications of the new technologies for pesticide requirements. The study ends by pointing to the complexities of relating environmental impacts to agricultural research, given the many factors other than research that contribute to these impacts. Adding to this difficulty of attribution, the authors describe a common tendency in the literature to conflate the green revolution with the larger phenomenon of agricultural intensification.

    Lessons from a pandemic to repurpose India's agricultural policy

    Get PDF
    To transform food systems in India following the COVID-19 pandemic, the government will urgently need to repurpose existing agricultural policies. India’s policy regimes, such as the Minimum Support Price (MSP) and the Public Distribution System (PDS), coupled with subsidies on irrigation, power, and farm inputs, are skewed in favour of staple crops like rice and wheat. Even though some climate-resilient and nutritious cereals like sorghum and millets receive some price support, this remains ineffective because the policy is biased in favour of the “big two” staples.

    A Randomized, Double-Blind, Placebo-Controlled, Parallel-Group Clinical Study to Evaluate the Analgesic Effect of Aqueous Extract of Terminalia chebula, a Proprietary Chromium Complex, and Their Combination in Subjects with Joint Discomfort

    Get PDF
    Objective: To evaluate the analgesic effect of an aqueous extract of Terminalia chebula (TCE), a proprietary chromium complex (PCC), and their combination in subjects with joint discomfort. Methods: A total of 100 patients with knee joint discomfort were randomized into five treatment groups: TCE 500 mg BID, TCE 500 mg BID + PCC 400 µg OD, PCC 400 µg OD alone, placebo, and TCE 250 mg BID, for 12 weeks in a double-blinded manner. Symptoms of knee joint pain and discomfort were assessed with the modified Western Ontario and McMaster Universities Arthritis Index (mWOMAC) and the knee swelling index (KSI); a visual analog scale (VAS) was used for subjective assessment of pain, stiffness, and disability. Statistical analysis was done with GraphPad Prism 6. Results: Absolute reductions in mWOMAC score at the end of 12 weeks, compared with baseline, were 19.82±8.35 for TCE 500 mg, 13.10±5.69 for TCE 500 mg + PCC 400 µg, 8.30±3.81 for PCC 400 µg, 2.45±3.07 for placebo, and 10.47±4.43 for TCE 250 mg. Absolute reductions in KSI were 28.95±16.82, 19.14±9.50, 12.7±4.86, 10.03±3.8, and 18.24±6.86, respectively (p<0.001). Similar results were seen with VAS assessments for pain, stiffness, and disability. All the treatments were well tolerated. Conclusion: TCE and PCC reduce joint discomfort. Keywords: Terminalia chebula extract, Proprietary chromium complex, Western Ontario and McMaster Universities Arthritis Index

    Telescopic hybrid fast solver for 3D elliptic problems with point singularities

    Get PDF
    This paper describes a telescopic solver for two-dimensional h-adaptive grids with point singularities. The input for the telescopic solver is an h-refined two-dimensional computational mesh with rectangular finite elements. The candidates for point singularities are first localized over the mesh by a greedy algorithm. For each candidate we execute a direct solver that performs multiple refinements towards the selected point singularity and runs a parallel direct solver algorithm whose cost is logarithmic with respect to the refinement level. The direct solvers executed over the candidates for point singularities return local Schur complement matrices that can be merged together and submitted to an iterative solver. In this paper we utilize the parallel multi-thread GALOIS solver as the direct solver and the Incomplete LU Preconditioned Conjugate Gradient (ILUPCG) method as the iterative solver. We also show that eliminating the point singularities from the refined mesh significantly reduces the number of iterations performed by the ILUPCG iterative solver.
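    As a rough illustration of the hybrid flow described above (direct elimination of the unknowns attached to singularity patches, then an iterative solve on the remaining interface system), the sketch below uses SciPy rather than the paper's GALOIS-based solver; the toy matrix and the interior/interface split are invented for the example.

    ```python
    # Minimal sketch of a Schur-complement + ILU-preconditioned CG flow,
    # not the paper's implementation.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Toy SPD system standing in for an h-adapted FEM discretization.
    n = 200
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    f = np.ones(n)

    interior = np.arange(0, n // 2)    # unknowns eliminated by the direct solver
    interface = np.arange(n // 2, n)   # unknowns kept for the iterative solver

    A_ii = A[interior, :][:, interior].tocsc()
    A_ib = A[interior, :][:, interface].tocsc()
    A_bi = A[interface, :][:, interior].tocsc()
    A_bb = A[interface, :][:, interface].tocsc()

    lu = spla.splu(A_ii)  # direct factorization of the eliminated block
    S = A_bb - A_bi @ sp.csc_matrix(lu.solve(A_ib.toarray()))  # Schur complement
    g = f[interface] - A_bi @ lu.solve(f[interior])

    ilu = spla.spilu(S.tocsc())                    # incomplete LU preconditioner
    M = spla.LinearOperator(S.shape, ilu.solve)
    x_b, info = spla.cg(S, g, M=M)                 # ILU-preconditioned CG
    x_i = lu.solve(f[interior] - A_ib @ x_b)       # back-substitute interior unknowns

    x = np.empty(n)
    x[interior], x[interface] = x_i, x_b
    print(info, np.linalg.norm(A @ x - f))         # info == 0 means CG converged
    ```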

    Fast parallel IGA-ADS solver for time-dependent Maxwell's equations

    Get PDF
    We propose a simulator for time-dependent Maxwell's equations with linear computational cost. We employ B-spline basis functions as considered in isogeometric analysis (IGA). We focus on non-stationary Maxwell's equations defined on a regular patch of elements. We employ the idea of alternating-directions splitting (ADS) and a second-order accurate time-integration scheme for the time-dependent Maxwell's equations in weak form. After discretization, the resulting stiffness matrix exhibits a Kronecker product structure, which enables LU factorization at linear computational cost. Additionally, we derive a formulation for absorbing boundary conditions (ABCs) suitable for direction splitting. We perform numerical simulations of a scattering problem (a traveling pulse wave) to verify the ABCs. We simulate the radiation of electromagnetic (EM) waves from a dipole antenna. We verify the order of the time-integration scheme using a manufactured solution problem. We then simulate magnetotelluric measurements. Our simulator is implemented on a shared-memory parallel machine, with the GALOIS library supporting the parallelization. We illustrate the parallel efficiency with strong and weak scalability tests corresponding to non-stationary Maxwell simulations. Funding: EXPERTIA (KK-2021/00048), SIGZE (KK-2021/00095), PDC2021-121093-I0
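    The linear-cost claim rests on the Kronecker product structure of the stiffness matrix. The toy NumPy sketch below (not the IGA-ADS code; the small dense factors are invented for illustration) shows the underlying identity: solving (A ⊗ B) x = b reduces to one solve with A and one with B per direction, so the full Kronecker matrix never needs to be formed or factorized.

    ```python
    # Toy illustration of why a Kronecker-structured system is cheap to solve.
    # With x and b reshaped row-major as n-by-m matrices X and F,
    # kron(A, B) x = b is equivalent to A X B^T = F.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 6, 5
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # 1D factor, x-direction
    B = np.eye(m) + 0.1 * rng.standard_normal((m, m))   # 1D factor, y-direction
    b = rng.standard_normal(n * m)

    # Direction-split solve: Y = A^{-1} F, then X = Y B^{-T}.
    F = b.reshape(n, m)
    Y = np.linalg.solve(A, F)
    X = np.linalg.solve(B, Y.T).T
    x_split = X.reshape(n * m)

    # Reference solve with the full Kronecker matrix.
    x_full = np.linalg.solve(np.kron(A, B), b)
    print(np.allclose(x_split, x_full))   # True: both solves agree
    ```

    In the IGA-ADS setting the 1D factors are banded B-spline mass/stiffness matrices, so each directional solve is linear in the number of unknowns.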

    Exploring aflatoxin contamination and household-level exposure risk in diverse Indian food systems

    Get PDF
    The present study sought to identify household risk factors associated with aflatoxin contamination within and across diverse Indian food systems and to evaluate their utility in risk modeling. Samples (n = 595) of cereals, pulses, and oil seeds were collected from 160 households across four diverse districts of India and analyzed for aflatoxin B1 using enzyme-linked immunosorbent assay (ELISA). Demographic information, food and cropping systems, food management behaviors, and storage environments were profiled for each household. An aflatoxin detection risk index was developed based on household-level features and validated using a repeated 5-fold cross-validation approach. Across districts, between 30% and 80% of households yielded at least one contaminated sample. Aflatoxin B1 detection rates and mean contamination levels were highest in groundnut and maize, respectively, and lower in other crops. Landholding had a positive univariate effect on household aflatoxin detection, while storage conditions, product source, and the number of protective behaviors used by households did not show significant effects. Presence of groundnut, post-harvest grain washing, use of sack-based storage systems, and cultivation status (farming or non-farming) were identified as the most contributive variables in stepwise logistic regression and were used to generate a household-level risk index. The index had moderate classification accuracy (68% sensitivity and 62% specificity) and correlated significantly with village-wise aflatoxin detection rates. Spatial analysis revealed the utility of the index for identifying at-risk localities and households. This study identified several key features associated with aflatoxin contamination in Indian food systems and demonstrated that household characteristics are substantially predictive of aflatoxin risk.
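    The validation pattern described above (a logistic-regression risk score assessed with repeated 5-fold cross-validation and summarized as sensitivity and specificity) can be reproduced generically with scikit-learn. The sketch below runs on synthetic data; the feature names, coefficients, and prevalence are invented and do not reproduce the study's model or results.

    ```python
    # Generic sketch of repeated 5-fold validation of a logistic risk index,
    # on synthetic household-level data (not the study's data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate
    from sklearn.metrics import make_scorer, recall_score

    rng = np.random.default_rng(1)
    n = 160  # pretend one row per household
    # Hypothetical binary features: groundnut present, grain washed,
    # sack storage, farming household.
    X = rng.integers(0, 2, size=(n, 4)).astype(float)
    logit = -1.0 + 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.7 * X[:, 2] + 0.5 * X[:, 3]
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = aflatoxin detected

    scoring = {
        "sensitivity": make_scorer(recall_score, pos_label=1),
        "specificity": make_scorer(recall_score, pos_label=0),
    }
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
    scores = cross_validate(LogisticRegression(), X, y, cv=cv, scoring=scoring)
    print("sensitivity: %.2f" % scores["test_sensitivity"].mean())
    print("specificity: %.2f" % scores["test_specificity"].mean())
    ```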

    Quasi-optimal elimination trees for 2D grids with singularities

    Get PDF
    We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(log(Ne log(Ne))), where Ne is the number of elements in the mesh. We show that this heuristic ordering has a computational cost similar to that of the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.
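    As a heavily simplified 1D analogue of the minimization described above (not the paper's algorithm or its cost model), the sketch below uses dynamic programming with memoization to choose the bisection points of a binary elimination tree over a chain of elements, minimizing a stand-in per-node cost.

    ```python
    # Toy dynamic program over elimination-tree bisections of a 1D chain.
    # The per-node cost w(m) = m**3 is a stand-in for factorizing a dense
    # frontal matrix whose size grows with the subtree it eliminates.
    from functools import lru_cache

    def optimal_tree(num_elements, w=lambda m: m ** 3):
        @lru_cache(maxsize=None)
        def best(i, j):
            # Minimal cost of an elimination (sub)tree over elements i..j-1,
            # returned together with the chosen bisection point.
            if j - i == 1:
                return 0, None
            candidates = ((best(i, k)[0] + best(k, j)[0] + w(j - i), k)
                          for k in range(i + 1, j))
            return min(candidates)

        return best(0, num_elements)

    print(optimal_tree(16))   # balanced bisection wins under this cost model
    ```

    The paper's dynamic program additionally accounts for the refined elements around singularities, which is what makes its minimizers differ from plain balanced bisection.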