
    Exploiting the genetic diversity of maize using a combined metabolomic, enzyme activity profiling, and metabolic modelling approach to link leaf physiology to kernel yield

    A combined metabolomic, biochemical, fluxomic, and metabolic modeling approach was developed using 19 genetically distant maize (Zea mays) lines from Europe and America. Considerable differences were detected between the lines when leaf metabolic profiles and activities of the main enzymes involved in primary metabolism were compared. During grain filling, the leaf metabolic composition appeared to be a reliable marker, allowing a classification matching the genetic diversity of the lines. During the same period, there was a significant correlation between the genetic distance of the lines and the activities of enzymes involved in carbon metabolism, notably glycolysis. Although large differences were observed in terms of leaf metabolic fluxes, these variations were not tightly linked to the genome structure of the lines. Both correlation studies and metabolic network analyses allowed the description of a maize ideotype with a high grain yield potential. Such an ideotype is characterized by low accumulation of soluble amino acids and carbohydrates in the leaves and high activity of enzymes involved in the C4 photosynthetic pathway and in the biosynthesis of amino acids derived from glutamate. Chlorogenates appear to be important markers that can be used to select for maize lines that produce larger kernels.

    Should the Law Governing Maritime Areas in the Arctic Adapt to Changing Climatic Circumstances?

    The legal regime of the Arctic maritime areas has for a long time remained on the backburner of international norm-creating activities. This can primarily be explained by the inhospitable climate, which created natural barriers for human activities and imposed limits on the usefulness of the available technology. At present, however, climate change seems to be responsible for a marked heating up, not only of the mere physical environment of the Arctic, but also of the political tensions concerning the exact legal regime to be applied in the region. This chapter provides a close examination of the present-day legal status of the Arctic. Since almost all territorial claims have been settled in the area, the present contribution will only focus on the Arctic water areas, with a special emphasis on navigation. Starting from what seems to have finally become a generally accepted legal cornerstone for appreciating the legal status of these waters, namely the 1982 United Nations Convention on the Law of the Sea, the chapter will first examine how climate change impacts on this legal regime. Subsequently, the vulnerability as well as the adaptive capacity of the 1982 Convention will be examined. Before drawing some conclusions, ways and means will be looked at for this existing legal system to better meet the new challenges that climate change poses for Arctic waters.

    Tiling and memory reuse for sequences of nested loops

    Our aim is to minimize the electrical energy used during the execution of signal processing applications that are a sequence of loop nests. This energy is mostly used to transfer data among the various levels of the memory hierarchy. To minimize these transfers, we transform these programs by simultaneously using loop permutation, tiling, loop fusion with shifting, and memory reuse. Each input nest uses a stencil of data produced in the previous nest, and the references to the same array are equal, up to a shift. All transformations described in this paper have been implemented in PIPS, our optimizing compiler, and reductions in cache misses have been measured.
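
    The core idea of fusing stencil nests with a shift can be sketched on a 1-D example. This is an illustrative sketch, not code from the paper; the names (`N`, `a`, `b`, `c`) and the stencil are assumptions chosen to show how fusion plus a shift replaces a full intermediate array with a small reused window:

```python
# Sketch: fusing two 1-D stencil loop nests with a shift.
N = 16
a = list(range(N + 2))

# Original program: two separate nests, with b fully materialized between them.
b = [a[i] + a[i + 1] for i in range(N + 1)]
c = [b[i] + b[i + 1] for i in range(N)]

# Fused version with a shift: c[i] needs b[i] and b[i+1], so b is computed
# one iteration ahead and only a 2-element window is kept (memory reuse).
window = [a[0] + a[1], a[1] + a[2]]  # holds b[0], b[1]
c_fused = []
for i in range(N):
    c_fused.append(window[0] + window[1])
    if i + 3 <= N + 1:  # b[i+2] exists only up to b[N]
        window[0] = window[1]             # shift the window
        window[1] = a[i + 2] + a[i + 3]   # produce b[i+2] for the next iteration
```

The memory traffic for `b` drops from `N + 1` array elements to a two-element window that stays in registers or cache.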

    Loop-carried code placement

    Abstract. Traditional code optimization techniques treat loops as non-predictable structures and do not consider expressions containing array accesses for optimization. We show that the polyhedron model can be used to implement code placement techniques that exploit equalities of expressions that hold between loop iterations. We also present preliminary results for a simple example.
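
    The kind of inter-iteration equality the abstract refers to can be sketched as follows. The computation and the helper `expensive()` are illustrative assumptions, not from the paper; the point is that a value computed in one iteration is carried forward instead of being recomputed:

```python
# Sketch: reusing an expression across loop iterations (loop-carried placement).
def expensive(x):
    return x * x + 1  # stand-in for a costly subexpression

a = [3, 1, 4, 1, 5, 9, 2, 6]
n = len(a)

# Naive: each iteration evaluates expensive() twice; expensive(a[i+1]) at
# iteration i equals expensive(a[i]) as needed at iteration i + 1.
naive = [expensive(a[i]) + expensive(a[i + 1]) for i in range(n - 1)]

calls = 0
def counted(x):
    global calls
    calls += 1
    return expensive(x)

# Placed: the previously computed value is carried in a scalar, so each
# expensive() result is computed exactly once.
prev = counted(a[0])
placed = []
for i in range(n - 1):
    cur = counted(a[i + 1])
    placed.append(prev + cur)
    prev = cur  # loop-carried reuse
```

Here the number of evaluations drops from 2(n - 1) to n while producing the same result.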

    Improving data locality by chunking

    Abstract. Cache memories were invented to decouple fast processors from slow memories. However, this decoupling is only partial, and many researchers have attempted to improve cache use by program optimization. Potential benefits are significant since both energy dissipation and performance highly depend on the traffic between memory levels. But modeling the traffic is difficult; this observation has led to the use of heuristic methods for steering program transformations. In this paper, we propose another approach: we simplify the cache model and we organize the target program in such a way that an asymptotic evaluation of the memory traffic is possible. This information is used by our optimization algorithm in order to find the best reordering of the program operations, at least in an asymptotic sense. Our method optimizes both temporal and spatial locality. It can be applied to any static control program with arbitrary dependences. The optimizer has been partially implemented and applied to non-trivial programs. We present experimental evidence that the number of cache misses is drastically reduced, with corresponding performance improvements.
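
    The reordering idea behind chunking can be sketched on a simple two-pass computation. The chunk size, data, and passes here are illustrative assumptions; the sketch shows how restructuring the operation order keeps each chunk resident between passes:

```python
# Sketch: chunking two passes over a large array for temporal locality.
N, CHUNK = 1000, 64
x = [float(i % 7) for i in range(N)]

# Two whole-array passes: by the time the second pass revisits x[0],
# it has likely been evicted from cache.
y_ref = [v * 2.0 for v in x]
y_ref = [v + 1.0 for v in y_ref]

# Chunked: both passes run on one chunk before moving on, so the chunk
# is still hot in cache when the second pass touches it.
y = [0.0] * N
for start in range(0, N, CHUNK):
    end = min(start + CHUNK, N)
    for i in range(start, end):
        y[i] = x[i] * 2.0
    for i in range(start, end):
        y[i] = y[i] + 1.0
```

Asymptotically, the traffic between memory levels for the intermediate values shrinks from O(N) to O(CHUNK) per chunk, which is the kind of evaluation the simplified cache model makes tractable.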

    Replicated Placements in the Polyhedron Model


    Scalable and Modular Scheduling

    Scheduling a program (i.e., constructing a timetable for the execution of its operations) is one of the most powerful methods for automatic parallelization. A schedule gives a blueprint for constructing a synchronous program, suitable for an ASIC or a VLIW processor. However, constructing a schedule entails solving a large linear program. Even if one accepts the (experimental) fact that the simplex algorithm is almost always polynomial, the scheduling time is of the order of a large power of the program size. Hence, the method does not scale well. The present paper proposes two methods for improving the situation. First, a big program can be divided into smaller units (processes) which can be scheduled separately; this is modular scheduling. Second, one can use projection methods for solving linear programs incrementally, which is especially efficient if the dependence graph is sparse.
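
    What a schedule-as-timetable means can be sketched on a small recurrence. The recurrence and the affine schedule t(i, j) = i + j are illustrative assumptions, not from the paper; the sketch checks that executing operations by increasing date respects the dependences:

```python
# Sketch: an affine schedule as a timetable. A[i][j] depends on A[i-1][j]
# and A[i][j-1], so the schedule t(i, j) = i + j is legal: every dependence
# source runs at an earlier date than its sink.
N = 6

# Sequential reference execution.
ref = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(N):
        ref[i][j] = 1 if i == 0 or j == 0 else ref[i - 1][j] + ref[i][j - 1]

# Scheduled execution: all operations sharing a date t = i + j are
# independent and could run in parallel (a wavefront).
A = [[0] * N for _ in range(N)]
for t in range(2 * N - 1):
    for i in range(N):
        j = t - i
        if 0 <= j < N:
            A[i][j] = 1 if i == 0 or j == 0 else A[i - 1][j] + A[i][j - 1]
```

Finding such an affine date function for every statement is exactly the linear program whose size motivates the modular and incremental methods of the paper.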

    The Polyhedral Model Is More Widely Applicable Than You Think

    Abstract. The polyhedral model is a powerful framework for automatic optimization and parallelization. It is based on an algebraic representation of programs, making it possible to construct and search for complex sequences of optimizations. This model is now mature and is reaching production compilers. The main limitation of the polyhedral model is known to be its restriction to statically predictable, loop-based program parts. This paper removes this limitation, allowing it to operate on general data-dependent control flow. We embed control and exit predicates as first-class citizens of the algebraic representation, from program analysis to code generation. Complementing previous (partial) attempts in this direction, our work concentrates on extending the code generation step and does not compromise the expressiveness of the model. We present experimental evidence that our extension is relevant for program optimization and parallelization, showing performance improvements on benchmarks that were thought to be out of reach of the polyhedral model.
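
    The flavor of treating control predicates as ordinary data can be sketched with a simple if-conversion. The computation is an illustrative assumption, not from the paper; the point is that a data-dependent branch becomes a predicate value guarding each statement, leaving a loop whose control flow is statically predictable:

```python
# Sketch: if-conversion, with the branch condition turned into a predicate.
data = [5, -2, 7, 0, -9, 3]

# Original, data-dependent control flow.
out_ref = []
for v in data:
    if v > 0:
        out_ref.append(v * 2)
    else:
        out_ref.append(-v)

# Predicated form: p is computed as a value, and both statements execute
# unconditionally, combined by a select on p.
out = []
for v in data:
    p = int(v > 0)  # control predicate, now ordinary data
    out.append(p * (v * 2) + (1 - p) * (-v))
```

Once the branch is a value, the loop body has a single, statically known control path, which is the property the polyhedral representation needs.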