248 research outputs found

    The Budding of fruit trees

    Get PDF
    Although the principles of budding and grafting are basically the same—that is, the success depends upon the close contact of the cambium layers of both stock and scion—the operations are performed at different times of the year.

    Grafting fruit trees

    Get PDF
    Grafting is a process whereby a portion of one tree is joined to a rooted portion of another and subsequently unites through the mutual response of both stock and scion. Although it is possible to successfully graft and rework some kinds of fruit trees onto others, a more successful take is assured if the botanical relationship is very close, i.e., peaches grafted on peaches, apples grafted on apples, etc.

    Thinning of deciduous fruits

    Get PDF
    In the absence of any artificial control of the crop, most varieties of fruit trees will set much more fruit than it is possible for the trees to bring to marketable size, and this is particularly the case with stone fruits. Even under good cultural and weather conditions the trees are often unable to bring the fruit to satisfactory size unless thinning is practised.

    Solo-Fast Universal Constructions for Deterministic Abortable Objects

    Full text link

    Constraining Absolute Plate Motions Since the Triassic

    Get PDF
    The absolute motion of tectonic plates since Pangea can be derived from observations of hotspot trails, paleomagnetism, or seismic tomography. However, fitting observations is typically carried out in isolation without consideration for the fit to unused data or whether the resulting plate motions are geodynamically plausible. Through the joint evaluation of global hotspot track observations (for times <80 Ma), first-order estimates of net lithospheric rotation (NLR), and parameter estimation for paleo-trench migration (TM), we present a suite of geodynamically consistent, data-optimized global absolute reference frames from 220 Ma to the present. Each absolute plate motion (APM) model was evaluated against six published APM models, together incorporating the full range of primary data constraints. Model performance for published and new models was quantified through a standard statistical analysis using three key diagnostic global metrics: root-mean-square plate velocities, NLR characteristics, and TM behavior. Additionally, models were assessed for consistency with published global paleomagnetic data and, for ages <80 Ma, for predicted relative hotspot motion, track geometry, and time dependence. Optimized APM models demonstrated significantly improved global fit with geological and geophysical observations while performing consistently with geodynamic constraints. Critically, APM models derived by limiting average rates of NLR to ~0.05°/Myr and absolute TM velocities to ~27 mm/year fit geological observations including hotspot tracks. This suggests that this range of NLR and TM estimates may be appropriate for Earth over the last 220 Myr, providing a key step toward the practical integration of numerical geodynamics into plate tectonic reconstructions.
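    The first of those diagnostic metrics can be illustrated with a short sketch. The plate names, speeds, and areas below are invented placeholders, not values from the study; real APM models derive per-plate velocities from rotation poles:

    ```python
    import math

    # Hypothetical per-plate speeds (mm/yr) and surface areas (km^2) at a
    # single reconstruction time step.
    plates = {
        "Pacific":        {"speed": 80.0, "area": 1.03e8},
        "African":        {"speed": 25.0, "area": 6.13e7},
        "North American": {"speed": 23.0, "area": 7.59e7},
    }

    def rms_velocity(plates):
        """Area-weighted root-mean-square plate speed, in mm/yr."""
        total_area = sum(p["area"] for p in plates.values())
        mean_sq = sum(p["speed"] ** 2 * p["area"]
                      for p in plates.values()) / total_area
        return math.sqrt(mean_sq)

    print(round(rms_velocity(plates), 1))
    ```

    Area weighting matters here: a fast small plate contributes far less to the global metric than a fast large one, which is why the Pacific plate tends to dominate such statistics.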

    Parallelizing Deadlock Resolution in Symbolic Synthesis of Distributed Programs

    Full text link
    Previous work has shown that there are two major complexity barriers in the synthesis of fault-tolerant distributed programs: (1) generation of the fault-span, the set of states reachable in the presence of faults, and (2) resolving deadlock states, from where the program has no outgoing transitions. Of these, the former closely resembles model checking and, hence, techniques for efficient verification are directly applicable to it. Hence, we focus on expediting the latter with the use of multi-core technology. We present two approaches for parallelization by considering different design choices. The first approach is based on the computation of equivalence classes of program transitions (called group computation) that are needed due to the issue of distribution (i.e., the inability of processes to atomically read and write all program variables). We show that in most cases the speedup of this approach is close to the ideal speedup and in some cases it is superlinear. The second approach uses the traditional technique of partitioning deadlock states among multiple threads. However, our experiments show that the speedup for this approach is small. Consequently, our analysis demonstrates that a simple approach of parallelizing the group computation is likely to be the effective method for using multi-core computing in the context of deadlock resolution.
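    The second approach mentioned above, partitioning deadlock states among threads, can be sketched in a few lines. The state representation and the resolution step below are hypothetical placeholders; in the actual symbolic setting this work operates on BDDs rather than explicit states:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def resolve_deadlock_state(state):
        # Placeholder: in synthesis this would add recovery transitions
        # from the deadlock state or prune it from the state space.
        return ("resolved", state)

    def resolve_in_parallel(deadlock_states, num_threads=4):
        """Partition the deadlock states among worker threads and
        resolve each partition independently."""
        chunks = [deadlock_states[i::num_threads] for i in range(num_threads)]
        results = []
        with ThreadPoolExecutor(max_workers=num_threads) as pool:
            for chunk_result in pool.map(
                    lambda c: [resolve_deadlock_state(s) for s in c], chunks):
                results.extend(chunk_result)
        return results

    print(len(resolve_in_parallel(list(range(10)))))  # all 10 states handled
    ```

    Note that this sketch only shows the partitioning structure; it says nothing about why the measured speedup was small, which in the paper stems from the characteristics of the underlying symbolic operations, not from the partitioning itself.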

    Locking Fast

    Get PDF
    This article presents several independent optimizations of operations on monitors. They do not involve the low-level mutual exclusion mechanisms but rather their integration with and usage within higher-level constructs of the language. The paper reports acceleration of Hop, the Web programming language for which these optimizations have been created. The paper shows that other languages such as C and Java would also benefit from these optimizations.
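    For readers unfamiliar with the construct: a monitor pairs a lock with condition variables that guard shared state, so threads block and wake through the monitor rather than spinning. A minimal Python analogue (the class and its names are illustrative, not taken from Hop):

    ```python
    import threading

    class BoundedCounter:
        """Monitor-style object: one lock protects the state, and a
        condition variable lets threads wait for the state to change."""

        def __init__(self, limit):
            self._limit = limit
            self._value = 0
            self._lock = threading.Lock()
            self._not_full = threading.Condition(self._lock)

        def increment(self):
            with self._not_full:           # enter the monitor
                while self._value >= self._limit:
                    self._not_full.wait()  # release lock while blocked
                self._value += 1

        def decrement(self):
            with self._not_full:
                self._value -= 1
                self._not_full.notify()    # wake one waiting incrementer
    ```

    The overheads the article targets live precisely in this pattern: every entry and exit of a monitored method pays for lock acquisition and condition-variable bookkeeping even when there is no contention.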

    Why High-Performance Modelling and Simulation for Big Data Applications Matters

    Get PDF
    Modelling and Simulation (M&S) offer adequate abstractions to manage the complexity of analysing big data in scientific and engineering domains. Unfortunately, big data problems are often not easily amenable to efficient and effective use of High Performance Computing (HPC) facilities and technologies. Furthermore, M&S communities typically lack the detailed expertise required to exploit the full potential of HPC solutions, while HPC specialists may not be fully aware of specific modelling and simulation requirements and applications. The COST Action IC1406 High-Performance Modelling and Simulation for Big Data Applications has created a strategic framework to foster interaction between M&S experts from various application domains on the one hand and HPC experts on the other hand to develop effective solutions for big data applications. One of the tangible outcomes of the COST Action is a collection of case studies from various computing domains. Each case study brought together both HPC and M&S experts, bearing witness to the effective cross-pollination facilitated by the COST Action. In this introductory article we argue why joining forces between M&S and HPC communities is both timely in the big data era and crucial for success in many application domains. Moreover, we provide an overview of the state of the art in the various research areas concerned.

    Assessing Historical Fish Community Composition Using Surveys, Historical Collection Data, and Species Distribution Models

    Get PDF
    Accurate establishment of baseline conditions is critical to successful management and habitat restoration. We demonstrate the ability to robustly estimate historical fish community composition and assess the current status of the urbanized Barton Creek watershed in central Texas, U.S.A. Fish species were surveyed in 2008 and the resulting data compared to three sources of fish occurrence information: (i) historical records from a museum specimen database and literature searches; (ii) a nearly identical survey conducted 15 years earlier; and (iii) a modeled historical community constructed with species distribution models (SDMs). This holistic approach, and especially the application of SDMs, allowed us to discover that the fish community in Barton Creek was more diverse than the historical data and survey methods alone indicated. Sixteen native species with high modeled probability of occurrence within the watershed were not found in the 2008 survey; seven of these were not found in either survey or in any of the historical collection records. Our approach allowed us to more rigorously establish the true baseline for the pre-development fish fauna and then to more accurately assess trends and develop hypotheses regarding factors driving current fish community composition to better inform management decisions and future restoration efforts. Smaller, urbanized freshwater systems, like Barton Creek, typically have a relatively poor historical biodiversity inventory coupled with long histories of alteration, and thus there is a propensity for land managers and researchers to apply inaccurate baseline standards. Our methods provide a way around that limitation by using SDMs derived from larger and richer biodiversity databases of a broader geographic scope.
    Broadly applied, we propose that this technique has potential to overcome limitations of popular bioassessment metrics (e.g., IBI) to become a versatile and robust management tool for determining status of freshwater biotic communities.
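    The core comparison step described above reduces to flagging species whose modeled probability of occurrence is high yet are absent from the field survey. The species names, probabilities, and threshold below are invented for illustration and do not come from the study:

    ```python
    # Hypothetical SDM probabilities of occurrence in the watershed.
    modeled = {
        "Micropterus treculii": 0.91,
        "Notropis amabilis":    0.84,
        "Cyprinella venusta":   0.77,
        "Lepomis megalotis":    0.95,
    }

    # Species actually detected in the (hypothetical) field survey.
    surveyed = {"Lepomis megalotis", "Cyprinella venusta"}

    # High-probability species missing from the survey are candidates
    # for historical presence or local extirpation.
    threshold = 0.8
    missing = sorted(sp for sp, p in modeled.items()
                     if p >= threshold and sp not in surveyed)
    print(missing)
    ```

    The interesting management signal lies entirely in that `missing` set: it separates "never there" from "there historically but not detected now", which raw survey data alone cannot do.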
