    Grain boundary partitioning of Ar and He

    An experimental procedure has been developed that permits measurement of the partitioning of Ar and He between crystal interiors and the intergranular medium (ITM) that surrounds them in synthetic melt-free polycrystalline diopside aggregates. ^(37)Ar and ^(4)He are introduced into the samples via neutron irradiation. As samples are crystallized under sub-solidus conditions from a pure diopside glass in a piston cylinder apparatus, noble gases diffusively equilibrate between the evolving crystal and intergranular reservoirs. After equilibration, ITM Ar and He are distinguished from the gases incorporated within the crystals by means of step-heating analysis. An apparent equilibrium state (i.e., constant partitioning) is reached after about 20 h in the 1450 °C experiments. Data for longer durations show a systematic trend of decreasing ITM Ar (and He) with decreasing grain boundary (GB) interfacial area, as would be predicted for partitioning controlled by the network of planar grain boundaries (as opposed to ITM gases distributed in discrete micro-bubbles or melt). These data yield values of GB-area-normalized partitioning, K¯^(Ar)_(ITM), with units of (Ar/m^3 of solid)/(Ar/m^2 of GB), of 6.8 x 10^3 – 2.4 x 10^4 m^(-1). Combined petrographic microscope, SEM, and limited TEM observations showed no evidence that a residual glass phase or grain boundary micro-bubbles dominated the ITM, though they may represent minor components. If a nominal GB thickness (δ) is assumed, and if the densities of the crystals and the grain boundaries are assumed equal, then a true grain boundary partition coefficient (K^(Ar)_(GB) = X^(Ar)_(crystals)/X^(Ar)_(GB)) may be determined. For reasonable values of δ, K^(Ar)_(GB) is at least an order of magnitude lower than the Ar partition coefficient between diopside and melt. Helium partitioning data provide a less robust constraint, with K¯^(He)_(ITM) between 4 x 10^3 and 4 x 10^4 cm^(-1), similar to the Ar partitioning data.
    These data suggest that an ITM consisting of nominally melt-free, bubble-free, tight grain boundaries can constitute a significant but not infinite reservoir, and therefore a bulk transport pathway, for noble gases in fine-grained portions of the crust and mantle where aqueous or melt fluids are non-wetting and of very low abundance (i.e., <0.1% fluid). Heterogeneities in grain size within dry, equilibrated systems will therefore correspond to significant differences in bulk-rock noble gas content.
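    The conversion described above, from the GB-area-normalized value K¯_ITM to a true, dimensionless partition coefficient K_GB, can be sketched numerically. Since K¯_ITM has units of (gas/m^3 of solid)/(gas/m^2 of GB), dividing the GB concentration by an assumed thickness δ gives a volumetric concentration, so K_GB = K¯_ITM × δ. The value δ = 1 nm used here is an illustrative assumption, not a number from the abstract:

```python
# Sketch of the unit conversion from the abstract: turning the GB-area-normalized
# partitioning K_ITM (units m^-1) into a dimensionless grain boundary partition
# coefficient K_GB, assuming a nominal GB thickness delta and equal densities
# for crystals and grain boundaries. The delta value is an assumption.

def gb_partition_coefficient(k_itm_per_m: float, delta_m: float) -> float:
    """K_GB = X_crystals / X_GB = K_ITM * delta."""
    return k_itm_per_m * delta_m

delta = 1e-9  # assumed 1 nm grain boundary thickness (illustrative)
for k_itm in (6.8e3, 2.4e4):  # reported Ar range, in m^-1
    print(f"K_ITM = {k_itm:.1e} m^-1 -> K_GB = {gb_partition_coefficient(k_itm, delta):.1e}")
```

    The resulting K_GB values are small, consistent with the abstract's statement that grain boundary partitioning is well below the diopside/melt partition coefficient for reasonable δ.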

    Streaming Robust Submodular Maximization: A Partitioned Thresholding Approach

    We study the classical problem of maximizing a monotone submodular function subject to a cardinality constraint k, with two additional twists: (i) elements arrive in a streaming fashion, and (ii) m items from the algorithm's memory are removed after the stream is finished. We develop a robust submodular streaming algorithm, STAR-T, based on a novel partitioning structure and an exponentially decreasing thresholding rule. STAR-T makes one pass over the data and retains a short but robust summary. We show that after the removal of any m elements from the obtained summary, a simple greedy algorithm, STAR-T-GREEDY, that runs on the remaining elements achieves a constant-factor approximation guarantee. In two different data summarization tasks, we demonstrate that it matches or outperforms existing greedy and streaming methods, even if they are allowed the benefit of knowing the removed subset in advance. (Comment: To appear in NIPS 201)
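    The two-stage idea in this abstract (a one-pass summary built with decreasing thresholds, then plain greedy on whatever survives removal) can be illustrated with a heavily simplified sketch. This is not the paper's actual STAR-T construction; the bucket capacities, thresholds, and function below are illustrative assumptions:

```python
# Much-simplified sketch of partitioned thresholding: bucket i admits elements
# whose marginal gain exceeds tau / 2**i, up to a per-bucket capacity, in one
# pass. Afterwards, plain greedy runs on the summary minus any removed items.
# (The real STAR-T uses a more refined structure and guarantees.)

def star_t_sketch(stream, f, k, tau, num_buckets=4):
    buckets = [[] for _ in range(num_buckets)]
    caps = [k * 2**i for i in range(num_buckets)]  # growing capacity, falling threshold
    for e in stream:
        for i, b in enumerate(buckets):
            if len(b) < caps[i] and f(b + [e]) - f(b) >= tau / 2**i:
                b.append(e)
                break
    return [e for b in buckets for e in b]

def greedy_after_removal(summary, removed, f, k):
    """Stand-in for STAR-T-GREEDY: greedy on the surviving summary elements."""
    avail, sol = [e for e in summary if e not in removed], []
    for _ in range(k):
        best = max(avail, key=lambda e: f(sol + [e]) - f(sol), default=None)
        if best is None or f(sol + [best]) - f(sol) <= 0:
            break
        sol.append(best)
        avail.remove(best)
    return sol

# Toy monotone submodular objective: coverage by a family of sets.
universe = [{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 6}, {1, 6}]
f = lambda S: len(set().union(*S)) if S else 0
summary = star_t_sketch(universe, f, k=2, tau=2.0)
sol = greedy_after_removal(summary, removed=[], f=f, k=2)  # covers 4 elements
```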

    Improved model identification for non-linear systems using a random subsampling and multifold modelling (RSMM) approach

    In non-linear system identification, the available observed data are conventionally partitioned into two parts: the training data that are used for model identification and the test data that are used for model performance testing. This sort of 'hold-out' or 'split-sample' data partitioning method is convenient, and the associated model identification procedure is in general easy to implement. The resultant model obtained from such a once-partitioned single training dataset, however, may occasionally lack robustness and generalisation to represent future unseen data, because the performance of the identified model may be highly dependent on how the data partition is made. To overcome the drawback of the hold-out data partitioning method, this study presents a new random subsampling and multifold modelling (RSMM) approach to produce less biased or preferably unbiased models. The basic idea and the associated procedure are as follows. First, generate K training datasets (and also K validation datasets) using a K-fold random subsampling method. Second, detect significant model terms and identify a common model structure that fits all the K datasets, using a newly proposed common model selection approach called the multiple orthogonal search algorithm. Finally, estimate and refine the model parameters for the identified common-structured model using a multifold parameter estimation method. The proposed method can produce robust models with better generalisation performance.
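    The first step of the procedure above, generating K training/validation pairs by repeated random splits rather than a single hold-out split, can be sketched as follows. Function names, the 80/20 split fraction, and the seed are illustrative assumptions; the term-detection and multifold estimation stages are not reproduced:

```python
# Minimal sketch of K-fold random subsampling: K independent random splits of
# the data into (training, validation) pairs. Each model-structure decision in
# RSMM would then be checked against all K pairs, not a single hold-out split.
import random

def k_fold_random_subsampling(data, K, train_frac=0.8, seed=0):
    """Return K (train, validation) pairs drawn by independent random splits."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(K):
        shuffled = data[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_frac)
        pairs.append((shuffled[:cut], shuffled[cut:]))
    return pairs

splits = k_fold_random_subsampling(list(range(100)), K=5)
```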

    FPGA-based data partitioning

    Implementing parallel operators in multi-core machines often involves a data partitioning step that divides the data into cache-size blocks and arranges them so as to allow concurrent threads to process them in parallel. Data partitioning is expensive, in some cases accounting for up to 90% of the cost of, e.g., a parallel hash join. In this paper we explore the use of an FPGA to accelerate data partitioning. We do so in the context of new hybrid architectures where the FPGA is located as a co-processor residing on a socket, with coherent access to the same memory as the CPU residing on the other socket. Such an architecture reduces data transfer overheads between the CPU and the FPGA, enabling hybrid operator execution where the partitioning happens on the FPGA and the build and probe phases of a join happen on the CPU. Our experiments demonstrate that FPGA-based partitioning is significantly faster and more robust than CPU-based partitioning. The results open interesting options as FPGAs are gradually integrated more tightly with the CPU.
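    The partitioning step the paper offloads to the FPGA can be illustrated on the CPU side with a minimal hash-partitioning sketch: keys are fanned out into partitions by hash so that each partition can later be built and probed independently. The fanout value and function names are illustrative assumptions:

```python
# CPU-side sketch of the hash partitioning step described above: fan keys out
# into `fanout` partitions by hash value, so a parallel hash join can process
# each partition (ideally cache-sized) with an independent thread.

def hash_partition(keys, fanout=8):
    """Split keys into `fanout` partitions; partition index = hash(key) % fanout."""
    parts = [[] for _ in range(fanout)]
    for k in keys:
        parts[hash(k) % fanout].append(k)
    return parts

parts = hash_partition(range(1000), fanout=8)
```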

    Biomass distribution among tropical tree species grown under differing regional climates

    In the Neotropics, there is a growing interest in establishing plantations of native tree species for commerce, local consumption, and to replant on abandoned agricultural lands. Although numerous trial plantations have been established, comparative information on the performance of native trees under different regional environments is generally lacking. In this study, we evaluated the accumulation and partitioning of above-ground biomass in 16 native and two exotic tree species growing in replicated species selection trials in Panama under humid and dry regional environments. Seven of the 18 species accumulated greater total biomass at the humid site than at the dry site over a two-year period. Species-specific biomass partitioning among leaves, branches and trunks was observed. However, the wide range of total biomass found among species (from 1.06 kg for Dipteryx panamensis to 29.84 kg for Acacia mangium at Soberania) justified the use of an Aitchison log-ratio transformation to adjust for size. When biomass partitioning was adjusted for size, a majority of these differences proved to be a result of the ability of the tree to support biomass components rather than the result of differences in the regional environments at the two sites. These findings were confirmed by comparative ANCOVAs on Aitchison-transformed and non-Aitchison-transformed variables. In these comparisons, basal diameter, height and diameter at breast height were robust predictors of biomass for the pooled data from both sites, but Aitchison-transformed variables had little predictive power.
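    The Aitchison log-ratio idea used above to put compositions on a size-free scale can be sketched with the centred log-ratio (clr) transform, one common member of the Aitchison family. The biomass fractions below are hypothetical example numbers, not the study's data:

```python
# Sketch of a centred log-ratio (clr) transform for compositional data, the
# kind of Aitchison transformation used above to adjust biomass partitioning
# (leaf/branch/trunk fractions) for overall size.
import math

def clr(composition):
    """Centred log-ratio: log of each part divided by the geometric mean."""
    g = math.exp(sum(math.log(x) for x in composition) / len(composition))
    return [math.log(x / g) for x in composition]

biomass = [0.2, 0.3, 0.5]  # hypothetical leaf/branch/trunk fractions
z = clr(biomass)           # components sum to zero by construction
```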

    An error resilience method for depth in stereoscopic 3D video

    Error-resilient stereoscopic 3D video can ensure robust 3D video communication, especially over high-error-rate wireless channels. In this paper, an error resilience method is proposed for the depth data of stereoscopic 3D video using data partitioning. Although data partitioning is available for 2D video, its extension to depth information has not been investigated in the context of stereoscopic 3D video. Simulation results show that the depth data is less sensitive to error and should be partitioned towards the end of the data partitions block. The partitioned depth data is then applied to an error resilience method, namely multiple description coding (MDC), to code the 2D video and the depth information. Simulation results show improved performance using the proposed depth partitioning on MDC compared to the original MDC in an error-prone environment.