
    Improved Bounds and Schemes for the Declustering Problem

    The declustering problem is to allocate given data on parallel working storage devices in such a manner that typical requests find their data evenly distributed on the devices. Using deep results from discrepancy theory, we improve previous work of several authors concerning range queries to higher-dimensional data. We give a declustering scheme with an additive error of $O_d(\log^{d-1} M)$ independent of the data size, where $d$ is the dimension, $M$ the number of storage devices and $d-1$ does not exceed the smallest prime power in the canonical decomposition of $M$ into prime powers. In particular, our schemes work for arbitrary $M$ in dimensions two and three. For general $d$, they work for all $M \geq d-1$ that are powers of two. Concerning lower bounds, we show that a recent proof of a $\Omega_d(\log^{\frac{d-1}{2}} M)$ bound contains an error. We close the gap in the proof and thus establish the bound. Comment: 19 pages, 1 figure

    Asymptotically optimal declustering schemes for 2-dim range queries

    Declustering techniques have been widely adopted in parallel storage systems (e.g. disk arrays) to speed up bulk retrieval of multidimensional data. A declustering scheme distributes data items among multiple disks, thus enabling parallel data access and reducing query response time. We measure the performance of any declustering scheme as its worst-case additive deviation from the ideal scheme. The goal thus is to design declustering schemes with as small an additive error as possible. We describe a number of declustering schemes with additive error $O(\log M)$ for 2-dimensional range queries, where $M$ is the number of disks. These are the first results giving an $O(\log M)$ upper bound for all values of $M$. Our second result is a lower bound on the additive error. It is known that, except for a few stringent cases, the additive error of any 2-dimensional declustering scheme is at least one. We strengthen this lower bound to $\Omega((\log M)^{\frac{d-1}{2}})$ for $d$-dimensional schemes and to $\Omega(\log M)$ for 2-dimensional schemes, thus proving that the 2-dimensional schemes described in this paper are (asymptotically) optimal. These results are obtained by establishing a connection to geometric discrepancy. We also present simulation results to evaluate the performance of these schemes in practice.
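As a hedged sketch of the setting described in this abstract (using the classic disk-modulo allocation as a baseline, not one of the O(log M) schemes the authors describe), the additive error of a 2-dimensional declustering scheme on a rectangular range query can be measured like this:

```python
# Sketch of the declustering setting: allocate the cells of a 2-D grid of
# data blocks to M disks, then measure a rectangular query's additive
# deviation from the ideal parallel load ceil(q / M), where q is the
# number of blocks the query touches. The "disk modulo" rule below is a
# classic baseline, not a scheme from the abstract.

import math

def disk_modulo(i, j, M):
    """Assign grid cell (i, j) to disk (i + j) mod M."""
    return (i + j) % M

def additive_error(x0, y0, x1, y1, M, scheme=disk_modulo):
    """Load of the busiest disk minus the ideal load, for one query."""
    loads = [0] * M
    for i in range(x0, x1 + 1):
        for j in range(y0, y1 + 1):
            loads[scheme(i, j, M)] += 1
    q = (x1 - x0 + 1) * (y1 - y0 + 1)
    return max(loads) - math.ceil(q / M)

# A 3x5 query on M = 4 disks is balanced perfectly by disk modulo ...
print(additive_error(0, 0, 2, 4, 4))  # 0
# ... while a 2x2 query piles two of its four blocks onto one disk.
print(additive_error(0, 0, 1, 1, 4))  # 1
```

The worst case of this error over all rectangular queries is exactly the performance measure the abstract refers to.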

    Topological Properties of Epidemic Aftershock Processes

    Earthquakes in seismological catalogs and acoustic emission events in lab experiments can be statistically described as point events in linear Hawkes processes, where the spatiotemporal rate is a linear superposition of background intensity and aftershock clusters triggered by preceding activity. Traditionally, statistical seismology interpreted these models as the outcome of epidemic branching processes, where one-to-one causal links can be established between mainshocks and aftershocks. Declustering techniques are used to infer the underlying triggering trees and relate their topological properties with epidemic branching models. Here, we review how the standard Epidemic Type Aftershock Sequence (ETAS) model extends the Galton-Watson branching process and bridges two extreme cases: Poisson and scale-free power-law trees. We report the statistical laws expected in triggering trees regarding some topological properties. We find that the statistics of such topological properties depend exclusively on two parameters of the standard ETAS model: the average branching ratio n_b and the ratio between the exponents α and b characterizing the production of aftershocks and the distribution of magnitudes, respectively. In particular, the classification of clusters into bursts and swarms proposed by Zaliapin and Ben-Zion (2013b, https://doi.org/10.1002/jgrb.50178) appears naturally in the aftershock sequences of the standard ETAS model depending on n_b and α/b. On the other hand, swarms can also appear through false causal connections between independent events in nontectonic seismogenic episodes. From these results, one can use the memoryless Galton-Watson process as a null model for empirical triggering processes and assess the validity of the ETAS hypothesis to reproduce the statistics of natural and artificial catalogs.
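As a hedged illustration of the branching picture above (only the Galton-Watson skeleton, not the full spatiotemporal ETAS model), a triggering cascade driven by the branching ratio n_b can be simulated as follows; the Poisson offspring distribution and all numerical values are illustrative assumptions:

```python
# Minimal Galton-Watson cascade with Poisson offspring of mean n_b, the
# average branching ratio named in the abstract. Only the branching
# skeleton is simulated (no magnitudes, times, or locations); this is
# the kind of memoryless null model the abstract refers to.

import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplication method; adequate for small lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cascade_size(n_b, rng, max_events=100_000):
    """Total number of events triggered by one background event."""
    total, frontier = 1, 1
    while frontier and total < max_events:
        # Each event in the current generation spawns Poisson(n_b) children.
        children = sum(sample_poisson(n_b, rng) for _ in range(frontier))
        total += children
        frontier = children
    return total

rng = random.Random(42)
sizes = [cascade_size(0.8, rng) for _ in range(2000)]
# Subcritical regime (n_b < 1): mean cluster size approaches 1/(1 - n_b) = 5.
print(sum(sizes) / len(sizes))
```

Comparing the topological statistics of such null-model trees against declustered catalogs is the kind of consistency check the abstract describes.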

    Analysis and Comparison of Replicated Declustering Schemes


    Basophil Activation to Gluten and Non-Gluten Proteins in Wheat-Dependent Exercise-Induced Anaphylaxis

    Wheat-dependent exercise-induced anaphylaxis (WDEIA) is a cofactor-induced wheat allergy. Gluten proteins, especially ω5-gliadins, are known as major allergens, but partially hydrolyzed wheat proteins (HWPs) also play a role. Our study investigated the link between the molecular composition of gluten or HWP and allergenicity. Saline extracts of gluten (G), gluten with reduced content of ω5-gliadins (G-ω5), slightly treated HWPs (sHWPs), and extensively treated HWPs (eHWPs) were prepared as allergen test solutions and their allergenicity assessed using the skin prick test and basophil activation test (BAT) on twelve patients with WDEIA and ten controls. Complementary sodium dodecyl-sulfate polyacrylamide gel electrophoresis (SDS-PAGE), high-performance liquid chromatography (HPLC), and mass spectrometry (MS) analyses revealed that non-gluten proteins, mainly α-amylase/trypsin inhibitors (ATIs), were predominant in the allergen test solutions of G, G-ω5, and sHWPs. Only eHWPs contained gliadins and glutenins as the major fraction. All allergen test solutions induced significantly higher %CD63+ basophils/anti-FcεRI ratios in patients compared with controls. BAT using sHWPs yielded 100% sensitivity and 83% specificity at the optimal cut-off and may be useful as an additional tool in WDEIA diagnosis. Our findings indicate that non-gluten proteins carrying yet unidentified allergenic epitopes appear to be relevant in WDEIA. Further research is needed to clarify the role of nutritional ATIs in WDEIA and identify specific mechanisms of immune activation.

    Determination of an Ultimate Pit Limit Utilising Fractal Modelling to Optimise NPV

    The speed and complexity of globalisation and the reduction of natural resources on the one hand, and the interests of large multinational corporations on the other, necessitate proper management of mineral resources and consumption. The need for scientific research and the application of new methodologies and approaches to maximise Net Present Value (NPV) within mining operations is essential. In some cases, drill core logging in the field may result in an inadequate level of information and subsequent poor diagnosis of geological phenomena, which may undermine the delineation or separation of mineralised zones. This is because the interpretation of individual loggers is subjective. However, modelling based on logging data is absolutely essential to determine the architecture of an orebody, including ore distribution and geomechanical features. For instance, ore grades, density and RQD values are not included in conventional geological models, whilst variations in a mineral deposit are an obvious and salient feature. Given the problems mentioned above, a series of new mathematical methods have been developed, based on fractal modelling, which provide a more objective approach. These have been established and tested in a case study of the Kahang Cu-Mo porphyry deposit, central Iran. Recognition of different types of mineralised zone in an ore deposit is important for mine planning. As a result, it is felt that the most important outcome of this thesis is the development of an innovative approach to the delineation of major mineralised (supergene and hypogene) zones from ‘barren’ host rock. This is based on subsurface data and the utilisation of the Concentration-Volume (C-V) fractal model, proposed by Afzal et al. (2011), to optimise a Cu-Mo block model for better determination of an ultimate pit limit.
    Drawing on this, new approaches, referred to as Density–Volume (D–V) and RQD-Volume (RQD-V) fractal modelling, have been developed and used to delineate rock characteristics in terms of density and RQD within the Kahang deposit (Yasrebi et al., 2013b; Yasrebi et al., 2014). From the results of this modelling, the density and RQD populations of rock types from the studied deposit showed a relationship between density and rock quality based on RQD values, which can be used to predict the final pit slope. Finally, the study introduces a Present Value-Volume (PV-V) fractal model in order to identify an accurate excavation orientation with respect to economic principles and the ore grades of all determined voxels within the obtained ultimate pit limit, in order to achieve an earlier pay-back period. Funding: Institute of Materials, Minerals and Mining, the global network (IOM3); Cornish Institute of Engineers; Whittle Consulting (Business Optimisation for the Mining Industry).
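A hedged sketch of the Concentration-Volume idea underlying the fractal models above: for each grade threshold, measure the volume enclosed by that grade and look for straight segments in log-log space, whose breakpoints separate populations (e.g. barren host rock vs. mineralised zones). The grades below are synthetic, not Kahang data, and the block count is only a proxy for volume:

```python
# C-V sketch: count the model blocks whose grade exceeds each threshold
# and print the curve in log-log coordinates. Piecewise-linear segments
# of this curve are what the C-V fractal model fits; the thresholds at
# segment breaks delineate zones. All numbers here are illustrative.

import math
import random

rng = random.Random(7)
# Synthetic block-model grades: lognormal background plus an enriched zone.
grades = [rng.lognormvariate(0.0, 0.5) for _ in range(9000)]
grades += [rng.lognormvariate(1.5, 0.4) for _ in range(1000)]

def volume_above(threshold, grades):
    """Number of blocks with grade >= threshold (a proxy for volume)."""
    return sum(1 for g in grades if g >= threshold)

def cv_curve(grades, n=20):
    """(threshold, volume) pairs at log-spaced thresholds."""
    lo, hi = min(grades), max(grades)
    thresholds = [lo * (hi / lo) ** (k / (n - 1)) for k in range(n)]
    curve = [(t, volume_above(t, grades)) for t in thresholds]
    return [(t, v) for t, v in curve if v > 0]

for t, v in cv_curve(grades)[::4]:
    print(f"log10(threshold) = {math.log10(t):6.2f}   log10(volume) = {math.log10(v):5.2f}")
```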

    Discovery, isolation and structural characterization of cyclotides from Viola sumatrana Miq.

    Cyclotides are cyclic peptides from plants in the Violaceae, Rubiaceae, Fabaceae, Cucurbitaceae, and Solanaceae families. They are sparsely distributed in most of these families, but appear to be ubiquitous in the Violaceae, having been found in every plant so far screened from this family. However, not all geographic regions have been examined, and here we report the discovery of cyclotides from a Viola species from South-East Asia. Two novel cyclotides (Visu 1 and Visu 2) and two known cyclotides (kalata S and kalata B1) were identified in V. sumatrana. NMR studies revealed that kalata S and kalata B1 had similar secondary structures. Their biological activities were determined in cytotoxicity assays; both had similar cytotoxic activity and were more toxic to U87 cells than to other cell lines. Overall, the study strongly supports the ubiquity of cyclotides in the Violaceae and adds to our understanding of their distribution and cytotoxic activity.

    Enhance South Deep variography by including flat inclined boreholes in the local direct estimation methodology

    This report is submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Mining Engineering. 25-05-2018. The research project presented relates to the Mineral Resource evaluation of South Deep Gold Mine in Westonaria, South Africa. The aim of the project is to establish the impact of the inclusion of the samples from flatly inclined boreholes (FIBs) in the variography and Mineral Resource estimation of the individual Elsburg top conglomerate reef (ECT). The samples from FIBs are traditionally excluded from the estimation process to reduce the possibility of smearing grade, as stated in the Mine’s Code of Practice. These are boreholes with a dip greater than -55° and less than 55°. These boreholes provide the highest resolution into the orebody, and thus the highest level of de-risking of the orebody, and are therefore used for geological modelling. Although the addition of the samples from FIBs brings a substantial increase in the number of samples in some geostatistical domains, they do not introduce outliers. Adding the FIBs resulted in improved variogram models. The Simple Kriging models considered are one using the Au (g/t) samples from the steeply inclined holes only and the other using the combined dataset. These Kriging models were post-processed through Local Direct Conditioning (LDC) and the results were compared. Reconciliation indicates that the model remains stable, with a 1% change at the Mineral Resource and Mineral Reserve cut-off of 3.2 g/t Au, following the addition of Au (g/t) samples from FIBs in the mineral resource estimation. It is therefore concluded that adding the flatly inclined boreholes in the mineral resource estimation increases the confidence in Kriging and improves the variogram models.
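The quantity at the heart of the variography discussed above is the experimental semivariogram. A generic textbook sketch (not South Deep's actual methodology or data) on synthetic, regularly spaced 1-D samples:

```python
# Experimental semivariogram: gamma(h) is the mean of
# 0.5 * (z(x) - z(x + h))^2 over all sample pairs separated by lag h.
# Adding samples (as the FIBs do) increases the number of pairs per lag,
# which is why richer datasets tend to yield better variogram models.

import math

def semivariogram(values, max_lag):
    """Experimental semivariogram gamma(h) for h = 1..max_lag."""
    gamma = {}
    for h in range(1, max_lag + 1):
        diffs = [values[i] - values[i + h] for i in range(len(values) - h)]
        gamma[h] = sum(0.5 * d * d for d in diffs) / len(diffs)
    return gamma

# Smoothly varying synthetic grades: nearby samples are similar, so the
# semivariogram should rise with increasing lag.
z = [10 + 2 * math.sin(0.3 * i) for i in range(200)]
g = semivariogram(z, 5)
print({h: round(v, 3) for h, v in g.items()})
```

Fitting a model (spherical, exponential, etc.) to these experimental points is the variogram modelling step the report evaluates.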

    Identifying a new particle with jet substructures

    We investigate the potential of measuring properties of a heavy resonance X, exploiting jet substructure techniques. Motivated by heavy Higgs boson searches, we focus on the decays of X into a pair of (massive) electroweak gauge bosons. More specifically, we consider a hadronically decaying Z boson, which makes it possible to determine properties of X at an earlier stage. For $m_X$ of O(1) TeV, two quarks from a Z boson would be captured as a "merged jet" in a significant fraction of events. The use of the merged jet enables us to consider a Z-induced jet as a reconstructed object without any combinatorial ambiguity. We apply a conventional jet substructure method to extract the four-momenta of subjets from a merged jet. We find that jet substructure procedures may enhance features in some kinematic observables formed with subjets. Subjet momenta are fed into the matrix element associated with a given hypothesis on the nature of X, which is further processed to construct a matrix element method (MEM)-based observable. For both moderately and highly boosted Z bosons, we demonstrate that the MEM with current jet substructure techniques can be a very powerful discriminator in identifying the physics nature of X. We also discuss effects from choosing different jet sizes for merged jets and jet-grooming parameters upon the MEM analyses. Comment: 36 pages, 11 figures, published in JHEP
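A hedged sketch of one elementary step mentioned above: once jet substructure yields subjet four-momenta, the merged jet's invariant mass is formed from their sum, here for a mock boosted Z → q q̄ pair. The toy kinematics below are illustrative assumptions, not the paper's MEM analysis:

```python
# Invariant mass of a system of four-vectors (E, px, py, pz):
# m^2 = E^2 - |p|^2 for the summed vector. Two massless subjets from a
# Z decay should reconstruct m_Z, independent of the boost.

import math

def invariant_mass(p4s):
    """Invariant mass of the sum of a list of (E, px, py, pz) vectors."""
    E  = sum(p[0] for p in p4s)
    px = sum(p[1] for p in p4s)
    py = sum(p[2] for p in p4s)
    pz = sum(p[3] for p in p4s)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two massless subjets, back-to-back transverse to the boost axis in the
# Z rest frame (each with E = m_Z / 2), then boosted along z with gamma = 5.
mZ, gamma = 91.2, 5.0
beta = math.sqrt(1 - 1 / gamma**2)
e = mZ / 2
j1 = (gamma * e,  e, 0.0, gamma * beta * e)
j2 = (gamma * e, -e, 0.0, gamma * beta * e)
print(round(invariant_mass([j1, j2]), 1))  # recovers m_Z = 91.2
```

In practice grooming shifts the subjet momenta, which is why the abstract studies the sensitivity of the MEM observable to jet sizes and grooming parameters.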

    Discrepancy of arithmetic structures

    In discrepancy theory, we investigate how well a desired aim can be achieved. So typically we do not compare our solution with an optimal solution, but rather with an (idealized) aim. For example, in the declustering problem, we try to distribute data on parallel disks in such a way that all of a prespecified set of requests find their data evenly distributed on the disks. Hence our (idealized) aim is that each request asks for the same amount of data from each disk. Structural results tell us to which extent this is possible. They determine the discrepancy, the deviation of an optimal solution from our aim. Algorithmic results provide good declustering schemes. We show that for grid-structured data and rectangle queries, a discrepancy of order (log M)^((d-1)/2) cannot be avoided. Moreover, we present a declustering scheme with a discrepancy of order (log M)^(d-1). Furthermore, we present discrepancy results for hypergraphs related to the hypergraph of arithmetic progressions, for the hypergraph of linear hyperplanes in finite vector spaces and for products of hypergraphs.