
    Tectonic discrimination diagrams revisited

    The decision boundaries of most tectonic discrimination diagrams are drawn by eye. Discriminant analysis is a statistically more rigorous way to determine the tectonic affinity of oceanic basalts based on their bulk-rock chemistry. This method was applied to a database of 756 oceanic basalts of known tectonic affinity (ocean island, mid-ocean ridge, or island arc). For each of these training data, up to 45 major, minor, and trace elements were measured. Discriminant analysis assumes multivariate normality. If the same covariance structure is shared by all the classes (i.e., tectonic affinities), the decision boundaries are linear, hence the term linear discriminant analysis (LDA). In contrast, quadratic discriminant analysis (QDA) allows the classes to have different covariance structures. To solve the statistical problems associated with the constant-sum constraint of geochemical data, the training data must be transformed to log-ratio space before performing a discriminant analysis. The results can be mapped back to the compositional data space using the inverse log-ratio transformation. An exhaustive exploration of 14,190 possible ternary discrimination diagrams yields the Ti-Si-Sr system as the best linear discrimination diagram and the Na-Nb-Sr system as the best quadratic discrimination diagram. The best linear and quadratic discrimination diagrams using only immobile elements are Ti-V-Sc and Ti-V-Sm, respectively. As few as 5% of the training data are misclassified by these discrimination diagrams. Testing them on a second database of 182 samples that were not part of the training data yields a more reliable estimate of future performance. Although QDA misclassifies fewer training data than LDA, the opposite is generally true for the test data. Therefore LDA is a cruder but more robust classifier than QDA. Another advantage of LDA is that it provides a powerful way to reduce the dimensionality of the multivariate geochemical data, in a similar way to principal component analysis. This procedure yields a small number of "discriminant functions": linear combinations of the original variables that maximize the between-class variance relative to the within-class variance.
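
    As a minimal sketch of the workflow described above (not the paper's own code), the following R snippet fits LDA and QDA to log-ratio transformed compositions; the synthetic training set, its column names and class labels are all hypothetical:

        library(MASS)                                 # for lda() and qda()
        set.seed(1)
        # hypothetical training set: three classes with different Ti-Si-Sr means
        basalts <- data.frame(
          Ti = exp(rnorm(90, rep(c(9.0, 8.5, 8.0), each = 30), 0.2)),
          Si = exp(rnorm(90, rep(c(12.4, 12.5, 12.6), each = 30), 0.1)),
          Sr = exp(rnorm(90, rep(c(5.5, 4.8, 5.8), each = 30), 0.3)),
          affinity = rep(c("OIB", "MORB", "IAB"), each = 30))
        # additive log-ratio transform removes the constant-sum constraint
        alr <- data.frame(TiSr = log(basalts$Ti/basalts$Sr),
                          SiSr = log(basalts$Si/basalts$Sr))
        fit  <- lda(alr, grouping = basalts$affinity)      # linear boundaries
        qfit <- qda(alr, grouping = basalts$affinity)      # class-specific covariances
        mean(predict(fit, alr)$class != basalts$affinity)  # apparent error rate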

    On the treatment of discordant detrital zircon U–Pb data

    Zircon U–Pb geochronology is a staple of crustal evolution studies and sedimentary provenance analysis. Constructing (detrital) U–Pb age spectra is straightforward for concordant 206Pb/238U and 207Pb/206Pb compositions, but many U–Pb datasets contain a significant proportion of discordant analyses. This paper investigates two decisions that must be made when analysing such discordant U–Pb data. First, the analyst must choose whether to use the 206Pb/238U or the 207Pb/206Pb date. The 206Pb/238U method is more precise for young samples, whereas the 207Pb/206Pb method is better suited for old samples. However, there is no agreement on which "cutoff" should be used to switch between the two. This subjective decision can be avoided by using single-grain concordia ages. These represent a kind of weighted mean between the 206Pb/238U and 207Pb/206Pb methods, which offers better precision than either of the latter two methods. A second subjective decision is how to define the discordance cutoff between "good" and "bad" data. Discordance is usually defined as (1) the relative age difference between the 206Pb/238U and 207Pb/206Pb dates. However, this paper shows that several other definitions are possible as well, including (2) the absolute age difference; (3) the common-Pb fraction according to the Stacey–Kramers mantle evolution model; (4) the p-value of concordance; (5) the perpendicular log-ratio (or "Aitchison") distance to the concordia line; and (6) the log-ratio distance to the maximum likelihood composition on the concordia line. Applying these six discordance filters to a 70,869-grain dataset of zircon U–Pb compositions reveals that (i) the relative age discordance filter tends to suppress the young age components in U–Pb age spectra, whilst inflating the older age components; (ii) the Stacey–Kramers discordance filter is more likely to reject old grains and less likely to reject young ones; (iii) the p-value-based discordance filter has the undesirable effect of biasing the results towards the least precise measurements; (iv) the log-ratio-based discordance filters are strictest for Proterozoic grains and more lenient for Phanerozoic and Archaean age components; (v) of all the methods, the log-ratio distance to the concordia composition produces the best results, in the sense that it produces age spectra that most closely match those of the unfiltered data: it sharpens age spectra but does not change their shape. The popular relative age definition fares the worst according to this criterion. All the methods presented in this paper have been implemented in the IsoplotR toolbox for geochronology.
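
    By way of illustration (with made-up numbers, not data from the paper), the first two discordance definitions listed above can be computed in a few lines of R:

        # hypothetical pairs of 206Pb/238U and 207Pb/206Pb dates, in Ma
        t68 <- c(410, 995, 2680)
        t76 <- c(430, 1105, 2730)
        d_rel <- 100*(1 - t68/t76)   # (1) relative age difference (%)
        d_abs <- t76 - t68           # (2) absolute age difference (Ma)
        keep  <- d_rel < 10          # e.g. a conventional 10% discordance cutoff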

    Maximum depositional age estimation revisited

    In a recent review published in this journal, Coutts et al. (2019) compared nine different ways to estimate the maximum depositional age (MDA) of siliciclastic rocks by means of detrital geochronology. Their results show that, among these methods, three are positively and six negatively biased. This paper investigates the cause of these biases and proposes a solution. A simple toy example shows that it is theoretically impossible for the reviewed methods to find the correct depositional age even in a best-case scenario: the MDA estimates drift to ever smaller values with increasing sample size. The issue can be solved using a maximum likelihood model that was originally developed for fission track thermochronology by Galbraith and Laslett (1993). This approach parameterises the MDA estimation problem with a binary mixture of discrete and continuous distributions. The 'Maximum Likelihood Age' (MLA) algorithm converges to a unique MDA value, unlike the ad hoc methods reviewed by Coutts et al. (2019). It successfully recovers the depositional age for the toy example, and produces sensible results for realistic distributions. This is illustrated with an application to a published dataset of 13 sandstone samples that were analysed by both LA-ICPMS and CA-TIMS U–Pb geochronology. The ad hoc algorithms produce unrealistic MDA estimates that are systematically younger for the LA-ICPMS data than for the CA-TIMS data. The MLA algorithm does not suffer from this negative bias. The MLA method is a purely statistical approach to MDA estimation. Like the ad hoc methods, it does not readily accommodate geological complications such as post-depositional Pb loss, or analytical issues causing erroneously young outliers. The best approach in such complex cases is to re-analyse the youngest grains using more accurate dating techniques. The results of the MLA method are best visualised on radial plots. Both the model and the plots have applications outside detrital geochronology, for example to determine volcanic eruption ages.
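
    The negative bias of youngest-grain-style estimators is easy to reproduce. This R toy example (ours, not the paper's) draws grains whose true dates all equal the depositional age and shows the minimum observed date drifting younger as sample size grows:

        set.seed(1)
        true_age <- 100                   # Ma: every grain at the depositional age
        sigma    <- 5                     # analytical uncertainty (Ma)
        for (n in c(10, 100, 1000)) {
          dates <- rnorm(n, mean = true_age, sd = sigma)
          cat("n =", n, " youngest grain =", round(min(dates), 1), "Ma\n")
        }
        # the minimum is not a consistent estimator of the depositional age;
        # the mixture model of Galbraith and Laslett (1993) addresses this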

    Unifying the U–Pb and Th–Pb methods: joint isochron regression and common Pb correction

    The actinide elements U and Th undergo radioactive decay to three isotopes of Pb, forming the basis of three coupled geochronometers. The 206Pb/238U and 207Pb/235U decay systems are routinely combined to improve accuracy; joint consideration with the 208Pb/232Th decay system is less common. This paper aims to change this. Co-measured 208Pb/232Th is particularly useful for discordant samples containing variable amounts of non-radiogenic ("common") Pb. The paper presents a maximum likelihood algorithm for joint isochron regression of the 206Pb/238U, 207Pb/235U and 208Pb/232Th chronometers. Given a set of cogenetic samples, this total-Pb/U-Th algorithm estimates the common Pb composition and concordia intercept age. U–Th–Pb data can be visualised on a conventional Wetherill or Tera–Wasserburg concordia diagram, or on a 208Pb/232Th vs. 206Pb/238U plot. Alternatively, the results of the new discordia regression algorithm can also be visualised as a 208Pbc/206Pb vs. 238U/206Pb or 208Pbc/207Pb vs. 235U/207Pb isochron, where 208Pbc represents the common 208Pb component. In its most general form, the total-Pb/U-Th algorithm accounts for the uncertainties of all isotopic ratios involved, including the 232Th/238U ratio, as well as the systematic uncertainties associated with the decay constants and the 238U/235U ratio. However, numerical stability is greatly improved when the dependency on the 232Th/238U-ratio uncertainty is dropped. For detrital minerals, it is generally not safe to assume a shared common Pb composition and concordia intercept age. In this case, the total-Pb/U-Th regression method must be modified by tying it to a terrestrial Pb evolution model. Thus, the detrital common Pb correction can also be formulated in a maximum likelihood sense. The new method was applied to three published datasets, including low Th/U carbonates, high Th/U allanites and overdispersed monazites. The carbonate example illustrates how the total-Pb/U-Th method achieves a more precise common Pb correction than a conventional 207Pb-based approach does. The allanite sample shows the significant gain in both precision and accuracy that is made when the Th–Pb decay system is jointly considered with the U–Pb system. Finally, the monazite example is used to illustrate how the total-Pb/U-Th regression algorithm can be modified to include an overdispersion parameter. All the parameters in the discordia regression method (including the age and the overdispersion parameter) are strictly positive quantities that exhibit skewed error distributions near zero. This skewness can be accounted for using the profile log-likelihood method or by recasting the regression algorithm in terms of logarithmic quantities. Both approaches yield realistic asymmetric confidence intervals for the model parameters. The new algorithm is flexible enough that it can accommodate disequilibrium corrections and intersample error correlations when these are provided by the user. All the methods presented in this paper have been added to the IsoplotR software package. This will hopefully encourage geochronologists to take full advantage of the entire U–Th–Pb decay system.
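
    To make the final point concrete, here is a generic R sketch of a profile log-likelihood confidence interval for a strictly positive parameter (an exponential rate standing in for, say, an age or overdispersion term); the data and model are hypothetical, not the paper's regression:

        set.seed(1)
        x <- rexp(8, rate = 0.5)               # small sample: skewness near zero
        loglik <- function(theta) sum(dexp(x, rate = theta, log = TRUE))
        theta_hat <- 1/mean(x)                 # maximum likelihood estimate
        # 95% interval: where the profile drops less than qchisq(0.95,1)/2
        cut  <- loglik(theta_hat) - qchisq(0.95, df = 1)/2
        grid <- seq(0.01, 3, length.out = 1000)
        ll   <- sapply(grid, loglik)
        range(grid[ll >= cut])                 # asymmetric confidence interval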

    An algorithm for Uā€“Pb geochronology by secondary ion mass spectrometry

    Secondary ion mass spectrometry (SIMS) is a widely used technique for in situ U–Pb geochronology of accessory minerals. Existing algorithms for SIMS data reduction and error propagation make a number of simplifying assumptions that degrade the precision and accuracy of the resulting U–Pb dates. This paper uses an entirely new approach to SIMS data processing that introduces the following improvements over previous algorithms. First, it treats SIMS measurements as compositional data using log-ratio statistics. This means that, unlike existing algorithms, (a) its isotopic ratio estimates are guaranteed to be strictly positive numbers, (b) identical results are obtained regardless of whether data are processed as normal ratios (e.g. 206Pb/238U) or reciprocal ratios (e.g. 238U/206Pb), and (c) its uncertainty estimates account for the positive skewness of measured isotopic ratio distributions. Second, the new algorithm accounts for the Poisson noise that characterises secondary electron multiplier (SEM) detectors. By fitting the SEM signals using the method of maximum likelihood, it naturally handles low-intensity ion beams, in which zero-count signals are common. Third, the new algorithm casts the data reduction process in a matrix format and thereby captures all sources of systematic uncertainty. These include significant inter-spot error correlations that arise from the Pb/U–UO2/U calibration curve. The new algorithm has been implemented in a new software package called simplex. The simplex package was written in R and can be used either online, offline, or from the command line. The programme can handle SIMS data from both Cameca and SHRIMP instruments.
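
    A stripped-down illustration of the second point in R (hypothetical numbers, not the simplex implementation): a constant count rate is fitted to zero-inflated SEM data by Poisson maximum likelihood, with the rate parameterised in log space so that the estimate stays strictly positive:

        counts <- c(0, 1, 0, 2, 1, 0, 3)      # counts per cycle; zeros are common
        dwell  <- rep(0.2, length(counts))    # dwell time per cycle (s)
        # negative Poisson log-likelihood of the log count rate
        negll <- function(loglam)
          -sum(dpois(counts, lambda = exp(loglam)*dwell, log = TRUE))
        fit <- optimize(negll, interval = c(-10, 10))
        exp(fit$minimum)                      # count rate (cps), strictly positive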

    Making geological sense of 'Big Data' in sedimentary provenance

    Sedimentary provenance studies increasingly apply multiple chemical, mineralogical and isotopic proxies to many samples. The resulting datasets are often so large (containing thousands of numerical values) and complex (comprising multiple dimensions) that it is warranted to use the Internet-era term 'Big Data' to describe them. This paper introduces Multidimensional Scaling (MDS), Generalised Procrustes Analysis (GPA) and Individual Differences Scaling (INDSCAL, a type of '3-way MDS' algorithm) as simple yet powerful tools to extract geological insights from 'Big Data' in a provenance context. Using a dataset from the Namib Sand Sea as a test case, we show how MDS can be used to visualise the similarities and differences between 16 fluvial and aeolian sand samples for five different provenance proxies, resulting in five different 'configurations'. These configurations can be fed into a GPA algorithm, which translates, rotates, scales and reflects them to extract a 'consensus view' for all the data considered together. Alternatively, the five proxies can be jointly analysed by INDSCAL, which fits the data with not one but two sets of coordinates: the 'group configuration', which strongly resembles the graphical output produced by GPA, and the 'source weights', which can be used to attach geological meaning to the group configuration. For the Namib study, the three methods paint a detailed and self-consistent picture of a sediment routing system in which sand composition is determined by the combination of provenance and hydraulic sorting effects.
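
    A minimal base-R sketch of the MDS step (with invented samples; the paper's own implementation lives in the provenance package): dissimilarities between detrital age distributions are summarised by the Kolmogorov-Smirnov statistic and mapped to two dimensions:

        set.seed(1)
        samples <- list(A = rnorm(50, 500, 50), B = rnorm(50, 520, 50),
                        C = rnorm(50, 900, 80))
        ksdist <- function(a, b) {                  # K-S dissimilarity
          x <- sort(c(a, b))
          max(abs(ecdf(a)(x) - ecdf(b)(x)))
        }
        n <- length(samples)
        d <- matrix(0, n, n, dimnames = list(names(samples), names(samples)))
        for (i in 1:n) for (j in 1:n) d[i, j] <- ksdist(samples[[i]], samples[[j]])
        conf <- cmdscale(as.dist(d), k = 2)         # classical MDS 'configuration'
        plot(conf, type = "n"); text(conf, labels = rownames(conf))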

    A genetic classification of the tholeiitic and calc-alkaline magma series

    The concept of the 'magma series' and the distinction between alkaline, calc-alkaline and tholeiitic trends has been a cornerstone of igneous petrology since the early 20th century, and encodes fundamental information about the redox state of divergent and convergent plate tectonic settings. We show that the 'Bowen and Fenner trends' that characterise the calc-alkaline and tholeiitic types of magmatic environments can be approximated by a simple log-ratio model based on three coupled exponential decay functions, for A = Na2O + K2O, F = FeOT and M = MgO, respectively. We use this simple natural law to define a 'Bowen-Fenner Index' to quantify the degree to which an igneous rock belongs to either magma series. Applying our model to a data compilation of igneous rocks from Iceland and the Cascade Mountains effectively separates these into tholeiitic and calc-alkaline trends. However, the simple model fails to capture the distinct dog-leg that characterises the tholeiitic log-ratio evolution, which can be attributed to the switch from ferrous to ferric iron-bearing minerals. Parameterising this switch in a two-stage magma evolution model results in a more accurate fit to the Icelandic data. The same two-stage model can also be fitted in A–T–M space, where 'T' stands for TiO2. This produces a new way to identify calc-alkaline and tholeiitic rocks that does not require the conversion of FeO and Fe2O3 to FeOT. Our results demonstrate that log-ratio analysis provides a natural way to parameterise the physical processes that give rise to these magma series.
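
    In log-ratio coordinates the two trends separate as roughly linear arrays, which a short R sketch can illustrate (the oxide values are invented, and the actual Bowen-Fenner Index parameterisation is defined in the paper, not here):

        # hypothetical AFM data in wt%
        afm <- data.frame(A = c(3.1, 4.0, 5.2),    # Na2O + K2O
                          F = c(9.5, 11.2, 8.1),   # FeOT
                          M = c(7.0, 4.5, 2.3))    # MgO
        lAM <- log(afm$A/afm$M)                    # alkali enrichment
        lFM <- log(afm$F/afm$M)                    # iron enrichment
        plot(lAM, lFM, type = "b", xlab = "ln(A/M)", ylab = "ln(F/M)")
        # calc-alkaline (Bowen) and tholeiitic (Fenner) suites follow
        # different paths in this log-ratio space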

    Short communication: Inverse isochron regression for Re–Os, K–Ca and other chronometers

    Conventional Re–Os isochrons are based on mass spectrometric estimates of 187Re/188Os and 187Os/188Os, which often exhibit strong error correlations that may obscure potentially important geological complexity. Using an approach that is widely accepted in 40Ar/39Ar and U–Pb geochronology, we here show that these error correlations are greatly reduced by applying a simple change of variables, using 187Os as a common denominator. Plotting 188Os/187Os vs. 187Re/187Os produces an "inverse isochron", defining a binary mixing line between an inherited Os component, whose 188Os/187Os ratio is given by the vertical intercept, and the radiogenic 187Re/187Os ratio, which corresponds to the horizontal intercept. Inverse isochrons facilitate the identification of outliers and other sources of data dispersion. They can also be applied to other geochronometers such as the K–Ca method and (with less dramatic results) the Rb–Sr, Sm–Nd and Lu–Hf methods. Conventional and inverse isochron ages are similar for precise datasets but may significantly diverge for imprecise ones. A semi-synthetic data simulation indicates that, in the latter case, the inverse isochron age is more accurate. The generalised inverse isochron method has been added to the IsoplotR toolbox for geochronology, which automatically converts conventional isochron ratios into inverse ratios, and vice versa.
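
    The change of variables itself is elementary. The R sketch below (with invented numbers) converts one conventional ratio pair, X = 187Re/188Os and Y = 187Os/188Os, to inverse coordinates x = X/Y and y = 1/Y, and propagates the uncertainties to first order:

        X <- 420; sX <- 4              # 187Re/188Os and its 1-sigma error
        Y <- 1.1; sY <- 0.01           # 187Os/188Os and its 1-sigma error
        rXY <- 0.95                    # strong conventional error correlation
        x <- X/Y                       # 187Re/187Os
        y <- 1/Y                       # 188Os/187Os
        # first-order error propagation in relative errors
        vx  <- (sX/X)^2 + (sY/Y)^2 - 2*rXY*(sX/X)*(sY/Y)
        sx  <- x * sqrt(vx)
        sy  <- y * (sY/Y)
        rxy <- (sY/Y - rXY*sX/X)/sqrt(vx)   # much weaker inverse correlation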

    An R package for statistical provenance analysis

    This paper introduces provenance, a software package within the statistical programming environment R, which aims to facilitate the visualisation and interpretation of large amounts of sedimentary provenance data, including mineralogical, petrographic, chemical and isotopic provenance proxies, or any combination of these. provenance comprises functions to: (a) calculate the sample size required to achieve a given detection limit; (b) plot distributional data such as detrital zircon U-Pb age spectra as Cumulative Age Distributions (CADs) or adaptive Kernel Density Estimates (KDEs); (c) plot compositional data as pie charts or ternary diagrams; (d) correct the effects of hydraulic sorting on sandstone petrography and heavy mineral composition; (e) assess the settling equivalence of detrital minerals and the grain-size dependence of sediment composition; (f) quantify the dissimilarity between distributional data using the Kolmogorov-Smirnov and Sircombe-Hazelton distances, or between compositional data using the Aitchison and Bray-Curtis distances; (g) interpret multi-sample datasets by means of (classical and nonmetric) Multidimensional Scaling (MDS) and Principal Component Analysis (PCA); and (h) simplify the interpretation of multi-method datasets by means of Generalised Procrustes Analysis (GPA) and 3-way MDS. All these tools can be accessed through an intuitive query-based user interface, which does not require knowledge of the R programming language. provenance is free software released under the GPL-2 licence and will be further expanded based on user feedback.
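
    A typical session might look as follows. This is a hedged sketch based on the functionality described above; "ages.csv" is a hypothetical input file with one column of detrital zircon U-Pb ages per sample:

        # install.packages("provenance")
        library(provenance)
        UPb <- read.distributional("ages.csv")  # load distributional data
        plot(KDEs(UPb))                         # adaptive kernel density estimates
        plot(MDS(UPb))                          # MDS map from K-S dissimilarities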

    Measuring Sand Dune Migration Rates with COSI-Corr and Landsat: Opportunities and Challenges

    It has been over a decade since COSI-Corr, the Co-Registration of Optically Sensed Images and Correlation, was first used to produce a raster map of sand dune movement; however, no studies have yet applied it to the full Landsat archive. The orthorectified and geolocated Landsat Level-1 Precision Terrain (L1TP) products offer the opportunity to simplify the COSI-Corr pre-processing steps, allowing an automated workflow to be devised. In the Bodélé Depression, Chad, this automated workflow has calculated average dune speeds of 15.83 m/year and an increase in dune movement of 2.56 ± 12.58 m/year from 1987 to 2009. However, this increase does not stem from a systematic increase in dune mobility. The fastest 25% of dunes from 1987 to 1998 reduced their speed by 18.16%. The overall increase stems from the acceleration of features previously moving at under 13.30 m/year. While successfully applied to the Bodélé Depression, the automated workflow produces highly variable outputs when applied to the Grand Erg Oriental, Algeria. Variations within path/row scene pairings are caused by the use of mobile features, such as dune crests, as ground control points (GCPs). This has the potential to warp Landsat scenes during the L1TP processing, potentially obscuring dune migration. Two factors appear to be crucial in determining whether a Landsat scene is suitable for COSI-Corr analysis. Firstly, dune mobility must exceed the misregistration criteria. Secondly, GCPs should be located on static features such as bedrock outcrops.
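
    As a back-of-envelope illustration of the final step of such a workflow (our sketch, not the study's code), the east-west and north-south displacement grids produced by the image correlation can be converted to annual migration speeds in R:

        set.seed(1)
        dx <- matrix(rnorm(16, 150, 20), 4, 4)   # E-W displacement (m), invented
        dy <- matrix(rnorm(16,  30, 10), 4, 4)   # N-S displacement (m), invented
        dt <- 11                                 # years between the two scenes
        speed <- sqrt(dx^2 + dy^2)/dt            # m/year per pixel
        mean(speed)                              # average dune migration rate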