
    The thermal history of the Western Irish onshore

    Get PDF
    We present here a low-temperature thermochronological study that combines the apatite fission-track and (U + Th)/He dating methods with a pseudo-vertical sampling approach to generate continuous and well-constrained temperature–time histories from the onshore Irish Atlantic margin. The apatite fission-track and (U + Th)/He ages range from the Late Jurassic to Early Cretaceous and the mean track lengths are relatively short. Thermal histories derived from inverse modelling show that, following post-orogenic exhumation, the sample profiles cooled to c. 75 °C. A rapid cooling event to surface temperatures occurred during the Late Jurassic to Early Cretaceous and was diachronous from north to south. It was most probably caused by c. 2.5 km of rift-shoulder-related exhumation and can be temporally linked to the main stage of Mesozoic rifting in the offshore basins. A slow phase of reheating during the Late Cretaceous and Early Cenozoic is attributed to the deposition of a thick sedimentary sequence that resulted in c. 1.5 km of burial. Our data imply a final pulse of exhumation in Neogene times, probably related to compression of the margin. However, it is possible that an Early Cenozoic cooling event, compatible with our data but not seen in our inverse models, accounts for part of the Cenozoic exhumation.

    Computational complexity and memory usage for multi-frontal direct solvers in structured mesh finite elements

    Full text link
    The multi-frontal direct solver is the state-of-the-art algorithm for the direct solution of sparse linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm to linear systems resulting from B-spline-based isogeometric finite elements, where the mesh is a structured grid. Specifically, we provide the estimates for systems resulting from C^{p-1} polynomial B-spline spaces and compare them to those obtained using C^0 spaces.
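    As a rough, hands-on companion to these estimates (not the paper's derivation), the sketch below probes how direct-solver memory grows with the coupling width of a structured-grid system: the 1D half-bandwidth p mimics the roughly (2p+1)-wide coupling of a C^{p-1} B-spline basis, and SciPy's SuperLU factorisation stands in for a multi-frontal solver as a memory proxy.

```python
# Hedged sketch, not the paper's analysis: empirically probe how sparse direct-
# solver memory grows with the coupling width of a structured-grid system.
# The 1D half-bandwidth p mimics the ~(2p+1)-wide coupling of a C^{p-1}
# B-spline basis; SciPy's SuperLU stands in for a multi-frontal solver here.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def banded_spd_1d(n, p):
    """Diagonally dominant SPD band matrix with half-bandwidth p."""
    offsets = list(range(1, p + 1))
    A = sp.diags([np.full(n - k, -1.0) for k in offsets], offsets, shape=(n, n))
    return A + A.T + sp.eye(n) * (2.0 * p + 1.0)

n = 64                                    # 1D grid size -> 2D system is n^2 x n^2
I = sp.eye(n)
for p in (1, 2, 3, 4):
    A1 = banded_spd_1d(n, p)
    A2 = sp.csc_matrix(sp.kron(A1, I) + sp.kron(I, A1))   # 2D tensor-product system
    lu = spla.splu(A2)                    # sparse direct factorisation
    print(f"p={p}: nnz(A) = {A2.nnz}, nnz(L+U) = {lu.L.nnz + lu.U.nnz}")
```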

    The cost of continuity: performance of iterative solvers on isogeometric finite elements

    Full text link
    In this paper we study how the use of a more continuous set of basis functions affects the cost of solving systems of linear equations resulting from a discretized Galerkin weak form. Specifically, we compare the performance of linear solvers when discretizing using C^0 B-splines, which span traditional finite element spaces, and C^{p-1} B-splines, which represent maximum continuity. We provide theoretical estimates for the increase in cost of the matrix-vector product as well as for the construction and application of black-box preconditioners. We accompany these estimates with numerical results and study their sensitivity to various grid parameters such as element size h and polynomial order of approximation p. Finally, we present timing results for a range of preconditioning options for the Laplace problem. We conclude that the matrix-vector product operation is at most 33p^2/8 times more expensive for the more continuous space, although for moderately low p, this number is significantly reduced. Moreover, if static condensation is not employed, this number further reduces to at most a value of 8, even for high p. Preconditioning options can be up to p^3 times more expensive to set up, although this difference significantly decreases for some popular preconditioners such as incomplete LU factorization.
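    For a quick sense of scale, the worst-case factor quoted above can be tabulated directly (a trivial arithmetic sketch, not taken from the paper):

```python
# Tabulate the worst-case ratio 33*p^2/8 quoted in the abstract: the cost of the
# matrix-vector product for the maximally continuous C^{p-1} space relative to
# the C^0 space (the bound drops to at most 8 without static condensation).
for p in range(1, 7):
    print(f"p = {p}: 33p^2/8 = {33 * p**2 / 8:.2f}")
```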

    Preprocessing Solar Images while Preserving their Latent Structure

    Get PDF
    Telescopes such as the Atmospheric Imaging Assembly aboard the Solar Dynamics Observatory, a NASA satellite, collect massive streams of high-resolution images of the Sun through multiple wavelength filters. Reconstructing pixel-by-pixel thermal properties based on these images can be framed as an ill-posed inverse problem with Poisson noise, but this reconstruction is computationally expensive and there is disagreement among researchers about what regularization or prior assumptions are most appropriate. This article presents an image segmentation framework for preprocessing such images in order to reduce the data volume while preserving as much thermal information as possible for later downstream analyses. The resulting segmented images reflect thermal properties but do not depend on solving the ill-posed inverse problem. This allows users to avoid the Poisson inverse problem altogether or to tackle it on each of ∼10 segments rather than on each of ∼10^7 pixels, reducing computing time by a factor of ∼10^6. We employ a parametric class of dissimilarities that can be expressed as cosine dissimilarity functions or Hellinger distances between nonlinearly transformed vectors of multi-passband observations in each pixel. We develop a decision-theoretic framework for choosing the dissimilarity that minimizes the expected loss that arises when estimating identifiable thermal properties based on segmented images rather than on a pixel-by-pixel basis. We also examine the efficacy of different dissimilarities for recovering clusters in the underlying thermal properties. The expected losses are computed under scientifically motivated prior distributions. Two simulation studies guide our choices of dissimilarity function. We illustrate our method by segmenting images of a coronal hole observed on 26 February 2015.
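    A minimal sketch of the two dissimilarity families mentioned above, applied to nonlinearly transformed multi-passband pixel vectors; the power transform alpha and the six-passband toy counts are hypothetical stand-ins, not the authors' parametrisation:

```python
# Hedged illustration, not the authors' code: the two dissimilarity families
# named in the abstract, applied to nonlinearly transformed multi-passband
# pixel vectors. The power transform `alpha` and the six-passband toy counts
# are hypothetical stand-ins for the paper's parametric family.
import numpy as np

def cosine_dissimilarity(x, y):
    """1 minus the cosine of the angle between x and y."""
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def hellinger_distance(x, y):
    """Hellinger distance between the vectors after normalising to unit sum."""
    p, q = x / x.sum(), y / y.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

rng = np.random.default_rng(0)
alpha = 0.5                                      # hypothetical transform parameter
pix_a = rng.poisson(50, size=6).astype(float)    # toy counts in 6 passbands
pix_b = rng.poisson(60, size=6).astype(float)
ta, tb = pix_a ** alpha, pix_b ** alpha          # nonlinear transform
print(cosine_dissimilarity(ta, tb), hellinger_distance(ta, tb))
```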

    Measuring plume-related exhumation of the British Isles in Early Cenozoic times

    Get PDF
    Mantle plumes have been proposed to exert a first-order control on the morphology of Earth's surface. However, there is little consensus on the lifespan of the convectively supported topography. Here, we focus on the Cenozoic uplift and exhumation history of the British Isles. While uplift in the absence of major regional tectonic activity has long been documented, the causative mechanism is highly controversial, and direct exhumation estimates are hindered by the near-complete absence of onshore post-Cretaceous sediments (outside Northern Ireland) and the truncated stratigraphic record of many offshore basins. Two main hypotheses have been developed by previous studies: epeirogenic exhumation driven by the proto-Iceland plume, or multiple phases of Cenozoic compression driven by far-field stresses. Here, we present a new thermochronological dataset comprising 43 apatite fission track (AFT) and 102 (U–Th–Sm)/He (AHe) dates from the onshore British Isles. Inverse modelling of vertical sample profiles allows us to define well-constrained regional cooling histories. Crucially, the thermal history models show that a rapid exhumation pulse (1–2.5 km) occurred during the Paleocene, focused on the Irish Sea. Exhumation is greatest in the north of the Irish Sea region and decreases in intensity to the south and west. The spatial pattern of Paleocene exhumation is in agreement with the extent of magmatic underplating inferred from geophysical studies, and the timing of uplift and exhumation is synchronous with emplacement of the plume-related British and Irish Paleogene Igneous Province (BIPIP). Prior to the Paleocene exhumation pulse, Mesozoic onshore exhumation is mainly linked to the uplift and erosion of the hinterland during the complex and long-lived rifting history of the neighbouring offshore basins. The extent of Neogene exhumation is difficult to constrain due to the poor sensitivity of the AHe and AFT systems at low temperatures. We conclude that the Cenozoic topographic evolution of the British Isles is the result of plume-driven uplift and exhumation, with inversion under compressive stress playing a secondary role.

    Detecting Unspecified Structure in Low-Count Images

    Full text link
    Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tail probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value, enabling strong conclusions even when only limited computational resources can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.
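    As a generic illustration of bounding significance with only a few null simulations (not the paper's posterior tail-probability bound), a conservative Monte Carlo p-value can be computed as follows:

```python
# Generic, hedged sketch: a conservative Monte Carlo p-value with few null
# simulations. This is NOT the paper's posterior tail-probability bound; it
# only illustrates how an upper bound on significance can be obtained when
# simulating under the null model is expensive.
import numpy as np

def monte_carlo_pvalue(t_obs, t_null_sims):
    """(1 + #exceedances) / (1 + n_sim): a valid, conservative p-value."""
    t_null_sims = np.asarray(t_null_sims)
    return (1 + np.sum(t_null_sims >= t_obs)) / (1 + t_null_sims.size)

rng = np.random.default_rng(1)
t_obs = 4.2                              # hypothetical observed test statistic
t_null = rng.normal(size=200)            # stand-in for null-model simulations
print(monte_carlo_pvalue(t_obs, t_null))
```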

    Multi-physics ensemble snow modelling in the western Himalaya

    Get PDF
    Combining multiple data sources with multi-physics simulation frameworks offers new potential to extend snow model inter-comparison efforts to the Himalaya. As such, this study evaluates the sensitivity of simulated regional snow cover and runoff dynamics to different snowpack process representations. The evaluation is based on a spatially distributed version of the Factorial Snowpack Model (FSM) set up for the Astore catchment in the upper Indus basin. The FSM multi-physics model was driven by climate fields from the High Asia Refined Analysis (HAR) dynamical downscaling product. Ensemble performance was evaluated primarily using MODIS remote sensing of snow-covered area, albedo and land surface temperature. In line with previous snow model inter-comparisons, no single FSM configuration performs best in all of the years simulated. However, the results demonstrate that performance variation in this case is at least partly related to inaccuracies in the sequencing of inter-annual variation in HAR climate inputs, not just FSM model limitations. Ensemble spread is dominated by interactions between parameterisations of albedo, snowpack hydrology and atmospheric stability effects on turbulent heat fluxes. The resulting ensemble structure is similar in different years, which leads to systematic divergence in ablation and mass balance at high elevations. While ensemble spread and errors are notably lower when viewed as anomalies, FSM configurations show important differences in their absolute sensitivity to climate variation. Comparison with observations suggests that a subset of the ensemble should be retained for climate change projections, namely those members including prognostic albedo and liquid water retention, refreezing and drainage processes.
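    A minimal sketch of the factorial-ensemble idea behind FSM: each combination of process parameterisations defines one ensemble member. The option names below are illustrative placeholders, not FSM's actual configuration keys:

```python
# Hedged sketch of the factorial-ensemble idea behind FSM: every combination of
# process parameterisations defines one ensemble member. The option names are
# illustrative placeholders, not FSM's actual configuration namelist.
from itertools import product

options = {
    "albedo":    ["prognostic", "diagnostic"],
    "hydrology": ["retention + refreezing + drainage", "free draining"],
    "stability": ["stability-adjusted turbulent fluxes", "neutral exchange"],
}

members = [dict(zip(options, combo)) for combo in product(*options.values())]
print(f"{len(members)} ensemble members")
for member in members[:3]:               # peek at the first few configurations
    print(member)
```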