
    Deep geothermal energy potential in Scotland

    Geothermal energy is simply the natural heat that exists within our planet. In some parts of the world the existence of a geothermal energy resource is made obvious by the presence of hot springs, and such resources have been exploited in various ways for millennia. More usually, there is no direct evidence at Earth's surface of the vast reservoir of stored heat below, and geothermal energy has remained largely ignored and untapped in most parts of the world. Now, its potential as a renewable source of energy is being recognised increasingly, and technologies and concepts for exploiting it are developing rapidly along two lines: low enthalpy (low temperature) resources, which exploit warm water in the shallow subsurface to provide heat either directly (as warm water) or indirectly (via heat exchange systems); and high enthalpy (high temperature) resources, which yield hot water, usually from deeper levels, that can be used to generate electricity. The potential for harnessing electricity from geothermal energy has long been recognised; the potentially substantial reserves, minimal environmental impact, and capacity to contribute continuously to base load electricity supply make it an extremely attractive prospect. The ongoing drive to develop renewable sources of energy, coupled with anticipated technological developments that will in future reduce the depth at which heat reservoirs are considered economically viable, means there is now a pressing need to know more about the deep geothermal energy potential in Scotland. This report contains the British Geological Survey (BGS) contribution to a collaborative project between AECOM and BGS to produce a qualitative assessment of deep geothermal energy potential in onshore Scotland for the Scottish Government. BGS's role is to provide the Stage One deliverable, "Identifying and assessing geothermal energy potential", comprising an assessment of the areas in Scotland most likely to hold deep geothermal resources based on existing geological and geothermal data sets. The report is divided into two parts. Part 1 sets out the background to geothermal energy, describes the geological context, and presents an analysis of the size and accessibility of the heat resource in Scotland based on existing geothermal data. The potential for exploiting deep geothermal energy in three settings in onshore areas of Scotland (abandoned mine workings, Hot Sedimentary Aquifers, and Hot Dry Rocks) is examined in Part 2.

    Using the past to constrain the future: how the palaeorecord can improve estimates of global warming

    Climate sensitivity is defined as the change in global mean equilibrium temperature after a doubling of atmospheric CO2 concentration and provides a simple measure of global warming. An early estimate of climate sensitivity, 1.5–4.5°C, has changed little subsequently, including in the latest assessment by the Intergovernmental Panel on Climate Change. The persistence of such large uncertainties in this simple measure casts doubt on our understanding of the mechanisms of climate change and on our ability to predict the response of the climate system to future perturbations. This has motivated continued attempts to constrain the range with climate data, alone or in conjunction with models. The majority of studies use data from the instrumental period (post-1850), but recent work has made use of information about the large climate changes experienced in the geological past. In this review, we first outline approaches that estimate climate sensitivity using instrumental climate observations and then summarise attempts to use the record of climate change on geological timescales. We examine the limitations of these studies and suggest ways in which the power of the palaeoclimate record could be better used to reduce uncertainties in our predictions of climate sensitivity. [The final, definitive version of this paper has been published in Progress in Physical Geography, 31(5), 2007 by SAGE Publications Ltd. © 2007 Edwards, Crucifix and Harrison.]
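The definition in the abstract can be made concrete with a short numerical sketch. This assumes the standard logarithmic forcing approximation F = 5.35 ln(C/C0) W m⁻² and a pre-industrial concentration of 280 ppm, neither of which is stated in the abstract itself; warming is scaled linearly with forcing relative to a doubling.

```python
import math

# Radiative forcing from a CO2 concentration change (assumed standard
# logarithmic approximation): F = 5.35 * ln(C / C0), in W/m^2.
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(c_ppm, sensitivity_per_doubling):
    """Scale equilibrium warming linearly with forcing relative to 2xCO2."""
    f_2x = co2_forcing(2 * 280.0)  # forcing for doubled CO2, about 3.7 W/m^2
    return sensitivity_per_doubling * co2_forcing(c_ppm) / f_2x

# The range quoted in the abstract: 1.5-4.5 C per doubling of CO2.
for s in (1.5, 3.0, 4.5):
    print(f"S = {s} C -> warming at 560 ppm: {equilibrium_warming(560, s):.2f} C")
```

By construction, at 560 ppm (exactly double the assumed 280 ppm baseline) the computed warming equals the sensitivity itself; intermediate concentrations scale with the logarithm of the ratio.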

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward an understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.

    Psychological stress and other potential triggers for recurrences of herpes simplex virus eye infections

    Objective To assess psychological stress and other factors as possible triggers of ocular herpes simplex virus (HSV) recurrences. Design A prospective cohort study nested in a randomized, placebo-controlled, clinical trial. Setting Fifty-eight community-based or university sites. Participants Immunocompetent adults (N = 308), aged 18 years or older, with a documented history of ocular HSV disease in the prior year and observed for up to 15 months. Exposure Variables Psychological stress, systemic infection, sunlight exposure, menstrual period, contact lens wear, and eye injury recorded on a weekly log. The exposure period was considered to be the week before symptomatic onset of a recurrence. Main Outcome Measure The first documented recurrence of ocular HSV disease, with exclusion of cases in which the exposure week log was completed late after the onset of symptoms. Results Thirty-three participants experienced a study outcome meeting these criteria. Higher levels of psychological stress were not associated with an increased risk of recurrence (rate ratio, 0.58; 95% confidence interval, 0.32-1.05; P = .07). No association was found between any of the other exposure variables and recurrence. When an analysis was performed including only the recurrences (n = 26) for which the exposure week log was completed late and after symptom onset, there was a clear indication of retrospective overreporting of high stress (P = .03) and systemic infection (P = .01). Not excluding these cases could have produced incorrect conclusions due to recall bias. Conclusions Psychological stress does not appear to be a trigger of recurrences of ocular HSV disease. If not accounted for, recall bias can substantially overestimate the importance of factors that do not have a causal association with HSV infection.
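The rate ratio and confidence interval quoted in the results are standard Poisson-rate statistics. The sketch below implements the usual log-scale Wald interval; the event counts and person-time values are hypothetical and are not the study's data.

```python
import math

def rate_ratio_ci(events_exp, pt_exp, events_unexp, pt_unexp, z=1.96):
    """Rate ratio (exposed vs unexposed) with a Wald 95% CI on the log scale."""
    rr = (events_exp / pt_exp) / (events_unexp / pt_unexp)
    se = math.sqrt(1.0 / events_exp + 1.0 / events_unexp)  # SE of log(RR) for Poisson counts
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical counts (NOT the study's data): 10 recurrences in 100 person-weeks
# of high stress versus 20 recurrences in 100 person-weeks without.
rr, lo, hi = rate_ratio_ci(10, 100.0, 20, 100.0)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

A confidence interval that crosses 1, as in the study's 0.32-1.05, is consistent with no association at the 5% level.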

    The Convex Geometry of Linear Inverse Problems

    In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered are those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors and low-rank matrices, as well as several others including sums of a few permutation matrices, low-rank tensors, orthogonal matrices, and atomic measures. The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery. Thus this work extends the catalog of simple models that can be recovered from limited linear information via tractable convex programming.
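Two of the atomic norms named above can be evaluated directly: for the atomic set of signed standard basis vectors the induced norm is the ℓ1 norm, and for unit-norm rank-one matrices it is the nuclear norm (sum of singular values). A small sketch, with values chosen only for illustration:

```python
import numpy as np

def nuclear_norm(M):
    """Atomic norm induced by unit-norm rank-one matrix atoms."""
    return np.linalg.svd(M, compute_uv=False).sum()

# Sparse-vector case: atoms are +/- standard basis vectors, so the
# atomic norm is the l1 norm.
x = np.array([3.0, 0.0, -4.0])
print(np.abs(x).sum())  # l1 norm: 7.0

# Low-rank case: a rank-one matrix, whose nuclear norm is its single
# singular value ||u|| * ||v||.
M = np.outer([1.0, 2.0], [3.0, 0.0])
print(nuclear_norm(M))
```

In both cases the atomic norm ball is the convex hull of the atoms (the cross-polytope and the nuclear-norm ball, respectively), which is what the paper's Gaussian-width analysis operates on.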

    Body appreciation around the world: Measurement invariance of the Body Appreciation Scale-2 (BAS-2) across 65 nations, 40 languages, gender identities, and age.

    The Body Appreciation Scale-2 (BAS-2) is a widely used measure of a core facet of the positive body image construct. However, extant research concerning measurement invariance of the BAS-2 across a large number of nations remains limited. Here, we utilised the Body Image in Nature (BINS) dataset - with data collected between 2020 and 2022 - to assess measurement invariance of the BAS-2 across 65 nations, 40 languages, gender identities, and age groups. Multi-group confirmatory factor analysis indicated that full scalar invariance was upheld across all nations, languages, gender identities, and age groups, suggesting that the unidimensional BAS-2 model has widespread applicability. There were large differences across nations and languages in latent body appreciation, while differences across gender identities and age groups were negligible-to-small. Additionally, greater body appreciation was significantly associated with higher life satisfaction, being single (versus being married or in a committed relationship), and greater rurality (versus urbanicity). Across a subset of nations where nation-level data were available, greater body appreciation was also significantly associated with greater cultural distance from the United States and greater relative income inequality. These findings suggest that the BAS-2 likely captures a near-universal conceptualisation of the body appreciation construct, which should facilitate further cross-cultural research. [Abstract copyright: Copyright © 2023 The Authors. Published by Elsevier Ltd. All rights reserved.]

    Search for the Zγ decay mode of new high-mass resonances in pp collisions at √s = 13 TeV with the ATLAS detector

    This letter presents a search for narrow, high-mass resonances in the Zγ final state with the Z boson decaying into a pair of electrons or muons. The √s = 13 TeV pp collision data were recorded by the ATLAS detector at the CERN Large Hadron Collider and have an integrated luminosity of 140 fb−1. The data are found to be in agreement with the Standard Model background expectation. Upper limits are set on the resonance production cross section times the decay branching ratio into Zγ. For spin-0 resonances produced via gluon–gluon fusion, the observed limits at 95% confidence level vary between 65.5 fb and 0.6 fb, while for spin-2 resonances produced via gluon–gluon fusion (or quark–antiquark initial states) limits vary between 77.4 (76.1) fb and 0.6 (0.5) fb, for the mass range from 220 GeV to 3400 GeV.
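A cross-section limit translates into an expected event yield through N = σ × L × ε. A minimal sketch using the integrated luminosity quoted above; the efficiency factor is an assumed placeholder, since the abstract does not quote one.

```python
# Expected event yield from a cross section (fb), an integrated
# luminosity (fb^-1), and a selection efficiency (assumed, illustrative).
def expected_events(sigma_fb, lumi_fb_inv, efficiency=1.0):
    return sigma_fb * lumi_fb_inv * efficiency

# With the 140 fb^-1 dataset, the strongest limit of 0.6 fb corresponds
# to roughly 84 produced events before any efficiency losses.
print(expected_events(0.6, 140.0))
```

This is why limits tighten at high mass: the same dataset corresponds to fewer and fewer expected background events, so even a handful of signal events would be visible.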

    Inclusive-photon production and its dependence on photon isolation in pp collisions at √s = 13 TeV using 139 fb−1 of ATLAS data

    Measurements of differential cross sections are presented for inclusive isolated-photon production in pp collisions at a centre-of-mass energy of 13 TeV provided by the LHC and using 139 fb−1 of data recorded by the ATLAS experiment. The cross sections are measured as functions of the photon transverse energy in different regions of photon pseudorapidity. The photons are required to be isolated by means of a fixed-cone method with two different cone radii. The dependence of the inclusive-photon production on the photon isolation is investigated by measuring the fiducial cross sections as functions of the isolation-cone radius and the ratios of the differential cross sections with different radii in different regions of photon pseudorapidity. The results presented in this paper constitute an improvement with respect to those published by ATLAS earlier: the measurements are provided for different isolation radii and with a more granular segmentation in photon pseudorapidity that can be exploited in improving the determination of the proton parton distribution functions. These improvements provide a more in-depth test of the theoretical predictions. Next-to-leading-order QCD predictions from JETPHOX and SHERPA and next-to-next-to-leading-order QCD predictions from NNLOJET are compared to the measurements, using several parameterisations of the proton parton distribution functions. The measured cross sections are well described by the fixed-order QCD predictions within the experimental and theoretical uncertainties in most of the investigated phase-space region.
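The fixed-cone isolation used above sums transverse energy inside a cone of radius ΔR = sqrt(Δη² + Δφ²) around the photon. A schematic sketch; the particle list, coordinates, and ET values are invented purely for illustration.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance defining the fixed isolation cone."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
    return math.hypot(eta1 - eta2, dphi)

def isolation_et(photon, particles, cone_radius):
    """Sum the transverse energy of particles inside a cone around the photon."""
    return sum(et for eta, phi, et in particles
               if delta_r(photon[0], photon[1], eta, phi) < cone_radius)

photon = (0.5, 0.0)  # (eta, phi), hypothetical photon direction
particles = [(0.6, 0.1, 3.0),   # DeltaR ~ 0.14: inside both cones
             (0.8, 0.0, 5.0),   # DeltaR = 0.30: inside the larger cone only
             (2.0, 1.0, 10.0)]  # far away: outside both
print(isolation_et(photon, particles, 0.2), isolation_et(photon, particles, 0.4))
```

A larger cone accumulates more surrounding activity, which is exactly the radius dependence the measurement probes through the cross-section ratios.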

    Search for heavy Higgs bosons with flavour-violating couplings in multi-lepton plus b-jets final states in pp collisions at 13 TeV with the ATLAS detector

    A search for new heavy scalars with flavour-violating decays in final states with multiple leptons and b-tagged jets is presented. The results are interpreted in terms of a general two-Higgs-doublet model involving an additional scalar with couplings to the top quark and the three up-type quarks (ρtt, ρtc, and ρtu). The targeted signals lead to final states with either a same-sign top-quark pair, three top quarks, or four top quarks. The search is based on a data sample of proton-proton collisions at √s = 13 TeV recorded with the ATLAS detector during Run 2 of the Large Hadron Collider, corresponding to an integrated luminosity of 139 fb−1. Events are categorised depending on the multiplicity of light charged leptons (electrons or muons), total lepton charge, and a deep-neural-network output to enhance the purity of each of the signals. Masses of an additional scalar boson mH between 200 and 630 GeV with couplings ρtt = 0.4, ρtc = 0.2, and ρtu = 0.2 are excluded at 95% confidence level. Additional interpretations are provided in models of R-parity-violating supersymmetry, motivated by the recent flavour and (g − 2)μ anomalies.