247 research outputs found

    Deep geothermal energy potential in Scotland

    Get PDF
    Geothermal energy is simply the natural heat that exists within our planet. In some parts of the world the existence of a geothermal energy resource is made obvious by the presence of hot springs, and such resources have been exploited in various ways for millennia. More usually, there is no direct evidence at Earth's surface of the vast reservoir of stored heat below, and geothermal energy has remained largely ignored and untapped in most parts of the world. Now, its potential as a renewable source of energy is being recognised increasingly, and technologies and concepts for exploiting it are developing rapidly along two lines: low enthalpy (low temperature) resources, which exploit warm water in the shallow subsurface to provide heat either directly (as warm water) or indirectly (via heat exchange systems); and high enthalpy (high temperature) resources, which yield hot water, usually from deeper levels, that can be used to generate electricity. The potential for harnessing electricity from geothermal energy has long been recognised; the potentially substantial reserves, minimal environmental impact, and capacity to contribute continuously to base load electricity supply make it an extremely attractive prospect. The ongoing drive to develop renewable sources of energy, coupled with anticipated technological developments that will in future reduce the depth at which heat reservoirs are considered economically viable, means there is now a pressing need to know more about the deep geothermal energy potential in Scotland. This report contains the British Geological Survey (BGS) contribution to a collaborative project between AECOM and BGS to produce a qualitative assessment of deep geothermal energy potential in onshore Scotland for the Scottish Government.
BGS's role is to provide the Stage One deliverable "Identifying and assessing geothermal energy potential", comprising an assessment of areas in Scotland most likely to hold deep geothermal resources based on existing geological and geothermal data sets. The report is divided into two parts. Part 1 sets out the background to geothermal energy, describes the geological context, and presents an analysis of the size and accessibility of the heat resource in Scotland based on existing geothermal data. The potential for exploiting deep geothermal energy in three settings in onshore areas of Scotland (abandoned mine workings, Hot Sedimentary Aquifers, and Hot Dry Rocks) is examined in Part 2.

    Using the past to constrain the future: how the palaeorecord can improve estimates of global warming

    Full text link
    Climate sensitivity is defined as the change in global mean equilibrium temperature after a doubling of atmospheric CO2 concentration and provides a simple measure of global warming. An early estimate of climate sensitivity, 1.5–4.5 °C, has changed little subsequently, including in the latest assessment by the Intergovernmental Panel on Climate Change. The persistence of such large uncertainties in this simple measure casts doubt on our understanding of the mechanisms of climate change and our ability to predict the response of the climate system to future perturbations. This has motivated continued attempts to constrain the range with climate data, alone or in conjunction with models. The majority of studies use data from the instrumental period (post-1850), but recent work has made use of information about the large climate changes experienced in the geological past. In this review, we first outline approaches that estimate climate sensitivity using instrumental climate observations and then summarise attempts to use the record of climate change on geological timescales. We examine the limitations of these studies and suggest ways in which the power of the palaeoclimate record could be better used to reduce uncertainties in our predictions of climate sensitivity. Comment: The final, definitive version of this paper has been published in Progress in Physical Geography, 31(5), 2007 by SAGE Publications Ltd. All rights reserved. © 2007 Edwards, Crucifix and Harrison.
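The definition in the abstract is commonly made concrete through a zero-dimensional energy-balance relation (a standard textbook formulation, not taken from the abstract itself; the forcing value quoted is an approximate, widely used figure):

```latex
% At equilibrium, the feedback response balances the imposed radiative forcing:
\lambda\,\Delta T = \Delta F
% Climate sensitivity is the equilibrium warming for doubled CO2,
% with F_{2\times} \approx 3.7\ \mathrm{W\,m^{-2}} and feedback parameter \lambda:
S \equiv \Delta T_{2\times} = \frac{F_{2\times}}{\lambda}
```

Because S varies as 1/λ, a roughly symmetric observational uncertainty in the feedback parameter λ maps onto a skewed, long-tailed distribution for S, which is one reason the quoted range has proved so hard to narrow.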

    Low Complexity Regularization of Linear Inverse Problems

    Full text link
    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy, measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low-rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
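The forward-backward proximal splitting scheme mentioned at the end of the abstract can be sketched for the best-known instance, ℓ1 regularization (the lasso), where the proximal step is soft-thresholding. This is a minimal illustrative sketch with made-up problem sizes, not code from the chapter under review:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, n_iter=3000):
    # Forward-backward splitting (ISTA) for  min_x 0.5 ||Ax - b||^2 + lam ||x||_1.
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                    # forward (explicit gradient) step on the smooth term
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step on the nonsmooth term
    return x

# Recover a sparse signal from underdetermined, noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = forward_backward(A, b, lam=0.05)
```

With a small regularization weight and noiseless data, the iterates identify the true three-element support, illustrating the model (manifold) identification property discussed in the review.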

    Psychological stress and other potential triggers for recurrences of herpes simplex virus eye infections

    Get PDF
    Objective To assess psychological stress and other factors as possible triggers of ocular herpes simplex virus (HSV) recurrences. Design A prospective cohort study nested in a randomized, placebo-controlled, clinical trial. Setting Fifty-eight community-based or university sites. Participants Immunocompetent adults (N = 308), aged 18 years or older, with a documented history of ocular HSV disease in the prior year and observed for up to 15 months. Exposure Variables Psychological stress, systemic infection, sunlight exposure, menstrual period, contact lens wear, and eye injury recorded on a weekly log. The exposure period was considered to be the week before symptomatic onset of a recurrence. Main Outcome Measure The first documented recurrence of ocular HSV disease, with exclusion of cases in which the exposure week log was completed late after the onset of symptoms. Results Thirty-three participants experienced a study outcome meeting these criteria. Higher levels of psychological stress were not associated with an increased risk of recurrence (rate ratio, 0.58; 95% confidence interval, 0.32-1.05; P = .07). No association was found between any of the other exposure variables and recurrence. When an analysis was performed including only the recurrences (n = 26) for which the exposure week log was completed late and after symptom onset, there was a clear indication of retrospective overreporting of high stress (P = .03) and systemic infection (P = .01). Not excluding these cases could have produced incorrect conclusions due to recall bias. Conclusions Psychological stress does not appear to be a trigger of recurrences of ocular HSV disease. If not accounted for, recall bias can substantially overestimate the importance of factors that do not have a causal association with HSV infection

    The Convex Geometry of Linear Inverse Problems

    Get PDF
    In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered are those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors and low-rank matrices, as well as several others including sums of a few permutation matrices, low-rank tensors, orthogonal matrices, and atomic measures. The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery. Thus, this work extends the catalog of simple models that can be recovered from limited linear information via tractable convex programming.
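For the atomic set of signed canonical basis vectors, the atomic norm described above reduces to the ℓ1 norm, and the recovery program is basis pursuit: minimize ||x||_1 subject to Ax = b. A minimal sketch (illustrative problem sizes and data of my own choosing; assumes SciPy is available) casts it as a linear program via the standard split x = u − v with u, v ≥ 0:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    # min ||x||_1  s.t.  Ax = b, as an LP: write x = u - v with u, v >= 0,
    # then minimize sum(u) + sum(v) subject to A(u - v) = b.
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# A 2-sparse model in ambient dimension 30, observed through only 14 generic
# Gaussian measurements -- typically enough for exact recovery per the
# Gaussian-width estimates discussed in the abstract.
rng = np.random.default_rng(1)
A = rng.standard_normal((14, 30))
x_true = np.zeros(30)
x_true[[3, 20]] = [1.0, -2.0]
b = A @ x_true
x_hat = basis_pursuit(A, b)
```

The returned solution is feasible by construction and has ℓ1 norm no larger than that of the true model (which is itself feasible); when the measurement count exceeds the Gaussian-width threshold, the minimizer coincides with the true sparse vector.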

    Body appreciation around the world: Measurement invariance of the Body Appreciation Scale-2 (BAS-2) across 65 nations, 40 languages, gender identities, and age.

    Get PDF
    The Body Appreciation Scale-2 (BAS-2) is a widely used measure of a core facet of the positive body image construct. However, extant research concerning measurement invariance of the BAS-2 across a large number of nations remains limited. Here, we utilised the Body Image in Nature (BINS) dataset - with data collected between 2020 and 2022 - to assess measurement invariance of the BAS-2 across 65 nations, 40 languages, gender identities, and age groups. Multi-group confirmatory factor analysis indicated that full scalar invariance was upheld across all nations, languages, gender identities, and age groups, suggesting that the unidimensional BAS-2 model has widespread applicability. There were large differences across nations and languages in latent body appreciation, while differences across gender identities and age groups were negligible-to-small. Additionally, greater body appreciation was significantly associated with higher life satisfaction, being single (versus being married or in a committed relationship), and greater rurality (versus urbanicity). Across a subset of nations where nation-level data were available, greater body appreciation was also significantly associated with greater cultural distance from the United States and greater relative income inequality. These findings suggest that the BAS-2 likely captures a near-universal conceptualisation of the body appreciation construct, which should facilitate further cross-cultural research. [Abstract copyright: Copyright © 2023 The Authors. Published by Elsevier Ltd. All rights reserved.]

    Simultaneous energy and mass calibration of large-radius jets with the ATLAS detector using a deep neural network

    Get PDF
    The energy and mass measurements of jets are crucial tasks for the Large Hadron Collider experiments. This paper presents a new calibration method to simultaneously calibrate these quantities for large-radius jets measured with the ATLAS detector using a deep neural network (DNN). To address the specificities of the calibration problem, special loss functions and training procedures are employed, and a complex network architecture, which includes feature annotation and residual connection layers, is used. The DNN-based calibration is compared to the standard numerical approach in an extensive series of tests. The DNN approach is found to perform significantly better in almost all of the tests and over most of the relevant kinematic phase space. In particular, it consistently improves the energy and mass resolutions, with a 30% better energy resolution obtained for transverse momenta pT > 500 GeV.

    The ATLAS trigger system for LHC Run 3 and trigger performance in 2022

    Get PDF
    The ATLAS trigger system is a crucial component of the ATLAS experiment at the LHC. It is responsible for selecting events in line with the ATLAS physics programme. This paper presents an overview of the changes to the trigger and data acquisition system during the second long shutdown of the LHC, and shows the performance of the trigger system and its components in the proton-proton collisions during the 2022 commissioning period as well as its expected performance in proton-proton and heavy-ion collisions for the remainder of the third LHC data-taking period (2022–2025).

    Observation of quantum entanglement with top quarks at the ATLAS detector

    Get PDF
    Entanglement is a key feature of quantum mechanics with applications in fields such as metrology, cryptography, quantum information and quantum computation. It has been observed in a wide variety of systems and length scales, ranging from the microscopic to the macroscopic. However, entanglement remains largely unexplored at the highest accessible energy scales. Here we report the highest-energy observation of entanglement, in top–antitop quark events produced at the Large Hadron Collider, using a proton–proton collision dataset with a centre-of-mass energy of √s = 13 TeV and an integrated luminosity of 140 fb−1 (inverse femtobarns) recorded with the ATLAS experiment. Spin entanglement is detected from the measurement of a single observable D, inferred from the angle between the charged leptons in their parent top- and antitop-quark rest frames. The observable is measured in a narrow interval around the top–antitop quark production threshold, at which the entanglement detection is expected to be significant. It is reported in a fiducial phase space defined with stable particles to minimize the uncertainties that stem from the limitations of the Monte Carlo event generators and the parton shower model in modelling top-quark pair production. The entanglement marker is measured to be D = −0.537 ± 0.002 (stat.) ± 0.019 (syst.) for 340 GeV < mtt < 380 GeV. The observed result is more than five standard deviations from a scenario without entanglement and hence constitutes the first observation of entanglement in a pair of quarks and the highest-energy observation of entanglement so far.
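In the standard notation of top-quark spin-correlation analyses (a general formulation, not quoted from the abstract), the marker D relates to the spin correlation matrix C of the top–antitop pair and to the lepton opening angle as follows:

```latex
% D is one third of the trace of the t\bar{t} spin correlation matrix,
% accessible experimentally from the opening angle \varphi between the two
% charged leptons measured in their parent top/antitop rest frames:
D = \frac{\operatorname{tr}(C)}{3} = -3\,\langle \cos\varphi \rangle
% Separable (non-entangled) spin states satisfy D \ge -\tfrac{1}{3}, so
D < -\tfrac{1}{3}
% signals entanglement; the measured D = -0.537 lies well below this bound.
```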