The PRISM4 (mid-Piacenzian) paleoenvironmental reconstruction
The mid-Piacenzian is known as a period of relative warmth when compared to the present day. A comprehensive understanding of conditions during the Piacenzian serves as both a conceptual model and a source of boundary conditions and means of verification for global climate model experiments. In this paper we present the PRISM4 reconstruction, a palaeoenvironmental reconstruction of the mid-Piacenzian (~3 Ma) containing data for palaeogeography, land and sea ice, sea-surface temperature, vegetation, soils and lakes. Our retrodicted palaeogeography takes into account glacial isostatic adjustments and changes in dynamic topography. Soils and lakes, both significant as land surface features, are introduced to the PRISM reconstruction for the first time. Sea-surface temperature and vegetation reconstructions are unchanged but now have confidence assessments. The PRISM4 reconstruction is being used as boundary condition data for the Pliocene Model Intercomparison Project, Phase 2 (PlioMIP2) experiments.
The dynamics of technology diffusion and the impacts of climate policy instruments in the decarbonisation of the global electricity sector
This paper presents an analysis of climate policy instruments for the decarbonisation of the global electricity sector in a non-equilibrium economic and technology diffusion perspective. Energy markets are driven by innovation, path-dependent technology choices and diffusion. However, conventional optimisation models lack detail on these aspects and have limited ability to address the effectiveness of policy interventions because they do not represent decision-making. As a result, known effects of technology lock-ins are liable to be underestimated. In contrast, our approach places investor decision-making at the core of the analysis and investigates how it drives the diffusion of low-carbon technology in a highly disaggregated, hybrid, global macroeconometric model, FTT:Power-E3MG. Ten scenarios to 2050 of the electricity sector in 21 regions exploring combinations of electricity policy instruments are analysed, including their climate impacts. We show that in a diffusion and path-dependent perspective, the impact of combinations of policies does not correspond to the sum of impacts of individual instruments: synergies exist between policy tools. We argue that the carbon price required to break the current fossil technology lock-in can be much lower when combined with other policies, and that a 90% decarbonisation of the electricity sector by 2050 is affordable without early scrapping. This work was supported by the Three Guineas Trust (A. M. Foley), Cambridge Econometrics (H. Pollitt and U. Chewpreecha), Conicyt (Comisión Nacional de Investigación Científica y Tecnológica, Gobierno de Chile) and the Ministerio de Energía, Gobierno de Chile (P. Salas), the EU Seventh Framework Programme grant agreement No 265170 "ERMITAGE" (N. Edwards and P. Holden) and the UK Engineering and Physical Sciences Research Council, fellowship number EP/K007254/1 (J.-F. Mercure). This is the final published version. It's also available from http://www.sciencedirect.com/science/article/pii/S0301421514004017#
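As a sketch of the kind of path-dependent diffusion dynamics the paper argues for, the snippet below evolves technology market shares through pairwise logistic substitution (a toy replicator-style model; the `fitness` scores, substitution rate `Aij`, time step and step count are illustrative assumptions, not the FTT:Power-E3MG equations):

```python
import numpy as np

def diffuse(shares, fitness, Aij=1.0, dt=0.25, steps=120):
    """Evolve market shares by pairwise logistic substitution."""
    S = np.array(shares, dtype=float)
    f = np.array(fitness, dtype=float)
    for _ in range(steps):
        # F[i, j]: probability that technology i is preferred over j,
        # from a logit comparison of illustrative "fitness" scores.
        F = 1.0 / (1.0 + np.exp(-(f[:, None] - f[None, :])))
        # Net flow into i is gated by both shares (S_i * S_j), which is
        # what makes diffusion path-dependent: small shares grow slowly.
        dS = Aij * dt * (S[:, None] * S[None, :] * (F - F.T)).sum(axis=1)
        S = np.clip(S + dS, 0.0, 1.0)
        S /= S.sum()
    return S
```

Because substitution flows are weighted by the incumbents' shares, a technology with a low initial share spreads slowly even when its fitness is higher, which is the lock-in effect the abstract describes.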
Day-ahead allocation of operation reserve in composite power systems with large-scale centralized wind farms
This paper focuses on the day-ahead allocation of operation reserve considering wind power prediction error and network transmission constraints in a composite power system. A two-level model that solves the allocation problem is presented. The upper model allocates operation reserve among subsystems from the economic point of view. In the upper model, transmission constraints of tielines are formulated to represent the limited reserve support available from the neighboring system due to wind power fluctuation. The lower model evaluates the resulting reserve schedule from the reliability point of view. In the lower model, the reliability evaluation of the composite power system is performed using Monte Carlo simulation in a multi-area system. Wind power prediction errors and tieline constraints are incorporated. The reserve requirements in the upper model are iteratively adjusted according to the reliability indices computed by the lower model; the reserve allocation is thus gradually optimized until the system achieves a balance between reliability and economy. A modified two-area reliability test system (RTS) is analyzed to demonstrate the validity of the method. This work was supported by the National Natural Science Foundation of China (No. 51277141) and the National High Technology Research and Development Program of China (863 Program) (No. 2011AA05A103).
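The iterative coupling between the two levels can be sketched as follows (the Gaussian forecast-error model, single reliability index and all parameter values are illustrative assumptions, not the paper's composite-system formulation):

```python
import random

def lower_level_lolp(reserve_mw, wind_sigma_mw, n_samples=5000, seed=1):
    # Lower level: Monte Carlo reliability evaluation. Load is shed when
    # the wind forecast error exceeds the scheduled reserve. A Gaussian
    # error and a single-area index are illustrative simplifications.
    rng = random.Random(seed)
    shortfalls = sum(1 for _ in range(n_samples)
                     if rng.gauss(0.0, wind_sigma_mw) < -reserve_mw)
    return shortfalls / n_samples

def allocate_reserve(wind_sigma_mw=100.0, target_lolp=0.01, step_mw=10.0):
    # Upper level: start from the cheapest (zero) reserve and raise it
    # until the lower-level reliability index meets the target, mirroring
    # the iterative adjustment between the two levels.
    reserve = 0.0
    while lower_level_lolp(reserve, wind_sigma_mw) > target_lolp:
        reserve += step_mw
    return reserve
```

Each upper-level adjustment triggers a fresh lower-level reliability evaluation, so economy and reliability are traded off iteratively rather than in a single pass.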
Hybrid Probabilistic Wind Power Forecasting Using Temporally Local Gaussian Process
The demand for sustainable development has resulted in rapid growth in wind power worldwide. Although various approaches have been proposed to improve accuracy and to overcome the uncertainties associated with traditional methods, the stochastic and variable nature of wind remains the most challenging issue in accurately forecasting wind power. This paper presents a hybrid deterministic-probabilistic method in which a temporally local moving-window technique is used in a Gaussian process (GP) to examine estimated forecasting errors. This temporally local GP uses fewer measurement data while giving faster and better predictions of wind power from two wind farms, one in the USA and the other in Ireland. Statistical analysis of the results shows that the method can substantially reduce the forecasting error while being more likely to generate Gaussian-distributed residuals, particularly for short-term forecast horizons, owing to its capability to handle the time-varying characteristics of wind power.
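A minimal sketch of the temporally local idea, assuming a plain zero-mean GP with an RBF kernel fitted only to the most recent `window` points (the kernel choice, hyperparameters and one-step-ahead setup are illustrative, not the paper's hybrid method):

```python
import numpy as np

def local_gp_forecast(series, window=24, length_scale=3.0, noise=1e-2):
    # Fit a zero-mean GP with an RBF kernel to the last `window` points
    # only (temporally local) and predict one step ahead.
    y = np.asarray(series, dtype=float)[-window:]
    t = np.arange(window, dtype=float)
    t_star = np.array([float(window)])          # one step past the window
    rbf = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2
                              / length_scale ** 2)
    K = rbf(t, t) + noise * np.eye(window)      # noisy training covariance
    k_star = rbf(t, t_star)                     # train/test covariance
    alpha = np.linalg.solve(K, y)
    mean = (k_star.T @ alpha).item()            # predictive mean
    var = (1.0 + noise - k_star.T @ np.linalg.solve(K, k_star)).item()
    return mean, var
```

Restricting the fit to a short window keeps the cubic-cost solve cheap and lets the model track the time-varying behaviour of the wind series, which is the motivation for the temporally local approach.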
A Relativistic Type Ibc Supernova Without a Detected Gamma-ray Burst
Long duration gamma-ray bursts (GRBs) mark the explosive death of some
massive stars and are a rare sub-class of Type Ibc supernovae (SNe Ibc). They
are distinguished by the production of an energetic and collimated relativistic
outflow powered by a central engine (an accreting black hole or neutron star).
Observationally, this outflow is manifested in the pulse of gamma-rays and a
long-lived radio afterglow. To date, central engine-driven SNe have been
discovered exclusively through their gamma-ray emission, yet it is expected
that a larger population goes undetected due to limited satellite sensitivity
or beaming of the collimated emission away from our line-of-sight. In this
framework, the recovery of undetected GRBs may be possible through radio
searches for SNe Ibc with relativistic outflows. Here we report the discovery
of luminous radio emission from the seemingly ordinary Type Ibc SN 2009bb,
which requires a substantial relativistic outflow powered by a central engine.
The lack of a coincident GRB makes SN 2009bb the first engine-driven SN
discovered without a detected gamma-ray signal. A comparison with our extensive
radio survey of SNe Ibc reveals that the fraction harboring central engines is
low, ~1 percent, measured independently from, but consistent with, the inferred
rate of nearby GRBs. Our study demonstrates that upcoming optical and radio
surveys will soon rival gamma-ray satellites in pinpointing the nearest
engine-driven SNe. A similar result for a different supernova is reported
independently. Comment: To appear in Nature on Jan 28 2010. Embargoed for discussion in the press until 13:00 US Eastern Time on Jan 27 (Accepted version, 27 pages, Manuscript and Suppl. Info.)
Spatial heterogeneity and peptide availability determine CTL killing efficiency in vivo
The rate at which a cytotoxic T lymphocyte (CTL) can survey for infected cells is a key ingredient of models of vertebrate immune responses to intracellular pathogens. Estimates have been obtained using in vivo cytotoxicity assays in which peptide-pulsed splenocytes are killed by CTL in the spleens of immunised mice. However the spleen is a heterogeneous environment and splenocytes comprise multiple cell types. Are some cell types intrinsically more susceptible to lysis than others? Quantitatively, what impacts are made by the spatial distribution of targets and effectors, and the level of peptide-MHC on the target cell surface? To address these questions we revisited the splenocyte killing assay, using CTL specific for an epitope of influenza virus. We found that at the cell population level T cell targets were killed more rapidly than B cells. Using modelling, quantitative imaging and in vitro killing assays we conclude that this difference in vivo likely reflects different migratory patterns of targets within the spleen and a heterogeneous distribution of CTL, with no detectable difference in the intrinsic susceptibilities of the two populations to lysis. Modelling of the stages involved in the detection and killing of peptide-pulsed targets in vitro revealed that peptide dose influenced the ability of CTL to form conjugates with targets but had no detectable effect on the probability that conjugation resulted in lysis, and that T cell targets took longer to lyse than B cells. We also infer that incomplete killing in vivo of cells pulsed with low doses of peptide may be due to a combination of heterogeneity in peptide uptake and the dissociation, but not internalisation, of peptide-MHC complexes. Our analyses demonstrate how population-averaged parameters in models of immune responses can be dissected to account for both spatial and cellular heterogeneity.
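The two-stage picture inferred from the in vitro data, in which peptide dose affects conjugate formation but not the per-conjugate lysis probability, can be sketched with a minimal mass-action model (the function name, the saturating dose term and all parameter values are illustrative assumptions, not the paper's fitted model):

```python
import math

def surviving_fraction(t_hours, ctl_density, peptide_dose,
                       k_conj_max=1.0, half_sat=0.5, p_lysis=0.9):
    # Stage 1: conjugate formation. Peptide dose sets the conjugation
    # rate through a saturating (Hill-type) term.
    k_conj = k_conj_max * peptide_dose / (half_sat + peptide_dose)
    # Stage 2: lysis. The per-conjugate lysis probability is taken to be
    # dose-independent, as the in vitro analysis suggests.
    k_kill = ctl_density * k_conj * p_lysis
    # Mass-action killing gives exponential decay of the target population.
    return math.exp(-k_kill * t_hours)
```

In this toy form, dose changes only the rate of conjugate formation, so survival curves for different doses differ in slope but share the same per-conjugate outcome, matching the qualitative conclusion above.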
An Anti-Glitch in a Magnetar
Magnetars are neutron stars showing dramatic X-ray and soft gamma-ray
outbursting behaviour that is thought to be powered by intense internal
magnetic fields. Like conventional young neutron stars in the form of radio
pulsars, magnetars exhibit "glitches" during which angular momentum is believed
to be transferred between the solid outer crust and the superfluid component of
the inner crust. Hitherto, the several hundred observed glitches in radio
pulsars and magnetars have involved a sudden spin-up of the star, due
presumably to the interior superfluid rotating faster than the crust. Here we
report on X-ray timing observations of the magnetar 1E 2259+586 which we show
exhibited a clear "anti-glitch" -- a sudden spin down. We show that this event,
like some previous magnetar spin-up glitches, was accompanied by multiple X-ray
radiative changes and a significant spin-down rate change. This event, if of
origin internal to the star, is unpredicted in models of neutron star spin-down
and is suggestive of differential rotation in the neutron star, further
supporting the need for a rethinking of glitch theory for all neutron stars
The continuum limit of the static-light meson spectrum
We investigate the continuum limit of the low lying static-light meson
spectrum using Wilson twisted mass lattice QCD with N_f = 2 dynamical quark
flavours. We consider three values of the lattice spacing a ~ 0.051 fm, 0.064
fm, 0.080 fm and various values of the pion mass in the range 280 MeV < m_PS <
640 MeV. We present results in the continuum limit for light cloud angular
momentum j = 1/2, 3/2, 5/2 and for parity P = +, -. We extrapolate our results
to physical quark masses, make predictions regarding the spectrum of B and B_s
mesons and compare with available experimental results. Comment: 18 pages, 3 figures
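A continuum-limit extrapolation of the kind described, assuming only the leading O(a^2) lattice artefacts of an O(a)-improved action survive, can be sketched as a least-squares fit (a generic unweighted sketch, not the paper's analysis, which would also propagate statistical errors):

```python
import numpy as np

def continuum_limit(a_fm, m_mev):
    # Leading-order continuum extrapolation m(a) = m0 + c * a^2, as is
    # appropriate when O(a) artefacts are absent; returns m0 = m(a -> 0).
    a = np.asarray(a_fm, dtype=float)
    m = np.asarray(m_mev, dtype=float)
    design = np.vstack([np.ones_like(a), a ** 2]).T   # columns: 1, a^2
    (m0, c), *_ = np.linalg.lstsq(design, m, rcond=None)
    return m0
```

With three lattice spacings, as here, the two-parameter fit leaves one degree of freedom to check that the a^2 ansatz actually describes the data.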
A demonstration of 'broken' visual space
It has long been assumed that there is a distorted mapping between real and 'perceived' space, based on demonstrations of systematic errors in judgements of slant, curvature, direction and separation. Here, we have applied a direct test to the notion of a coherent visual space. In an immersive virtual environment, participants judged the relative distance of two squares displayed in separate intervals. On some trials, the virtual scene expanded by a factor of four between intervals although, in line with recent results, participants did not report any noticeable change in the scene. We found that there was no consistent depth ordering of objects that can explain the distance matches participants made in this environment (e.g. A > B > D yet also A < C < D) and hence no single one-to-one mapping between participants' perceived space and any real 3D environment. Instead, factors that affect pairwise comparisons of distances dictate participants' performance. These data contradict, more directly than previous experiments, the idea that the visual system builds and uses a coherent 3D internal representation of a scene.
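The logic of the test, that a coherent visual space requires the set of pairwise "nearer than" judgements to admit a single consistent ordering, can be sketched as cycle detection on a directed graph (a hypothetical helper, not the authors' analysis code):

```python
from collections import defaultdict

def has_consistent_ordering(comparisons):
    # comparisons: (a, b) pairs meaning "a was judged nearer than b".
    # A coherent visual space requires these relations to be acyclic;
    # a cycle such as A < B, B < C, C < A rules out any one-to-one
    # mapping onto a real 3D scene.
    graph = defaultdict(list)
    nodes = set()
    for a, b in comparisons:
        graph[a].append(b)
        nodes.update((a, b))
    WHITE, GREY, BLACK = 0, 1, 2
    state = dict.fromkeys(nodes, WHITE)

    def acyclic_from(n):
        state[n] = GREY
        for m in graph[n]:
            if state[m] == GREY:            # back edge: found a cycle
                return False
            if state[m] == WHITE and not acyclic_from(m):
                return False
        state[n] = BLACK
        return True

    return all(state[n] != WHITE or acyclic_from(n) for n in nodes)
```

If this returns False, no assignment of real distances can reproduce all the pairwise judgements, which is the form of inconsistency the experiment reports.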
Observational and Physical Classification of Supernovae
This chapter describes the current classification scheme of supernovae (SNe).
This scheme has evolved over many decades and now includes numerous SN Types
and sub-types. Many of these are universally recognized, while there are
controversies regarding the definitions, membership and even the names of some
sub-classes; we will try to review here the commonly-used nomenclature, noting
the main variants when possible. SN Types are defined according to
observational properties; mostly visible-light spectra near maximum light, as
well as according to their photometric properties. However, a long-term goal of
SN classification is to associate observationally-defined classes with specific
physical explosive phenomena. We show here that this aspiration is now finally
coming to fruition, and we establish the SN classification scheme upon direct
observational evidence connecting SN groups with specific progenitor stars.
Observationally, the broad class of Type II SNe contains objects showing strong
spectroscopic signatures of hydrogen, while objects lacking such signatures are
of Type I, which is further divided to numerous subclasses. Recently a class of
super-luminous SNe (SLSNe, typically 10 times more luminous than standard
events) has been identified, and it is discussed. We end this chapter by
briefly describing a proposed alternative classification scheme that is
inspired by the stellar classification system. This system presents our
emerging physical understanding of SN explosions, while clearly separating
robust observational properties from physical inferences that can be debated.
This new system is quantitative, and naturally deals with events distributed
along a continuum, rather than being strictly divided into discrete classes.
Thus, it may be more suitable to the coming era where SN numbers will quickly
expand from a few thousands to millions of events. Comment: Extended final draft of a chapter in the "SN Handbook". Comments most welcome