
    Kinetically Trapped Liquid-State Conformers of a Sodiated Model Peptide Observed in the Gas Phase

    We investigate the peptide AcPheAla5LysH+, a model system for studying helix formation in the gas phase, in order to fully understand the forces that stabilize the helical structure. In particular, we address the question of whether the local fixation of the positive charge at the peptide's C-terminus is a prerequisite for forming helices by replacing the protonated C-terminal Lys residue with Ala and a sodium cation. The combination of gas-phase vibrational spectroscopy of cryogenically cooled ions with molecular simulations based on density-functional theory (DFT) allows for detailed structure elucidation. For sodiated AcPheAla6, we find globular rather than helical structures, as the mobile positive charge strongly interacts with the peptide backbone and disrupts secondary-structure formation. Interestingly, the global minimum structure from simulation is not present in the experiment. We attribute this to high barriers involved in rearranging the peptide-cation interaction, which ultimately result in kinetically trapped structures being observed in the experiment. Comment: 28 pages, 10 figures
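
    As a back-of-the-envelope illustration of why the missing global minimum points to kinetic trapping: if the conformer ensemble were equilibrated, populations computed from relative DFT energies via Boltzmann weights would be dominated by the lowest-energy structure. The energies and temperature in the sketch below are placeholders, not values from this study.

        import math

        K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

        def boltzmann_populations(rel_energies_ev, temperature_k):
            """Equilibrium conformer populations from relative energies in eV."""
            weights = [math.exp(-e / (K_B_EV * temperature_k)) for e in rel_energies_ev]
            total = sum(weights)
            return [w / total for w in weights]

        # Hypothetical relative energies (eV) for three conformers; 0.0 is the global minimum.
        # At equilibrium the global minimum dominates, so its absence in experiment points to
        # kinetic trapping rather than thermodynamics.
        print(boltzmann_populations([0.00, 0.05, 0.10], temperature_k=300))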

    sfctools - A toolbox for stock-flow consistent, agent-based models.

    One of the most challenging tasks in macroeconomic models is to describe the macro-level effects arising from the behavior of meso- or micro-level actors. Whereas in 1759, Adam Smith was still making use of the concept of an 'invisible hand' ensuring market stability and economic welfare (Rothschild, 1994), an increasingly popular approach is to make the 'invisible' visible and to accurately model each actor individually by defining its behavioral rules and myopic knowledge domain (Castelfranchi, 2014; Cincotti et al., 2022). In agent-based computational economics (ACE), economic actors correspond to dynamically interacting entities, implemented as agents in computer software (Axtell & Farmer, 2022; Klein et al., 2018; Tesfatsion, 2002). Such agent-based modeling (ABM) is a powerful approach utilized in economic simulations to generate complex dynamics, endogenous business cycles, and market disequilibria. For many research topics, it is useful to combine agent-based modeling with the stock-flow consistent (SFC) paradigm (Caiani et al., 2016; Caverzasi & Godin, 2015; Nikiforos & Zezza, 2018). This architecture ensures there are no 'black holes', i.e. inconsistent sources or sinks, in an economic model. SFC-ABM models, however, are often opaque and rely on highly idiosyncratic, custom-built data structures, thus hampering accessibility (Bandini et al., 2009; Hansen et al., 2019). Generating, maintaining, and distributing code for ABM, as well as checking the internal consistency and logic of such models, is a tedious task.
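
    As a plain-Python illustration of the stock-flow consistent idea (this is not the sfctools API; all class and method names below are invented for this sketch): every flow debits one agent and credits another, so a consistency check can assert that no money appears from or vanishes into a 'black hole'.

        from dataclasses import dataclass

        @dataclass
        class Agent:
            name: str
            balance: float = 0.0

        class FlowLedger:
            """Minimal transaction-flow ledger: every flow debits exactly one agent
            and credits exactly one other, so aggregate money is conserved
            ('no black holes')."""

            def __init__(self, agents):
                self.agents = {a.name: a for a in agents}

            def transfer(self, payer, payee, amount):
                self.agents[payer].balance -= amount
                self.agents[payee].balance += amount

            def check_consistency(self, initial_total):
                total = sum(a.balance for a in self.agents.values())
                assert abs(total - initial_total) < 1e-9, "stock-flow inconsistency"

        # Toy two-agent economy: a household buys goods, the firm pays wages back.
        household, firm = Agent("household", 100.0), Agent("firm", 0.0)
        ledger = FlowLedger([household, firm])
        ledger.transfer("household", "firm", 30.0)   # consumption spending
        ledger.transfer("firm", "household", 20.0)   # wage payment
        ledger.check_consistency(initial_total=100.0)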

    Implications of sampling design and sample size for national carbon accounting systems

    Background: Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests, this information is generally obtained by sample-based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources.
    Results: We compared the cost-efficiency of four sampling design alternatives (simple random sampling, regression estimators, stratified sampling, and 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives combine in-situ and earth-observation data. The percent standard error over total survey cost was calculated under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, and population variability. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency of the individual design alternatives.
    Conclusions, brief summary and potential implications: Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and the selection of remote sensing techniques. The development of MRV systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related to the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.
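
    The cost-efficiency comparison can be made concrete with a minimal sketch: for a fixed budget, simple random sampling spends everything on field plots, while a regression estimator spends part of the budget on imagery but reduces the variance of the estimate by a factor of (1 - rho^2), where rho is the correlation between the remote-sensing covariate and the field-measured attribute. All budgets, costs, standard deviations, and the correlation below are illustrative placeholders, not values from the paper.

        import math

        def percent_se_srs(budget, cost_per_plot, sd, mean):
            """Percent standard error of the mean under simple random sampling."""
            n = budget // cost_per_plot
            return 100 * (sd / math.sqrt(n)) / mean

        def percent_se_regression(budget, cost_per_plot, imagery_cost, sd, mean, rho):
            """Percent standard error with a regression estimator that exploits a
            remote-sensing covariate correlated (rho) with the field attribute."""
            n = (budget - imagery_cost) // cost_per_plot
            return 100 * (sd * math.sqrt(1 - rho**2) / math.sqrt(n)) / mean

        # Illustrative values only: budget and costs in currency units, carbon stock in t C/ha.
        budget, cost_per_plot = 500_000, 1_000
        mean_c, sd_c = 120.0, 60.0
        print(percent_se_srs(budget, cost_per_plot, sd_c, mean_c))
        print(percent_se_regression(budget, cost_per_plot, imagery_cost=100_000,
                                    sd=sd_c, mean=mean_c, rho=0.8))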

    Equatorial Pacific productivity changes near the Eocene-Oligocene boundary

    There is general agreement that productivity in high latitudes increased in the late Eocene and remained high in the early Oligocene. Evidence for both increased and decreased productivity across the Eocene-Oligocene transition (EOT) in the tropics has been presented, usually based on only one paleoproductivity proxy and often in sites with incomplete recovery of the EOT itself. A complete record of the Eocene-Oligocene transition was obtained at three drill sites in the eastern equatorial Pacific Ocean (ODP Site 1218 and IODP Sites U1333 and U1334). Four paleoproductivity proxies that have been examined at these sites, together with carbon and oxygen isotope measurements on early Oligocene planktonic foraminifera, give evidence of ecologic and oceanographic change across this climatically important boundary. Export productivity dropped sharply in the basal Oligocene (~33.7 Ma) and only recovered several hundred thousand years later; however, overall paleoproductivity in the early Oligocene never reached the average levels found in the late Eocene and in more modern times. Changes in the isotopic gradients between deep- and shallow-living planktonic foraminifera suggest a gradual shoaling of the thermocline through the early Oligocene that, on average, affected accumulation rates of barite, benthic foraminifera, and opal, as well as diatom abundance near 33.5 Ma. An interval with abundant large diatoms beginning at 33.3 Ma suggests an intermediate thermocline depth, which was followed by further shoaling, a dominance of smaller diatoms, and an increase in average primary productivity as estimated from accumulation rates of benthic foraminifera.
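
    The productivity proxies referred to here (barite, benthic foraminifera, opal) are typically reported as mass accumulation rates, i.e. component concentration multiplied by dry bulk density and linear sedimentation rate. The sketch below illustrates that standard conversion with placeholder values, not measurements from Sites 1218, U1333, or U1334.

        def mass_accumulation_rate(concentration, dry_bulk_density, sed_rate_cm_per_kyr):
            """Mass accumulation rate in g/cm^2/kyr from a component concentration
            (weight fraction), dry bulk density (g/cm^3), and linear sedimentation
            rate (cm/kyr)."""
            return concentration * dry_bulk_density * sed_rate_cm_per_kyr

        # Illustrative values only.
        barite_mar = mass_accumulation_rate(concentration=0.002,
                                            dry_bulk_density=0.8,
                                            sed_rate_cm_per_kyr=1.0)
        print(barite_mar)  # ~0.0016 g/cm^2/kyr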

    Channel Analysis for an OFDM-MISO Train Communications System Using Different Antennas


    Survival analysis for AdVerse events with VarYing follow-up times (SAVVY) -- comparison of adverse event risks in randomized controlled trials

    Analyses of adverse events (AEs) are an important aspect of the evaluation of experimental therapies. The SAVVY (Survival analysis for AdVerse events with Varying follow-up times) project aims to improve the analysis of AE data in clinical trials through the use of survival techniques that appropriately deal with varying follow-up times, censoring, and competing events (CE). In an empirical study including seventeen randomized clinical trials, the effect of varying follow-up times, censoring, and competing events on comparisons of two treatment arms with respect to AE risks is investigated. Comparisons of relative risks (RR) from standard probability-based estimators to the gold-standard Aalen-Johansen estimator, and of hazard-based estimators to an estimated hazard ratio (HR) from Cox regression, are carried out descriptively, with graphical displays, and using a random-effects meta-analysis at the AE level. The influence of different factors on the size of the bias is investigated in a meta-regression. We find that, both for avoiding bias and for categorizing evidence with respect to the treatment effect on AE risk, the choice of estimator is key and more important than features of the underlying data such as the percentage of censoring, CEs, the amount of follow-up, or the value of the gold-standard RR. There is an urgent need to improve the guidelines for reporting AEs so that incidence proportions are finally replaced by the Aalen-Johansen estimator, rather than by Kaplan-Meier, with an appropriate definition of CEs. For RRs based on hazards, the HR from Cox regression has better properties than the ratio of incidence densities.
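
    To make the contrast concrete, the following sketch estimates the cumulative incidence of an AE in the presence of a competing event via the Aalen-Johansen construction and compares it with the naive incidence proportion. The function and the synthetic data are illustrative only and are not part of the SAVVY project's code.

        def aalen_johansen_cif(times, statuses, event_of_interest=1):
            """Cumulative incidence of the AE of interest in the presence of a
            competing event. statuses: 0 = censored, 1 = AE of interest, 2 = CE."""
            data = sorted(zip(times, statuses))
            event_times = sorted({t for t, s in data if s != 0})
            surv, cif, curve = 1.0, 0.0, []
            for t in event_times:
                at_risk = sum(1 for ti, _ in data if ti >= t)
                d_interest = sum(1 for ti, si in data if ti == t and si == event_of_interest)
                d_any = sum(1 for ti, si in data if ti == t and si != 0)
                cif += surv * d_interest / at_risk   # P(event-free just before t) * AE hazard at t
                surv *= 1.0 - d_any / at_risk        # update overall event-free probability
                curve.append((t, cif))
            return curve

        # Synthetic data: times in days; 1 = AE, 2 = competing event (e.g. death), 0 = censored.
        times    = [3, 5, 5, 7, 8, 10, 12, 15, 18, 20]
        statuses = [1, 2, 1, 0, 1,  2,  1,  0,  1,  0]
        print(aalen_johansen_cif(times, statuses)[-1][1])     # Aalen-Johansen cumulative incidence
        print(sum(s == 1 for s in statuses) / len(statuses))  # naive incidence proportion for comparison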

    Self-Calibration Technique for 3-point Intrinsic Alignment Correlations in Weak Lensing Surveys

    The intrinsic alignment (IA) of galaxies has been shown to be a significant barrier to precision cosmic shear measurements. Zhang (2010, ApJ, 720, 1090) proposed a self-calibration technique for the power spectrum to calculate the induced gravitational shear-galaxy intrinsic ellipticity correlation (GI) in weak lensing surveys with photo-z measurements, which is expected to reduce the IA contamination by at least a factor of 10 for currently proposed surveys. We confirm this using an independent analysis and propose an extension of the self-calibration technique to the bispectrum in order to calculate the dominant IA gravitational shear-gravitational shear-intrinsic ellipticity correlation (GGI) contamination. We first establish an estimator to extract the galaxy density-density-intrinsic ellipticity (ggI) correlation from the galaxy ellipticity-density-density measurement for a photo-z galaxy sample. We then develop a relation between the GGI and ggI bispectra, which allows for the estimation and removal of the GGI correlation from the cosmic shear signal. We explore the performance of these two methods, compare them to other possible sources of error, and show that the GGI self-calibration technique can potentially reduce the IA contamination by up to a factor of 5-10 for all but a few bin choices, thus reducing the contamination to the percent level. The self-calibration is less accurate for adjacent bins, but still allows for a factor of three reduction in the IA contamination. The self-calibration thus promises to be an efficient technique to isolate both the 2-point and 3-point intrinsic alignment signals from weak lensing measurements. Comment: 22 pages, 5 figures, matches version published in MNRAS. Published online December 5, 201