Ground-Penetrating Radar Velocity Determination and Precision Estimates Using Common-Mid-Point (CMP) Collection with Hand-Picking, Semblance Analysis, and Cross-Correlation Analysis: a Case Study and Tutorial for Archaeologists
The most crucial parameter to determine in an archaeological ground-penetrating radar (GPR) survey is the velocity of the subsurface material. Precise velocity estimates form the basis for depth estimation, topographic correction, and migration, and can therefore make the difference between spurious interpretations and efficient GPR-guided excavation with sound archaeological interpretation of the GPR results. Here, we examine the options available for determining the GPR velocity and for assessing the precision of velocity estimates from GPR data, using data collected at a small-scale iron-working site in Rhode Island, United States. In the case study, the initial velocity analysis of common-offset GPR profile data, using the popular method of hyperbola fitting, produced some unexpectedly high subsurface signal velocity estimates, while analysis of common-midpoint (CMP) GPR data yielded a more reasonable estimate. Several reflection analysis procedures for CMP data, including hand picking and automated signal picking using cross-correlation and semblance analysis, are applied and discussed here in terms of processing efficiency and the results they yield. The case study demonstrates that CMP data may offer more accurate and precise velocity estimates than hyperbola fitting under certain field conditions, and that semblance analysis, though faster than hand picking or cross-correlation, offers less precision.
A 2:1 cocrystal of 6,13-dihydropentacene and pentacene
6,13-Dihydropentacene and pentacene cocrystallize in a ratio of 2:1, i.e. C<sub>22</sub>H<sub>16</sub>·0.5C<sub>22</sub>H<sub>14</sub>, during vapour transport of commercial pentacene in a gas flow. The crystal structure is monoclinic, space group P2<sub>1</sub>/n, and contains one dihydropentacene molecule and half a pentacene molecule in the asymmetric unit.
Growth and Helicity of Noncentrosymmetric Cu<sub>2</sub>OSeO<sub>3</sub> Crystals
Cu<sub>2</sub>OSeO<sub>3</sub> single crystals are grown with an optimized chemical vapor transport technique using SeCl<sub>4</sub> as a transport agent (TA). The optimized growth method allows selective production of large, high-quality single crystals. The method is shown to consistently produce Cu<sub>2</sub>OSeO<sub>3</sub> crystals of maximum size 8 × 7 × 4 mm with a transport duration of around three weeks. It is found that this method, with SeCl<sub>4</sub> as TA, is more efficient and simpler than the commonly used growth techniques reported in the literature with HCl gas as TA. The Cu<sub>2</sub>OSeO<sub>3</sub> crystals have very high quality and their absolute structures are fully determined by simple single-crystal X-ray diffraction. Enantiomeric crystals with either left- or right-handed chirality are observed. The magnetization and ferromagnetic resonance data show the same magnetic phase diagram as reported earlier.
Cumulative Head Impact Burden in High School Football
Impacts to the head are common in collision sports such as football. Emerging research has begun to elucidate concussion tolerance levels, but sub-concussive impacts that do not result in clinical signs or symptoms of concussion are much more common, and are speculated to lead to alterations in cerebral structure and function later in life. We investigated the cumulative number of head impacts and their associated acceleration burden in 95 high school football players across four seasons of play using the Head Impact Telemetry System (HITS). The 4-year investigation yielded 101,994 impacts collected across 190 practice sessions and 50 games. The number of impacts per 14-week season varied by playing position and starting status, with the average player sustaining 652 impacts. Linemen sustained the highest number of impacts per season (868); followed by tight ends, running backs, and linebackers (619); then quarterbacks (467); and receivers, cornerbacks, and safeties (372). Post-impact accelerations of the head also varied by playing position and starting status, with a seasonal linear acceleration burden of 16,746.1 g, while the rotational acceleration and HIT severity profile burdens were 1,090,697.7 rad/sec² and 10,021, respectively. The adolescent athletes in this study clearly sustained a large number of impacts to the head, with a substantial associated acceleration burden as a direct result of football participation. These findings raise concern about the relationship between sub-concussive head impacts incurred during football participation and late-life cerebral pathogenesis, and justify consideration of ways to best minimize impacts and mitigate cognitive declines.
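The "burden" metrics above are cumulative sums of per-impact peak accelerations over a season. A minimal sketch of that bookkeeping, using a hypothetical record layout (the actual HITS export format is not specified here):

```python
from collections import defaultdict

def season_totals(impacts):
    """Per-player impact counts and acceleration burdens for one season.

    impacts: iterable of (player_id, peak_linear_g, peak_rotational_rad_s2)
    tuples -- a hypothetical record layout, not the actual HITS export.
    The burden is the simple sum of peak accelerations across all impacts.
    """
    counts = defaultdict(int)
    linear_burden = defaultdict(float)
    rotational_burden = defaultdict(float)
    for player, lin_g, rot in impacts:
        counts[player] += 1
        linear_burden[player] += lin_g
        rotational_burden[player] += rot
    return counts, linear_burden, rotational_burden
```

Position- or starter-level averages then follow by grouping the per-player totals in the same way.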
Constraints on growth index parameters from current and future observations
We use current and future simulated data of the growth rate of large-scale structure in combination with data from supernovae, BAO, and CMB surface measurements in order to put constraints on the growth index parameters. We use a recently proposed parameterization of the growth index that interpolates between a constant value at high redshifts and a form that accounts for redshift dependencies at small redshifts. We also suggest here another, exponential parameterization with a similar behaviour. The redshift-dependent parameterizations reproduce the numerical growth function to sub-percent precision over the full redshift range. Using these redshift parameterizations or a constant growth index, we find that currently available data from galaxy redshift distortions and Lyman-alpha forests are unable to put significant constraints on any of the growth parameters. For example, both ΛCDM and flat DGP are allowed by current growth data. We use an MCMC analysis to study constraints from future growth data, and simulate pessimistic and moderate scenarios for the uncertainties. In both scenarios, the redshift parameterizations discussed are able to provide significant constraints and rule out models when incorrectly assumed in the analysis. The values taken by the constant part of the parameterizations, as well as the redshift slopes, are all found to significantly rule out an incorrect background. We also find that, for our pessimistic scenario, an assumed constant growth index over the full redshift range is unable to rule out incorrect models in all cases. This is because the slope acts as a second discriminator at smaller redshifts and therefore provides a significant test for identifying the underlying gravity theory.
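For context, the growth index γ enters through the growth rate f(z) = Ω_m(z)^γ(z). A minimal numerical sketch, assuming a flat ΛCDM background and a simple interpolating form γ(z) = γ0 + γ1·z/(1+z); the exact parameterizations constrained in the paper may differ:

```python
import numpy as np

def omega_m(z, om0=0.3):
    """Matter density parameter Omega_m(z) for a flat LCDM background."""
    a3 = (1.0 + z) ** 3
    return om0 * a3 / (om0 * a3 + 1.0 - om0)

def growth_rate(z, gamma0=0.55, gamma1=0.0, om0=0.3):
    """Growth rate f(z) = Omega_m(z)**gamma(z), with an interpolating
    growth index gamma(z) = gamma0 + gamma1 * z / (1 + z): gamma equals
    gamma0 at z = 0 and tends to the constant gamma0 + gamma1 at high
    redshift. (Illustrative form, not necessarily the paper's.)
    """
    gamma = gamma0 + gamma1 * z / (1.0 + z)
    return omega_m(z, om0) ** gamma
```

A constant γ ≈ 0.55 reproduces the ΛCDM growth rate well, while flat DGP corresponds to γ ≈ 0.68, which is why measured growth-index parameters can discriminate between the two gravity models.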
A minimal set of invariants as a systematic approach to higher order gravity models: Physical and Cosmological Constraints
We compare higher-order gravity models to observational constraints from magnitude-redshift supernova data, the distance to the last-scattering surface of the CMB, and Baryon Acoustic Oscillations. We follow a recently proposed systematic approach to higher-order gravity models based on minimal sets of curvature invariants, and select models that pass some physical acceptability conditions (free of ghost instabilities, real and positive propagation speeds, and free of separatrices). Models that satisfy these physical and observational constraints are found in this analysis and provide fits to the data that are very close to those of the ΛCDM concordance model. However, we find that the limitation of the models considered here comes from the presence of superluminal mode propagation for the constrained parameter space of the models.
All options, not silver bullets, needed to limit global warming to 1.5°C: a scenario appraisal
Climate science provides strong evidence of the necessity of limiting global warming to 1.5°C, in line with the Paris Climate Agreement. The IPCC 1.5°C special report (SR1.5) presents 414 emissions scenarios modelled for the report, of which around 50 are classified as '1.5°C scenarios', with no or low temperature overshoot. These emissions scenarios differ in their reliance on individual mitigation levers, including reduction of global energy demand, decarbonisation of energy production, development of land-management systems, and the pace and scale of deploying carbon dioxide removal (CDR) technologies. The reliance of 1.5°C scenarios on these levers needs to be critically assessed in light of the potentials of the relevant technologies and roll-out plans. We use a set of five parameters to bundle and characterise the mitigation levers employed in the SR1.5 1.5°C scenarios. For each of these levers, we draw on the literature to define 'medium' and 'high' upper bounds that delineate between their 'reasonable', 'challenging' and 'speculative' use by mid-century. We do not find any 1.5°C scenarios that stay within all medium upper bounds on the five mitigation levers. Scenarios most frequently 'over use' carbon dioxide removal with geological storage as a mitigation lever, whilst reductions of energy demand and of the carbon intensity of energy production are 'over used' less frequently. If we allow mitigation levers to be employed up to our high upper bounds, we are left with 22 of the SR1.5 1.5°C scenarios with no or low overshoot. The scenarios that fulfil these criteria are characterised by greater coverage of the available mitigation levers than those scenarios that exceed at least one of the high upper bounds. When excluding the two scenarios that exceed the SR1.5 carbon budget for limiting global warming to 1.5°C, this subset of 1.5°C scenarios shows a range of 15-22 Gt CO2 (16-22 Gt CO2 interquartile range) for emissions in 2030. For the year of reaching net-zero CO2 emissions, the range is 2039-2061 (2049-2057 interquartile range).
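The bounding exercise described above amounts to checking each scenario's lever values against two nested sets of upper bounds. A schematic sketch, where the lever names and bound values are purely illustrative (the paper's five parameters and literature-derived bounds are not reproduced here):

```python
def classify_scenario(levers, medium, high):
    """Classify a scenario's use of mitigation levers.

    levers, medium, high: dicts mapping lever name -> value / upper bound
    (names and numbers here are hypothetical, not the paper's).
    Returns 'reasonable' if every lever stays within its medium bound,
    'challenging' if every lever stays within its high bound,
    and 'speculative' otherwise.
    """
    if all(levers[k] <= medium[k] for k in medium):
        return "reasonable"
    if all(levers[k] <= high[k] for k in high):
        return "challenging"
    return "speculative"


# Illustrative bounds for two of the levers (hypothetical values)
MEDIUM = {"geological_cdr_gtco2": 2.0, "land_cdr_gtco2": 3.0}
HIGH = {"geological_cdr_gtco2": 5.0, "land_cdr_gtco2": 6.0}
```

Filtering the scenario set then reduces to keeping only those classified at or below the desired level, which is how the count of 22 remaining scenarios arises in the paper's high-bound case.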
Analytical techniques for multiplex analysis of protein biomarkers
Introduction: The importance of biomarkers for pharmaceutical drug development and clinical diagnostics is greater than ever in the current shift toward personalized medicine. Biomarkers have taken a central position, either as companion markers to support drug development and patient selection, or as indicators aiming to detect the earliest perturbations indicative of disease, minimizing therapeutic intervention or even enabling disease reversal. Protein biomarkers are of particular interest given their central role in biochemical pathways. Hence, capabilities to analyze multiple protein biomarkers in one assay are highly interesting for biomedical research. Areas covered: Here we review methods that are suitable for robust, high-throughput, standardized, and affordable analysis of protein biomarkers in a multiplex format. We describe innovative developments in immunoassays, the vanguard of methods in clinical laboratories, and in mass spectrometry, which is increasingly implemented for protein biomarker analysis. Moreover, emerging techniques are discussed with potentially improved protein capture, separation, and detection that will further boost multiplex analyses. Expert commentary: The development of clinically applied multiplex protein biomarker assays is essential, as multi-protein signatures provide more comprehensive information about biological systems than single biomarkers, leading to improved insights into mechanisms of disease, diagnostics, and the effect of personalized medicine.