Effect of Adiabatic Phonons on Striped and Homogeneous Ground States
The effects of adiabatic phonons on a spin-fermion model for high T_c
cuprates are studied using numerical simulations. In the absence of
electron-phonon interactions (EPI), stripes in the ground state are observed
for certain dopings while homogeneous states are stabilized in other regions of
parameter space. Different modes of adiabatic phonons are added to the
Hamiltonian: breathing, shear, and half-breathing modes. Diagonal and
off-diagonal electron-phonon couplings are considered. It is observed that
strong diagonal EPI generate stripes in previously homogeneous states, while in
striped ground states an increase in the diagonal couplings tends to stabilize
the stripes, inducing a gap in the density of states (DOS) and rendering the
ground state insulating. The off-diagonal terms, on the other hand, destabilize the stripes, creating inhomogeneous ground states with a pseudogap at the chemical potential in the DOS. The breathing mode stabilizes static diagonal stripes, while the half-breathing (shear) modes stabilize dynamical (localized) vertical and horizontal stripes. The EPI also induce decoherence of the quasi-particle peaks in the spectral functions.
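For orientation, the two coupling classes mentioned above can be written schematically. The following is a generic sketch (a Holstein-like density coupling versus a hopping-modulation coupling), not the specific spin-fermion Hamiltonian used in the paper:

```latex
% Generic sketch only -- not the paper's spin-fermion Hamiltonian.
% Diagonal EPI: the adiabatic displacement u_i couples to the local density n_i.
% Off-diagonal EPI: the displacements modulate the hopping amplitude.
H_{\mathrm{diag}} = \lambda \sum_{i} u_i \, n_i , \qquad
H_{\mathrm{off}}  = \lambda' \sum_{\langle i j\rangle,\sigma}
   (u_i - u_j)\left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
```

Here $u_i$ denotes the classical (adiabatic) displacement on site $i$, $n_i$ the local electron density, and $\lambda$, $\lambda'$ the diagonal and off-diagonal couplings.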
Oil Price Shocks and U.S. Economic Activity: An International Perspective
Oil price shocks are thought to have played a prominent role in U.S. economic activity. In this paper, we employ Bayesian methods with a dynamic stochastic general equilibrium model of world economic activity to identify the various sources of oil price shocks and economic fluctuations and to assess their effects on U.S. economic activity. We find that changes in oil prices are best understood as endogenous. Oil price shocks in the 1970s and early 1980s and in the 2000s reflect differing mixes of shifts in oil supply and demand, and differing sources of oil price shocks have differing effects on economic activity. We also find that U.S. output fluctuations owe mostly to domestic shocks, with productivity shocks contributing to weakness in the 1970s and 1980s and strength in the 2000s.
Keywords: oil price, international business cycles, general equilibrium, Bayesian estimation
Deliverability and regional pricing in U.S. natural gas markets
During the 1980s and early '90s, interstate natural gas markets in the United States made a transition away from the regulation that characterized the previous three decades. With abundant supplies and plentiful pipeline capacity, a new order emerged in which freer markets and arbitrage closely linked natural gas price movements throughout the country. After the mid-1990s, however, U.S. natural gas markets tightened and some pipelines were pushed to capacity. We look for the pricing effects of limited arbitrage through causality testing between prices at nodes on the U.S. natural gas transportation system and interchange prices at regional nodes on North American electricity grids. Our tests do reveal limited arbitrage, which is indicative of bottlenecks in the U.S. natural gas pipeline system.
Keywords: Natural gas; Arbitrage; Pricing
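The abstract does not name the specific causality test; a common workhorse for this kind of price-linkage question is a pairwise Granger causality test on the two price series, sketched below. The file and column names (daily_prices.csv, gas_price, power_price) and the lag choice are purely illustrative assumptions.

```python
# Illustrative sketch only: a pairwise Granger causality test between a natural
# gas hub price series and a regional electricity interchange price. The file
# name, column names and lag choice are assumptions, not taken from the paper.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

prices = pd.read_csv("daily_prices.csv", parse_dates=["date"], index_col="date")

# Work with log-price differences (returns) to reduce non-stationarity.
returns = prices[["power_price", "gas_price"]].apply(np.log).diff().dropna()

# Null hypothesis: gas_price returns do NOT Granger-cause power_price returns.
results = grangercausalitytests(returns[["power_price", "gas_price"]], maxlag=5)
for lag, res in results.items():
    f_stat, p_value = res[0]["ssr_ftest"][:2]
    print(f"lag={lag}: F={f_stat:.2f}, p={p_value:.3f}")
```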
Analytic philosophy for biomedical research: the imperative of applying yesterday's timeless messages to today's impasses
The mantra that "the best way to predict the future is to invent it" (attributed to the computer scientist Alan Kay) exemplifies some of the present expectations from the technical and innovative sides of biomedical research. However, for technical advancements to make real impacts both on patient health and on genuine scientific understanding, a number of lingering challenges, spanning the entire spectrum from protein biology to randomized controlled trials, must begin to be overcome. The proposal in this chapter is that philosophy is essential in this process. By reviewing select examples from the history of science and philosophy, disciplines which were indistinguishable until the mid-nineteenth century, I argue that progress toward resolving the many impasses in biomedicine can be achieved by emphasizing theoretical work (in the true sense of the word 'theory') as a vital foundation for experimental biology. Furthermore, a philosophical biology program that could provide a framework for such theoretical investigations is outlined.
Systematic review and meta-analysis of the diagnostic accuracy of ultrasonography for deep vein thrombosis
Background
Ultrasound (US) has largely replaced contrast venography as the definitive diagnostic test for deep vein thrombosis (DVT). We aimed to derive a definitive estimate of the diagnostic accuracy of US for clinically suspected DVT and identify study-level factors that might predict accuracy.
Methods
We undertook a systematic review, meta-analysis and meta-regression of diagnostic cohort studies that compared US to contrast venography in patients with suspected DVT. We searched Medline, EMBASE, CINAHL, Web of Science, Cochrane Database of Systematic Reviews, Cochrane Controlled Trials Register, Database of Reviews of Effectiveness, the ACP Journal Club, and citation lists (1966 to April 2004). Random effects meta-analysis was used to derive pooled estimates of sensitivity and specificity. Random effects meta-regression was used to identify study-level covariates that predicted diagnostic performance.
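The abstract does not include the pooling code; as a rough sketch of what a random-effects pool of sensitivities looks like, the following applies DerSimonian-Laird weights on the logit scale to made-up per-study counts (the true-positive/false-negative numbers are hypothetical, not data from the review).

```python
# Illustrative sketch: DerSimonian-Laird random-effects pooling of sensitivity
# on the logit scale. The study counts below are made up, not from the review.
import numpy as np

# (true positives, false negatives) per study -- hypothetical numbers
studies = [(45, 3), (88, 7), (30, 4), (120, 6)]

logits, variances = [], []
for tp, fn in studies:
    tp, fn = tp + 0.5, fn + 0.5                # continuity correction
    p = tp / (tp + fn)
    logits.append(np.log(p / (1 - p)))
    variances.append(1 / tp + 1 / fn)          # approximate variance of the logit
logits, variances = np.array(logits), np.array(variances)

# Fixed-effect weights, Cochran's Q, and the DL between-study variance tau^2.
w = 1 / variances
mu_fixed = np.sum(w * logits) / np.sum(w)
Q = np.sum(w * (logits - mu_fixed) ** 2)
tau2 = max(0.0, (Q - (len(studies) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled logit sensitivity and 95% CI, back-transformed.
w_re = 1 / (variances + tau2)
mu = np.sum(w_re * logits) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
lo, hi = (1 / (1 + np.exp(-(mu + z * se))) for z in (-1.96, 1.96))
print(f"pooled sensitivity = {1 / (1 + np.exp(-mu)):.3f}, 95% CI {lo:.3f} to {hi:.3f}")
```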
Results
We identified 100 cohorts comparing US to venography in patients with suspected DVT. Overall sensitivity (95% confidence interval) was 94.2% (93.2 to 95.0) for proximal DVT and 63.5% (59.8 to 67.0) for distal DVT, and specificity was 93.8% (93.1 to 94.4). Duplex US had pooled sensitivity of 96.5% (95.1 to 97.6) for proximal DVT, 71.2% (64.6 to 77.2) for distal DVT and specificity of 94.0% (92.8 to 95.1). Triplex US had pooled sensitivity of 96.4% (94.4 to 97.1) for proximal DVT, 75.2% (67.7 to 81.6) for distal DVT and specificity of 94.3% (92.5 to 95.8). Compression US alone had pooled sensitivity of 93.8% (92.0 to 95.3) for proximal DVT, 56.8% (49.0 to 66.4) for distal DVT and specificity of 97.8% (97.0 to 98.4). Sensitivity was higher in more recently published studies and in cohorts with higher prevalence of DVT and more proximal DVT, and was lower in cohorts that reported interpretation by a radiologist. Specificity was higher in cohorts that excluded patients with previous DVT. No studies were identified that compared repeat US to venography in all patients. Repeat US appears to have a positive yield of 1.3%, with 89% of these being confirmed by venography.
Conclusion
Combined colour-Doppler US techniques have optimal sensitivity, while compression US has optimal specificity for DVT. However, all estimates are subject to substantial unexplained heterogeneity. The role of repeat scanning is very uncertain and is based upon limited data.
Metabolic determinants of cancer cell sensitivity to glucose limitation and biguanides
As the concentrations of highly consumed nutrients, particularly glucose, are generally lower in tumours than in normal tissues [1,2], cancer cells must adapt their metabolism to the tumour microenvironment. A better understanding of these adaptations might reveal cancer cell liabilities that can be exploited for therapeutic benefit. Here, we developed a continuous flow culture apparatus (Nutrostat) for maintaining proliferating cells in low nutrient media for long periods of time and used it to undertake competitive proliferation assays on a pooled collection of barcoded cancer cell lines cultured in low glucose conditions. Sensitivity to low glucose varies amongst cell lines, and an RNAi screen pinpointed mitochondrial oxidative phosphorylation (OXPHOS) as the major pathway required for optimal proliferation in low glucose. We found that cell lines most sensitive to low glucose are defective in the upregulation of OXPHOS normally caused by glucose limitation, as a result of either mtDNA mutations in Complex I genes or impaired glucose utilization. These defects predict sensitivity to biguanides, anti-diabetic drugs that inhibit OXPHOS [3,4], when cancer cells are grown in low glucose or as tumour xenografts. Remarkably, the biguanide sensitivity of cancer cells with mtDNA mutations was reversed by ectopic expression of yeast NDI1, a ubiquinone oxidoreductase that allows bypass of Complex I function [5]. Thus, we conclude that mtDNA mutations and impaired glucose utilization are potential biomarkers for identifying tumours with increased sensitivity to OXPHOS inhibitors.
Gamma radiation induces hydrogen absorption by copper in water
One of the most intricate issues of nuclear power is the long-term safety of repositories for radioactive waste. These repositories can have an impact on future generations for a period of time orders of magnitude longer than any known civilization. Several countries have considered copper as an outer corrosion barrier for canisters containing spent nuclear fuel. Among the many processes that must be considered in the safety assessments, radiation-induced processes constitute a key component. Here we show that copper metal immersed in water takes up considerable amounts of hydrogen when exposed to γ-radiation. Additionally, we show that the amount of hydrogen absorbed by copper depends on the total dose of radiation. At a dose of 69 kGy the uptake of hydrogen by metallic copper is 7 orders of magnitude higher than when the absorption is driven by H2(g) at a pressure of 1 atm in a non-irradiated dry system. Moreover, irradiation of copper in water causes corrosion of the metal and the formation of a variety of surface cavities, nanoparticle deposits, and islands of needle-shaped crystals. Hence, radiation-enhanced uptake of hydrogen by spent nuclear fuel encapsulating materials should be taken into account in the safety assessments of nuclear waste repositories.
Macroscopic and microscopic dynamics of a pedestrian cross-flow: Part I, experimental analysis
In this work we investigate the behaviour of a human crowd in a cross-flow by analysing the results of a set of controlled experiments in which subjects were divided into two groups, organised in such a way as to explore different density settings, and asked to walk through the crossing area. We study the results of the experiment by defining and investigating a few macroscopic and microscopic observables. Along with analysing traditional indicators such as density and velocity, whose dynamics was, to the best of our knowledge, poorly understood for this setting, we pay particular attention to walking and body orientation, studying how these microscopic observables are influenced by density. Furthermore, we report a preliminary but quantitative analysis of the emergence of self-organising patterns (stripes) in the crossing area, a phenomenon that had previously been reported qualitatively for human crowds, and reproduced in models, but whose quantitative analysis with respect to density conditions is, again to the best of our knowledge, a novel contribution.
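As a minimal illustration of how such observables can be computed from tracked trajectories (the abstract does not give the exact definitions, so the sampling scheme and the counting radius below are assumptions):

```python
# Illustrative sketch: instantaneous velocity and local density from pedestrian
# trajectories sampled on a regular time grid. All parameters are assumptions.
import numpy as np

def velocities(positions, dt):
    """positions: array (T, N, 2) of N pedestrians over T frames; returns (T-1, N, 2)."""
    return np.diff(positions, axis=0) / dt

def local_density(frame_positions, point, radius=1.0):
    """Pedestrians within `radius` metres of `point` in one frame, per unit area."""
    dist = np.linalg.norm(frame_positions - point, axis=1)
    return np.count_nonzero(dist < radius) / (np.pi * radius**2)
```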
Visual mismatch negativity to masked stimuli presented at very brief presentation rates
Mismatch Negativity (MMN) has been characterised as a ‘pre-attentive’ component of an event-related potential (ERP) that is related to discrimination and error prediction processes. The aim of the current experiment was to establish whether visual MMN could be recorded to briefly presented, backward- and forward-masked visual stimuli presented both below and above the threshold of subjective experience. Evidence of visual MMN elicitation in the absence of the ability to consciously report stimuli would provide strong evidence for the automaticity of the visual MMN mechanism. Using an oddball paradigm, two stimuli that differed in orientation from each other, a ‘+’ and an ‘x’, were presented on a computer screen. Electroencephalogram (EEG) was recorded from nine participants (six females), mean age 21.4 years. Results showed that for stimuli that were effectively masked at 7 ms presentation, there was little variation in the ERPs evoked to standard and deviant stimuli or in the subtraction waveform employed to delineate the visual MMN. At 14 ms stimulus presentation, when participants were able to report stimulus presence, an enhanced negativity at around 175 ms and 305 ms was observed for the deviant and was evident in the subtraction waveform. Although some of the difference observed in the ERPs can be attributed to stimulus characteristics, the use of a ‘lonely’ deviant protocol revealed attenuated visual MMN components at 14 ms stimulus presentation. Overall, results suggest that some degree of conscious attention is required before visual MMN components emerge, indicating that visual MMN is not an entirely pre-attentive process.
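For readers unfamiliar with the subtraction-waveform step mentioned above, the difference wave is simply the deviant-minus-standard average ERP; a minimal sketch follows, in which the epoch array shapes, sampling rate, and analysis window are assumptions rather than the study's actual parameters.

```python
# Illustrative sketch: deviant-minus-standard difference wave from epoched EEG.
# Array shapes, sampling rate and window are assumptions, not the study's values.
import numpy as np

fs = 500                                   # sampling rate in Hz (assumed)

def difference_wave(epochs_std, epochs_dev):
    """epochs_*: arrays of shape (n_trials, n_channels, n_samples)."""
    erp_std = epochs_std.mean(axis=0)      # average ERP to standards
    erp_dev = epochs_dev.mean(axis=0)      # average ERP to deviants
    return erp_dev - erp_std               # visual MMN appears as a negativity here

def mean_amplitude(diff_wave, t_start=0.150, t_end=0.250):
    """Mean amplitude per channel in a post-stimulus window (seconds)."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    return diff_wave[:, i0:i1].mean(axis=1)
```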
A pure number to assess “congestion” in pedestrian crowds
The development of technologies for reliable tracking of pedestrian trajectories in public spaces has recently enabled collecting large data sets and real-time information about the usage of urban space and indoor facilities by human crowds. Such information, nevertheless, may be properly used only with the aid of theoretical and computational tools to assess the state of the crowd. As shown in this work, traditional assessment metrics such as density and flow may provide only partial information, since it is also important to understand how “regular” these flows are, as spatially uniform flows are arguably less problematic than strongly fluctuating ones.
Recently, the Congestion Level (CL), based on the spatial variation of the rotor (curl) of the crowd velocity field, has been proposed as an assessment metric to evaluate the state of the crowd. Nevertheless, the definition lacked sound theoretical foundations and, more importantly, was very difficult to interpret (it was hard to understand “what” it was measuring). As we believe that these theoretical shortcomings were also limiting its relevance to applied studies, in this work we clarify some aspects of the definition, and we show that the metric may be improved by defining a dimensionless Congestion Number (CN).
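The abstract does not give the precise CN formula; purely as an illustration of the idea, a curl-based, dimensionless indicator on a gridded velocity field could be assembled along the following lines, with the normalisation by a reference length and the mean speed being an assumption rather than the paper's definition.

```python
# Illustrative sketch of a dimensionless, curl-based congestion indicator for a
# 2D crowd velocity field on a regular grid. The normalisation is an assumption,
# not the paper's exact CN definition.
import numpy as np

def congestion_number(vx, vy, dx, dy, ref_length):
    """vx, vy: 2D arrays of the velocity field; dx, dy: grid spacing in metres."""
    dvy_dx = np.gradient(vy, dx, axis=1)
    dvx_dy = np.gradient(vx, dy, axis=0)
    curl = dvy_dx - dvx_dy                        # rotor of the velocity field
    mean_speed = np.mean(np.hypot(vx, vy)) + 1e-9
    # Non-dimensionalise: typical curl magnitude times a length scale over speed.
    return np.mean(np.abs(curl)) * ref_length / mean_speed
```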
As a first application of the newly defined indicator we focus on the cross-flow scenario and, by using discrete and continuous toy models, idealised “limit scenarios”, more realistic simulations and finally data from experiments with human participants, we show that low values of CN correspond to a crowd with regular and safe motion (even in high density and high flow settings), while high values indicate the emergence of a congested and possibly dangerous condition. We finally use the indicator to analyse and discuss different settings such as bottlenecks, uni-, bi- and multi-directional flows, and real-world data concerning the movement of pedestrians in the world’s busiest railway station.
