
    Exact ground states for the four-electron problem in a two-dimensional finite Hubbard square system

    We present explicit analytical results describing the exact ground state of four electrons in a two-dimensional square Hubbard cluster of 16 sites with periodic boundary conditions. The procedure, which works for arbitrary even particle numbers and lattice sizes, is based on explicitly given symmetry-adapted basis vectors constructed in r-space. The Hamiltonian acting on these states generates a closed system of 85 linear equations, whose minimum eigenvalue provides the exact ground state of the system. The results, presented with the aim of stimulating further developments, not only show how the ground state can be obtained exactly and what kinds of contributions enter its construction, but also highlight further characteristics of the spectrum. Along these lines, i) possible explanations are found for why weak-coupling expansions often provide a good approximation to the Hubbard model at intermediate couplings, and ii) explicitly given low-lying eigenstates of the kinetic energy that avoid double occupancy suggest new routes to pairing mechanisms driven by a decrease in kinetic energy, as emphasized by theories of kinetic-energy-driven superconductivity. Comment: 37 pages, 18 figures
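For reference, the Hamiltonian in question is the standard single-band Hubbard model (notation assumed here, not quoted from the paper):

```latex
H = -t \sum_{\langle i,j \rangle, \sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

with hopping amplitude t on nearest-neighbour bonds of the 4x4 periodic square lattice and on-site repulsion U; the four-electron ground state is sought in the corresponding 16-site Hilbert space.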

    The nature of NV absorbers at high redshift

    We present a study of NV absorption systems at 1.5 < z < 2.5 in the optical spectra of 19 QSOs. Our analysis includes both absorbers arising from the intergalactic medium and systems in the vicinity of the background quasar. We construct detailed photoionization models to study the physical conditions and abundances in the absorbers and to constrain the spectral hardness of the ionizing radiation. The rate of incidence for intervening NV components is dN/dz = 3.38 +/- 0.43, corresponding to dN/dX = 1.10 +/- 0.14. The column density distribution function is fitted with a slope of beta = 1.89 +/- 0.22, consistent with measurements for CIV and OVI. The narrow line widths (b_NV ~ 6 km/s) imply photoionization rather than collisions as the dominant ionization process. The column densities of CIV and NV are correlated but show different slopes for intervening and associated absorbers, indicating different ionizing spectra. Associated systems are found to be more metal-rich, denser, and more compact than intervening absorbers. This conclusion is independent of the adopted ionizing radiation. For the intervening NV systems we find typical values of [C/H] ~ -0.6 and n_H ~ 10^-3.6 cm^-3, and sizes of a few kpc, while for associated NV absorbers we obtain [C/H] ~ +0.7, n_H ~ 10^-2.8 cm^-3, and sizes of several tens of pc. The abundance of nitrogen relative to carbon [N/C] and to alpha-elements like oxygen and silicon [N/alpha] is correlated with [N/H], indicating enrichment by secondary nitrogen. The larger scatter in [N/alpha] in intervening systems suggests an inhomogeneous enrichment of the IGM. There is an anti-correlation between [N/alpha] and [alpha/C], which could be used to constrain the initial mass function of the carbon- and nitrogen-producing stellar population. Comment: accepted by A&A, revised version
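The quoted slope refers to the usual power-law fit of the column density distribution function (standard definition assumed here, not restated in the abstract):

```latex
f(N_{\mathrm{NV}}) \, \mathrm{d}N = B \, N^{-\beta} \, \mathrm{d}N,
\qquad \beta = 1.89 \pm 0.22,
```

where f(N) dN is the number of absorbers per unit absorption path with column density between N and N + dN.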

    Early prediction of median survival among a large AIDS surveillance cohort

    <p>Abstract</p> <p>Background</p> <p>For individuals with AIDS, data exist relatively soon after diagnosis to allow estimation of "early" survival quantiles (<it>e.g.</it>, the 0.10, 0.15, 0.20 and 0.30 quantiles, etc.). Many years of additional observation must elapse before median survival, a summary measure of survival, can be estimated accurately. In this study, a new approach to predict AIDS median survival is presented and its accuracy tested using AIDS surveillance data.</p> <p>Methods</p> <p>The data consisted of 96,373 individuals who were reported to the HIV/AIDS Reporting System of the California Department of Health Services Office of AIDS as of December 31, 1996. We defined cohorts based on quarter year of diagnosis (<it>e.g.</it>, the "931" cohort consists of individuals diagnosed with AIDS in the first quarter of 1993). We used early quantiles (estimated using the Inverse Probability of Censoring Weighted estimator) of the survival distribution to estimate median survival by assuming a linear relationship between the earlier quantiles and median survival. From this model, median survival was predicted for cohorts for which a median could not be estimated empirically from the available data. This prediction was compared with the actual medians observed when using updated survival data reported at least five years later.</p> <p>Results</p> <p>Using the 0.15 quantile as the predictor and the data available as of December 31, 1996, we were able to predict the median survival of four cohorts (933, 934, 941, and 942) to be 34, 34, 31, and 29 months. Without this approach, there were insufficient data with which to make any estimate of median survival. 
    The actual median survival of these four cohorts (using data as of December 31, 2001) was found to be 32, 40, 46, and 80 months, suggesting that a minimum of about three years must elapse from diagnosis before this approach can yield an accurate prediction.</p> <p>Conclusion</p> <p>The results of this study suggest that early and accurate prediction of median survival time after AIDS diagnosis may be possible using early quantiles of the survival distribution. The methodology did not seem to work well during a period of significant change in survival, as observed with highly active antiretroviral treatment, but results suggest that it may work well in a time of more gradual improvement in survival.</p>
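The core of the prediction scheme described above is a linear regression of median survival on an early quantile across historical cohorts, which is then extrapolated to a cohort whose median is not yet observable. A minimal sketch (all numbers below are illustrative stand-ins, not the study's data, and the full method uses IPCW-estimated quantiles rather than raw values):

```python
# Sketch: regress median survival on an early quantile (e.g. the 0.15
# quantile) across historical cohorts, then predict the median for a
# cohort whose median cannot yet be estimated empirically.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Historical cohorts: (0.15 quantile in months, observed median in months).
q15 = [4, 5, 6, 7]
median = [14, 17, 20, 23]

a, b = fit_line(q15, median)

# New cohort: only the 0.15 quantile is estimable so far.
predicted_median = a + b * 8
print(round(predicted_median, 1))  # extrapolated median for q15 = 8
```

In the study itself, this extrapolation was accurate only for the earliest cohorts; the linear relationship broke down once HAART changed survival rapidly.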

    From Nonspecific DNA–Protein Encounter Complexes to the Prediction of DNA–Protein Interactions

    ©2009 Gao, Skolnick. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. doi:10.1371/journal.pcbi.1000341
    DNA–protein interactions are involved in many essential biological activities. Because there is no simple mapping code between DNA base pairs and protein amino acids, the prediction of DNA–protein interactions is a challenging problem. Here, we present a novel computational approach for predicting DNA-binding protein residues and DNA–protein interaction modes without prior knowledge of the specific DNA target sequence. Given the structure of a DNA-binding protein, the method first generates an ensemble of complex structures obtained by rigid-body docking with a nonspecific canonical B-DNA. Representative models are subsequently selected through clustering and ranking by their DNA–protein interfacial energy. Analysis of these encounter-complex models suggests that the recognition sites for specific DNA binding are usually favorable interaction sites for the nonspecific DNA probe, and that nonspecific DNA–protein interaction modes exhibit some similarity to specific DNA–protein binding modes. Although the method requires as input the knowledge that the protein binds DNA, in benchmark tests it achieves better performance in identifying DNA-binding sites than three previously established methods based on sophisticated machine-learning techniques. We further apply our method to protein structures predicted through modeling and demonstrate that it performs satisfactorily on protein models whose root-mean-square Cα deviation from the native structure is up to 5 Å. This study provides valuable structural insights into how a specific DNA-binding protein interacts with a nonspecific DNA sequence. The similarity between specific and nonspecific DNA–protein interaction modes may reflect an important sampling step in a DNA-binding protein's search for its specific DNA targets.
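The selection step described above (cluster the docked ensemble, then rank representatives by interfacial energy) can be sketched as a toy greedy clustering. Pose positions and energies here are illustrative stand-ins, not outputs of any real docking program:

```python
# Toy sketch: greedily cluster docked poses that lie within a distance
# cutoff of one another, seeding clusters from the lowest-energy poses,
# and return one representative per cluster ranked by ascending energy.

def cluster_and_rank(poses, cutoff):
    """poses: list of (pose_id, position, energy); position is a 3-tuple.
    Returns cluster representatives sorted by ascending energy."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    reps = []
    for pid, pos, e in sorted(poses, key=lambda t: t[2]):
        # Keep this pose only if it is far from every existing representative.
        if all(dist(pos, rpos) > cutoff for _, rpos, _ in reps):
            reps.append((pid, pos, e))
    return reps

poses = [
    ("A", (0.0, 0.0, 0.0), -12.0),
    ("B", (0.5, 0.0, 0.0), -11.0),  # near A -> absorbed into A's cluster
    ("C", (9.0, 0.0, 0.0), -10.5),
    ("D", (9.2, 0.1, 0.0), -8.0),   # near C -> absorbed into C's cluster
]
print([pid for pid, _, _ in cluster_and_rank(poses, cutoff=2.0)])  # ['A', 'C']
```

Real docking pipelines cluster on interface RMSD rather than a single reference point, but the select-then-rank logic is the same.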

    Complex circular subsidence structures in tephra deposited on large blocks of ice: Varða tuff cone, Öræfajökull, Iceland

    Several broadly circular structures up to 16 m in diameter, into which higher strata have sagged and locally collapsed, are present in a tephra outcrop on southwest Öræfajökull, southern Iceland. The tephra was sourced in a nearby basaltic tuff cone at Varða. The structures have not previously been described in tuff cones, and they probably formed by the melting out of large buried blocks of ice emplaced during a preceding jökulhlaup that may have been triggered by a subglacial eruption within the Öræfajökull ice cap. They are named ice-melt subsidence structures, and they are analogous to kettle holes that are commonly found in proglacial sandurs and in some lahars sourced on ice-clad volcanoes. The internal structure is better exposed in the Varða examples because of an absence of fluvial infilling and reworking, and because erosion of the outcrop reveals the deeper geometry. The ice-melt subsidence structures at Varða are a proxy for buried ice, and they are the only known evidence for the subglacial eruption and associated jökulhlaup that created the ice blocks. The recognition of such structures elsewhere will be useful in reconstructing more complete regional volcanic histories, as well as for identifying ice-proximal settings during palaeoenvironmental investigations.

    Quantitative assessment of pain-related thermal dysfunction through clinical digital infrared thermal imaging

    BACKGROUND: The skin temperature distribution of a healthy human body exhibits contralateral symmetry. Some nociceptive and most neuropathic pain pathologies are associated with an alteration of the thermal distribution of the human body. Since the dissipation of heat through the skin occurs for the most part in the form of infrared radiation, infrared thermography is the method of choice for studying the physiology of thermoregulation and the thermal dysfunction associated with pain. Assessing thermograms is a complex and subjective task that can be greatly facilitated by computerised techniques. METHODS: This paper presents techniques for the automated computerised assessment of thermal images of pain, in order to facilitate the physician's decision making. First, the thermal images are pre-processed to reduce the noise introduced during the initial acquisition and to remove the irrelevant background. Then, potential regions of interest are identified using fixed dermatomal subdivisions of the body, isothermal analysis and segmentation techniques. Finally, we assess the degree of asymmetry between contralateral regions of interest using statistical computations and distance measures between comparable regions. RESULTS: The wavelet-domain-based Poisson noise removal techniques compared favourably against Wiener and other wavelet-based denoising methods when qualitative criteria were used, and were shown to slightly improve the subsequent analysis. The automated background removal technique, based on thresholding and morphological operations, was successful for both noisy and denoised images, with a correct removal rate of 85% of the images in the database. The automation of the regions of interest (ROIs) delimitation process was achieved successfully for images with good contralateral symmetry. Isothermal division complemented the fixed dermatome-based ROI division well, giving a more accurate map of potentially abnormal regions. The measure of distance between histograms of comparable ROIs allowed us to increase the sensitivity and specificity of the classification of 24 images of pain patients, compared with common statistical comparisons. CONCLUSIONS: We developed a complete set of automated techniques for the computerised assessment of thermal images to assess pain-related thermal dysfunction.
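The final asymmetry step can be sketched as a histogram distance between contralateral ROIs. The distance used below (one minus the histogram intersection) and the simulated temperatures are assumptions for illustration; the paper's exact measure is not specified in this abstract:

```python
# Sketch: quantify thermal asymmetry between two contralateral regions of
# interest by comparing their normalised temperature histograms.
import numpy as np

def histogram_distance(left_roi, right_roi, bins=32, t_range=(28.0, 38.0)):
    """1 - histogram intersection of normalised temperature histograms.
    0 means identical distributions; values near 1 mean strong asymmetry."""
    h_l, _ = np.histogram(left_roi, bins=bins, range=t_range)
    h_r, _ = np.histogram(right_roi, bins=bins, range=t_range)
    h_l = h_l / h_l.sum()
    h_r = h_r / h_r.sum()
    return 1.0 - np.minimum(h_l, h_r).sum()

rng = np.random.default_rng(0)
healthy_l = rng.normal(33.0, 0.4, 1000)   # symmetric pair of ROIs
healthy_r = rng.normal(33.0, 0.4, 1000)
painful_r = rng.normal(34.5, 0.4, 1000)   # warmer side, e.g. inflammation

print(histogram_distance(healthy_l, healthy_r) <
      histogram_distance(healthy_l, painful_r))  # True
```

Thresholding such a distance gives a simple classifier of abnormal ROI pairs, which is the spirit of the sensitivity/specificity comparison reported above.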

    Radiative Transfer for Exoplanet Atmospheres

    Remote sensing of the atmospheres of distant worlds motivates a firm understanding of radiative transfer. In this review, we provide a pedagogical cookbook that describes the principal ingredients needed to perform a radiative transfer calculation and predict the spectrum of an exoplanet atmosphere, including solving the radiative transfer equation, calculating opacities (and chemistry), iterating for radiative equilibrium (or not), and adapting the output of the calculations to the astronomical observations. A review of the state of the art is performed, focusing on selected milestone papers. Outstanding issues, including the need to understand aerosols or clouds and elucidating the assumptions and caveats behind inversion methods, are discussed. A checklist is provided to assist referees/reviewers in their scrutiny of works involving radiative transfer. A table summarizing the methodology employed by past studies is provided. Comment: 7 pages, no figures, 1 table. Filled in missing information in references, main text unchanged
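The equation at the heart of such calculations, in its common plane-parallel form (standard notation assumed here, not quoted from the review):

```latex
\mu \, \frac{\partial I_{\nu}(\tau_{\nu}, \mu)}{\partial \tau_{\nu}}
  = I_{\nu}(\tau_{\nu}, \mu) - S_{\nu}(\tau_{\nu})
```

where I_ν is the specific intensity, μ the cosine of the zenith angle, τ_ν the optical depth at frequency ν, and S_ν the source function; in local thermodynamic equilibrium without scattering, S_ν reduces to the Planck function B_ν(T).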

    Critical research gaps and translational priorities for the successful prevention and treatment of breast cancer

    INTRODUCTION Breast cancer remains a significant scientific, clinical and societal challenge. This gap analysis has reviewed and critically assessed enduring issues and new challenges emerging from recent research, and proposes strategies for translating solutions into practice. METHODS More than 100 internationally recognised specialist breast cancer scientists, clinicians and healthcare professionals collaborated to address nine thematic areas: genetics, epigenetics and epidemiology; molecular pathology and cell biology; hormonal influences and endocrine therapy; imaging, detection and screening; current/novel therapies and biomarkers; drug resistance; metastasis, angiogenesis, circulating tumour cells, cancer 'stem' cells; risk and prevention; living with and managing breast cancer and its treatment. The groups developed summary papers through an iterative process which, following further appraisal from experts and patients, were melded into this summary account. RESULTS The 10 major gaps identified were: (1) understanding the functions and contextual interactions of genetic and epigenetic changes in normal breast development and during malignant transformation; (2) how to implement sustainable lifestyle changes (diet, exercise and weight) and chemopreventive strategies; (3) the need for tailored screening approaches including clinically actionable tests; (4) enhancing knowledge of molecular drivers behind breast cancer subtypes, progression and metastasis; (5) understanding the molecular mechanisms of tumour heterogeneity, dormancy, de novo or acquired resistance and how to target key nodes in these dynamic processes; (6) developing validated markers for chemosensitivity and radiosensitivity; (7) understanding the optimal duration, sequencing and rational combinations of treatment for improved personalised therapy; (8) validating multimodality imaging biomarkers for minimally invasive diagnosis and monitoring of responses in primary and metastatic disease; 
    (9) developing interventions and support to improve the survivorship experience; (10) a continuing need for clinical material for translational research derived from normal breast, blood, primary, relapsed, metastatic and drug-resistant cancers, with expert bioinformatics support to maximise its utility. The proposed infrastructural enablers include enhanced resources to support clinically relevant in vitro and in vivo tumour models; improved access to appropriate, fully annotated clinical samples; extended biomarker discovery, validation and standardisation; and facilitated cross-discipline working. CONCLUSIONS With resources to conduct further high-quality targeted research focusing on the gaps identified, increased knowledge translating into improved clinical care should be achievable within five years.

    Graphene: A sub-nanometer trans-electrode membrane

    Isolated, atomically thin conducting membranes of graphite, called graphene, have recently been the subject of intense research with the hope that practical applications in fields ranging from electronics to energy science will emerge. Here, we show that when immersed in ionic solution, a layer of graphene takes on new electrochemical properties that make it a trans-electrode. The trans-electrode's properties are the consequence of the atomic-scale proximity of its two opposing liquid-solid interfaces together with graphene's well-known in-plane conductivity. We show that several trans-electrode properties are revealed by ionic conductivity measurements on a CVD-grown graphene membrane that separates two aqueous ionic solutions. Despite this membrane being only one to two atomic layers thick, we find it is a remarkable ionic insulator with a very small, stable conductivity that depends on the ion species in solution. Electrical measurements on graphene membranes in which a single nanopore has been drilled show that the membrane's effective insulating thickness is less than one nanometer. This small effective thickness makes graphene an ideal substrate for very high-resolution, high-throughput nanopore-based single-molecule detectors. Sensors based on modulation of graphene's in-plane electronic conductivity in response to trans-electrode environments and voltage biases will provide new insights into atomic processes at the electrode surfaces. Comment: Submitted 12 April 2010 to Nature, where it is under review
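A commonly used model for the ionic conductance G of a nanopore (standard in the nanopore literature, though not stated in this abstract) combines the channel resistance of a cylindrical pore with the access resistance; for a pore of diameter d in a membrane of effective thickness L, in an electrolyte of conductivity σ:

```latex
G = \sigma \left[ \frac{4L}{\pi d^{2}} + \frac{1}{d} \right]^{-1}
```

In the limit L → 0 this reduces to the access-resistance bound G = σd, which is why measured pore conductances can constrain the membrane's effective insulating thickness to below one nanometer.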

    A randomised controlled trial to determine the effect on response of including a lottery incentive in health surveys [ISRCTN32203485]

    BACKGROUND: Postal questionnaires are an economical and simple method of data collection for research purposes but are subject to non-response bias. Several studies have explored the effect of monetary and non-monetary incentives on response. Recent meta-analyses conclude that financial incentives are an effective way of increasing response rates. However, large surveys rarely have the resources to reward individual participants. Three previous papers report on the effectiveness of lottery incentives, with contradictory results. This study aimed to determine the effect of including a lottery-style incentive on response rates to a postal health survey. METHODS: Randomised controlled trial. Setting: North and West Birmingham. Participants: 8,645 patients aged 18 or over, randomly selected from the registers of eight general practices (family physician practices). Intervention: inclusion of a flyer and letter with a health questionnaire informing patients that returned questionnaires would be entered into a lottery-style draw for £100 of gift vouchers. Control: health questionnaire accompanied only by a standard letter of explanation. Main outcome measures: response rate and completion rate of the questionnaire. RESULTS: 5,209 individuals responded, with identical rates in both groups (62.1%). Practice, patient age, sex and Townsend score (a postcode-based deprivation measure) were identified as predictive of response, with higher response related to older age, being female and living in an area with a lower Townsend score (less deprived). CONCLUSION: This RCT, using a large community-based sample, found that the offer of entry into a lottery-style draw for £100 of High Street vouchers had no effect on response rates to a postal health questionnaire.
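The primary comparison in a trial like this is a two-proportion test of response rates between arms. A minimal sketch (the per-arm counts below are illustrative, assuming an even split of the 8,645 patients and the reported ~62.1% response in each arm; they are not the paper's exact arm sizes):

```python
# Sketch: response rates in the incentive and control arms, with a
# two-proportion z-test of the null hypothesis of equal response rates.
from math import sqrt, erf

def two_proportion_z(r1, n1, r2, n2):
    """z statistic and two-sided p-value for H0: p1 == p2."""
    p1, p2 = r1 / n1, r2 / n2
    p = (r1 + r2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative arm counts (~62.1% response in each arm).
z, p = two_proportion_z(2684, 4322, 2685, 4323)
print(abs(z) < 0.1, p > 0.05)  # no detectable difference between arms
```

With identical observed rates the test is a formality, which is exactly the trial's conclusion: the lottery incentive did not move response.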