A multi-centre cohort study evaluating the role of Inflammatory Markers In patients presenting with acute ureteric Colic (MIMIC)
BACKGROUND:
Spontaneous Stone Passage (SSP) rates in acute ureteric colic range from 47–75%. There is conflicting evidence on the role of raised inflammatory markers in acute ureteric colic. An easily applicable biomarker that could predict SSP or the need for intervention would improve the management of obstructing ureteric stones. There is therefore a need to determine, in an appropriately powered study of patients who are initially managed conservatively, which factors at the time of acute admission predict subsequent outcomes such as SSP and the need for intervention. In particular, establishing whether the white cell count (WBC) at presentation is associated with the likelihood of SSP or intervention may guide clinicians in the management of these patients' stones.
DESIGN:
Multi-centre cohort study disseminated via the UK British Urology Researchers in Surgical Training (BURST) and the Australian Young Urology Researchers Organisation (YURO).
PRIMARY RESEARCH QUESTION:
What is the association between WBC and SSP in patients discharged from the emergency department after initial conservative management?
PATIENT POPULATION:
Patients who have presented with acute renal colic with CT KUB evidence of a solitary ureteric stone. A minimum sample size of 720 patients across 15 centres will be needed.
HYPOTHESIS:
A raised WBC is associated with decreased odds of spontaneous stone passage.
PRIMARY OUTCOME:
The occurrence of SSP within six months of presentation with acute ureteric colic (YES/NO). SSP is defined as the absence of any need for intervention to assist stone passage.
STATISTICAL ANALYSIS PLAN:
A multivariable logistic regression model will be constructed in which the outcome of interest is SSP, using data from patients who do not undergo intervention at presentation. A random effect will be used to account for clustering of patients within hospitals/institutions. The model will adjust for gender and age as control variables.
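A minimal sketch of the planned analysis in Python with statsmodels, on simulated data: the cohort, variable names, and effect sizes below are all hypothetical, and for brevity the sketch fits a plain fixed-effects logistic model, noting where the plan's random hospital-level intercept would be added.

```python
# Sketch of the analysis plan: logistic regression of spontaneous stone
# passage (SSP) on WBC, adjusted for age and gender. All data below are
# simulated and all effect sizes hypothetical. The plan's random intercept
# for hospital would require a mixed-effects extension (e.g. statsmodels'
# BinomialBayesMixedGLM) rather than the plain logit fitted here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000  # illustrative; the protocol's minimum sample size is 720
df = pd.DataFrame({
    "wbc": rng.normal(9.0, 3.0, n),    # white cell count, 10^9/L (simulated)
    "age": rng.normal(45.0, 15.0, n),
    "male": rng.integers(0, 2, n),
})
# Simulate the hypothesised direction: raised WBC lowers the odds of SSP.
logit_p = 1.0 - 0.3 * (df["wbc"] - 9.0) - 0.01 * (df["age"] - 45.0)
df["ssp"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

model = smf.logit("ssp ~ wbc + age + male", data=df).fit(disp=0)
print(model.params["wbc"])  # negative under the simulated hypothesis
```

The coefficient on `wbc` is a log odds ratio; under the study hypothesis it would be negative, with exp(coefficient) giving the change in odds of SSP per unit WBC.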
Managing healthcare budgets in times of austerity: the role of program budgeting and marginal analysis
Given limited resources, priority setting or choice making will remain a reality at all levels of publicly funded healthcare across countries for many years to come. The pressures may well be even more acute as the impact of the economic crisis of 2008 continues to play out, but even as economies begin to turn around, resources within healthcare will remain limited, and thus some form of rationing will be required. Over the last few decades, research on healthcare priority setting has focused on methods of implementation as well as on the development of approaches related to fairness and legitimacy and on more technical aspects of decision making, including the use of multi-criteria decision analysis. Recently, research has led to a better understanding of how to evaluate priority setting activity, including defining 'success' and articulating key elements for high performance. This body of research, however, often goes untapped by those charged with making challenging decisions, and as such, in line with prevailing public sector incentives, decisions often rely on historical allocation patterns and/or political negotiation. These archaic and ineffective approaches not only lead to poor decisions in terms of value for money but also fail to reflect basic ethical conditions that can lead to fairness in the decision-making process. The purpose of this paper is to outline a comprehensive approach to priority setting and resource allocation that has been used in different contexts across countries. This will provide decision makers with a single point of access for a basic understanding of relevant tools when faced with having to make difficult decisions about what healthcare services to fund and what not to fund. The paper also addresses several key issues related to priority setting, including how health technology assessments can be used, how performance can be improved at a practical level, and what ongoing resource management practice should look like.
In terms of future research, one of the most important areas of priority setting that needs further attention is how best to engage members of the public.
What is a hospital bed day worth? A contingent valuation study of hospital Chief Executive Officers
BACKGROUND: Decreasing hospital length of stay, and so freeing up hospital beds, represents an important cost saving which is often used in economic evaluations. These savings need to be accurately quantified in order to make optimal health care resource allocation decisions. Traditionally the accounting cost of a bed is used. We argue instead that the economic cost of a bed day is the better value for making resource decisions, and we describe our valuation method and estimations for costing this important resource. METHODS: We performed a contingent valuation using 37 Australian Chief Executive Officers' (CEOs) willingness to pay (WTP) to release bed days in their hospitals, both generally and using specific cases. We provide a succinct thematic analysis of qualitative interviews conducted after survey completion, which provides insight into the decision-making process. RESULTS: On average, CEOs are willing to pay a marginal rate of 436 for an Intensive Care Unit (ICU) bed day, with estimates of uncertainty being greater for ICU beds. These estimates are significantly lower (four times for ward beds and seven times for ICU beds) than the traditional accounting costs often used. Key themes to emerge from the interviews include the importance of national funding and targets, and their associated incentive structures, as well as an aversion to discussing bed days as an economic resource. CONCLUSIONS: This study highlights the importance of valuing bed days as an economic resource to inform cost-effectiveness models and thus improve hospital decision making and resource allocation. Significantly under- or over-valuing the resource is very likely to result in sub-optimal decision making. We discuss the importance of recognising the opportunity costs of this resource and highlight areas for future research. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/s12913-017-2079-5) contains supplementary material, which is available to authorized users.
The Effect of Nicotine on Reproduction and Attachment of Human Gingival Fibroblasts In Vitro
Peer Reviewed. https://deepblue.lib.umich.edu/bitstream/2027.42/142127/1/jper0658.pd
Allozyme variability in populations of trout (Salmo trutta) from the rivers of Russia and Iran
Lepton Acceleration in Pulsar Wind Nebulae
Pulsar Wind Nebulae (PWNe) act as calorimeters for the relativistic pair
winds emanating from within the pulsar light cylinder. Their radiative
dissipation in various wavebands is significantly different from that of their
pulsar central engines: the broadband spectra of PWNe possess characteristics
distinct from those of pulsars, thereby demanding a site of lepton acceleration
remote from the pulsar magnetosphere. A principal candidate for this locale is
the pulsar wind termination shock, a putatively highly-oblique,
ultra-relativistic MHD discontinuity. This paper summarizes key characteristics
of relativistic shock acceleration germane to PWNe, using predominantly Monte
Carlo simulation techniques that compare well with semi-analytic solutions of
the diffusion-convection equation. The array of potential spectral indices for
the pair distribution function is explored, defining how these depend
critically on the parameters of the turbulent plasma in the shock environs.
Injection efficiencies into the acceleration process are also addressed.
Informative constraints on the frequency of particle scattering and the level
of field turbulence are identified using the multiwavelength observations of
selected PWNe. These suggest that the termination shock can be comfortably
invoked as a principal injector of energetic leptons into PWNe without
resorting to unrealistic properties for the shock layer turbulence or MHD
structure.
Comment: 19 pages, 5 figures; invited review to appear in Proc. of the
inaugural ICREA Workshop on "The High-Energy Emission from Pulsars and their
Systems" (2010), eds. N. Rea and D. Torres (Springer Astrophysics and Space
Science series).
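The power-law spectra that such shock acceleration produces can be illustrated with the classic test-particle argument (a back-of-envelope sketch, not the relativistic Monte Carlo treatment summarized in the paper): if each shock-crossing cycle multiplies a particle's energy by a factor beta, and a particle returns for another cycle with probability P, the integral spectrum is N(>E) ~ E^-s with s = -ln(P) / ln(beta). The numerical values below are illustrative, not fitted to any PWN.

```python
import math

# Toy first-order Fermi estimate of the accelerated-particle spectral index.
# beta and P are illustrative round numbers, not parameters of any real PWN.
beta = 1.25   # energy multiplication factor per shock-crossing cycle
P = 0.8       # probability of returning to the shock for another cycle

# Integral spectral index: N(>E) ~ E^-s
s = -math.log(P) / math.log(beta)
print(s)  # ~ 1.0 here, i.e. a differential spectrum dN/dE ~ E^-(s+1) = E^-2
```

Steeper or flatter indices follow from changing the return probability or the per-cycle gain, which is the sense in which the abstract's spectral indices "depend critically on the parameters of the turbulent plasma in the shock environs".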
Distribution and Phylogeny of EFL and EF-1α in Euglenozoa Suggest Ancestral Co-Occurrence Followed by Differential Loss
BACKGROUND: The eukaryotic elongation factor EF-1alpha (also known as EF1A) catalyzes aminoacyl-tRNA binding by the ribosome during translation. Homologs of this essential protein occur in all domains of life, and it was previously thought to be ubiquitous in eukaryotes. Recently, however, a number of eukaryotes were found to lack EF-1alpha and instead encode a related protein called EFL (for EF-Like). EFL-encoding organisms are scattered widely across the tree of eukaryotes, and all have close relatives that encode EF-1alpha. This intriguingly complex distribution has been attributed to multiple lateral transfers because EFL's near mutual exclusivity with EF-1alpha makes an extended period of co-occurrence seem unlikely. However, differential loss may play a role in EFL evolution, and this possibility has been less widely discussed. METHODOLOGY/PRINCIPAL FINDINGS: We have undertaken an EST- and PCR-based survey to determine the distribution of these two proteins in a previously under-sampled group, the Euglenozoa. EF-1alpha was found to be widespread and monophyletic, suggesting it is ancestral in this group. EFL was found in some species belonging to each of the three euglenozoan lineages: diplonemids, kinetoplastids, and euglenids. CONCLUSIONS/SIGNIFICANCE: Interestingly, the kinetoplastid EFL sequences are specifically related despite the fact that the lineages in which they are found are not sisters to one another, suggesting that EFL and EF-1alpha co-occurred in an early ancestor of kinetoplastids. This represents the strongest phylogenetic evidence to date that differential loss has contributed to the complex distribution of EFL and EF-1alpha.
Characterization of transcription within sdr region of Staphylococcus aureus
Staphylococcus aureus is an opportunistic pathogen responsible for various infections in humans and animals. It causes localized and systemic infections such as abscesses, impetigo, cellulitis, sepsis, endocarditis, bone infections, and meningitis. The S. aureus virulence factors responsible for initial contact with host cells (MSCRAMMs: microbial surface components recognizing adhesive matrix molecules) include three Sdr proteins. The presence of particular sdr genes is correlated with putative tissue specificity. The transcriptional organization of the sdr region remains unclear. We tested expression of the sdrC, sdrD, and sdrE genes under various in vitro conditions, as well as after contact with human blood. In this work, we present data suggesting a separation of the sdr region into three transcriptional units, based on their differential reactions to the environment. The distinct response of the sdrD transcript to environmental conditions and blood suggests dissimilar functions of the sdr genes. SdrE has previously been proposed to play a role in bone infections, whilst our results indicate that SdrD may play a role in interactions between the pathogen and the human immune system or serum, or may respond specifically to nutrients or other factors present in human blood.
The Hubble Constant
I review the current state of determinations of the Hubble constant, which
gives the length scale of the Universe by relating the expansion velocity of
objects to their distance. There are two broad categories of measurements. The
first uses individual astrophysical objects which have some property that
allows their intrinsic luminosity or size to be determined, or allows the
determination of their distance by geometric means. The second category
comprises the use of all-sky cosmic microwave background, or correlations
between large samples of galaxies, to determine information about the geometry
of the Universe and hence the Hubble constant, typically in combination with
other cosmological parameters. Many, but not all, object-based measurements
give values of around 72-74 km/s/Mpc, with typical errors of 2-3 km/s/Mpc.
This is in mild discrepancy with CMB-based measurements, in particular those
from the Planck satellite, which give values of 67-68 km/s/Mpc and typical
errors of 1-2 km/s/Mpc. The size of the remaining systematics indicates that
accuracy rather than precision is the remaining problem in a good determination
of the Hubble constant. Whether a discrepancy exists, and whether new physics
is needed to resolve it, depends on details of the systematics of the
object-based methods, and also on the assumptions about other cosmological
parameters and which datasets are combined in the case of the all-sky methods.
Comment: Extensively revised and updated since the 2007 version; accepted by
Living Reviews in Relativity as a major (2014) update of LRR 10, 4, 200
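The scale of the discrepancy quoted above can be put in familiar sigma terms with a naive tension estimate. The central values and errors below are simply midpoints of the ranges quoted in the abstract, chosen for illustration; they are not the review's own combined estimates.

```python
import math

# Midpoints of the quoted ranges (km/s/Mpc); illustrative values only.
h0_local, err_local = 73.0, 2.5   # object-based: 72-74, errors 2-3
h0_cmb, err_cmb = 67.5, 1.5      # CMB-based: 67-68, errors 1-2

# Naive tension: difference in units of the combined standard error,
# treating the two measurements as independent Gaussians.
tension = abs(h0_local - h0_cmb) / math.hypot(err_local, err_cmb)
print(round(tension, 1))  # -> 1.9 (sigma) with these illustrative numbers
```

A roughly 2-sigma separation is consistent with the abstract's characterization of a "mild discrepancy" whose significance hinges on the systematics and assumptions of each method.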