
    Solar Protons and Magnetic Storms in July 1961

    Injun I satellite observations of solar protons and magnetic storms

    Prediction and explanation in the multiverse

    Probabilities in the multiverse can be calculated by assuming that we are typical representatives in a given reference class. But is this class well defined? What should be included in the ensemble in which we are supposed to be typical? There is a widespread belief that this question is inherently vague, and that there are various possible choices for the types of reference objects which should be counted in. Here we argue that the "ideal" reference class (for the purpose of making predictions) can be defined unambiguously in a rather precise way, as the set of all observers with identical information content. When the observers in a given class perform an experiment, the class branches into subclasses whose members learn different information from the outcome of that experiment. The probabilities for the different outcomes are defined as the relative numbers of observers in each subclass. For practical purposes, wider reference classes can be used, where we trace over all information which is uncorrelated with the outcome of the experiment, or whose correlation with it is beyond our current understanding. We argue that, once we have gathered all practically available evidence, the optimal strategy for making predictions is to consider ourselves typical in any reference class we belong to, unless we have evidence to the contrary. In the latter case, the class must be correspondingly narrowed. Comment: Minor clarifications added.
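The counting rule in this abstract, that an outcome's probability is the relative number of observers in the subclass that records it, reduces to simple frequency counting. A minimal sketch of that rule (my own illustration; the observer list and names are hypothetical, not from the paper):

```python
from collections import Counter

def outcome_probabilities(observers):
    """Each entry in `observers` is the outcome recorded by one observer
    in the reference class. The class branches into subclasses by outcome;
    each outcome's probability is that subclass's relative size."""
    counts = Counter(observers)
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

# A toy reference class of 8 observers performing the same binary experiment:
probs = outcome_probabilities(["up"] * 6 + ["down"] * 2)
# probs == {"up": 0.75, "down": 0.25}
```

Tracing over uncorrelated information, in this toy picture, simply means merging observers whose remaining differences do not affect the outcome counts.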

    The spherical probe electric field and wave experiment

    The experiment is designed to measure electric field and density fluctuations with sampling rates up to 40,000 samples/sec. The description covers the Langmuir sweeps that can be made to determine electron density and temperature, the study of nonlinear processes that result in acceleration of plasma, and the analysis of large-scale phenomena for which all four spacecraft are needed.

    Parathyroid hormone 1-34 and skeletal anabolic action: The use of parathyroid hormone in bone formation

    Intermittently administered parathyroid hormone (PTH 1-34) has been shown to promote bone formation in both human and animal studies. The hormone and its analogues stimulate both bone formation and resorption, and at low doses are now in clinical use for the treatment of severe osteoporosis. By varying the duration of exposure, parathyroid hormone can modulate genes leading to increased bone formation within a so-called ‘anabolic window’. The osteogenic mechanisms involved are multiple, affecting the stimulation of osteoprogenitor cells, osteoblasts, osteocytes and the stem cell niche, and ultimately leading to increased osteoblast activation, reduced osteoblast apoptosis, upregulation of Wnt/β-catenin signalling, increased stem cell mobilisation, and mediation of the RANKL/OPG pathway. Ongoing investigation into its effects on bone formation through ‘coupled’ and ‘uncoupled’ mechanisms further underlines the impact of intermittent PTH on both cortical and cancellous bone. Given the principally catabolic actions of continuous PTH, this article reviews the skeletal actions of intermittent PTH 1-34 and the mechanisms underlying its effects.

    A data quality control program for computer-assisted personal interviews

    Researchers strive to optimize data quality in order to ensure that study findings are valid and reliable. In this paper, we describe a data quality control program designed to maximize the quality of survey data collected using computer-assisted personal interviews. The quality control program comprised three phases: (1) software development, (2) an interviewer quality control protocol, and (3) a data cleaning and processing protocol. To illustrate the value of the program, we assess its use in the Translating Research in Elder Care Study. We utilize data collected annually for two years from computer-assisted personal interviews with 3004 healthcare aides. Data quality was assessed using both survey and process data. Missing data and data errors were minimal. Mean and median values and standard deviations were within acceptable limits. Process data indicated that in only 3.4% and 4.0% of cases was the interviewer unable to conduct interviews in accordance with the details of the program. Interviewers’ perceptions of interview quality also significantly improved between Years 1 and 2. While this data quality control program was demanding in terms of time and resources, we found that the benefits clearly outweighed the effort required to achieve high-quality data.

    The quantum cryptographic switch

    We illustrate, using a quantum system, the principle of a cryptographic switch, in which a third party (Charlie) can control to a continuously varying degree the amount of information the receiver (Bob) receives after the sender (Alice) has sent her information. Suppose Charlie transmits a Bell state to Alice and Bob. Alice uses dense coding to transmit two bits to Bob. Only if the 2-bit information corresponding to the choice of Bell state is made available by Charlie to Bob can the latter recover Alice's information. By varying the information he gives, Charlie can continuously vary the information recovered by Bob. The performance of the protocol subjected to the squeezed generalized amplitude damping channel is considered. We also present a number of practical situations where a cryptographic switch would be of use. Comment: 7 pages, 4 figures.
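The dense-coding step in this protocol can be sketched numerically. This is an illustrative toy in NumPy (all names are mine, not the paper's implementation): Alice's four Pauli encodings map the shared Bell state to four mutually orthogonal Bell states, which is why Bob can decode two bits, but only once Charlie reveals which Bell state was prepared.

```python
import numpy as np

# Computational basis vectors for one qubit.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Charlie shares the Bell state |Phi+> = (|00> + |11>)/sqrt(2) with Alice and Bob.
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Pauli operators Alice applies to her qubit to encode two classical bits.
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
encodings = {"00": I, "01": X, "10": Z, "11": X @ Z}

def encode(bits, state=phi_plus):
    """Alice acts on her half of the pair only (first tensor factor),
    mapping the shared Bell state to one of the four Bell states."""
    U = np.kron(encodings[bits], I)
    return U @ state

# The four encoded states are mutually orthogonal, so a joint (Bell-basis)
# measurement distinguishes them perfectly -- provided Bob knows which
# Bell state Charlie originally prepared.
states = {bits: encode(bits) for bits in encodings}
```

Withholding Charlie's 2-bit disclosure (or releasing it only partially) is what lets him throttle how much of Alice's message Bob can recover.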

    Racial Disparities in Blood Pressure Control and Treatment Differences in a Medicaid Population, North Carolina, 2005-2006

    Introduction: Racial disparities in the prevalence and control of high blood pressure are well documented. We studied blood pressure control and interventions received during the course of a year in a sample of black and white Medicaid recipients with high blood pressure and examined patient, provider, and treatment characteristics as potential explanatory factors for racial disparities in blood pressure control. Methods: We retrospectively reviewed the charts of 2,078 black and 1,436 white North Carolina Medicaid recipients who had high blood pressure managed in primary care practices from July 2005 through June 2006. Documented provider responses to high blood pressure during office visits during the prior year were reviewed. Results: Blacks were less likely than whites to have blood pressure at goal (43.6% compared with 50.9%, P = .001). Blacks above goal were more likely than whites above goal to have been prescribed 4 or more antihypertensive drug classes (24.7% compared with 13.4%, P < .001); to have had medication adjusted during the prior year (46.7% compared with 40.4%, P = .02); and to have a documented provider response to high blood pressure during office visits (35.7% compared with 30.0% of visits, P = .02). Many blacks (28.0%) and whites (34.3%) with blood pressure above goal had fewer than 2 antihypertensive drug classes prescribed. Conclusion: In this population with Medicaid coverage and access to primary care, blacks were less likely than whites to have their blood pressure controlled. Blacks received more frequent intervention and had greater use of combination antihypertensive therapy. Care patterns observed in the usual management of high blood pressure were not sufficient to achieve treatment goals or eliminate disparities.

    Measuring Progress in Robotics: Benchmarking and the ‘Measure-Target Confusion’

    While it is often said that, in order to qualify as a true science, robotics should aspire to reproducible and measurable results that allow benchmarking, I argue that a focus on benchmarking will be a hindrance to progress. Several academic disciplines have been led into pursuing only reproducible and measurable ‘scientific’ results; robotics should be careful not to fall into that trap. Results that can be benchmarked must be specific and context-dependent, but robotics targets whole complex systems independently of a specific context, so working towards progress on the technical measure risks missing that target. It would constitute aiming for the measure rather than the target: what I call ‘measure-target confusion’. The role of benchmarking in robotics shows that the more general problem of measuring progress towards more intelligent machines will not be solved by technical benchmarks; we need a balanced approach combining technical benchmarks, real-life testing, and qualitative judgment.