
    Identification of 13 DB + dM and 2 DC + dM binaries from the Sloan Digital Sky Survey

    We present the identification of 13 DB + dM binaries and 2 DC + dM binaries from the Sloan Digital Sky Survey (SDSS). Before the SDSS, only 2 DB + dM binaries and 1 DC + dM binary were known. At least 3, and possibly 8, of the new DB + dM binaries appear to have white dwarf temperatures well above 30,000 K, which would place them in the so-called DB-gap. Finding these DB white dwarfs in binaries may suggest that they formed through an evolutionary channel different from the one in which DA white dwarfs transform into DB white dwarfs through convection in their upper layers.
    Comment: 4 pages, 2 figures, accepted for publication in A&A Letter

    Definition of the International Normalized Ratio (INR) and its consequences for the calibration procedure of thromboplastin preparations: a rebuttal

    Reliable determination of the International Normalized Ratio (INR) is mandatory for the control of oral anticoagulant therapy. Determination of the INR is based on a calibration model adopted by the WHO. In the WHO model, the International Sensitivity Index (ISI) plays a central role. The ISI of the first international reference preparation (IRP), 67/40, is 1.0 by definition. Attermann argued that the ISIs of all other PT systems, including all secondary international standards, are not known but merely estimated, with inbuilt statistical error. In the WHO guidelines, the INR is defined as follows: 'For a given plasma or whole blood specimen from a patient on long-term oral anticoagulant therapy, a value calculated from the prothrombin-time ratio using a prothrombin-time system with a known ISI according to the formula INR = (PT/MNPT)^ISI.' The word 'known' in this definition does not mean that there is no statistical uncertainty; it refers to the fact that the ISI estimate must be known in order to determine the INR. According to this definition, there is intrinsic uncertainty in the INR. The INR is therefore not exact but an approximation that is sufficiently reliable in clinical terms. The above definition of the INR is identical to the definition given by Kirkwood [3]. Attermann argued that the INR should instead be defined as the PT ratio that would have been obtained if the same plasmas had been tested against the first IRP 67/40 with the manual tilt-tube method. Attermann's alternative definition of the INR cannot be used in daily practice because the first IRP 67/40 is no longer available. Furthermore, it should be realized that the first IRP 67/40 was never used to find the optimal target intensities of anticoagulation in patients. Therapeutic ranges have been established by clinical trials using other thromboplastin reagents. These reagents were then linked to the INR scale by a series of ISI calibrations.
    The main purpose of the INR scale is to define therapeutic ranges. As the therapeutic ranges were established with multiple reagents different from the first IRP 67/40, it is not appropriate to define the INR only in terms of the PT ratio that would have been obtained with the first IRP 67/40. IRP 67/40 was established as a yardstick to compare, in terms of ISI, the different reagents used in clinical practice. The WHO calibration model assumes that normal specimens follow the same relation as patient specimens (i.e. coincident lines). In practice, this assumption is not always true. The WHO guidelines indicate that, if the deviation from the model is not greater than 10% in the INR range 2-4.5, the assignment of an ISI is acceptable. Multicenter studies have shown that the deviation from the model does not occur in all laboratories and is not the same in all laboratories. A deviation from the model appears to depend on local conditions or on the person who performs the manual clotting-time determinations. There is an indication that the assumption of coincident lines does hold true for the present IRP in most of the calibrating laboratories.
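The WHO formula quoted above, INR = (PT/MNPT)^ISI, can be sketched in a few lines. This is an illustrative computation only; the PT, MNPT and ISI values below are invented examples, not reference data.

```python
# Sketch of the WHO INR formula: INR = (PT / MNPT) ** ISI,
# where PT is the patient's prothrombin time, MNPT the mean normal
# prothrombin time of the local system, and ISI the international
# sensitivity index of the thromboplastin/instrument combination.
# All numeric values here are illustrative, not clinical reference data.

def inr(pt_seconds: float, mnpt_seconds: float, isi: float) -> float:
    """Compute the International Normalized Ratio."""
    return (pt_seconds / mnpt_seconds) ** isi

# With ISI = 1.0 the INR equals the raw PT ratio:
print(round(inr(24.0, 12.0, 1.0), 2))  # 2.0
# A less sensitive reagent (higher ISI) amplifies the same ratio:
print(round(inr(24.0, 12.0, 1.2), 2))  # 2.3
```

The second call illustrates the rebuttal's point: because the exponent ISI is itself an estimate with statistical error, the resulting INR carries intrinsic uncertainty.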

    Theoretical and technological building blocks for an innovation accelerator

    The scientific system that we use today was devised centuries ago and is inadequate for our current ICT-based society: the peer-review system encourages conservatism, journal publications are monolithic and slow, data is often not available to other scientists, and the independent validation of results is limited. Building on the Innovation Accelerator paper by Helbing and Balietti (2011), this paper takes the initial global vision and reviews the theoretical and technological building blocks that can be used to implement an innovation (first and foremost: science) accelerator platform driven by re-imagining the science system. The envisioned platform would rest on four pillars: (i) redesign the incentive scheme to reduce behavior such as conservatism, herding and hyping; (ii) advance scientific publications by breaking up the monolithic paper unit and introducing other building blocks such as data, tools, experiment workflows and resources; (iii) use machine-readable semantics for publications, debate structures, provenance, etc., in order to include the computer as a partner in the scientific process; and (iv) build an online platform for collaboration, including a network of trust and reputation among the different types of stakeholders in the scientific system: scientists, educators, funding agencies, policy makers, students and industrial innovators, among others. Any such improvements to the scientific system must support the entire scientific process (unlike current tools, which chop it into disconnected pieces), must facilitate and encourage collaboration and interdisciplinarity (again unlike current tools), must facilitate the inclusion of intelligent computing in the scientific process, and must accommodate not only the core scientific process but also other stakeholders such as science policy makers, industrial innovators and the general public.

    Initial data release from the INT Photometric H alpha Survey of the Northern Galactic Plane (IPHAS)

    The INT/WFC Photometric Hα Survey of the Northern Galactic Plane (IPHAS) is an imaging survey being carried out in the Hα, r′ and i′ filters, with the Wide Field Camera (WFC) on the 2.5-m Isaac Newton Telescope (INT), to a depth of r′ = 20 (10σ). The survey is aimed at revealing the large-scale organization of the Milky Way and can be applied to identifying a range of stellar populations within it. Mapping emission-line objects enables a particular focus on the young and old stages of stellar evolution, ranging from early T Tauri stars to late planetary nebulae. In this paper we present the IPHAS Initial Data Release (IDR), primarily a photometric catalogue of about 200 million unique objects, coupled with associated image data covering about 1600 deg² in three passbands. We note how access to the primary data products has been implemented through standard Virtual Observatory publishing interfaces. Simple traditional web access is provided to the main IPHAS photometric catalogue, in addition to a number of common catalogues (such as 2MASS) that are of immediate relevance. Access through the AstroGrid VO Desktop opens up the full range of analysis options and allows full integration with the wider range of data and services available through the Virtual Observatory. The IDR represents the largest data set published primarily through VO interfaces to date, and so stands as an exemplar of the future of survey data mining. Examples of data access are given, including a cross-matching of IPHAS photometry with sources in the UKIDSS Galactic Plane Survey that validates the existing calibration of the best data.
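The cross-matching mentioned at the end of the abstract pairs each optical source with its nearest infrared counterpart within an angular tolerance. The following is a minimal pure-Python sketch of that idea, not the IPHAS pipeline itself; the coordinates and the 1-arcsec tolerance are illustrative assumptions, and production surveys use spatially indexed matching rather than this brute-force loop.

```python
import math

# Illustrative nearest-neighbour catalogue cross-match by angular
# separation (NOT the actual IPHAS/UKIDSS pipeline). Each catalogue is
# a list of (RA, Dec) positions in degrees.

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (haversine formula)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def cross_match(cat_a, cat_b, tol_arcsec=1.0):
    """Return (i, j) pairs where cat_b[j] is the nearest source to
    cat_a[i] and lies within the tolerance. O(n*m) brute force."""
    tol = tol_arcsec / 3600.0  # arcsec -> degrees
    pairs = []
    for i, (ra, dec) in enumerate(cat_a):
        j = min(range(len(cat_b)),
                key=lambda k: angular_sep_deg(ra, dec, *cat_b[k]))
        if angular_sep_deg(ra, dec, *cat_b[j]) <= tol:
            pairs.append((i, j))
    return pairs

optical = [(100.0000, 30.0000), (100.1000, 30.0500)]   # hypothetical sources
infrared = [(100.0001, 30.00005), (150.0, -20.0)]
print(cross_match(optical, infrared))  # [(0, 0)]: only the first source matches
```

In practice such matching is done server-side through VO cone-search or table-upload interfaces, which is exactly the kind of access the IDR exposes.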

    X-Ray Spectroscopy of Stars

    (abridged) Non-degenerate stars of essentially all spectral classes are soft X-ray sources. Low-mass stars on the cooler part of the main sequence, together with their pre-main-sequence predecessors, form the dominant stellar population in the galaxy by number. Their X-ray spectra are reminiscent, in the broadest sense, of X-ray spectra from the solar corona. X-ray emission from cool stars is indeed ascribed to magnetically trapped hot gas analogous to the solar coronal plasma. Coronal structure, its thermal stratification and its geometric extent can be interpreted using various spectral diagnostics. New features have been identified in pre-main-sequence stars; some of these may be related to accretion shocks on the stellar surface, fluorescence on circumstellar disks due to X-ray irradiation, or shock heating in stellar outflows. Massive, hot stars clearly dominate the interaction with the galactic interstellar medium: they are the main sources of ionizing radiation, mechanical energy and chemical enrichment in galaxies. High-energy emission makes it possible to probe some of the most important processes at work in these stars and to place constraints on their most peculiar feature: the stellar wind. Here, we review recent advances in our understanding of cool and hot stars through the study of X-ray spectra, in particular the high-resolution spectra now available from XMM-Newton and Chandra. We address issues related to coronal structure, flares, the composition of coronal plasma, X-ray production in accretion streams and outflows, X-rays from single OB-type stars, massive binaries, magnetic hot objects and evolved WR stars.
    Comment: accepted for Astron. Astrophys. Rev., 98 journal pages, 30 figures (partly multiple); some corrections made after proof stage

    All downhill from the PhD? The typical impact trajectory of US academic careers

    © 2020 The Authors. Published by MIT Press. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher's website: https://doi.org/10.1162/qss_a_00072.
    Within academia, mature researchers tend to be more senior, but do they also tend to write higher-impact articles? This article assesses long-term-publishing (16+ years) United States (US) researchers, contrasting them with shorter-term-publishing researchers (1, 6 or 10 years). A long-term US researcher is operationalised as having a first Scopus-indexed journal article in exactly 2001 and one in 2016-2019, with US main affiliations in their first and last articles. Researchers publishing in large teams (11+ authors) were excluded. The average field- and year-normalised citation impact of long- and shorter-term US researchers' journal articles decreases over time relative to the national average, with especially large falls in the last articles published, which may be at least partly due to a decline in self-citations. In many cases researchers start by publishing above US-average citation impact research and end by publishing below US-average citation impact research. Thus, research managers should not assume that senior researchers will usually write the highest-impact papers.
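The "field and year normalised citation impact" used above divides each article's citation count by the average for its field and publication year, so that articles from different fields and ages become comparable. A minimal sketch of that normalisation, with invented baseline figures (not data from the study):

```python
from statistics import mean

# Sketch of field- and year-normalised citation impact: each article's
# citation count is divided by the average citation count of its
# (field, year) group, then the ratios are averaged. All numbers below
# are invented for illustration.

def normalised_impact(articles, baseline):
    """articles: list of (field, year, citations) tuples.
    baseline: {(field, year): average citations for that field-year}."""
    return mean(c / baseline[(f, y)] for f, y, c in articles)

baseline = {("astronomy", 2001): 20.0, ("astronomy", 2016): 5.0}
early = [("astronomy", 2001, 30)]  # 1.5x the field-year average
late = [("astronomy", 2016, 4)]    # 0.8x the field-year average
print(normalised_impact(early, baseline), normalised_impact(late, baseline))  # 1.5 0.8
```

A value above 1.0 means above-average impact for that field and year; the study's finding is that this ratio tends to fall over a US researcher's career.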