
    Mapping the Galactic Halo I. The 'Spaghetti' Survey

    We describe a major survey of the Milky Way halo designed to test for kinematic substructure caused by the destruction of accreted satellites. We use the Washington photometric system to identify halo stars efficiently for spectroscopic follow-up. Tracers include halo giants (detectable out to more than 100 kpc), blue horizontal branch stars, halo stars near the main sequence turnoff, and the "blue metal-poor stars" of Preston et al. (1994). We demonstrate the success of our survey by showing spectra of stars we have identified in all these categories, including giants as distant as 75 kpc. We discuss the problem of identifying the most distant halo giants. In particular, extremely metal-poor halo K dwarfs are present in approximately equal numbers to the distant giants for V fainter than 18, and we show that our method will distinguish reliably between these two groups of metal-poor stars. We plan to survey 100 square degrees at high galactic latitude, and expect to increase the numbers of known halo giants, BHB stars and turnoff stars by more than an order of magnitude. In addition to the strong test that this large sample will provide for the question 'was the Milky Way halo accreted from satellite galaxies?', we will improve the accuracy of mass measurements of the Milky Way beyond 50 kpc via the kinematics of the many distant giants and BHB stars we will find. We show that one of our first datasets constrains the halo density law over galactocentric radii of 5-20 kpc and z heights of 2-15 kpc. The data support a flattened power-law halo with b/a of 0.6 and exponent -3.0. More complex models with a varying axial ratio may be needed with a larger dataset. Comment: 55 pages, 22 figures, to appear in the Astronomical Journal
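    The flattened power-law halo quoted above can be sketched as a density of the form rho proportional to m**(-3.0), where m is a flattened radius built from the axial ratio b/a = 0.6. The function name and normalisation below are illustrative assumptions, not the survey's actual fitting code:

    ```python
    import numpy as np

    def halo_density(R_kpc, z_kpc, axial_ratio=0.6, exponent=-3.0):
        """Flattened power-law halo density (arbitrary normalisation).

        Hypothetical sketch: rho is proportional to m**exponent, with the
        flattened radius m = sqrt(R^2 + (z/q)^2) and q = b/a the axial ratio.
        """
        m = np.sqrt(R_kpc**2 + (z_kpc / axial_ratio)**2)
        return m**exponent

    # With exponent -3.0, doubling the flattened radius in the plane
    # lowers the density by a factor of 8:
    print(halo_density(10.0, 0.0) / halo_density(20.0, 0.0))  # 8.0
    ```

    The flattening enters only through z: a star at z = 6 kpc contributes to m as if it were at 10 kpc for q = 0.6, which is why off-plane tracers constrain the axial ratio.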

    Applying refinement to the use of mice and rats in rheumatoid arthritis research

    Rheumatoid arthritis (RA) is a painful, chronic disorder, and there is currently an unmet need for effective therapies that will benefit a wide range of patients. The research and development process for therapies and treatments currently involves in vivo studies, which have the potential to cause discomfort, pain or distress. This Working Group report focuses on identifying causes of suffering within commonly used mouse and rat 'models' of RA, describing practical refinements to help reduce suffering and improve welfare without compromising the scientific objectives. The report also discusses other relevant topics, including identifying and minimising sources of variation within in vivo RA studies, the potential to provide pain relief including analgesia, welfare assessment, humane endpoints, reporting standards and the potential to replace animals in RA research.

    A New Class of Safe Oligosaccharide Polymer Therapy To Modify the Mucus Barrier of Chronic Respiratory Disease

    The host- and bacteria-derived extracellular polysaccharide coating of the lung is a considerable challenge in chronic respiratory disease and is a powerful barrier to effective drug delivery. A low molecular weight 12–15-mer alginate oligosaccharide (OligoG CF-5/20), derived from plant biopolymers, was shown to modulate the polyanionic components of this coating. Molecular modeling and Fourier transform infrared spectroscopy demonstrated binding between OligoG CF-5/20 and respiratory mucins. Ex vivo studies showed that binding induced alterations in mucin surface charge and in the porosity of the three-dimensional mucin networks in cystic fibrosis (CF) sputum. Studies in humans showed that OligoG CF-5/20 is safe for inhalation in CF patients, with effective lung deposition, and modifies the viscoelasticity of CF sputum. OligoG CF-5/20 is the first inhaled polymer therapy, represents a novel mechanism of action and therapeutic approach for the treatment of chronic respiratory disease, and is currently in Phase IIb clinical trials for the treatment of CF.

    The inevitable QSAR renaissance

    QSAR approaches, including recent advances in 3D-QSAR, are advantageous during the lead optimization phase of drug discovery and complementary with bioinformatics and growing data accessibility. Hints for future QSAR practitioners are also offered.

    A 'Different Class'? Homophily and Heterophily in the Social Class Networks of Britpop

    Social network analysis is increasingly recognised as a useful way to explore music scenes. In this article we examine the individuals who were the cultural workforce that comprised the 'Britpop' music scene of the 1990s. The focus of our analysis is homophily and heterophily, to determine whether the clusters of friendships and working relationships of those who were 'best connected' in the scene were patterned by original social class position. We find that Britpop's 'whole network' is heterophilic but its 'sub-networks' are more likely to be social class homophilic. The sub-networks that remain heterophilic are likely to be united by other common experiences that brought individuals in the network to the same social spaces. We suggest that our findings on Britpop might be generalised to the composition of other music scenes, cultural workforces and aggregations of young people. Our study differs from research on, first, British 'indie music' and social class, which focusses upon the construction, representation and performance of social location rather than the relationships it might shape (such as Wiseman-Trowse, 2008), and second, the pioneering social network analyses of music scenes (such as Crossley 2008; 2009; 2015; Crossley et al. 2014), which currently lack the explicit emphasis on social class.

    Acoustic Correlates of Information Structure.

    This paper reports three studies aimed at addressing three questions about the acoustic correlates of information structure in English: (1) do speakers mark information structure prosodically, and, to the extent they do, (2) what are the acoustic features associated with different aspects of information structure, and (3) how well can listeners retrieve this information from the signal? The information structure of subject-verb-object sentences was manipulated via the questions preceding those sentences: elements in the target sentences were either focused (i.e., the answer to a wh-question) or given (i.e., mentioned in prior discourse); furthermore, focused elements had either an implicit or an explicit contrast set in the discourse; finally, either only the object was focused (narrow object focus) or the entire event was focused (wide focus). The results across all three experiments demonstrated that people reliably mark (1) focus location (subject, verb, or object) using greater intensity, longer duration, and higher mean and maximum F0, and (2) focus breadth, such that narrow object focus is marked with greater intensity, longer duration, and higher mean and maximum F0 on the object than wide focus. Furthermore, when participants are made aware of prosodic ambiguity present across different information structures, they reliably mark focus type, so that contrastively focused elements are produced with greater intensity, longer duration, and lower mean and maximum F0 than noncontrastively focused elements. In addition to having important theoretical consequences for accounts of semantics and prosody, these experiments demonstrate that linear residualisation successfully removes individual differences in people's productions, thereby revealing cross-speaker generalisations. Furthermore, discriminant modelling allows us to objectively determine the acoustic features that underlie meaning differences.
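    The residualisation step mentioned above, removing individual differences so that cross-speaker patterns emerge, can be sketched in its simplest form: subtract each speaker's own mean from an acoustic measure. The function name and toy data below are illustrative assumptions, not the paper's actual analysis pipeline:

    ```python
    import numpy as np

    def residualise_by_speaker(values, speakers):
        """Remove per-speaker means from an acoustic measure.

        Minimal sketch of linear residualisation: each value is replaced by
        its deviation from that speaker's own mean, so that speakers with
        very different baselines (e.g. F0 ranges) become comparable.
        """
        values = np.asarray(values, dtype=float)
        out = np.empty_like(values)
        for s in set(speakers):
            mask = np.array([sp == s for sp in speakers])
            out[mask] = values[mask] - values[mask].mean()
        return out

    # Two speakers with very different F0 baselines (Hz); after
    # residualisation, both show the same +/-10 Hz focus effect:
    f0 = [220.0, 240.0, 110.0, 130.0]
    spk = ["A", "A", "B", "B"]
    print(residualise_by_speaker(f0, spk))  # [-10. 10. -10. 10.]
    ```

    The full method in such studies typically regresses out speaker effects rather than subtracting raw means, but the mean-centring version shows the core idea.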

    How to do an evaluation: pitfalls and traps

    The recent literature is replete with papers evaluating computational tools (often those operating on 3D structures) for their performance in a certain set of tasks. Most commonly these papers compare a number of docking tools for their performance in cognate re-docking (pose prediction) and/or virtual screening. Related papers have been published on ligand-based tools: pose prediction by conformer generators and virtual screening using a variety of ligand-based approaches. The reliability of these comparisons is critically affected by a number of factors usually ignored by the authors, including bias in the datasets used in virtual screening, the metrics used to assess performance in virtual screening and pose prediction, and errors in the crystal structures used.
