
    Cosmology: a bird's eye view

    In this essay we discuss the difference in views of the Universe as seen by two different observers. While one of the observers follows a geodesic congruence defined by the geometry of the cosmological model, the other observer follows the fluid flow lines of a perfect fluid with a linear equation of state. We point out that the information these observers collect regarding the state of the Universe can be radically different; while one observes a non-inflating, ever-expanding, ever-lasting universe, the other can experience dynamical behaviour reminiscent of quintessence or even of a phantom cosmology leading to a 'big rip' singularity within finite time (but without the need for exotic forms of matter). Comment: 5 pages; received an honorable mention in the Gravity Research Foundation Essay Competition, 200

    Addressing the carbon-crime blind spot : a carbon footprint approach

    Governments estimate the social and economic impacts of crime, but its environmental impact is largely unacknowledged. Our study addresses this by estimating the carbon footprint of crime in England and Wales and by identifying the largest sources of emissions. By applying environmentally extended input-output analysis–derived carbon emission factors to the monetized costs of crime, we estimate that crime committed in 2011 in England and Wales gave rise to over 4 million tonnes of carbon dioxide equivalents. Burglary resulted in the largest proportion of the total footprint (30%), because of the carbon associated with replacing stolen/damaged goods. Emissions arising from criminal justice system services also accounted for a large proportion (21% of all offenses; 49% of police recorded offenses). Focus on these offenses and the carbon efficiency of these services may help reduce the overall emissions that result from crime. However, cutting crime does not automatically result in a net reduction in carbon, given that we need to take account of potential rebound effects. As an example, we consider the impact of reducing domestic burglary by 5%. Calculating this is inherently uncertain given that it depends on assumptions concerning how money would be spent in the absence of crime. We find the most likely rebound effect (our medium estimate) is an increase in emissions of 2%. Despite this uncertainty concerning carbon savings, our study goes some way toward informing policy makers of the scale of the environmental consequences of crime and thus enables these consequences to be taken into account in policy appraisals.
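The footprint arithmetic described above (emission factor times monetized cost, plus a rebound term for re-spent money) can be sketched as follows. Every cost, factor, and re-spend value below is an invented placeholder for illustration, not a figure from the study:

```python
# Monetized cost of each offence category (million GBP, hypothetical)
costs = {"burglary": 1000.0, "vehicle_crime": 600.0, "cjs_services": 900.0}

# EEIO-derived carbon emission factors (kg CO2e per GBP, hypothetical)
factors = {"burglary": 0.9, "vehicle_crime": 0.7, "cjs_services": 0.5}

def footprint_tonnes(costs_m, factors_kg_per_gbp):
    """Footprint in tonnes CO2e: million GBP x kg/GBP = thousand tonnes,
    so multiply by 1000 to convert to tonnes."""
    return sum(costs_m[k] * factors_kg_per_gbp[k] * 1000 for k in costs_m)

total = footprint_tonnes(costs, factors)

# Rebound: money no longer lost to crime is re-spent elsewhere. If the
# re-spend carries a higher emission factor (hypothetical 1.2 kg CO2e/GBP)
# than the spending it displaces, net emissions rise despite less crime.
avoided = 0.05 * costs["burglary"]  # a 5% burglary reduction
rebound = avoided * (1.2 - factors["burglary"]) * 1000
```

The rebound term is the crux of the study's caveat: whether a crime cut saves carbon depends entirely on the assumed emission intensity of the displaced spending.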

    Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation

    <p>Abstract</p> <p>Background</p> <p>Evaluation of Word Sense Disambiguation (WSD) methods in the biomedical domain is difficult because the available resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We present a method that can be used to automatically develop a WSD test collection using the Unified Medical Language System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. We demonstrate the use of this method by developing such a data set, called MSH WSD.</p> <p>Methods</p> <p>In our method, the Metathesaurus is first screened to identify ambiguous terms whose possible senses consist of two or more MeSH headings. We then use each ambiguous term and its corresponding MeSH headings to extract MEDLINE citations where the term and only one of the MeSH headings co-occur. The term found in the MEDLINE citation is then automatically assigned the UMLS Concept Unique Identifier (CUI) linked to that MeSH heading, so that every instance in the data set carries a CUI label. We compare the characteristics of the MSH WSD data set to the previously existing NLM WSD data set.</p> <p>Results</p> <p>The resulting MSH WSD data set consists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203 ambiguous entities. For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from MEDLINE.</p> <p>We evaluated the reliability of the MSH WSD data set using existing knowledge-based methods and compared their performance to the results previously obtained by these algorithms on the pre-existing data set, NLM WSD. We show that the knowledge-based methods achieve different results but keep their relative performance, except for the Journal Descriptor Indexing (JDI) method, whose performance falls below that of the other methods.</p> <p>Conclusions</p> <p>The MSH WSD data set allows the evaluation of WSD algorithms in the biomedical domain. Compared to previously existing data sets, MSH WSD contains a larger number of biomedical terms/abbreviations and covers the largest set of UMLS Semantic Types. Furthermore, the MSH WSD data set has been generated automatically by reusing already existing annotations and, therefore, can be regenerated from subsequent UMLS versions.</p>
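The extraction step in the Methods — keep a citation as a labelled instance only when the ambiguous term co-occurs with exactly one candidate MeSH heading — can be sketched as below. The citation records, headings and CUIs are invented for illustration, and real MEDLINE processing is considerably more involved:

```python
def label_instances(term, candidate_headings, citations, max_per_sense=100):
    """candidate_headings maps MeSH heading -> UMLS CUI.
    Returns CUI -> list of citation texts labelled with that sense."""
    instances = {cui: [] for cui in candidate_headings.values()}
    for cit in citations:
        if term.lower() not in cit["text"].lower():
            continue  # the ambiguous term must appear in the citation
        present = [h for h in candidate_headings if h in cit["mesh"]]
        if len(present) == 1:  # exactly one sense's heading: unambiguous label
            cui = candidate_headings[present[0]]
            if len(instances[cui]) < max_per_sense:
                instances[cui].append(cit["text"])
    return instances

# Tiny illustrative run (headings, CUIs and citations are all invented)
citations = [
    {"text": "Cold agglutinin disease after infection.",
     "mesh": ["Cryoglobulins", "Infection"]},
    {"text": "Exposure to cold temperatures in winter.",
     "mesh": ["Cold Temperature"]},
]
senses = {"Cryoglobulins": "C_sense1", "Cold Temperature": "C_sense2"}
instances = label_instances("cold", senses, citations)
```

Citations indexed with more than one candidate heading are simply discarded, which is what makes the automatic labels reliable without manual annotation.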

    Collocation analysis for UMLS knowledge-based word sense disambiguation

    BACKGROUND: The effectiveness of knowledge-based word sense disambiguation (WSD) approaches depends in part on the information available in the reference knowledge resource. Off the shelf, these resources are not optimized for WSD and might lack terms needed to model the context properly. In addition, they might include noisy terms which contribute to false positives in the disambiguation results. METHODS: We analyzed some collocation types which could improve the performance of knowledge-based disambiguation methods. Collocations are obtained by extracting candidate collocations from MEDLINE and then assigning them to one of the senses of an ambiguous word. We performed this assignment either using semantic group profiles or a knowledge-based disambiguation method. In addition to collocations, we used second-order features from a previously implemented approach. Specifically, we measured the effect of these collocations in two knowledge-based WSD methods. The first method, AEC, uses the knowledge from the UMLS to collect examples from MEDLINE which are used to train a Naïve Bayes approach. The second method, MRD, builds a profile for each candidate sense based on the UMLS and compares the profile to the context of the ambiguous word. We have used two WSD test sets which contain disambiguation cases mapped to UMLS concepts. The first one, the NLM WSD set, was developed manually by several domain experts and contains words with high frequency of occurrence in MEDLINE. The second one, the MSH WSD set, was developed automatically using the MeSH indexing in MEDLINE. It contains a larger set of words and covers a larger number of UMLS semantic types. RESULTS: The results indicate an improvement after the use of collocations, although the approaches have different performance depending on the data set. In the NLM WSD set, the improvement is larger for the MRD disambiguation method using second-order features. Assignment of collocations to a candidate sense based on UMLS semantic group profiles is more effective in the AEC method. In the MSH WSD set, the increment in performance is modest for all the methods. Collocations combined with the MRD disambiguation method have the best performance. The MRD disambiguation method and second-order features provide an insignificant change in performance. The AEC disambiguation method gives a modest improvement in performance. Assignment of collocations to a candidate sense based on knowledge-based methods has better performance. CONCLUSIONS: Collocations improve the performance of knowledge-based disambiguation methods, although results vary depending on the test set and method used. Generally, the AEC method is sensitive to query drift. Using AEC, just a few selected terms provide a large improvement in disambiguation performance. The MRD method handles noisy terms better but requires a larger set of terms to improve performance.
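The MRD idea above reduces to scoring the overlap between each sense's term profile and the context of the ambiguous word, so adding a collocation to a sense amounts to adding a term to its profile. A minimal sketch with hand-made profiles and sense labels (the paper builds its profiles from UMLS definitions, synonyms and related terms):

```python
from collections import Counter

def mrd_disambiguate(context_tokens, sense_profiles):
    """sense_profiles: sense id -> iterable of profile terms.
    Returns the sense whose profile overlaps the context the most."""
    ctx = Counter(t.lower() for t in context_tokens)
    def overlap(profile):
        # each profile term scores its frequency in the context
        return sum(ctx[t.lower()] for t in profile)
    return max(sense_profiles, key=lambda s: overlap(sense_profiles[s]))

# Invented example: disambiguating "cold" between two candidate senses
profiles = {
    "common_cold": ["virus", "rhinovirus", "nasal", "symptoms"],
    "cold_temperature": ["temperature", "weather", "freezing", "hypothermia"],
}
context = "patient presented with nasal congestion and cold symptoms".split()
best = mrd_disambiguate(context, profiles)
```

Under this view, a noisy term added to the wrong profile directly inflates the wrong sense's score, which is why the abstract stresses filtering which collocations get assigned to which sense.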

    First lensing measurements of SZ-discovered clusters

    We present the first lensing mass measurements of Sunyaev-Zel'dovich (SZ) selected clusters. Using optical imaging from the Southern Cosmology Survey (SCS), we present weak lensing masses for three clusters selected by their SZ emission in the South Pole Telescope survey (SPT). We confirm that the SZ selection procedure is successful in detecting mass concentrations. We also study the weak lensing signals from 38 optically-selected clusters in ~8 square degrees of the SCS survey. We fit Navarro, Frenk and White (NFW) profiles and find that the SZ clusters have amongst the largest masses, as high as 5x10^14 Msun. Using the best fit masses for all the clusters, we analytically calculate the expected SZ integrated Y parameter, which we find to be consistent with the SPT observations. Comment: Minor changes to match accepted version, 5 pages, 3 figures
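For reference, the NFW profile fitted in the study has a closed-form density and enclosed mass, which is what makes the analytic Y-parameter estimate tractable. A small sketch, with illustrative (not fitted) parameter values:

```python
import math

def nfw_density(r, rho_s, r_s):
    """NFW density: rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2],
    where rho_s is the characteristic density and r_s the scale radius."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def nfw_mass_enclosed(r, rho_s, r_s):
    """Analytic mass inside radius r for an NFW profile:
    M(<r) = 4 pi rho_s r_s^3 [ln(1+x) - x/(1+x)], x = r/r_s."""
    x = r / r_s
    return 4.0 * math.pi * rho_s * r_s ** 3 * (math.log(1.0 + x) - x / (1.0 + x))

# Illustrative evaluation in units where rho_s = r_s = 1
rho_at_rs = nfw_density(1.0, 1.0, 1.0)
mass_at_rs = nfw_mass_enclosed(1.0, 1.0, 1.0)
```

The enclosed mass grows only logarithmically at large radii, so quoted cluster masses (such as the 5x10^14 Msun above) are always defined within a chosen radius.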

    Knowledge-based biomedical word sense disambiguation: comparison of approaches

    <p>Abstract</p> <p>Background</p> <p>Word sense disambiguation (WSD) algorithms attempt to select the proper sense of ambiguous terms in text. Resources like the UMLS provide a reference thesaurus to be used to annotate the biomedical literature. Statistical learning approaches have produced good results, but the size of the UMLS makes it infeasible to produce training data that covers the whole domain.</p> <p>Methods</p> <p>We present research on existing WSD approaches based on knowledge bases, which complements the studies performed on statistical learning. We compare four approaches which rely on the UMLS Metathesaurus as the source of knowledge. The first approach compares the overlap of the context of the ambiguous word to the candidate senses based on a representation built out of the definitions, synonyms and related terms. The second approach collects training data for each of the candidate senses to perform WSD based on queries built using monosemous synonyms and related terms. These queries are used to retrieve MEDLINE citations. Then, a machine learning approach is trained on this corpus. The third approach is a graph-based method which exploits the structure of the Metathesaurus network of relations to perform unsupervised WSD. This approach ranks nodes in the graph according to their relative structural importance. The last approach uses the semantic types assigned to the concepts in the Metathesaurus to perform WSD. The context of the ambiguous word and semantic types of the candidate concepts are mapped to Journal Descriptors. These mappings are compared to decide among the candidate concepts. Results are provided estimating the accuracy of the different methods on the WSD test collection available from the NLM.</p> <p>Conclusions</p> <p>We have found that the last approach achieves better results compared to the other methods. The graph-based approach, using the structure of the Metathesaurus network to estimate the relevance of the Metathesaurus concepts, does not perform well compared to the first two methods. In addition, the combination of methods improves the performance over the individual approaches. On the other hand, the performance is still below that of statistical learning trained on manually produced data and below the maximum frequency sense baseline. Finally, we propose several directions to improve the existing methods and to improve the Metathesaurus to be more effective in WSD.</p>
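The third (graph-based) approach ranks Metathesaurus concepts by structural importance relative to the context, in the spirit of a personalized-PageRank walk over the relation network. A toy sketch on an invented graph follows; the real Metathesaurus graph is vastly larger, and this is a generic illustration of the ranking idea rather than the paper's exact algorithm:

```python
def personalized_pagerank(graph, seeds, damping=0.85, iters=50):
    """graph: node -> list of neighbours (every node must have neighbours).
    seeds: context concepts toward which the random walk is biased."""
    nodes = list(graph)
    seed_w = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            # mass flowing into n from every node that links to it
            inflow = sum(rank[m] / len(graph[m]) for m in nodes if n in graph[m])
            new[n] = (1.0 - damping) * seed_w[n] + damping * inflow
        rank = new
    return rank

# Invented graph: two candidate senses of "cancer", with context words
# "lung" and "tumor" connected only to the disease sense
graph = {
    "lung": ["cancer_disease"],
    "tumor": ["cancer_disease"],
    "cancer_disease": ["lung", "tumor"],
    "cancer_zodiac": ["lung"],
}
ranks = personalized_pagerank(graph, seeds={"lung", "tumor"})
```

The candidate sense connected to the context concepts accumulates walk mass, while the unconnected sense does not, which is the intuition behind ranking candidates by structural importance.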

    Treating rheumatoid arthritis to target: recommendations of an international task force

    Background Aiming at therapeutic targets has reduced the risk of organ failure in many diseases such as diabetes or hypertension. Such targets have not been defined for rheumatoid arthritis (RA). Objective To develop recommendations for achieving optimal therapeutic outcomes in RA. Methods A task force of rheumatologists and a patient developed a set of recommendations on the basis of evidence derived from a systematic literature review and expert opinion; these were subsequently discussed, amended and voted upon by >60 experts from various regions of the world in a Delphi-like procedure. Levels of evidence, strength of recommendations and levels of agreement were derived. Results The treat-to-target activity resulted in 10 recommendations. The treatment aim was defined as remission, with low disease activity being an alternative goal in patients with long-standing disease. Regular follow-up (every 1-3 months during active disease) with appropriate therapeutic adaptation to reach the desired state within 3 to a maximum of 6 months was recommended. Follow-up examinations ought to employ composite measures of disease activity which include joint counts. Additional items provide further details for particular aspects of the disease. Levels of agreement were very high for many of these recommendations (>= 9/10). Conclusion The 10 recommendations are intended to inform patients, rheumatologists and other stakeholders about strategies to reach optimal outcomes of RA based on evidence and expert opinion.

    Can burglary prevention be low-carbon and effective? Investigating the environmental performance of burglary prevention measures

    There has been limited study to date on the environmental impacts of crime prevention measures. We address this shortfall by estimating the carbon footprint associated with the most widely used burglary prevention measures: door locks, window locks, burglar alarms, lighting and CCTV cameras. We compare these footprints with a measure of their effectiveness, the security protection factor, allowing us to identify those measures that are both low-carbon and effective in preventing burglary. Window locks are found to be the most effective and low-carbon measure available individually. Combinations of window locks, door locks, and external and indoor lighting are also shown to be effective and low-carbon. Burglar alarms and CCTV do not perform as strongly, with low security against burglary and higher carbon footprints. This information can be used to help inform more sustainable choices of burglary prevention within households as well as for crime prevention product design.