
    Predictability study on the aftershock sequence following the 2011 Tohoku-Oki, Japan, earthquake: first results

    Although no deterministic and reliable earthquake precursor is known to date, we are steadily gaining insight into probabilistic forecasting that draws on space–time characteristics of earthquake clustering. Clustering-based models aiming to forecast earthquakes within the next 24 hours are under test in the global project ‘Collaboratory for the Study of Earthquake Predictability’ (CSEP). The 2011 March 11 magnitude 9.0 Tohoku-Oki earthquake in Japan provides a unique opportunity to test the existing 1-day CSEP models against its unprecedentedly active aftershock sequence. The original CSEP experiment performs tests after the catalogue is finalized to avoid bias due to poor data quality. This study, however, departs from that tradition and uses the preliminary catalogue revised and updated by the Japan Meteorological Agency (JMA), which is often incomplete but is immediately available. This study is intended as a first step towards operability-oriented earthquake forecasting in Japan. Encouragingly, at least one model passed the test in most combinations of target day and testing method, although the models could not take account of the megaquake in advance and the catalogue used for forecast generation was incomplete. However, all models have only limited forecasting power for the period immediately after the quake. Our conclusion does not change when the preliminary JMA catalogue is replaced by the finalized one, implying that the models perform stably over the catalogue replacement and are applicable to operational earthquake forecasting. However, we emphasize the need for further research on model improvement to ensure the reliability of forecasts for the days immediately after the main quake. Seismicity is expected to remain high in all parts of Japan over the coming years. Our results present a way to answer the urgent need to promote research on time-dependent earthquake predictability to prepare for subsequent large earthquakes in the near future in Japan.
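    The 1-day models are evaluated with CSEP-style consistency tests; as one concrete illustration, the sketch below implements a simple Poisson number test (N-test) that compares an assumed forecast rate with an observed aftershock count. The function name and the numbers are hypothetical and are not taken from the CSEP testing software.

    ```python
    from scipy.stats import poisson

    def n_test(forecast_rate, observed_count, alpha=0.05):
        """Two-sided Poisson number test (N-test): does the observed count of
        target-day earthquakes lie inside the central (1 - alpha) range of the
        forecast distribution?"""
        p_low = poisson.cdf(observed_count, forecast_rate)      # P(N <= n_obs)
        p_high = poisson.sf(observed_count - 1, forecast_rate)  # P(N >= n_obs)
        passed = min(p_low, p_high) >= alpha / 2.0
        return passed, p_low, p_high

    # Hypothetical numbers: a 1-day model forecasts 120 target aftershocks and
    # 95 are observed in the (possibly incomplete) preliminary catalogue.
    print(n_test(forecast_rate=120.0, observed_count=95))
    ```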

    Normalizing biomedical terms by minimizing ambiguity and variability

    Background: One of the difficulties in mapping biomedical named entities, e.g. genes, proteins, chemicals and diseases, to their concept identifiers stems from the potential variability of the terms. Soft string matching is a possible solution to the problem, but its inherent heavy computational cost discourages its use when the dictionaries are large or when real-time processing is required. A less computationally demanding approach is to normalize the terms by using heuristic rules, which enables us to look up a dictionary in constant time regardless of its size. The development of good heuristic rules, however, requires extensive knowledge of the terminology in question and is thus the bottleneck of the normalization approach. Results: We present a novel framework for discovering a list of normalization rules from a dictionary in a fully automated manner. The rules are discovered in such a way that they minimize the ambiguity and variability of the terms in the dictionary. We evaluated our algorithm using two large dictionaries: a human gene/protein name dictionary built from BioThesaurus and a disease name dictionary built from UMLS. Conclusions: The experimental results showed that automatically discovered rules can perform comparably to carefully crafted heuristic rules in term mapping tasks, and the computational overhead of rule application is small enough that a very fast implementation is possible. This work will help improve the performance of term-concept mapping tasks in biomedical information extraction, especially when good normalization heuristics for the target terminology are not fully known.
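    To make the objective concrete, the sketch below applies a few hand-picked normalization rules to a toy term dictionary and counts the resulting ambiguity (one normalized form mapping to several concepts) and variability (one concept retaining several normalized forms). The rules, function names and toy entries are invented for illustration; they are not the rules discovered by the framework.

    ```python
    from collections import defaultdict

    # Hypothetical normalization rules of the kind such a framework might consider:
    # each rule is a simple string transformation applied to every term.
    RULES = [
        str.lower,                                    # case folding
        lambda s: s.replace("-", " "),                # hyphen removal
        lambda s: s.rstrip("s") if len(s) > 3 else s  # crude plural stripping
    ]

    def normalize(term, rules=RULES):
        for rule in rules:
            term = rule(term)
        return term

    def ambiguity_and_variability(dictionary):
        """dictionary: mapping from term -> concept identifier.
        Ambiguity: normalized forms mapping to more than one concept.
        Variability: concepts still having more than one normalized form."""
        forms_to_concepts = defaultdict(set)
        concepts_to_forms = defaultdict(set)
        for term, concept in dictionary.items():
            norm = normalize(term)
            forms_to_concepts[norm].add(concept)
            concepts_to_forms[concept].add(norm)
        ambiguity = sum(1 for c in forms_to_concepts.values() if len(c) > 1)
        variability = sum(1 for f in concepts_to_forms.values() if len(f) > 1)
        return ambiguity, variability

    # Toy example with made-up gene names and concept identifiers
    toy = {"IL-2": "C001", "il2": "C001", "interleukin-2": "C001", "Il2s": "C002"}
    print(ambiguity_and_variability(toy))
    ```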

    Variation of radiocesium concentrations in cedar pollen in the Okutama area since the Fukushima Daiichi Nuclear Power Plant Accident

    Due to releases of radionuclides in the Fukushima Daiichi Nuclear Power Plant Accident, radiocesium (¹³⁴Cs and ¹³⁷Cs) has been incorporated into a wide variety of plant species and soil types. There is a possibility that radiocesium taken up by plants is being dispersed via pollen. Radiocesium concentrations in cedar pollen have been measured in Ome City, located in the Okutama area of metropolitan Tokyo, for the past 3 years. In this research, the variation of radiocesium concentrations was analysed by comparing data from 2011 to 2014. Air dose rates at 1 m above the ground surface in Ome City from 2011 to 2014 showed no significant difference. The concentration of ¹³⁷Cs in the cedar pollen in 2012 was about half that in 2011. Between 2012 and 2014, the concentration decreased by approximately one fifth, which was similar to results reported in a press release distributed by the Japanese Ministry of Agriculture, Forestry and Fisheries.

    Text Mining the History of Medicine

    Historical text archives constitute a rich and diverse source of information, which is becoming increasingly readily accessible due to large-scale digitisation efforts. However, it can be difficult for researchers to explore and search such large volumes of data in an efficient manner. Text mining (TM) methods can help, through their ability to recognise various types of semantic information automatically, e.g., instances of concepts (places, medical conditions, drugs, etc.), synonyms/variant forms of concepts, and relationships holding between concepts (which drugs are used to treat which medical conditions, etc.). TM analysis allows search systems to incorporate functionality such as automatic suggestions of synonyms of user-entered query terms, exploration of different concepts mentioned within search results, or isolation of documents in which concepts are related in specific ways. However, applying TM methods to historical text can be challenging, owing to differences and changes in vocabulary, terminology, language structure and style compared to more modern text. In this article, we present our efforts to overcome the various challenges faced in the semantic analysis of published historical medical text dating back to the mid-19th century. Firstly, we used evidence from diverse historical medical documents from different periods to develop new resources that provide accounts of the multiple, evolving ways in which concepts, their variants and relationships amongst them may be expressed. These resources were employed to support the development of a modular processing pipeline of TM tools for the robust detection of semantic information in historical medical documents with varying characteristics. We applied the pipeline to two large-scale medical document archives covering wide temporal ranges as the basis for the development of a publicly accessible semantically-oriented search system. The novel resources are available for research purposes, while the processing pipeline and its modules may be used and configured within the Argo TM platform.
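    As a rough illustration of the synonym-suggestion functionality described above, the sketch below expands a user query with variant forms drawn from a small, invented historical lexicon; the actual resources and pipeline described in the article are Argo-based and far richer than this.

    ```python
    # Minimal sketch of query expansion with a historical-variant lexicon.
    # The entries below are illustrative only; a real resource would be
    # derived from period documents as described in the article.
    VARIANT_LEXICON = {
        "tuberculosis": ["consumption", "phthisis"],
        "typhoid fever": ["enteric fever"],
    }

    def expand_query(query, lexicon=VARIANT_LEXICON):
        """Return the query term plus any recorded historical variants,
        so that a search also matches older terminology."""
        terms = [query]
        terms.extend(lexicon.get(query.lower(), []))
        # Reverse lookup: searching for an archaic form also finds the modern one.
        for modern, variants in lexicon.items():
            if query.lower() in (v.lower() for v in variants):
                terms.append(modern)
        return sorted(set(terms))

    print(expand_query("tuberculosis"))  # ['consumption', 'phthisis', 'tuberculosis']
    print(expand_query("consumption"))   # ['consumption', 'tuberculosis']
    ```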

    Top-k String Auto-Completion with Synonyms

    Auto-completion is one of the most prominent features of modern information systems. Existing auto-completion solutions provide suggestions based on the beginning of the character sequence typed so far (i.e. the prefix). However, in many real applications, one entity often has synonyms or abbreviations. For example, "DBMS" is an abbreviation of "Database Management Systems". In this paper, we study a novel type of auto-completion that uses synonyms and abbreviations. We propose three trie-based algorithms to solve top-k auto-completion with synonyms, each with different space and time complexity trade-offs. Experiments on large-scale datasets show that it is possible to support effective and efficient synonym-based retrieval of completions of a million strings with thousands of synonym rules at about a microsecond per completion, with a small space overhead (160–200 bytes per string).
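    The sketch below is not one of the paper's three algorithms; it is a minimal illustration of the problem setting, assuming a plain trie for prefix lookup plus expansion of the query with a hypothetical synonym/abbreviation rule, so that typing "dbms" can retrieve "database management systems". The class names, scoring and rule table are all illustrative.

    ```python
    import heapq

    class TrieNode:
        __slots__ = ("children", "entry")
        def __init__(self):
            self.children = {}
            self.entry = None  # (score, full_string) stored at terminal nodes

    class Trie:
        def __init__(self):
            self.root = TrieNode()

        def insert(self, s, score):
            node = self.root
            for ch in s:
                node = node.children.setdefault(ch, TrieNode())
            node.entry = (score, s)

        def complete(self, prefix, k):
            """Return up to k stored strings starting with `prefix`, highest score first."""
            node = self.root
            for ch in prefix:
                if ch not in node.children:
                    return []
                node = node.children[ch]
            results, stack = [], [node]
            while stack:                      # depth-first walk of the subtrie
                n = stack.pop()
                if n.entry:
                    results.append(n.entry)
                stack.extend(n.children.values())
            return [s for _, s in heapq.nlargest(k, results)]

    # Hypothetical synonym/abbreviation rule applied to the query before lookup.
    SYNONYMS = {"dbms": "database management systems"}

    def complete_with_synonyms(trie, prefix, k):
        queries = {prefix.lower()}
        for short, long_form in SYNONYMS.items():
            if prefix.lower().startswith(short):
                queries.add(prefix.lower().replace(short, long_form, 1))
        merged = []
        for q in queries:
            merged.extend(trie.complete(q, k))
        return merged[:k]

    trie = Trie()
    trie.insert("database management systems", 10)
    trie.insert("database migration", 7)
    print(complete_with_synonyms(trie, "dbms", 2))
    ```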

    Net Charge Fluctuations in Au + Au Interactions at sqrt(s_NN) = 130 GeV

    Data from Au + Au interactions at sqrt(s_NN) = 130 GeV, obtained with the PHENIX detector at RHIC, are used to investigate local net charge fluctuations among particles produced near mid-rapidity. According to recent suggestions, such fluctuations may carry information from the Quark Gluon Plasma. This analysis shows that the fluctuations are dominated by a stochastic distribution of particles, but are also sensitive to other effects, like global charge conservation and resonance decays. Comment: 6 pages, RevTeX 3, 3 figures, 307 authors, submitted to Phys. Rev. Lett. on 21 March 2002. Plain-text data tables for the points plotted in figures for this and previous PHENIX publications are (or will be made) publicly available at http://www.phenix.bnl.gov/phenix/WWW/run/phenix/papers.htm
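    As a rough illustration of the stochastic baseline referred to above (not the PHENIX observable itself), the toy simulation below shows that independent emission of positive and negative particles gives a net-charge variance equal to the mean charged multiplicity; departures from this ratio are what would signal effects such as global charge conservation, resonance decays or a Quark Gluon Plasma. All parameters are invented.

    ```python
    import numpy as np

    def net_charge_ratio(n_events=20000, mean_mult=50, seed=1):
        """Toy model of purely stochastic particle emission: each event produces
        a Poisson number of charged particles, each independently +1 or -1.
        For such a source Var(Q) ~ <N_ch>, so the returned ratio is close to 1;
        dynamical correlations would drive the measured value away from this baseline."""
        rng = np.random.default_rng(seed)
        n_ch = rng.poisson(mean_mult, size=n_events)   # charged multiplicity per event
        # Net charge per event: sum of n_ch independent +/-1 signs.
        q = np.array([rng.choice([-1, 1], size=n).sum() for n in n_ch])
        return q.var() / n_ch.mean()

    print(net_charge_ratio())  # expected to be close to 1 for the stochastic baseline
    ```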