
    The 1990 annual statistics and highlights report

    The National Space Science Data Center (NSSDC) has archived over 6 terabytes of space and Earth science data accumulated over nearly 25 years, and it now expects these holdings to nearly double every two years. The science user community needs rapid access to this archival data and to information about the data, and the NSSDC has been set on course to provide just that. Five years ago the NSSDC came online, becoming easily reachable for thousands of scientists around the world through electronic networks it managed and other international electronic networks to which it connected. Since that time, the data center has developed and implemented over 15 interactive systems, operational nearly 24 hours per day and reachable through the DECnet, TCP/IP, X.25, and BITNET communication protocols. The NSSDC is a clearinghouse through which the science user can find needed data via the Master Directory system, whether the data are held at the NSSDC or deposited in over 50 other archives and data management facilities around the world. Over 13,000 users accessed the NSSDC electronic systems during the past year. Thousands of requests for data have been satisfied, resulting in the NSSDC sending out a volume of data last year that nearly exceeded a quarter of its holdings. This document reports on some of the highlights and distribution statistics for most of the basic NSSDC operational services for fiscal year 1990. It is intended to be the first in a series of annual reports on how well the NSSDC is supporting the space and Earth science user communities.
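    The growth rate stated in the abstract (roughly 6 TB in FY1990, nearly doubling every two years) can be sketched as a simple projection; the smooth exponential model and the ten-year horizon are illustrative assumptions, not figures from the report.

```python
# Sketch: projecting archive size under the report's stated growth rate
# (~6 TB holdings, nearly doubling every two years). The starting size
# comes from the abstract; the smooth exponential model is an assumption.

def projected_size_tb(initial_tb: float, years: float, doubling_period: float = 2.0) -> float:
    """Archive size after `years`, doubling every `doubling_period` years."""
    return initial_tb * 2 ** (years / doubling_period)

# Ten years out, ~6 TB becomes ~192 TB under this model.
print(round(projected_size_tb(6.0, 10)))  # -> 192
```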

    Deer Herd Management Using the Internet: A Comparative Study of California Targeted By Data Mining the Internet

    An ongoing project to investigate the use of the internet as an information source for decision support identified the decline of the California deer population as a significant issue. Using Google Alerts, an automated keyword search tool, text and numerical data were collected from a daily internet search and categorized by region and topic to allow for identification of information trends. This simple data mining approach determined that California is one of only four states that do not currently report total, finalized deer harvest (kill) data online, and that it is the only state that has reduced the amount of information made available over the internet in recent years. Contradictory information identified by the internet data mining prompted the analysis described in this paper, which indicates that the graphical information presented on the California Fish and Wildlife website significantly understates the severity of the deer population decline over the past 50 years. This paper presents a survey of how states use the internet in their deer management programs and an estimate of the California deer population over the last 100 years. It demonstrates how any organization can use the internet for data collection and discovery.
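    The categorization step described above — binning collected alert text by region and topic — can be sketched with simple keyword matching. The keyword lists and sample snippet below are illustrative assumptions, not taken from the study.

```python
# Sketch of keyword-based categorization of alert snippets by region and
# topic. Keyword lists here are hypothetical examples, not the study's.

REGION_KEYWORDS = {
    "california": ["california", "sierra", "mendocino"],
    "colorado": ["colorado", "rocky mountain"],
}
TOPIC_KEYWORDS = {
    "harvest": ["harvest", "kill", "tag"],
    "population": ["population", "herd", "decline"],
}

def categorize(snippet: str) -> tuple[list[str], list[str]]:
    """Return (regions, topics) whose keywords appear in the snippet."""
    text = snippet.lower()
    regions = [r for r, kws in REGION_KEYWORDS.items() if any(k in text for k in kws)]
    topics = [t for t, kws in TOPIC_KEYWORDS.items() if any(k in text for k in kws)]
    return regions, topics

print(categorize("California deer herd decline reported in the Sierra"))
# -> (['california'], ['population'])
```

    Tallying these (region, topic) pairs over daily searches yields the trend counts the project used to spot gaps and contradictions in state reporting.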

    Variational Principle underlying Scale Invariant Social Systems

    MaxEnt's variational principle, in conjunction with Shannon's logarithmic information measure, yields only exponential functional forms in straightforward fashion. In this communication we show how to overcome this limitation via the incorporation, into the variational process, of suitable dynamical information. As a consequence, we are able to formulate a somewhat generalized Shannonian Maximum Entropy approach which provides a unifying "thermodynamic-like" explanation for the scale-invariant phenomena observed in social contexts, such as city-population distributions. We confirm the MaxEnt predictions by means of numerical experiments with random walkers, and compare them with some empirical data.
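    The opening claim — that Shannon's measure under MaxEnt straightforwardly yields only exponential forms — is the standard variational calculation; a sketch of that textbook step (not taken from the paper itself):

```latex
% Maximize Shannon entropy subject to normalization and a
% mean-value constraint \langle A \rangle:
\mathcal{L} = -\sum_i p_i \ln p_i
              - \alpha \Big(\sum_i p_i - 1\Big)
              - \lambda \Big(\sum_i p_i A_i - \langle A \rangle\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p_i}
  = -\ln p_i - 1 - \alpha - \lambda A_i = 0
\;\;\Rightarrow\;\;
p_i = \frac{e^{-\lambda A_i}}{Z}, \quad Z = \sum_j e^{-\lambda A_j}.
```

    Power laws such as city-size distributions cannot arise from this form alone, which is why the authors inject dynamical information into the variational process.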

    Global burden of human brucellosis : a systematic review of disease frequency

    BACKGROUND: This report presents a systematic review of scientific literature published between 1990 and 2010 relating to the frequency of human brucellosis, commissioned by WHO. The objectives were to identify high-quality disease incidence data to complement existing knowledge of the global disease burden and, ultimately, to contribute towards the calculation of a Disability-Adjusted Life Years (DALY) estimate for brucellosis. METHODS/PRINCIPAL FINDINGS: Thirty-three databases were searched, identifying 2,385 articles relating to human brucellosis. Based on strict screening criteria, 60 studies were selected for quality assessment, of which only 29 were of sufficient quality for data analysis. Data were only available from 15 countries in the regions of Northern Africa and the Middle East, Western Europe, Central and South America, Sub-Saharan Africa, and Central Asia. Half of the studies presented incidence data, six of which were longitudinal prospective studies, and half presented seroprevalence data, which were converted to incidence rates. Brucellosis incidence varied widely between, and within, countries. Although study biases cannot be ruled out, demographic, occupational, and socioeconomic factors likely play a role. Aggregated data at national or regional levels do not capture these complexities of disease dynamics and, consequently, at-risk populations or areas may be overlooked. In many brucellosis-endemic countries, health systems are weak and passively acquired official data underestimate the true disease burden. CONCLUSIONS: High-quality research is essential for an accurate assessment of disease burden, particularly in Eastern Europe, the Asia-Pacific, Central and South America, and Africa, where data are lacking. Providing formal epidemiological and statistical training to researchers is essential for improving study quality. An integrated approach to disease surveillance involving both human health and veterinary services would allow a better understanding of disease dynamics at the animal-human interface, as well as a more cost-effective utilisation of resources.
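    The conversion of seroprevalence data to incidence rates mentioned in the findings can be sketched with a standard steady-state relation: annual incidence is roughly prevalence divided by the mean duration of seropositivity. The exponential (constant-hazard) form below and the 2-year duration are illustrative assumptions, not figures from the review.

```python
# Sketch of a seroprevalence-to-incidence conversion. The constant-hazard
# model and the 2-year mean seropositivity duration are assumptions for
# illustration, not parameters taken from the review.
import math

def incidence_from_prevalence(prevalence: float, mean_duration_years: float = 2.0) -> float:
    """Annual incidence per person, from point seroprevalence.

    Uses the exponential (constant-hazard) form, which reduces to
    prevalence / duration when prevalence is small.
    """
    return -math.log(1.0 - prevalence) / mean_duration_years

# A 4% seroprevalence with ~2 years of seropositivity implies roughly
# 2% annual incidence, i.e. about 2,000 cases per 100,000 person-years.
print(round(incidence_from_prevalence(0.04) * 100_000))
```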

    A model for hypermedia learning environments based on electronic books

    Designers of hypermedia learning environments could take advantage of a theoretical scheme which takes into account various kinds of learning activities and solves some of the problems associated with them. In this paper, we present a model which inherits a number of characteristics from hypermedia and electronic books. It can provide designers with the tools for creating hypermedia learning systems, by allowing the elements and functions involved in the definition of a specific application to be formally represented. A practical example, CESAR, a hypermedia learning environment for hearing-impaired children, is presented, and some conclusions derived from the use of the model are also shown.