
    Public Health and Epidemiology Informatics: Recent Research and Trends in the United States

    Objectives To survey advances in public health and epidemiology informatics over the past three years. Methods We conducted a review of English-language research works conducted in the domain of public health informatics (PHI), and published in MEDLINE between January 2012 and December 2014, where information and communication technology (ICT) was a primary subject, or a main component of the study methodology. Selected articles were synthesized through a thematic analysis, using the Essential Services of Public Health as a typology. Results Based on the themes that emerged, we organized the advances into a model where applications that support the Essential Services are, in turn, supported by a socio-technical infrastructure that relies on government policies and ethical principles. That infrastructure, in turn, depends upon education and training of the public health workforce, development that creates novel infrastructure or adapts existing infrastructure, and research that evaluates the success of the infrastructure. Finally, the persistence and growth of infrastructure depends on financial sustainability. Conclusions Public health informatics is a field that is growing in breadth, depth, and complexity. Several Essential Services have benefited from informatics, notably, “Monitor Health,” “Diagnose & Investigate,” and “Evaluate.” Yet many Essential Services have not yet benefited from advances such as maturing electronic health record systems, interoperability amongst health information systems, analytics for population health management, use of social media among consumers, and educational certification in clinical informatics. There is much work to be done to further advance the science of PHI as well as its impact on public health practice.
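
    The typology-based synthesis described above amounts to tagging each reviewed article with the Essential Service(s) its informatics application supports and tallying the themes. A minimal sketch of that bookkeeping step follows; the article identifiers and category assignments are hypothetical placeholders, not the review's actual coding.

```python
from collections import Counter

# Hypothetical coding sheet: each reviewed article is tagged with the
# Essential Service(s) of Public Health that its informatics application supports.
coded_articles = [
    {"pmid": "00000001", "services": ["Monitor Health"]},
    {"pmid": "00000002", "services": ["Diagnose & Investigate", "Evaluate"]},
    {"pmid": "00000003", "services": ["Monitor Health", "Evaluate"]},
]

# Tally how many articles touch each Essential Service, mirroring the
# typology-based synthesis described in the abstract.
tally = Counter(s for article in coded_articles for s in article["services"])
for service, count in tally.most_common():
    print(f"{service}: {count} article(s)")
```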

    A hydrous melting and fractionation model for mid-ocean ridge basalts: Application to the Mid-Atlantic Ridge near the Azores

    The major element, trace element, and isotopic composition of mid-ocean ridge basalt glasses affected by the Azores hotspot are strongly correlated with H2O content of the glass. Distinguishing the relative importance of source chemistry and potential temperature in ridge-hotspot interaction therefore requires a comprehensive model that accounts for the effect of H2O in the source on melting behavior and for the effect of H2O in primitive liquids on the fractionation path. We develop such a model by coupling the latest version of the MELTS algorithm to a model for partitioning of water among silicate melts and nominally anhydrous minerals. We find that much of the variation in all major oxides except TiO2 and a significant fraction of the crustal thickness anomaly at the Azores platform are explained by the combined effects on melting and fractionation of up to ~700 ppm H2O in the source with only a small thermal anomaly, particularly if there is a small component of buoyantly driven active flow associated with the more H2O-rich melting regimes. An on-axis thermal anomaly of ~35°C in potential temperature explains the full crustal thickness increase of ~4 km approaching the Azores platform, whereas a ≥75°C thermal anomaly would be required in the absence of water or active flow. The polybaric hydrous melting and fractionation model allows us to solve for the TiO2, trace element and isotopic composition of the H2O-rich component in a way that self-consistently accounts for the changes in the melting and fractionation regimes resulting from enrichment, although the presence and concentration in the enriched component of elements more compatible than Dy cannot be resolved.
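
    The role of source H2O can be illustrated with the standard batch-melting relation for an incompatible component, C_melt = C_source / (D + F(1 - D)). The sketch below is not the authors' coupled MELTS model; the bulk partition coefficient is an assumed illustrative value, with only the ~700 ppm source H2O figure taken from the abstract.

```python
# Minimal sketch (not the authors' MELTS-based model): batch melting of H2O
# treated as an incompatible element partitioned between melt and nominally
# anhydrous minerals, C_melt = C_source / (D + F * (1 - D)).

def batch_melt_concentration(c_source_ppm, melt_fraction, bulk_d):
    """Concentration of a trace component in the melt for batch melting."""
    return c_source_ppm / (bulk_d + melt_fraction * (1.0 - bulk_d))

C_SOURCE_H2O_PPM = 700.0   # upper end of the source H2O range quoted in the abstract
BULK_D_H2O = 0.01          # assumed bulk partition coefficient, for illustration only

for f in (0.01, 0.05, 0.10, 0.20):
    c_melt = batch_melt_concentration(C_SOURCE_H2O_PPM, f, BULK_D_H2O)
    print(f"F = {f:.2f}: ~{c_melt:.0f} ppm H2O in the melt ({c_melt / 1e4:.2f} wt%)")
```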

    Generalizing Boolean Satisfiability I: Background and Survey of Existing Work

    This is the first of three planned papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high-performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal is to define a representation in which this structure is apparent and can easily be exploited to improve computational performance. This paper is a survey of the work underlying ZAP, and discusses previous attempts to improve the performance of the Davis-Putnam-Logemann-Loveland algorithm by exploiting the structure of the problem being solved. We examine existing ideas including extensions of the Boolean language to allow cardinality constraints, pseudo-Boolean representations, symmetry, and a limited form of quantification. While this paper is intended as a survey, our research results are contained in the two subsequent articles, with the theoretical structure of ZAP described in the second paper in this series, and ZAP's implementation described in the third.
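
    For readers unfamiliar with the baseline procedure being generalized, a minimal Davis-Putnam-Logemann-Loveland (DPLL) sketch is given below. It is a textbook toy solver, not ZAP, and omits the learning and branching heuristics of modern high-performance solvers.

```python
# Minimal DPLL sketch over CNF clauses encoded as lists of signed integers
# (DIMACS-style: literal 3 means x3, -3 means NOT x3). This illustrates the
# baseline procedure the ZAP papers generalize; it is not ZAP itself.

def dpll(clauses, assignment=()):
    # Simplify: drop satisfied clauses, remove falsified literals.
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue  # clause already satisfied
        reduced = [lit for lit in clause if -lit not in assignment]
        if not reduced:
            return None  # empty clause: conflict under this assignment
        simplified.append(reduced)
    if not simplified:
        return assignment  # all clauses satisfied
    # Unit propagation: a one-literal clause forces that literal.
    for clause in simplified:
        if len(clause) == 1:
            return dpll(simplified, assignment + (clause[0],))
    # Branch on the first unassigned literal.
    lit = simplified[0][0]
    return dpll(simplified, assignment + (lit,)) or dpll(simplified, assignment + (-lit,))

# Example: (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))  # -> (1, 2, 3), a satisfying assignment
```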

    Generalizing Boolean Satisfiability II: Theory

    This is the second of three planned papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high-performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal is to define a representation in which this structure is apparent and can easily be exploited to improve computational performance. This paper presents the theoretical basis for the ideas underlying ZAP, arguing that existing ideas in this area exploit a single, recurring structure in that multiple database axioms can be obtained by operating on a single axiom using a subgroup of the group of permutations on the literals in the problem. We argue that the group structure precisely captures the general structure at which earlier approaches hinted, and give numerous examples of its use. We go on to extend the Davis-Putnam-Logemann-Loveland inference procedure to this broader setting, and show that earlier computational improvements are either subsumed or left intact by the new method. The third paper in this series discusses ZAP's implementation and presents experimental performance results.
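
    The core idea, that a subgroup of permutations on the literals regenerates many database axioms from a single representative, can be illustrated concretely. The toy below applies the full symmetric group on three variables to one "at most one" clause and recovers the whole clause set; the encoding and example are illustrative, not ZAP's actual representation.

```python
# Illustrative sketch of the idea described above: a single representative
# clause plus a group of literal permutations stands in for the whole set of
# clauses obtained by applying every group element.
from itertools import permutations

variables = (1, 2, 3)                     # x1, x2, x3
base_clause = frozenset({-1, -2})         # NOT x1 OR NOT x2 ("at most one of x1, x2")

def apply_permutation(clause, mapping):
    """Apply a variable permutation to a clause, preserving literal signs."""
    return frozenset((1 if lit > 0 else -1) * mapping[abs(lit)] for lit in clause)

# The full symmetric group on {x1, x2, x3}, represented as variable mappings.
group = [dict(zip(variables, image)) for image in permutations(variables)]

orbit = {apply_permutation(base_clause, g) for g in group}
print(sorted(sorted(c) for c in orbit))
# -> [[-3, -2], [-3, -1], [-2, -1]]: all three "at most one" clauses from one axiom.
```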

    Evidence for GeV emission from the Galactic Center Fountain

    The region near the Galactic center may have experienced recurrent episodes of injection of energy in excess of ~10^55 ergs due to repeated starbursts involving more than ~10^4 supernovae. This hypothesis can be tested by measurements of γ-ray lines produced by the decay of radioactive isotopes and positron annihilation, or by searches for pulsars produced during starbursts. Recent OSSE observations of 511 keV emission extending above the Galactic center led to the suggestion of a starburst-driven fountain from the Galactic center. We present EGRET observations that might support this picture.
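
    The quoted energy budget is consistent with simple arithmetic, assuming the canonical ~10^51 erg of kinetic energy per supernova (a standard figure, not stated in the abstract):

```python
# Back-of-the-envelope check of the energy budget quoted above, assuming the
# canonical ~1e51 erg of kinetic energy per supernova (a standard figure, not
# taken from this abstract).
N_SUPERNOVAE = 1e4
ENERGY_PER_SN_ERG = 1e51

total_energy_erg = N_SUPERNOVAE * ENERGY_PER_SN_ERG
print(f"Total injected energy ~ {total_energy_erg:.0e} erg")  # ~1e+55 erg
```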

    Generalizing Boolean Satisfiability III: Implementation

    This is the third of three papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high-performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal has been to define a representation in which this structure is apparent and can be exploited to improve computational performance. The first paper surveyed existing work that (knowingly or not) exploited problem structure to improve the performance of satisfiability engines, and the second paper showed that this structure could be understood in terms of groups of permutations acting on individual clauses in any particular Boolean theory. We conclude the series by discussing the techniques needed to implement our ideas, and by reporting on their performance on a variety of problem instances.

    Combined effects of a parasite, QPX, and the harmful-alga, Prorocentrum minimum on northern quahogs, Mercenaria mercenaria

    Northern quahogs, Mercenaria mercenaria (L.), frequently are infected with the parasite Quahog Parasite Unknown (QPX, Labyrinthomorpha, Thraustochytriales), which can cause morbidity and mortality of the quahogs. Possible interactions between this parasitic disease and exposure to the harmful dinoflagellate Prorocentrum minimum in M. mercenaria were studied experimentally. Quahogs from Massachusetts with variable intensity of QPX infection were exposed, under controlled laboratory conditions, to cultured P. minimum added to the natural plankton at a cell density equivalent to a natural bloom. After 5 days of exposure, individual clams were diagnosed histologically to assess prevalence and intensity of parasitic infection, as well as other pathological conditions. Further, the cellular defense status of clams was evaluated by analyzing hemocyte parameters (morphological and functional) using flow cytometry. Exposure of quahogs to P. minimum resulted in: a lower percentage of phagocytic hemocytes, higher production of reactive oxygen species (ROS), larger hemocyte size, more-numerous hemocytic aggregates, and increased numbers of hemocytes in gills accompanied by vacuolation and hyperplasia of the water-tubular epithelial cells of the gills. Quahogs had a low prevalence of QPX; by chance, the parasite was present only in quahogs exposed to P. minimum. Thus, the effect of QPX alone on the hemocyte parameters of quahogs could not be assessed in this experiment, but it was possible to assess the different responses of infected versus non-infected quahogs to P. minimum. QPX-infected quahogs exposed to P. minimum showed a reduced percentage of phagocytic hemocytes, consistent with the immuno-modulating effect of P. minimum upon several molluscan species, as well as smaller hemocytes and increased hemocyte infiltration throughout the soft tissues. This experiment demonstrates the importance of considering interactive effects of different factors on the immunology and histopathology of bivalve shellfish, and highlights the importance of considering the presence of parasites when bivalves are subjected to harmful-algal blooms.
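
    Comparisons such as the lower percentage of phagocytic hemocytes in exposed quahogs reduce to a two-group test on a flow cytometry parameter. The sketch below shows the form of such a comparison with hypothetical placeholder values; it does not reproduce the study's data or its statistical design.

```python
# Minimal sketch of the kind of comparison reported above: testing whether a
# flow-cytometry hemocyte parameter (here, percent phagocytic hemocytes)
# differs between P. minimum-exposed and control quahogs. Values are
# hypothetical placeholders, not data from the study.
from scipy import stats

control_pct_phagocytic = [42.0, 38.5, 45.1, 40.2, 43.7]
exposed_pct_phagocytic = [31.4, 29.8, 35.0, 27.6, 33.2]

t_stat, p_value = stats.ttest_ind(control_pct_phagocytic, exposed_pct_phagocytic)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```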

    Geodetic, teleseismic, and strong motion constraints on slip from recent southern Peru subduction zone earthquakes

    We use seismic and geodetic data both jointly and separately to constrain coseismic slip from the 12 November 1996 M_w 7.7 and 23 June 2001 M_w 8.5 southern Peru subduction zone earthquakes, as well as two large aftershocks following the 2001 earthquake on 26 June and 7 July 2001. We use all available data in our inversions: GPS, interferometric synthetic aperture radar (InSAR) from the ERS-1, ERS-2, JERS, and RADARSAT-1 satellites, and seismic data from teleseismic and strong motion stations. Our two-dimensional slip models derived from only teleseismic body waves from South American subduction zone earthquakes with M_w > 7.5 do not reliably predict available geodetic data. In particular, we find significant differences in the distribution of slip for the 2001 earthquake from models that use only seismic (teleseismic and two strong motion stations) or geodetic (InSAR and GPS) data. The differences might be related to postseismic deformation or, more likely, the different sensitivities of the teleseismic and geodetic data to coseismic rupture properties. The earthquakes studied here follow the pattern of earthquake directivity along the coast of western South America: north of 5°S, earthquakes rupture to the north; south of about 12°S, directivity is southerly; and in between, earthquakes are bilateral. The predicted deformation at the Arequipa GPS station from the seismic-only slip model for the 7 July 2001 aftershock is not consistent with significant preseismic motion.
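
    Joint inversions of this kind are commonly posed as a weighted linear least-squares problem d = Gm, with the geodetic and seismic data stacked and a regularization term appended. The sketch below uses random placeholder Green's functions and data purely to show the structure of such an inversion; it is not the slip model or weighting used in the study.

```python
# Minimal sketch of a joint geodetic + seismic slip inversion in the weighted
# least-squares form d = G m: stack the two data sets with relative weights and
# solve for fault slip with a crude regularization term. G, d, and the weights
# are random placeholders, not the Green's functions or data from the study.
import numpy as np

rng = np.random.default_rng(0)
n_patches, n_geodetic, n_seismic = 20, 30, 50

G_geo = rng.normal(size=(n_geodetic, n_patches))   # geodetic Green's functions
G_seis = rng.normal(size=(n_seismic, n_patches))   # seismic Green's functions
true_slip = np.abs(rng.normal(size=n_patches))
d_geo = G_geo @ true_slip + 0.05 * rng.normal(size=n_geodetic)
d_seis = G_seis @ true_slip + 0.05 * rng.normal(size=n_seismic)

w_geo, w_seis, smoothing = 1.0, 0.5, 0.1
L = np.eye(n_patches)                               # placeholder regularization operator

G = np.vstack([w_geo * G_geo, w_seis * G_seis, smoothing * L])
d = np.concatenate([w_geo * d_geo, w_seis * d_seis, np.zeros(n_patches)])

slip, *_ = np.linalg.lstsq(G, d, rcond=None)
print("recovered slip on first 5 patches:", np.round(slip[:5], 2))
```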

    On Lorentz invariance and supersymmetry of four particle scattering amplitudes in S^N R^8 orbifold sigma model

    The S^N R^8 supersymmetric orbifold sigma model is expected to describe the IR limit of Matrix string theory. In the framework of the model, the type IIA string interaction is governed by a vertex which was recently proposed by R. Dijkgraaf, E. Verlinde, and H. Verlinde. By using this interaction vertex we derive all four particle scattering amplitudes directly from the orbifold model in the large N limit.

    Evidence for a Galactic gamma ray halo

    We present quantitative statistical evidence for a γ-ray emission halo surrounding the Galaxy. Maps of the emission are derived. EGRET data were analyzed in a wavelet-based non-parametric hypothesis testing framework, using a model of expected diffuse (Galactic + isotropic) emission as a null hypothesis. The results show a statistically significant large-scale halo surrounding the center of the Milky Way as seen from Earth. The halo flux at high latitudes is somewhat smaller than the isotropic gamma-ray flux at the same energy, though of the same order (O(10^(-7)–10^(-6)) ph/cm^2/s/sr above 1 GeV).
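
    In outline, the approach tests an observed count map against the expected diffuse-emission model and looks for coherent large-scale structure in the residual. The sketch below uses synthetic maps, a simple significance scaling, and an off-the-shelf 2-D wavelet transform (PyWavelets); it is a rough illustration of the strategy, not the authors' statistical framework.

```python
# Rough sketch of the analysis strategy described above: compare an observed
# count map against a model of expected diffuse emission and look for
# large-scale structure in the residual via a 2-D wavelet decomposition.
# The maps are synthetic and the statistic is simplistic.
import numpy as np
import pywt

rng = np.random.default_rng(1)
ny, nx = 64, 64
model = np.full((ny, nx), 10.0)                      # expected diffuse emission (null hypothesis)
halo = 3.0 * np.exp(-((np.indices((ny, nx)) - 32) ** 2).sum(axis=0) / (2 * 20.0 ** 2))
observed = rng.poisson(model + halo)                 # counts with an added large-scale excess

residual = (observed - model) / np.sqrt(model)       # approximate significance map
coeffs = pywt.wavedec2(residual, "haar", level=3)

# Largest approximation coefficient at the coarsest scale: a crude indicator of
# extended excess emission relative to the null model.
print("max coarse-scale coefficient:", float(np.abs(coeffs[0]).max()))
```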