
    Pulsational Mapping of Calcium Across the Surface of a White Dwarf

    We constrain the distribution of calcium across the surface of the white dwarf star G29-38 by combining time series spectroscopy from Gemini-North with global time series photometry from the Whole Earth Telescope. G29-38 is actively accreting metals from a known debris disk. Since the metals sink significantly faster than they mix across the surface, any inhomogeneity in the accretion process will appear as an inhomogeneity of the metals on the surface of the star. We measure the flux amplitudes and the calcium equivalent width amplitudes for two large pulsations excited on G29-38 in 2008. The ratio of these amplitudes best fits a model for polar accretion of calcium and rules out equatorial accretion. Comment: Accepted to the Astrophysical Journal. 16 pages, 10 figures
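    As a rough, hypothetical illustration of the measurement described above (not the authors' actual pipeline), the amplitude of a known pulsation mode can be estimated by least-squares fitting a sinusoid at the mode frequency to both the flux and the Ca equivalent-width time series and then taking the ratio of the fitted amplitudes; all variable names, frequencies, and data below are made up.

```python
import numpy as np

def fit_amplitude(t, y, freq):
    """Least-squares fit of A*sin + B*cos at a fixed frequency;
    returns the semi-amplitude sqrt(A^2 + B^2)."""
    design = np.column_stack([
        np.sin(2 * np.pi * freq * t),
        np.cos(2 * np.pi * freq * t),
        np.ones_like(t),                 # constant offset
    ])
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return np.hypot(coeffs[0], coeffs[1])

# Hypothetical inputs: a single pulsation frequency (cycles/day) and two
# time series sampled at the same epochs -- broadband flux and Ca equivalent width.
t = np.linspace(0.0, 3.0, 2000)                  # days (illustrative)
freq = 140.0                                     # cycles/day (illustrative)
flux = 1.0 + 0.03 * np.sin(2 * np.pi * freq * t) + 0.005 * np.random.randn(t.size)
ew = 2.0 + 0.10 * np.sin(2 * np.pi * freq * t) + 0.02 * np.random.randn(t.size)

flux_amp = fit_amplitude(t, flux, freq) / np.mean(flux)   # fractional flux amplitude
ew_amp = fit_amplitude(t, ew, freq) / np.mean(ew)          # fractional EW amplitude
print("EW-to-flux amplitude ratio:", ew_amp / flux_amp)
```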

    Cosmological Solutions in Bimetric Gravity and their Observational Tests

    We obtain the general cosmological evolution equations for a classically consistent theory of bimetric gravity. Their analytic solutions are demonstrated to generically allow for a cosmic evolution starting out from a matter-dominated FLRW universe while relaxing towards a de Sitter (anti-de Sitter) phase at late cosmic time. In particular, we examine a subclass of models which contain solutions that are able to reproduce the expansion history of the cosmic concordance model in spite of the nonlinear couplings of the two metrics. This is demonstrated explicitly by fitting these models to observational data from Type Ia supernovae, the Cosmic Microwave Background, and Baryon Acoustic Oscillations. Comment: LaTeX, 26 pages. References added and minor revision of introduction and appendix
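    The supernova fit mentioned above is, in essence, a chi-square comparison of a model expansion history against observed distance moduli. The sketch below illustrates only that generic step, using a standard flat FLRW expansion history as a stand-in for the bimetric model and placeholder data in place of a real supernova compilation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

C_KM_S = 299792.458  # speed of light in km/s

def distance_modulus(z, H0, Omega_m):
    """Distance modulus for a flat FLRW expansion history (stand-in model)."""
    integrand = lambda zp: 1.0 / np.sqrt(Omega_m * (1 + zp) ** 3 + (1 - Omega_m))
    comoving, _ = quad(integrand, 0.0, z)
    d_lum = (1 + z) * (C_KM_S / H0) * comoving      # luminosity distance in Mpc
    return 5 * np.log10(d_lum) + 25

def chi2(params, z_obs, mu_obs, mu_err):
    H0, Omega_m = params
    mu_model = np.array([distance_modulus(z, H0, Omega_m) for z in z_obs])
    return np.sum(((mu_obs - mu_model) / mu_err) ** 2)

# Placeholder "supernova" data; a real analysis would use a published compilation.
z_obs = np.array([0.1, 0.3, 0.5, 0.8, 1.0])
mu_obs = np.array([38.3, 41.0, 42.3, 43.5, 44.1])
mu_err = np.full_like(mu_obs, 0.15)

best = minimize(chi2, x0=[70.0, 0.3], args=(z_obs, mu_obs, mu_err),
                bounds=[(50, 90), (0.05, 0.6)])
print("Best-fit (H0, Omega_m):", best.x)
```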

    Complex fission phenomena

    Complex fission phenomena are studied in a unified way. Very general reflection-asymmetric equilibrium (saddle point) nuclear shapes are obtained by solving an integro-differential equation without the need to specify a particular parametrization. The mass asymmetry in binary cold fission of Th and U isotopes is explained as the result of adding a phenomenological shell correction to the liquid drop model deformation energy. Applications to binary, ternary, and quaternary fission are outlined. Comment: 28 pages, 17 figures
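    Schematically, the deformation energy described above follows the macroscopic-microscopic recipe: a smooth liquid-drop term plus a phenomenological shell correction. In generic notation (not the paper's),

```latex
E_{\mathrm{def}}(\mathrm{shape}) \;=\; E_{\mathrm{LDM}}(\mathrm{shape}) \;+\; \delta E_{\mathrm{shell}}(\mathrm{shape})
```

    where E_LDM is the liquid-drop (surface plus Coulomb) contribution and \delta E_shell is the phenomenological shell correction whose minima favour the mass-asymmetric configurations seen in cold fission.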

    "Open Innovation" and "Triple Helix" Models of Innovation: Can Synergy in Innovation Systems Be Measured?

    The model of "Open Innovations" (OI) can be compared with the "Triple Helix of University-Industry-Government Relations" (TH) as attempts to find surplus value in bringing industrial innovation closer to public R&D. Whereas the firm is central in the model of OI, the TH adds multi-centeredness: in addition to firms, universities and (e.g., regional) governments can take leading roles in innovation eco-systems. In addition to the (transversal) technology transfer at each moment of time, one can focus on the dynamics in the feedback loops. Under specifiable conditions, feedback loops can be turned into feedforward ones that drive innovation eco-systems towards self-organization and the auto-catalytic generation of new options. The generation of options can be more important than historical realizations ("best practices") for the longer-term viability of knowledge-based innovation systems. A system without sufficient options, for example, is locked-in. The generation of redundancy -- the Triple Helix indicator -- can be used as a measure of unrealized but technologically feasible options given a historical configuration. Different coordination mechanisms (markets, policies, knowledge) provide different perspectives on the same information and thus generate redundancy. Increased redundancy not only stimulates innovation in an eco-system by reducing the prevailing uncertainty; it also enhances the synergy in and innovativeness of an innovation system. Comment: Journal of Open Innovations: Technology, Market and Complexity, 2(1) (2016) 1-12; doi:10.1186/s40852-016-0039-
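    One common operationalization of the redundancy/synergy indicator referred to above is the mutual information among the three Triple Helix dimensions, which becomes negative when a configuration generates redundancy. The sketch below assumes a hypothetical three-way contingency table of counts; it is illustrative only and is not the paper's computation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a (possibly multi-dimensional) distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def triple_helix_mutual_information(counts):
    """Mutual information T_uig among three dimensions, in bits.

    `counts` is a 3-D array of co-occurrence counts, e.g. entities
    cross-classified along university, industry, and government dimensions
    (hypothetical classification). Negative T indicates redundancy/synergy.
    """
    p = counts / counts.sum()
    h_u = entropy(p.sum(axis=(1, 2)))
    h_i = entropy(p.sum(axis=(0, 2)))
    h_g = entropy(p.sum(axis=(0, 1)))
    h_ui = entropy(p.sum(axis=2))
    h_ug = entropy(p.sum(axis=1))
    h_ig = entropy(p.sum(axis=0))
    h_uig = entropy(p)
    return h_u + h_i + h_g - h_ui - h_ug - h_ig + h_uig

# Toy 2x2x2 count table (hypothetical data).
counts = np.array([[[30, 5], [10, 15]],
                   [[8, 22], [12, 40]]], dtype=float)
print("T_uig (bits):", triple_helix_mutual_information(counts))
```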

    A pilot Internet "Value of Health" Panel: recruitment, participation and compliance

    Objectives: To pilot the use of a panel of members of the public to provide preference data via the Internet. Methods: A stratified random sample of members of the general public was recruited and familiarised with the standard gamble procedure using an Internet-based tool. Health states were periodically presented in "sets" corresponding to different conditions during the study. The following were described: Recruitment (the proportion of people approached who were trained); Participation ((a) the proportion of people trained who provided any preferences and (b) the proportion of panel members who contributed to each "set" of values); and Compliance (the proportion, per participant, of preference tasks which were completed). The influence of covariates on these outcomes was investigated using univariate and multivariate analyses. Results: A panel of 112 people was recruited. 23% of those approached (n = 5,320) responded to the invitation, and 24% of respondents (n = 1,215) were willing to participate (net = 5.5%). However, eventual recruitment rates, following training, were low (2.1% of those approached). Recruitment from areas of high socioeconomic deprivation and among ethnic minority communities was low. Eighteen sets of health state descriptions were considered over 14 months. 74% of panel members carried out at least one valuation task. People from areas of higher socioeconomic deprivation and unmarried people were less likely to participate. An average of 41% of panel members expressed preferences on each set of descriptions. Compliance ranged from 3% to 100%. Conclusion: It is feasible to establish a panel of members of the general public to express preferences on a wide range of health state descriptions using the Internet, although differential recruitment and attrition are important challenges. Particular attention to recruitment and retention in areas of high socioeconomic deprivation and among ethnic minority communities is necessary. Nevertheless, the panel approach to preference measurement using the Internet offers the potential to provide specific utility data in a responsive manner for use in economic evaluations and to address some of the outstanding methodological uncertainties in this field
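    As a quick arithmetic check of the recruitment figures quoted above (counts taken from the abstract; the willing-to-participate count is back-calculated from the stated 24% and is therefore approximate):

```python
approached = 5320                       # people invited
respondents = 1215                      # responded to the invitation (~23%)
willing = round(0.24 * respondents)     # ~24% of respondents willing to participate
panel = 112                             # recruited after training

print(f"response rate:       {respondents / approached:.1%}")   # ~22.8%
print(f"willing (net):       {willing / approached:.1%}")       # ~5.5%
print(f"recruited (trained): {panel / approached:.1%}")         # ~2.1%
```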

    "Meaning" as a sociological concept: A review of the modeling, mapping, and simulation of the communication of knowledge and meaning

    The development of discursive knowledge presumes the communication of meaning as analytically different from the communication of information. Knowledge can then be considered as a meaning which makes a difference. Whereas the communication of information is studied in the information sciences and scientometrics, the communication of meaning has been central to Luhmann's attempts to make the theory of autopoiesis relevant for sociology. Analytical techniques such as semantic maps and the simulation of anticipatory systems enable us to operationalize the distinctions which Luhmann proposed as relevant to the elaboration of Husserl's "horizons of meaning" in empirical research: interactions among communications, the organization of meaning in instantiations, and the self-organization of interhuman communication in terms of symbolically generalized media such as truth, love, and power. Horizons of meaning, however, remain uncertain orders of expectations, and one should caution against reification from the meta-biological perspective of systems theory

    Effectiveness of Denitrifying Bioreactors on Water Pollutant Reduction from Agricultural Areas

    Highlights: Denitrifying woodchip bioreactors treat nitrate-N in a variety of applications and geographies. This review focuses on subsurface drainage bioreactors and bed-style designs (including in-ditch). Monitoring and reporting recommendations are provided to advance bioreactor science and engineering. Denitrifying bioreactors enhance the natural process of denitrification in a practical way to treat nitrate-nitrogen (N) in a variety of N-laden water matrices. The design and construction of bioreactors for treatment of subsurface drainage in the U.S. is guided by USDA-NRCS Conservation Practice Standard 605. This review consolidates the state of the science for denitrifying bioreactors using case studies from across the globe with an emphasis on full-size bioreactor nitrate-N removal and cost-effectiveness. The focus is on bed-style bioreactors (including in-ditch modifications), although there is mention of denitrifying walls, which broaden the applicability of bioreactor technology in some areas. Subsurface drainage denitrifying bioreactors have been assessed as removing 20% to 40% of annual nitrate-N loss in the Midwest, and an evaluation across the peer-reviewed literature published over the past three years showed that bioreactors around the world have been generally consistent with that (N load reduction median: 46%; mean ± SD: 40% ± 26%; n = 15). Reported N removal rates were on the order of 5.1 g N m-3 d-1 (median; mean ± SD: 7.2 ± 9.6 g N m-3 d-1; n = 27). Subsurface drainage bioreactor installation costs have ranged from less than $5,000 to $27,000, with estimated cost efficiencies ranging from less than $2.50 kg-1 N year-1 to roughly $20 kg-1 N year-1 (although they can be as high as $48 kg-1 N year-1). A suggested monitoring setup is described primarily for the context of conservation practitioners and watershed groups for assessing annual nitrate-N load removal performance of subsurface drainage denitrifying bioreactors. Recommended minimum reporting measures for assessing and comparing annual N removal performance include: bioreactor dimensions and installation date; fill media size, porosity, and type; nitrate-N concentrations and water temperatures; bioreactor flow treatment details; basic drainage system and bioreactor design characteristics; and N removal rate and efficiency
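    The monitoring recommendation above amounts to comparing inlet and outlet nitrate-N loads over a drainage season. The sketch below shows one plausible way an annual load reduction and a volumetric removal rate could be computed from paired flow and concentration records; all variable names and data are hypothetical and are not taken from Conservation Practice Standard 605.

```python
import numpy as np

def load_kg(flow_m3_per_day, conc_mg_per_L):
    """Nitrate-N load as the sum of daily flow x concentration.
    mg/L == g/m^3, so flow [m^3/d] * conc [g/m^3] gives g/d; divide for kg."""
    return np.sum(flow_m3_per_day * conc_mg_per_L) / 1000.0

# Hypothetical daily records for one drainage season.
days = 200
flow = np.full(days, 40.0)       # m^3/day routed through the bioreactor
conc_in = np.full(days, 12.0)    # mg NO3-N / L at the inlet
conc_out = np.full(days, 7.0)    # mg NO3-N / L at the outlet

load_in = load_kg(flow, conc_in)
load_out = load_kg(flow, conc_out)
removed = load_in - load_out

bed_volume_m3 = 60.0             # bioreactor bed volume (hypothetical)
removal_rate = removed * 1000.0 / (bed_volume_m3 * days)   # g N m^-3 d^-1

print(f"load reduction: {removed / load_in:.0%}")           # ~42% here
print(f"removal rate:   {removal_rate:.1f} g N m^-3 d^-1")  # ~3.3 here
```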

    The Impact of Railway Stations on Residential and Commercial Property Value: A Meta-analysis

    Railway stations function as nodes in transport networks and places in an urban environment. They have accessibility and environmental impacts, which contribute to property value. The literature on the effects of railway stations on property value is mixed in its findings with respect to the magnitude and direction of the impact, ranging from negative to insignificant or positive. This paper attempts to explain the variation in the findings by meta-analytical procedures. Generally, the variations are attributed to the nature of the data, particular spatial characteristics, temporal effects, and methodology. Railway station proximity is addressed from two spatial considerations: a local station effect measuring the effect for properties within a 1/4 mile range, and a global station effect measuring the effect of coming 250 m closer to the station. We find that the effect of railway stations on commercial property value mainly takes place at short distances. Commercial properties within the 1/4 mile range are 12.2% more expensive than residential properties. Where the price gap between the railway station zone and the rest is about 4.2% for the average residence, it is about 16.4% for the average commercial property. At longer distances the effect on residential property values dominates. We find that for every 250 m a residence is located closer to a station, its price is 2.3% higher than that of commercial properties. Commuter railway stations have a consistently higher positive impact on property value than light and heavy railway/Metro stations. The inclusion of other accessibility variables (such as highways) in the models reduces the level of the reported railway station impact. © 2007 Springer Science+Business Media, LLC
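    Purely as an illustration of how a reported distance gradient translates into prices (the meta-analysis estimates such effects from published hedonic studies rather than prescribing them), the sketch below applies the roughly 2.3%-per-250 m residential premium to a hypothetical baseline price.

```python
def price_with_station_proximity(base_price, distance_m,
                                 reference_distance_m=2000,
                                 premium_per_250m=0.023):
    """Apply a proximity premium of `premium_per_250m` for every 250 m a
    property lies closer to the station than the reference distance.
    Illustrative only; parameters are assumptions, not study outputs."""
    steps_closer = max(0.0, (reference_distance_m - distance_m) / 250.0)
    return base_price * (1 + premium_per_250m) ** steps_closer

base = 200_000  # hypothetical price at the 2 km reference distance
for d in (2000, 1500, 1000, 500):
    print(f"{d:4d} m from station: {price_with_station_proximity(base, d):,.0f}")
```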