
    Red Runaways II: Low mass Hills stars in SDSS Stripe 82

    Stars ejected from the Galactic centre can be used to place important constraints on the Milky Way potential. Since existing hypervelocity stars are too distant for their orbits to be determined accurately, we have conducted a search for nearby candidates using full three-dimensional velocities. Because the efficacy of such studies is often hampered by deficiencies in proper motion catalogs, we have chosen to utilize the reliable, high-precision SDSS Stripe 82 proper motion catalog. Although we do not find any candidates with velocities in excess of the escape speed, we identify 226 stars on orbits that are consistent with Galactic centre ejection. This number is significantly larger than what we would expect for halo stars on radial orbits and cannot be explained by disk or bulge contamination. If we restrict ourselves to metal-rich stars, we find 29 candidates with [Fe/H] > -0.8 dex and 10 with [Fe/H] > -0.6 dex. Their metallicities are more consistent with what we expect for bulge ejecta, and so we believe these candidates are especially deserving of further study. We have supplemented this sample with our own radial velocities, developing an algorithm that uses proper motions to optimize candidate selection. This technique provides a considerable improvement on the blind spectroscopic sample of SDSS, identifying candidates with an efficiency around 20 times better than a blind search. Comment: 13 pages, accepted for publication in ApJ
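    As an illustration of the kind of orbit-based selection described above (not the paper's actual pipeline), the sketch below flags stars whose full three-dimensional Galactocentric velocity points nearly radially away from the Galactic centre. The 10-degree alignment threshold and the toy inputs are assumptions for illustration.

```python
import numpy as np

def gc_ejection_candidates(pos, vel, max_misalignment_deg=10.0):
    """Flag stars whose Galactocentric velocity is nearly parallel to
    their Galactocentric position vector, i.e. consistent with radial
    ejection from the Galactic centre.

    pos, vel : (N, 3) arrays of positions [kpc] and velocities [km/s].
    The misalignment threshold is an illustrative choice, not the
    selection used in the paper.
    """
    r_hat = pos / np.linalg.norm(pos, axis=1, keepdims=True)
    v_hat = vel / np.linalg.norm(vel, axis=1, keepdims=True)
    cos_theta = np.clip(np.sum(r_hat * v_hat, axis=1), -1.0, 1.0)
    misalignment = np.degrees(np.arccos(cos_theta))
    return misalignment < max_misalignment_deg

# Toy example: one star moving radially outward, one on a tangential orbit.
pos = np.array([[8.0, 0.0, 0.5], [8.0, 0.0, 0.5]])
vel = np.array([[400.0, 0.0, 25.0], [0.0, 220.0, 0.0]])
print(gc_ejection_candidates(pos, vel))  # [ True False]
```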

    Nearby Low-Mass Hypervelocity Stars

    Hypervelocity stars are those whose speeds exceed the escape speed, making them unbound from the Milky Way. We investigate a sample of low-mass hypervelocity candidates obtained using data from the high-precision SDSS Stripe 82 catalogue, which we have combined with spectroscopy from the 200-inch Hale Telescope at Palomar Observatory. We find four good candidates, but without metallicities it is difficult to pin down their distances and therefore their total velocities. Our best candidate has a significant likelihood of escaping the Milky Way for a wide range of metallicities. Comment: 5 pages; contribution to the proceedings of "The Milky Way Unravelled by Gaia" conference, Barcelona, Dec 2014
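    To illustrate why the missing metallicities matter: distances to low-mass stars are typically photometric, so the assumed [Fe/H] shifts the distance and hence the tangential velocity. A minimal sketch, with made-up distances, measurements, and escape speed:

```python
import numpy as np

def total_velocity(mu_mas_yr, rv_km_s, dist_kpc):
    """Heliocentric total velocity from proper motion and radial velocity.
    v_tan [km/s] = 4.74 * mu [mas/yr] * d [kpc]."""
    v_tan = 4.74 * mu_mas_yr * dist_kpc
    return np.hypot(v_tan, rv_km_s)

# Photometric distances shift with the assumed [Fe/H]; scan a grid and
# check whether the star stays unbound.  All numbers are placeholders.
V_ESC = 550.0  # km/s, illustrative escape speed
for feh, d in [(-2.0, 1.8), (-1.0, 2.4), (0.0, 3.1)]:  # made-up distances
    v = total_velocity(mu_mas_yr=40.0, rv_km_s=250.0, dist_kpc=d)
    print(f"[Fe/H]={feh:+.1f}  d={d:.1f} kpc  v={v:.0f} km/s  unbound={v > V_ESC}")
```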

    Exploring Beyond Earth's Atmosphere with Human-Machine Teams

    NASA's highly successful Kepler Mission has revolutionized our understanding of the Galaxy. We now know that planets, even Earth-size planets in the habitable zone, are common. With the end of the Kepler Mission, we now look to the future with the Transiting Exoplanet Survey Satellite (TESS), which will discover thousands of exoplanets in orbit around the brightest stars in the sky. In a two-year survey, TESS will perform an all-sky search of more than 200,000 stars for temporary drops in brightness caused by planetary transits. With Kepler and TESS, humanity is finally on the verge of studying the masses, sizes, densities, orbits, and atmospheres of a large cohort of small planets, including a sample of rocky worlds in the habitable zones of their host stars which may prove to host life. The massive data sets generated by Kepler and TESS must be meticulously combed for the weakest planetary signals every month. While a daunting and error-prone task for humans, this is an exciting opportunity for the breakthroughs recently seen in machine learning. Specifically, traditional methods for identifying planet transits require extensive data processing pipelines followed by extensive human vetting. This manual process risks losing information during data processing and introduces inconsistency and biases from individual human vetters. The latest advancements in machine learning allow an objective classifier to minimize these information losses and greatly lessen the burden on the human vetters, in addition to assigning a quality assessment and score to each planet candidate, freeing the humans to concentrate on borderline cases and other more interesting investigations.
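    As a toy illustration of the classification step (not the authors' pipeline), the sketch below trains an off-the-shelf classifier on synthetic summary features of transit candidates and produces the kind of per-candidate score described above; the features, distributions, and labels are all placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic candidates: [transit depth (ppm), duration (hr), SNR]
planets = np.column_stack([rng.normal(500, 100, n // 2),
                           rng.normal(3.0, 0.5, n // 2),
                           rng.normal(12, 3, n // 2)])
false_pos = np.column_stack([rng.normal(300, 200, n // 2),
                             rng.normal(2.0, 1.0, n // 2),
                             rng.normal(7, 3, n // 2)])
X = np.vstack([planets, false_pos])
y = np.repeat([1, 0], n // 2)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# Per-candidate scores let human vetters focus on borderline cases.
scores = clf.predict_proba(X_te)[:, 1]
print("held-out accuracy:", clf.score(X_te, y_te))
```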

    Simulation of factors impeding water quality trading market performance

    Over the past several decades, market-based approaches to natural resource management have received increased attention as a means to cost-effectively achieve environmental quality goals. Following what has been hailed as a success in reducing air pollution, water quality trading (WQT) has more recently been seen as the next great opportunity for reducing water pollution, especially nutrient loading. Numerous trading programs have been pilot tested and/or adopted in states throughout the nation, with more than 70 programs now in operation (Breetz et al., 2004). WQT would allow multiple contributors to surface water degradation to determine how best to meet an overarching collective goal for pollution reduction.

    WQT takes advantage of differences in pollution abatement costs. In the case of point/nonpoint source trading, such as between wastewater treatment plants (WWTPs) and agricultural producers, it is often the agricultural producers who can achieve a given level of nutrient reduction at less cost, through their adoption of best management practices that reduce sedimentation and nutrient loading to surface waters. Trading would allow WWTPs to purchase "credits" generated by producers who reduce their pollution loading, achieving the level of reduction required by a regulatory discharge permit at a lower overall cost. While there is substantial evidence that nonpoint sources have lower nutrient reduction costs than point sources, experience with WQT reveals a common theme: little or no trading activity. The success of WQT seems, in part, to depend on the structure of the market created to bring buyers and sellers together to transact exchanges. These outcomes suggest the presence of obstacles to trading that were not recognized in the design of existing programs.

    To examine the ways that various market imperfections may impact the performance of a WQT market, an agent-based model was constructed to simulate a hypothetical point/nonpoint market. In particular, the market was modeled using a variant of the sequential, bilateral trading algorithm proposed by Atkinson and Tietenberg (1991). Our proposed paper first presents an overview of the simulation modeling technique and then analyzes the effects of two prominent market impediments identified in the WQT literature: information levels and trading ratios.

    Information levels refer to buyers' and sellers' knowledge of each other's bid and offer prices. A frictionless WQT market would be one where all of the potential buyers (i.e., point sources) know all of the sellers' (i.e., nonpoint sources) offer prices and vice versa. In this full-information environment, we can expect trades to be consummated in the order of their gains: the first buyers and sellers to be paired together would be the buyers with the highest bid prices and the sellers with the lowest offer prices. Successive trades have successively smaller gains until the gap between bid and offer prices reaches zero. This is the textbook Walrasian market and would closely approximate a double-auction institution, where all buyers and sellers submit their bids and offers, which are then sorted and matched by a centralized market manager. While the full-information scenario serves as a useful benchmark, most existing WQT markets are decentralized in nature, so that limited information causes traders to be matched in a less efficient sequence. A variety of information levels are possible: one side of the market may have more information than the other (limited information), or neither side may have any knowledge of the other side's bid or offer prices (low information). Each of these scenarios leads to a different sequencing of trades. This paper analyzes the effect of different information levels on market performance, measured in terms of cost savings, the number of credits traded, and the average reduction costs under each scenario.

    Trading ratios are a common component of many existing WQT programs. A typical trading ratio of 2:1 requires a nonpoint source to reduce two pounds of expected nutrient loading in order to receive one pound of trading credit. These ratios serve as a "safety factor" and are incorporated to account for the uncertainty in the measurement and monitoring of nonpoint source loading. Because nonpoint traders must reduce loading by two pounds for every one pound emitted by point source traders, there is a net reduction of one pound of expected loading for each trade. So, while inhibiting some trades from ever occurring, trading ratios also have the potential to improve water quality beyond trading at a 1:1 ratio. This paper examines these tradeoffs in terms of their effects on market performance and then describes procedures that can be used to characterize an optimal trading ratio, if one exists.

    Because WQT programs by nature involve complex interactions between economics and the biophysical world, accurately simulating a real-world WQT market requires at minimum a basic understanding of the types of data that watershed models can provide. This paper concludes by briefly discussing data requirements, points of consideration, and integrative techniques used in the simulation of WQT in real-world watersheds.

    Keywords: water quality trading, market-based, trading ratio, information levels, point source, nonpoint source, simulation, Environmental Economics and Policy, Resource/Energy Economics and Policy
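    To make the matching mechanics concrete, here is a minimal sketch of a sequential, bilateral trading simulation in the spirit of the Atkinson-Tietenberg variant described above. The cost figures, the 2:1 ratio, and the random pairing used for the low-information case are illustrative assumptions, not the paper's calibration.

```python
import random

def simulate(buyers, sellers, ratio=2.0, full_info=True, seed=0):
    """buyers: list of bid prices ($/credit); sellers: list of abatement
    costs ($/lb).  With a trading ratio r, one credit costs the seller
    r lbs of reduction, so the seller's effective offer is r * cost."""
    rng = random.Random(seed)
    buyers, sellers = buyers[:], sellers[:]
    if full_info:                   # Walrasian-style ordering
        buyers.sort(reverse=True)   # highest bids trade first
        sellers.sort()              # cheapest abatement trades first
    else:                           # low information: random pairing
        rng.shuffle(buyers)
        rng.shuffle(sellers)
    gains, trades = 0.0, 0
    for bid, cost in zip(buyers, sellers):
        offer = ratio * cost
        if bid > offer:             # trade only if both sides gain
            gains += bid - offer
            trades += 1
    return trades, gains

buyers = [90, 80, 70, 60, 50]       # point-source bids
sellers = [10, 20, 30, 40, 50]      # nonpoint abatement costs
for info in (True, False):
    t, g = simulate(buyers, sellers, ratio=2.0, full_info=info)
    print(f"full_info={info}:  trades={t}, cost savings=${g}")
```

    Under full information the highest-gain trades execute first; under random matching some mutually beneficial pairings never occur, reproducing the low trading volumes discussed above.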

    Identification of a circular intermediate in the transfer and transposition of Tn4555, a mobilizable transposon from Bacteroides spp.

    Transmissible cefoxitin (Fx) resistance in Bacteroides vulgatus CLA341 was associated with the 12.5-kb mobilizable transposon Tn4555, which encodes the β-lactamase gene cfxA. Transfer occurred by a conjugation-like mechanism, was stimulated by growth of donor cells with tetracycline (Tc), and required the presence of a Bacteroides chromosomal Tcr element. Transconjugants resistant to either Fx, Tc, or both drugs were obtained, but only Fxr Tcr isolates could act as donors of Fxr in subsequent matings. Transfer of Fxr could be restored in Fxr Tcs strains by the introduction of a conjugal Tcr element from Bacteroides fragilis V479-1. A covalently closed circular DNA form of Tn4555 was observed in donor cells by Southern hybridization, and the levels of this circular transposon increased significantly in cells grown with Tc. Both the cfxA gene and the Tn4555 mobilization region hybridized to the circular DNA, suggesting that this was a structurally intact transposon unit. Circular transposon DNA purified by CsCl-ethidium bromide density gradient centrifugation was used to transform Tcs B. fragilis 638, and Fxr transformants were obtained. Both the circular form and the integrated Tn4555 were observed in transformants, but the circular form was present at less than one copy per chromosomal equivalent. Examination of genomic DNA from Fxr transformants and transconjugants revealed that Tn4555 can insert at a wide variety of chromosomal sites. Multiple transposon insertions were present in many of the transconjugants, indicating that there was no specific barrier to the introduction of a second transposon copy. Originally published in Journal of Bacteriology, Vol. 175, No. 9, May 1993.

    Biochemical and genetic analyses of a catalase from the anaerobic bacterium Bacteroides fragilis.

    A single catalase enzyme was produced by the anaerobic bacterium Bacteroides fragilis when cultures at late log phase were shifted to aerobic conditions. Under anaerobic conditions, catalase activity was detected in stationary-phase cultures, indicating that not only oxygen exposure but also starvation may affect the production of this antioxidant enzyme. The purified enzyme showed peroxidatic activity when pyrogallol was used as an electron donor. It is a hemoprotein containing one heme molecule per holomer and has an estimated molecular weight of 124,000 to 130,000. The catalase gene was cloned by screening a B. fragilis library for complementation of catalase activity in an Escherichia coli catalase mutant (katE katG) strain. The cloned gene, designated katB, encoded a catalase enzyme with electrophoretic mobility identical to that of the purified protein from the B. fragilis parental strain. The nucleotide sequence of katB revealed a 1,461-bp open reading frame for a protein of 486 amino acids with a predicted molecular weight of 55,905. This is very close to the 60,000 Da determined by denaturing sodium dodecyl sulfate-polyacrylamide gel electrophoresis of the purified catalase and indicates that the native enzyme is composed of two identical subunits. The N-terminal amino acid sequence of the purified catalase, obtained by Edman degradation, confirmed that it is a product of katB. The amino acid sequence of KatB showed high similarity to Haemophilus influenzae HktE (71.6% amino acid identity, 66% nucleotide identity), as well as to gram-positive bacterial and mammalian catalases. No similarities to bacterial catalase-peroxidase-type enzymes were found. The active-site residues, proximal and distal heme-binding ligands, and NADPH-binding residues of the bovine liver catalase-type enzyme are highly conserved in B. fragilis KatB. Originally published in Journal of Bacteriology, Vol. 177, No. 11, June 1995.

    Analysis of Economic Depreciation for Multi-Family Property

    This paper uses a hedonic pricing model and National Council of Real Estate Investment Fiduciaries data to estimate economic depreciation for multi-family real estate. The findings indicate that investment grade multi-family housing depreciates approximately 2.7% per year in real terms based on total property value. This implies a depreciation rate for just the building of about 3.25% per year. With 2% inflation, this suggests a nominal depreciation rate of about 5.25% per year. Converted into a straight-line depreciation rate that has the same present value, this suggests a depreciable life of 30.5 years - as compared to 27.5 years allowed under the current tax laws. Thus, these laws are slightly favorable to multi-family properties by providing a tax depreciation rate that exceeds economic depreciation, which is in part due to inflation that has been less than expected during the past decade.
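    The conversion from a geometric (declining-balance) depreciation rate to an equal-present-value straight-line life can be reproduced numerically. The sketch below assumes an infinite-horizon geometric deduction stream and a 5% discount rate, which the abstract does not state; with those assumptions it lands near the reported 30.5-year life.

```python
def pv_geometric(d, r):
    """PV of deductions d*(1-d)**(t-1) per $1 of value, infinite horizon."""
    return d / (r + d)

def pv_straight_line(L, r):
    """PV of deductions 1/L per year for L years."""
    return (1 - (1 + r) ** -L) / (r * L)

def equivalent_life(d, r, lo=1.0, hi=100.0):
    """Bisect for the life L with pv_straight_line(L, r) == pv_geometric(d, r)."""
    target = pv_geometric(d, r)
    for _ in range(60):
        mid = (lo + hi) / 2
        # PV of straight-line deductions falls as L grows.
        if pv_straight_line(mid, r) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# 5.25% nominal geometric rate, assumed 5% discount rate.
print(f"{equivalent_life(d=0.0525, r=0.05):.1f} years")  # ~30 years
```

    The result is sensitive to the assumed discount rate, which is presumably why the paper reports the equal-present-value life rather than a simple reciprocal of the rate.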

    Choice Experiments to Assess Farmers' Willingness to Participate in a Water Quality Trading Market

    Interest has grown in Water Quality Trading (WQT) as a means to achieve water quality goals, with more than 70 such programs now in operation in the United States. Substantial evidence exists that nonpoint sources can reduce nutrient loading at a much lower cost than point sources, implying the existence of gains from trade. Despite the potential gains, however, the most commonly noted feature of existing WQT markets is low trading volume, with many markets resulting in zero trades. This paper evaluates one explanation for the lack of participation from agricultural nonpoint sources: we test for and quantify the intangible costs that may deter farmers from trading even when the monetary benefits of doing so outweigh the observable out-of-pocket costs. We do so by designing and implementing a series of choice experiments to elicit the WQT behavior of Great Plains crop producers in different situations. Attributes of the choice experiment included market rules and features (e.g., application time and effort, penalties for violations, means of monitoring compliance) that may affect farmers' willingness to trade. The choice experiments were conducted with a total of 135 producers at four locations in the state of Kansas between August 2006 and January 2007. A random parameters logit model is used to analyze the resulting data, revealing diversity in the way that the attributes affect farmers' choices.

    Keywords: Resource/Energy Economics and Policy
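    For readers unfamiliar with the estimator, a random parameters (mixed) logit choice probability is a standard logit averaged over draws of randomly distributed coefficients, which is what captures the heterogeneity in farmers' tastes. A minimal simulation sketch, with placeholder attributes and taste distributions rather than the estimated Kansas model:

```python
import numpy as np

rng = np.random.default_rng(1)

def mixed_logit_prob(X, beta_mean, beta_sd, n_draws=2000):
    """X: (n_alternatives, n_attributes) design matrix for one choice set.
    Coefficients are random across individuals: beta ~ N(beta_mean, beta_sd**2)."""
    draws = rng.normal(beta_mean, beta_sd, size=(n_draws, len(beta_mean)))
    utils = draws @ X.T                        # (n_draws, n_alternatives)
    expu = np.exp(utils - utils.max(axis=1, keepdims=True))
    probs = expu / expu.sum(axis=1, keepdims=True)
    return probs.mean(axis=0)                  # simulated choice probabilities

# Two hypothetical trading contracts vs. opting out:
# attributes are [payment ($/acre), paperwork hours].
X = np.array([[30.0, 4.0],
              [20.0, 1.0],
              [ 0.0, 0.0]])                    # status quo: no trade
beta_mean = np.array([0.08, -0.5])             # likes payment, dislikes effort
beta_sd = np.array([0.03, 0.4])                # taste heterogeneity
print(mixed_logit_prob(X, beta_mean, beta_sd))
```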

    Kepler Mission Stellar and Instrument Noise Properties Revisited

    An earlier study of the Kepler Mission noise properties on time scales of primary relevance to the detection of exoplanet transits found that the higher-than-expected noise followed to a large extent from the stars rather than from instrument or data analysis performance. That study, covering the first six quarters of Kepler data, is extended here to the full four years ultimately comprising the mission. Efforts to improve the pipeline data analysis have succeeded in reducing noise levels modestly, as evidenced by smaller values derived from the current data products. The new analyses of noise properties on transit time scales show significant changes in the component attributed to instrument and data analysis, with essentially no change in the inferred stellar noise. We also extend the analyses to time scales of several days, instead of several hours, to better sample stellar noise that follows from magnetic activity. On the longer time scale, the stellar noise for solar-type stars shifts to values smaller than solar. Comment: 10 pages, 8 figures, accepted by AJ
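    As a rough illustration of a transit-timescale noise metric (a simple stand-in, not the mission's CDPP pipeline), the sketch below bins a relative flux series to a chosen timescale and reports the robust scatter of the binned points; the cadence, timescale, and noise level are placeholders.

```python
import numpy as np

def binned_noise_ppm(flux, cadence_min=29.4, timescale_hr=6.0):
    """Robust scatter of `flux` (relative, ~1.0) after binning to
    `timescale_hr`, in parts per million."""
    n_per_bin = max(1, int(round(timescale_hr * 60 / cadence_min)))
    n_bins = len(flux) // n_per_bin
    binned = flux[: n_bins * n_per_bin].reshape(n_bins, n_per_bin).mean(axis=1)
    # 1.4826 * MAD is a robust stand-in for the standard deviation.
    mad = np.median(np.abs(binned - np.median(binned)))
    return 1.4826 * mad * 1e6

rng = np.random.default_rng(2)
flux = 1.0 + rng.normal(0, 300e-6, 20000)   # white noise, 300 ppm per cadence
print(f"{binned_noise_ppm(flux):.0f} ppm")  # ~300/sqrt(12) ~ 87 ppm
```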

    Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

    The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré patterns, uncorrected. We illustrate several approaches in which applying systematic error correction algorithms to the pixel time series, rather than to the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré-pattern biases, greater sensitivity to radiation-induced sudden pixel sensitivity dropouts (SPSDs), improved precision of co-trending basis vectors (CBVs), and a means of distinguishing stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients, derived in the fit of pixel time series to the CBVs, as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of the so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series that is correlated with the CBVs, as well as relative pixel gain, proper motion, and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.
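    A minimal sketch of the pixel-level cotrending idea described above: fit each pixel's time series against the CBVs by ordinary least squares and remove the fitted systematic component. The synthetic CBVs, pixel data, and noise levels are placeholders, and the real pipeline's robust fitting and priors are omitted.

```python
import numpy as np

def cotrend_pixels(pixels, cbvs):
    """pixels: (n_cadences, n_pixels); cbvs: (n_cadences, n_cbvs).
    Returns (corrected pixels, per-pixel CBV coefficients)."""
    design = np.column_stack([np.ones(len(cbvs)), cbvs])   # offset + CBVs
    coeffs, *_ = np.linalg.lstsq(design, pixels, rcond=None)
    systematics = design[:, 1:] @ coeffs[1:]               # keep the offset
    return pixels - systematics, coeffs[1:]

rng = np.random.default_rng(3)
t = np.linspace(0, 90, 4000)                  # one quarter, arbitrary units
cbvs = np.column_stack([np.sin(t / 10), t / t.max()])      # fake systematics
gains = rng.normal(1.0, 0.2, size=(2, 5))     # per-pixel response to each CBV
pixels = 100 + cbvs @ gains + rng.normal(0, 0.05, (len(t), 5))
corrected, coeffs = cotrend_pixels(pixels, cbvs)
print("residual scatter per pixel:", corrected.std(axis=0).round(3))
```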