
    Net Ecosystem Carbon Balance in a North Carolina, USA, Salt Marsh

    Salt marshes have among the highest carbon (C) burial rates of any ecosystem and often rely on C accumulation to gain elevation and persist in locations with accelerating sea level rise. Net ecosystem carbon balance (NECB), the accumulation or loss of C resulting from vertical CO2 and CH4 gas fluxes, lateral C fluxes, and sediment C inputs, varies across salt marshes; thus, extrapolation of NECB to an entire marsh is challenging. Anthropogenic nitrogen (N) inputs to salt marshes affect NECB by influencing each of its components, but differences in the impacts of fertilization between edge and interior marsh must be considered when scaling up. NECB was estimated for the 0.5 km2 Spartina alterniflora marsh area of Freeman Creek, NC, under control and fertilized conditions at both interior and edge berm sites. Annual CO2 fluxes were nearly balanced at control sites, but fertilization significantly increased net CO2 emissions at edge sites. Lateral C export, modeled using respiration rates, represented a significant C loss that increased with fertilization in both edge and interior marsh. Sediment C input was a significant C source in the interior, nearly doubling with fertilization, but represented a small source on the edge. When C exchanges were extrapolated to the entire marsh, including the edge, which comprised 17% of the marsh area, the marsh displayed a net loss of C despite a net C gain in the interior. Fertilization increased the net C loss fivefold. Extrapolation of NECB to whole marshes requires inclusion of C fluxes for both edge and interior marsh.
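
    The scaling step described above amounts to an area-weighted sum of zone-level balances. The sketch below illustrates that bookkeeping in Python; the per-zone flux values and sign conventions are illustrative assumptions, not values reported by the study (only the 17% edge fraction and 0.5 km2 marsh area come from the abstract).

```python
# Area-weighted NECB extrapolation: a minimal sketch with hypothetical
# per-zone fluxes (g C m^-2 yr^-1). Positive terms are C gains, negative
# terms are C losses; the study measured these components directly.

def necb(co2_uptake, ch4_flux, lateral_export, sediment_input):
    """Net ecosystem carbon balance for one marsh zone (g C m^-2 yr^-1)."""
    return co2_uptake + ch4_flux - lateral_export + sediment_input

edge = necb(co2_uptake=-150.0, ch4_flux=-2.0, lateral_export=80.0, sediment_input=10.0)
interior = necb(co2_uptake=10.0, ch4_flux=-1.0, lateral_export=40.0, sediment_input=60.0)

f_edge = 0.17                                  # edge fraction of marsh area (abstract)
area_m2 = 0.5e6                                # 0.5 km^2 marsh area (abstract)
mean_necb = f_edge * edge + (1 - f_edge) * interior
total_c = mean_necb * area_m2                  # g C yr^-1 for the whole marsh
print(f"marsh-mean NECB: {mean_necb:.1f} g C m^-2 yr^-1, total: {total_c:.0f} g C yr^-1")
```

    With these placeholder numbers the interior gains C while the whole marsh loses C, mirroring the qualitative result reported above.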

    Reliable solid-state circuits Semiannual report no. 2, Jun. 1 - Nov. 30, 1965

    Pulse width modulator and other microminiaturized electronic equipment for space-age applications

    A method for using shoreline morphology to predict suspended sediment concentration in tidal creeks

    Improving mechanistic prediction of shoreline response to sea level rise is currently limited by 1) the morphologic complexity of tidal creek shorelines, which confounds the application of mechanistic models, and 2) the scarcity of suspended sediment measurements needed to parameterize such models. To address these challenges, we developed a metric to distinguish two morphodynamic classes of tidal creek and tested whether this metric could be used to predict suspended sediment concentration. We studied three small tidal creeks in North Carolina, U.S.A. We collected suspended sediment at one non-tidal and two tidal sites in each creek and measured the wetland and channel width using a geographic information system. In each creek, tidal harmonics were measured for one year, sediment accretion on the salt marsh was measured for three years, and shoreline erosion was measured from aerial photographs spanning 50 years. Additional total suspended solids measurements from seven creeks reported in a national database supplemented our analysis. Among the three intensively studied creeks, shoreline erosion was highest in the most embayed creek (having a channel wider than the adjoining wetlands) and lowest in the wetland-dominated creek (having a channel narrower than the adjoining wetlands). The wetland sediment accretion rate in the wetland-dominated creek was four times higher than that in the embayed creek. The wetland-dominated tidal creek had over twice the suspended sediment of the most embayed creek. Based on these results, we conclude that our metric of contrasting embayed and wetland-dominated creek morphology provides a guide for choosing between two types of morphodynamic models that are widely used to predict wetland shoreline change. This metric also allowed us to parse the 10 tidal creeks studied into two groups with different suspended sediment concentrations. This relationship between suspended sediment concentration and creek morphology provides a method to estimate sediment concentration for individual tidal creek shorelines from spatial data alone, enabling more accurate parameterization of shoreline change models.
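
    A minimal sketch of the morphology metric described above, assuming it is computed as the ratio of channel width to adjoining wetland width (the function name, units, and unit threshold are illustrative; the study derived both widths from GIS measurements):

```python
def classify_creek(channel_width_m: float, wetland_width_m: float) -> str:
    """Classify a tidal creek reach as 'embayed' (channel wider than the
    adjoining wetland) or 'wetland-dominated' (channel narrower)."""
    ratio = channel_width_m / wetland_width_m
    return "embayed" if ratio > 1.0 else "wetland-dominated"

# Hypothetical GIS-derived widths for two reaches
print(classify_creek(channel_width_m=180.0, wetland_width_m=60.0))   # -> embayed
print(classify_creek(channel_width_m=40.0, wetland_width_m=150.0))   # -> wetland-dominated
```

    Under this reading, each class maps to a different suspended sediment regime and hence to a different choice of morphodynamic shoreline-change model.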

    Ultra-high throughput functional enrichment of large monoamine oxidase (MAO-N) libraries by fluorescence activated cell sorting

    Directed evolution enables the improvement and optimisation of enzymes for particular applications and is a valuable tool for biotechnology and synthetic biology. However, studies are often limited in their scope by the inability to screen very large numbers of variants to identify improved enzymes. One class of enzyme for which a universal, operationally simple ultra-high throughput (>10⁶ variants per day) assay is not available is the flavin adenine dinucleotide (FAD) dependent oxidases. The current high throughput assay involves a visual, colourimetric, colony-based screen; however, this is not suitable for very large libraries and does not enable quantification of the relative fitness of variants. To address this, we describe an optimised method for the sensitive detection of oxidase activity within single Escherichia coli (E. coli) cells, using the monoamine oxidase from Aspergillus niger, MAO-N, as a model system. In contrast to other methods for the screening of oxidase activity in vivo, this method does not require cell surface expression, emulsion formation or the addition of an extracellular peroxidase. Furthermore, we show that fluorescence activated cell sorting (FACS) of large libraries derived from MAO-N under the assay conditions can enrich the library in functional variants at much higher rates than the colony-based method. We demonstrate its use for directed evolution by identifying a new mutant of MAO-N with improved activity towards a novel secondary amine substrate. This work demonstrates, for the first time, an ultra-high throughput screening methodology widely applicable for the directed evolution of FAD dependent oxidases in E. coli.
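
    One way to quantify the enrichment reported above is a fold-enrichment ratio across a sorting round. The sketch below assumes the common definition (functional fraction after sorting divided by functional fraction before), with made-up counts rather than the study's data:

```python
def fold_enrichment(func_before: int, total_before: int,
                    func_after: int, total_after: int) -> float:
    """Ratio of the functional-variant fraction after a FACS round to before."""
    return (func_after / total_after) / (func_before / total_before)

# Hypothetical library: 0.05% functional before sorting, 70% after
print(fold_enrichment(func_before=50, total_before=100_000,
                      func_after=700, total_after=1_000))  # -> 1400.0
```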

    Sea Level Rise Explains Changing Carbon Accumulation Rates in a Salt Marsh Over the Past Two Millennia

    High rates of carbon burial observed in wetland sediments have garnered attention as a potential “natural fix” to reduce the concentration of carbon dioxide (CO2) in Earth's atmosphere. A carbon accumulation rate (CAR) can be determined through various methods that integrate a carbon stock over different time periods, ranging from decades to millennia. Our goal was to assess how CAR changed over the lifespan of a salt marsh. We applied a geochronology to a series of salt marsh cores using both 14C and 210Pb markers to calculate CARs that were integrated between 35 and 2,460 years before present. CAR was 39 g C·m−2·year−1 when integrated over millennia but was upward of 148 g C·m−2·year−1 for the past century. We present additional evidence to account for this variability by linking it to changes in relative sea level rise (RSLR), where higher rates of RSLR were associated with higher CARs. Thus, the CAR calculated for a wetland should integrate timescales that capture the influence of contemporary RSLR. Caution should therefore be exercised not to use a CAR calculated over inappropriately short or long timescales as a current assessment or forecasting tool for the climate change mitigation potential of a wetland.
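
    The timescale dependence discussed above follows directly from how a CAR is computed: a carbon stock above a dated horizon divided by that horizon's age. A minimal sketch, with placeholder carbon densities, depths, and ages rather than the study's data:

```python
def car(carbon_density_g_cm3: float, depth_cm: float, age_yr: float) -> float:
    """Carbon accumulation rate (g C m^-2 yr^-1) above a dated horizon."""
    stock_g_m2 = carbon_density_g_cm3 * depth_cm * 1e4   # 1 m^2 = 1e4 cm^2
    return stock_g_m2 / age_yr

# Century-scale horizon (e.g., a 210Pb marker) vs. millennial horizon (e.g., 14C)
print(car(carbon_density_g_cm3=0.030, depth_cm=35.0, age_yr=100.0))    # ~105
print(car(carbon_density_g_cm3=0.020, depth_cm=500.0, age_yr=2460.0))  # ~41
```

    Because faster contemporary RSLR inflates the stock accumulated in the shallow, young section of the core, a short-integration CAR comes out much higher than a millennial one, as in the abstract.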

    Sequential design of computer experiments for the estimation of a probability of failure

    This paper deals with the problem of estimating the volume of the excursion set of a function $f:\mathbb{R}^d \to \mathbb{R}$ above a given threshold, under a probability measure on $\mathbb{R}^d$ that is assumed to be known. In the industrial world, this corresponds to the problem of estimating a probability of failure of a system. When only an expensive-to-simulate model of the system is available, the budget for simulations is usually severely limited, and therefore classical Monte Carlo methods ought to be avoided. One of the main contributions of this article is to derive SUR (stepwise uncertainty reduction) strategies from a Bayesian decision-theoretic formulation of the problem of estimating a probability of failure. These sequential strategies use a Gaussian process model of $f$ and aim at performing evaluations of $f$ as efficiently as possible to infer the value of the probability of failure. We compare these strategies to other strategies also based on a Gaussian process model for estimating a probability of failure.
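
    A minimal, hedged sketch of the setup: a Gaussian process surrogate for $f$ is refit as points are added, and a plug-in estimate of the probability of failure $\alpha = P(f(X) > u)$ is read off a Monte Carlo sample from the known input measure. The acquisition rule below (evaluate where the event $\{f > u\}$ is most ambiguous under the GP) is a simplified stand-in, not the paper's SUR criteria.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def f(x):
    """Toy stand-in for an expensive-to-simulate model, f: R^1 -> R."""
    return np.sin(3 * x[:, 0]) + x[:, 0] ** 2

u = 0.8                                        # failure threshold
X_mc = rng.uniform(-2, 2, size=(5000, 1))      # sample from the known input measure

X = rng.uniform(-2, 2, size=(5, 1))            # small initial design
y = f(X)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)

for _ in range(15):                            # severely limited evaluation budget
    gp.fit(X, y)
    mu, sd = gp.predict(X_mc, return_std=True)
    p = norm.sf((u - mu) / np.maximum(sd, 1e-12))   # P(f(x) > u | GP)
    x_new = X_mc[np.argmax(p * (1 - p))][None, :]   # most ambiguous point
    X, y = np.vstack([X, x_new]), np.append(y, f(x_new))

gp.fit(X, y)
mu, sd = gp.predict(X_mc, return_std=True)
alpha_hat = norm.sf((u - mu) / np.maximum(sd, 1e-12)).mean()
print(f"plug-in estimate of P(f(X) > u): {alpha_hat:.4f}")
```

    The paper's SUR strategies instead choose each evaluation to minimize the expected posterior uncertainty on $\alpha$ itself, which is typically more sample-efficient than this pointwise heuristic.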

    Using estrus detection patches to optimally time insemination improved pregnancy risk in suckled beef cows enrolled in a fixed-time artificial insemination program

    Citation: Hill, S. L., Grieger, D. M., Olson, K. C., Jaeger, J. R., Dahlen, C. R., Bridges, G. A., . . . Stevenson, J. S. (2016). Using estrus detection patches to optimally time insemination improved pregnancy risk in suckled beef cows enrolled in a fixed-time artificial insemination program. Journal of Animal Science, 94(9), 3703-3710. doi:10.2527/jas2016-0469

    A multilocation study examined pregnancy risk (PR) after delaying AI in suckled beef cows from 60 to 75 h when estrus had not been detected by 60 h in response to a 7-d CO-Synch + progesterone insert (CIDR) timed AI (TAI) program (d −7: CIDR insert concurrent with an injection of GnRH; d 0: PGF2α injection and removal of CIDR insert; and GnRH injection at TAI [60 or 75 h after CIDR removal]). A total of 1,611 suckled beef cows at 15 locations in 9 states (CO, IL, KS, MN, MS, MT, ND, SD, and VA) were enrolled. Before the fixed-time AI program was applied, BCS was assessed and blood samples were collected. Estrus was defined to have occurred when an estrus detection patch was >50% colored (activated). Pregnancy was determined 35 d after AI via transrectal ultrasound. Cows (n = 746) detected in estrus by 60 h (46.3%) after CIDR removal were inseminated and treated with GnRH at AI (Control). Remaining nonestrous cows were allocated within location to 3 treatments on the basis of parity and days postpartum: 1) GnRH injection and AI at 60 h (early-early = EE; n = 292), 2) GnRH injection at 60 h and AI at 75 h (early-delayed = ED; n = 282), or 3) GnRH injection and AI at 75 h (delayed-delayed = DD; n = 291). Control cows had a greater (P < 0.01) PR (64.2%) than the other treatments (EE = 41.7%, ED = 52.8%, DD = 50.0%). Use of estrus detection patches to delay AI in cows not in estrus by 60 h after CIDR insert removal (ED and DD treatments) increased (P < 0.05) PR to TAI when compared with cows in the EE treatment. More (P < 0.001) cows that showed estrus by 60 h conceived to AI at 60 h than those not showing estrus (64.2% vs. 48.1%). Approximately half (49.2%) of the cows not in estrus by 60 h had activated patches by 75 h, resulting in a greater (P < 0.05) PR than their nonestrous herd mates in the EE (46.1% vs. 34.5%), ED (64.2% vs. 39.2%), and DD (64.8% vs. 31.5%) treatments, respectively. Overall, cows showing estrus by 75 h (72.7%) had a greater (P < 0.001) PR to AI (61.3% vs. 37.9%) than cows not showing estrus. Use of estrus detection patches to allow for a delayed AI in cows not in estrus by 60 h after removal of the CIDR insert improved PR to TAI by optimizing the timing of AI in those cows.