
    Axonal transport and turnover of proline- and leucine-labeled protein in the goldfish visual system

    The suitability of radioactively labeled proline as a marker of axonally transported protein in the goldfish visual system is further investigated and compared with another amino acid, leucine, in double-label experiments. Intraocularly injected proline is incorporated into protein in the eye five times more efficiently than is leucine, while local labeling of brain protein from precursor that has left the eye and entered the blood (observed in the ipsilateral optic tectum) is five- to eight-fold less from proline than from leucine. The difference is attributed to the superior transport of leucine, an essential amino acid, into the brain from the blood. Once in the brain, the apparent rates of incorporation of the two amino acids are similar. Proline- or leucine-labeled, axonally transported proteins have a longer apparent half-life in the brain than do proteins labeled from intracranial injection of the precursors. By either route, proline-labeled proteins have a longer apparent half-life than leucine-labeled proteins. It is proposed that proline, released from protein breakdown, is reutilized to a greater extent than is leucine. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/65647/1/j.1471-4159.1974.tb10757.x.pd

    Intracerebroventricular administration of N-acetylaspartylglutamate (NAAG) peptidase inhibitors is analgesic in inflammatory pain

    Background: The peptide neurotransmitter N-acetylaspartylglutamate (NAAG) is the third most prevalent transmitter in the mammalian central nervous system. Local, intrathecal and systemic administration of inhibitors of enzymes that inactivate NAAG decreases responses to inflammatory pain in rat models. Consistent with NAAG's activation of group II metabotropic glutamate receptors (mGluRs), this analgesia is blocked by a group II antagonist. Results: This research aimed at determining whether the analgesia obtained following systemic administration of NAAG peptidase inhibitors is due to NAAG activation of group II mGluRs in brain circuits that mediate perception of inflammatory pain. NAAG and the NAAG peptidase inhibitors ZJ43 and 2-PMPA were microinjected into a lateral ventricle prior to injection of formalin in the rat footpad. Each treatment reduced the early and late phases of the formalin-induced inflammatory pain response in a dose-dependent manner. The group II mGluR antagonist reversed these analgesic effects, consistent with the conclusion that analgesia was mediated by increasing NAAG levels and the peptide's activation of group II receptors. Conclusion: These data contribute to proof of the concept that NAAG peptidase inhibition is a novel therapeutic approach to inflammatory pain and that these inhibitors achieve analgesia by elevating synaptic levels of NAAG within pain-processing circuits in the brain.

    Temperature-dependent consolidation of puromycin-susceptible memory in the goldfish

    Memory of a shock-avoidance task in goldfish (Carassius auratus) maintained at 20°C shows a temporal gradient of insusceptibility to post-trial injection of puromycin upon testing 7 days later. Treatment with the antimetabolite 24 hr after training has no effect on retention. There is a significant decrease in the puromycin-induced memory loss if fish are warmed to 30°C for a 90-minute interval between conditioning and injection of puromycin. If fish are cooled to 4.5°C for 24 hr between learning and puromycin injection, a significant block of memory results. There are in addition time-independent effects of the cold treatment on performance. Although temperature increase from 20 to 30°C does not in itself affect retention, it does cause a 3-fold stimulation of incorporation of 3H-leucine into brain protein. Decrease in temperature from 20 to 4.5°C reduces protein labeling by 86-97 percent. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/33819/1/0000076.pd

    Mapping evapotranspiration with high-resolution aircraft imagery over vineyards using one- and two-source modeling schemes

    Thermal and multispectral remote sensing data from low-altitude aircraft can provide the high spatial resolution necessary for sub-field (10 m) and plant canopy (1 m) scale evapotranspiration (ET) monitoring. In this study, high-resolution (sub-meter-scale) thermal infrared and multispectral shortwave data from aircraft are used to map ET over vineyards in central California with the two-source energy balance (TSEB) model and with a simple model with immediate operational capabilities called DATTUTDUT (Deriving Atmosphere Turbulent Transport Useful To Dummies Using Temperature). The latter uses contextual information within the image to scale between radiometric land surface temperature (TR) values representing the hydrologic limits of potential ET and a non-evaporative surface. Imagery from 5 days throughout the growing season is used for mapping ET at the sub-field scale. The performance of the two models is evaluated using tower-based measurements of sensible (H) and latent heat (LE) flux or ET. The comparison indicates that TSEB was able to derive reasonable ET estimates under varying conditions, likely due to the physically based treatment of the energy and surface temperature partitioning between the soil/cover crop inter-row and vine canopy elements. On the other hand, DATTUTDUT performance was somewhat degraded, presumably because the simple scaling scheme does not consider differences in the two sources (vine and inter-row) of heat and temperature contributions or the effect of surface roughness on the efficiency of heat exchange. Maps of the evaporative fraction (EF = LE/(H + LE)) from the two models had similar spatial patterns but different magnitudes in some areas within the fields on certain days. Large EF discrepancies between the models were found on 2 of the 5 days (DOY 162 and 219), when there were significant differences with the tower-based ET measurements, particularly using the DATTUTDUT model. These differences in EF between the models translate to significant variations in daily water use estimates for these 2 days for the vineyards. Model sensitivity analysis demonstrated the high degree of sensitivity of the TSEB model to the accuracy of the TR data, while the DATTUTDUT model was insensitive to systematic errors in TR, as is the case with contextual-based models. However, it is shown that the study domain and spatial resolution will significantly influence the ET estimation from the DATTUTDUT model. Future work is planned for developing a hybrid approach that leverages the strengths of both modeling schemes and is simple enough to be used operationally with high-resolution imagery.
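
    As an illustration of the contextual scaling that DATTUTDUT relies on, the sketch below scales each radiometric temperature pixel between a cold percentile (taken to represent the potential-ET limit) and a hot percentile (the non-evaporative limit) and converts the resulting EF to latent heat flux with the available energy. This is a minimal sketch of the general approach, not the published implementation; the percentile thresholds, function names, and flux values are illustrative assumptions.

```python
import numpy as np

def contextual_ef(t_r, cold_pct=0.5, hot_pct=99.5):
    """Scale radiometric surface temperature (K) to evaporative fraction EF
    between a 'cold' (potential ET) and a 'hot' (non-evaporative) extreme
    identified within the image itself (percentile choices are assumptions)."""
    t_r = np.asarray(t_r, dtype=float)
    t_cold = np.nanpercentile(t_r, cold_pct)  # coldest pixels ~ potential ET
    t_hot = np.nanpercentile(t_r, hot_pct)    # hottest pixels ~ no evaporation
    ef = (t_hot - t_r) / (t_hot - t_cold)     # linear scaling between extremes
    return np.clip(ef, 0.0, 1.0)

def latent_heat(ef, rn, g):
    """LE = EF * (Rn - G); equals EF * (H + LE) if the energy balance closes."""
    return ef * (rn - g)

if __name__ == "__main__":
    t_map = np.array([[300.0, 305.0, 312.0],
                      [302.0, 308.0, 315.0]])   # toy temperature map, K
    ef = contextual_ef(t_map)
    le = latent_heat(ef, rn=500.0, g=50.0)      # illustrative fluxes, W m-2
    print(ef.round(2))
    print(le.round(1))
```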

    Estimating and Reporting on the Quality of Inpatient Stroke Care by Veterans Health Administration Medical Centers

    Background: Reporting of quality indicators (QIs) in Veterans Health Administration Medical Centers is complicated by estimation error caused by small numbers of eligible patients per facility. We applied multilevel modeling and empirical Bayes (EB) estimation in addressing this issue in performance reporting of stroke care quality in the Medical Centers. Methods and Results: We studied a retrospective cohort of 3812 veterans admitted to 106 Medical Centers with ischemic stroke during fiscal year 2007. The median number of study patients per facility was 34 (range, 12–105). Inpatient stroke care quality was measured with 13 evidence-based QIs. Eligible patients could either pass or fail each indicator. Multilevel modeling of a patient's pass/fail on individual QIs was used to produce facility-level EB-estimated QI pass rates and confidence intervals. The EB estimation reduced interfacility variation in QI rates. Small facilities and those with exceptionally high or low rates were most affected. We recommended 8 of the 13 QIs for performance reporting: dysphagia screening, National Institutes of Health Stroke Scale documentation, early ambulation, fall risk assessment, pressure ulcer risk assessment, Functional Independence Measure documentation, lipid management, and deep vein thrombosis prophylaxis. These QIs displayed sufficient variation across facilities, had room for improvement, and identified sites with performance that was significantly above or below the population average. The remaining 5 QIs were not recommended because of too few eligible patients or high pass rates with little variation. Conclusions: Considerations of statistical uncertainty should inform the choice of QIs and their application to performance reporting.
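
    The shrinkage behaviour described above can be approximated with a much simpler stand-in for the multilevel model: an empirical Bayes beta-binomial estimator that pulls each facility's raw pass rate toward the population mean, most strongly for facilities with few eligible patients. The sketch below only illustrates that idea and is not the authors' model (they fit multilevel models per QI); the method-of-moments prior and the example counts are assumptions.

```python
import numpy as np

def eb_shrunk_rates(passes, eligible):
    """Empirical Bayes shrinkage of facility pass rates toward the population
    mean via a beta-binomial model; small facilities are shrunk the most."""
    passes = np.asarray(passes, dtype=float)
    n = np.asarray(eligible, dtype=float)
    raw = passes / n
    # Crude method-of-moments prior from the raw rates (ignores the extra
    # binomial sampling noise, so real shrinkage would be somewhat stronger)
    mean = np.average(raw, weights=n)
    var = max(np.average((raw - mean) ** 2, weights=n), 1e-6)
    strength = max(mean * (1 - mean) / var - 1, 1.0)  # prior pseudo-count
    alpha, beta = mean * strength, (1 - mean) * strength
    # Posterior mean pass rate per facility
    return (passes + alpha) / (n + alpha + beta)

if __name__ == "__main__":
    eligible = np.array([12, 34, 105, 20])   # patients per facility (toy data)
    passes = np.array([12, 20, 70, 5])       # QI passes per facility (toy data)
    print(eb_shrunk_rates(passes, eligible).round(3))
```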

    Evaluating the two-source energy balance model using local thermal and surface flux observations in a strongly advective irrigated agricultural area

    Application and validation of many thermal remote sensing-based energy balance models involve the use of local meteorological inputs of incoming solar radiation, wind speed and air temperature as well as accurate land surface temperature (LST), vegetation cover and surface flux measurements. For operational applications at large scales, such local information is not routinely available. In addition, the uncertainty in LST estimates can be several degrees due to sensor calibration issues, atmospheric effects and spatial variations in surface emissivity. Time differencing techniques using multi-temporal thermal remote sensing observations have been developed to reduce errors associated with deriving the surface-air temperature gradient, particularly in complex landscapes. The Dual-Temperature-Difference (DTD) method addresses these issues by utilizing the Two-Source Energy Balance (TSEB) model of Norman et al. (1995) [1], and is a relatively simple scheme requiring meteorological input from standard synoptic weather station networks or mesoscale modeling. A comparison of the TSEB and DTD schemes is performed using LST and flux observations from eddy covariance (EC) flux towers and large weighing lysimeters (LYs) in irrigated cotton fields collected during BEAREX08, a large-scale field experiment conducted in the semi-arid climate of the Texas High Plains as described by Evett et al. (2012) [2]. Model output of the energy fluxes (i.e., net radiation, soil heat flux, sensible and latent heat flux) generated with DTD and TSEB using local and remote meteorological observations is compared with EC and LY observations. The DTD method is found to be significantly more robust in flux estimation compared to the TSEB using the remote meteorological observations. However, discrepancies between model and measured fluxes are also found to be significantly affected by the local inputs of LST and vegetation cover and the representativeness of the remote sensing observations with the local flux measurement footprint.
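
    A minimal sketch of the energy-balance logic both schemes share, under strong simplifications: sensible heat from a one-source bulk-resistance form (TSEB in fact partitions soil and canopy contributions), latent heat as the balance residual, and a toy demonstration of why differencing the surface temperature between two times (the idea behind DTD) removes a constant sensor bias. All temperatures, fluxes, and the resistance value are illustrative assumptions.

```python
RHO_CP = 1200.0  # volumetric heat capacity of air, J m-3 K-1 (approximate)

def sensible_heat(t_surf, t_air, r_ah):
    """One-source bulk-resistance form: H = rho*cp*(Ts - Ta)/r_ah, in W m-2."""
    return RHO_CP * (t_surf - t_air) / r_ah

def latent_heat_residual(rn, g, h):
    """Latent heat flux as the energy-balance residual: LE = Rn - G - H."""
    return rn - g - h

if __name__ == "__main__":
    bias = 2.0  # constant calibration error in the thermal observations, K

    # Single-time retrieval: the LST bias propagates straight into H (and LE)
    h_true = sensible_heat(305.0, 300.0, r_ah=40.0)
    h_biased = sensible_heat(305.0 + bias, 300.0, r_ah=40.0)
    print(f"single-time H error: {h_biased - h_true:.0f} W m-2")

    # Time differencing: a constant bias cancels in Ts(t2) - Ts(t1),
    # which is why DTD is far less sensitive to absolute LST errors
    dts_biased = (305.0 + bias) - (299.0 + bias)
    dts_true = 305.0 - 299.0
    print(f"time-differenced gradient error: {dts_biased - dts_true:.0f} K")

    print(f"LE as residual: {latent_heat_residual(550.0, 60.0, h_true):.0f} W m-2")
```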

    Regional and scale-specific effects of land use on amphibian diversity [poster]

    Background/Question/Methods: Habitat loss and degradation influence amphibian distributions and are important drivers of population declines. Our previous research demonstrated that road disturbance, development and wetland area consistently influence amphibian richness across regions of the U.S. Here, we examined the relative importance of these factors in different regions and at multiple spatial scales. Understanding the scales at which habitat disturbance may be affecting amphibian distributions is important for conservation planning. Specifically, we asked: 1) Over what spatial scales do distinct landscape features affect amphibian richness? and 2) Do road types (non-rural and rural) have similar effects on amphibian richness? This is the second year of a collaborative, nationwide project involving 11 U.S. colleges and integrated within undergraduate biology curricula. We summarized North American Amphibian Monitoring Program data in 13 Eastern and Central U.S. states and used geographic information systems to extract landscape data for 471 survey locations. We developed models to quantify the influence of landscape variables on amphibian species richness and site occupancy across five concentric buffers ranging from 300 m to 10,000 m. Results/Conclusions: Across spatial scales, development, road density and agriculture were the best predictors of amphibian richness and site occupancy by individual species. Across regions, we found that scale did not exert a large influence on how landscape features influenced amphibian richness, as effects were largely comparable across buffers. However, development and percent impervious surface had a stronger influence on richness at smaller spatial scales. Richness was lower at survey locations with higher densities of non-rural and rural roads, and non-rural road density had a larger negative effect at smaller scales. Within regions, the landscape features driving patterns of species richness varied. The scales at which these factors were associated with richness were highly variable within regions, suggesting that scale effects may be region-specific. Our project demonstrates that networks of undergraduate students can collaborate to compile and analyze large ecological data sets while engaging students in authentic and inquiry-based learning in landscape-scale ecology.
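
    The multi-scale buffer design can be illustrated with a small, fully synthetic sketch: compute a landscape metric (here, percent developed cover) within concentric buffers around each survey site and examine how its association with richness changes with buffer radius. The grid, site coordinates, and richness values below are simulated stand-ins for the monitoring and GIS data; the authors' occupancy and richness models are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic landscape: 100 x 100 grid (cell = 100 m), cells flagged as developed
developed = rng.random((100, 100)) < 0.2
yy, xx = np.mgrid[0:100, 0:100]

def pct_developed(site, radius_m, cell_m=100.0):
    """Percent developed cover within a circular buffer around a (row, col) site."""
    r_cells = radius_m / cell_m
    mask = (yy - site[0]) ** 2 + (xx - site[1]) ** 2 <= r_cells ** 2
    return 100.0 * developed[mask].mean()

# Simulated survey locations and concentric buffer radii mirroring the design
sites = rng.integers(20, 80, size=(25, 2))
radii = [300, 1000, 2500, 5000, 10000]  # metres

# One landscape predictor per buffer radius for every site
X = np.array([[pct_developed(s, r) for r in radii] for s in sites])

# Simulated richness that declines with development measured at the 300 m scale
richness = np.maximum(0, 10 - 0.08 * X[:, 0] + rng.normal(0, 1, len(sites))).round()

# Scale-specific association between richness and the development metric
for r, col in zip(radii, X.T):
    print(f"{r:>6} m buffer: r = {np.corrcoef(col, richness)[0, 1]:+.2f}")
```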

    Analysis of Rare, Exonic Variation amongst Subjects with Autism Spectrum Disorders and Population Controls

    We report on results from whole-exome sequencing (WES) of 1,039 subjects diagnosed with autism spectrum disorders (ASD) and 870 controls selected from the NIMH repository to be of similar ancestry to cases. The WES data came from two centers using different methods to produce sequence and to call variants from it. Therefore, an initial goal was to ensure the distribution of rare variation was similar for data from different centers. This proved straightforward by filtering called variants by fraction of missing data, read depth, and balance of alternative to reference reads. Results were evaluated using seven samples sequenced at both centers and by results from the association study. Next we addressed how the data and/or results from the centers should be combined. Gene-based analyses of association were an obvious choice, but should statistics for association be combined across centers (meta-analysis) or should data be combined and then analyzed (mega-analysis)? Because of the nature of many gene-based tests, we showed by theory and simulations that mega-analysis has better power than meta-analysis. Finally, before analyzing the data for association, we explored the impact of population structure on rare variant analysis in these data. Like other recent studies, we found evidence that population structure can confound case-control studies by the clustering of rare variants in ancestry space; yet, unlike some recent studies, for these data we found that principal component-based analyses were sufficient to control for ancestry and produce test statistics with appropriate distributions. After using a variety of gene-based tests and both meta- and mega-analysis, we found no new risk genes for ASD in this sample. Our results suggest that standard gene-based tests will require much larger samples of cases and controls before being effective for gene discovery, even for a disorder like ASD. © 2013 Liu et al.
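
    The variant-level filters described above (fraction of missing genotypes, read depth, and the balance of alternative to reference reads) can be sketched as a simple table filter. The column names and thresholds below are illustrative assumptions, not the values used in the study, and the toy records stand in for summaries parsed from a VCF.

```python
import pandas as pd

# Toy variant table; in practice these summaries would be computed from a VCF
variants = pd.DataFrame({
    "variant_id":     ["chr1:1001:A:G", "chr2:2002:C:T", "chr3:3003:G:A"],
    "frac_missing":   [0.02, 0.12, 0.01],   # fraction of samples with no call
    "mean_depth":     [35.0, 9.0, 28.0],    # mean read depth across samples
    "allele_balance": [0.48, 0.51, 0.22],   # alt / (alt + ref) reads in hets
})

def passes_qc(row, max_missing=0.05, min_depth=10.0, ab_range=(0.3, 0.7)):
    """Keep variants with little missing data, adequate depth, and a
    heterozygote allele balance consistent with a true diploid call."""
    return (row.frac_missing <= max_missing
            and row.mean_depth >= min_depth
            and ab_range[0] <= row.allele_balance <= ab_range[1])

kept = variants[variants.apply(passes_qc, axis=1)]
print(kept.variant_id.tolist())  # only the first variant passes every filter
```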