Tree biomass equations from terrestrial LiDAR: a case study in Guyana
Large uncertainties in tree and forest carbon estimates weaken national efforts to accurately estimate aboveground biomass (AGB) for monitoring, measurement, reporting, and verification systems. Allometric equations to estimate biomass have improved, but remain limited: they rely on destructive sampling; large trees are under-represented in the data used to create them; and they cannot always be applied to different regions. These factors lead to uncertainties and systematic errors in biomass estimation. We developed allometric models to estimate tree AGB in Guyana, based on wood density and on tree attributes (diameter, height, crown diameter) obtained from terrestrial laser scanning (TLS) point clouds of 72 tropical trees. We validated our methods and models with data from 26 additional destructively harvested trees. We found that our best TLS-derived allometric models included crown diameter, provided more accurate AGB estimates (R² = 0.92-0.93) than traditional pantropical models (R² = 0.85-0.89), and were especially accurate for large trees (diameter > 70 cm). The assessed pantropical models underestimated AGB by 4 to 13%. Nevertheless, one pantropical model (Chave et al. 2005 without height) consistently performed best among the pantropical models tested (R² = 0.89) and predicted AGB accurately across all size classes, a result that could not have been established without destructive or TLS-derived validation data. Our methods also demonstrate that tree height is difficult to measure in situ, and that including height in allometric models consistently worsened AGB estimates. We determined that TLS-derived AGB estimates were unbiased. Our approach advances methods to develop, test, and choose allometric models without the need to harvest trees.
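As a rough illustration of the kind of log-log allometric fit described above (synthetic data and coefficients only; the actual Guyana models and their fitted parameters are not reproduced here), one can regress ln(AGB) on ln(WD · D² · H):

```python
import numpy as np

def fit_allometric(d_cm, h_m, wd, agb_kg):
    """Fit ln(AGB) = a + b * ln(WD * D^2 * H) by ordinary least squares.
    d_cm: diameter (cm), h_m: height (m), wd: wood density, agb_kg: biomass (kg)."""
    x = np.log(wd * d_cm**2 * h_m)
    y = np.log(agb_kg)
    A = np.vstack([np.ones_like(x), x]).T
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def predict_agb(a, b, d_cm, h_m, wd):
    """Back-transform the log-log model to predict AGB in kg."""
    return np.exp(a + b * np.log(wd * d_cm**2 * h_m))
```

A crown-diameter term, which the abstract reports improved accuracy for large trees, would enter the same way as an extra regressor in the design matrix.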
Magneto-optic Kerr effect in a spin-polarized zero-moment ferrimagnet
The magneto-optical Kerr effect (MOKE) is often assumed to be proportional to
the magnetisation of a magnetically ordered metallic sample; in metallic
ferrimagnets with chemically distinct sublattices, such as rare-earth
transition-metal alloys, it depends on the difference between the sublattice
contributions. Here we show that in a highly spin polarized, fully compensated
ferrimagnet, where the sublattices are chemically similar, MOKE is observed
even when the net moment is strictly zero. We analyse the spectral ellipsometry
and MOKE of Mn2RuxGa, and show that this behaviour is due to a highly
spin-polarized conduction band dominated by one of the two manganese
sublattices which creates helicity-dependent reflectivity determined by a broad
Drude tail. Our findings open new prospects for studying spin dynamics in the
infra-red.
Comment: 7 pages, 7 figures
Transhiatal esophagectomy in the profoundly obese: implications and experience.
BACKGROUND: Historically, obesity contraindicated an abdominal approach to the esophagogastric junction. The technique of transhiatal esophagectomy (THE) evolved without specific regard to body habitus. The dramatic increase in obese patients requiring an esophagectomy for complications of reflux disease prompted this evaluation of the impact of obesity on the outcomes of esophagectomy, to determine whether profound obesity should contraindicate the transhiatal approach. METHODS: We used our Esophagectomy Database to identify 133 profoundly obese patients (body mass index [BMI] ≥ 35 kg/m²) from among 2176 undergoing a THE from 1977 to 2006. This group was matched to a randomly selected, non-obese (BMI, 18.5 to 30 kg/m²) control population of 133 patients. Intraoperative, postoperative, and long-term follow-up results were compared retrospectively. RESULTS: Profoundly obese patients had significantly greater intraoperative blood loss (mean, 492.2 mL versus 361.8 mL, p = 0.001), need for partial sternotomy (18 versus 3, p = 0.001), and frequency of recurrent laryngeal nerve injury (6 versus 0, p = 0.04). The two groups did not differ significantly in the occurrence of chylothorax, wound infection, or dehiscence rate; length of hospital stay or need for intensive care unit stay; or hospital or operative mortality. Follow-up results for dysphagia, dumping, regurgitation, and overall functional score were also comparable between the two groups. CONCLUSIONS: With appropriate instrumentation, transhiatal esophagectomy in obese patients has morbidity and outcomes similar to those in non-obese patients.
Obesity, even when profound, does not contraindicate a transhiatal esophagectomy.
Closing the Gap between Methodologists and End-Users: R as a Computational Back-End
The R environment provides a natural platform for developing new statistical methods, thanks to the mathematical expressiveness of the language, the large number of existing libraries, and the active developer community. One drawback to R, however, is its learning curve: programming deters non-technical users, who typically prefer graphical user interfaces (GUIs) to command-line environments. Thus, while statisticians develop new methods in R, practitioners who rely on GUI applications often lag behind in the statistical techniques they use. Meta-analysis is an instructive example: cutting-edge meta-analysis methods are often ignored by the overwhelming majority of practitioners, in part because they have no easy way of applying them. This paper proposes a strategy to close the gap between the statistical state of the science and what is applied in practice. We present open-source meta-analysis software that uses R as the underlying statistical engine and Python for the GUI. We present a framework that allows methodologists to implement new methods in R that are then automatically integrated into the GUI for use by end-users, so long as the programmer conforms to our interface. Such an approach offers an intuitive interface to non-technical users while leveraging the latest advanced statistical methods implemented by methodologists.
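The abstract does not show the interface itself, but the "conform to our interface and the GUI picks it up" idea can be sketched with a registry pattern (all names here are hypothetical illustrations, not the paper's actual API; the real system bridges to R, whereas this sketch is pure Python for brevity):

```python
# Hypothetical sketch: each analysis method registers itself with
# metadata the GUI can read to auto-generate its input form.
ANALYSIS_REGISTRY = {}

def register_analysis(name, params):
    """Decorator exposing a method to the GUI under `name`.
    `params` maps each parameter name to a (label, default) pair
    from which the GUI builds its input widgets automatically."""
    def wrap(fn):
        ANALYSIS_REGISTRY[name] = {"fn": fn, "params": params}
        return fn
    return wrap

@register_analysis("fixed_effect", {"conf_level": ("Confidence level", 0.95)})
def fixed_effect(effects, variances, conf_level=0.95):
    """Inverse-variance (fixed-effect) pooled estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)
```

A methodologist adding a new method only writes the decorated function; the GUI layer iterates over `ANALYSIS_REGISTRY` and never needs to change.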
Meta-Analyst: software for meta-analysis of binary, continuous and diagnostic data
<p>Abstract</p> <p>Background</p> <p>Meta-analysis is increasingly used as a key source of evidence synthesis to inform clinical practice. The theory and statistical foundations of meta-analysis continually evolve, providing solutions to many new and challenging problems. In practice, most meta-analyses are performed in general statistical packages or dedicated meta-analysis programs.</p> <p>Results</p> <p>Herein, we introduce Meta-Analyst, a novel, powerful, intuitive, and free program for the meta-analysis of a variety of problems. Meta-Analyst is implemented in C# atop the Microsoft .NET framework, and features a graphical user interface. The software performs several meta-analysis and meta-regression models for binary and continuous outcomes, as well as analyses for diagnostic and prognostic test studies in the frequentist and Bayesian frameworks. Moreover, Meta-Analyst includes a flexible tool to edit and customize generated meta-analysis graphs (e.g., forest plots) and provides output in many formats (images, Adobe PDF, Microsoft Word-ready RTF). The software architecture employed allows for rapid changes to be made to either the Graphical User Interface (GUI) or to the analytic modules.</p> <p>We verified the numerical precision of Meta-Analyst by comparing its output with that from standard meta-analysis routines in Stata over a large database of 11,803 meta-analyses of binary outcome data and 6,881 meta-analyses of continuous outcome data from the Cochrane Library of Systematic Reviews. Results from analyses of diagnostic and prognostic test studies have been verified in a limited number of meta-analyses versus MetaDisc and MetaTest. Bayesian statistical analyses use the OpenBUGS calculation engine (and are thus as accurate as the standalone OpenBUGS software).</p> <p>Conclusion</p> <p>We have developed and validated a new program for conducting meta-analyses that combines the advantages of existing software for this task.</p>
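For concreteness, the standard fixed-effect pooling of binary-outcome studies that tools like Meta-Analyst and Stata perform (this is the textbook inverse-variance method on log odds ratios with Woolf's variance, not Meta-Analyst's actual code) looks like this:

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio and its Woolf variance for a 2x2 table:
    a/b = events/non-events in the treatment arm, c/d in the control arm."""
    lor = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d
    return lor, var

def pool_fixed(tables):
    """Fixed-effect inverse-variance pooled log OR with a 95% CI."""
    pairs = [log_odds_ratio(*t) for t in tables]
    weights = [1.0 / v for _, v in pairs]
    pooled = sum(w * lor for w, (lor, _) in zip(weights, pairs)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se
```

Exponentiating the three returned values gives the pooled odds ratio and its confidence bounds on the natural scale.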
Law, responsibility, and the brain.
A functional variant on 20q13.33 related to glioma risk alters enhancer activity and modulates expression of multiple genes.
Genome-wide association studies (GWAS) have identified single-nucleotide polymorphisms (SNPs) associated with glioma risk on 20q13.33, but the biological mechanisms underlying this association are unknown. We tested the hypothesis that a functional SNP on 20q13.33 impacted the activity of an enhancer, leading to altered expression of nearby genes. To identify candidate functional SNPs, we identified all SNPs in linkage disequilibrium with the risk-associated SNP rs2297440 that mapped to putative enhancers. Putative enhancers containing candidate functional SNPs were tested for allele-specific effects in luciferase enhancer activity assays against glioblastoma multiforme (GBM) cell lines. An enhancer containing SNP rs3761124 exhibited allele-specific effects on activity. Deletion of this enhancer by CRISPR-Cas9 editing in GBM cell lines correlated with altered expression of multiple genes, including STMN3, RTEL1, RTEL1-TNFRSF6B, GMEB2, and SRMS. Expression quantitative trait loci (eQTL) analyses using nondiseased brain samples, isocitrate dehydrogenase 1 (IDH1) wild-type glioma, and neurodevelopmental tissues showed STMN3 to be a consistent, significant eQTL with rs3761124. RTEL1 and GMEB2 were also significant eQTLs in the context of early CNS development and/or in IDH1 wild-type glioma. We provide evidence that rs3761124 is a functional variant on 20q13.33 related to glioma/GBM risk that modulates the expression of STMN3 and potentially other genes across diverse cellular contexts.
Rapid, B1-insensitive, dual-band quasi-adiabatic saturation transfer with optimal control for complete quantification of myocardial ATP flux
Purpose: Phosphorus saturation-transfer experiments can quantify metabolic
fluxes non-invasively. Typically, the forward flux through the creatine-kinase
reaction is investigated by observing the decrease in phosphocreatine (PCr)
after saturation of γ-ATP. The quantification of total ATP utilisation
is currently under-explored, as it requires simultaneous saturation of
inorganic phosphate (Pi) and PCr. This is challenging, as currently available
saturation pulses reduce the already-low γ-ATP signal present.
Methods: Using a hybrid optimal-control and Shinnar-Le-Roux method, a
quasi-adiabatic RF pulse was designed for the dual-saturation of PCr and Pi to
enable determination of total ATP utilisation. The pulses were evaluated in
Bloch equation simulations, compared with a conventional hard-cosine DANTE
saturation sequence, before application to perfused rat hearts at 11.7 Tesla.
Results: The quasi-adiabatic pulse was insensitive to variation in B1,
producing equivalent saturation with a 53% reduction in delivered pulse
power and a 33-fold reduction in spillover at the minimum effective B1.
This enabled the complete quantification of the synthesis and
degradation fluxes for ATP in 30-45 minutes in the perfused rat heart.
While the net synthesis flux was not significantly different from the
degradation flux, and both measures are consistent with prior work,
nonlinear error analysis highlights uncertainties in the Pi-to-ATP
measurement that may explain a trend suggesting a possible imbalance.
Conclusion: This work demonstrates a novel quasi-adiabatic dual-saturation RF
pulse with significantly improved performance that can be used to measure ATP
turnover in the heart in vivo.
Comment: 26 pages; accepted at Magnetic Resonance in Medicine, 24/11/2020
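The quantification behind such experiments rests on the standard steady-state saturation-transfer relation (a textbook result, not this paper's specific fitting procedure): with the exchange partner fully saturated, dMz/dt = (M0 - Mz)/T1 - kf·Mz, so the observed pool settles at Mz,ss = M0/(1 + kf·T1), giving kf = (M0/Mz,ss - 1)/T1 and flux = kf times the substrate concentration. A minimal sketch, assuming T1 is the intrinsic relaxation time of the observed pool:

```python
def forward_rate(m0, m_ss, t1):
    """Pseudo-first-order rate constant k_f (1/s) from the steady-state
    magnetization of the observed pool while its exchange partner is
    saturated: M_ss = M0 / (1 + k_f * T1)  =>  k_f = (M0/M_ss - 1) / T1."""
    return (m0 / m_ss - 1.0) / t1

def flux(k_f, conc_mM):
    """Chemical flux in mM/s: rate constant times substrate concentration."""
    return k_f * conc_mM
```

For total ATP utilisation, as in the abstract, both Pi and PCr must be saturated simultaneously, which is exactly what the dual-band pulse enables while preserving the γ-ATP signal being observed.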