High temperature thermocouple design provides gas cooling without increasing overall size of unit
High temperature thermocouple uses a thermoelement of noncircular cross section with insulation of circular cross section to provide space for the flow of coolant gas down the probe
Silicon solar cell monitors high temperature furnace operation
Silicon solar cell, attached to each viewport, monitors the incandescent emission from the hot interior of a furnace without interfering with the test assembly or optical pyrometry during the test. This technique can provide continuous indication of hot spots or warn of excessive temperatures in cooler regions
Vapor deposition process provides new method for fabricating high temperature thermocouples
Fabrication techniques for high temperature thermocouples bind all components so that differential thermal expansion and contraction do not result in mechanical slippage or localized stress concentrations. Installation space is reduced, or larger thermoelements and thicker insulation can be used to improve temperature-measurement accuracy
Thoriated tungsten tube provides improved high temperature thermocouple sheath
Thermocouple tubing of thoriated tungsten with a very fine grain structure produces a small-diameter sheath capable of operating up to 5000 degrees R in a hydrogen and graphite environment. This tubing remains ductile and resists both grain growth and carbiding even after prolonged exposure to temperature
Bayesian Analysis of Instrumental Variable Models: Acceptance-Rejection within Direct Monte Carlo
We discuss Bayesian inferential procedures within the family of instrumental variables regression models, focusing on two issues: existence conditions for posterior moments of the parameters of interest under a flat prior, and the potential of Direct Monte Carlo (DMC) approaches for efficient evaluation of such possibly highly non-elliptical posteriors. We show that, for the general case of m endogenous variables under a flat prior, posterior moments of order r exist for the coefficients reflecting the endogenous regressors' effect on the dependent variable if the number of instruments is greater than m + r, even though an issue of local non-identification causes non-elliptical shapes of the posterior. This stresses the need for efficient Monte Carlo integration methods. We introduce an extension of DMC that incorporates an acceptance-rejection sampling step within DMC. This Acceptance-Rejection within Direct Monte Carlo (ARDMC) method has the attractive property that the generated random drawings are independent, which greatly speeds the convergence of simulation results and facilitates the evaluation of numerical accuracy. The speed of ARDMC can easily be improved further through parallelized computation on multi-core machines or computer clusters. We note that ARDMC is an analogue of the well-known "Metropolis-Hastings within Gibbs" sampling, in the sense that one 'more difficult' step is used within an 'easier' simulation method. We compare the ARDMC approach with the Gibbs sampler using simulated data and two empirical data sets, involving the settler mortality instrument of Acemoglu et al. (2001) and the father's education instrument used by Hoogerheide et al. (2012a). Even without parallelized computation, an efficiency gain is observed under both strong and weak instruments, and the gain can be enormous in the latter case
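The acceptance-rejection step at the core of an approach like ARDMC can be illustrated with a generic sketch (this is not the authors' exact algorithm; the target, proposal, and bound M below are toy choices): a candidate drawn from a proposal density is kept with probability target/(M × proposal), so accepted draws are exact, independent samples from the target, unlike the serially correlated output of a Gibbs sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def accept_reject(target_pdf, proposal_draw, proposal_pdf, m_bound, n_draws):
    """Acceptance-rejection sampling: accepted draws are independent
    and exactly distributed according to the (unnormalised) target."""
    draws = []
    while len(draws) < n_draws:
        x = proposal_draw()
        # Accept x with probability target(x) / (M * proposal(x)), which
        # requires target(x) <= M * proposal(x) everywhere.
        if rng.uniform() < target_pdf(x) / (m_bound * proposal_pdf(x)):
            draws.append(x)
    return np.array(draws)

# Toy target: an (unnormalised) standard normal truncated to [-3, 3],
# with a uniform proposal on the same interval.
target = lambda x: np.exp(-0.5 * x ** 2)        # maximum value 1 at x = 0
proposal_pdf = lambda x: 1.0 / 6.0
proposal_draw = lambda: rng.uniform(-3.0, 3.0)
M = 6.0                                         # M * proposal_pdf = 1 >= target

draws = accept_reject(target, proposal_draw, proposal_pdf, M, 5000)
```

Because each accepted draw is independent, standard errors of simulation averages shrink at the plain 1/sqrt(n) rate, which is what makes the numerical accuracy easy to assess.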
Cells under pressure – the relationship between hydrostatic pressure and mesenchymal stem cell chondrogenesis
Early osteoarthritis (OA), characterised by cartilage defects, is a degenerative disease that greatly affects the adult population. Cell-based tissue-engineering methods are being explored as a treatment for these chondral defects. Chondrocytes are already in clinical use, but other cell types with chondrogenic properties, such as mesenchymal stem cells (MSCs), are being researched. However, despite extensive investigation, present methods for differentiating these cells into stable articular-cartilage chondrocytes that contribute to joint regeneration are not effective. Environmental stimuli, such as mechanical forces, influence the chondrogenic response and are beneficial with respect to matrix formation. In vivo, cartilage is subjected to multiaxial loading involving compressive, tensile and shear forces as well as fluid flow, each of which elicits a cellular response. The mechanobiology of tissue formation is being intensively studied in the cartilage tissue-engineering field, and the effect of hydrostatic pressure on cartilage formation belongs to this large area of mechanobiology. During cartilage loading, the interstitial fluid is pressurised and the surrounding matrix delays pressure loss by reducing the rate of fluid flow out of pressurised regions. This fluid pressurisation is known as hydrostatic pressure: a uniform stress around the cell that occurs without cellular deformation. In vitro studies examining chondrocytes under hydrostatic pressure have described its anabolic effect, and similar studies have evaluated the effect of hydrostatic pressure on MSC chondrogenesis. The present review summarises the results of these studies and discusses the mechanisms through which hydrostatic pressure exerts its effects
Maximum Entropy and Bayesian Data Analysis: Entropic Priors
The problem of assigning probability distributions which objectively reflect the prior information available about experiments is one of the major stumbling blocks in the use of Bayesian methods of data analysis. In this paper the method of Maximum (relative) Entropy (ME) is used to translate the information contained in the known form of the likelihood into a prior distribution for Bayesian inference. The argument is inspired and guided by intuition gained from the successful use of ME methods in statistical mechanics. For experiments that cannot be repeated the resulting "entropic prior" is formally identical with the Einstein fluctuation formula. For repeatable experiments, however, the expected value of the entropy of the likelihood turns out to be relevant information that must be included in the analysis. The important case of a Gaussian likelihood is treated in detail.
Comment: 23 pages, 2 figures
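As a rough sketch of the kind of object involved (only the general shape, not necessarily this paper's exact construction; the scale α and the reference measure q vary between treatments), the relative entropy of a distribution p with respect to q, and the schematic form of an entropic prior built from the likelihood, are:

```latex
% Relative entropy of p with respect to a reference measure q:
S[p \mid q] \;=\; -\int \mathrm{d}x \; p(x)\,\log\frac{p(x)}{q(x)}

% Schematic entropic prior on the parameters \theta, obtained by
% feeding the likelihood p(x \mid \theta) into the entropy functional:
\pi(\theta) \;\propto\; \exp\!\bigl(\alpha\, S[\,p(x \mid \theta) \mid q\,]\bigr)
```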
Differential expression analysis with global network adjustment
<p>Background: Large-scale chromosomal deletions or other non-specific perturbations of the transcriptome can alter the expression of hundreds or thousands of genes, and it is of biological interest to understand which genes are most profoundly affected. We present a method for predicting a gene’s expression as a function of other genes, thereby accounting for the effect of transcriptional regulation that confounds the identification of genes differentially expressed relative to a regulatory network. The challenge in constructing such models is that the number of possible regulator transcripts within a global network is on the order of thousands, and the number of biological samples is typically on the order of 10. Nevertheless, there are large gene expression databases that can be used to construct networks that could be helpful in modeling transcriptional regulation in smaller experiments.</p>
<p>Results: We demonstrate a type of penalized regression model that can be estimated from large gene expression databases, and then applied to smaller experiments. The ridge parameter is selected by minimizing the cross-validation error of the predictions in the independent out-sample. This tends to increase the model stability and leads to a much greater degree of parameter shrinkage, but the resulting biased estimation is mitigated by a second round of regression. Nevertheless, the proposed computationally efficient “over-shrinkage” method outperforms previously used LASSO-based techniques. In two independent datasets, we find that the median proportion of explained variability in expression is approximately 25%, and this results in a substantial increase in the signal-to-noise ratio allowing more powerful inferences on differential gene expression leading to biologically intuitive findings. We also show that a large proportion of gene dependencies are conditional on the biological state, which would be impossible with standard differential expression methods.</p>
<p>Conclusions: By adjusting for the effects of the global network on individual genes, both the sensitivity and reliability of differential expression measures are greatly improved.</p>
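The two-stage idea can be sketched with hypothetical data (the sizes, the hand-picked ridge parameter, and the true coefficients below are illustrative only; the paper selects the ridge parameter by cross-validated prediction error): a heavily shrunken ridge fit from a large dataset is applied to a small experiment, and a second univariate regression of the observed values on the shrunken predictions rescales away the shrinkage bias.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: a large expression database (n_big samples) used to
# fit the model, and a small experiment (n_small samples) it is applied to.
n_big, n_small, p = 500, 10, 200
beta = np.zeros(p)
beta[:5] = 1.0                                   # a few true regulators
X_big = rng.normal(size=(n_big, p))
y_big = X_big @ beta + rng.normal(size=n_big)

def ridge_fit(X, y, lam):
    # Closed-form ridge solution (X'X + lam I)^(-1) X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# A deliberately large ridge parameter stands in for the "over-shrinkage"
# regime discussed in the abstract.
b = ridge_fit(X_big, y_big, lam=1000.0)

# Apply the shrunken model to the small experiment; a second (univariate)
# regression of observed values on the predictions rescales away the
# downward bias that the heavy shrinkage introduced.
X_small = rng.normal(size=(n_small, p))
y_small = X_small @ beta + rng.normal(size=n_small)
y_hat = X_small @ b
slope = np.dot(y_hat, y_small) / np.dot(y_hat, y_hat)
y_adjusted = slope * y_hat
```

The rescaling slope comes out well above 1 here, reflecting how strongly the first-stage predictions were shrunk towards zero.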
Bayesian Inference in Processing Experimental Data: Principles and Basic Applications
This report introduces general ideas and some basic methods of the Bayesian probability theory applied to physics measurements. Our aim is to make the reader familiar, through examples rather than rigorous formalism, with concepts such as: model comparison (including the automatic Ockham's Razor filter provided by the Bayesian approach); parametric inference; quantification of the uncertainty about the value of physical quantities, also taking into account systematic effects; role of marginalization; posterior characterization; predictive distributions; hierarchical modelling and hyperparameters; Gaussian approximation of the posterior and recovery of conventional methods, especially maximum likelihood and chi-square fits under well-defined conditions; conjugate priors, transformation invariance and maximum entropy motivated priors; Monte Carlo estimates of expectation, including a short introduction to Markov Chain Monte Carlo methods.
Comment: 40 pages, 2 figures, invited paper for Reports on Progress in Physics
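One of the listed concepts, conjugate priors, can be made concrete with the standard normal-normal example (the numbers below are toy values, not taken from the report): with a Gaussian likelihood of known standard deviation and a Gaussian prior, the posterior is again Gaussian, with precision equal to the sum of the prior precision and the data precision.

```python
import numpy as np

def normal_posterior(data, sigma, mu0, tau0):
    """Conjugate normal-normal update: prior N(mu0, tau0^2) on the mean,
    likelihood N(theta, sigma^2) for each observation, sigma known.
    Returns the posterior mean and posterior standard deviation."""
    n = len(data)
    prec = 1.0 / tau0**2 + n / sigma**2                        # posterior precision
    mu_post = (mu0 / tau0**2 + np.sum(data) / sigma**2) / prec # precision-weighted mean
    return mu_post, np.sqrt(1.0 / prec)

# Toy measurement: five readings of a physical quantity with known
# instrumental standard deviation sigma = 1 and a weak prior.
data = np.array([9.8, 10.2, 10.1, 9.9, 10.0])
mu, sd = normal_posterior(data, sigma=1.0, mu0=0.0, tau0=10.0)
```

With a weak prior the posterior mean sits essentially at the sample mean (about 9.98) and the posterior standard deviation is close to sigma/sqrt(n).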
An Adaptive Interacting Wang-Landau Algorithm for Automatic Density Exploration
While statisticians are well-accustomed to performing exploratory analysis in the modeling stage of an analysis, the notion of conducting preliminary general-purpose exploratory analysis in the Monte Carlo stage (or more generally, the model-fitting stage) of an analysis is an area which we feel deserves much further attention. Towards this aim, this paper proposes a general-purpose algorithm for automatic density exploration. The proposed exploration algorithm combines and expands upon components from various adaptive Markov chain Monte Carlo methods, with the Wang-Landau algorithm at its heart. Additionally, the algorithm is run on interacting parallel chains -- a feature which both decreases computational cost and stabilizes the algorithm, improving its ability to explore the density. Performance is studied in several applications. Through a Bayesian variable selection example, the authors demonstrate the convergence gains obtained with interacting chains. The ability of the algorithm's adaptive proposal to induce mode-jumping is illustrated through a trimodal density and a Bayesian mixture modeling application. Lastly, through a 2D Ising model, the authors demonstrate the ability of the algorithm to overcome the high correlations encountered in spatial models.
Comment: 33 pages, 20 figures (the supplementary materials are included as appendices)
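The Wang-Landau algorithm at the heart of the proposal can be sketched on a toy system (a tiny open Ising chain; a full implementation would also check histogram flatness before reducing the modification factor, which this sketch skips): a random walk over configurations is biased by the running estimate of the density of states g(E), which flattens the energy histogram and converges log g(E) towards the truth.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy system: an open Ising chain of N spins with E = -sum_i s_i * s_{i+1}.
# The exact density of states is g(E) = 2 * C(N-1, k) at E = -(N-1) + 2k.
N = 6

def energy(s):
    return -int(np.sum(s[:-1] * s[1:]))

levels = list(range(-(N - 1), N, 2))   # attainable energies
idx = {E: i for i, E in enumerate(levels)}
log_g = np.zeros(len(levels))          # running estimate of log g(E)
hist = np.zeros(len(levels))           # visit histogram (for flatness checks)
f = 1.0                                # modification factor, on the log scale

s = rng.choice([-1, 1], size=N)
E = energy(s)

while f > 1e-4:
    for _ in range(10000):
        i = rng.integers(N)            # propose a single spin flip
        s[i] *= -1
        E_new = energy(s)
        # Wang-Landau rule: accept with prob min(1, g(E) / g(E_new)),
        # which drives the walk towards a flat histogram in energy.
        if np.log(rng.uniform()) < log_g[idx[E]] - log_g[idx[E_new]]:
            E = E_new
        else:
            s[i] *= -1                 # rejected: undo the flip
        log_g[idx[E]] += f
        hist[idx[E]] += 1
    # A full implementation halves f only once `hist` is sufficiently flat.
    f /= 2.0
    hist[:] = 0.0

# Normalise so that the total number of states equals 2**N = 64.
shift = log_g.max()
log_g += np.log(2.0 ** N) - (shift + np.log(np.sum(np.exp(log_g - shift))))
```

Because updating g(E) penalises already-visited energy levels, the walk cannot get trapped in one region of the density, which is the mode-jumping property the paper exploits.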