Molecular behaviour of methanol and dimethyl ether in H-ZSM-5 catalysts as a function of Si/Al ratio: A quasielastic neutron scattering study
The dynamical behaviour of methanol and dimethyl ether in H-ZSM-5 catalysts of differing Si/Al ratios (36 and 135) was probed using quasielastic neutron scattering to understand the effect of catalyst composition (Brønsted acid site concentration) on the behaviour of species present during the initial stages of the H-ZSM-5 catalysed methanol-to-hydrocarbons process. At room temperature in H-ZSM-5(36), isotropic methanol rotation was observed (rotational diffusion coefficient DR = 2.6 × 10¹⁰ s⁻¹), which contrasted qualitatively with H-ZSM-5(135), in which diffusion confined to a sphere matching the 5.5 Å channel width was observed, suggesting motion is more constrained in the lower Si/Al catalyst. At higher temperatures, confined methanol diffusion is exhibited in both catalysts, with self-diffusion coefficients (Ds) measured in the range of 8–9 × 10⁻¹⁰ m² s⁻¹. However, the population of molecules immobile over the timescale probed by the instrument is significantly larger in H-ZSM-5(36), consistent with the far higher number of Brønsted acid adsorption sites. For dimethyl ether, diffusion confined to a sphere is observed at all temperatures in both catalysts, with Ds measured in the range of 9–11 × 10⁻¹⁰ m² s⁻¹ and a slightly smaller fraction of immobile molecules in H-ZSM-5(135). The larger Ds values obtained for dimethyl ether arise from the sphere of confinement being larger in H-ZSM-5(36) (6.2 Å in diameter) than the 5.5 Å width of the pore channels. This larger width suggests that mobile DME is sited in the channel intersections, in contrast to the mobile methanol, which is sited in the channels. An even larger confining sphere of diffusion was derived in H-ZSM-5(135) (∼8 Å in diameter), which we attribute to a lack of Brønsted sites, allowing a larger free volume for DME diffusion in the channel intersections.
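The "diffusion confined to a sphere" analysis used in the abstract rests on the elastic incoherent structure factor (EISF) of the Volino–Dianoux model, EISF(Q) = [3 j₁(QR)/(QR)]², where R is the confinement radius. A minimal sketch (assuming SciPy is available; the Q range and the diameters 5.5, 6.2, and ∼8 Å are taken from the abstract) of how the confinement radius shapes the EISF:

```python
import numpy as np
from scipy.special import spherical_jn

def eisf_sphere(q, radius):
    """Elastic incoherent structure factor for diffusion confined to
    a sphere of the given radius (Volino-Dianoux model):
    EISF(Q) = [3 j1(Q*R) / (Q*R)]^2."""
    x = q * radius
    return (3.0 * spherical_jn(1, x) / x) ** 2

# Momentum-transfer range typical of a QENS experiment (Angstrom^-1)
q = np.linspace(0.3, 2.0, 10)

# Confinement diameters quoted in the abstract (Angstrom)
for diameter in (5.5, 6.2, 8.0):
    model = eisf_sphere(q, diameter / 2.0)
    print(f"d = {diameter} A:", np.round(model, 3))
```

A larger confining sphere pulls the EISF down faster with Q, which is how the 6.2 Å and ∼8 Å diameters are distinguished from the 5.5 Å channel width when fitting the quasielastic spectra.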
A linear programming model for economic planning in New Zealand
A good deal of research into the likely future structure of the New Zealand economy has been carried out in the Agricultural Economics Research Unit. The aim has been to provide realistic quantitative sectoral targets or guidelines to centralised policy making bodies to assist in planning future economic growth in New Zealand. This type of exercise has often been referred to as indicative planning. Until now, the work has entailed the use of an input-output projection model which has come to be known as the Lincoln Model. Briefly, the procedure is to calculate for some future year an economic structure which satisfies the inter-industry relationships and which achieves an exogenously specified increase in the base year consumption level. Economic structure in this context means: the level of output of each sector of the model, the level of exports from each sector, the level of investment by each sector, the level of importing of current and capital goods by each sector. Whenever the Lincoln model has been discussed there has usually been some mention of the optimum economic structure. It has been said that the structure is optimum when resources are so allocated between sectors that the highest level of net national product per head is achieved, consistent with the maintenance of overseas balance of payments equilibrium, full employment and a reasonable growth in incomes per head. While many would question this definition, it is probably a reasonable basis on which to begin investigations into the best future shape of the economy and it is certainly where scrutiny of the projected structure should begin. It has also been suggested that the most efficient method of investigating the nature of an optimum structure is by the use of mathematical programming methods. 
The purpose of this paper is to demonstrate how the linear programming technique might be used to calculate the optimum economic structure, although it has been found necessary to modify the definition quoted above. Instead of accepting an exogenous target for consumption, programming is used to calculate the maximum level of consumption consistent with the inter-industry relationships and resource availabilities. The need to formulate linear functions has prevented optimisation of consumption per head, which would be more acceptable theoretically.
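The modified formulation described here can be sketched as a small linear programme: choose sector outputs and a consumption level so that the inter-industry (input-output) balance and a resource constraint hold, and the consumption level is maximised. The three-sector coefficients below are hypothetical illustrations, not the Lincoln Model's data:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical input-output coefficients: A[i, j] = input of sector i
# required per unit output of sector j (illustrative, not the Lincoln Model).
A = np.array([[0.2, 0.1, 0.1],
              [0.3, 0.1, 0.2],
              [0.1, 0.3, 0.2]])
d = np.array([0.5, 0.3, 0.2])       # composition of one unit of consumption
labour = np.array([0.4, 0.5, 0.3])  # labour input per unit of output
labour_supply = 100.0

# Decision variables: x1, x2, x3 (sector outputs) and c (consumption level).
# Maximise c  <=>  minimise -c.
cost = np.array([0.0, 0.0, 0.0, -1.0])

# Inter-industry balance (I - A) x >= c d, rewritten as -(I - A) x + c d <= 0,
# stacked with the labour constraint labour . x <= labour_supply.
I = np.eye(3)
A_ub = np.vstack([np.hstack([-(I - A), d.reshape(3, 1)]),
                  np.append(labour, 0.0)])
b_ub = np.append(np.zeros(3), labour_supply)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print("max consumption level:", -res.fun)
print("sector outputs:", res.x[:3])
```

Extending this sketch toward the paper's model would mean adding export, investment, and import variables per sector, plus the balance-of-payments constraint.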
Estimating cost-offsets of new medications: Use of new antipsychotics and mental health costs for schizophrenia
Estimation of the effect of one treatment compared to another in the absence of randomization is a common problem in biostatistics. An increasingly popular approach involves instrumental variables—variables that are predictive of who received a treatment yet not directly predictive of the outcome. When treatment is binary, many estimators have been proposed: method-of-moments estimators using a two-stage least-squares procedure, generalized-method-of-moments estimators using two-stage predictor substitution or two-stage residual inclusion procedures, and likelihood-based latent variable approaches. The critical assumptions to the consistency of two-stage procedures and of the likelihood-based procedures differ. Because neither set of assumptions can be completely tested from the observed data alone, comparing the results from the different approaches is an important sensitivity analysis. We provide a general statistical framework for estimation of the causal effect of a binary treatment on a continuous outcome using simultaneous equations to specify models. A comparison of health care costs for adults with schizophrenia treated with newer atypical antipsychotics and those treated with conventional antipsychotic medications illustrates our methods. Surprisingly large differences in the results among the methods are investigated using a simulation study. Several new findings concerning the performance in terms of precision and robustness of each approach in different situations are obtained. We illustrate that in general supplemental information is needed to determine which analysis, if any, is trustworthy, and reaffirm that comparing results from different approaches is a valuable sensitivity analysis.
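A minimal sketch of the two-stage least-squares (method-of-moments) estimator mentioned above, on simulated data with entirely hypothetical parameters: an instrument z shifts who receives the binary treatment t, an unobserved confounder u drives both t and the continuous outcome y, and naive OLS is biased while 2SLS recovers the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Simulated (hypothetical) data: z = instrument, u = unmeasured confounder,
# t = binary treatment, y = continuous outcome.
z = rng.binomial(1, 0.5, n)            # e.g. prescriber preference
u = rng.normal(0, 1, n)                # unmeasured severity
t = (0.5 * z + u + rng.normal(0, 1, n) > 0.5).astype(float)
beta_true = 2.0
y = beta_true * t + 1.5 * u + rng.normal(0, 1, n)

# Naive OLS of y on t is biased because u drives both t and y.
X = np.column_stack([np.ones(n), t])
ols = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Two-stage least squares:
#   stage 1: regress t on z to get fitted treatment probabilities;
#   stage 2: regress y on the fitted values.
Z = np.column_stack([np.ones(n), z])
t_hat = Z @ np.linalg.lstsq(Z, t, rcond=None)[0]
X2 = np.column_stack([np.ones(n), t_hat])
tsls = np.linalg.lstsq(X2, y, rcond=None)[0][1]

print(f"OLS estimate:  {ols:.2f} (biased upward by confounding)")
print(f"2SLS estimate: {tsls:.2f} (consistent for the true effect {beta_true})")
```

Running the two-stage residual inclusion or likelihood-based variants on the same simulated data, and comparing the estimates, is the kind of sensitivity analysis the abstract advocates.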
Nuclear Signaling Pathways for 1,25-Dihydroxyvitamin D 3 Are Controlled by the Vitamin A Metabolite, 9-cis-Retinoic Acid
Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/75392/1/j.1753-4887.1993.tb03060.x.pd
Non-intrusive reduced order modeling of natural convection in porous media using convolutional autoencoders: comparison with linear subspace techniques
Natural convection in porous media is a highly nonlinear multiphysical
problem relevant to many engineering applications (e.g., the process of
CO2 sequestration). Here, we present a non-intrusive reduced order
model of natural convection in porous media employing deep convolutional
autoencoders for the compression and reconstruction and either radial basis
function (RBF) interpolation or artificial neural networks (ANNs) for mapping
parameters of partial differential equations (PDEs) on the corresponding
nonlinear manifolds. To benchmark our approach, we also describe linear
compression and reconstruction processes relying on proper orthogonal
decomposition (POD) and ANNs. We present comprehensive comparisons among
different models through three benchmark problems. The reduced order models,
linear and nonlinear approaches, are much faster than the finite element model,
obtaining a substantial maximum speed-up because our framework is not
bound by the Courant-Friedrichs-Lewy (CFL) condition; hence, it can deliver
quantities of interest at any given time, in contrast to the finite element model.
Our model's accuracy still lies within a mean squared error of 0.07 (two
orders of magnitude lower than the maximum value of the finite element results)
in the worst-case scenario. We illustrate that, in specific settings, the nonlinear
approach outperforms its linear counterpart and vice versa. We hypothesize that
a visual comparison using principal component analysis (PCA) or t-Distributed
Stochastic Neighbor Embedding (t-SNE) could indicate which method will perform
better prior to employing any specific compression strategy.
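The linear (POD) compression and reconstruction baseline described above can be sketched in a few lines: take the SVD of a snapshot matrix and project onto the leading left singular vectors. The data below is a synthetic stand-in for the paper's finite element snapshots:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshot matrix standing in for FEM solution fields:
# each column is one snapshot of a flattened field (illustrative data,
# not the paper's porous-media simulations).
n_dof, n_snap = 500, 80
modes = rng.normal(size=(n_dof, 5))          # 5 dominant spatial structures
coeffs = rng.normal(size=(5, n_snap)) * np.array([[10], [5], [2], [1], [0.5]])
snapshots = modes @ coeffs + 0.01 * rng.normal(size=(n_dof, n_snap))

# POD: SVD of the snapshot matrix; keep the r leading left singular vectors.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 5
basis = U[:, :r]                              # reduced linear subspace

# Compress (project) one snapshot, then reconstruct it.
x = snapshots[:, 0]
x_reduced = basis.T @ x                       # r coefficients, not n_dof values
x_rec = basis @ x_reduced

energy = np.sum(s[:r] ** 2) / np.sum(s ** 2)
rel_err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
print(f"retained energy: {energy:.4f}")
print(f"relative reconstruction error: {rel_err:.2e}")
```

The nonlinear alternative in the paper swaps this linear projection for a convolutional autoencoder; the RBF interpolant or ANN then maps PDE parameters to the reduced coefficients, here `x_reduced`, instead of to full fields.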