Tokamak equilibria and transport based on Grad's thirteen moment description
In this thesis, I study collisional transport of a hot magnetically confined plasma in a tokamak. The weakly collisional plasma is modeled by Grad's two-fluid thirteen moment equations. This model provides a better treatment of the stresses and the heat fluxes than do collisional fluid models such as Braginskii's. Using physical parameters for a typical tokamak, I estimate the orders of magnitude of various effects. I obtain a reduced system by neglecting small terms in the two-fluid thirteen moment equations. This reduced model includes small particle flows, pressure anisotropy and temperature variation within flux surfaces. The reduced model is compared with standard fluid models. To understand better the behavior of solutions of this system, I expand the solution in a formal series in powers of the small parameter (m_e/m_i)^{1/4}. Flux coordinates are used to solve the equations in a general axisymmetric geometry. In lowest order, the equilibrium solution consists of a number of arbitrary flux functions together with a Grad-Shafranov equation relating the poloidal flux and the toroidal current. The energy dynamics of the system is complicated and requires determining the solution to high order. As corrections to the lowest order solution are calculated, the equilibrium is extended to successively longer time scales until, on the time scale τ_e m_i/m_e, time-independent solutions are in general not possible. I calculate the time evolution of the lowest order solution on the time scale τ m_i/m_e, a time scale consistent with experiment.
Linking seasonal forecasts into RiskView to enhance food security contingency planning
RiskView is a tool developed by the World Food Programme (WFP) to translate weather data
(real-time and historical) and other spatial information (e.g., crops, drought risk, population, etc.)
into food security needs and response costs. It serves as a swift way of estimating costs in
advance of food insecurity outlooks for financial planning, and facilitates better resource
allocation to disasters before on-the-ground needs assessments are produced.
Statistical–Dynamical Seasonal Forecasts of Central-Southwest Asian Winter Precipitation
Interannual precipitation variability in central-southwest (CSW) Asia has been associated with East Asian jet stream variability and western Pacific tropical convection. However, atmospheric general circulation models (AGCMs) forced by observed sea surface temperature (SST) poorly simulate the region's interannual precipitation variability. The statistical–dynamical approach uses statistical methods to correct systematic deficiencies in the response of AGCMs to SST forcing. Statistical correction methods linking model-simulated Indo–west Pacific precipitation and observed CSW Asia precipitation result in modest, but statistically significant, cross-validated simulation skill in the northeast part of the domain for the period from 1951 to 1998. The statistical–dynamical method is also applied to recent (winter 1998/99 to 2002/03) multimodel, two-tier December–March precipitation forecasts initiated in October. This period includes 4 yr (winter of 1998/99 to 2001/02) of severe drought. Tercile probability forecasts are produced using ensemble-mean forecasts and forecast error estimates. The statistical–dynamical forecasts show enhanced probability of below-normal precipitation for the four drought years and capture the return to normal conditions in part of the region during the winter of 2002/03.
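The tercile-probability construction mentioned in the abstract can be sketched as follows. This is a minimal illustration assuming Gaussian forecast errors around the ensemble mean, which is a common choice but not necessarily the paper's exact procedure; the function name and the example numbers are hypothetical.

```python
import math

def tercile_probs(forecast_mean, forecast_sd, lower_tercile, upper_tercile):
    """Probabilities of below-, near-, and above-normal outcomes, assuming
    forecast errors are Gaussian about the ensemble mean (sketch assumption)."""
    def norm_cdf(x, mu, sd):
        return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))
    p_below = norm_cdf(lower_tercile, forecast_mean, forecast_sd)
    p_above = 1.0 - norm_cdf(upper_tercile, forecast_mean, forecast_sd)
    p_near = 1.0 - p_below - p_above
    return p_below, p_near, p_above

# A dry forecast: ensemble mean well below the climatological lower tercile
# (tercile boundaries of a standard normal climatology are roughly +/-0.43)
probs = tercile_probs(forecast_mean=-1.0, forecast_sd=0.8,
                      lower_tercile=-0.43, upper_tercile=0.43)
print(probs)
```

Here the below-normal category receives most of the probability mass, which is the behavior the abstract describes for the drought winters.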
Tomimatsu-Sato geometries, holography and quantum gravity
We analyze the Tomimatsu-Sato spacetime in the context of the
proposed Kerr/CFT correspondence. This 4-dimensional vacuum spacetime is
asymptotically flat and has a well-defined ADM mass and angular momentum, but
also involves several exotic features including a naked ring singularity, and
two disjoint Killing horizons separated by a region with closed timelike curves
and a rod-like conical singularity. We demonstrate that the near horizon
geometry belongs to a general class of Ricci-flat metrics with
symmetry that includes both the extremal Kerr and
extremal Kerr-bolt geometries. We calculate the central charge and temperature
for the CFT dual to this spacetime and confirm the Cardy formula reproduces the
Bekenstein-Hawking entropy. We find that all of the basic parameters of the
dual CFT are most naturally expressed in terms of charges defined intrinsically
on the horizon, which are distinct from the ADM charges in this geometry.
Recalibrating and Combining Ensemble Predictions
The "model output statistics" (MOS) approach has long been used in forecasting to correct systematic errors of numerical models and to predict quantities not included in the model (Glahn and Lowry 1972). The MOS procedure is based on capturing the statistical relation between model outputs and observations and, in its simplest form, consists of a linear regression between these quantities. In theory, this procedure optimally calibrates the model forecast and provides reliable forecasts.
In practice, the regression parameters must be estimated from data. In seasonal forecasting, forecast histories are short, and skill is modest. Both factors lead to substantial sampling errors in the estimates. This work examines two problems where sampling error affects the reliability of regression-calibrated forecasts and provides solutions based on two "penalized" methods: ridge regression and lasso regression (Hoerl and Kennard 1988; Tibshirani 1996). The first problem comes from the observation that, even in a bivariate setting, ordinary least squares estimates lead to unreliable forecasts. The second problem arises in the context of multivariate MOS and is that common methods of predictor selection lead to negative skill and unreliable forecasts.
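The shrinkage idea behind the penalized methods can be sketched with closed-form ridge regression on a synthetic "short forecast history". Everything here is illustrative (the data, the penalty value, and the function names are assumptions, not the paper's experiments); it only shows how ridge stabilizes coefficients when predictors are collinear and the sample is small.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical short forecast history: 20 seasons, 5 model predictors,
# two of them nearly collinear, observations driven by the first predictor.
n, p = 20, 5
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(n)   # collinear pair
y = 0.8 * X[:, 0] + 0.5 * rng.standard_normal(n)

def ols(X, y):
    """Ordinary least squares fit."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def ridge(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

b_ols = ols(X, y)
b_ridge = ridge(X, y, lam=5.0)
# Ridge shrinks the coefficient vector, taming the unstable collinear pair
print(np.linalg.norm(b_ols), np.linalg.norm(b_ridge))
```

The penalty trades a little bias for a large reduction in sampling variance, which is exactly the reliability problem the abstract attributes to short seasonal forecast histories.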
Reply
Reply to a comment on the article: Conditional Exceedance Probabilities. Monthly Weather Review 135 (2010), 363–372 (available in Academic Commons at http://dx.doi.org/10.7916/D8PK0G2S)
Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review
Background: Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual
participant data. For continuous outcomes, especially those with naturally skewed distributions, summary
information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal,
we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis.
Methods: We undertook two systematic literature reviews to identify methodological approaches used to deal with
missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane
Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited
reference searching and emailed topic experts to identify recent methodological developments. Details recorded
included the description of the method, the information required to implement the method, any underlying
assumptions and whether the method could be readily applied in standard statistical software. We provided a
summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios.
Results: For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in
addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis
level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical
approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following
screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and
three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when
replacing a missing SD the approximation using the range minimised loss of precision and generally performed better
than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile
performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials
gave superior results.
Conclusions: Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median)
reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or
variability summary statistics within meta-analyses.
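Two of the approximations described above have widely cited closed forms, sketched below. These are common versions from the methods literature (SD from the range, mean from the median and quartiles); the review evaluates several variants, and the exact formulas it recommends may differ.

```python
def sd_from_range(minimum, maximum):
    """Rough SD approximation from the range.
    One common rule of thumb: SD ~ range / 4 (sketch assumption)."""
    return (maximum - minimum) / 4.0

def mean_from_quartiles(q1, median, q3):
    """Approximate mean from the median and quartiles:
    mean ~ (q1 + median + q3) / 3 (one common form)."""
    return (q1 + median + q3) / 3.0

# Hypothetical trial reporting only a range of 10-50 for its outcome
print(sd_from_range(10, 50))            # -> 10.0
# Hypothetical trial reporting quartiles and median only
print(mean_from_quartiles(20, 30, 44))
```

As the Results note, such approximations let a trial contribute to the pooled estimate rather than being omitted, usually at a smaller cost in precision.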
You Want Me to Do What? Teach a Studio Class to Seventy Students?
Amidst widespread recognition of the need to enhance the student experience, built environment educators are facing increased pressure on their time and resources for teaching. Studio-based education, in which students apply ideas to a real site, has been seen as key to a well-rounded education in the built environment and planning professions. At the same time, traditional methods require a high degree of tutor time to be spent with students, which is increasingly impractical given resource constraints and larger class sizes. Drawing on research exploring the challenges posed by sustainable development and participatory processes in ecological planning, a core second year studio-based module at The University of Manchester was redesigned to meet these challenges. Key elements of the redesign include: use of the hands-on toolkit, Ketso, for creative thinking and synthesis of ideas within and across groups; mapping and layered spatial analysis; simulating aspects of community consultation, without directly contacting the community; effective use of Graduate Teaching Assistant time in giving feedback and assistance to students; and including an individual reflective learning journal as part of the assessment. The innovations trialled in this module enable an interactive studio experience with a high degree of feedback to be created for large classes. Feedback from students has been very positive. The innovations in the module redesign described in this paper jointly won the 2011 Excellence in Teaching Prize of the Association of European Schools of Planning (AESOP).
Evaluating Data Assimilation Algorithms
Data assimilation leads naturally to a Bayesian formulation in which the
posterior probability distribution of the system state, given the observations,
plays a central conceptual role. The aim of this paper is to use this Bayesian
posterior probability distribution as a gold standard against which to evaluate
various commonly used data assimilation algorithms.
A key aspect of geophysical data assimilation is the high dimensionality and
low predictability of the computational model. With this in mind, yet with the
goal of allowing an explicit and accurate computation of the posterior
distribution, we study the 2D Navier-Stokes equations in a periodic geometry.
We compute the posterior probability distribution by state-of-the-art
statistical sampling techniques. The commonly used algorithms that we evaluate
against this accurate gold standard, as quantified by comparing the relative
error in reproducing its moments, are 4DVAR and a variety of sequential
filtering approximations based on 3DVAR and on extended and ensemble Kalman
filters.
The primary conclusions are that: (i) with appropriate parameter choices,
approximate filters can perform well in reproducing the mean of the desired
probability distribution; (ii) however they typically perform poorly when
attempting to reproduce the covariance; (iii) this poor performance is
compounded by the need to modify the covariance, in order to induce stability.
Thus, whilst filters can be a useful tool in predicting mean behavior, they
should be viewed with caution as predictors of uncertainty. These conclusions
are intrinsic to the algorithms and will not change if the model complexity is
increased, for example by employing a smaller viscosity, or by using a detailed
NWP model.
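The analysis step shared by the 3DVAR-type schemes evaluated above can be sketched on a toy two-variable state. The matrices below are illustrative, not from the paper's Navier-Stokes setup; the sketch only shows how the background covariance spreads an observation of one variable onto a correlated unobserved one.

```python
import numpy as np

def analysis_update(xb, B, y, H, R):
    """One linear analysis step of the 3DVAR / optimal-interpolation form:
    x_a = x_b + K (y - H x_b),  with gain  K = B H' (H B H' + R)^{-1}."""
    S = H @ B @ H.T + R                 # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)      # Kalman-type gain
    return xb + K @ (y - H @ xb)

# Two-variable state; only the first component is observed
xb = np.array([1.0, 0.0])                  # background (forecast) state
B = np.array([[1.0, 0.5], [0.5, 1.0]])     # background error covariance
H = np.array([[1.0, 0.0]])                 # observation operator
R = np.array([[0.5]])                      # observation error covariance
y = np.array([2.0])                        # observation

xa = analysis_update(xb, B, y, H, R)
print(xa)  # analysis moves toward y; x2 is updated via the B correlation
```

The paper's point about covariances can be read off this formula: the update of the mean is only as good as the prescribed B, and an approximate filter that distorts B (e.g. through inflation added for stability) can still track the mean while misrepresenting the posterior spread.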
Gravitational collapse with tachyon field and barotropic fluid
A particular class of space-time, with a tachyon field, \phi, and a
barotropic fluid constituting the matter content, is considered herein as a
model for gravitational collapse. For simplicity, the tachyon potential is
assumed to be of inverse square form i.e., V(\phi) \sim \phi^{-2}. Our purpose,
by making use of the specific kinematical features of the tachyon, which are
rather different from a standard scalar field, is to establish the several
types of asymptotic behavior that our matter content induces. Employing a
dynamical system analysis, complemented by a thorough numerical study, we find
classical solutions corresponding to a naked singularity or a black hole
formation. In particular, there is a subset where the fluid and tachyon
participate in an interesting tracking behaviour, depending sensitively on the
initial conditions for the energy densities of the tachyon field and barotropic
fluid. Two other classes of solutions are present, corresponding respectively,
to either a tachyon or a barotropic fluid regime. Which of these emerges as
dominant, will depend on the choice of the barotropic parameter, \gamma.
Furthermore, these collapsing scenarios both have as final state the formation
of a black hole.