Dynamic Operational Planning in Warfare: A Stochastic Game Approach to Military Campaigns
We study a two-player discounted zero-sum stochastic game model for dynamic
operational planning in military campaigns. At each stage, the players manage
multiple commanders who order military actions on objectives that have an open
line of control. When a battle over the control of an objective occurs, its
stochastic outcome depends on the actions and the enabling support provided by
the control of other objectives. Each player aims to maximize the cumulative
number of objectives they control, weighted by their criticality. To solve this
large-scale stochastic game, we derive properties of its Markov perfect
equilibria by leveraging the logistics and military operational command and
control structure. We show the consequential isotonicity of the optimal value
function with respect to the partially ordered state space, which in turn leads
to a significant reduction of the state and action spaces. We also accelerate
Shapley's value iteration algorithm by eliminating dominated actions and
investigating pure equilibria of the matrix game solved at each iteration. We
demonstrate the computational value of our equilibrium results on a case study
that reflects representative operational-level military campaigns with
geopolitical implications. Our analysis reveals a complex interplay between the
game's parameters and dynamics in equilibrium, resulting in new military
insights for campaign analysts.
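To make the solution approach concrete, below is a minimal sketch of Shapley's value iteration for a discounted zero-sum stochastic game, including a pure-saddle-point shortcut that corresponds to checking for pure equilibria of the stage matrix game (dominated-action elimination is omitted). The dense tensors P and R, the discount factor gamma, and all function names are illustrative assumptions, not the paper's campaign model or notation.

```python
# A minimal sketch of Shapley's value iteration for a two-player, zero-sum,
# discounted stochastic game. All names (P, R, gamma, ...) are illustrative
# assumptions, not the paper's notation or campaign model.
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum matrix game A (row player maximizes)."""
    # Shortcut: if a pure saddle point exists, no LP is needed.
    maximin = A.min(axis=1).max()
    minimax = A.max(axis=0).min()
    if np.isclose(maximin, minimax):
        return maximin
    m, n = A.shape
    # Variables: row mixed strategy x (m entries) and the game value v.
    # maximize v  s.t.  A^T x >= v,  sum(x) = 1,  x >= 0.
    c = np.zeros(m + 1); c[-1] = -1.0                  # linprog minimizes
    A_ub = np.hstack([-A.T, np.ones((n, 1))])          # v - (A^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

def shapley_value_iteration(P, R, gamma=0.95, tol=1e-6, max_iter=500):
    """P[s, a1, a2, s'] transition probabilities, R[s, a1, a2] stage rewards."""
    n_states = R.shape[0]
    v = np.zeros(n_states)
    for _ in range(max_iter):
        v_new = np.empty_like(v)
        for s in range(n_states):
            Q = R[s] + gamma * P[s] @ v    # matrix game solved at state s
            v_new[s] = matrix_game_value(Q)
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    return v
```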
A fluorophore attached to nicotinic acetylcholine receptor beta M2 detects productive binding of agonist to the alpha delta site
To study conformational transitions at the muscle nicotinic acetylcholine (ACh) receptor (nAChR), a rhodamine fluorophore was tethered to a Cys side chain introduced at the beta-19' position in the M2 region of the nAChR expressed in Xenopus oocytes. This procedure led to only minor changes in receptor function. During agonist application, fluorescence increased by (Delta-F/F) approximate to 10%, and the emission peak shifted to lower wavelengths, indicating a more hydrophobic environment for the fluorophore. The dose-response relations for Delta-F agreed well with those for epibatidine-induced currents, but were shifted approximate to 100-fold to the left of those for ACh-induced currents. Because (i) epibatidine binds more tightly to the alpha-gamma-binding site than to the alpha-delta site and (ii) ACh binds with reverse-site selectivity, these data suggest that Delta-F monitors an event linked to binding specifically at the alpha-delta-subunit interface. In experiments with flash-applied agonists, the earliest detectable Delta-F occurs within milliseconds, i.e., during activation. At low [ACh] (less than or equal to 10 muM), a phase of Delta-F occurs with the same time constant as desensitization, presumably monitoring an increased population of agonist-bound receptors. However, recovery from Delta-F is complete before the slowest phase of recovery from desensitization (time constant approximate to 250 s), showing that one or more desensitized states have fluorescence like that of the resting channel. That conformational transitions at the alpha-delta-binding site are not tightly coupled to channel activation suggests that sequential rather than fully concerted transitions occur during receptor gating. Thus, time-resolved fluorescence changes provide a powerful probe of nAChR conformational changes
Predicting Fracture in the Proximal Humerus using Phase Field Models
Proximal humerus impacted fractures are of clinical concern in the elderly
population. Prediction of such fractures by CT-based finite element methods
encounters several major obstacles such as heterogeneous mechanical properties
and fracture due to compressive strains. We herein propose to investigate a
variation of the phase field method (PFM) embedded into the finite cell method
(FCM) to simulate impacted humeral fractures in fresh frozen human humeri. The
force-strain response, failure loads and the fracture path are compared to
experimental observations for validation purposes. The PFM (by means of the
regularization parameter ) is first calibrated by one experiment and
thereafter used for the prediction of the mechanical response of two other
human fresh frozen humeri. All humeri are fractured at the surgical neck and
strains are monitored by Digital Image Correlation (DIC). Experimental strains
in the elastic regime are reproduced with good agreement (),
similar to the validated finite element method [9]. The failure pattern and fracture evolution at the surgical neck predicted by the PFM closely match the experimental observations for all three humeri. The maximum relative error in the computed failure loads is . To the best of our knowledge, this is the first method that accurately predicts both the experimental compressive failure pattern and the force-strain relationship in proximal humerus fractures.
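To make the role of the regularization parameter concrete, a standard AT2-type phase-field energy is reproduced below as a reference point. The paper investigates a variation of the PFM adapted to compressive failure, so this is not necessarily its exact formulation; d is the damage field, ψ_e the elastic energy density, G_c the fracture toughness, and ℓ0 the regularization length.

```latex
% A standard AT2-type phase-field energy, given as a reference point only; the
% paper uses a variation of the PFM adapted to compressive failure, so this is
% not necessarily its exact formulation.
\begin{equation}
  E(\mathbf{u}, d) =
    \int_{\Omega} (1 - d)^{2}\, \psi_{e}\bigl(\varepsilon(\mathbf{u})\bigr)\,\mathrm{d}\Omega
    + G_{c} \int_{\Omega} \left( \frac{d^{2}}{2\ell_{0}}
      + \frac{\ell_{0}}{2} \lvert \nabla d \rvert^{2} \right) \mathrm{d}\Omega ,
\end{equation}
% minimized with respect to the displacement field u and the damage field d
% (d = 0 intact, d = 1 fully broken).
```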
The solubility–permeability interplay in using cyclodextrins as pharmaceutical solubilizers: Mechanistic modeling and application to progesterone
A quasi-equilibrium mass transport analysis has been developed to quantitatively explain the solubility–permeability interplay that exists when using cyclodextrins as pharmaceutical solubilizers. The model considers the effects of cyclodextrins on the membrane permeability (P_m) as well as the unstirred water layer (UWL) permeability (P_aq), to predict the dependence of the overall effective permeability (P_eff) on cyclodextrin concentration (C_CD). The analysis reveals that: (1) UWL permeability markedly increases with increasing C_CD, since the effective UWL thickness quickly decreases with increasing C_CD; (2) membrane permeability decreases with increasing C_CD, as a result of the decrease in the free fraction of drug; and (3) since P_aq increases and P_m decreases with increasing C_CD, the UWL is effectively eliminated and the overall P_eff tends toward membrane control, that is, P_eff ≈ P_m above a critical C_CD. Application of this transport model enabled excellent quantitative prediction of progesterone P_eff as a function of HPβCD concentration in the PAMPA assay, Caco-2 transepithelial studies, and the in situ rat jejunal perfusion model. This work demonstrates that when using cyclodextrins as pharmaceutical solubilizers, a trade-off exists between solubility increase and permeability decrease that must not be overlooked; the transport model presented here can aid in striking the appropriate solubility–permeability balance in order to achieve optimal overall absorption. © 2009 Wiley-Liss, Inc. and the American Pharmacists Association. J Pharm Sci 99: 2739–2749, 2010.
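The series-resistance picture described in this abstract can be sketched numerically: 1/P_eff = 1/P_aq + 1/P_m, with the membrane term scaled by the free fraction of drug and the UWL term growing with cyclodextrin concentration. The functional forms and parameter values below (K_11, P_m0, P_aq0) are illustrative assumptions, not the fitted progesterone/HPβCD model; they only reproduce the qualitative trend of P_eff tending toward membrane control above a critical C_CD.

```python
# A hedged numerical sketch of the solubility-permeability trade-off:
# 1/P_eff = 1/P_aq + 1/P_m. All functional forms and constants below are
# illustrative assumptions, not the paper's fitted progesterone/HPbCD model.
import numpy as np

K_11  = 1.0e4      # assumed 1:1 drug-cyclodextrin binding constant (M^-1)
P_m0  = 5.0e-4     # assumed membrane permeability with no cyclodextrin (cm/s)
P_aq0 = 5.0e-5     # assumed UWL permeability with no cyclodextrin (cm/s)

def effective_permeability(C_CD):
    """P_eff (cm/s) as a function of cyclodextrin concentration C_CD (M)."""
    f_u  = 1.0 / (1.0 + K_11 * C_CD)    # free fraction for a 1:1 complex, CD in excess
    P_m  = P_m0 * f_u                   # membrane permeability drops with complexation
    P_aq = P_aq0 * (1.0 + K_11 * C_CD)  # UWL resistance shrinks as total solubility rises
    return 1.0 / (1.0 / P_aq + 1.0 / P_m)

for C in [0.0, 1e-4, 1e-3, 1e-2, 1e-1]:
    print(f"C_CD = {C:.0e} M  ->  P_eff ~ {effective_permeability(C):.2e} cm/s")
```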
Entropies of the EEG: The effects of general anaesthesia
The aim of this paper was to compare the performance of different entropy estimators when applied to EEG data taken from patients during routine induction of general anaesthesia. The question then arose as to how and why different EEG patterns could affect the different estimators. Therefore we also compared how the different entropy estimators responded to artificially generated signals with predetermined, known characteristics. This was done by applying the entropy algorithms to pseudo-EEG data:
(1) computer-generated using a second-order autoregressive (AR2) model (a minimal sketch of this case follows the list),
(2) computer-generated white noise added to step signals simulating blink and eye-movement artifacts, and
(3) exogenous (computer-generated) sine-wave oscillations added to the actual clinically derived EEG data set from patients undergoing induction of anaesthesia.
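As a minimal illustration of test signal (1), the sketch below generates an AR2 pseudo-EEG record and scores it with a simple normalized spectral entropy. The AR coefficients, record length, and the choice of spectral entropy are illustrative assumptions; the paper compares several different entropy estimators on such signals.

```python
# Minimal sketch: generate an AR2 pseudo-EEG and compute a simple spectral
# entropy. Coefficients and the estimator choice are illustrative assumptions.
import numpy as np

def ar2_pseudo_eeg(n=4096, a1=1.5, a2=-0.75, noise_sd=1.0, seed=0):
    """Generate x[t] = a1*x[t-1] + a2*x[t-2] + white noise (stable AR2)."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, noise_sd, n)
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]
    return x

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum, scaled to [0, 1]."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    h = -np.sum(p * np.log(p + 1e-12))
    return h / np.log(len(p))           # normalize by the maximum possible entropy

x = ar2_pseudo_eeg()
print(f"spectral entropy of AR2 pseudo-EEG: {spectral_entropy(x):.3f}")
```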
Second harmonic generating (SHG) nanoprobes for in vivo imaging
Fluorescence microscopy has profoundly changed cell and molecular biology studies by permitting tagged gene products to be followed as they function and interact. The ability of a fluorescent dye to absorb and emit light of different wavelengths allows it to generate startling contrast that, in the best cases, can permit single molecule detection and tracking. However, in many experimental settings, fluorescent probes fall short of their potential due to dye bleaching, dye signal saturation, and tissue autofluorescence. Here, we demonstrate that second harmonic generating (SHG) nanoprobes can be used for in vivo imaging, circumventing many of the limitations of classical fluorescence probes. Under intense illumination, such as at the focus of a laser-scanning microscope, these SHG nanocrystals convert two photons into one photon of half the wavelength; thus, when imaged by conventional two-photon microscopy, SHG nanoprobes appear to generate a signal with an inverse Stokes shift like a fluorescent dye, but with a narrower emission. Unlike commonly used fluorescent probes, SHG nanoprobes neither bleach nor blink, and the signal they generate does not saturate with increasing illumination intensity. The resulting contrast and detectability of SHG nanoprobes provide unique advantages for molecular imaging of living cells and tissues.
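A toy calculation makes the saturation contrast above concrete: an ideal SHG signal grows quadratically with excitation intensity, while a simple two-level fluorescence model saturates. The scaling constants and the saturation intensity below are illustrative assumptions, not measured nanoprobe or dye parameters.

```python
# Toy comparison of signal scaling with excitation intensity; all constants
# are illustrative assumptions, not measured nanoprobe or dye parameters.
import numpy as np

I = np.linspace(0.0, 10.0, 6)        # excitation intensity, arbitrary units

shg = 0.05 * I**2                    # SHG: ~I^2, no intrinsic saturation
I_sat = 2.0                          # assumed fluorophore saturation intensity
fluor = 1.0 * I / (1.0 + I / I_sat)  # simple two-level saturation model

for i, s, f in zip(I, shg, fluor):
    print(f"I = {i:4.1f}   SHG ~ {s:6.2f}   fluorescence ~ {f:5.2f}")
```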
Ketamine Pharmacokinetics: A Systematic Review of the Literature, Meta-analysis, and Population Analysis
Background: Several models describing the pharmacokinetics of ketamine are published with differences in model structure and complexity. A systematic review of the literature was performed, as well as a meta-analysis of pharmacokinetic data and construction of a pharmacokinetic model from raw data sets to qualitatively and quantitatively evaluate existing ketamine pharmacokinetic models and construct a general ketamine pharmacokinetic model.
Methods: Extracted pharmacokinetic parameters from the literature (volume of distribution and clearance) were standardized to allow comparison among studies. A meta-analysis was performed on studies that performed a mixed-effect analysis to calculate weighted mean parameter values and a meta-regression analysis to determine the influence of covariates on parameter values. A pharmacokinetic population model derived from a subset of raw data sets was constructed and compared with the meta-analytical analysis.
Results: The meta-analysis was performed on 18 studies (11 conducted in healthy adults, 3 in adult patients, and 5 in pediatric patients). Weighted mean volume of distribution was 252 l/70 kg (95% CI, 200 to 304 l/70 kg). Weighted mean clearance was 79 l/h (at 70 kg; 95% CI, 69 to 90 l/h at 70 kg). No effect of covariates was observed; simulations showed that models based on venous sampling showed substantially higher context-sensitive half-times than those based on arterial sampling. The pharmacokinetic model created from 14 raw data sets consisted of one central arterial compartment with two peripheral compartments linked to two venous delay compartments. Simulations showed that the output of the raw data pharmacokinetic analysis and the meta-analysis were comparable.
Conclusions: A meta-analytical analysis of ketamine pharmacokinetics was successfully completed despite large heterogeneity in study characteristics. Differences in output of the meta-analytical approach and a combined analysis of 14 raw data sets were small, indicating that the meta-analytical approach gives a clinically applicable approximation of ketamine population parameter estimates and may be used when no raw data sets are available.
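For readers unfamiliar with such compartmental structures, below is a hedged sketch of the mammillary backbone (one central plus two peripheral compartments) after an intravenous bolus. The venous delay compartments of the final model are omitted, and the volumes and clearances are made-up illustrative values that only loosely echo the reported totals (CL ≈ 79 l/h, V ≈ 252 l at 70 kg), not the paper's estimates.

```python
# Hedged sketch of a central + two-peripheral compartment PK model after an
# IV bolus. Parameter values are illustrative, not the reported ketamine
# estimates; the paper's venous delay compartments are omitted.
import numpy as np
from scipy.integrate import solve_ivp

V1, V2, V3 = 40.0, 100.0, 112.0   # compartment volumes (l); illustrative split of ~252 l
CL, Q2, Q3 = 79.0, 120.0, 30.0    # elimination and inter-compartmental clearances (l/h)

def pk_rhs(t, A):
    """A = drug amounts (mg) in central, peripheral-1, peripheral-2 compartments."""
    C1, C2, C3 = A[0] / V1, A[1] / V2, A[2] / V3
    dA1 = -CL * C1 - Q2 * (C1 - C2) - Q3 * (C1 - C3)
    dA2 = Q2 * (C1 - C2)
    dA3 = Q3 * (C1 - C3)
    return [dA1, dA2, dA3]

dose_mg = 35.0                    # e.g. a 0.5 mg/kg bolus at 70 kg
sol = solve_ivp(pk_rhs, (0.0, 6.0), [dose_mg, 0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 6.0, 7)
print("central concentration (mg/l) over 6 h:", np.round(sol.sol(t)[0] / V1, 3))
```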
Quantum phase transition of condensed bosons in optical lattices
In this paper we study the superfluid-Mott-insulator phase transition of an ultracold dilute gas of bosonic atoms in an optical lattice by means of the Green-function method and the Bogoliubov transformation. The superfluid-Mott-insulator phase transition condition is determined by the energy-band structure, with a transparent interpretation of the transition mechanism. Moreover, the superfluid phase is explained explicitly from the energy spectrum derived within the Bogoliubov approach.
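For context, the system described here is commonly modeled by the Bose-Hubbard Hamiltonian, shown below together with the textbook decoupling mean-field lobe boundary, purely as a reference point. This is not the Green-function/Bogoliubov condition derived in the paper; z (coordination number), t, U, μ, and n (Mott filling) follow standard notation.

```latex
% Reference only: the standard Bose-Hubbard Hamiltonian and the textbook
% decoupling mean-field boundary of the Mott lobe with filling n; not the
% Green-function / Bogoliubov condition derived in the paper.
\begin{equation}
  H = -t \sum_{\langle i,j \rangle} \bigl( b_{i}^{\dagger} b_{j} + \mathrm{h.c.} \bigr)
      + \frac{U}{2} \sum_{i} n_{i} (n_{i} - 1)
      - \mu \sum_{i} n_{i} ,
\end{equation}
\begin{equation}
  1 = z\, t_{c} \left[ \frac{n + 1}{U n - \mu} + \frac{n}{\mu - U (n - 1)} \right],
  \qquad U (n - 1) < \mu < U n ,
\end{equation}
% where z is the lattice coordination number.
```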
Forecasting in the light of Big Data
Predicting the future state of a system has always been a natural motivation
for science and practical applications. Such a topic, beyond its obvious
technical and societal relevance, is also interesting from a conceptual point
of view. This is because forecasting lends itself to two equally radical, yet opposite, methodologies: a reductionist one, based on first principles, and a naive inductivist one, based only on data. This latter view
has recently gained some attention in response to the availability of
unprecedented amounts of data and increasingly sophisticated algorithmic
analytic techniques. The purpose of this note is to assess critically the role
of big data in reshaping the key aspects of forecasting and in particular the
claim that bigger data leads to better predictions. Drawing on the
representative example of weather forecasts, we argue that this is not generally
the case. We conclude by suggesting that a clever and context-dependent
compromise between modelling and quantitative analysis stands out as the best
forecasting strategy, as anticipated nearly a century ago by Richardson and von
Neumann.