Interpolatory methods for model reduction of multi-input/multi-output systems
We develop here a computationally effective approach for producing high-quality H-infinity approximations to large-scale linear dynamical systems having multiple inputs and multiple outputs (MIMO). We extend an approach for H-infinity model reduction introduced by Flagg, Beattie, and Gugercin for the single-input/single-output (SISO) setting, which combined ideas originating in interpolatory H2-optimal model reduction with complex Chebyshev approximation. Retaining this framework, our approach to the MIMO problem has its principal computational cost dominated by (sparse) linear solves, and so it can remain an effective strategy in many large-scale settings. We are able to avoid the computationally demanding H-infinity norm calculations that are normally required to monitor progress within each optimization cycle through the use of "data-driven" rational approximations built upon previously computed function samples. Numerical examples are included that illustrate our approach. We produce high-fidelity reduced models having consistently better H-infinity performance than models produced via balanced truncation; these models are often as good as (and occasionally better than) models produced using optimal Hankel norm approximation as well. In all cases considered, the method described here produces reduced models at far lower cost than is possible with either balanced truncation or optimal Hankel norm approximation.
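The interpolatory projection underlying this framework can be sketched in a few lines. The following is a minimal dense-matrix illustration only: the function names are ours, and the paper's sparse solves and H-infinity-driven selection of interpolation points and tangent directions are not reproduced. It shows the tangential-interpolation property that makes the approach cheap: the reduced transfer function matches the full one along the chosen directions at the chosen points.

```python
import numpy as np

def interpolatory_reduction(A, B, C, sigmas, b_dirs, c_dirs):
    """Tangential-interpolation model reduction (illustrative sketch).

    Builds projection bases from shifted linear solves so that the reduced
    model matches H(s) = C (sI - A)^{-1} B at each sigma along the given
    right tangent directions b and left tangent directions c.
    """
    n = A.shape[0]
    # Right basis: columns (sigma*I - A)^{-1} B b for each interpolation point.
    V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, B @ b)
                         for s, b in zip(sigmas, b_dirs)])
    # Left basis: columns (sigma*I - A)^{-T} C^T c.
    W = np.column_stack([np.linalg.solve((s * np.eye(n) - A).T, C.T @ c)
                         for s, c in zip(sigmas, c_dirs)])
    # Petrov-Galerkin projection of the state-space matrices.
    E = W.T @ V
    Ar = np.linalg.solve(E, W.T @ A @ V)
    Br = np.linalg.solve(E, W.T @ B)
    Cr = C @ V
    return Ar, Br, Cr

def tf(A, B, C, s):
    """Transfer function H(s) = C (sI - A)^{-1} B."""
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B)
```

In a genuinely large-scale setting the `np.linalg.solve` calls would be replaced by sparse factorizations, which is precisely why the method's cost is dominated by linear solves.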
Fine structure of excitons in Cu2O
Three experimental observations on 1s-excitons in Cu2O are not consistent with the picture of the exciton as a simple hydrogenic bound state: the energies of the 1s-excitons deviate from the Rydberg formula, the total exciton mass exceeds the sum of the electron and hole effective masses, and the triplet-state excitons lie above the singlet. Incorporating the band structure of the material, we calculate the corrections to this simple picture arising from the fact that the exciton Bohr radius is comparable to the lattice constant. By means of a self-consistent variational calculation of the total exciton mass as well as the ground-state energy of the singlet- and triplet-state excitons, we find excellent agreement with experiment. Comment: Revised abstract; 10 pages, revtex, 3 figures available from G. Kavoulakis, Physics Department, University of Illinois, Urbana
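For orientation, the hydrogenic (Wannier) picture that the measurements deviate from is simply a Rydberg series converging to the band gap. A minimal sketch, with purely illustrative parameters rather than fitted Cu2O values:

```python
RYDBERG_EV = 13.605693  # hydrogen Rydberg constant in eV

def wannier_exciton_levels(e_gap_ev, reduced_mass_ratio, epsilon, n_max=5):
    """Hydrogenic (Wannier) exciton series E_n = E_gap - Ry*/n^2.

    The effective Rydberg is Ry* = Ry * (mu/m_e) / epsilon^2, where mu is the
    electron-hole reduced mass and epsilon the dielectric constant. Any
    parameters passed in are illustrative assumptions, not material data.
    """
    ry_star = RYDBERG_EV * reduced_mass_ratio / epsilon**2
    return [e_gap_ev - ry_star / n**2 for n in range(1, n_max + 1)]
```

The deviations discussed in the abstract are precisely departures of the measured 1s energy from this series when the Bohr radius shrinks toward the lattice constant.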
Auger decay of degenerate and Bose-condensed excitons in Cu2O
We study the non-radiative Auger decay of excitons in Cu2O, in which two excitons scatter to an excited electron and hole. The exciton decay rate for the direct and the phonon-assisted processes is calculated from first principles; incorporating the band structure of the material leads to a relatively short lifetime of the triplet-state (ortho) excitons. We compare our results with the Auger decay rate extracted from data on highly degenerate triplet excitons and Bose-condensed singlet excitons in Cu2O. Comment: 15 pages, revtex, figures available from G. Kavoulakis
Considering discrepancy when calibrating a mechanistic electrophysiology model
Uncertainty quantification (UQ) is a vital step in using mathematical models and simulations to make decisions. The field of cardiac simulation has begun to explore and adopt UQ methods to characterize uncertainty in model inputs and how it propagates through to outputs or predictions; examples of this can be seen in the papers of this issue. In this review and perspective piece, we draw attention to an important and under-addressed source of uncertainty in our predictions: uncertainty in the model structure, or the equations themselves. The difference between imperfect models and reality is termed model discrepancy, and we are often uncertain as to the size and consequences of this discrepancy. Here, we provide two examples of the consequences of discrepancy when calibrating models at the ion channel and action potential scales. Furthermore, we attempt to account for this discrepancy when calibrating and validating an ion channel model using different methods, based on modelling the discrepancy using Gaussian processes and autoregressive-moving-average models, and highlight the advantages and shortcomings of each approach. Finally, suggestions and lines of enquiry for future work are provided.
This article is part of the theme issue "Uncertainty quantification in cardiac and cardiovascular modelling and simulation".
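One of the approaches mentioned, modelling the discrepancy with a Gaussian process, can be sketched as a plain GP posterior mean over model-minus-data residuals. The following numpy illustration is a minimal sketch under our own assumptions (squared-exponential kernel, fixed hyperparameters, function names), not the paper's implementation:

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale, variance):
    # Squared-exponential covariance between two 1-D input sets.
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_discrepancy_mean(t_obs, residuals, t_new,
                        length_scale=0.3, variance=1.0, noise=1e-8):
    """Posterior mean of a zero-mean Gaussian process fitted to
    model-minus-data residuals, giving a smooth discrepancy estimate
    that can be added to the mechanistic model's output."""
    K = rbf_kernel(t_obs, t_obs, length_scale, variance) + noise * np.eye(len(t_obs))
    K_star = rbf_kernel(t_new, t_obs, length_scale, variance)
    return K_star @ np.linalg.solve(K, residuals)
```

In a real calibration the kernel hyperparameters would themselves be inferred jointly with the model parameters; here they are fixed for clarity.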
Mapping the Two-Component Atomic Fermi Gas to the Nuclear Shell-Model
The physics of a two-component cold Fermi gas is now frequently addressed in
laboratories. Usually this is done for large samples of tens to hundreds of
thousands of particles. However, it is now possible to produce few-body systems
(1-100 particles) in very tight traps where the shell structure of the external
potential becomes important. A system of two-species fermionic cold atoms with
an attractive zero-range interaction is analogous to a simple model of the nucleus
in which neutrons and protons interact only through a residual pairing
interaction. In this article, we discuss how the problem of a two-component
atomic Fermi gas in a tight external trap can be mapped to the nuclear shell
model so that readily available many-body techniques in nuclear physics, such
as the Shell Model Monte Carlo (SMMC) method, can be directly applied to the
study of these systems. We demonstrate an application of the SMMC method by
estimating the pairing correlations in a small two-component Fermi system with
moderate-to-strong short-range two-body interactions in a three-dimensional
harmonic external trapping potential. Comment: 13 pages, 3 figures. Final version
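The shell structure of the external trap referred to above is that of the 3D harmonic oscillator. A small sketch of its orbital degeneracies and the resulting closed-shell particle numbers for a single spin-1/2 species (two spin projections per orbital):

```python
def shell_degeneracy(n):
    # Orbital degeneracy of the n-th 3D harmonic-oscillator shell,
    # with energy E_n = (n + 3/2) * hbar * omega.
    return (n + 1) * (n + 2) // 2

def closed_shell_numbers(n_shells):
    """Cumulative particle numbers at shell closures for one spin-1/2
    species, counting two spin states per spatial orbital."""
    total, closures = 0, []
    for n in range(n_shells):
        total += 2 * shell_degeneracy(n)
        closures.append(total)
    return closures
```

These closures (2, 8, 20, 40, ...) are the trap analogue of nuclear magic numbers, which is what makes the mapping to the nuclear shell model natural for few-body systems in tight traps.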
On the selection of AGN neutrino source candidates for a source stacking analysis with neutrino telescopes
The sensitivity of a search for sources of TeV neutrinos can be improved by
grouping potential sources together into generic classes in a procedure that is
known as source stacking. In this paper, we define catalogs of Active Galactic
Nuclei (AGN) and use them to perform a source stacking analysis. The grouping
of AGN into classes is done in two steps: first, AGN classes are defined, then,
sources to be stacked are selected assuming that a potential neutrino flux is
linearly correlated with the photon luminosity in a certain energy band (radio,
IR, optical, keV, GeV, TeV). Lacking any secure detailed knowledge on neutrino
production in AGN, this correlation is motivated by hadronic AGN models, as
briefly reviewed in this paper.
The source stacking search for neutrinos from generic AGN classes is
illustrated using the data collected by the AMANDA-II high energy neutrino
detector during the year 2000. No significant excess for any of the suggested
groups was found. Comment: 43 pages, 12 figures, accepted by Astroparticle Physics
All-particle cosmic ray energy spectrum measured with 26 IceTop stations
We report on a measurement of the cosmic ray energy spectrum with the IceTop
air shower array, the surface component of the IceCube Neutrino Observatory at
the South Pole. The data used in this analysis were taken between June and
October, 2007, with 26 surface stations operational at that time, corresponding
to about one third of the final array. The fiducial area used in this analysis
was 0.122 km^2. The analysis investigated the energy spectrum from 1 to 100 PeV
measured for three different zenith angle ranges between 0{\deg} and 46{\deg}.
Because of the isotropy of cosmic rays in this energy range the spectra from
all zenith angle intervals have to agree. The cosmic-ray energy spectrum was
determined under different assumptions on the primary mass composition. Good
agreement of spectra in the three zenith angle ranges was found for the
assumption of pure proton and a simple two-component model. For zenith angles
{\theta} < 30{\deg}, where the mass dependence is smallest, the knee in the
cosmic ray energy spectrum was observed between 3.5 and 4.32 PeV, depending on
composition assumption. Spectral indices above the knee range from -3.08 to
-3.11, depending on the primary mass composition assumption. Moreover, an indication of a flattening of the spectrum above 22 PeV was observed. Comment: 38 pages, 17 figures
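The spectral shape described, a power law with a knee, is conventionally written as a broken power law that is continuous at the break. A small sketch with our own parameter names:

```python
def broken_power_law(energy, phi_knee, e_knee, gamma_below, gamma_above):
    """Differential cosmic-ray flux with a single spectral break ('knee'):
    spectral index gamma_below under e_knee and gamma_above over it,
    normalized so the flux is continuous (equal to phi_knee) at the knee."""
    gamma = gamma_below if energy <= e_knee else gamma_above
    return phi_knee * (energy / e_knee) ** gamma
```

With a knee near 4 PeV and an index of about -3.1 above it, as in the measurement, the flux drops by roughly three orders of magnitude per decade in energy above the break.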
An improved method for measuring muon energy using the truncated mean of dE/dx
The measurement of muon energy is critical for many analyses in large
Cherenkov detectors, particularly those that involve separating
extraterrestrial neutrinos from the atmospheric neutrino background. Muon
energy has traditionally been determined by measuring the specific energy loss
(dE/dx) along the muon's path and relating the dE/dx to the muon energy.
Because high-energy muons (E_mu > 1 TeV) lose energy randomly, the spread in
dE/dx values is quite large, leading to a typical energy resolution of 0.29 in
log10(E_mu) for a muon observed over a 1 km path length in the IceCube
detector. In this paper, we present an improved method that uses a truncated
mean and other techniques to determine the muon energy. The muon track is
divided into separate segments with individual dE/dx values. The elimination of
segments with the highest dE/dx results in an overall dE/dx that is more
closely correlated to the muon energy. This method results in an energy
resolution of 0.22 in log10(E_mu), which gives a 26% improvement. This
technique is applicable to any large water or ice detector and potentially to
large scintillator or liquid argon detectors. Comment: 12 pages, 16 figures
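The truncation step described above reduces to sorting the per-segment dE/dx values and averaging after discarding the largest ones. A minimal sketch; the cut fraction here is illustrative, not the tuned value used in the analysis:

```python
import numpy as np

def truncated_mean_dedx(segment_losses, cut_fraction=0.4):
    """Truncated mean of per-segment dE/dx values: sort the losses and
    discard the highest cut_fraction before averaging. This suppresses the
    stochastic high-loss tail (bremsstrahlung, pair production, photonuclear
    interactions) that broadens the plain-mean energy estimate."""
    x = np.sort(np.asarray(segment_losses, dtype=float))
    n_keep = max(1, int(round(len(x) * (1.0 - cut_fraction))))
    return float(x[:n_keep].mean())
```

Because the discarded segments are exactly the rare catastrophic losses, the truncated mean correlates more tightly with the muon energy than the plain mean does, which is the source of the quoted resolution improvement.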
Human Embryonic Stem Cell Technology: Large Scale Cell Amplification and Differentiation
Embryonic stem cells (ESC) hold the promise of overcoming many diseases as potential sources of, for example, dopaminergic neural cells for Parkinson's disease, or pancreatic islets to relieve diabetic patients of their daily insulin injections. While an embryo has the innate capacity to develop fully functional differentiated tissues, biologists are finding it much more complex to derive singular, pure populations of primary cells from the highly versatile ESC taken from this embryonic parent. Thus, a substantial investment in developing the technologies to expand and differentiate these cells is required in the next decade to move this promise into reality. In this review we document the current standard assays for characterising human ESC (hESC), report the status of "defined" feeder-free culture conditions for undifferentiated hESC growth, examine the quality controls that will need to be established for monitoring their growth, review current methods for expansion and differentiation, and speculate on possible routes for scaling up the differentiation of hESC to therapeutic quantities.
The performance of the jet trigger for the ATLAS detector during 2011 data taking
The performance of the jet trigger for the ATLAS detector at the LHC during the 2011 data-taking period is described. During 2011 the LHC provided proton-proton collisions with a centre-of-mass energy of 7 TeV and heavy-ion collisions with a 2.76 TeV per nucleon-nucleon collision energy. The ATLAS trigger is a three-level system designed to reduce the rate of events from the nominal maximum bunch crossing rate of 40 MHz to the approximately 400 Hz which can be written to offline storage. The ATLAS jet trigger is the primary means for the online selection of events containing jets. Events are accepted by the trigger if they contain one or more jets above some transverse energy threshold. During 2011 data taking the jet trigger was fully efficient for jets with transverse energy above 25 GeV for triggers seeded randomly at Level 1. For triggers which require a jet to be identified at each of the three trigger levels, full efficiency is reached for offline jets with transverse energy above 60 GeV. Jets reconstructed in the final trigger level, corresponding to offline jets with transverse energy greater than 60 GeV, are reconstructed with a transverse energy resolution, with respect to offline jets, of better than 4% in the central region and better than 2.5% in the forward direction.
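The quoted efficiencies are, in essence, per-bin pass fractions measured with respect to offline jets. A minimal sketch of such an efficiency measurement; the function name and binning are illustrative, not the ATLAS analysis code:

```python
import numpy as np

def trigger_efficiency(offline_et, fired, bin_edges):
    """Per-bin trigger efficiency: the fraction of offline jets in each
    transverse-energy bin for which the trigger also fired. Bins with no
    entries are reported as NaN."""
    offline_et = np.asarray(offline_et, dtype=float)
    fired = np.asarray(fired, dtype=bool)
    idx = np.digitize(offline_et, bin_edges) - 1
    eff = []
    for b in range(len(bin_edges) - 1):
        in_bin = idx == b
        eff.append(float(fired[in_bin].mean()) if in_bin.any() else float("nan"))
    return eff
```

Plotting such fractions against offline transverse energy gives the familiar turn-on curve, whose plateau defines the "fully efficient" thresholds quoted in the abstract.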