Effect of Statistical Fluctuation in Monte Carlo Based Photon Beam Dose Calculation on Gamma Index Evaluation
The gamma-index test has been commonly adopted to quantify the degree of
agreement between a reference dose distribution and an evaluation dose
distribution. Monte Carlo (MC) simulation has been widely used for the
radiotherapy dose calculation for both clinical and research purposes. The goal
of this work is to investigate both theoretically and experimentally the impact
of the MC statistical fluctuation on the gamma-index test when the fluctuation
exists in the reference, the evaluation, or both dose distributions. To
first order, we theoretically demonstrate in a simplified model that
statistical fluctuation tends to inflate gamma-index values when it exists in
the reference dose distribution and to deflate them when it exists in the
evaluation dose distribution, provided that the original gamma-index is large
relative to the statistical fluctuation. Our numerical
experiments using clinical photon radiation therapy cases have shown that 1)
when performing a gamma-index test between an MC reference dose and a non-MC
evaluation dose, the average gamma-index is overestimated and the passing rate
decreases with the increase of the noise level in the reference dose; 2) when
performing a gamma-index test between a non-MC reference dose and an MC
evaluation dose, the average gamma-index is underestimated when they are within
the clinically relevant range and the passing rate increases with the increase
of the noise level in the evaluation dose; 3) when performing a gamma-index
test between an MC reference dose and an MC evaluation dose, the passing rate
is overestimated due to the noise in the evaluation dose and underestimated due
to the noise in the reference dose. We conclude that the gamma-index test
should be used with caution when comparing dose distributions computed with
Monte Carlo simulation.
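As an illustration of the metric under discussion, a minimal 1D gamma-index computation (exhaustive search over evaluation points, the common 3%/3 mm criterion, doses expressed as fractions of the maximum dose) might look like the following sketch; it is not the authors' implementation:

```python
import math

def gamma_index(ref, eva, spacing=1.0, dta=3.0, dd=0.03):
    """1D gamma-index: for each reference point, minimize the combined
    distance-to-agreement / dose-difference metric over all evaluation points.
    spacing and dta are in mm; doses are fractions of the maximum dose."""
    gammas = []
    for i, d_ref in enumerate(ref):
        best = math.inf
        for j, d_eva in enumerate(eva):
            dr = (i - j) * spacing            # spatial offset in mm
            g2 = (dr / dta) ** 2 + ((d_eva - d_ref) / dd) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

# identical distributions agree perfectly: gamma is 0 everywhere
profile = [0.1, 0.5, 1.0, 0.5, 0.1]
assert max(gamma_index(profile, profile)) == 0.0

# a 2% dose perturbation stays below gamma = 1 under 3%/3 mm
g = gamma_index(profile, [0.1, 0.52, 1.0, 0.5, 0.1])
passing_rate = sum(1 for x in g if x <= 1.0) / len(g)
```

Noise in the reference profile perturbs the term being minimized over differently from noise in the evaluation profile, which is the asymmetry the abstract analyzes.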
Efficient electronic entanglement concentration assisted with single mobile electron
We present an efficient entanglement concentration protocol (ECP) for mobile
electrons with charge detection. This protocol differs from other ECPs in
that one can obtain a maximally entangled pair from a less-entangled pair
and a single mobile electron with a certain probability. With the help of
charge detection, it can be repeated to reach a higher success probability.
It also does not require knowledge of the coefficients of the original
less-entangled states. These advantages may make this protocol useful in
current distributed quantum information processing. Comment: 6 pages, 3 figures
Fast Monte Carlo Simulation for Patient-specific CT/CBCT Imaging Dose Calculation
Recently, X-ray imaging dose from computed tomography (CT) or cone beam CT
(CBCT) scans has become a serious concern. Patient-specific imaging dose
calculation has been proposed for the purpose of dose management. While Monte
Carlo (MC) dose calculation can be quite accurate for this purpose, it suffers
from low computational efficiency. In response to this problem, we have
successfully developed an MC dose calculation package, gCTD, on GPU architecture
under the NVIDIA CUDA platform for fast and accurate estimation of the X-ray
imaging dose received by a patient during a CT or CBCT scan. Techniques have
been developed particularly for the GPU architecture to achieve high
computational efficiency. Dose calculations using CBCT scanning geometry in a
homogeneous water phantom and a heterogeneous Zubal head phantom have shown
good agreement between gCTD and EGSnrc, indicating the accuracy of our code. In
terms of improved efficiency, it is found that gCTD attains a speed-up of ~400
times in the homogeneous water phantom and ~76.6 times in the Zubal phantom
compared to EGSnrc. As for absolute computation time, imaging dose calculation
for the Zubal phantom can be accomplished in ~17 sec with the average relative
standard deviation of 0.4%. Though our gCTD code has been developed and tested
in the context of CBCT scans, with simple modification of geometry it can be
used for assessing imaging dose in CT scans as well. Comment: 18 pages, 7 figures, and 1 table
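gCTD itself implements full photon physics on the GPU; purely as a conceptual sketch of why MC dose carries statistical fluctuation, the toy below transports photons through a 1D water slab with exponentially distributed free paths and deposits all energy at the first interaction site (no scattering; mu is an illustrative attenuation coefficient, not a value from the paper):

```python
import math
import random

def mc_depth_dose(n_photons, mu=0.02, depth=30.0, bins=30, seed=1):
    """Toy Monte Carlo depth-dose: each photon's free path is drawn from an
    exponential distribution (attenuation coefficient mu per mm) and its
    energy is deposited locally at the first interaction site."""
    rng = random.Random(seed)
    width = depth / bins
    dose = [0.0] * bins
    for _ in range(n_photons):
        path = -math.log(1.0 - rng.random()) / mu   # exponential free path
        if path < depth:
            dose[int(path / width)] += 1.0
    return [d / n_photons for d in dose]            # interacting fraction per bin

dd = mc_depth_dose(100_000)
# the relative statistical fluctuation per bin shrinks as ~1/sqrt(n_photons)
```

The per-bin relative standard deviation is exactly the kind of noise level quoted in the abstract (0.4% for the Zubal phantom), and it falls only as the square root of the photon count, which is why GPU acceleration matters.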
A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation
Targeting at the development of an accurate and efficient dose calculation
engine for online adaptive radiotherapy, we have implemented a finite size
pencil beam (FSPB) algorithm with a 3D-density correction method on GPU. This
new GPU-based dose engine is built on our previously published ultrafast FSPB
computational framework [Gu et al. Phys. Med. Biol. 54 6287-97, 2009].
Dosimetric evaluations against Monte Carlo dose calculations are conducted on
10 IMRT treatment plans (5 head-and-neck cases and 5 lung cases). For all
cases, there is improvement with the 3D-density correction over the
conventional FSPB algorithm and for most cases the improvement is significant.
Regarding the efficiency, because of the appropriate arrangement of memory
access and the usage of GPU intrinsic functions, the dose calculation for an
IMRT plan can be accomplished well within 1 second (except for one case) with
this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB
algorithm without 3D-density correction, this new algorithm, though slightly
sacrificing the computational efficiency (~5-15% lower), has significantly
improved the dose calculation accuracy, making it more suitable for online IMRT
replanning.
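The superposition step at the heart of a finite-size pencil beam algorithm (without the paper's 3D-density correction) can be sketched as a weighted sum of a lateral-spread kernel centred on each beamlet axis; the Gaussian kernel and all parameters here are illustrative stand-ins for a measured or MC-derived FSPB kernel:

```python
import math

def fspb_dose(grid_n, beamlets, sigma=2.0):
    """Pencil beam superposition (illustrative): total dose on a grid_n x
    grid_n plane is the weighted sum of a lateral spread kernel centred on
    each beamlet axis. beamlets is a list of (x, y, weight) tuples."""
    dose = [[0.0] * grid_n for _ in range(grid_n)]
    for (bx, by, w) in beamlets:
        for y in range(grid_n):
            for x in range(grid_n):
                r2 = (x - bx) ** 2 + (y - by) ** 2
                dose[y][x] += w * math.exp(-r2 / (2 * sigma ** 2))
    return dose

# two beamlets of equal weight produce a symmetric dose plane
d = fspb_dose(21, [(8, 10, 1.0), (12, 10, 1.0)])
```

Since each grid point accumulates contributions independently, the outer loops map naturally onto GPU threads, which is the parallelism the paper exploits; the density correction then modifies the kernel per ray.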
Delay-dependent robust stability of stochastic delay systems with Markovian switching
In recent years, stability of hybrid stochastic delay systems, one of the important issues in the study of stochastic systems, has received considerable attention. However, existing results do not exploit the structure of the diffusion term but instead estimate its upper bound, which introduces conservatism. This paper studies delay-dependent robust stability of hybrid stochastic delay systems. A delay-dependent criterion for robust exponential stability of hybrid stochastic delay systems is presented in terms of linear matrix inequalities (LMIs), which exploits the structure of the diffusion. Numerical examples are given to verify the effectiveness and reduced conservatism of the proposed method.
Relaxing the Irrevocability Requirement for Online Graph Algorithms
Online graph problems are considered in models where the irrevocability
requirement is relaxed. Motivated by practical examples where, for example,
there is a cost associated with building a facility and no extra cost
associated with doing it later, we consider the Late Accept model, where a
request can be accepted at a later point, but any acceptance is irrevocable.
Similarly, we also consider a Late Reject model, where an accepted request can
later be rejected, but any rejection is irrevocable (this is sometimes called
preemption). Finally, we consider the Late Accept/Reject model, where late
accepts and rejects are both allowed, but any late reject is irrevocable. For
Independent Set, the Late Accept/Reject model is necessary to obtain a constant
competitive ratio, but for Vertex Cover the Late Accept model is sufficient and
for Minimum Spanning Forest the Late Reject model is sufficient. The Matching
problem has a competitive ratio of 2, but in the Late Accept/Reject model, its
competitive ratio is 3/2.
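As a concrete illustration of irrevocable late accepts, the classic greedy for online Vertex Cover fits the Late Accept model: a vertex joins the cover only when an arriving edge forces it, and is never released. This is the generic 2-competitive greedy, sketched here for illustration, not an algorithm from the paper:

```python
def online_vertex_cover(edges):
    """Greedy online vertex cover: edges arrive one at a time; if neither
    endpoint is already in the cover, (late-)accept both endpoints.
    Acceptances are irrevocable, as the Late Accept model requires."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))   # irrevocable accepts
    return cover

c = online_vertex_cover([(1, 2), (2, 3), (3, 4)])
```

The accepted pairs form a matching, so the cover has at most twice the size of any optimal cover; allowing late rejects would not help this particular greedy, which is one way to see why Late Accept alone can suffice for Vertex Cover.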
Interdimensional degeneracies for a quantum three-body system in D dimensions
A new approach is developed to derive the complete spectrum of exact interdimensional degeneracies for a quantum three-body system in D dimensions. The new method generalizes previous methods.
Variability-selected low-luminosity active galactic nuclei candidates in the 7 Ms Chandra Deep Field-South
In deep X-ray surveys, active galactic nuclei (AGNs) with a broad range of
luminosities have been identified. However, the identification of
cosmologically distant low-luminosity AGNs (LLAGNs)
still poses a challenge due to significant contamination from
host galaxies. Based on the 7 Ms Chandra Deep Field-South (CDF-S) survey, the
longest-timescale deep X-ray survey to date, we utilize an
X-ray variability selection technique to search for LLAGNs that remain
unidentified among the CDF-S X-ray sources. We find 13 variable sources from
110 unclassified CDF-S X-ray sources. Except for one source which could be an
ultraluminous X-ray source, the variability of the remaining 12 sources is most
likely due to accreting supermassive black holes. These 12 AGN candidates have
low intrinsic X-ray luminosities. They are generally
not heavily obscured, with an average effective
power-law photon index of 1.8. The fraction of variable AGNs in the CDF-S is
independent of X-ray luminosity and is only restricted by the total number of
observed net counts, confirming previous findings that X-ray variability is a
near-ubiquitous property of AGNs over a wide range of luminosities. There is an
anti-correlation between X-ray luminosity and variability amplitude for
high-luminosity AGNs, but as the luminosity drops, the
variability amplitude no longer appears dependent on the
luminosity. The entire observed luminosity-variability trend can be roughly
reproduced by an empirical AGN variability model based on a broken power-law
power spectral density function. Comment: 18 pages, 11 figures, accepted for publication in Ap
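A standard way to quantify X-ray variability amplitude in such studies is the normalized excess variance: the light curve's sample variance minus the mean measurement variance, normalized by the squared mean flux. The sketch below assumes that estimator, which may differ in detail from the one used in the paper:

```python
def excess_variance(counts, errors):
    """Normalized excess variance of a light curve: sample variance minus
    the mean squared measurement error, divided by the squared mean flux.
    Subtracting the error term removes the variance expected from
    measurement noise alone, leaving the intrinsic variability amplitude."""
    n = len(counts)
    mean = sum(counts) / n
    s2 = sum((x - mean) ** 2 for x in counts) / (n - 1)
    mean_err2 = sum(e * e for e in errors) / n
    return (s2 - mean_err2) / mean ** 2

v = excess_variance([10, 12, 8, 10], [1, 1, 1, 1])
```

Because the noise term scales with the inverse of the collected counts, detecting variability in faint sources is limited by total net counts, consistent with the abstract's finding.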
Observation of ferromagnetism above 900 K in Cr-GaN and Cr-AlN
We report the observation of ferromagnetism at over 900 K in Cr-GaN and Cr-AlN
thin films. The saturation magnetization moments in our best films of Cr-GaN
and Cr-AlN at low temperatures are 0.42 and 0.6 μ_B/Cr atom, respectively,
indicating that 14% and 20% of the Cr atoms, respectively, are magnetically
active. While Cr-AlN is highly resistive, Cr-GaN exhibits thermally activated
conduction that follows the exponential law expected for variable range hopping
between localized states. Hall measurements on a Cr-GaN sample indicate a
mobility of 0.06 cm^2/V.s, which falls in the range characteristic of hopping
conduction, and a free carrier density (1.4×10^20 cm^-3), which is similar in
magnitude to the measured magnetically active Cr concentration (4.9×10^19 cm^-3). A
large negative magnetoresistance is attributed to scattering from loose spins
associated with non-ferromagnetic impurities. The results indicate that
ferromagnetism in Cr-GaN and Cr-AlN can be attributed to the double exchange
mechanism as a result of hopping between near-midgap substitutional Cr impurity
bands. Comment: 14 pages, 4 figures, submitted to AP
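The "exponential law expected for variable range hopping" takes, in Mott's three-dimensional form, R(T) = R0 exp((T0/T)^(1/4)). The sketch below uses illustrative parameters, not fitted values from the paper:

```python
import math

def vrh_resistance(temp, r0=1.0, t0=1.0e6):
    """Mott variable-range-hopping law in 3D: R(T) = R0 * exp((T0/T)^(1/4)).
    r0 (prefactor) and t0 (Mott characteristic temperature) are illustrative
    fit parameters, not values from the paper."""
    return r0 * math.exp((t0 / temp) ** 0.25)

# resistance rises steeply as the sample cools, the signature of hopping
assert vrh_resistance(300.0) < vrh_resistance(100.0)
```

In practice one tests for VRH by checking that ln R is linear in T^(-1/4); a straight line on that axis distinguishes hopping from simple activated (Arrhenius) conduction.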
Vector field processing on triangle meshes
While scalar fields on surfaces have been staples of geometry processing, the use of tangent vector fields has steadily grown in geometry processing over the last two decades: they are crucial to encoding directions and sizing on surfaces as commonly required in tasks such as texture synthesis, non-photorealistic rendering, digital grooming, and meshing. There are, however, a variety of discrete representations of tangent vector fields on triangle meshes, and each approach offers different tradeoffs among simplicity, efficiency, and accuracy depending on the targeted application.
This course reviews the three main families of discretizations used to design computational tools for vector field processing on triangle meshes: face-based, edge-based, and vertex-based representations. In the process of reviewing the computational tools offered by these representations, we go over a large body of recent developments in vector field processing in the area of discrete differential geometry. We also discuss the theoretical and practical limitations of each type of discretization, and cover increasingly-common extensions such as n-direction and n-vector fields.
While the course will focus on explaining the key approaches to practical encoding (including data structures) and manipulation (including discrete operators) of finite-dimensional vector fields, important differential geometric notions will also be covered: as often in Discrete Differential Geometry, the discrete picture will be used to illustrate deep continuous concepts such as covariant derivatives, metric connections, or Bochner Laplacians.
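A face-based discretization of the kind reviewed here stores one tangent vector per triangle as a 2D coefficient pair in a local basis of the face plane. The minimal pure-Python helper below (illustrative, not from the course materials) builds such an orthonormal basis:

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
def normalize(a):
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]

def face_basis(p0, p1, p2):
    """Orthonormal tangent basis of a triangle: e1 along the first edge,
    e2 = n x e1 with n the unit face normal. A face-based vector field
    stores one coefficient pair (u, v) per triangle; the 3D tangent vector
    is recovered as u*e1 + v*e2."""
    e1 = normalize(sub(p1, p0))
    n = normalize(cross(sub(p1, p0), sub(p2, p0)))
    e2 = cross(n, e1)
    return e1, e2

e1, e2 = face_basis([0, 0, 0], [1, 0, 0], [0, 1, 0])
```

Transporting a coefficient pair (u, v) to a neighbouring face then requires the rotation aligning the two local bases, which is where the metric connections and covariant derivatives mentioned above enter the discrete picture.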
