Impact of incomplete ventricular coverage on diagnostic performance of myocardial perfusion imaging.
In the context of myocardial perfusion imaging (MPI) with cardiac magnetic resonance (CMR), there is ongoing debate on the merits of using technically complex acquisition methods to achieve whole-heart spatial coverage rather than conventional 3-slice acquisition. An adequately powered comparative study is difficult to achieve given the requirement for two separate stress CMR studies in each patient. The aim of this work is to draw relevant conclusions from SPECT MPI by comparing whole-heart versus simulated 3-slice coverage in a large existing dataset. SPECT data from 651 patients with suspected coronary artery disease who underwent invasive angiography were analyzed. A computational approach was designed to model 3-slice MPI by retrospective subsampling of whole-heart data. For both the whole-heart and 3-slice approaches, the diagnostic performance and the stress total perfusion deficit (TPD) score, a measure of ischemia extent/severity, were quantified and compared. Diagnostic accuracy for the 3-slice and whole-heart approaches was similar (area under the curve: 0.843 vs. 0.855, respectively; P = 0.07). The majority (54%) of cases missed by 3-slice imaging had primarily apical ischemia. Whole-heart and 3-slice TPD scores were strongly correlated (R^2 = 0.93, P < 0.001), but 3-slice TPD showed a small yet significant bias compared to whole-heart TPD (-1.19%; P < 0.0001) and the 95% limits of agreement were relatively wide (-6.65% to 4.27%). The incomplete ventricular coverage typically acquired in 3-slice CMR MPI does not significantly affect diagnostic accuracy. However, 3-slice MPI may fail to detect severe apical ischemia and may underestimate the extent/severity of perfusion defects. Our results suggest that caution is required when comparing the ischemic burden between 3-slice and whole-heart datasets, and corroborate the need to establish prognostic thresholds specific to each approach.
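The subsampling and agreement analysis lend themselves to a short illustration. Below is a minimal Python sketch (hypothetical function and variable names, not the authors' software) of how a 3-slice TPD score could be approximated from per-slice whole-heart scores, and how the bias and 95% limits of agreement between paired TPD scores might be computed:

```python
import numpy as np

def simulated_three_slice_tpd(slice_tpd, slice_labels,
                              kept=("basal", "mid", "apical")):
    """Hypothetical subsampling: keep only the per-slice perfusion-deficit
    scores at three conventional short-axis positions and average them,
    mimicking 3-slice coverage of an otherwise whole-heart acquisition."""
    scores = [s for s, label in zip(slice_tpd, slice_labels) if label in kept]
    return float(np.mean(scores))

def bland_altman(whole_heart_tpd, three_slice_tpd):
    """Bias and 95% limits of agreement between paired TPD scores."""
    diff = (np.asarray(three_slice_tpd, dtype=float)
            - np.asarray(whole_heart_tpd, dtype=float))
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```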
Adenomatoid odontogenic tumor with impacted mandibular canine: a case report
The adenomatoid odontogenic tumor (AOT) is a rare, slow-growing, benign odontogenic epithelial tumor with characteristic clinical and histological features, which usually arises in the second or third decade of life. It is composed of odontogenic epithelium in a variety of histoarchitectural patterns embedded in a mature connective tissue stroma. It is mostly encountered in young patients, with a greater predilection for females. The maxilla is the predilection site of occurrence, and the tumor is most commonly associated with an unerupted maxillary canine. It presents as a symptom-free lesion and is frequently discovered during routine radiographic examination. This case report describes an unusual case of a 20-year-old male with only a one-month history of a tumor in the anterior mandible. The tumor was a well-circumscribed intraosseous lesion with an embedded tooth. Histological evidence of calcification was present. The present case lends support to the categorization of AOT as a mixed odontogenic tumor.
Derandomized Construction of Combinatorial Batch Codes
Combinatorial batch codes (CBCs), a replication-based variant of batch codes introduced by Ishai et al. in STOC 2004, abstract the following data distribution problem: n data items are to be replicated among m servers in such a way that any k of the data items can be retrieved by reading at most one item from each server, with the total amount of storage over the m servers restricted to N. Given parameters m, c, and k, where c and k are constants, one of the challenging problems is to construct c-uniform CBCs (CBCs in which each data item is replicated among exactly c servers) that maximize the value of n. In this work, we present an explicit construction of c-uniform CBCs with data items. The construction has the property that the servers are almost regular, i.e., the number of data items stored in each server lies in the range . The construction is obtained through a sharper analysis and derandomization of the randomized construction presented by Ishai et al. The analysis reveals the almost-regularity of the servers, an aspect that so far has not been addressed in the literature. The derandomization leads to explicit constructions for a wide range of values of k (for given m and c) where no other explicit construction with similar parameters is known. Finally, we discuss the possibility of a parallel derandomization of the construction.
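As a concrete illustration of the c-uniform CBC framework described above, the sketch below (illustrative only, not the paper's explicit construction) builds a random c-uniform assignment of items to servers and checks the batch-code property directly: any k items must admit a system of distinct representatives among their servers, which is verified by augmenting-path bipartite matching:

```python
import itertools
import random

def random_c_uniform_cbc(n, m, c, seed=0):
    """Assign each of n data items to exactly c distinct servers (0 .. m-1)."""
    rng = random.Random(seed)
    return [rng.sample(range(m), c) for _ in range(n)]

def retrievable(code, items):
    """Can the given items be read with at most one item per server?
    Equivalent to a perfect matching of items to distinct servers."""
    match = {}  # server -> item currently assigned to it

    def augment(item, seen):
        for server in code[item]:
            if server in seen:
                continue
            seen.add(server)
            if server not in match or augment(match[server], seen):
                match[server] = item
                return True
        return False

    return all(augment(item, set()) for item in items)

def is_cbc(code, k):
    """Brute-force check over all k-subsets of items (small instances only)."""
    return all(retrievable(code, subset)
               for subset in itertools.combinations(range(len(code)), k))

# Example: 12 items, 8 servers, replication c = 3, batch size k = 4.
print(is_cbc(random_c_uniform_cbc(n=12, m=8, c=3), k=4))
```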
Algorithms for outerplanar graph roots and graph roots of pathwidth at most 2
Deciding whether a given graph has a square root is a classical problem that
has been studied extensively both from graph theoretic and from algorithmic
perspectives. The problem is NP-complete in general, and consequently
substantial effort has been dedicated to deciding whether a given graph has a
square root that belongs to a particular graph class. There are both
polynomial-time solvable and NP-complete cases, depending on the graph class.
We contribute new results in this direction. Given an arbitrary input
graph G, we give polynomial-time algorithms to decide whether G has an
outerplanar square root, and whether G has a square root that is of pathwidth
at most 2.
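For reference, the square of a graph and a straightforward verifier for a candidate square root take only a few lines (a sketch using networkx; it only checks a given candidate H and is not the recognition algorithm developed in the paper):

```python
import networkx as nx

def graph_square(H):
    """Square of H: same vertex set, with an edge uv whenever u and v are at
    distance 1 or 2 in H."""
    G = nx.Graph()
    G.add_nodes_from(H.nodes)
    for u in H.nodes:
        dists = nx.single_source_shortest_path_length(H, u, cutoff=2)
        for v, d in dists.items():
            if 1 <= d <= 2:
                G.add_edge(u, v)
    return G

def is_square_root(H, G):
    """Check whether a candidate graph H satisfies H^2 = G."""
    S = graph_square(H)
    return (set(S.nodes) == set(G.nodes)
            and {frozenset(e) for e in S.edges} == {frozenset(e) for e in G.edges})

# Example: a path is an outerplanar (and pathwidth-1) square root of its square.
P5 = nx.path_graph(5)
print(is_square_root(P5, graph_square(P5)))  # True
```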
Decision and function problems based on boson sampling
Boson sampling is a mathematical problem that is strongly believed to be
intractable for classical computers, whereas passive linear interferometers can
produce samples efficiently. So far, the problem remains a computational
curiosity, and the possible usefulness of boson-sampling devices is mainly
limited to the proof of quantum supremacy. The purpose of this work is to
investigate whether boson sampling can be used as a resource for decision and
function problems that are computationally hard, and may thus have
cryptographic applications. After the definition of a rather general
theoretical framework for the design of such problems, we discuss their
solution by means of a brute-force numerical approach, as well as by means of
non-boson samplers. Moreover, we estimate the sample sizes required for their
solution by passive linear interferometers, and we show that they are
independent of the size of the Hilbert space.
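The classical intractability referred to above comes from the fact that boson-sampling output probabilities are governed by permanents of submatrices of the interferometer's unitary, and no polynomial-time algorithm for the permanent is known. A minimal sketch of the exponential-time evaluation (Ryser's inclusion-exclusion formula) that a brute-force classical approach would face:

```python
import itertools
import numpy as np

def permanent_ryser(A):
    """Permanent of an n x n matrix via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i sum_{j in S} A[i, j].
    The cost grows exponentially with n, which is what makes classical
    simulation of boson sampling believed to be intractable."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = A[:, cols].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

# Sanity check: perm([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10.
print(permanent_ryser(np.array([[1.0, 2.0], [3.0, 4.0]])))
```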
Worst case and probabilistic analysis of the 2-Opt algorithm for the TSP
2-Opt is probably the most basic local search heuristic for the TSP. This heuristic achieves amazingly good results on “real world” Euclidean instances both with respect to running time and approximation ratio. There are numerous experimental studies on the performance of 2-Opt. However, the theoretical knowledge about this heuristic is still very limited. Not even its worst case running time on 2-dimensional Euclidean instances was known so far. We clarify this issue by presenting, for every p ∈ N, a family of L_p instances on which 2-Opt can take an exponential number of steps.
Previous probabilistic analyses were restricted to instances in which n points are placed uniformly at random in the unit square [0,1]^2, where it was shown that the expected number of steps is bounded by Õ(n^10) for Euclidean instances. We consider a more advanced model of probabilistic instances in which the points can be placed independently according to general distributions on [0,1]^d, for an arbitrary d ≥ 2. In particular, we allow different distributions for different points. We study the expected number of local improvements in terms of the number n of points and the maximal density ϕ of the probability distributions. We show an upper bound on the expected length of any 2-Opt improvement path of Õ(n^(4+1/3) ⋅ ϕ^(8/3)). When starting with an initial tour computed by an insertion heuristic, the upper bound on the expected number of steps improves even to Õ(n^(4+1/3−1/d) ⋅ ϕ^(8/3)). If the distances are measured according to the Manhattan metric, then the expected number of steps is bounded by Õ(n^(4−1/d) ⋅ ϕ). In addition, we prove an upper bound of O(ϕ^(1/d)) on the expected approximation factor with respect to all L_p metrics.
Let us remark that our probabilistic analysis covers as special cases the uniform input model with ϕ = 1 and a smoothed analysis with Gaussian perturbations of standard deviation σ with ϕ ∼ 1/σ^d.
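For concreteness, here is a compact sketch of 2-Opt local search on Euclidean instances (an illustration, not the exact variant analyzed in the paper): a single improvement step replaces two tour edges by two shorter ones, i.e., reverses a tour segment, and the heuristic iterates until no improving step remains.

```python
import math
import random

def tour_length(points, tour):
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """Run 2-Opt to a local optimum: whenever reversing the segment between
    two non-adjacent edges shortens the tour, perform that reversal."""
    improved = True
    n = len(tour)
    while improved:
        improved = False
        for i in range(n - 1):
            # skip j = n-1 when i = 0, since those two edges share a vertex
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (math.dist(points[a], points[c])
                         + math.dist(points[b], points[d])
                         - math.dist(points[a], points[b])
                         - math.dist(points[c], points[d]))
                if delta < -1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# Example: uniform instance in the unit square (phi = 1 in the paper's model).
pts = [(random.random(), random.random()) for _ in range(100)]
print(tour_length(pts, two_opt(pts, list(range(len(pts))))))
```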
First-Hitting Times Under Additive Drift
For the last ten years, almost every theoretical result concerning the
expected run time of a randomized search heuristic used drift theory, making it
arguably the most important tool in this domain. Its success is due to its ease
of use and its powerful result: drift theory allows the user to derive bounds
on the expected first-hitting time of a random process by bounding expected
local changes of the process -- the drift. This is usually far easier than
bounding the expected first-hitting time directly.
Due to the widespread use of drift theory, it is of utmost importance to have
the best drift theorems possible. We improve the fundamental additive,
multiplicative, and variable drift theorems by stating them in a form as
general as possible and providing examples of why the restrictions we keep are
still necessary. Our additive drift theorem for upper bounds only requires the
process to be nonnegative, that is, we remove unnecessary restrictions like a
finite, discrete, or bounded search space. As corollaries, the same is true for
our upper bounds in the case of variable and multiplicative drift.
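To make the statement concrete: for a nonnegative process X_t whose expected one-step decrease (the drift) is at least δ whenever X_t > 0, the additive drift theorem bounds the expected first-hitting time T of 0 by E[T] ≤ X_0/δ. A small self-contained check on a toy process where the drift is exactly δ = p (an illustration, not part of the paper):

```python
import random

def empirical_hitting_time(x0, p, trials=10_000, seed=1):
    """Toy nonnegative process on {0, 1, ..., x0}: in each step the state
    decreases by 1 with probability p and stays put otherwise, so the drift
    is exactly p whenever the state is positive.  The additive drift theorem
    gives E[T] <= x0 / p for the first-hitting time T of state 0 (here tight)."""
    rng = random.Random(seed)
    total_steps = 0
    for _ in range(trials):
        x, t = x0, 0
        while x > 0:
            if rng.random() < p:
                x -= 1
            t += 1
        total_steps += t
    return total_steps / trials

# Empirical mean vs. the additive drift bound x0 / delta with x0 = 50, delta = 0.25.
print(empirical_hitting_time(50, 0.25), "<=", 50 / 0.25)
```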
Quantum Portfolios
Quantum computation holds promise for the solution of many intractable
problems. However, since many quantum algorithms are stochastic in nature they
can only find the solution of hard problems probabilistically. Thus the
efficiency of the algorithms has to be characterized both by the expected time
to completion and the associated variance. In order to minimize both the
running time and its uncertainty, we show that portfolios of quantum algorithms,
analogous to those of finance, can outperform single algorithms when applied to
NP-complete problems such as 3-SAT.
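The portfolio idea can be illustrated classically: run several stochastic solvers concurrently on shares of a processor, stop when the first one finishes, and compare the mean and spread of the completion time, in direct analogy to return and risk. A toy Monte Carlo sketch (the runtime distributions below are made up for illustration; this is not the paper's quantum model):

```python
import random
import statistics

def sample_runtime(name, rng):
    """One completion time of a stochastic (Las Vegas) solver."""
    if name == "risky":
        # usually fast, occasionally very slow (heavy tail)
        return rng.expovariate(1.0) if rng.random() < 0.9 else rng.expovariate(0.05)
    return rng.expovariate(0.25)  # "steady": moderate but predictable

def portfolio_time(weights, rng):
    """Each solver runs on a fraction w of the processor; the portfolio
    finishes as soon as the first solver does, i.e., at min_i(t_i / w_i)."""
    return min(sample_runtime(name, rng) / w
               for name, w in weights.items() if w > 0)

def evaluate(weights, trials=20_000, seed=7):
    rng = random.Random(seed)
    times = [portfolio_time(weights, rng) for _ in range(trials)]
    return statistics.mean(times), statistics.stdev(times)

# The mixed portfolio hedges the heavy tail of the risky solver,
# reducing the spread of completion times.
print(evaluate({"risky": 1.0, "steady": 0.0}))
print(evaluate({"risky": 0.6, "steady": 0.4}))
```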
Minimizing energy below the glass thresholds
Focusing on the optimization version of the random K-satisfiability problem,
the MAX-K-SAT problem, we study the performance of the finite energy version of
the Survey Propagation (SP) algorithm. We show that a simple (linear time)
backtrack decimation strategy is sufficient to reach configurations well below
the lower bound for the dynamic threshold energy and very close to the analytic
prediction for the optimal ground states. A comparative numerical study on one
of the most efficient local search procedures is also given.
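For context on the local search baseline mentioned above, a WalkSAT-style search that minimizes the number of unsatisfied clauses can be sketched in a few lines (an illustrative energy-minimization baseline only, not the Survey Propagation algorithm or the specific procedure used in the paper):

```python
import random

def energy(clauses, assignment):
    """Energy = number of unsatisfied clauses (literals are DIMACS-style ints)."""
    return sum(not any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

def walksat(clauses, n_vars, max_flips=50_000, noise=0.5, seed=0):
    """Repeatedly pick an unsatisfied clause and flip either a random variable
    in it (with probability `noise`) or the variable whose flip gives the
    lowest energy; keep the best assignment seen."""
    rng = random.Random(seed)
    assignment = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    best_e, best = energy(clauses, assignment), dict(assignment)
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any(assignment[abs(lit)] == (lit > 0) for lit in c)]
        if not unsat:
            return assignment, 0
        clause = rng.choice(unsat)
        if rng.random() < noise:
            var = abs(rng.choice(clause))
        else:
            def after_flip(v):
                assignment[v] = not assignment[v]
                e = energy(clauses, assignment)
                assignment[v] = not assignment[v]
                return e
            var = min({abs(lit) for lit in clause}, key=after_flip)
        assignment[var] = not assignment[var]
        e = energy(clauses, assignment)
        if e < best_e:
            best_e, best = e, dict(assignment)
    return best, best_e

# Random 3-SAT instance at clause density alpha = 4.2 (near the SAT threshold).
rng = random.Random(1)
n, alpha = 200, 4.2
clauses = [[rng.choice([-1, 1]) * v for v in rng.sample(range(1, n + 1), 3)]
           for _ in range(int(alpha * n))]
print(walksat(clauses, n)[1])  # number of clauses left unsatisfied
```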