
    Adenomatoid odontogenic tumor with impacted mandibular canine: a case report

    The adenomatoid odontogenic tumor (AOT) is a rare, slow-growing, benign odontogenic epithelial tumor with characteristic clinical and histological features, usually arising in the second or third decade of life. It is composed of odontogenic epithelium in a variety of histoarchitectural patterns embedded in a mature connective tissue stroma. It is mostly encountered in young patients, with a predilection for females. The maxilla is the most common site of occurrence, typically in association with an unerupted maxillary canine. The lesion is usually symptom-free and is frequently discovered during routine radiographic examination. This case report describes an unusual case of a 20-year-old male with only a one-month history of a tumor in the anterior mandible. The tumor was a well-circumscribed intraosseous lesion with an embedded tooth, and histological evidence of calcification was present. The present case lends support to the categorization of AOT as a mixed odontogenic tumor.

    Derandomized Construction of Combinatorial Batch Codes

    Combinatorial Batch Codes (CBCs), a replication-based variant of batch codes introduced by Ishai et al. in STOC 2004, abstract the following data distribution problem: n data items are to be replicated among m servers in such a way that any k of the n data items can be retrieved by reading at most one item from each server, with the total amount of storage over the m servers restricted to N. Given parameters m, c, and k, where c and k are constants, one of the challenging problems is to construct c-uniform CBCs (CBCs in which each data item is replicated among exactly c servers) that maximize the value of n. In this work, we present an explicit construction of c-uniform CBCs with Ω(m^{c-1+1/k}) data items. The construction has the property that the servers are almost regular, i.e., the number of data items stored on each server lies in the range [nc/m − √((n/2)ln(4m)), nc/m + √((n/2)ln(4m))]. The construction is obtained through a sharper analysis and derandomization of the randomized construction presented by Ishai et al. The analysis reveals the almost-regularity of the servers, an aspect that so far has not been addressed in the literature. The derandomization yields explicit constructions for a wide range of values of c (for given m and k) for which no other explicit construction with similar parameters, i.e., with n = Ω(m^{c-1+1/k}), is known. Finally, we discuss the possibility of a parallel derandomization of the construction.
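
    To make the construction concrete, the following minimal Python sketch (illustrative names and parameters, not from the paper) draws the randomized c-uniform assignment that the paper derandomizes and checks the server loads against the almost-regularity range quoted above.

        import math
        import random
        from collections import Counter

        def random_c_uniform_assignment(n, m, c, seed=0):
            # Place each of the n data items on a uniformly random set of c
            # distinct servers: the randomized construction the paper derandomizes.
            rng = random.Random(seed)
            return [rng.sample(range(m), c) for _ in range(n)]

        def is_almost_regular(assignment, n, m, c):
            # Check every server load against the range
            # [nc/m - sqrt((n/2) ln(4m)), nc/m + sqrt((n/2) ln(4m))].
            loads = Counter(s for servers in assignment for s in servers)
            mean, slack = n * c / m, math.sqrt((n / 2) * math.log(4 * m))
            return all(mean - slack <= loads[s] <= mean + slack for s in range(m))

        n, m, c = 10_000, 50, 3
        print(is_almost_regular(random_c_uniform_assignment(n, m, c), n, m, c))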

    Algorithms for outerplanar graph roots and graph roots of pathwidth at most 2

    Deciding whether a given graph has a square root is a classical problem that has been studied extensively from both graph-theoretic and algorithmic perspectives. The problem is NP-complete in general, and consequently substantial effort has been dedicated to deciding whether a given graph has a square root that belongs to a particular graph class. There are both polynomial-time solvable and NP-complete cases, depending on the graph class. We contribute new results in this direction. Given an arbitrary input graph G, we give polynomial-time algorithms to decide whether G has an outerplanar square root, and whether G has a square root of pathwidth at most 2.
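
    For reference, H is a square root of G when G = H^2, i.e., two vertices are adjacent in G exactly when their distance in H is at most two. The sketch below (hypothetical helper names; not the paper's polynomial-time algorithms, which avoid brute force) squares a graph and verifies a candidate root.

        from itertools import combinations

        def graph_square(adj):
            # adj maps each vertex of H to its neighbour set; returns H^2,
            # where u and v are adjacent iff their distance in H is at most 2.
            square = {v: set(nbrs) for v, nbrs in adj.items()}
            for nbrs in adj.values():
                for u, w in combinations(nbrs, 2):
                    square[u].add(w)
                    square[w].add(u)
            return square

        def is_square_root(candidate, g):
            # Verify a candidate root by squaring it and comparing adjacencies.
            return graph_square(candidate) == g

        # The path 1-2-3-4 is an outerplanar, pathwidth-1 square root of its square.
        p4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
        print(is_square_root(p4, graph_square(p4)))  # True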

    Decision and function problems based on boson sampling

    Boson sampling is a mathematical problem that is strongly believed to be intractable for classical computers, whereas passive linear interferometers can produce samples efficiently. So far, the problem remains a computational curiosity, and the possible usefulness of boson-sampling devices is mainly limited to proofs of quantum supremacy. The purpose of this work is to investigate whether boson sampling can be used as a resource for decision and function problems that are computationally hard and may thus have cryptographic applications. After defining a rather general theoretical framework for the design of such problems, we discuss their solution by means of a brute-force numerical approach as well as by means of non-boson samplers. Moreover, we estimate the sample sizes required for their solution by passive linear interferometers and show that these sizes are independent of the size of the Hilbert space. Comment: Close to the version published in PR
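
    The hardness invoked here stems from the fact that boson-sampling output probabilities are proportional to |Perm(A)|^2 for submatrices A of the interferometer's unitary, and no efficient classical algorithm for the permanent is known. As a rough illustration (a sketch, not the paper's protocol), the function below evaluates the permanent via Ryser's formula in O(2^n n^2) time, roughly the per-outcome cost of the brute-force approach.

        import itertools
        import numpy as np

        def permanent(a):
            # Ryser's formula: perm(A) = (-1)^n * sum over column subsets S of
            # (-1)^{|S|} * prod_i sum_{j in S} a[i, j].
            n = a.shape[0]
            total = 0.0
            for mask in itertools.product((0, 1), repeat=n):
                cols = [j for j, bit in enumerate(mask) if bit]
                if cols:
                    total += (-1) ** (n - len(cols)) * np.prod(a[:, cols].sum(axis=1))
            return total

        # With the identity as a toy 3-mode "interferometer", the coincidence
        # probability |perm(U)|^2 equals 1: each photon exits where it entered.
        print(abs(permanent(np.eye(3))) ** 2)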

    Worst case and probabilistic analysis of the 2-Opt algorithm for the TSP

    2-Opt is probably the most basic local search heuristic for the TSP. This heuristic achieves amazingly good results on “real world” Euclidean instances both with respect to running time and approximation ratio. There are numerous experimental studies on the performance of 2-Opt. However, the theoretical knowledge about this heuristic is still very limited. Not even its worst-case running time on 2-dimensional Euclidean instances was known so far. We clarify this issue by presenting, for every p ∈ ℕ, a family of L_p instances on which 2-Opt can take an exponential number of steps. Previous probabilistic analyses were restricted to instances in which n points are placed uniformly at random in the unit square [0,1]^2, where it was shown that the expected number of steps is bounded by Õ(n^{10}) for Euclidean instances. We consider a more advanced model of probabilistic instances in which the points can be placed independently according to general distributions on [0,1]^d, for an arbitrary d ≥ 2. In particular, we allow different distributions for different points. We study the expected number of local improvements in terms of the number n of points and the maximal density ϕ of the probability distributions. We show an upper bound on the expected length of any 2-Opt improvement path of Õ(n^{4+1/3}·ϕ^{8/3}). When starting with an initial tour computed by an insertion heuristic, the upper bound on the expected number of steps improves even to Õ(n^{4+1/3−1/d}·ϕ^{8/3}). If the distances are measured according to the Manhattan metric, then the expected number of steps is bounded by Õ(n^{4−1/d}·ϕ). In addition, we prove an upper bound of O(ϕ^{1/d}) on the expected approximation factor with respect to all L_p metrics. Let us remark that our probabilistic analysis covers as special cases the uniform input model with ϕ = 1 and a smoothed analysis with Gaussian perturbations of standard deviation σ, with ϕ ~ 1/σ^d.
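
    For readers unfamiliar with the heuristic: a 2-Opt step removes two edges of the tour and reconnects the two resulting paths by reversing one segment whenever that shortens the tour; the bounds above count exactly these improving steps. A minimal sketch on uniform random points in [0,1]^2 (the ϕ = 1 special case; illustrative code, not the paper's experiments):

        import math
        import random

        def two_opt(points, tour):
            # If replacing edges (a,b),(c,d) by (a,c),(b,d) shortens the tour,
            # reverse the segment between them; stop at a local optimum.
            # Each accepted reversal is one 2-Opt improving step.
            n, improved = len(tour), True
            while improved:
                improved = False
                for i in range(n - 1):
                    for j in range(i + 2, n - (i == 0)):
                        a, b = points[tour[i]], points[tour[i + 1]]
                        c, d = points[tour[j]], points[tour[(j + 1) % n]]
                        if math.dist(a, c) + math.dist(b, d) < math.dist(a, b) + math.dist(c, d):
                            tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                            improved = True
            return tour

        random.seed(1)
        pts = [(random.random(), random.random()) for _ in range(200)]
        tour = two_opt(pts, list(range(len(pts))))
        print(sum(math.dist(pts[tour[i]], pts[tour[i - 1]]) for i in range(len(pts))))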

    First-Hitting Times Under Additive Drift

    For the last ten years, almost every theoretical result concerning the expected run time of a randomized search heuristic has used drift theory, making it arguably the most important tool in this domain. Its success is due to its ease of use and its powerful result: drift theory allows the user to derive bounds on the expected first-hitting time of a random process by bounding the expected local changes of the process, the drift. This is usually far easier than bounding the expected first-hitting time directly. Due to the widespread use of drift theory, it is of utmost importance to have the best drift theorems possible. We improve the fundamental additive, multiplicative, and variable drift theorems by stating them in a form as general as possible and providing examples of why the restrictions we keep are still necessary. Our additive drift theorem for upper bounds only requires the process to be nonnegative; that is, we remove unnecessary restrictions such as a finite, discrete, or bounded search space. As corollaries, the same holds for our upper bounds in the cases of variable and multiplicative drift.
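
    The core statement being generalized is the additive drift theorem: if a nonnegative process (X_t) satisfies E[X_t − X_{t+1} | X_t > 0] ≥ δ for some δ > 0, then the expected first-hitting time T of 0 satisfies E[T] ≤ X_0/δ. A toy Monte Carlo check with illustrative parameters (not from the paper):

        import random

        def hitting_time(x0, rng, delta=0.2):
            # Nonnegative walk stepping -1 with probability (1 + delta)/2 and +1
            # otherwise, so the drift toward 0 is exactly delta per step.
            x, t = x0, 0
            while x > 0:
                x = max(0.0, x - 1 if rng.random() < (1 + delta) / 2 else x + 1)
                t += 1
            return t

        rng = random.Random(0)
        x0, delta, runs = 50.0, 0.2, 1000
        avg = sum(hitting_time(x0, rng, delta) for _ in range(runs)) / runs
        print(f"empirical E[T] ~ {avg:.1f}; drift bound x0/delta = {x0 / delta:.1f}")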

    Quantum Portfolios

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can only find the solution of hard problems probabilistically. Thus the efficiency of an algorithm has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-SAT. Comment: revision includes additional data and corrects minor typos.
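
    The classical intuition behind such portfolios can be illustrated with a short Monte Carlo: when single-run times are heavy-tailed, time-sharing a processor between two independent runs and stopping at the first success can reduce both the mean and the spread of the completion time. The lognormal run-time model below is a stand-in assumption, not data from the paper.

        import random
        import statistics

        rng = random.Random(0)
        N = 20_000

        # Hypothetical heavy-tailed run-time distribution of one stochastic solver.
        draw = lambda: rng.lognormvariate(3.0, 1.0)

        single = [draw() for _ in range(N)]
        # A two-algorithm portfolio time-shares one processor (hence the factor 2)
        # and stops as soon as either independent run succeeds (hence the min).
        portfolio = [2 * min(draw(), draw()) for _ in range(N)]

        for label, ts in (("single run", single), ("2-run portfolio", portfolio)):
            print(f"{label}: mean = {statistics.mean(ts):.1f}, stdev = {statistics.stdev(ts):.1f}")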

    Minimizing energy below the glass thresholds

    Focusing on the optimization version of the random K-satisfiability problem, the MAX-K-SAT problem, we study the performance of the finite-energy version of the Survey Propagation (SP) algorithm. We show that a simple (linear-time) backtracking decimation strategy is sufficient to reach configurations well below the lower bound for the dynamic threshold energy and very close to the analytic prediction for the optimal ground states. A comparative numerical study with one of the most efficient local search procedures is also given. Comment: 12 pages, submitted to Phys. Rev. E, accepted for publication.
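
    For context, the energy minimized here is the number of violated clauses. The sketch below is not Survey Propagation; it is a Schoening-style random-walk baseline of the kind such comparisons use (hypothetical helper names), flipping a random variable of a random unsatisfied clause and tracking the best energy seen.

        import random

        def random_walk_maxksat(clauses, n_vars, max_flips=100_000, seed=0):
            # Literals are nonzero ints; -v means variable v negated.  The energy
            # of an assignment is the number of clauses it leaves unsatisfied.
            rng = random.Random(seed)
            assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
            unsat = lambda: [cl for cl in clauses
                             if not any(assign[abs(l)] == (l > 0) for l in cl)]
            best = len(unsat())
            for _ in range(max_flips):
                violated = unsat()
                if not violated:
                    return assign, 0
                v = abs(rng.choice(rng.choice(violated)))
                assign[v] = not assign[v]
                best = min(best, len(unsat()))
            return assign, best

        # Toy instance: (x1 or not x2 or x3) and (not x1 or x2 or not x3).
        print(random_walk_maxksat([(1, -2, 3), (-1, 2, -3)], 3)[1])  # 0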