
    The Complexity of Routing with Few Collisions

    We study the computational complexity of routing multiple objects through a network in such a way that only a few collisions occur: given a graph $G$ with two distinct terminal vertices and two positive integers $p$ and $k$, the question is whether one can connect the terminals by at least $p$ routes (e.g. paths) such that at most $k$ edges are time-wise shared among them. We study three types of routes: traverse each vertex at most once (paths), each edge at most once (trails), or no such restrictions (walks). We prove that for paths and trails the problem is NP-complete on undirected and directed graphs, even if $k$ is constant or the maximum vertex degree in the input graph is constant. For walks, however, it is solvable in polynomial time on undirected graphs for arbitrary $k$, and on directed graphs if $k$ is constant. We additionally study, for all route types, a variant of the problem where the maximum length of a route is restricted by some given upper bound. We prove that this length-restricted variant has the same complexity classification with respect to paths and trails, but for walks it becomes NP-complete on undirected graphs.
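    To make the decision problem concrete, the following is a minimal brute-force sketch of the paths variant on undirected graphs. It is not the authors' algorithm (the paper shows this variant is NP-complete); it simply treats a collision as an edge used by more than one chosen path, glossing over the time-wise aspect, and the function name and use of networkx are our own.

```python
# Brute-force check of the "few collisions" question for paths: exponential,
# illustration only. Assumes an undirected networkx graph.
from itertools import combinations
import networkx as nx

def few_collisions_paths(G, s, t, p, k):
    """True if some p simple s-t paths use at most k edges more than once."""
    paths = [tuple(zip(q, q[1:])) for q in nx.all_simple_paths(G, s, t)]
    for choice in combinations(paths, p):
        counts = {}
        for path in choice:
            for u, v in path:
                e = frozenset((u, v))            # undirected edge
                counts[e] = counts.get(e, 0) + 1
        if sum(1 for c in counts.values() if c > 1) <= k:
            return True
    return False

G = nx.cycle_graph(6)                            # 0-1-2-3-4-5-0
print(few_collisions_paths(G, 0, 3, 2, 0))       # True: two edge-disjoint 0-3 paths
```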

    On the complexity of computing the $k$-restricted edge-connectivity of a graph

    The \emph{$k$-restricted edge-connectivity} of a graph $G$, denoted by $\lambda_k(G)$, is defined as the minimum size of an edge set whose removal leaves exactly two connected components, each containing at least $k$ vertices. This graph invariant, which can be seen as a generalization of a minimum edge cut, has been extensively studied from a combinatorial point of view. However, very little is known about the complexity of computing $\lambda_k(G)$. Very recently, the parameterized complexity community has defined the notion of a \emph{good edge separation} of a graph, which happens to be essentially the same as the $k$-restricted edge-connectivity. Motivated by the relevance of this invariant from both combinatorial and algorithmic points of view, in this article we initiate a systematic study of its computational complexity, with special emphasis on its parameterized complexity for several choices of the parameters. We provide a number of NP-hardness and W[1]-hardness results, as well as FPT algorithms. Comment: 16 pages, 4 figures
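    As a concrete reading of the definition, here is a brute-force sketch that computes $\lambda_k(G)$ by trying every bipartition with both sides connected and of size at least $k$. It is exponential in the number of vertices and purely illustrative; it is none of the paper's FPT algorithms.

```python
# Brute-force lambda_k(G): smallest cut over bipartitions (S, V \ S) where both
# sides induce connected subgraphs with at least k vertices each.
from itertools import combinations
import networkx as nx

def restricted_edge_connectivity(G, k):
    nodes = list(G.nodes)
    best = None
    for r in range(k, len(nodes) - k + 1):
        for S in map(set, combinations(nodes, r)):
            T = set(nodes) - S
            # Removal must leave exactly two connected components.
            if not (nx.is_connected(G.subgraph(S)) and nx.is_connected(G.subgraph(T))):
                continue
            cut = sum(1 for u, v in G.edges if (u in S) != (v in S))
            best = cut if best is None else min(best, cut)
    return best

print(restricted_edge_connectivity(nx.path_graph(4), 2))   # 1: cut the middle edge
```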

    Conformal field theories in anti-de Sitter space

    In this paper we discuss the dynamics of conformal field theories on anti-de Sitter space, focussing on the special case of the N=4 supersymmetric Yang-Mills theory on AdS_4. We argue that the choice of boundary conditions, in particular for the gauge field, has a large effect on the dynamics. For example, for weak coupling, one of two natural choices of boundary conditions for the gauge field leads to a large N deconfinement phase transition as a function of the temperature, while the other does not. For boundary conditions that preserve supersymmetry, the strong coupling dynamics can be analyzed using S-duality (relevant for g_{YM} >> 1), utilizing results of Gaiotto and Witten, as well as by using the AdS/CFT correspondence (relevant for large N and large 't Hooft coupling). We argue that some very specific choices of boundary conditions lead to a simple dual gravitational description for this theory, while for most choices the gravitational dual is not known. In the cases where the gravitational dual is known, we discuss the phase structure at large 't Hooft coupling. Comment: 57 pages, 1 figure. v2: fixed typo

    The approximability of the String Barcoding problem

    The String Barcoding (SBC) problem, introduced by Rash and Gusfield (RECOMB, 2002), consists in finding a minimum set of substrings that can be used to distinguish between all members of a set of given strings. In a computational biology context, the given strings represent a set of known viruses, while the substrings can be used as probes for a hybridization experiment via microarray. Ultimately, one aims to classify new strings (unknown viruses) from the result of the hybridization experiment. In this paper we show that SBC is as hard to approximate as Set Cover. Furthermore, we show that the constrained version of SBC (with probes of bounded length) is also hard to approximate. These negative results are tight.
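    The Set Cover connection suggests the natural greedy heuristic: each candidate substring "covers" the pairs of input strings it distinguishes, and one repeatedly picks the substring covering the most still-unseparated pairs. The sketch below is that textbook greedy, shown only to illustrate the problem; it is not code from the paper, and it assumes the input strings are pairwise distinct (tie-breaking is arbitrary).

```python
# Greedy String Barcoding via its Set Cover view: illustration only.
from itertools import combinations

def greedy_barcode(strings):
    candidates = {s[i:j] for s in strings
                  for i in range(len(s)) for j in range(i + 1, len(s) + 1)}
    uncovered = set(combinations(range(len(strings)), 2))
    probes = []
    while uncovered:
        # Pick the substring distinguishing the most unseparated pairs.
        best = max(candidates,
                   key=lambda q: sum((q in strings[a]) != (q in strings[b])
                                     for a, b in uncovered))
        probes.append(best)
        uncovered = {(a, b) for a, b in uncovered
                     if (best in strings[a]) == (best in strings[b])}
    return probes

print(greedy_barcode(["ACGT", "AGGT", "ACCT"]))   # e.g. a 2-probe barcode
```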

    Extraction, integration and analysis of alternative splicing and protein structure distributed information

    Background: Alternative splicing has been demonstrated to affect most human genes; different isoforms from the same gene encode proteins that differ in a limited number of residues, thus yielding similar structures. This suggests possible correlations between alternative splicing and protein structure. To support the investigation of such relationships, we have developed the Alternative Splicing and Protein Structure Scrutinizer (PASS), a Web application to automatically extract, integrate and analyze human alternative splicing and protein structure data sparsely available in the Alternative Splicing Database, the Ensembl databank and the Protein Data Bank. Primary data from these databases have been integrated and analyzed using the Protein Identifier Cross-Reference, BLAST, CLUSTALW and FeatureMap3D software tools. Results: A database has been developed to store the primary data and the results of their analysis; a system of Perl scripts has been implemented to automatically create and update the database and analyze the integrated data; a Web interface has been implemented to make the analyses easily accessible; and a further database has been created to manage user access to the PASS Web application and store users' data and searches. Conclusion: PASS automatically integrates data from the Alternative Splicing Database with protein structure data from the Protein Data Bank. Additionally, it comprehensively analyzes the integrated data with well-known, publicly available bioinformatics tools in order to generate structural information on isoform pairs. Further analysis of this information might reveal interesting relationships between alternative splicing and protein structure differences, which may be significantly associated with different functions.
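    As a rough illustration of the kind of integration step described above, the sketch below joins isoforms to PDB chains via tabular BLAST hits and emits same-gene isoform pairs that both map to a structure. The file format (BLAST -outfmt 6: query, subject, % identity, ...), the identity threshold and the pairing rule are our illustrative assumptions; PASS itself is implemented as Perl scripts over its own database schema.

```python
# Hypothetical sketch: pair same-gene isoforms that both have a BLAST-mapped
# PDB chain. Not PASS code; field names and threshold are assumptions.
import csv
from collections import defaultdict
from itertools import combinations

def isoform_pairs_with_structures(blast_tsv, gene_of_isoform, min_identity=95.0):
    hits = defaultdict(set)                  # isoform ID -> PDB chain IDs
    with open(blast_tsv) as fh:
        for query, subject, identity, *rest in csv.reader(fh, delimiter="\t"):
            if float(identity) >= min_identity:
                hits[query].add(subject)
    by_gene = defaultdict(list)
    for isoform, gene in gene_of_isoform.items():
        by_gene[gene].append(isoform)
    return [(a, b) for isoforms in by_gene.values()
            for a, b in combinations(isoforms, 2)
            if hits[a] and hits[b]]          # both isoforms map to a structure
```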

    Green economic growth from a developmental perspective


    Strong Interactions of Single Atoms and Photons near a Dielectric Boundary

    Modern research in optical physics has achieved quantum control of strong interactions between a single atom and one photon within the setting of cavity quantum electrodynamics (cQED). However, to move beyond current proof-of-principle experiments involving one or two conventional optical cavities to more complex scalable systems that employ N >> 1 microscopic resonators requires the localization of individual atoms on distance scales < 100 nm from a resonator's surface. In this regime an atom can be strongly coupled to a single intracavity photon while at the same time experiencing significant radiative interactions with the dielectric boundaries of the resonator. Here, we report an initial step into this new regime of cQED by way of real-time detection and high-bandwidth feedback to select and monitor single Cesium atoms localized ~100 nm from the surface of a micro-toroidal optical resonator. We employ strong radiative interactions of atom and cavity field to probe atomic motion through the evanescent field of the resonator. Direct temporal and spectral measurements reveal both the significant role of Casimir-Polder attraction and the manifestly quantum nature of the atom-cavity dynamics. Our work sets the stage for trapping atoms near micro- and nanoscopic optical resonators for applications in quantum information science, including the creation of scalable quantum networks composed of many atom-cavity systems that interact via coherent exchanges of single photons. Comment: 8 pages, 5 figures, Supplemental Information included as ancillary file

    Contrast-enhanced computed tomography assessment of aortic stenosis

    Objectives: Non-contrast CT aortic valve calcium scoring ignores the contribution of valvular fibrosis in aortic stenosis. We assessed aortic valve calcific and non-calcific disease using contrast-enhanced CT. Methods: This was a post hoc analysis of 164 patients (median age 71 (IQR 66-77) years, 78% male) with aortic stenosis (41 mild, 89 moderate, 34 severe; 7% bicuspid) who underwent echocardiography and contrast-enhanced CT as part of imaging studies. Calcific and non-calcific (fibrosis) valve tissue volumes were quantified and indexed to annulus area, using Hounsfield unit thresholds calibrated against blood pool radiodensity. The fibrocalcific ratio assessed the relative contributions of valve fibrosis and calcification. The fibrocalcific volume (the sum of the indexed non-calcific and calcific volumes) was compared with peak aortic jet velocity and, in a subgroup, with histology and ex vivo valve weight. Results: Contrast-enhanced CT calcium volumes correlated with the CT calcium score (r=0.80, p<0.001) and peak aortic jet velocity (r=0.55, p<0.001). The fibrocalcific ratio decreased with increasing aortic stenosis severity (mild: 1.29 (0.98-2.38), moderate: 0.87 (1.48-1.72), severe: 0.47 (0.33-0.78), p<0.001), while the fibrocalcific volume increased (mild: 109 (75-150), moderate: 191 (117-253), severe: 274 (213-344) mm^3/cm^2). Fibrocalcific volume correlated with ex vivo valve weight (r=0.72, p<0.001). Compared with the Agatston score, fibrocalcific volume demonstrated a better correlation with peak aortic jet velocity (r=0.59 and r=0.67, respectively), particularly in females (r=0.38 and r=0.72, respectively). Conclusions: Contrast-enhanced CT assessment of aortic valve calcific and non-calcific volumes correlates with aortic stenosis severity and may be preferable to non-contrast CT when fibrosis is a significant contributor to valve obstruction.
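    For clarity, the derived quantities reduce to simple arithmetic. The sketch below follows the definitions in the abstract (volumes indexed to annulus area; the ratio taken as fibrosis over calcium, consistent with the ratio falling as calcification increases); the example numbers are illustrative, not study data.

```python
# Fibrocalcific metrics per the abstract's definitions; illustrative values.
def indexed(volume_mm3, annulus_area_cm2):
    return volume_mm3 / annulus_area_cm2                  # mm^3 / cm^2

def fibrocalcific_ratio(noncalcific_mm3, calcific_mm3):
    return noncalcific_mm3 / calcific_mm3                 # dimensionless

def fibrocalcific_volume(noncalcific_mm3, calcific_mm3, annulus_area_cm2):
    return indexed(noncalcific_mm3 + calcific_mm3, annulus_area_cm2)

# Hypothetical valve: 600 mm^3 fibrosis, 900 mm^3 calcium, 5 cm^2 annulus.
print(round(fibrocalcific_ratio(600, 900), 2))            # 0.67: calcium-dominant
print(fibrocalcific_volume(600, 900, 5))                  # 300.0 mm^3/cm^2
```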

    Bianchi Type-II String Cosmological Models in Normal Gauge for Lyra's Manifold with Constant Deceleration Parameter

    The present study deals with spatially homogeneous and anisotropic Bianchi type-II cosmological models representing massive strings in the normal gauge for Lyra's manifold, obtained by applying a variation law for the generalized Hubble parameter that yields a constant value of the deceleration parameter. The variation law for the Hubble parameter generates two types of solutions for the average scale factor: one of power-law type and the other of exponential form. Using these two forms, Einstein's modified field equations are solved separately, corresponding to expanding singular and non-singular models of the universe, respectively. The energy-momentum tensor for such strings, as formulated by Letelier (1983), is used to construct massive string cosmological models, for which we assume that the expansion ($\theta$) in the model is proportional to the component $\sigma^{1}_{~1}$ of the shear tensor $\sigma^{j}_{i}$. This condition leads to $A = (BC)^{m}$, where $A$, $B$ and $C$ are the metric coefficients and $m$ is a proportionality constant. Our models are in an accelerating phase, which is consistent with recent observations. It is found that the displacement vector $\beta$ behaves like the cosmological term $\Lambda$ in the normal gauge treatment, and the solutions are consistent with recent observations of SNe Ia. It is also found that massive strings dominate in the decelerating universe, whereas strings dominate in the accelerating universe. The physical and geometric behaviour of these models is also discussed. Comment: 24 pages, 10 figures
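    For reference, the standard variation law of this type (Berman's law) is presumably the one intended; a brief reconstruction, with $D$, $n$, $c_1$, $c_2$ being our notation rather than the paper's:

```latex
% Constant-deceleration-parameter law: H = D a^{-n} integrates to two branches.
H \equiv \frac{\dot a}{a} = D\,a^{-n}
  \quad\Longrightarrow\quad
  a(t) =
  \begin{cases}
    (nDt + c_1)^{1/n}, & n \neq 0 \ \text{(power-law, singular)}\\[2pt]
    c_2\,e^{Dt},       & n = 0 \ \text{(exponential, non-singular)}
  \end{cases}
\qquad
q \equiv -\frac{a\,\ddot a}{\dot a^{2}} = n - 1 = \text{const.}
```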