
    On Approximating the Number of k-cliques in Sublinear Time

    We study the problem of approximating the number of $k$-cliques in a graph when given query access to the graph. We consider the standard query model for general graphs via (1) degree queries, (2) neighbor queries and (3) pair queries. Let $n$ denote the number of vertices in the graph, $m$ the number of edges, and $C_k$ the number of $k$-cliques. We design an algorithm that outputs a $(1+\varepsilon)$-approximation (with high probability) for $C_k$, whose expected query complexity and running time are $O\left(\frac{n}{C_k^{1/k}}+\frac{m^{k/2}}{C_k}\right)\cdot \mathrm{poly}(\log n, 1/\varepsilon, k)$. Hence, the complexity of the algorithm is sublinear in the size of the graph for $C_k = \omega(m^{k/2-1})$. Furthermore, we prove a lower bound showing that the query complexity of our algorithm is essentially optimal (up to the dependence on $\log n$, $1/\varepsilon$ and $k$). The previous results in this vein are by Feige (SICOMP 06) and by Goldreich and Ron (RSA 08) for edge counting ($k=2$) and by Eden et al. (FOCS 2015) for triangle counting ($k=3$). Our result matches the complexities of these results. The previous result by Eden et al. hinges on a certain amortization technique that works only for triangle counting and does not generalize to larger cliques. We obtain a general algorithm that works for any $k \geq 3$ by designing a procedure that samples each $k$-clique incident to a given set $S$ of vertices with approximately equal probability. The primary difficulty is in finding cliques incident to purely high-degree vertices, since random sampling within neighbors has a low success probability. This is achieved by an algorithm that samples uniform random high-degree vertices and by a careful tradeoff between estimating cliques incident purely to high-degree vertices and those that include a low-degree vertex.
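    To make the query model concrete, here is a minimal Python sketch that hides a toy in-memory graph behind the three query types and pairs it with a naive uniform-sampling estimator for $C_k$. The estimator is only an illustrative baseline (its sample complexity is not sublinear) and is not the paper's algorithm; the class, function names and example graph are our own.

```python
import itertools
import math
import random

class GraphOracle:
    """Query access to a graph: degree, neighbor, and pair queries only."""

    def __init__(self, adjacency):
        # adjacency: dict mapping vertex -> sorted list of neighbors
        self._adj = adjacency
        self._sets = {v: set(ns) for v, ns in adjacency.items()}

    def degree(self, v):          # (1) degree query
        return len(self._adj[v])

    def neighbor(self, v, i):     # (2) neighbor query: i-th neighbor of v
        return self._adj[v][i]

    def pair(self, u, v):         # (3) pair query: is (u, v) an edge?
        return v in self._sets[u]

def naive_clique_estimate(oracle, vertices, k, samples=20000):
    """Unbiased (but not sublinear) estimate of C_k: sample k-subsets of
    vertices uniformly, test them with pair queries, rescale by C(n, k)."""
    n = len(vertices)
    hits = 0
    for _ in range(samples):
        s = random.sample(vertices, k)
        if all(oracle.pair(u, v) for u, v in itertools.combinations(s, 2)):
            hits += 1
    return hits / samples * math.comb(n, k)

if __name__ == "__main__":
    # Tiny example: K_5 plus a disjoint edge; the true triangle count is 10.
    adj = {0: [1, 2, 3, 4], 1: [0, 2, 3, 4], 2: [0, 1, 3, 4],
           3: [0, 1, 2, 4], 4: [0, 1, 2, 3], 5: [6], 6: [5]}
    oracle = GraphOracle(adj)
    print(naive_clique_estimate(oracle, list(adj), k=3))
```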

    A Method to Identify and Analyze Biological Programs through Automated Reasoning.

    Predictive biology is elusive because rigorous, data-constrained, mechanistic models of complex biological systems are difficult to derive and validate. Current approaches tend to construct and examine either static interaction network models, which are descriptively rich but often lack explanatory and predictive power, or dynamic models that can be simulated to reproduce known behavior. However, such approaches introduce implicit assumptions, since typically only one mechanism is considered, and exhaustively investigating all scenarios by simulation is impractical. To address these limitations, we present a methodology based on automated formal reasoning, which permits the synthesis and analysis of the complete set of logical models consistent with experimental observations. We test hypotheses against all candidate models, and remove the need for simulation by characterizing and simultaneously analyzing all mechanistic explanations of observed behavior. Our methodology transforms knowledge of complex biological processes from sets of possible interactions and experimental observations into precise, predictive biological programs governing cell function.
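    As a toy illustration of the synthesize-and-filter idea (not the authors' automated-reasoning tool), the sketch below enumerates every combination of candidate Boolean update rules for a two-gene network and keeps exactly those models whose synchronous dynamics reproduce an observed stable state. The genes, candidate rules and observation are invented for the example.

```python
import itertools

# Candidate Boolean update rules for two genes; a "model" picks one rule per gene.
CANDIDATE_RULES = {
    "A": [lambda s: s["A"], lambda s: not s["B"], lambda s: s["A"] or s["B"]],
    "B": [lambda s: s["A"], lambda s: s["B"], lambda s: s["A"] and s["B"]],
}

# Hypothetical observation: from A=1, B=0 the system reaches the stable state A=1, B=1.
INITIAL = {"A": True, "B": False}
EXPECTED_FIXPOINT = {"A": True, "B": True}

def step(state, model):
    """Synchronous update: apply every gene's rule to the current state."""
    return {gene: bool(rule(state)) for gene, rule in model.items()}

def reaches_fixpoint(model, start, target, max_steps=10):
    """Does the model's trajectory from `start` settle into `target`?"""
    state = dict(start)
    for _ in range(max_steps):
        nxt = step(state, model)
        if nxt == state:                  # reached a fixed point
            return state == target
        state = nxt
    return False

genes = list(CANDIDATE_RULES)
consistent = []
for choice in itertools.product(*(CANDIDATE_RULES[g] for g in genes)):
    model = dict(zip(genes, choice))
    if reaches_fixpoint(model, INITIAL, EXPECTED_FIXPOINT):
        consistent.append(model)

print(f"{len(consistent)} of 9 candidate models are consistent with the observation")
```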

    IST Austria Technical Report

    We study algorithmic questions for concurrent systems where the transitions are labeled from a complete, closed semiring and path properties are algebraic with semiring operations. Algebraic path properties can model dataflow analysis problems, the shortest path problem, and many other natural properties that arise in program analysis. We consider that each component of the concurrent system is a graph with constant treewidth, and it is known that the control-flow graphs of most programs have constant treewidth. We allow for multiple possible queries, which arise naturally in demand-driven dataflow analysis problems (e.g., alias analysis). The study of multiple queries allows us to consider the tradeoff between the resource usage of the \emph{one-time} preprocessing and of \emph{each individual} query. The traditional approaches construct the product graph of all components and apply the best-known graph algorithm on the product. In the traditional approach, even the answer to a single query requires the transitive closure computation (i.e., the results of all possible queries), which leaves no room for a tradeoff between preprocessing and query time. Our main contributions are algorithms that significantly improve the worst-case running time of the traditional approach and provide various tradeoffs depending on the number of queries. For example, in a concurrent system of two components, the traditional approach requires hexic time in the worst case to answer one query as well as to compute the transitive closure, whereas we show that with one-time preprocessing in almost cubic time, each subsequent query can be answered in at most linear time, and even the transitive closure can be computed in almost quartic time. Furthermore, we establish conditional optimality results showing that the worst-case running times of our algorithms cannot be improved without achieving major breakthroughs in graph algorithms (such as improving the worst-case bounds for the shortest path problem in general graphs, whose current best-known bound has not been improved in five decades). Finally, we provide a prototype implementation of our algorithms which significantly outperforms the existing algorithmic methods on several benchmarks.
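    The notion of an algebraic path property over a semiring can be made concrete with a small sketch: a generic Floyd-Warshall-style closure parameterized by a semiring, instantiated here with the tropical (min, +) semiring so that the closure yields all-pairs shortest paths. This only illustrates the class of problems being solved; it does not implement the paper's treewidth-based preprocessing/query tradeoffs, and all names are our own.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Semiring:
    """A semiring: 'plus' combines alternative paths, 'times' concatenates
    path labels, with respective identities 'zero' and 'one'."""
    plus: Callable[[float, float], float]
    times: Callable[[float, float], float]
    zero: float
    one: float

# Tropical semiring: plus = min, times = +; its algebraic closure is shortest paths.
TROPICAL = Semiring(plus=min, times=lambda a, b: a + b,
                    zero=float("inf"), one=0.0)

def algebraic_closure(weights: List[List[float]], sr: Semiring) -> List[List[float]]:
    """Floyd-Warshall-style closure over a semiring.  With the tropical
    semiring this computes all-pairs shortest paths; with a dataflow
    semiring the same loop computes a meet-over-all-paths solution."""
    n = len(weights)
    d = [[weights[i][j] for j in range(n)] for i in range(n)]
    for i in range(n):
        d[i][i] = sr.plus(d[i][i], sr.one)      # account for the empty path
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = sr.plus(d[i][j], sr.times(d[i][k], d[k][j]))
    return d

if __name__ == "__main__":
    INF = float("inf")
    w = [[INF, 3.0, INF],
         [INF, INF, 2.0],
         [1.0, INF, INF]]
    print(algebraic_closure(w, TROPICAL))   # e.g. distance from 0 to 2 is 5.0
```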

    Fire design method for concrete filled tubular columns based on equivalent concrete core cross-section

    In this work, a method for realistic cross-sectional temperature prediction and a simplified fire design method for circular concrete filled tubular (CFT) columns under axial load are presented. There is a general lack of simple proposals for computing the cross-sectional temperature field of CFT columns when their fire resistance is evaluated. Even Eurocode 4 Part 1-2, which provides one of the most used fire design methods for composite columns, does not give designers any indications for computing the cross-sectional temperatures. Given the clear need for such a method, this paper provides a set of equations for computing the temperature distribution of circular CFT columns filled with normal strength concrete. First, a finite differences thermal model is presented and satisfactorily validated against experimental results for any type of concrete infill. This model considers the gap at the steel-concrete interface, the moisture content in the concrete, and the temperature-dependent properties of both materials. Using this model, a thermal parametric analysis is executed, and from the corresponding statistical analysis of the data generated, the practical expressions are derived. The second part of the paper deals with the development of a fire design method for axially loaded CFT columns based on the general rules established in EN 1994-1-1 and employing the concept of a room temperature equivalent concrete core cross-section. In order to propose simple equations, a multiple nonlinear regression analysis is made with the numerical results generated through a thermo-mechanical parametric analysis. Once more, predicted results are compared to experimental values, giving reasonable accuracy and slightly safe results. The authors would like to express their sincere gratitude to the Spanish Ministry of Economy and Competitivity for the help provided through the project BIA2012-33144, and to the European Community for the FEDER funds.
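    As a rough illustration of the kind of finite-difference thermal computation the paper builds on, the sketch below advances the 1D radial heat equation in a circular concrete core with an explicit scheme, imposing the ISO-834 gas temperature directly at the surface. Constant material properties and that crude boundary condition are simplifying assumptions of this sketch only; the paper's validated model additionally accounts for the steel tube, the steel-concrete gap, moisture and temperature-dependent properties.

```python
import math

# Sketch assumptions: constant concrete properties, prescribed surface
# temperature equal to the ISO-834 gas temperature, no steel tube or gap.

ALPHA = 7.0e-7        # assumed thermal diffusivity of concrete, m^2/s
RADIUS = 0.15         # core radius, m
N = 31                # radial nodes, node 0 at the centre
DR = RADIUS / (N - 1)
DT = 5.0              # s, satisfies the explicit stability limit ALPHA*DT/DR**2 <= 0.25

def iso834(t_seconds):
    """ISO-834 standard fire gas temperature in deg C (t in seconds)."""
    t_min = t_seconds / 60.0
    return 20.0 + 345.0 * math.log10(8.0 * t_min + 1.0)

def step(T, t):
    new = T[:]
    # Centre node: symmetry gives dT/dt = 2*alpha*d2T/dr2 at r = 0.
    new[0] = T[0] + ALPHA * DT * 4.0 * (T[1] - T[0]) / DR**2
    # Interior nodes: dT/dt = alpha*(d2T/dr2 + (1/r)*dT/dr).
    for i in range(1, N - 1):
        r = i * DR
        d2 = (T[i + 1] - 2.0 * T[i] + T[i - 1]) / DR**2
        d1 = (T[i + 1] - T[i - 1]) / (2.0 * DR)
        new[i] = T[i] + ALPHA * DT * (d2 + d1 / r)
    # Surface node: crude boundary condition, follow the gas temperature.
    new[N - 1] = iso834(t + DT)
    return new

T = [20.0] * N                      # uniform ambient start
t = 0.0
while t < 3600.0:                   # simulate 60 minutes of fire exposure
    T = step(T, t)
    t += DT

print(f"after 60 min: centre {T[0]:.0f} C, mid-radius {T[N // 2]:.0f} C, surface {T[-1]:.0f} C")
```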

    Urea-induced ROS generation causes insulin resistance in mice with chronic renal failure.

    Although supraphysiological concentrations of urea are known to increase oxidative stress in cultured cells, it is generally thought that the elevated levels of urea in chronic renal failure patients have negligible toxicity. We previously demonstrated that ROS increase intracellular protein modification by O-linked β-N-acetylglucosamine (O-GlcNAc), and others showed that increased modification of insulin signaling molecules by O-GlcNAc reduces insulin signal transduction. Because both oxidative stress and insulin resistance have been observed in patients with end-stage renal disease, we sought to determine the role of urea in these phenotypes. Treatment of 3T3-L1 adipocytes with urea at disease-relevant concentrations induced ROS production, caused insulin resistance, increased expression of adipokines retinol binding protein 4 (RBP4) and resistin, and increased O-GlcNAc–modified insulin signaling molecules. Investigation of a mouse model of surgically induced renal failure (uremic mice) revealed increased ROS production, modification of insulin signaling molecules by O-GlcNAc, and increased expression of RBP4 and resistin in visceral adipose tissue. Uremic mice also displayed insulin resistance and glucose intolerance, and treatment with an antioxidant SOD/catalase mimetic normalized these defects. The SOD/catalase mimetic treatment also prevented the development of insulin resistance in normal mice after urea infusion. These data suggest that therapeutic targeting of urea-induced ROS may help reduce the high morbidity and mortality caused by end-stage renal disease.

    On the Fiedler value of large planar graphs

    The Fiedler value $\lambda_2$, also known as algebraic connectivity, is the second smallest Laplacian eigenvalue of a graph. We study the maximum Fiedler value among all planar graphs $G$ with $n$ vertices, denoted by $\lambda_{2\max}$, and we show the bounds $2+\Theta(\frac{1}{n^2}) \leq \lambda_{2\max} \leq 2+O(\frac{1}{n})$. We also provide bounds on the maximum Fiedler value for the following classes of planar graphs: bipartite planar graphs, bipartite planar graphs with minimum vertex degree 3, and outerplanar graphs. Furthermore, we derive almost tight bounds on $\lambda_{2\max}$ for two more classes of graphs, those of bounded genus and $K_h$-minor-free graphs. Comment: 21 pages, 4 figures, 1 table. Version accepted in Linear Algebra and its Applications.
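    For readers who want the definition in executable form, the sketch below builds the Laplacian of a small planar graph (a 4 x 4 grid, chosen arbitrarily for illustration) and reports its second-smallest eigenvalue, i.e. the Fiedler value. It only illustrates the quantity being bounded, not the paper's extremal results.

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A for an undirected graph on n vertices."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    return L

def fiedler_value(n, edges):
    """Second-smallest Laplacian eigenvalue (algebraic connectivity)."""
    eigenvalues = np.linalg.eigvalsh(laplacian(n, edges))  # ascending order
    return eigenvalues[1]

if __name__ == "__main__":
    # Example: a 4 x 4 grid graph, which is planar.
    k = 4
    idx = lambda r, c: r * k + c
    edges = [(idx(r, c), idx(r, c + 1)) for r in range(k) for c in range(k - 1)]
    edges += [(idx(r, c), idx(r + 1, c)) for r in range(k - 1) for c in range(k)]
    print(f"Fiedler value of the 4x4 grid: {fiedler_value(k * k, edges):.4f}")
```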

    IST Austria Technical Report

    We study the problem of developing efficient approaches for proving termination of recursive programs with one-dimensional arrays. Ranking functions serve as a sound and complete approach for proving termination of non-recursive programs without array operations. First, we generalize ranking functions to the notion of measure functions, and prove that measure functions (i) provide a sound method to prove termination of recursive programs (with one-dimensional arrays), and (ii) are both sound and complete over recursive programs without array operations. Our second contribution is the synthesis of measure functions of specific forms in polynomial time. More precisely, we prove that (i) polynomial measure functions over recursive programs can be synthesized in polynomial time through Farkas' Lemma and Handelman's Theorem, and (ii) measure functions involving logarithm and exponentiation can be synthesized in polynomial time through abstraction of logarithmic or exponential terms and Handelman's Theorem. A key application of our method is the worst-case analysis of recursive programs. While previous methods obtain worst-case polynomial bounds of the form $O(n^k)$, where $k$ is an integer, our polynomial-time methods can synthesize bounds of the form $O(n \log n)$, as well as $O(n^x)$, where $x$ is not an integer. We show the applicability of our automated technique by obtaining the worst-case complexity of classical recursive algorithms such as (i) Merge-Sort and the divide-and-conquer algorithm for the Closest-Pair problem, where we obtain an $O(n \log n)$ worst-case bound, and (ii) Karatsuba's algorithm for polynomial multiplication and Strassen's algorithm for matrix multiplication, where we obtain an $O(n^x)$ bound with $x$ not an integer and close to the best-known bounds for the respective algorithms. Finally, we present experimental results to demonstrate the effectiveness of our approach.
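    The following sketch illustrates what a measure function certifies, using Merge-Sort with the measure m(xs) = len(xs): every recursive call asserts that the measure stays non-negative and strictly decreases, which is exactly the well-foundedness argument behind termination. It is a hand-written check of a hand-picked measure; the paper's contribution is synthesizing such functions automatically via Farkas' Lemma and Handelman's Theorem, which this sketch does not attempt.

```python
def merge(left, right):
    """Standard merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

def merge_sort(xs, caller_measure=None):
    """Merge-Sort instrumented with the measure m(xs) = len(xs): each
    recursive call checks that the measure is non-negative and strictly
    smaller than the caller's, which is what a termination certificate
    (a measure/ranking function) demands."""
    m = len(xs)
    assert m >= 0
    if caller_measure is not None:
        assert m < caller_measure, "measure must strictly decrease"
    if m <= 1:
        return xs
    mid = m // 2
    return merge(merge_sort(xs[:mid], m), merge_sort(xs[mid:], m))

print(merge_sort([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]
```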