444 research outputs found

    Augmenting graphs to minimize the diameter

    Full text link
    We study the problem of augmenting a weighted graph by inserting edges of bounded total cost while minimizing the diameter of the augmented graph. Our main result is an FPT 4-approximation algorithm for the problem. (Comment: 15 pages, 3 figures.)
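
    As a concrete illustration of the problem statement only (the sketch below is a brute force, not the paper's FPT 4-approximation), the following Python snippet checks every affordable subset of candidate edges and reports the smallest diameter achievable within the cost budget. All names and the tiny example graph are hypothetical.

        from itertools import combinations

        INF = float("inf")

        def diameter(n, weights):
            """All-pairs shortest paths (Floyd-Warshall), then the largest distance."""
            dist = [[0 if i == j else weights.get((i, j), INF) for j in range(n)]
                    for i in range(n)]
            for k in range(n):
                for i in range(n):
                    for j in range(n):
                        if dist[i][k] + dist[k][j] < dist[i][j]:
                            dist[i][j] = dist[i][k] + dist[k][j]
            return max(max(row) for row in dist)

        def min_diameter_augmentation(n, edges, candidates, budget):
            """Brute force: try every subset of candidate edges whose total cost
            fits the budget and keep the smallest resulting diameter.
            `edges` maps undirected pairs to weights; `candidates` to (weight, cost)."""
            base = {}
            for (u, v), w in edges.items():
                base[(u, v)] = base[(v, u)] = w
            best = diameter(n, base)
            cand = list(candidates.items())
            for r in range(1, len(cand) + 1):
                for subset in combinations(cand, r):
                    if sum(cost for _, (_, cost) in subset) > budget:
                        continue
                    aug = dict(base)
                    for (u, v), (w, _) in subset:
                        aug[(u, v)] = aug[(v, u)] = w
                    best = min(best, diameter(n, aug))
            return best

        # Example: a unit-weight path 0-1-2-3; one candidate shortcut 0-3 of weight 1 and cost 1.
        if __name__ == "__main__":
            edges = {(0, 1): 1, (1, 2): 1, (2, 3): 1}
            candidates = {(0, 3): (1, 1)}
            print(min_diameter_augmentation(4, edges, candidates, budget=1))  # prints 2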

    On dualization in products of forests

    Get PDF
    Let P = P1 × ... × Pn be the product of n partially ordered sets, each with an acyclic precedence graph in which either the in-degree or the out-degree of each element is bounded. Given a subset A ⊆ P, it is shown that the set of maximal independent elements of A in P can be incrementally generated in quasi-polynomial time. We discuss some applications in data mining related to this dualization problem.
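
    To make the objects in this abstract concrete, here is a small Python sketch (a brute force over a product of chains, a particularly simple special case, and not the incremental quasi-polynomial algorithm of the paper): an element x is taken to be independent of A if it dominates no element of A componentwise, and the sketch returns the maximal such elements.

        from itertools import product

        def dominates(x, y):
            """Componentwise order on a product of chains: x >= y in every coordinate."""
            return all(xi >= yi for xi, yi in zip(x, y))

        def maximal_independent(chain_sizes, A):
            """Brute force over the product of chains {0, ..., c-1}: keep the points
            that dominate no element of A, then filter to the maximal ones."""
            box = list(product(*(range(c) for c in chain_sizes)))
            independent = [x for x in box if not any(dominates(x, a) for a in A)]
            return [x for x in independent
                    if not any(y != x and dominates(y, x) for y in independent)]

        # Example: two chains of size 3 and A = {(1, 2), (2, 1)}.
        if __name__ == "__main__":
            print(maximal_independent([3, 3], [(1, 2), (2, 1)]))  # [(0, 2), (1, 1), (2, 0)]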

    13-Series resolvins mediate the leukocyte-platelet actions of atorvastatin and pravastatin in inflammatory arthritis

    Get PDF
    This work was supported by funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program (Grant 677542), a Sir Henry Dale Fellowship jointly funded by the Wellcome Trust and the Royal Society (Grant 107613/Z/15/Z), and the Barts Charity (Grant MGU0343). This work was also funded, in part, by Medical Research Council Advance Course Masters (Grant MR/J015741/1). The authors declare no conflicts of interest.

    An Algorithm for Dualization in Products of Lattices and Its Applications

    Full text link
    Let L = L1 × ... × Ln be the product of n lattices, each of which has bounded width. Given a subset A ⊆ L, we show that the problem of extending a given partial list of maximal independent elements of A in L can be solved in quasi-polynomial time. This result implies, in particular, that the problem of generating all minimal infrequent elements for a database with semi-lattice attributes, and the problem of generating all maximal boxes that contain at most a specified number of points from a given n-dimensional point set, can both be solved in incremental quasi-polynomial time.
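
    One of the applications mentioned above, generating maximal boxes with at most a given number of points, can be illustrated with a brute-force Python sketch. It restricts candidate boxes to those whose bounds come from the observed coordinates (an assumption made to keep the example finite) and is not the paper's incremental algorithm.

        from itertools import combinations_with_replacement, product

        def points_inside(box, pts):
            """Points of `pts` lying inside the closed axis-parallel box ((lo, hi), ...)."""
            return [p for p in pts
                    if all(lo <= x <= hi for x, (lo, hi) in zip(p, box))]

        def maximal_sparse_boxes(pts, k):
            """Brute force: candidate boxes take their bounds from the observed
            coordinates; keep boxes with at most k points that are maximal
            under containment among those candidates."""
            dims = len(pts[0])
            coords = [sorted({p[d] for p in pts}) for d in range(dims)]
            intervals = [list(combinations_with_replacement(coords[d], 2)) for d in range(dims)]
            sparse = [box for box in product(*intervals)
                      if len(points_inside(box, pts)) <= k]

            def contains(big, small):
                return all(bl <= sl and sh <= bh
                           for (bl, bh), (sl, sh) in zip(big, small))

            return [b for b in sparse
                    if not any(c != b and contains(c, b) for c in sparse)]

        # Example: four points in the plane, boxes allowed to contain at most one point.
        if __name__ == "__main__":
            for box in maximal_sparse_boxes([(0, 0), (1, 2), (2, 1), (3, 3)], k=1):
                print(box)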

    Measuring and forecasting progress in education: what about early childhood?

    Get PDF
    A recent Nature article modelled within-country inequalities in primary, secondary, and tertiary education and forecast progress towards Sustainable Development Goal (SDG) targets related to education (SDG 4). However, that article entirely overlooks inequalities in achieving Target 4.2, which aims to achieve universal access to quality early childhood development, care and preschool education by 2030. This is an important omission because of the substantial brain, cognitive and socioemotional development that occurs in early life and because of increasing evidence of early-life learning's large impacts on subsequent education and lifetime wellbeing. We provide an overview of this evidence and use new analyses to illustrate the medium- and long-term implications of early learning, first by presenting associations between pre-primary programme participation and adolescent mathematics and science test scores in 73 countries, and second by estimating the costs of inaction (not making pre-primary programmes universal) in terms of forgone lifetime earnings in 134 countries. We find considerable losses, comparable to or greater than current governmental expenditures on all education (as percentages of GDP), particularly in low- and lower-middle-income countries. In addition to improving primary, secondary and tertiary schooling, we conclude that to attain SDG 4 and reduce inequalities in a post-COVID era, it is essential to prioritize quality early childhood care and education, including adopting policies that support families to promote early learning and their children's education.

    The impact of emotional well-being on long-term recovery and survival in physical illness: a meta-analysis

    Get PDF
    This meta-analysis synthesized studies on emotional well-being as a predictor of the prognosis of physical illness, while also evaluating the impact of putative moderators, namely constructs of well-being, health-related outcome, year of publication, follow-up time and methodological quality of the included studies. The search of reference lists and electronic databases (Medline and PsycInfo) identified 17 eligible studies examining the impact of general well-being, positive affect and life satisfaction on recovery and survival in physically ill patients. Meta-analytically combining these studies revealed a likelihood ratio of 1.14, indicating a small but significant effect: higher levels of emotional well-being are beneficial for recovery and survival in physically ill patients. The findings show that emotional well-being predicts the long-term prognosis of physical illness. This suggests that enhancement of emotional well-being may improve the prognosis of physical illness, which should be investigated by future research.
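
    As a purely generic illustration of what "meta-analytically combining" studies can look like (the paper's actual pooling procedure is not described here, so the fixed-effect inverse-variance scheme and all numbers below are assumptions), a ratio-type effect can be pooled on the log scale as follows.

        import math

        def fixed_effect_pool(effects, variances):
            """Generic inverse-variance (fixed-effect) pooling of ratio-type effects:
            log-transform, weight each study by 1/variance, and return the pooled
            effect (back-transformed) with its standard error on the log scale."""
            logs = [math.log(e) for e in effects]
            weights = [1.0 / v for v in variances]
            pooled_log = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
            se = math.sqrt(1.0 / sum(weights))
            return math.exp(pooled_log), se

        # Hypothetical inputs: three studies reporting ratio-type effects,
        # with variances of the log effects (illustrative numbers only).
        if __name__ == "__main__":
            pooled, se = fixed_effect_pool([1.10, 1.20, 1.12], [0.010, 0.020, 0.015])
            print(round(pooled, 3), round(se, 3))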

    Longest Increasing Subsequence under Persistent Comparison Errors

    Full text link
    We study the problem of computing a longest increasing subsequence in a sequence S of n distinct elements in the presence of persistent comparison errors. In this model, every comparison between two elements can return the wrong result with some fixed (small) probability p, and comparisons cannot be repeated. Computing the longest increasing subsequence exactly is impossible in this model; therefore, the objective is to identify a subsequence that (i) is indeed increasing and (ii) has a length that approximates the length of the longest increasing subsequence. We present asymptotically tight upper and lower bounds on both the approximation factor and the running time. In particular, we present an algorithm that computes an O(log n)-approximation in time O(n log n), with high probability. This approximation relies on the fact that we can approximately sort n elements in O(n log n) time such that the maximum dislocation of an element is at most O(log n). For the lower bounds, we prove that (i) there is a set of sequences such that, on a sequence picked randomly from this set, every algorithm must return an Ω(log n)-approximation with high probability, and (ii) any O(log n)-approximation algorithm for longest increasing subsequence requires Ω(n log n) comparisons, even in the absence of errors.
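
    For reference only: the abstract's algorithm works through approximate sorting in the persistent-error model, which is not reproduced here. The Python sketch below instead shows the standard error-free baseline it is measured against, an exact O(n log n) longest increasing subsequence via patience sorting.

        from bisect import bisect_left

        def longest_increasing_subsequence(seq):
            """Classic O(n log n) patience sorting: tails[k] is the smallest possible
            tail value of an increasing subsequence of length k + 1; parent links
            allow reconstructing one optimal subsequence at the end."""
            tails, tails_idx, parent = [], [], [None] * len(seq)
            for i, x in enumerate(seq):
                k = bisect_left(tails, x)
                if k == len(tails):
                    tails.append(x)
                    tails_idx.append(i)
                else:
                    tails[k] = x
                    tails_idx[k] = i
                parent[i] = tails_idx[k - 1] if k > 0 else None
            out, i = [], tails_idx[-1] if tails_idx else None
            while i is not None:
                out.append(seq[i])
                i = parent[i]
            return out[::-1]

        if __name__ == "__main__":
            print(longest_increasing_subsequence([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 4, 5, 6]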