163 research outputs found

    Hardness of Exact Distance Queries in Sparse Graphs Through Hub Labeling

    A distance labeling scheme is an assignment of bit-labels to the vertices of an undirected, unweighted graph such that the distance between any pair of vertices can be decoded solely from their labels. An important class of distance labeling schemes is that of hub labelings, where a node $v \in G$ stores its distances to a set of so-called hubs $S_v \subseteq V$, chosen so that for any $u,v \in V$ there is some $w \in S_u \cap S_v$ belonging to a shortest $u$-$v$ path. For most graph classes studied to date, the best known distance labeling constructions use a hub labeling scheme at least as a key building block. Our interest lies in hub labelings of sparse graphs, i.e., those with $|E(G)| = O(n)$, for which we show a lower bound of $\frac{n}{2^{O(\sqrt{\log n})}}$ on the average size of the hub sets. Additionally, we give a hub labeling construction for sparse graphs of average size $O(\frac{n}{RS(n)^{c}})$ for some $0 < c < 1$, where $RS(n)$ is the so-called Ruzsa-Szemerédi function, linked to the structure of induced matchings in dense graphs. This implies that further improving the lower bound on hub labeling size to $\frac{n}{2^{(\log n)^{o(1)}}}$ would require a breakthrough in the study of lower bounds on $RS(n)$, which have resisted substantial improvement for the last 70 years. For general distance labelings of sparse graphs, we show a lower bound of $\frac{1}{2^{O(\sqrt{\log n})}} \cdot SumIndex(n)$, where $SumIndex(n)$ is the communication complexity of the Sum-Index problem over $Z_n$. Our results suggest that the best achievable hub-label and distance-label sizes in sparse graphs may be $\Theta(\frac{n}{2^{(\log n)^{c}}})$ for some $0 < c < 1$.
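    The query mechanism of a hub labeling can be illustrated with a minimal sketch (the `query` function and the toy hub sets below are ours for illustration, not from the paper): each vertex stores its distance to each of its hubs, and the cover property guarantees that minimizing the summed distances over common hubs recovers the exact distance.

```python
# Minimal hub-labeling sketch (illustrative; names and hub choices are ours).
# hubs[v] maps each hub w in S_v to dist(v, w). The cover property says some
# common hub of u and v lies on a shortest u-v path, so the minimum of
# dist(u, w) + dist(w, v) over common hubs w is the exact distance.

def query(hubs_u, hubs_v):
    """Exact distance decoded from two hub-label dicts, or None if no common hub."""
    best = None
    for w, du in hubs_u.items():
        dv = hubs_v.get(w)
        if dv is not None and (best is None or du + dv < best):
            best = du + dv
    return best

# Toy path graph 0 - 1 - 2 - 3. Taking S_v = {0, ..., v} is a valid (if
# wasteful) hub set here: for u < v, vertex u itself is a common hub on the
# shortest u-v path.
hubs = {
    0: {0: 0},
    1: {0: 1, 1: 0},
    2: {0: 2, 1: 1, 2: 0},
    3: {0: 3, 1: 2, 2: 1, 3: 0},
}
print(query(hubs[0], hubs[3]))  # 3
print(query(hubs[2], hubs[3]))  # 1
```

The schemes studied in the paper aim to make the hub sets far smaller than this naive choice while preserving the cover property.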

    Growth in solvable subgroups of GL_r(Z/pZ)

    Let $K = Z/pZ$ and let $A$ be a subset of $\GL_r(K)$ such that the group generated by $A$ is solvable. We reduce the study of the growth of $A$ under the group operation to the nilpotent setting. Specifically, we prove that either $A$ grows rapidly (meaning $|A\cdot A\cdot A|\gg |A|^{1+\delta}$), or else there are groups $U_R$ and $S$, with $S/U_R$ nilpotent, such that $A_k\cap S$ is large and $U_R\subseteq A_k$, where $k$ is a bounded integer and $A_k = \{x_1 x_2 \cdots x_k : x_i \in A \cup A^{-1} \cup \{1\}\}$. The implied constants depend only on the rank $r$ of $\GL_r(K)$. When combined with recent work by Pyber and Szabó, the main result of this paper implies that it is possible to draw the same conclusions without supposing that the group generated by $A$ is solvable. Comment: 46 pages. This version includes revisions recommended by an anonymous referee, including, in particular, the statement of a new theorem.

    Hard Instances of the Constrained Discrete Logarithm Problem

    The discrete logarithm problem (DLP) generalizes to the constrained DLP, where the secret exponent $x$ belongs to a set known to the attacker. The complexity of generic algorithms for solving the constrained DLP depends on the choice of this set. Motivated by cryptographic applications, we study sets with succinct representations for which the constrained DLP is hard. We draw on earlier results due to Erdős et al. and Schnorr, develop geometric tools such as a generalized Menelaus' theorem for proving lower bounds on the complexity of the constrained DLP, and construct sets with succinct representations and provable non-trivial lower bounds.
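    The setting can be made concrete with a minimal sketch (the parameters and the exponent set below are toy values of ours, not from the paper): the attacker knows the exponent lies in a set $S$, so a naive generic attack is a search over $S$; the paper's lower bounds concern how much better than this generic algorithms can do for structured, succinctly representable sets.

```python
# Hedged sketch of the constrained DLP: recover x in a known exponent set S
# with g^x = h (mod p), by naive exhaustive search over S. Generic attacks
# can exploit the structure of S; this brute force costs |S| exponentiations.

def constrained_dlog(g, h, p, S):
    """Return some x in S with g^x = h (mod p), or None if no such x exists."""
    for x in S:
        if pow(g, x, p) == h:  # three-argument pow is modular exponentiation
            return x
    return None

p, g = 101, 2        # tiny toy parameters, nowhere near cryptographic size
S = {3, 17, 40, 77}  # a small, succinctly describable exponent set (illustrative)
h = pow(g, 17, p)    # instance with secret exponent x = 17
print(constrained_dlog(g, h, p, S))  # 17
```

For cryptographically sized sets, the question the paper addresses is which succinct sets force every generic algorithm to pay close to this exhaustive cost.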

    Evaluation of contraceptive methods in women with congenital heart disease in Germany, Hungary and Japan

    Aims: For women with congenital heart defects (CHD), pregnancy may pose a health risk. Sexually active women with CHD who do not wish to have children, or for whom pregnancy would imply considerable health risks, require adequate counselling regarding appropriate contraception. This study gathers data on the contraceptive behaviour of women with CHD from three different cultural regions. Methods and results: 634 women with CHD from Germany, Hungary and Japan were surveyed regarding contraception and the contraceptive methods (CM) used. The patients were divided into groups according to criteria such as pregnancy-associated cardiovascular risk or the "safety" of the contraceptive methods used. 59% of the study participants had already gained experience with CM. The average age at first use was 18.4 years; the German patients were significantly younger at first use of a CM than those from Hungary and Japan. Overall, the condom was the method used most (38%), followed by oral contraceptives (30%) and coitus interruptus (11%). The range of CM used in Japan was much smaller than that in Germany or Hungary. Unsafe contraceptives were currently, or had previously been, used by 29% of the surveyed patients (Germany: 25%, Hungary: 37%, Japan: 32%). Conclusion: Most women with CHD use CM. There are differences between the participating countries. Adequate contraceptive counselling of women with CHD requires considering the individual characteristics of each patient, including potential contraindications. When choosing an appropriate CM, both the method's "safety" and the maternal cardiovascular risk are important. © 2015 Elsevier Ireland Ltd.

    The critical window for the classical Ramsey-Turán problem

    The first application of Szemerédi's powerful regularity method was the following celebrated Ramsey-Turán result proved by Szemerédi in 1972: any K_4-free graph on N vertices with independence number o(N) has at most (1/8 + o(1)) N^2 edges. Four years later, Bollobás and Erdős gave a surprising geometric construction, utilizing the isoperimetric inequality for the high-dimensional sphere, of a K_4-free graph on N vertices with independence number o(N) and (1/8 - o(1)) N^2 edges. Starting with Bollobás and Erdős in 1976, several problems have been posed on estimating the minimum possible independence number in the critical window, when the number of edges is about N^2 / 8. These problems have received considerable attention and remained among the main open problems in this area. In this paper, we give nearly best-possible bounds, solving the various open problems concerning this critical window. Comment: 34 pages.

    Parallel Valuation of the EQ-5D-3L and EQ-5D-5L by Time Trade-Off in Hungary

    Objectives: The wording of the Hungarian EQ-5D-3L and EQ-5D-5L descriptive systems differs considerably. This study aimed to (1) develop EQ-5D-3L and EQ-5D-5L value sets for Hungary from a common sample, and (2) compare how level wording affected valuations. Methods: In 2018 to 2019, 1000 respondents, representative of the Hungarian general population, completed composite time trade-off tasks. Pooled heteroscedastic Tobit models were used to estimate value sets. Value set characteristics, single-level transition utilities from adjacent corner health states, and mean transition utilities for all possible health states were compared between the EQ-5D-3L and EQ-5D-5L. Results: Health utilities ranged from -0.865 to 1 for the EQ-5D-3L and from -0.848 to 1 for the EQ-5D-5L. The relative importance of the 5 EQ-5D-5L dimensions was as follows: mobility, pain/discomfort, self-care, anxiety/depression, and usual activities. A similar preference ranking was observed for the EQ-5D-3L, with self-care being more important than pain/discomfort. The EQ-5D-5L demonstrated lower ceiling effects (range of utilities for the mildest states: 0.900-0.958 [3L] vs 0.955-0.965 [5L]) and better consistency of mean transition utilities across the range of the scale. Changing "confined to bed" (3L) to "unable to walk" (5L) had a large positive impact on utilities. Smaller changes with more negative wording in the other dimensions (eg, "very much anxious/feeling down a lot" [3L] vs "extremely anxious/depressed" [5L]) had a modest negative impact on utilities. Conclusion: This study developed value sets of the EQ-5D-3L and EQ-5D-5L for Hungary. Our findings contribute to the understanding of how the wording of descriptive systems affects the estimates of utilities.

    Distal Versus Conventional Radial Access for Coronary Angiography and Intervention: The DISCO RADIAL Trial

    BACKGROUND: Currently, transradial access (TRA) is the recommended access for coronary procedures because of increased safety, with radial artery occlusion (RAO) being its most frequent complication, which will increasingly affect patients undergoing multiple procedures during their lifetimes. Recently, distal radial access (DRA) has emerged as a promising alternative access to minimize RAO risk. A large-scale, international, randomized trial comparing RAO rates between TRA and DRA has been lacking. OBJECTIVES: The aim of this study was to assess the superiority of DRA compared with conventional TRA with respect to forearm RAO. METHODS: DISCO RADIAL (Distal vs Conventional Radial Access) was an international, multicenter, randomized controlled trial in which patients with indications for a percutaneous coronary procedure using a 6-F Slender sheath were randomized to DRA or TRA with systematic implementation of best practices to reduce RAO. The primary endpoint was the incidence of forearm RAO assessed by vascular ultrasound at discharge. Secondary endpoints included crossover, hemostasis time, and access site-related complications. RESULTS: Overall, 657 patients underwent TRA, and 650 patients underwent DRA. Forearm RAO did not differ between groups (0.91% vs 0.31%; P = 0.29). Patent hemostasis was achieved in 94.4% of TRA patients. Crossover rates were higher with DRA (3.5% vs 7.4%; P = 0.002), and median hemostasis time was shorter (180 vs 153 minutes; P < 0.001). Radial artery spasm occurred more often with DRA (2.7% vs 5.4%; P = 0.015). Overall bleeding events and vascular complications did not differ between groups. CONCLUSIONS: With the implementation of a rigorous hemostasis protocol, DRA and TRA have equally low RAO rates. DRA is associated with a higher crossover rate but a shorter hemostasis time. (C) 2022 The Authors. Published by Elsevier on behalf of the American College of Cardiology Foundation.

    Deep Learning Paradigm for Cardiovascular Disease/Stroke Risk Stratification in Parkinson’s Disease Affected by COVID‐19: A Narrative Review

    Background and Motivation: Parkinson's disease (PD) is one of the most serious non-curable diseases and is expensive to treat. Recently, machine learning (ML) has been shown to be able to predict cardiovascular/stroke risk in PD patients. The presence of COVID‐19 causes the ML systems to become severely non‐linear and poses challenges in cardiovascular/stroke risk stratification. Further, due to comorbidity, sample size constraints, and poor scientific and clinical validation techniques, there have been no well‐explained ML paradigms. Deep neural networks are powerful learning machines that generalize non‐linear conditions. This study presents a novel investigation of deep learning (DL) solutions for CVD/stroke risk prediction in PD patients affected by COVID‐19. Method: The PRISMA search strategy was used for the selection of 292 studies closely associated with the effect of PD on CVD risk in the COVID‐19 framework. We study the hypothesis that PD in the presence of COVID‐19 can cause more harm to the heart and brain than in non‐COVID‐19 conditions. COVID‐19 lung damage severity can be used as a covariate during DL training model designs. We therefore propose a DL model for the estimation of (i) COVID‐19 lesions in computed tomography (CT) scans and (ii) combining the covariates of PD, COVID‐19 lesions, office and laboratory arterial atherosclerotic image‐based biomarkers, and medicine usage for PD patients for the design of DL point‐based models for CVD/stroke risk stratification. Results: We validated the feasibility of CVD/stroke risk stratification in PD patients in the presence of COVID‐19. DL architectures such as long short‐term memory (LSTM) and recurrent neural networks (RNN) were studied for CVD/stroke risk stratification, showing powerful designs. Lastly, we examined artificial intelligence bias and provided recommendations for early detection of CVD/stroke in PD patients in the presence of COVID‐19.
Conclusion: DL is a powerful tool for predicting CVD/stroke risk in PD patients affected by COVID‐19. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.

    Economics of Artificial Intelligence in Healthcare: Diagnosis vs. Treatment

    Motivation: The price of medical treatment continues to rise due to (i) an increasing population; (ii) an aging population; (iii) disease prevalence; (iv) a rise in the frequency of patients who utilize health care services; and (v) rising prices. Objective: Artificial Intelligence (AI) is already well-known for its superiority in various healthcare applications, including the segmentation of lesions in images, speech recognition, smartphone personal assistants, navigation, ride-sharing apps, and many more. Our study is based on two hypotheses: (i) AI offers more economical solutions compared to conventional methods; (ii) AI treatment offers stronger economics compared to AI diagnosis. This novel study aims to evaluate AI technology in the context of healthcare costs, namely in the areas of diagnosis and treatment, and then compare it to traditional, non-AI-based approaches. Methodology: PRISMA was used to select the best 200 studies on AI in healthcare with a primary focus on cost reduction, especially in diagnosis and treatment. We defined the diagnosis and treatment architectures, investigated their characteristics, and categorized the roles that AI plays in the diagnostic and therapeutic paradigms. We experimented with various combinations of different assumptions by integrating AI and then comparing the result against conventional costs. Lastly, we discuss several powerful future concepts of AI, namely pruning, bias, explainability, and regulatory approvals of AI systems. Conclusions: The model shows tremendous cost savings using AI tools in diagnosis and treatment. The economics of AI can be improved by incorporating pruning, reduction in AI bias, explainability, and regulatory approvals. © 2022 by the authors.