    Improved Combinatorial Group Testing Algorithms for Real-World Problem Sizes

    We study practically efficient methods for performing combinatorial group testing. We present efficient non-adaptive and two-stage combinatorial group testing algorithms, which identify the at most d items out of a given set of n items that are defective, using fewer tests for all practical set sizes. For example, our two-stage algorithm matches the information-theoretic lower bound for the number of tests in a combinatorial group testing regimen. Comment: 18 pages; an abbreviated version of this paper is to appear at the 9th Workshop on Algorithms and Data Structures.
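
    The information-theoretic lower bound mentioned in the abstract is not spelled out there; assuming the standard counting argument, a scheme needs at least ceil(log2 of the number of possible defective sets) binary tests, since the test outcomes must single out one candidate set. A minimal Python sketch of that bound follows (the function name and example values are ours, not the paper's):

        from math import ceil, comb, log2

        def group_testing_lower_bound(n: int, d: int) -> int:
            """Information-theoretic lower bound on the number of binary tests
            needed to identify at most d defective items among n items: the
            test outcomes must distinguish every possible defective set."""
            candidate_sets = sum(comb(n, i) for i in range(d + 1))
            return ceil(log2(candidate_sets))

        # Example: n = 1000 items with at most d = 5 defectives
        print(group_testing_lower_bound(1000, 5))  # -> 43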

    Examination of restenosis following vascular interventions in clinical and experimental studies

    Restenosis following endovascular interventions is the main limitation of their long-term success. The incidence of restenosis varies according to the method (stenting, endarterectomy) and the treated vascular region, but the pathomechanism and risk factors are similar. The current article reviews the author's previous studies in this field. In clinical studies, we compared the restenosis rate after carotid artery stenting and carotid endarterectomy. We also analyzed the complement activation profile after these interventions. In another study, we investigated the role of two polymorphisms of the estrogen receptor alpha in the occurrence of carotid restenosis after either carotid artery stenting or carotid endarterectomy. In an animal model of carotid endarterectomy, we studied the role of nitric oxide-cyclic guanosine monophosphate signaling and the effect of phosphodiesterase-5 inhibitor therapy in neointimal hyperplasia. Our results suggest that the higher incidence of restenosis following carotid endarterectomy correlates with the more pronounced complement activation after this type of carotid intervention. Polymorphisms in the estrogen receptor alpha gene could contribute to restenosis formation, especially in women. Neointimal hyperplasia can be attenuated by increased cyclic guanosine monophosphate signaling.

    Comb and Branch‐on‐Branch Model Polystyrenes with Exceptionally High Strain Hardening Factor SHF > 1000 and Their Impact on Physical Foaming

    The influence of topology on strain hardening in uniaxial elongation is investigated using monodisperse comb and dendrigraft model polystyrenes (PS) synthesized via living anionic polymerization. A backbone with a molecular weight of M_{w,bb} = 310 kg mol^{-1} is used for all materials, while 100 short chain branches (SCB, M_{w,scb} = 15 kg mol^{-1}) or long chain branches (LCB, M_{w,lcb} = 40 kg mol^{-1}) are grafted onto the backbone. The synthesized LCB comb serves as precursor for the dendrigraft-type branch-on-branch (bob) structures, to which a second generation of branches (SCB, M_{w,scb} ≈ 14 kg mol^{-1}) is added, varied in number from 120 to 460. The SCB and LCB combs achieve remarkable strain hardening factors (SHF) of around 200 at strain rates greater than 0.1 s^{-1}. In contrast, the bob PS reach exceptionally high SHF of 1750 at very low strain rates of 0.005 s^{-1}, using a tilted sample placement to extend the maximum Hencky strain from 4 to 6. To the best of the authors' knowledge, SHF this high have never been reported for polymer melts. Furthermore, batch foaming with CO_{2} is investigated and the volume expansions of the resulting polymer foams are correlated with the uniaxial elongational properties.
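
    The abstract does not define the strain hardening factor; assuming the conventional definition used in elongational rheology, SHF is the ratio of the measured transient uniaxial elongational viscosity to its linear viscoelastic prediction. A LaTeX sketch of that assumed definition (not quoted from the paper):

        % Conventional definition of the strain hardening factor (assumed):
        \mathrm{SHF}(t,\dot{\varepsilon}_0) =
            \frac{\eta_E^{+}(t,\dot{\varepsilon}_0)}{3\,\eta^{+}(t)}
        % \eta_E^{+}: transient uniaxial elongational viscosity at Hencky
        % strain rate \dot{\varepsilon}_0; 3\eta^{+}(t): linear viscoelastic
        % envelope (Trouton ratio of 3 for uniaxial elongation).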

    An Efficient Data Structure for Dynamic Two-Dimensional Reconfiguration

    In the presence of dynamic insertions and deletions into a partially reconfigurable FPGA, fragmentation is unavoidable. This poses the challenge of developing efficient approaches to dynamic defragmentation and reallocation. One key aspect is to develop efficient algorithms and data structures that exploit the two-dimensional geometry of a chip, instead of just one dimension. We propose a new method for this task, based on the fractal structure of a quadtree, which allows dynamic segmentation of the chip area, along with dynamic adjustment of the necessary communication infrastructure. We describe a number of algorithmic aspects and present different solutions. We also provide a number of basic simulations that indicate that the theoretical worst-case bound may be pessimistic. Comment: 11 pages, 12 figures; full version of extended abstract that appeared in ARCS 201
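
    To illustrate the kind of quadtree-based segmentation the abstract refers to, here is a minimal, hypothetical Python sketch of an occupancy quadtree that places square modules on a 2^k x 2^k chip area; the paper's actual data structure, which also maintains the communication infrastructure and supports defragmentation, is considerably richer:

        class QuadNode:
            """Minimal occupancy quadtree over a 2^k x 2^k chip area
            (hypothetical sketch only)."""

            def __init__(self, x, y, size):
                self.x, self.y, self.size = x, y, size
                self.occupied = False   # node fully claimed by one module
                self.children = None    # four sub-quadrants once split

            def allocate(self, size):
                """Place a size x size module (size a power of two); return its
                lower-left corner (x, y), or None if no aligned free square exists."""
                if self.occupied or size > self.size:
                    return None
                if self.children is None:
                    if size == self.size:   # exact fit: claim this free node
                        self.occupied = True
                        return (self.x, self.y)
                    half = self.size // 2   # lazily split into four quadrants
                    self.children = [QuadNode(self.x + dx, self.y + dy, half)
                                     for dx in (0, half) for dy in (0, half)]
                for child in self.children:
                    spot = child.allocate(size)
                    if spot is not None:
                        return spot
                return None

        # Example (hypothetical 64 x 64 chip):
        chip = QuadNode(0, 0, 64)
        print(chip.allocate(16))   # -> (0, 0)
        print(chip.allocate(32))   # -> (0, 32), the next free 32 x 32 quadrant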

    From the zero-field metal-insulator transition in two dimensions to the quantum Hall transition: a percolation-effective-medium theory

    Effective-medium theory is applied to the percolation description of the metal-insulator transition in two dimensions, with emphasis on the continuous connection between the zero-magnetic-field transition and the quantum Hall transition. In this model the system consists of puddles connected via saddle points, and quantum coherence is lost inside the puddles. The effective conductance of the network is calculated by integrating over the distribution of conductances, leading to a determination of the magnetic field dependence of the critical density. Excellent quantitative agreement is obtained with the experimental data, which allows an estimate of the physical parameters of the puddles.
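
    The abstract does not reproduce the effective-medium equation it uses; assuming the standard single-bond effective-medium approximation for a random resistor network with bond-conductance distribution P(g) on a lattice of coordination number z (z = 4 for a square lattice), the effective conductance g_eff solves the self-consistency condition sketched below in LaTeX (an assumption, not necessarily the paper's exact formulation):

        % Standard effective-medium self-consistency condition (assumed):
        \int_{0}^{\infty} P(g)\,
            \frac{g_{\mathrm{eff}} - g}{g + \left(\frac{z}{2} - 1\right) g_{\mathrm{eff}}}
            \,\mathrm{d}g = 0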

    Autonomous decision-making against induced seismicity in deep fluid injections

    The rise in the frequency of anthropogenic earthquakes due to deep fluid injections is posing serious economic, societal, and legal challenges to geo-energy and waste-disposal projects. We propose an actuarial approach to mitigate this risk, first by defining an autonomous decision-making process based on an adaptive traffic light system (ATLS) to stop risky injections, and second by quantifying a "cost of public safety" based on the probability of an injection well being abandoned. The statistical model underlying the ATLS is first confirmed to be representative of injection-induced seismicity, with examples taken from past reservoir stimulation experiments (mostly from Enhanced Geothermal Systems, EGS). Then the decision strategy is formalized: being integrable, the model yields a closed-form ATLS solution that maps a risk-based safety standard or norm to an earthquake magnitude not to be exceeded during stimulation. Finally, the EGS levelized cost of electricity (LCOE) is reformulated in terms of null expectation, with the cost of an abandoned injection well included. We find that the price increase needed to mitigate the increased seismic risk in populated areas can counterbalance the heat credit. However, this "public safety cost" disappears if buildings follow earthquake-resistant designs or if a more relaxed risk safety standard or norm is chosen. Comment: 8 pages, 4 figures, conference (International Symposium on Energy Geotechnics, 26-28 September 2018, Lausanne, Switzerland).
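
    As a toy illustration of what an adaptive traffic-light decision rule looks like in code (the safety magnitude and amber margin below are hypothetical placeholders; the paper derives the safety magnitude in closed form from a risk-based norm, which is not reproduced here):

        def atls_decision(observed_magnitudes, m_safety, amber_margin=0.5):
            """Toy adaptive-traffic-light check. m_safety and amber_margin are
            hypothetical placeholders, not the paper's closed-form solution.
            Returns 'green', 'amber' (reduce injection), or 'red' (stop)."""
            m_max = max(observed_magnitudes, default=float("-inf"))
            if m_max >= m_safety:
                return "red"
            if m_max >= m_safety - amber_margin:
                return "amber"
            return "green"

        # Example: induced events of magnitude 0.8, 1.2 and 1.7 against a
        # (hypothetical) safety threshold of magnitude 2.0
        print(atls_decision([0.8, 1.2, 1.7], m_safety=2.0))   # -> amber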

    Some, And Possibly All, Scalar Inferences Are Not Delayed: Evidence For Immediate Pragmatic Enrichment

    Scalar inferences are commonly generated when a speaker uses a weaker expression rather than a stronger alternative; e.g., "John ate some of the apples" implies that he did not eat them all. This article describes a visual-world study investigating how and when perceivers compute these inferences. Participants followed spoken instructions containing the scalar quantifier some, directing them to interact with one of several referential targets (e.g., "Click on the girl who has some of the balloons"). Participants fixated on the target compatible with the implicated meaning of some and avoided a competitor compatible with the literal meaning prior to a disambiguating noun. Further, convergence on the target was as fast for some as for the non-scalar quantifiers none and all. These findings indicate that the scalar inference is computed immediately and is not delayed relative to the literal interpretation of some. It is argued that previous demonstrations that scalar inferences increase processing time are not necessarily due to delays in generating the inference itself, but rather arise because integrating the interpretation of the inference with relevant information in the context may require additional time. With sufficient contextual support, processing delays disappear.