2,586 research outputs found

    - Use of reclaimed wastewater for irrigating crops: a necessity and a solution. - Influence of kaolin for controlling water stress in olive groves. - Fertigation with localized irrigation: an expanding system (part 2). - Irrigation field logbook: a tool for scheduling. - Deficit irrigation: experience with citrus.

    The behavior of quadratic and differential forms under function field extensions in characteristic two

    Let $F$ be a field of characteristic 2. Let $\Omega^n_F$ be the $F$-space of absolute differential forms over $F$. There is a homomorphism $\wp : \Omega^n_F \to \Omega^n_F / d\Omega^{n-1}_F$ given by $\wp\left(x \, \frac{dx_1}{x_1} \wedge \cdots \wedge \frac{dx_n}{x_n}\right) = (x^2 - x) \, \frac{dx_1}{x_1} \wedge \cdots \wedge \frac{dx_n}{x_n} \bmod d\Omega^{n-1}_F$. Let $H^{n+1}(F) = \operatorname{Coker}(\wp)$. We study the behavior of $H^{n+1}(F)$ under the function field $F(\varphi)/F$, where $\varphi = \langle\langle b_1, \ldots, b_n \rangle\rangle$ is an $n$-fold Pfister form and $F(\varphi)$ is the function field of the quadric $\varphi = 0$ over $F$. We show that $\ker\left(H^{n+1}(F) \to H^{n+1}(F(\varphi))\right) = F \cdot \frac{db_1}{b_1} \wedge \cdots \wedge \frac{db_n}{b_n}$. Using Kato's isomorphism of $H^{n+1}(F)$ with the quotient $I^n W_q(F) / I^{n+1} W_q(F)$, where $W_q(F)$ is the Witt group of quadratic forms over $F$ and $I \subset W(F)$ is the maximal ideal of even-dimensional bilinear forms over $F$, we deduce from the above result the analogue in characteristic 2 of Knebusch's degree conjecture, i.e. $I^n W_q(F)$ is the set of all classes $q$ with $\deg(q) \geqslant n$.
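
    A concrete low-degree instance may help unpack the definitions above (this illustration is not taken from the paper): for $n = 0$ we have $\Omega^0_F = F$ and $d\Omega^{-1}_F = 0$, so $\wp$ specialises to the Artin-Schreier map.

```latex
% Illustrative n = 0 case of the map \wp defined in the abstract above:
% \Omega^0_F = F and d\Omega^{-1}_F = 0, so \wp becomes the Artin--Schreier map
\wp : F \longrightarrow F, \qquad \wp(x) = x^2 - x,
\qquad\text{and}\qquad
H^{1}(F) = \operatorname{Coker}(\wp) = F / \{\, x^2 - x : x \in F \,\}.
```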

    Avoiding Pandemic Fears in the Subway and Conquering the Platypus.

    Metagenomics is increasingly used not just to show patterns of microbial diversity but also as a culture-independent method to detect individual organisms of intense clinical, epidemiological, conservation, forensic, or regulatory interest. A widely reported metagenomic study of the New York subway suggested that the pathogens Yersinia pestis and Bacillus anthracis were part of the "normal subway microbiome." In their article in mSystems, Hsu and collaborators (mSystems 1(3):e00018-16, 2016, http://dx.doi.org/10.1128/mSystems.00018-16) showed that microbial communities on transit surfaces in the Boston subway system are maintained from a metapopulation of human skin commensals and environmental generalists and that reanalysis of the New York subway data with appropriate methods did not detect the pathogens. We note that commonly used software pipelines can produce results that lack prima facie validity (e.g., reporting widespread distribution of notorious endemic species such as the platypus or the presence of pathogens) but that appropriate use of inclusion and exclusion sets can avoid this issue
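
    A minimal sketch of the inclusion/exclusion-set idea mentioned above: a read only counts as evidence for a target organism if it matches the inclusion set (target genomes) strictly better than the exclusion set (near-neighbour genomes that commonly produce spurious hits). Function names, scores and the margin below are hypothetical placeholders, not the pipeline used by Hsu and collaborators.

```python
# Hedged sketch of filtering metagenomic hits with inclusion/exclusion sets.
# All identifiers and the score margin are illustrative, not from the paper.

def confident_hits(read_scores, inclusion, exclusion, margin=5.0):
    """read_scores: {read_id: {genome_id: alignment_score}}.
    Keep reads whose best inclusion-set score beats the best
    exclusion-set score by at least `margin`."""
    confident = []
    for read_id, scores in read_scores.items():
        best_in = max((s for g, s in scores.items() if g in inclusion), default=None)
        best_out = max((s for g, s in scores.items() if g in exclusion), default=float("-inf"))
        if best_in is not None and best_in >= best_out + margin:
            confident.append(read_id)
    return confident

# Toy usage: a read that hits Y. pestis no better than the harmless near
# neighbour Y. pseudotuberculosis is not reported as a pathogen detection.
scores = {
    "read1": {"Y_pestis": 98.0, "Y_pseudotuberculosis": 97.5},
    "read2": {"Y_pestis": 99.0, "Y_pseudotuberculosis": 88.0},
}
print(confident_hits(scores, {"Y_pestis"}, {"Y_pseudotuberculosis"}))  # ['read2']
```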

    A self-adapting latency/power tradeoff model for replicated search engines

    For many search settings, distributed/replicated search engines deploy a large number of machines to ensure efficient retrieval. This paper investigates how the power consumption of a replicated search engine can be automatically reduced when the system has low contention, without compromising its efficiency. We propose a novel self-adapting model to analyse the trade-off between latency and power consumption for distributed search engines. When query volumes are high and there is contention for the resources, the model automatically increases the necessary number of active machines in the system to maintain acceptable query response times. On the other hand, when the load of the system is low and the queries can be served easily, the model is able to reduce the number of active machines, leading to power savings. The model bases its decisions on examining the current and historical query loads of the search engine. Our proposal is formulated as a general dynamic decision problem, which can be quickly solved by dynamic programming in response to changing query loads. Thorough experiments are conducted to validate the usefulness of the proposed adaptive model using historical Web search traffic submitted to a commercial search engine. Our results show that our proposed self-adapting model can achieve an energy saving of 33% while only degrading mean query completion time by 10 ms compared to a baseline that provisions replicas based on a previous day's traffic
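
    As a rough illustration of the kind of decision such a model makes, the sketch below chooses the number of active machines per time slot by dynamic programming, balancing power and machine-switching cost against a latency target estimated from the expected query load. The latency model, power model and all constants are invented placeholders, not the paper's formulation.

```python
# Hedged sketch: dynamic-programming choice of active replicas per time slot,
# in the spirit of the latency/power trade-off described above.

def plan_active_machines(loads, max_machines, latency_target_ms=100.0,
                         power_per_machine=1.0, switch_cost=0.5):
    """loads: expected queries/sec per slot (e.g. from historical traffic).
    Returns one machine count per slot minimising power plus switching cost,
    subject to a crude latency constraint."""
    INF = float("inf")

    def latency(load, machines):
        # Toy queueing-style estimate: each machine serves up to 10 q/s.
        capacity = machines * 10.0
        return INF if load >= capacity else 1000.0 / (capacity - load)  # ms

    n = len(loads)
    # dp[t][m] = cheapest cost of slots 0..t ending with m active machines
    dp = [[INF] * (max_machines + 1) for _ in range(n)]
    prev = [[None] * (max_machines + 1) for _ in range(n)]
    for t in range(n):
        for m in range(1, max_machines + 1):
            if latency(loads[t], m) > latency_target_ms:
                continue                        # too few machines for this load
            run_cost = m * power_per_machine
            if t == 0:
                dp[t][m] = run_cost
                continue
            for pm in range(1, max_machines + 1):
                if dp[t - 1][pm] == INF:
                    continue
                cost = dp[t - 1][pm] + run_cost + switch_cost * abs(m - pm)
                if cost < dp[t][m]:
                    dp[t][m], prev[t][m] = cost, pm
    # backtrack from the cheapest final state
    m = min(range(1, max_machines + 1), key=lambda x: dp[n - 1][x])
    plan = [m]
    for t in range(n - 1, 0, -1):
        m = prev[t][m]
        plan.append(m)
    return plan[::-1]

print(plan_active_machines([20, 80, 150, 40], max_machines=20))
# -> [3, 9, 16, 5]: scale up for the traffic peak, scale back down afterwards
```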

    The use of patient feedback by hospital boards of directors: a qualitative study of two NHS hospitals in England

    BACKGROUND: Although previous research suggests that different kinds of patient feedback are used in different ways to help improve the quality of hospital care, there have been no studies of the ways in which hospital boards of directors use feedback for this purpose. OBJECTIVES: To examine whether and how boards of directors of hospitals use feedback from patients to formulate strategy and to assure and improve the quality of care. METHODS: We undertook an in-depth qualitative study in two acute hospital National Health Service foundation trusts in England, purposively selected as contrasting examples of the collection of different kinds of patient feedback. We collected and analysed data from interviews with directors and other managers, from observation of board meetings, and from board papers and other documents. RESULTS: The two boards used in-depth qualitative feedback and quantitative feedback from surveys in different ways to help develop strategies, set targets for quality improvement and design specific quality improvement initiatives; but both boards made less subsequent use of any kinds of feedback to monitor their strategies or explicitly to assure the quality of services. DISCUSSION AND CONCLUSIONS: We have identified limitations in the uses of patient feedback by hospital boards that suggest that boards should review their current practice to ensure that they use the different kinds of patient feedback that are available to them more effectively to improve, monitor and assure the quality of care

    Ranking and clustering of nodes in networks with smart teleportation

    Random teleportation is a necessary evil for ranking and clustering directed networks based on random walks. Teleportation enables ergodic solutions, but the solutions must necessarily depend on the exact implementation and parametrization of the teleportation. For example, in the commonly used PageRank algorithm, the teleportation rate must trade off a heavily biased solution with a uniform solution. Here we show that teleportation to links rather than nodes enables a much smoother trade-off and effectively more robust results. We also show that, by not recording the teleportation steps of the random walker, we can further reduce the effect of teleportation with dramatic effects on clustering. Comment: 10 pages, 7 figures.
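
    The sketch below contrasts standard node teleportation with link teleportation in a PageRank-style power iteration, under the assumption that "teleportation to links" means the walker restarts at the endpoint of a uniformly chosen link, i.e. the restart distribution is proportional to weighted in-link mass rather than uniform over nodes. It is a conceptual illustration, not the authors' implementation.

```python
# Hedged sketch: PageRank power iteration with two teleportation variants.
import numpy as np

def pagerank(adj, alpha=0.85, teleport_to="links", tol=1e-10, max_iter=1000):
    """adj[i, j] = weight of link i -> j. Returns stationary visit rates."""
    n = adj.shape[0]
    out = adj.sum(axis=1)
    # Row-stochastic transition matrix; rows of dangling nodes stay zero.
    P = np.divide(adj, out[:, None], out=np.zeros_like(adj, dtype=float),
                  where=out[:, None] > 0)
    if teleport_to == "links":
        v = adj.sum(axis=0)          # land proportionally to incoming link weight
        v = v / v.sum()
    else:
        v = np.full(n, 1.0 / n)      # standard uniform node teleportation
    p = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        dangling = p[out == 0].sum()             # mass stuck on dangling nodes
        new_p = alpha * (p @ P + dangling * v) + (1 - alpha) * v
        done = np.abs(new_p - p).sum() < tol
        p = new_p
        if done:
            break
    return p

# Toy directed graph: node 2 only receives links, node 0 has none incoming.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
print(pagerank(A, teleport_to="nodes"))   # uniform restart props node 0 up
print(pagerank(A, teleport_to="links"))   # link restart never lands on node 0
```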

    The impact of using combinatorial optimisation for static caching of posting lists

    Caching posting lists can reduce the amount of disk I/O required to evaluate a query. Current methods use optimisation procedures for maximising the cache hit ratio. A recent method selects posting lists for static caching in a greedy manner and obtains higher hit rates than standard cache eviction policies such as LRU and LFU. However, a greedy method does not formally guarantee an optimal solution. We investigate whether the use of methods guaranteed, in theory, to find an approximately optimal solution would yield higher hit rates. Thus, we cast the selection of posting lists for caching as an integer linear programming problem and perform a series of experiments using heuristics from combinatorial optimisation (CCO) to find optimal solutions. Using simulated query logs we find that CCO yields comparable results to a greedy baseline using cache sizes between 200 and 1000 MB, with modest improvements for queries of length two to three.
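
    To make the contrast concrete, the toy sketch below compares the greedy baseline with an exact 0/1-knapsack treatment of the same selection problem: maximise expected cache hits subject to a cache budget. Term names, frequencies and sizes are invented and in arbitrary units; a real system would use query-log statistics and posting-list sizes in MB, as in the experiments above. On this toy instance the greedy choice is suboptimal, which is exactly the concern motivating the combinatorial-optimisation formulation.

```python
# Hedged sketch: greedy vs. exact selection of posting lists for a static cache.

def greedy_select(lists, budget):
    """lists: [(term, expected_hits, size)]; pick greedily by hits/size ratio."""
    chosen, used = [], 0
    for term, hits, size in sorted(lists, key=lambda x: x[1] / x[2], reverse=True):
        if used + size <= budget:
            chosen.append(term)
            used += size
    return chosen

def knapsack_select(lists, budget):
    """Exact 0/1 knapsack by dynamic programming (integer sizes assumed)."""
    best = [(0.0, [])] * (budget + 1)   # best[c] = (hits, chosen terms) at capacity c
    for term, hits, size in lists:
        for c in range(budget, size - 1, -1):
            cand = best[c - size][0] + hits
            if cand > best[c][0]:
                best[c] = (cand, best[c - size][1] + [term])
    return best[budget][1]

lists = [("the", 50.0, 6), ("ipod", 30.0, 3), ("nano", 28.0, 3), ("of", 40.0, 5)]
print(greedy_select(lists, budget=8))    # ['ipod', 'nano']  -> 58 expected hits
print(knapsack_select(lists, budget=8))  # ['ipod', 'of']    -> 70 expected hits
```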

    Fast Searching in Packed Strings

    Given strings $P$ and $Q$, the (exact) string matching problem is to find all positions of substrings in $Q$ matching $P$. The classical Knuth-Morris-Pratt algorithm [SIAM J. Comput., 1977] solves the string matching problem in linear time, which is optimal if we can only read one character at a time. However, most strings are stored in a computer in a packed representation with several characters in a single word, giving us the opportunity to read multiple characters simultaneously. In this paper we study the worst-case complexity of string matching on strings given in packed representation. Let $m \leq n$ be the lengths of $P$ and $Q$, respectively, and let $\sigma$ denote the size of the alphabet. On a standard unit-cost word-RAM with logarithmic word size we present an algorithm using time $O\left(\frac{n}{\log_\sigma n} + m + \mathrm{occ}\right)$. Here $\mathrm{occ}$ is the number of occurrences of $P$ in $Q$. For $m = o(n)$ this improves the $O(n)$ bound of the Knuth-Morris-Pratt algorithm. Furthermore, if $m = O(n/\log_\sigma n)$ our algorithm is optimal, since any algorithm must spend at least $\Omega\left(\frac{(n+m)\log \sigma}{\log n} + \mathrm{occ}\right) = \Omega\left(\frac{n}{\log_\sigma n} + \mathrm{occ}\right)$ time to read the input and report all occurrences. The result is obtained by a novel automaton construction based on the Knuth-Morris-Pratt algorithm combined with a new compact representation of subautomata allowing an optimal tabulation-based simulation. Comment: To appear in Journal of Discrete Algorithms. Special Issue on CPM 200
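
    The sketch below illustrates the tabulation idea in the last sentence, though not the paper's actual construction: build the KMP automaton for $P$, then precompute a table mapping (state, block of $k$ characters) to (new state, match offsets), so the text is consumed $k$ characters per lookup. Choosing $k$ on the order of $\log_\sigma n$ keeps the table of roughly $(m+1)\,\sigma^k$ entries small while giving the packed speed-up.

```python
# Hedged sketch (not the paper's construction): tabulate KMP transitions over
# fixed-size character blocks so the text is read several characters per lookup.
from itertools import product

def kmp_automaton(pattern, alphabet):
    """Dense KMP transitions delta[state][char] -> state; state i means the
    last i text characters match pattern[:i]."""
    m = len(pattern)
    fail = [0] * (m + 1)                 # classical KMP failure function
    k = 0
    for i in range(1, m):
        while k and pattern[i] != pattern[k]:
            k = fail[k]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i + 1] = k
    delta = [dict() for _ in range(m + 1)]
    for state in range(m + 1):
        for c in alphabet:
            s = fail[state] if state == m else state   # restart via border after a match
            while s and pattern[s] != c:
                s = fail[s]
            delta[state][c] = s + 1 if pattern[s] == c else 0
    return delta

def block_table(delta, m, k, alphabet):
    """For each (state, block of k chars): resulting state and the offsets in
    the block where a full match ends. Size (m+1)*sigma^k, so keep k small."""
    table = {}
    for state in range(m + 1):
        for block in product(alphabet, repeat=k):
            s, ends = state, []
            for off, c in enumerate(block):
                s = delta[s][c]
                if s == m:
                    ends.append(off)
            table[state, block] = (s, tuple(ends))
    return table

def packed_search(pattern, text, k=4):
    """Report all start positions of pattern in text, k characters per step."""
    alphabet = sorted(set(pattern) | set(text))
    m = len(pattern)
    delta = kmp_automaton(pattern, alphabet)
    table = block_table(delta, m, k, alphabet)
    occs, state, i = [], 0, 0
    while i + k <= len(text):                 # one table lookup per k characters
        state, ends = table[state, tuple(text[i:i + k])]
        occs.extend(i + off - m + 1 for off in ends)
        i += k
    for j in range(i, len(text)):             # leftover tail, one char at a time
        state = delta[state][text[j]]
        if state == m:
            occs.append(j - m + 1)
    return occs

print(packed_search("aba", "abababa"))  # -> [0, 2, 4]
```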