    Transitivity, Time Consumption, and Quality of Preference Judgments in Crowdsourcing

    Preference judgments have been demonstrated to be a better alternative to graded judgments for assessing the relevance of documents to queries. Existing work has verified transitivity among preference judgments collected from trained judges, which dramatically reduces the number of judgments needed. Moreover, strict preference judgments and weak preference judgments, where the latter additionally allow judges to state that two documents are equally relevant to a given query, are both widely used in the literature. However, it remains unclear whether transitivity still holds when judgments are collected via crowdsourcing, and whether the two kinds of preference judgments then behave similarly. In this work, we collect judgments from multiple judges on a crowdsourcing platform and aggregate them to compare the two kinds of preference judgments in terms of transitivity, time consumption, and quality. That is, we examine whether aggregated judgments are transitive, how long it takes judges to make them, and whether judges agree with each other and with judgments from TREC. Our key finding is that only strict preference judgments are transitive; weak preference judgments also behave differently in terms of time consumption and judgment quality.
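
    A minimal sketch (not the paper's code) of what such a transitivity check over aggregated strict preference judgments could look like: raw (winner, loser) pairs are aggregated by majority vote, and every document triple is tested for a violation of "a > b and b > c implies a > c". All data here is illustrative.

        from collections import Counter
        from itertools import permutations

        def aggregate(judgments):
            """Majority-vote aggregation of raw (winner, loser) pairs."""
            votes = Counter(judgments)  # (winner, loser) -> vote count
            prefers = set()
            for (a, b), n in votes.items():
                if n > votes.get((b, a), 0):  # strict majority for a over b
                    prefers.add((a, b))
            return prefers

        def transitivity_violations(prefers):
            """All triples (a, b, c) with a > b and b > c but not a > c."""
            docs = {d for pair in prefers for d in pair}
            return [(a, b, c) for a, b, c in permutations(docs, 3)
                    if (a, b) in prefers and (b, c) in prefers
                    and (a, c) not in prefers]

        raw = [("d1", "d2"), ("d1", "d2"), ("d2", "d1"),
               ("d2", "d3"), ("d1", "d3")]
        print(transitivity_violations(aggregate(raw)))  # [] -> transitive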

    A Probabilistic Framework for Time-Sensitive Search

    Diversifying Search Results Using Time

    Getting an overview of a historic entity or event can be difficult from search results, especially if important dates concerning the entity or event are not known beforehand. For such information needs, users would benefit if the returned results covered diverse dates, thus giving an overview of what has happened throughout history. Diversifying search results based on important dates can be a building block for applications, for instance, in the digital humanities: historians would be able to quickly explore longitudinal document collections by querying for entities or events without knowing the associated important dates a priori. In this work, we describe an approach to diversify search results using temporal expressions (e.g., "in the 1990s") from their contents. Our approach first identifies time intervals of interest to the given keyword query based on pseudo-relevant documents. It then re-ranks query results so as to maximize the coverage of the identified time intervals. We present a novel and objective evaluation for our proposed approach. We test the effectiveness of our methods on the New York Times Annotated corpus and the Living Knowledge corpus, which collectively consist of around 6 million documents. Using history-oriented queries and encyclopedic resources, we show that our method is indeed able to present search results diversified along time.
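
    The re-ranking step lends itself to a simple greedy sketch: at each rank, pick the document covering the most still-uncovered time intervals, breaking ties by the original retrieval score. This is an assumed reading of the approach rather than the authors' exact algorithm, and the interval assignments below are hypothetical.

        def rerank_by_time_coverage(ranked, doc_intervals, k=10):
            """ranked: [(doc_id, score), ...] best-first.
            doc_intervals: doc_id -> set of time-interval ids it mentions."""
            remaining = dict(ranked)
            covered, result = set(), []
            while remaining and len(result) < k:
                # most new intervals first; tie-break on retrieval score
                best = max(remaining,
                           key=lambda d: (len(doc_intervals.get(d, set()) - covered),
                                          remaining[d]))
                covered |= doc_intervals.get(best, set())
                result.append(best)
                del remaining[best]
            return result

        ranked = [("d1", 3.2), ("d2", 3.0), ("d3", 2.5)]
        intervals = {"d1": {"1990s"}, "d2": {"1990s"}, "d3": {"1960s", "2000s"}}
        print(rerank_by_time_coverage(ranked, intervals, k=2))  # ['d3', 'd1']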

    Leveraging Semantic Annotations to Link Wikipedia and News Archives

    The overwhelming amount of information available online makes it difficult to look back on past events. We propose a novel linking problem: connecting excerpts from Wikipedia that summarize events to online news articles elaborating on them. To address the linking problem, we cast it into an information retrieval task by treating a given excerpt as a user query, with the goal of retrieving a ranked list of relevant news articles. We find that Wikipedia excerpts often carry additional semantics in their textual descriptions, representing the time, geolocations, and named entities involved in the event. Our retrieval model leverages text and semantic annotations as different dimensions of an event by estimating independent query models to rank documents. In experiments on two datasets, we compare methods that consider different combinations of dimensions and find that the approach leveraging all dimensions suits our problem best.
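
    One plausible instantiation of such a multi-dimensional model, sketched here under assumptions (Dirichlet-smoothed likelihoods per dimension and a weighted log-linear combination are our choices, not necessarily the paper's exact estimators), scores a document by combining independent query models for the text, time, geolocation, and entity dimensions:

        import math

        def dim_likelihood(query_items, doc_items, vocab_size=1000, mu=100):
            """Dirichlet-smoothed log-likelihood of one dimension's query terms."""
            n = len(doc_items)
            return sum(math.log((doc_items.count(t) + mu / vocab_size) / (n + mu))
                       for t in query_items)

        def combined_score(query, doc, weights):
            """Weighted sum of per-dimension log-likelihoods."""
            return sum(w * dim_likelihood(query[d], doc[d])
                       for d, w in weights.items())

        # hypothetical annotated excerpt (query) and news article (doc)
        query = {"text": ["earthquake", "tsunami"], "time": ["2004"],
                 "geo": ["Indian_Ocean"], "entity": ["Sumatra"]}
        doc = {"text": ["earthquake", "tsunami", "relief"], "time": ["2004", "2005"],
               "geo": ["Indian_Ocean"], "entity": ["Sumatra", "Aceh"]}
        weights = {"text": 0.4, "time": 0.2, "geo": 0.2, "entity": 0.2}
        print(combined_score(query, doc, weights))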

    Extracting Contextualized Quantity Facts from Web Tables

    YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia

    We present YAGO2, an extension of the YAGO knowledge base, in which entities, facts, and events are anchored in both time and space. YAGO2 is built automatically from Wikipedia, GeoNames, and WordNet. It contains 80 million facts about 9.8 million entities. Human evaluation confirmed an accuracy of 95% of the facts in YAGO2. In this paper, we present the extraction methodology, the integration of the spatio-temporal dimension, and our knowledge representation SPOTL, an extension of the original SPO-triple model to time and space.
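
    The SPOTL representation can be pictured as a plain extension of the SPO triple with time and location components; a minimal sketch (the fact values are illustrative):

        from collections import namedtuple

        # subject-predicate-object triple extended by time and location
        SPOTL = namedtuple("SPOTL",
                           ["subject", "predicate", "object", "time", "location"])

        fact = SPOTL(subject="Albert_Einstein",
                     predicate="wonPrize",
                     object="Nobel_Prize_in_Physics",
                     time="1921",
                     location="Stockholm")
        print(fact)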

    Improved Implementation of Point Location in General Two-Dimensional Subdivisions

    We present a major revamp of the point-location data structure for general two-dimensional subdivisions via randomized incremental construction, implemented in CGAL, the Computational Geometry Algorithms Library. We can now guarantee that the constructed directed acyclic graph G is of linear size and provides logarithmic query time. Via the construction of the Voronoi diagram for a given point set S of size n, this also enables nearest-neighbor queries in guaranteed O(log n) time. Another major innovation is the support of general unbounded subdivisions as well as subdivisions of two-dimensional parametric surfaces such as spheres, tori, and cylinders. The implementation is exact, complete, and general, i.e., it can also handle non-linear subdivisions. Like the previous version, the data structure supports modifications of the subdivision, such as insertions and deletions of edges, after the initial preprocessing. A major challenge is to retain the expected O(n log n) preprocessing time while providing the above (deterministic) space and query-time guarantees. We describe an efficient preprocessing algorithm, which explicitly verifies the length L of the longest query path in O(n log n) time. However, instead of using L, our implementation is based on the depth D of G. Although we prove that the worst-case ratio of D and L is Θ(n / log n), we conjecture, based on our experimental results, that this solution achieves expected O(n log n) preprocessing time.
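
    One ingredient mentioned above, the depth D of the search DAG G, is simply the longest root-to-node path; a minimal sketch of computing it by memoized traversal (the graph and node names are illustrative, not CGAL code):

        from functools import lru_cache

        def dag_depth(children, root):
            """children: node -> list of child nodes; returns the depth D,
            i.e. the length of the longest path starting at root."""
            @lru_cache(maxsize=None)
            def depth(node):
                kids = children.get(node, [])
                return 0 if not kids else 1 + max(depth(k) for k in kids)
            return depth(root)

        children = {"r": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
        print(dag_depth(children, "r"))  # 2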

    Flavopiridol Protects Against Inflammation by Attenuating Leukocyte-Endothelial Interaction via Inhibition of Cyclin-Dependent Kinase 9

    Objective: The cyclin-dependent kinase (CDK) inhibitor flavopiridol is currently being tested in clinical trials as an anticancer drug. Beyond its cell death–inducing action, we hypothesized that flavopiridol affects inflammatory processes. Therefore, we elucidated the action of flavopiridol on leukocyte–endothelial cell interaction and endothelial activation in vivo and in vitro and studied the underlying molecular mechanisms. Methods and Results: Flavopiridol suppressed concanavalin A–induced hepatitis and neutrophil infiltration into liver tissue. Flavopiridol also inhibited tumor necrosis factor-α–induced leukocyte–endothelial cell interaction in the mouse cremaster muscle. Endothelial cells were found to be the major target of flavopiridol, which blocked the expression of endothelial cell adhesion molecules (intercellular adhesion molecule-1, vascular cell adhesion molecule-1, and E-selectin), as well as nuclear factor-κB (NF-κB)-dependent transcription. Flavopiridol did not affect inhibitor of κB (IκB) kinase, the degradation and phosphorylation of IκBα, nuclear translocation of p65, or NF-κB DNA-binding activity. By performing a cellular kinome array and a kinase activity panel, we found LIM domain kinase-1 (LIMK1), casein kinase 2, c-Jun N-terminal kinase (JNK), protein kinase Cθ (PKCθ), CDK4, CDK6, CDK8, and CDK9 to be influenced by flavopiridol. Using specific inhibitors as well as RNA interference (RNAi), we revealed that only CDK9 is responsible for the action of flavopiridol. Conclusion: Our study highlights flavopiridol as a promising anti-inflammatory compound and inhibition of CDK9 as a novel approach for the treatment of inflammation-associated diseases.

    Well-balanced treatment of gravity in astrophysical fluid dynamics simulations at low Mach numbers

    Accurate simulations of flows in stellar interiors are crucial to improving our understanding of stellar structure and evolution. Because the typically slow flows are merely tiny perturbations on top of a close balance between gravity and the pressure gradient, such simulations place heavy demands on numerical hydrodynamics schemes. We demonstrate how discretization errors on grids of reasonable size can lead to spurious flows orders of magnitude faster than the physical flow. Well-balanced numerical schemes can deal with this problem. Three such schemes were applied in the implicit, finite-volume Seven-League Hydro (SLH) code in combination with a low-Mach-number numerical flux function. We compare how the schemes perform in four numerical experiments addressing some of the challenges imposed by typical problems in stellar hydrodynamics. We find that the α-β and deviation well-balancing methods can accurately maintain hydrostatic solutions provided that gravitational potential energy is included in the total energy balance. They accurately conserve minuscule entropy fluctuations advected in an isentropic stratification, which enables the methods to reproduce the expected scaling of convective flow speed with the heating rate. The deviation method also substantially increases the accuracy of maintaining stationary orbital motions in a Keplerian disk on long timescales. The Cargo-LeRoux method fares substantially worse in our tests, although its simplicity may still offer some merits in certain situations. Overall, we find the well-balanced treatment of gravity in combination with low-Mach-number flux functions essential to reproducing correct physical solutions to challenging stellar slow-flow problems on affordable collocated grids.
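
    To illustrate the deviation idea in the simplest possible setting (a 1D isothermal hydrostatic profile; this toy is our illustration, not the SLH discretization): finite-differencing the full pressure leaves a truncation-error residual that acts as a spurious force, whereas differencing only the deviation from the hydrostatic reference vanishes exactly for the unperturbed state.

        import numpy as np

        g, H = -1.0, 1.0                      # constant gravity, scale height
        x = np.linspace(0.0, 2.0, 41)
        rho0 = np.exp(x * g / H)              # hydrostatic density (isothermal)
        p0 = H * rho0 * (-g)                  # satisfies dp0/dx = rho0 * g exactly

        # naive source term: central difference of the full pressure
        naive = (p0[2:] - p0[:-2]) / (x[2] - x[0]) - rho0[1:-1] * g

        # deviation form: difference only the perturbation p - p0, which is
        # identically zero for the hydrostatic state
        dp = np.zeros_like(p0)
        balanced = (dp[2:] - dp[:-2]) / (x[2] - x[0])

        print(np.max(np.abs(naive)))          # O(dx^2) spurious acceleration
        print(np.max(np.abs(balanced)))       # exactly zero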