64 research outputs found

    Temporal Information Models for Real-Time Microblog Search

    Get PDF
    Real-time search in Twitter and other social media services is often biased towards the most recent results due to the “in the moment” nature of topic trends and their ephemeral relevance to users and media in general. However, “in the moment”, it is often difficult to look at all emerging topics and single out the important ones from the rest of the social media chatter. This thesis proposes to leverage external sources to estimate the duration and burstiness of live Twitter topics. It extends preliminary research where it was shown that temporal re-ranking using external sources could indeed improve the accuracy of results. To further explore this topic we pursued three significant novel approaches: (1) multi-source information analysis that explores behavioral dynamics of users, such as Wikipedia live edits and page view streams, to detect topic trends and estimate topic interest over time; (2) efficient methods for federated query expansion towards the improvement of query meaning; and (3) exploiting multiple sources towards the detection of temporal query intent. The work differs from past approaches in that it operates over real-time queries, leveraging live user-generated content; this contrasts with previous methods that require an offline preprocessing step.
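
    A minimal sketch of one way a Wikipedia page-view stream might be turned into a burstiness estimate for a live topic; the moving-average baseline, window sizes, and sample data are illustrative assumptions, not the model actually developed in the thesis:

        from statistics import mean, pstdev

        def burstiness(page_views, baseline_window=24, recent_window=3):
            """Score how 'bursty' a topic is from an hourly page-view series.

            Compares recent activity against a longer-term baseline; a large
            positive score suggests a sudden spike in interest over time.
            """
            if len(page_views) < baseline_window + recent_window:
                raise ValueError("not enough history for the chosen windows")
            baseline = page_views[-(baseline_window + recent_window):-recent_window]
            recent = page_views[-recent_window:]
            mu, sigma = mean(baseline), pstdev(baseline)
            return (mean(recent) - mu) / (sigma or 1.0)

        # Hypothetical hourly page views for an article tied to an emerging topic.
        views = [120, 130, 115, 125, 118, 122, 119, 121, 117, 124, 123, 126,
                 120, 119, 125, 118, 122, 121, 116, 120, 119, 124, 123, 122,
                 400, 950, 1800]
        print(f"burstiness score: {burstiness(views):.1f}")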

    Query expansion by relying on the structure of knowledge bases

    Get PDF
    Query expansion techniques aim at improving the results achieved by a user's query by introducing new terms, called expansion features. Expansion features introduce new concepts that are semantically related to the concepts in the user's query and that allow retrieving documents that would otherwise not be retrieved. The challenge, then, is to select those expansion features that are capable of improving the results the most; a bad choice of expansion features may be counterproductive. In this thesis, we use an external source of information, a Knowledge Base (KB), as the source of expansion features. A knowledge base consists of a set of entries, each of which represents a concept and has, at least, a name, which can be used as an expansion feature. Techniques in this family have become more popular due to the increase of available data such as, for example, Wikipedia. In particular, we focus on exploiting those KBs whose entries are linked to each other, forming a graph of entries. To the best of our knowledge, most of the techniques in the KB family rely on some kind of text analysis, such as explicit semantic analysis, or are based on other existing query expansion techniques such as pseudo-relevance feedback. However, the underlying network structure of KBs has barely been exploited. In this thesis, we show that this structure can be used to identify reliable expansion features for the query expansion process, and we design a novel expansion technique, Structural Query Expansion (SQE). For SQE to benefit from the particular structure of KBs, we propose a methodology to identify the structural characteristics that, given a query, allow identifying those nodes in the KB that are good candidates to be used as sources of expansion features, called from now on expansion nodes. The methodology consists in building a ground truth that connects each query from a query set with those nodes of the KB that, when used to extract the expansion features, achieve the best results in terms of precision; we call the set of those nodes the expansion query graph. Then, we compare the expansion query graphs of the queries to find shared characteristics. SQE materializes the revealed characteristics into a set of structural motifs. In the particular case of Wikipedia, we have found two motifs, called triangular and square. In the former, the query node and the expansion node are doubly linked and the expansion node belongs to, at least, the same categories as the query node. In the latter, the query node and the expansion node are also doubly linked and their categories are connected. These motifs are used, given a query and its query nodes, to identify all the expansion nodes, which are then used as the source of expansion features. Notice that we have designed this technique to be orthogonal to others because it is fully decoupled from the search process and does not depend on the particular collection of documents. We have tested our technique with three different datasets to avoid any kind of overfitting, and the results are consistent among the three of them. The results, which are validated with statistical significance tests, show that SQE is capable of achieving up to a 150% improvement in precision. Finally, we show that our technique runs in sub-second times (358.23 ms at most), which makes it feasible for a real query expansion system. This is especially relevant because, to the best of our knowledge, performance is an aspect that is ignored in most works and, thus, it is difficult to know whether they can be included in real systems or not.
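
    A minimal sketch of the two Wikipedia motifs described above, with the link graph and category data represented as plain dictionaries of sets; the function names and data layout are illustrative assumptions, not the thesis's implementation:

        def doubly_linked(links, a, b):
            """True if article a links to article b and b links back to a."""
            return b in links.get(a, set()) and a in links.get(b, set())

        def triangular_motif(links, categories, query_node, candidate):
            """Doubly linked, and the candidate belongs to at least the query node's categories."""
            return (doubly_linked(links, query_node, candidate)
                    and categories.get(query_node, set()) <= categories.get(candidate, set()))

        def square_motif(links, categories, category_links, query_node, candidate):
            """Doubly linked, and some category of each node is connected to one of the other's."""
            if not doubly_linked(links, query_node, candidate):
                return False
            return any(c2 in category_links.get(c1, set()) or c1 in category_links.get(c2, set())
                       for c1 in categories.get(query_node, set())
                       for c2 in categories.get(candidate, set()))

        def expansion_nodes(links, categories, category_links, query_node):
            """All articles linked from the query node that match either motif."""
            return [n for n in links.get(query_node, set())
                    if triangular_motif(links, categories, query_node, n)
                    or square_motif(links, categories, category_links, query_node, n)]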

    A Reconsideration of Full-Cost Pricing

    Get PDF
    The wide use of full-cost pricing techniques remains an explanandum in both economics and management accounting theory. This work surveys and develops possible theoretical explanations of this industrial pricing behaviour and analyses some of its implications. By recognizing the widespread use of imperfect cost-plus pricing heuristics, observable pricing behaviour, as well as empirical market-level phenomena, can be explained. Furthermore, methodological aspects of marginalist price theory are discussed and brought into connection with the meager contemporary interest of most economists in the important phenomenon of full-cost pricing.

    Was Suchmaschinen nicht können. Holistische Entitätssuche auf Web Daten

    Get PDF
    More than 50% of all Web queries are entity related: users search either for entities or for entity information. Still, search engines do not accommodate entity-centric search very well. Building on the concept of the semiotic triangle from cognitive psychology, which models entity types in terms of intensions and extensions, we identified three types of queries for retrieving entities: type-based queries, searching for entities of a given type; prototype-based queries, searching for entities having certain properties; and instance-based queries, searching for entities similar to a given entity. For type-based queries we present a method that combines query expansion with a self-supervised vocabulary learning technique built on both structured and unstructured data. Our approach is able to achieve a good tradeoff between precision and recall. For prototype-based queries we propose ProSWIP, a property-based system for retrieving entities from the Web. Since the number of properties given by the users can be quite small, ProSWIP relies on direct questions and user feedback to expand the set of properties to a set that captures the user’s intentions correctly. Our experiments show that within a maximum of four questions the system achieves perfect precision of the selected entities. In the case of instance-based queries the first challenge is to establish a query form that allows for disambiguating user intentions without putting too much cognitive pressure on the user. We propose a minimalistic instance-based query comprising the example entity and the intended entity type. With this query, and building on the concept of family resemblance, we present a practical way of retrieving entities directly from the Web. Our approach can even cope with queries which have proven problematic for benchmark tasks like related entity finding. Providing information about a given entity, entity summarization is another kind of entity-centric query. Google’s Knowledge Graph is the state of the art for this task, but, relying entirely on manually curated knowledge bases, the Knowledge Graph does not include all new and less known entities. We propose to use a data-driven approach instead. Our experiments on real-world entities show the superiority of our method. We are confident that mastering these four query types enables holistic entity search on Web data for the next generation of search engines.
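
    A minimal sketch of how an instance-based query (an example entity plus the intended type) could be answered with a family-resemblance style overlap score; the property sets, scoring function, and threshold below are illustrative assumptions rather than the method actually used in the dissertation:

        def family_resemblance(example_props, candidate_props):
            """Jaccard-style overlap: a candidate need not share one defining
            property with the example, only enough of them overall."""
            return len(example_props & candidate_props) / len(example_props | candidate_props)

        def instance_based_query(example_entity, entity_type, entities, min_score=0.3):
            """Rank other entities of the requested type by resemblance to the example."""
            example_props = entities[example_entity]
            scored = [(name, family_resemblance(example_props, props))
                      for name, props in entities.items()
                      if name != example_entity and entity_type in props]
            return sorted((p for p in scored if p[1] >= min_score),
                          key=lambda p: p[1], reverse=True)

        # Hypothetical property sets, as they might be harvested from the Web.
        entities = {
            "Danube": {"river", "in Europe", "flows into the Black Sea"},
            "Rhine": {"river", "in Europe", "flows into the North Sea"},
            "Matterhorn": {"mountain", "in Europe", "in the Alps"},
        }
        print(instance_based_query("Danube", "river", entities))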

    Development of a Coal Quality Expert

    Full text link

    CQE: an approach to automatically estimate the code quality using an objective metric from an empirical study

    Get PDF
    Bugs in a project, at any stage of the software development life cycle, are costly and difficult to find and fix. Moreover, the later a bug is found, the more expensive it is to fix. There are static analysis tools to ease the process of finding bugs, but it is not easy to filter the critical errors out of their results, and analyzing them is time consuming. To solve this problem we use two steps: first, we enhance the severity ranking of the reported bugs, and second, we estimate the code quality with a Weighted Error Code Density metric. Our experiment on 10 widely used open-source Java applications shows their code quality automatically estimated using our objective metric. We also enhance the error ranking of FindBugs, and provide a clear view of the critical errors to fix as well as the low-priority ones that can potentially be ignored.
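
    A minimal sketch of what a severity-weighted error density metric of this kind might look like; the severity weights, the per-KLOC normalization, and the sample findings are illustrative assumptions, not the metric as defined in the paper:

        # Hypothetical severity weights: critical findings count far more than minor ones.
        SEVERITY_WEIGHTS = {"critical": 10.0, "major": 5.0, "minor": 1.0}

        def weighted_error_code_density(findings, lines_of_code):
            """Severity-weighted error count per 1000 lines of code (lower is better)."""
            weighted = sum(SEVERITY_WEIGHTS.get(f["severity"], 1.0) for f in findings)
            return 1000.0 * weighted / max(lines_of_code, 1)

        # Example findings, roughly in the shape a static-analysis report might take.
        findings = [
            {"rule": "NP_NULL_ON_SOME_PATH", "severity": "critical"},
            {"rule": "DM_DEFAULT_ENCODING", "severity": "minor"},
            {"rule": "URF_UNREAD_FIELD", "severity": "minor"},
        ]
        print(f"density: {weighted_error_code_density(findings, 42_000):.3f} per KLOC")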

    Building Fully Adaptive Stochastic Models for Multiphase Blowdown Simulations.

    Full text link
    A new method for uncertainty quantification (UQ) that combines adaptivity in the physical (deterministic) space and the stochastic space is presented. The sampling-based method adaptively refines the physical discretizations of the simulations, along with adaptively building a stochastic model and adding samples. By adaptively refining the physical and stochastic models, errors from both spaces are balanced. The UQ method takes advantage of an active linear subspace to reduce the dimensionality of the stochastic space while retaining relevant interaction terms and anisotropy. Driven by low-cost error estimates, a particle-swarm optimization method explores the stochastic space and drives adaptation that results in an efficient stochastic approximation. The UQ method is compared to two modern methods for three test functions in a 100-dimensional space. The current method is shown to result in up to three orders of magnitude lower error and up to two orders of magnitude fewer samples. Next, a simulation is developed using the discontinuous Galerkin method, which is well suited to adaptivity. A transient multiphase flashing flow model is used to simulate the Edwards-O'Brien blowdown problem. The adjoint equations are successfully solved and used to drive a space-time anisotropic adaptation based on a complex output of interest. This results in an efficient physical discretization. Finally, the UQ method is used to assess modeling and discretization errors in a modified multiphase flow simulation. Based on an overall stochastic output of interest, the UQ method simultaneously drives adaptation of the stochastic and deterministic discretizations in order to balance the two sources of error. That is, terms are added to the stochastic model, samples are added, and the physical grid of each individual simulation is refined simultaneously. Error estimates based on semi-refined discretizations retain anisotropic accuracy, and a common grid is used to compare solutions from samples. The method for combined adaptivity performs well on the test problem, reducing the stochastic dimensionality from 20 to two and reducing deterministic errors on select samples. For about the same computational time, the method results in an order of magnitude less error and an order of magnitude fewer degrees of freedom compared to three other methods.
    PhD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111612/1/isaaca_1.pd
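
    A minimal sketch of the active-subspace idea mentioned above for reducing the dimensionality of a stochastic space from sampled output gradients; the quadratic-plus-sine test function and the choice to retain two directions are illustrative assumptions, not the dissertation's actual model problem:

        import numpy as np

        def active_subspace(grad_samples, n_keep):
            """Dominant directions from the eigendecomposition of the average
            outer product of output gradients, C = E[grad f grad f^T]."""
            grads = np.asarray(grad_samples)            # shape (n_samples, dim)
            C = grads.T @ grads / len(grads)
            eigvals, eigvecs = np.linalg.eigh(C)        # eigh returns ascending order
            order = np.argsort(eigvals)[::-1]
            return eigvals[order][:n_keep], eigvecs[:, order][:, :n_keep]

        # Hypothetical 20-dimensional output that really varies along only two directions.
        rng = np.random.default_rng(0)
        dim = 20
        w1, w2 = rng.normal(size=dim), rng.normal(size=dim)
        xs = rng.uniform(-1, 1, size=(500, dim))
        # Gradient of f(x) = (x.w1)^2 + sin(x.w2)
        grads = [2 * (x @ w1) * w1 + np.cos(x @ w2) * w2 for x in xs]

        eigvals, basis = active_subspace(grads, n_keep=2)
        print("leading eigenvalues:", np.round(eigvals, 3))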

    Collected software engineering papers, volume 8

    Get PDF
    A collection of selected technical papers produced by participants in the Software Engineering Laboratory (SEL) during the period November 1989 through October 1990 is presented. The purpose of the document is to make available, in one reference, some results of SEL research that originally appeared in a number of different forums. Although these papers cover several topics related to software engineering, they do not encompass the entire scope of SEL activities and interests. Additional information about the SEL and its research efforts may be obtained from the sources listed in the bibliography. The seven presented papers are grouped into four major categories: (1) experimental research and evaluation of software measurement; (2) studies on models for software reuse; (3) a software tool evaluation; and (4) Ada technology and studies in the areas of reuse and specification