19 research outputs found

    Comparison of conventional and High Velocity Compaction of alumina powders

    Get PDF
    Ceramic compacts are usually prepared by uniaxial pressing in a stainless-steel die, but the pressure that can be applied is limited and density gradients often occur. Recently a new forming method from powder metallurgy, High Velocity Compaction (HVC), has been applied to ceramic powders. The method is similar to conventional pressing but consists in dropping a ram at very high speed onto the upper punch. The kinetic energy of the ram is converted into an impact that produces a high pressure in a very short time. By controlling the kinetic energy, it is possible to apply a desired pressure that can be extremely high (up to 1 GPa) without damaging the tooling. The aim of the study is to compare the process conditions and the properties of green compacts produced by the two methods (conventional and HVC) at a similar forming pressure: the forming pressure, green density and its homogeneity, and pore size distribution of the tablets are studied, followed by the sintering behaviour, shrinkage, final density and microstructure of the ceramic material.
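
    As a rough, back-of-the-envelope illustration of how the ram's kinetic energy translates into forming pressure (all numbers below are assumptions for the sketch, not values from the study):

        # Illustrative estimate of the mean HVC forming pressure.
        # All values are assumed for the sketch, not taken from the study.
        m = 30.0       # ram mass, kg (assumed)
        v = 6.0        # impact velocity, m/s (assumed)
        d = 0.002      # stroke over which the energy is absorbed, m (assumed)
        A = 3.14e-4    # punch cross-section (~20 mm diameter), m^2 (assumed)

        E = 0.5 * m * v ** 2   # kinetic energy of the ram, J
        F = E / d              # mean force if E is dissipated over the stroke, N
        P = F / A              # mean pressure on the compact, Pa

        print(f"kinetic energy: {E:.0f} J")          # 540 J
        print(f"mean pressure:  {P / 1e9:.2f} GPa")  # ~0.86 GPa, i.e. of the order of 1 GPa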

    CARNIVORES PLEISTOCENES EN MÂCONNAIS. EXCURSION DU 16e SYMPOSIUM INTERNATIONAL DE L'OURS ET DU LION DES CAVERNES (Azé, 2011)

    Get PDF
    The excursion day of the 16th ICBLS gave participants the opportunity to visit three major palaeontological sites of the Mâconnais: the Azé and Blanot Caves, and the breccia site of Château, which is especially rich in large carnivores, mainly ursids and felids.

    LIA-iSmart at the TREC 2011 Entity track: Entity List Completion Using Contextual Unsupervised Scores for Candidate Entities Ranking

    No full text
    This paper describes our participation in the Entity List Completion (ELC) task of the TREC 2011 Entity track. Our approach combines the work done for the 2010 Related Entity Finding task with new criteria such as the proximity or similarity between a candidate answer and the correct answers given as examples, and their co-occurrences.
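
    A minimal sketch of the kind of similarity criterion described, assuming a TF-IDF cosine between the textual context of a candidate entity and the contexts of the example entities (the snippets and the scoring are illustrative, not the system's actual implementation):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Hypothetical context snippets for the example entities (the
        # correct answers given with the topic) and for one candidate.
        example_contexts = [
            "the striker joined the football club in 2010",
            "the club signed the winger from a rival team",
        ]
        candidate_context = "the midfielder signed with the football club"

        vec = TfidfVectorizer()
        X = vec.fit_transform(example_contexts + [candidate_context])

        # Rank the candidate by its mean cosine similarity to the examples.
        score = cosine_similarity(X[-1], X[:-1]).mean()
        print(f"candidate score: {score:.3f}")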

    Quantification et identification des concepts implicites d'une requête

    No full text
    In this paper we introduce an unsupervised method for mining and modeling latent search concepts. We use Latent Dirichlet Allocation (LDA), a generative probabilistic topic model, to exhibit highly specific query-related topics from pseudo-relevance feedback documents. Our approach automatically estimates the number of latent concepts as well as the needed amount of feedback documents, without any prior training or parameter tuning. Latent concepts are then weighted to reflect their relative importance with respect to the query, and are further used to automatically reformulate the initial user query and reorder the documents returned to the user. We also explore the use of different types of sources of information for modeling the latent concepts; for this purpose we use four general sources of information of various natures (web, news, encyclopedic) from which the feedback documents are extracted. We compare our approach against state-of-the-art approaches over two large ad-hoc TREC collections, and results show that it significantly improves document retrieval effectiveness, while the best results are achieved by combining latent concepts modeled from all available sources.
    Keywords: contextual search, topic modeling, relevance feedback
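
    A minimal sketch of the general idea, assuming gensim's LDA over a handful of tokenized feedback documents (here the number of concepts and of feedback documents is fixed by hand, whereas the paper estimates both automatically; the weighting scheme below is an illustrative choice):

        from gensim.corpora import Dictionary
        from gensim.models import LdaModel

        # Tokenized pseudo-relevance feedback documents (toy data).
        feedback_docs = [
            ["latent", "topic", "model", "query", "concept"],
            ["dirichlet", "allocation", "topic", "inference"],
            ["query", "expansion", "retrieval", "concept"],
        ]
        dictionary = Dictionary(feedback_docs)
        corpus = [dictionary.doc2bow(doc) for doc in feedback_docs]

        lda = LdaModel(corpus=corpus, id2word=dictionary,
                       num_topics=2, passes=20, random_state=0)

        # Weight each latent concept by its average share of the feedback
        # documents, then use its top terms to expand the original query.
        for k in range(lda.num_topics):
            weight = sum(dict(lda.get_document_topics(bow, minimum_probability=0.0))
                         .get(k, 0.0) for bow in corpus) / len(corpus)
            terms = [w for w, _ in lda.show_topic(k, topn=3)]
            print(f"concept {k}: weight={weight:.2f}, terms={terms}")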

    A weakly-supervised detection of entity central documents in a stream

    No full text
    Filtering a time-ordered corpus for documents that are highly relevant to an entity is a task that has been receiving more and more attention over the years. One application is to reduce the delay between the moment information about an entity is first observed and the moment the entity's entry in a knowledge base is updated. Current state-of-the-art approaches are highly supervised and require training examples for each monitored entity. We propose an approach which does not require new training data when processing a new entity. To capture the intrinsic characteristics of highly relevant documents, our approach relies on three types of features: document-centric features, entity-profile features and time features. Evaluated within the framework of the "Knowledge Base Acceleration" track at TREC 2012, it outperforms current state-of-the-art approaches.
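
    A minimal sketch of the entity-independent idea, assuming a scikit-learn classifier over a hypothetical feature layout (the feature names and values are illustrative, not the system's actual feature set):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Entity-independent feature vectors (toy values, assumed layout):
        # [mentions_in_doc, mention_in_title, profile_term_overlap, mention_burst]
        # Because no feature encodes the entity's identity, a model trained
        # on some entities can be applied to a new entity without retraining.
        X_train = np.array([
            [8, 1, 0.6, 2.5],   # docs annotated for already-monitored entities
            [1, 0, 0.1, 0.4],
            [5, 1, 0.4, 1.8],
            [0, 0, 0.0, 0.2],
        ])
        y_train = [1, 0, 1, 0]  # 1 = highly relevant to its entity

        clf = RandomForestClassifier(n_estimators=100,
                                     random_state=0).fit(X_train, y_train)

        # Score documents from the stream for a previously unseen entity.
        X_new_entity = np.array([[6, 1, 0.5, 2.1], [1, 0, 0.2, 0.3]])
        print(clf.predict_proba(X_new_entity)[:, 1])  # relevance scores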

    Do Social Information Help Book Search?

    No full text
    In this paper we describe our participation in the INEX 2012 Book Track. The collection is in its second year and is composed of Amazon and LibraryThing entries for real books, together with their associated user reviews, ratings and tags. As in 2011, we tried a simple yet effective approach of reranking books using a social component that takes into account both popularity and ratings. We also ran experiments using tags.
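
    A minimal sketch of this kind of social reranking, assuming a linear interpolation between the retrieval score and a social component built from ratings and review counts (the weights and normalizations are assumptions, not the parameters of the submitted runs):

        import math

        # Toy result list: (book_id, retrieval_score, mean_rating, n_reviews).
        results = [
            ("book_a", 12.3, 4.5, 830),
            ("book_b", 13.1, 3.2, 12),
            ("book_c", 11.8, 4.8, 150),
        ]

        def social_score(retrieval, rating, n_reviews, alpha=0.7):
            """Interpolate the retrieval score with a social component that
            combines the rating (quality) and log review count (popularity)."""
            social = (rating / 5.0) * math.log1p(n_reviews)
            return alpha * retrieval + (1 - alpha) * social

        reranked = sorted(results,
                          key=lambda r: social_score(r[1], r[2], r[3]),
                          reverse=True)
        print([book for book, *_ in reranked])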

    Vers une détection en temps réel de documents Web centrés sur une entité donnée

    No full text
    Named entity disambiguation is the task of linking an ambiguous mention of an entity in a document to the unique real-world entity in a knowledge base (KB) it represents. We address the opposite problem and add a time constraint: we monitor a stream of new Web documents to detect, in real time, those mentioning a given entity from a KB, and determine to what extent the information they contain matters. Such an approach could be used to reduce the time lag between the moment a new important piece of information about an entity shows up and the moment it is added to the knowledge base, for instance by recommending documents to contributors. We use two classifiers (Random Forests) combined with time-related features (e.g. the frequency of mentions of the entity over time or within the document), document-centric features such as mention position, and features based on the presence of known related entities. The effectiveness of our approach and the impact of the features used have been evaluated through our participation in the "Knowledge Base Acceleration" task at TREC 2012, which ranked our team 3rd out of 11 (Bonnefoy et al., 2012).
    Keywords: named entity, knowledge base, KBA, TREC, stream
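
    A minimal sketch of the kind of time-related feature mentioned, assuming mentions are counted per time window over the stream and the current window is compared to the recent history (the window size and burst measure are assumptions):

        from collections import Counter

        # Toy stream: (timestamp_in_hours, document_text).
        stream = [
            (0, "the entity opened a new office"),
            (1, "markets react to the entity announcement"),
            (1, "the entity announces a merger"),
            (2, "unrelated news item"),
            (3, "analysts comment on the entity merger"),
        ]

        def mention_counts(stream, entity, window=1):
            """Count documents mentioning the entity per time window."""
            counts = Counter()
            for ts, text in stream:
                if entity in text:
                    counts[ts // window] += 1
            return counts

        counts = mention_counts(stream, "entity")
        # Burst feature: current-window count relative to the running mean.
        history = [counts.get(w, 0) for w in range(3)]
        current = counts.get(3, 0)
        burst = current / (sum(history) / len(history) or 1)
        print(counts, burst)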