
    An expert system for safety instrumented system in petroleum industry

    Expert system technology has been developed since the 1960s and has proven to be a useful and effective tool in many areas. It helps shorten the time required to accomplish a given job and relieves the workload on human staff by carrying out tasks automatically. This master's thesis gives a general introduction to expert systems and the technologies involved. We also discuss the framework of the expert system and how it interacts with the existing cause and effect matrix. The thesis describes a way of implementing automatic textual verification and the possibility of automatic information extraction in the design process of safety instrumented systems. We use the Protégé application [*] to build models of the Cause and Effect Matrix and use XMLUnit to implement the comparison between the two files of interest.
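
    The thesis performs the comparison with XMLUnit, a Java library; the abstract does not give details of that step. As a minimal sketch of the same idea in Python, assuming the two Cause and Effect Matrix exports are plain XML files with hypothetical names, a structural diff could look like this:

```python
# Illustrative sketch only: the thesis uses XMLUnit (Java); this Python analogue
# with the standard library shows the comparison idea on two hypothetical
# Cause and Effect Matrix exports (the file names are assumptions).
import xml.etree.ElementTree as ET

def element_diffs(expected, actual, path=""):
    """Recursively collect tag/text/attribute differences between two XML trees."""
    diffs = []
    here = f"{path}/{expected.tag}"
    if expected.tag != actual.tag:
        diffs.append(f"{here}: tag {expected.tag!r} != {actual.tag!r}")
    if (expected.text or "").strip() != (actual.text or "").strip():
        diffs.append(f"{here}: text {expected.text!r} != {actual.text!r}")
    if expected.attrib != actual.attrib:
        diffs.append(f"{here}: attributes {expected.attrib} != {actual.attrib}")
    for exp_child, act_child in zip(expected, actual):
        diffs.extend(element_diffs(exp_child, act_child, here))
    if len(expected) != len(actual):
        diffs.append(f"{here}: child count {len(expected)} != {len(actual)}")
    return diffs

expected_root = ET.parse("cause_effect_design.xml").getroot()
actual_root = ET.parse("cause_effect_implemented.xml").getroot()
for diff in element_diffs(expected_root, actual_root):
    print(diff)
```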

    DYNIQX: A novel meta-search engine for the web

    The effect of metadata in collection fusion has not been sufficiently studied. In response, we present a novel meta-search engine called Dyniqx for metadata-based search. Dyniqx integrates search results from document, image, and video search services to generate a unified ranked list of results. Dyniqx exploits the metadata available in search services such as PubMed, Google Scholar, Google Image Search, and Google Video Search for fusing results from heterogeneous search engines. In addition, metadata from these search engines are used to generate dynamic query controls, such as sliders and tick boxes, which users can apply to filter search results. Our preliminary user evaluation shows that Dyniqx can help users complete information search tasks more efficiently and successfully than three well-known search engines. We also carried out a controlled user evaluation of the integration of six document/image/video search engines (Google Scholar, PubMed, Intute, Google Image, Yahoo Image, and Google Video) in Dyniqx. We designed a questionnaire for evaluating different aspects of Dyniqx in assisting users with search tasks. Each user performed a number of search tasks with Dyniqx before completing the questionnaire. Our evaluation results confirm the effectiveness of Dyniqx's meta-search in assisting user search tasks and provide insights into better designs of the Dyniqx interface.
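
    The abstract does not state which fusion method Dyniqx uses to merge the per-service result lists. As a hedged illustration of the general idea, one common rank-based approach is reciprocal rank fusion; the service names and result identifiers below are placeholders:

```python
# Minimal sketch of rank-based collection fusion, assuming each search service
# returns an ordered list of result identifiers. Reciprocal rank fusion is one
# common choice; the abstract does not specify Dyniqx's actual fusion method.
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge several ranked lists into one list, best-scoring items first."""
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, item in enumerate(results, start=1):
            scores[item] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from three services (e.g. PubMed, Google Scholar,
# Google Video); only the identifiers matter for the fusion step.
pubmed = ["doc3", "doc1", "doc7"]
scholar = ["doc1", "doc2", "doc3"]
video = ["vid9", "doc1"]
print(reciprocal_rank_fusion([pubmed, scholar, video]))
```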

    Social Search with Missing Data: Which Ranking Algorithm?

    Online social networking tools are extremely popular, but can miss potential discoveries latent in the social 'fabric'. Matchmaking services that perform naive profile matching with old database technology are too brittle in the absence of key data, and even modern ontological markup, though powerful, can be onerous at data-input time. In this paper, we present a system called BuddyFinder which can automatically identify buddies who best match a user's search requirements specified in a term-based query, even in the absence of stored user profiles. We deploy and compare five statistical measures, namely our own CORDER, mutual information (MI), phi-squared, improved MI, and Z score, as well as two TF/IDF-based baseline methods, to find online users who best match the search requirements based on 'inferred profiles' of these users in the form of scavenged web pages. These measures identify statistically significant relationships between online users and a term-based query. Our user evaluation on two groups of users shows that BuddyFinder can find users highly relevant to search queries, and that CORDER achieved the best average ranking correlations among all seven algorithms and improved the performance of both baseline methods.
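
    As a hedged illustration of two of the measures listed above, mutual information and phi-squared can both be computed from a 2x2 contingency table of user/term co-occurrence counts over the scavenged pages; the abstract does not give BuddyFinder's exact estimators, so the textbook forms below are assumptions:

```python
# Illustrative 2x2 contingency-table versions of two of the listed measures.
# Counts: n11 = pages mentioning both the user and the query term,
# n1_ = pages mentioning the user, n_1 = pages mentioning the term,
# n = total scavenged pages. These are textbook forms, not BuddyFinder's
# exact estimators, which the abstract does not specify.
import math

def pointwise_mutual_information(n11, n1_, n_1, n):
    """PMI of user and term, in bits; assumes all counts are positive."""
    return math.log2((n11 * n) / (n1_ * n_1))

def phi_squared(n11, n1_, n_1, n):
    """Phi-squared association computed from the same marginal counts."""
    n10 = n1_ - n11        # pages with the user but not the term
    n01 = n_1 - n11        # pages with the term but not the user
    n00 = n - n1_ - n01    # pages with neither
    num = (n11 * n00 - n10 * n01) ** 2
    den = n1_ * n_1 * (n - n1_) * (n - n_1)
    return num / den

# Made-up counts for one user/term pair.
print(pointwise_mutual_information(8, 20, 15, 200))
print(phi_squared(8, 20, 15, 200))
```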

    Thoughts on the Construction of Beautiful Villages from the Perspective of Poverty Alleviation

    Precision poverty alleviation has become an important task in implementing the rural revitalization strategy. Since the 19th CPC National Congress, Chinese government institutions have been striving to lift poor rural areas out of poverty. Taking Tailai district as its case, this essay studies precision poverty alleviation, explores and discusses the construction of beautiful villages, proposes strategies for sustainable development, and encourages people to change their thinking so as to coordinate the relationship between interests and ideas. It also sets the goal of using industry as a guide and technology as a means of poverty alleviation to make villages vibrant, so that endogenous momentum grows from the grassroots and feeds back into agriculture, farmers, and rural areas, in order to provide a reference for the construction of beautiful villages in Heilongjiang.

    The Open University at TREC 2007 Enterprise Track

    The Multimedia and Information Systems group at the Knowledge Media Institute of the Open University participated in the Expert Search and Document Search tasks of the Enterprise Track in TREC 2007. In both tasks, we studied the effect of anchor texts in addition to document contents, document authority, URL length, query expansion, and relevance feedback in improving search effectiveness. In the expert search task, we continued using a two-stage language model consisting of a document relevance model and a co-occurrence model. The document relevance model is equivalent to our approach in the document search task. We used our innovative multiple-window-based co-occurrence approach, whose underlying assumption is that there are multiple levels of association between an expert and his/her expertise. Our experimental results show that introducing these features in addition to document contents improved retrieval effectiveness.
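
    A minimal sketch of the multiple-window idea, assuming each document is reduced to token positions of the candidate expert's name and of the query terms; the window sizes and weights below are assumptions for illustration, not the values used in the TREC run:

```python
# Sketch of multiple-window co-occurrence scoring: an expert mention and a
# query-term occurrence contribute evidence at every window size that contains
# both, with tighter windows weighted more heavily. Window sizes and weights
# are illustrative assumptions only.
def cooccurrence_score(expert_positions, term_positions,
                       windows=(5, 20, 50), weights=(1.0, 0.5, 0.25)):
    """Sum weighted counts of expert/term pairs falling within each window."""
    score = 0.0
    for window, weight in zip(windows, weights):
        hits = sum(1 for e in expert_positions
                     for t in term_positions
                     if abs(e - t) <= window)
        score += weight * hits
    return score

# Hypothetical token offsets of an expert name and a query term in one document.
print(cooccurrence_score(expert_positions=[12, 140], term_positions=[15, 60, 144]))
```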

    Analysing the Noise Model Error for Realistic Noisy Label Data

    Distant and weak supervision make it possible to obtain large amounts of labeled training data quickly and cheaply, but these automatic annotations tend to contain many errors. A popular technique for overcoming the negative effects of these noisy labels is noise modelling, in which the underlying noise process is modelled. In this work, we study the quality of these estimated noise models from the theoretical side by deriving the expected error of the noise model. Apart from evaluating the theoretical results on commonly used synthetic noise, we also publish NoisyNER, a new noisy-label dataset from the NLP domain that was obtained through a realistic distant supervision technique. It provides seven sets of labels with differing noise patterns, allowing different noise levels to be evaluated on the same instances. Parallel clean labels are also available, making it possible to study scenarios where a small amount of gold-standard data can be leveraged. Our theoretical results and the corresponding experiments give insights into the factors that influence noise model estimation, such as the noise distribution and the sampling technique.
    Comment: Accepted at AAAI 2021; additional material at https://github.com/uds-lsv/noise-estimatio
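
    As a small hedged sketch of what estimating a noise model means in this setting: given paired noisy and clean labels for the same instances, a noise transition matrix can be estimated by row-normalised counting. The labels below are made up, and the paper's actual estimators and error analysis are not reproduced here:

```python
# Sketch of estimating a label-noise transition matrix from paired clean and
# noisy labels: entry [i, j] approximates P(noisy = j | clean = i).
# Example labels are fabricated for illustration only.
import numpy as np

def estimate_noise_matrix(clean_labels, noisy_labels, num_classes):
    counts = np.zeros((num_classes, num_classes))
    for c, y in zip(clean_labels, noisy_labels):
        counts[c, y] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1, row_sums)  # avoid divide-by-zero

clean = [0, 0, 1, 1, 1, 2, 2, 2, 2]
noisy = [0, 1, 1, 1, 2, 2, 2, 0, 2]
print(estimate_noise_matrix(clean, noisy, num_classes=3))
```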