
    Diversity, Assortment, Dissimilarity, Variety: A Study of Diversity Measures Using Low Level Features for Video Retrieval

    In this paper we present a number of methods for re-ranking video search results in order to introduce diversity into the result set. The usefulness of these approaches is evaluated against similarity-based measures on the TRECVID 2007 collection and tasks [11]. With respect to the MAP of the search results, we find that some of our approaches perform as well as similarity-based methods, and that some of them also improve P@N at lower values of N. The most successful approach was then implemented in an interactive search system for the TRECVID 2008 interactive search tasks. The responses from the users indicate that they find the more diverse search results extremely useful.
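    The abstract does not spell out the re-ranking methods themselves. As a rough illustration of one standard way to diversify a ranked list, the sketch below applies a Maximal Marginal Relevance (MMR)-style greedy re-ranking; the `relevance` scores, `similarity` function, and `lambda_` trade-off are illustrative assumptions, not taken from the paper.

    ```python
    # MMR-style diversity re-ranking (a sketch, not the paper's exact method).
    # Assumes precomputed relevance scores and a pairwise similarity function
    # over low-level features; both stand in for the paper's measures.

    def mmr_rerank(candidates, relevance, similarity, lambda_=0.7, k=10):
        """Greedily pick k items, trading off relevance against redundancy."""
        selected = []
        pool = list(candidates)
        while pool and len(selected) < k:
            def score(item):
                redundancy = max((similarity(item, s) for s in selected),
                                 default=0.0)
                return lambda_ * relevance[item] - (1 - lambda_) * redundancy
            best = max(pool, key=score)
            selected.append(best)
            pool.remove(best)
        return selected
    ```

    With `lambda_ = 1.0` this degenerates to a pure relevance ranking; lowering it penalizes results that are too similar to items already selected, which is the general intuition behind diversity-aware re-ranking.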

    Grounding semantics in robots for Visual Question Answering

    In this thesis I describe an operational implementation of an object detection and description system that is incorporated into an end-to-end Visual Question Answering system, and evaluate it on two visual question answering datasets for compositional language and elementary visual reasoning.

    Drug prescription support in dental clinics through drug corpus mining

    The rapid increase in the volume and variety of data poses a challenge to safe drug prescription for the dentist. The growing number of patients who take multiple drugs puts further pressure on the dentist to make the right decision at the point of care. A robust decision support system would therefore enable dentists to make drug prescription decisions quickly and accurately. Based on the assumption that similar drug pairs have a higher similarity ratio, this paper suggests an approach to obtain the similarity ratio between the drug the dentist is about to prescribe and each drug the patient is currently taking. We conducted experiments to obtain the similarity ratios of both positive and negative drug pairs, using feature vectors generated from term similarities and word embeddings of a biomedical text corpus. The model can easily be adapted for use in a dental clinic to assist the dentist in deciding whether a drug is suitable for prescription, taking the patient's medical profile into consideration. Experimental evaluation of our model's association of the similarity ratio between two drugs yielded an F score of 89%. Such an approach, when integrated into the clinical workflow, will reduce prescription errors and thereby improve patient health outcomes.
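    As a minimal sketch of the similarity-ratio idea, assuming each drug is represented by an embedding vector derived from term similarities over a biomedical corpus; the vectors, function names, and threshold below are illustrative assumptions, not the paper's trained model.

    ```python
    import numpy as np

    def cosine_similarity(u, v):
        """Similarity ratio between two drug embedding vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def check_prescription(new_drug_vec, current_drug_vecs, threshold=0.8):
        """Flag the new drug if it is too similar to any drug the patient
        already takes. The threshold is a placeholder, not from the paper."""
        ratios = [cosine_similarity(new_drug_vec, v) for v in current_drug_vecs]
        return all(r < threshold for r in ratios), ratios

    # Toy 3-d example (purely illustrative):
    # ok, ratios = check_prescription(np.array([1.0, 0.0, 0.0]),
    #                                 [np.array([0.9, 0.1, 0.0])])
    # -> ok is False, since the cosine similarity is about 0.99.
    ```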

    Automated legal sensemaking: the centrality of relevance and intentionality

    Introduction: In a perfect world, discovery would be conducted by the senior litigator who is responsible for developing and fully understanding every nuance of the client's legal strategy. Today, of course, we must deal with the explosion of electronically stored information (ESI), which is never less than tens of thousands of documents in small cases and increasingly involves multi-million-document populations for internal corporate investigations and litigation. Scalable processes and technologies are therefore required as a substitute for the authority's judgment. Typical approaches have either substituted large teams of surrogate human reviewers working from vastly simplified issue-coding reference materials, or employed increasingly sophisticated computational resources with little focus on the quality metrics needed to ensure retrieval consistent with the legal goal. What is required is a system (people, process, and technology) that replicates and automates the senior litigator's human judgment. In this paper we draw on 15 years of sensemaking research to establish the minimum acceptable basis for conducting a document review that meets the needs of a legal proceeding. There is no substitute for a rigorous characterization of the explicit and tacit goals of the senior litigator. Once a process has been established for capturing the authority's relevance criteria, we argue that a literal translation of requirements into technical specifications does not properly account for the activities or states of affairs of interest. Having only a data warehouse of written records, it is also necessary to discover the intentions of the actors involved in textual communications. We present quantitative results for a process and technology approach that automates effective legal sensemaking.
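    The abstract stresses quality metrics that ensure retrieval consistent with the legal goal. One common way to quantify this in document review, shown below as a sketch and not necessarily the authors' protocol, is sample-based estimation of a review's precision and recall against authoritative relevance judgments.

    ```python
    import random

    def estimate_review_quality(labels, predictions, sample_size=400, seed=0):
        """Estimate precision/recall of a review from a random validation
        sample. `labels` are authoritative relevance judgments and
        `predictions` the review's responsiveness calls, index-aligned;
        the sample size is an illustrative choice."""
        rng = random.Random(seed)
        idx = rng.sample(range(len(predictions)),
                         min(sample_size, len(predictions)))
        tp = sum(1 for i in idx if predictions[i] and labels[i])
        fp = sum(1 for i in idx if predictions[i] and not labels[i])
        fn = sum(1 for i in idx if not predictions[i] and labels[i])
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        return precision, recall
    ```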

    Profiling of OCR'ed Historical Texts Revisited

    In the absence of ground truth it is not possible to automatically determine the exact spectrum and frequency of OCR errors in an OCR'ed text. Yet, for interactive postcorrection of OCR'ed historical printings it is extremely useful to have a statistical profile available that estimates error classes with associated frequencies, and that points to conjectured errors and suspicious tokens. The method introduced in Reffle (2013) computes such a profile, combining lexica, pattern sets and advanced matching techniques in a specialized Expectation Maximization (EM) procedure. Here we improve this method in three respects. First, the method in Reffle (2013) is not adaptive: user feedback obtained during actual postcorrection cannot be used to compute refined profiles. We introduce a variant of the method that is open to adaptivity, taking the user's correction steps into account; this leads to higher precision in recognizing erroneous OCR tokens. Second, new historical patterns are often found during postcorrection. We show that adding these patterns to the linguistic background resources yields a second kind of improvement, enabling even higher precision by telling historical spellings apart from OCR errors. Third, the method in Reffle (2013) makes no active use of tokens that cannot be interpreted in the underlying channel model. We show that adding these uninterpretable tokens to the set of conjectured errors significantly improves recall for error detection, while also improving precision.
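    As a deliberately simplified sketch of the profiling idea: a token counts as interpretable if the lexicon explains it directly or after applying a known substitution pattern, and everything else joins the conjectured-error set, mirroring the third improvement above. The toy lexicon and (source, target) patterns stand in for the full EM machinery of Reffle (2013).

    ```python
    def build_profile(tokens, lexicon, patterns):
        """Toy error profile: lexicon lookup plus pattern-based matching.
        Applies each pattern with a blanket replace-all, for brevity only."""
        profile = {"interpretable": 0, "conjectured_errors": []}
        for tok in tokens:
            candidates = {tok} | {tok.replace(src, dst) for src, dst in patterns}
            if any(c in lexicon for c in candidates):
                profile["interpretable"] += 1
            else:
                profile["conjectured_errors"].append(tok)  # uninterpretable
        return profile

    # The OCR pattern "rn" -> "m" lets "rnouse" match lexicon entry "mouse";
    # "xqz7" is uninterpretable and becomes a conjectured error.
    print(build_profile(["rnouse", "text", "xqz7"],
                        {"mouse", "text"}, [("rn", "m")]))
    ```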

    Enabling Quality Control for Entity Resolution: A Human and Machine Cooperation Framework

    Even though many machine-based algorithms have been proposed for entity resolution, it remains very challenging to find a solution with quality guarantees. In this paper, we propose a novel HUman and Machine cOoperation (HUMO) framework for entity resolution (ER), which divides an ER workload between the machine and the human. HUMO enables a mechanism for quality control that can flexibly enforce both precision and recall levels. We introduce the optimization problem of HUMO, minimizing human cost given a quality requirement, and then present three optimization approaches: a conservative baseline based purely on the monotonicity assumption of precision, a more aggressive one based on sampling, and a hybrid one that takes advantage of the strengths of both. Finally, we demonstrate through extensive experiments on real and synthetic datasets that HUMO can achieve high-quality results with a reasonable return on investment (ROI) in terms of human cost, and that it performs considerably better than state-of-the-art alternatives in quality control.

    Comment: 12 pages, 11 figures. Camera-ready version of the paper submitted to ICDE 2018, in Proceedings of the 34th IEEE International Conference on Data Engineering (ICDE 2018).
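    A toy sketch of the workload division, assuming the machine exposes a match probability per record pair; the fixed thresholds below are illustrative stand-ins for the partitioning that HUMO actually optimizes.

    ```python
    def divide_workload(pairs, match_prob, low=0.2, high=0.9):
        """Route record pairs by machine confidence, in the spirit of HUMO:
        confident regions are labeled by the machine, and the uncertain
        middle region is sent to humans. In HUMO the boundaries would be
        chosen by its cost optimization, not fixed in advance."""
        machine_match, machine_unmatch, human = [], [], []
        for p in pairs:
            prob = match_prob(p)
            if prob >= high:
                machine_match.append(p)      # machine labels as a match
            elif prob <= low:
                machine_unmatch.append(p)    # machine labels as a non-match
            else:
                human.append(p)              # forwarded for human verification
        return machine_match, machine_unmatch, human
    ```

    Widening the human region raises quality at higher human cost; HUMO's optimization approaches search for the cheapest division that still meets the given precision and recall requirements.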