    Visual Re-ranking with Natural Language Understanding for Text Spotting

    Many scene text recognition approaches are based on purely visual information and ignore the semantic relation between scene and text. In this paper, we tackle this problem from a natural language processing perspective to fill the gap between language and vision. We propose a post-processing approach that improves scene text recognition accuracy by using the occurrence probabilities of words (a unigram language model) and the semantic correlation between scene and text. For this, we initially rely on an off-the-shelf deep neural network, already trained with a large amount of data, which provides a series of text hypotheses per input image. These hypotheses are then re-ranked using word frequencies and semantic relatedness with objects or scenes in the image. As a result of this combination, the performance of the original network is boosted at almost no additional cost. We validate our approach on the ICDAR'17 dataset.
    Comment: Accepted by ACCV 2018. arXiv admin note: substantial text overlap with arXiv:1810.0977
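
    The re-ranking step described above can be pictured as a linear rescoring of the recognizer's hypotheses. The sketch below is a minimal illustration under assumed ingredients: a visual confidence per hypothesis, a word-frequency table for the unigram prior, and a similarity function for semantic relatedness with detected objects. The weights and the exact combination are illustrative assumptions, not the paper's formulation.

    ```python
    from math import log

    def rerank(hypotheses, word_freq, scene_objects, similarity,
               alpha=0.6, beta=0.2, gamma=0.2, total_words=1_000_000):
        """hypotheses: list of (word, visual_score) pairs from the base recognizer."""
        rescored = []
        for word, visual_score in hypotheses:
            # Unigram prior: smoothed log relative frequency of the candidate word.
            lm_score = log(word_freq.get(word, 0) + 1) - log(total_words)
            # Semantic relatedness: best similarity to any detected object or scene label.
            sem_score = max((similarity(word, obj) for obj in scene_objects), default=0.0)
            rescored.append((alpha * visual_score + beta * lm_score + gamma * sem_score, word))
        # Highest combined score first; the top hypothesis is the corrected reading.
        return [word for _, word in sorted(rescored, reverse=True)]
    ```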

    Search Engine Similarity Analysis: A Combined Content and Rankings Approach

    How different are search engines? The search engine wars are a favorite topic of online analysts, as two of the biggest companies in the world, Google and Microsoft, battle for dominance of the web search space. Differences in search engine popularity can be explained by their effectiveness or by other factors, such as familiarity with the most popular engine, peer imitation, or force of habit. In this work we present a thorough analysis of the affinity of the two major search engines, Google and Bing, along with DuckDuckGo, which goes to great lengths to emphasize its privacy-friendly credentials. To do so, we collected search results using a comprehensive set of 300 unique queries for two time periods, in 2016 and 2019, and developed a new similarity metric that leverages both the content and the ranking of search responses. We evaluated the characteristics of the metric against other metrics and approaches proposed in the literature, and used it to investigate (1) the similarities of search engine results, (2) the evolution of their affinity over time, (3) which aspects of the results influence similarity, and (4) how the metric behaves across different kinds of search services. We found that Google stands apart, but Bing and DuckDuckGo are largely indistinguishable from each other.
    Comment: A shorter version of this paper was accepted at the 21st International Conference on Web Information Systems Engineering (WISE 2020). The final authenticated version is available online at https://doi.org/10.1007/978-3-030-62008-0_
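
    The combined content-and-ranking idea can be illustrated with a toy measure: two engines score higher the more results they share, with extra weight when a shared result sits at a similar rank. The weighting below is an assumption for illustration only, not the metric defined in the paper.

    ```python
    def ranked_similarity(results_a, results_b):
        """results_a, results_b: ordered lists of result URLs, best first."""
        k = max(len(results_a), len(results_b))
        if k == 0:
            return 0.0
        pos_b = {url: i for i, url in enumerate(results_b)}
        score = 0.0
        for i, url in enumerate(results_a):
            if url in pos_b:
                # Credit for shared content, discounted by how far the ranks disagree.
                score += 1.0 - abs(i - pos_b[url]) / k
        return score / k

    # Identical lists give 1.0; disjoint lists give 0.0.
    print(ranked_similarity(["a", "b", "c"], ["b", "a", "c"]))  # ~0.78
    ```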

    A Perspectival Mirror of the Elephant: Investigating Language Bias on Google, ChatGPT, Wikipedia, and YouTube

    Contrary to Google Search's mission of delivering information from "many angles so you can form your own understanding of the world," we find that Google and its most prominently returned results, Wikipedia and YouTube, simply reflect the narrow set of cultural stereotypes tied to the search language for complex topics like "Buddhism," "Liberalism," "colonization," "Iran," and "America." Simply stated, they present, to varying degrees, distinct information for the same search in different languages (we call this 'language bias'). Instead of presenting a global picture of a complex topic, our online searches turn us into the proverbial blind person touching a small portion of an elephant, ignorant of the existence of other cultural perspectives. The language we use to search ends up acting as a cultural filter that promotes ethnocentric views, in which a person evaluates other people or ideas based on their own culture. We also find that language bias is deeply embedded in ChatGPT. As it is primarily trained on English-language data, it presents the Anglo-American perspective as the normative view, reducing the complexity of a multifaceted issue to the single Anglo-American standard. In this paper, we present evidence and analysis of language bias and discuss its larger social implications. Toward the end of the paper, we propose a potential framework that uses automatic translation to leverage language bias, and argue that the task of piecing together a genuine depiction of the elephant is a challenging and important endeavor that deserves a new area of research in NLP and requires collaboration with scholars from the humanities to create ethically sound and socially responsible technology together.
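
    As a purely speculative sketch of the translation-based framework the abstract alludes to, one could issue the same query in several languages and translate the retrieved snippets back into a common language, so that the different cultural perspectives can be compared side by side. The translate and search functions below are hypothetical placeholders for a machine translation service and a search API; nothing here is prescribed by the paper.

    ```python
    def multilingual_views(query, languages, translate, search, top_k=3):
        """Collect top results for the same query across languages, translated back to English."""
        views = {}
        for lang in languages:
            # translate() and search() are hypothetical stand-ins for real services.
            localized_query = translate(query, source="en", target=lang)
            results = search(localized_query, language=lang)[:top_k]
            views[lang] = [translate(snippet, source=lang, target="en") for snippet in results]
        return views
    ```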

    Effect of heuristics on serendipity in path-based storytelling with linked data

    Path-based storytelling with Linked Data on the Web gives users the ability to discover concepts in an entertaining and educational way. Given a query context, many state-of-the-art pathfinding approaches aim at telling a story that coincides with the user's expectations by investigating paths over Linked Data on the Web. By taking serendipity in storytelling into account, we aim to improve and tailor existing approaches to better fit user expectations, so that users are able to discover interesting knowledge without feeling unsure or even lost in the story facts. To this end, we propose to optimize both the estimation of links between facts and the selection of facts in a story, by increasing the consistency and relevancy of links between facts through additional domain delineation and refinement steps. In order to address multiple aspects of serendipity, we propose and investigate combinations of weights and heuristics in the paths that form the essential building blocks of each story. Our experimental findings with stories based on DBpedia indicate improvements when applying the optimized algorithm.
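
    One way to picture this kind of heuristic-weighted path selection is as a cheapest-path search in which edge costs combine several heuristics, here a consistency score for the link between two facts and a relevancy score for the next fact. The weights and heuristics below are illustrative placeholders, not the ones evaluated in the paper.

    ```python
    import heapq

    def story_path(graph, start, goal, consistency, relevancy, w_cons=0.5, w_rel=0.5):
        """graph: dict mapping each concept to its neighbouring concepts."""
        frontier = [(0.0, start, [start])]
        visited = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path  # cheapest (most consistent and relevant) story skeleton
            if node in visited:
                continue
            visited.add(node)
            for nxt in graph.get(node, []):
                # Cheaper edges correspond to more consistent and more relevant links.
                edge_cost = w_cons * (1 - consistency(node, nxt)) + w_rel * (1 - relevancy(nxt))
                heapq.heappush(frontier, (cost + edge_cost, nxt, path + [nxt]))
        return None
    ```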

    A Comparison of Source Distribution and Result Overlap in Web Search Engines

    When it comes to search engines, users generally prefer Google. Our study aims to identify the differences between the results found on Google and those found on other search engines. We compared the top 10 results from Google, Bing, DuckDuckGo, and Metager, using 3,537 queries generated from Google Trends for Germany and the US. Google displays more unique domains in the top results than its competitors. Wikipedia and news websites are the most popular sources overall. With a few top sources dominating search results, the distribution of domains is also consistent across all search engines. The overlap between Google and Bing is always under 32%, while Metager has a higher overlap with Bing than DuckDuckGo does, reaching up to 78%. This study shows that using another search engine, especially in addition to Google, provides a wider variety of sources and might lead the user to new perspectives.
    Comment: Submitted to the 85th Annual Meeting of the Association for Information Science & Technology and to be published in the conference proceedings.
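
    The overlap numbers reported above can be reproduced in spirit with a small routine that normalizes each result to its domain and compares the top-10 sets of two engines. The netloc-based domain extraction and the Jaccard-style ratio are assumptions for illustration; the study's exact normalization and overlap definition may differ.

    ```python
    from urllib.parse import urlparse

    def domain(url):
        # Crude normalization: hostname without a leading "www." prefix.
        return urlparse(url).netloc.lower().removeprefix("www.")

    def top10_overlap(results_a, results_b):
        a = {domain(u) for u in results_a[:10]}
        b = {domain(u) for u in results_b[:10]}
        return len(a & b) / max(len(a | b), 1)

    print(top10_overlap(
        ["https://en.wikipedia.org/wiki/X", "https://www.nytimes.com/a"],
        ["https://www.nytimes.com/b", "https://www.bbc.com/c"],
    ))  # one shared domain out of three -> ~0.33
    ```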