
    Toward Entity-Aware Search

    As the Web has evolved into a data-rich repository, current search engines, built around the standard "page view," are becoming increasingly inadequate for a wide range of query tasks. While we often search for various data "entities" (e.g., a phone number, a paper PDF, a date), today's engines only take us indirectly to pages. In my Ph.D. study, we focus on a novel type of Web search that is aware of data entities inside pages, a significant departure from traditional document retrieval. We study the essential aspects of supporting entity-aware Web search. To begin with, we tackle the core challenge of ranking entities by distilling its underlying conceptual model, the Impression Model, and developing a probabilistic ranking framework, EntityRank, that seamlessly integrates both local and global information in ranking. We also report a prototype system built to show the initial promise of the proposal. Then, we aim at distilling and abstracting the essential computation requirements of entity search. From the dual views of reasoning, entity as input and entity as output, we propose a dual-inversion framework, with two indexing and partition schemes, toward efficient and scalable query processing. Further, to recognize more entity instances, we study the problem of entity synonym discovery through mining query log data. The results obtained so far show clear promise for entity-aware search in its usefulness, effectiveness, efficiency, and scalability.
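    The abstract names a probabilistic framework, EntityRank, that combines local (within-page) and global (across-page) evidence when ranking candidate entities. As a rough illustration of that idea only, the sketch below scores entities by keyword proximity weighted by page popularity; the scoring functions, weights, and data are assumptions made for the example, not the paper's actual model.

```python
# A minimal sketch of entity ranking combining local and global evidence,
# in the spirit of the Impression Model / EntityRank described above.
# All formulas and numbers here are illustrative assumptions.
import math
from collections import defaultdict

def local_score(entity_pos, keyword_pos):
    """Local evidence: proximity of an entity to a query keyword in a page."""
    return 1.0 / (1 + abs(entity_pos - keyword_pos))

def rank_entities(occurrences, page_popularity):
    """occurrences: list of (entity, page, entity_pos, keyword_pos) tuples."""
    scores = defaultdict(float)
    for entity, page, e_pos, k_pos in occurrences:
        # Global evidence: weight each occurrence by its page's
        # (log-damped) popularity, then aggregate across pages.
        scores[entity] += local_score(e_pos, k_pos) * math.log1p(page_popularity[page])
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: two candidate phone numbers for a query like
# "amazon customer service #phone" (hypothetical data).
occs = [
    ("800-201-7575", "p1", 12, 10),
    ("800-201-7575", "p2", 40, 38),
    ("206-266-1000", "p3", 5, 50),
]
pop = {"p1": 120, "p2": 80, "p3": 300}
print(rank_entities(occs, pop))
```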

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS think tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    Social and Semantic Web Technologies for the Text-To-Knowledge Translation Process in Biomedicine

    Currently, biomedical research critically depends on knowledge availability for flexible re-analysis and integrative post-processing. The voluminous biological data already stored in databases, together with the abundant molecular data resulting from the rapid adoption of high-throughput techniques, have shown the potential to generate new biomedical discoveries through integration with knowledge from the scientific literature. Reliable information extraction applications have been a long-sought goal of the biomedical text mining community. Both named entity recognition and conceptual analysis are needed in order to map the objects and concepts represented by natural language texts into a rigorous encoding, with direct links to online resources that explicitly expose those concepts' semantics (see Figure 1). (Supported by projects P08-TIC-4299 of J. A. Sevilla and TIN2009-13489 of DGICT, Madrid.)
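    As a loose illustration of the two steps this abstract names, the sketch below performs dictionary-based named entity recognition and maps each mention to a concept identifier; the tiny lexicon and the example.org URIs are invented for the example and do not come from the paper.

```python
# Illustrative dictionary-based NER with concept linking.
# The lexicon entries and URIs below are made up for the example.
LEXICON = {
    "brca1": ("Gene", "https://example.org/gene/BRCA1"),
    "breast cancer": ("Disease", "https://example.org/mesh/D001943"),
}

def recognize(text):
    """Return (mention, entity type, concept URI) triples found in the text."""
    found = []
    lowered = text.lower()
    for term, (etype, uri) in LEXICON.items():
        if term in lowered:
            found.append((term, etype, uri))
    return found

print(recognize("Mutations in BRCA1 are associated with breast cancer."))
```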

    Synote: weaving media fragments and linked data

    While end users can easily share and tag multimedia resources online, searching and reusing content inside multimedia, such as a certain area within an image or a ten-minute segment within a one-hour video, is still difficult. Linked data is a promising way to interlink media fragments with other resources. Many Web 2.0 applications have generated large amounts of external annotations linked to media fragments. In this paper, we use Synote as the target application to discuss how media fragments can be published together with external annotations following linked data principles. Our design addresses the dereferencing, description, and interlinking problems in interlinking multimedia. We also implement a model that lets Google index media fragments, which improves their online presence. The evaluation shows that our design can successfully publish media fragments and annotations for both Semantic Web agents and traditional search engines. Publishing media fragments using the design described in this paper will lead to better indexing of multimedia resources and, consequently, better findability.
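    To make the publishing idea concrete, here is a minimal sketch that attaches an external annotation to a W3C Media Fragments URI (the #t=start,end temporal syntax) using rdflib and the Web Annotation vocabulary; the URIs and the choice of vocabulary are illustrative assumptions, not Synote's actual design.

```python
# Publishing a media fragment plus an external annotation as linked data.
# example.org URIs are placeholders invented for this sketch.
from rdflib import Graph, URIRef, Literal, Namespace
from rdflib.namespace import RDF

OA = Namespace("http://www.w3.org/ns/oa#")  # W3C Web Annotation vocabulary

video = URIRef("http://example.org/video/lecture1")
# Media fragment URI: the segment from 10 to 20 minutes of the video.
fragment = URIRef("http://example.org/video/lecture1#t=600,1200")
annotation = URIRef("http://example.org/annotation/42")

g = Graph()
g.bind("oa", OA)
g.add((annotation, RDF.type, OA.Annotation))
g.add((annotation, OA.hasTarget, fragment))                       # what is annotated
g.add((annotation, OA.hasBody, Literal("Intro to linked data")))  # the tag text

print(g.serialize(format="turtle"))
```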

    Microdata Deduplication with Spark

    The Web is transforming from a traditional web of documents into a web of data, where information is presented in a way that is readable by machines as well as humans. As part of this transformation, every day more and more websites embed structured data, e.g., descriptions of products, people, organizations, and places, into their HTML pages. Different encoding vocabularies, such as RDFa, microdata, and microformats, are used to embed this structured data. Microdata is the most recent addition to these standards, yet it has gained more popularity than the other formats in less time. Similar progress has been made in extracting structured data from web pages, resulting in open-source tools such as Apache Any23 and the non-profit Common Crawl project: Any23 allows extraction of microdata from web pages with little effort, whereas Common Crawl harvests data from websites and provides it publicly for download. These extraction tools, however, only take care of the parsing and data transformation steps of data cleansing. Although microdata can be easily extracted with such state-of-the-art tools, before the extracted data can be used in potential applications, duplicates should be removed and the data unified. Since microdata originates from arbitrary web resources, it has arbitrary quality as well and should be treated accordingly.

    The main purpose of this thesis is to develop an effective mechanism for deduplication of microdata at web scale. Although deduplication algorithms have reached relative maturity, they still need to be fine-tuned for specific datasets. In particular, the most suitable length of the sorting key in the sort-based deduplication approach must be identified. The present work identifies the optimal sorting-key length in the context of deduplicating product microdata extracted from the web. Due to the large volumes of data to be processed continuously, Apache Spark is used to implement the necessary procedures as distributed algorithms.
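    As a rough sketch of the sorted-neighborhood idea whose key length the thesis tunes, the PySpark snippet below builds a sorting key from a normalized title prefix, sorts the records, and compares each record with its neighbors inside a sliding window; the key length, window size, similarity measure, and toy records are illustrative assumptions, not the thesis's actual configuration.

```python
# Sorted-neighborhood deduplication sketch in PySpark (illustrative values).
from pyspark.sql import SparkSession
from difflib import SequenceMatcher

KEY_LEN = 8      # sorting-key length: the parameter the thesis tunes
WINDOW = 5       # sliding-window size over the sorted records
THRESHOLD = 0.8  # similarity above which two records are flagged as duplicates

spark = SparkSession.builder.appName("microdata-dedup").getOrCreate()
sc = spark.sparkContext

# Toy product records (id, title); real input would be extracted microdata.
records = sc.parallelize([
    (1, "Apple iPhone 5 16GB Black"),
    (2, "Apple iPhone5 16 GB black"),
    (3, "Samsung Galaxy S4 White"),
])

# 1. Build a sorting key: normalized title prefix of length KEY_LEN.
keyed = records.map(lambda r: (r[1].lower().replace(" ", "")[:KEY_LEN], r))

# 2. Sort by key and attach each record's position in the sorted order.
indexed = keyed.sortByKey().values().zipWithIndex().map(lambda x: (x[1], x[0]))

# 3. Pair each record with the WINDOW-1 records that precede it,
#    by joining the index RDD against shifted copies of itself.
shifted = sc.union([
    indexed.map(lambda x, d=d: (x[0] + d, x[1])) for d in range(1, WINDOW)
])
pairs = indexed.join(shifted).values()

# 4. Keep pairs whose titles are similar enough to count as duplicates.
dupes = pairs.filter(
    lambda p: SequenceMatcher(None, p[0][1], p[1][1]).ratio() >= THRESHOLD
)
print(dupes.collect())
```

    The design point worth noting is the trade-off the key length controls: a shorter key puts more records into the same neighborhood (higher recall, more comparisons), while a longer key separates them (fewer comparisons, risk of missed duplicates).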

    An Introduction to a Meta-meta-search Engine

    Imagine that all the information in the entire world, written in every known language, and every graphic image, video clip, or photograph copied digitally, was available at your fingertips. This vast amount of data could be reduced to digital data packets and stored in minuscule form on computer hard drives that are all connected through other, centrally located larger machines called servers. However, searching for data in a vast system of interconnected computers is virtually impossible using human faculties alone and is a far more intricate process than perusing book titles using the library's Dewey Decimal System. In order to find five or six pieces of information out of a global network of servers, individuals can explore the advantages of meta-search engines, which understand the language of each computer on the network and can quickly query global databases to respond to user inquiries based on certain keywords or phrases. The advantage of meta-search engines is that they are able to talk to other search engines that contain relevant data. The language they speak is HTML, or hypertext markup language, a set of electronic codes that enables computers to read, translate, transmit, and store data accessible to the entire world. Every Web page written in HTML can use meta tags, directives to client computers describing the kind of document stored. By reading meta tags, search engines are able to electronically skim through vast databases to select data that match a user's inquiry. However, existing meta-search engines still struggle to provide accurate results that match user queries, due to the extremely fast growth and complexity of the information stored on Web servers. This thesis proposes a new algorithm that re-ranks the Web search results from some of the best existing meta-search engines. This algorithm can be implemented to form a meta-meta-search engine. As a result, the new search engine will be capable of producing a more reliable ranked list with higher accuracy than existing search engines and meta-search engines.
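    The abstract does not specify the re-ranking algorithm, so the sketch below uses Borda-count rank aggregation, one standard way to merge ranked lists from several engines, purely as an illustration of what a meta-meta-search re-ranker might do.

```python
# Borda-count re-ranking over result lists from several meta-search engines.
# The aggregation scheme and data are illustrative, not the thesis's method.
from collections import defaultdict

def rerank(result_lists):
    """Merge several best-first URL lists into one aggregate ranking.

    A URL earns more points the higher it appears in a list, and its
    points from all engines are summed (classic Borda count).
    """
    scores = defaultdict(float)
    for results in result_lists:
        n = len(results)
        for rank, url in enumerate(results):
            scores[url] += n - rank  # top result gets n points, last gets 1
    return sorted(scores, key=scores.get, reverse=True)

# Example: three engines return overlapping result lists.
engines = [
    ["a.com", "b.com", "c.com"],
    ["b.com", "a.com", "d.com"],
    ["a.com", "d.com", "b.com"],
]
print(rerank(engines))  # ['a.com', 'b.com', 'd.com', 'c.com']
```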