408 research outputs found

    The Present and Future of Internet Search

    Search engines were crucial in the development of the World Wide Web. Web-based information retrieval has progressed from simple word matching to sophisticated algorithms for maximizing the relevance of search results. Statistical and graph-based approaches to indexing and ranking pages, natural language processing techniques for improving query results, and intelligent agents for personalizing the search process all show great promise for enhanced performance. The evolution in search technology was accompanied by growing economic pressures on search engine companies. Unable to sustain long-term viability from advertising revenues, many of the original search engines diversified into portals that farm out their search and directory operations. Vertical portals that serve focused user communities also outsource their search services, and even directory providers have begun to integrate search engine technologies from outside vendors. This article brings order to the chaos resulting from the variety of search tools offered under various marketing guises. While growing reliance on a small set of search providers is leading to less diversity among search services, users can expect individualized searching experiences that factor in personal information. The convergence of technology and business models also results in more narrowly defined search spaces, which will reduce the quantity of search results while improving their quality.
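The graph-based ranking mentioned above is exemplified by PageRank; a minimal sketch of the power-iteration idea (the toy link graph and damping factor are illustrative assumptions, not taken from the article) could look like this:

```python
# Minimal PageRank power iteration over a toy link graph.
# Graph and damping factor are illustrative, not from the article.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```

Here page "c" is linked to by both "a" and "b", so it ends up ranked above "b", which receives only one inbound link.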

    An Overlay Architecture for Personalized Object Access and Sharing in a Peer-to-Peer Environment

    Due to its exponential growth and decentralized nature, the Internet has evolved into a chaotic repository, making it difficult for users to discover and access resources of interest to them. As a result, users have to deal with the problem of information overload. The emergence of the Semantic Web gives Internet users the ability to associate explicit, self-described semantics with resources. This ability will in turn facilitate the development of ontology-based resource discovery tools that help users retrieve information efficiently. However, it is widely believed that the Semantic Web of the future will be a complex web of smaller ontologies, mostly created by various groups of web users who share a similar interest, referred to as Communities of Interest. This thesis proposes a solution to the information overload problem using a user-driven framework, referred to as a Personalized Web, that allows individual users to organize themselves into Communities of Interest based on ontologies agreed upon by all community members. Within this framework, users can define and augment their personalized views of the Internet by associating specific properties and attributes with resources and by defining constraint functions and rules that govern the interpretation of the semantics associated with those resources. Such views can then be used to capture the user's interests and be integrated into a user-defined Personalized Web. As a proof of concept, a Personalized Web architecture is developed that employs ontology-based semantics and a structured Peer-to-Peer overlay network to provide a foundation for semantics-based resource indexing and advertising.
To investigate mechanisms that support resource advertising and retrieval in the Personalized Web architecture, three agent-driven advertising and retrieval schemes (the Aggressive scheme, the Crawler-based scheme, and the Minimum-Cover-Rule scheme) were implemented and evaluated in both stable and churn environments. In addition to the development of a Personalized Web architecture that deals with typical web resources, this thesis used a case study to explore the potential of the architecture to support future web service workflow applications. The results of this investigation demonstrate that the architecture can support the automation of service discovery, negotiation, and invocation, allowing service consumers to realize a personalized web service workflow. Further investigation will be required to improve the performance of this automation and to perform it in a secure and robust manner. To support the next-generation Internet, further exploration will be needed to develop a Personalized Web that includes ubiquitous and pervasive resources.
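The structured Peer-to-Peer overlay underlying such an architecture can be sketched as a Chord-like identifier ring, where a resource's semantic key is hashed and stored at its successor node. The node names, key format, and API below are illustrative assumptions, not the thesis's actual implementation:

```python
# Sketch of DHT-style resource advertising on a structured overlay:
# each resource key is hashed onto an identifier ring and stored at
# the first node whose id is >= the key (its successor), as in
# Chord-like systems. Node names and keys are illustrative.

import hashlib
from bisect import bisect_left

def ring_id(name, bits=16):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

class Overlay:
    def __init__(self, node_names):
        # sorted identifier ring of (id, name) pairs
        self.ring = sorted((ring_id(n), n) for n in node_names)
        self.store = {name: {} for name in node_names}

    def _successor(self, key_id):
        ids = [i for i, _ in self.ring]
        pos = bisect_left(ids, key_id) % len(self.ring)  # wrap around
        return self.ring[pos][1]

    def advertise(self, semantic_key, resource):
        node = self._successor(ring_id(semantic_key))
        self.store[node].setdefault(semantic_key, []).append(resource)
        return node

    def retrieve(self, semantic_key):
        node = self._successor(ring_id(semantic_key))
        return self.store[node].get(semantic_key, [])

overlay = Overlay(["peer1", "peer2", "peer3"])
overlay.advertise("ontology:music/jazz", "http://example.org/jazz-page")
results = overlay.retrieve("ontology:music/jazz")
```

Because advertising and retrieval hash the same key, both resolve to the same node without any global index, which is what makes the scheme decentralized.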

    Personalization of news web portal content using information extraction techniques and weighted Voronoi diagrams

    News web portals present information covering all aspects of our daily lives in a predefined topic taxonomy, in both multimedia and textual formats. The information presented has a high refresh rate and as such offers both a local and a global snapshot of the world. This thesis presents information extraction techniques (applied to web news portals) and their use in the standardization of categorization schemes and the automatic classification of newly published content. Weighted Voronoi diagrams are proposed as the personalization method. The aim of the study is to create a virtual profile, at the individual level, based on the semantic value of the information in visited nodes (web pages formatted in HTML). The results can greatly contribute to the applicability of personalization data to specific information sources, including various web news portals. In addition, a publicly available collection of prepared data enables future research in this domain. The scientific contribution of this doctoral thesis is therefore threefold: a universal classification scheme based on the ODP taxonomy data is developed; a way of extracting information about user preferences, based on the analysis of user behaviour while using the web browser, is defined; and a personalization system based on weighted Voronoi diagrams is implemented.
One way to address the problems caused by the overproduction of information is to personalize information sources, in our case the WWW environment, by creating virtual profiles based on an analysis of users' behavioural characteristics, with the goal of grading the importance of information on an individual basis. Personalization has so far been applied mostly in the field of information retrieval. A review of previous research shows several approaches used to personalize available content: ontology-based approaches, contextual models, and data mining. These approaches dominate the reviewed literature. The literature analysis also revealed the lack of a uniform taxonomy of terms used to annotate information nodes. The prevailing annotation approach is a tagging system based on user input. The reviewed studies indicate that, for popular annotations, users on different systems attach the same tags to the same and/or similar objects; that the synonym problem exists but is negligible given a sufficient amount of data; and that the annotations used by ordinary users and by domain experts overlap in 52% of cases. These findings point to the lack of a unified system for labelling information nodes. Tagging systems carry a large amount of "information noise" because tagging is individual in nature and directly tied to the user's knowledge of the node's domain. As a potential remedy for this shortcoming, the use of existing taxonomies defined by web directories is proposed. Of the several available web directories, the literature most often cites the ODP web directory as the highest-quality taxonomy for the hierarchical domain categorization of information nodes. The use of ODP as a taxonomy is noted in several papers studied during the preliminary research. Classifying information nodes with the ODP taxonomy makes it possible to determine their domain membership, which in turn allows a membership value to be assigned to an information node for each domain. Given the complex structure of the ODP taxonomy (12 hierarchical levels, 17 top-level categories) and the large number of potential categories, the thesis proposes using the ODP taxonomy to classify an information node down to level 6. Beyond this guideline on the number of hierarchical levels to use when analysing the ODP structure, the need for deep classification of documents is also highlighted.
The literature analysis further shows that personalization has been addressed primarily in the domain of information retrieval through WWW interfaces, while the personalization of information available through web portals remains under-explored. The numerous papers consulted in the preliminary research phase draw on a variety of data sources: server log files, personal browsing history from browser log files, applications that track the user's interaction with the system, cookies, and others. Data collected from one or more of these sources provide insight into an individual user's movement within a defined information space and time frame. In the reviewed literature such data are used for personalization, but not at the individual level; instead, users are grouped into thematically similar groups. The aim of this work is to test existing methods recognized as useful for further work and to enhance them with weighted Voronoi diagrams in order to achieve personalization at the individual level. The use of weighted Voronoi diagrams has not previously been reported in the literature and thus represents an innovation in the field of information personalization. Works that focus on recognizing usage patterns of information nodes, of which there are too many to list, will also help in this process. The existence of a behavioural pattern tied to long-term and/or short-term data about the user's movement through the information space enables better filtering and personalization of the available information. Since the goal of this work is to demonstrate the possibility of individual personalization, the potential of weighted Voronoi diagrams for building a virtual semantic profile and for personalizing information has been recognized.
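The weighted Voronoi idea can be illustrated with a minimal sketch: each topic "site" in a feature space carries a weight derived from the user's profile, and a visited page is assigned to the site that minimizes distance divided by weight (a multiplicatively weighted Voronoi assignment). The sites, weights, and coordinates are illustrative assumptions, not the thesis's actual profile model:

```python
# Multiplicatively weighted Voronoi assignment: a point belongs to the
# site with the smallest distance-to-weight ratio, so a higher weight
# (stronger user interest) pulls the region boundary outward.
# Sites, weights and points are toy values.

import math

def assign(point, sites):
    """sites: dict name -> ((x, y), weight)."""
    def score(item):
        (x, y), w = item[1]
        return math.dist(point, (x, y)) / w
    return min(sites.items(), key=score)[0]

profile_sites = {
    "sports":   ((0.0, 0.0), 3.0),   # strong interest
    "politics": ((4.0, 0.0), 1.0),   # weak interest
}
topic = assign((2.5, 0.0), profile_sites)
```

With these toy weights the point (2.5, 0.0), though geometrically closer to "politics", is assigned to "sports" because the stronger interest enlarges that region.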

    A Survey on Important Aspects of Information Retrieval

    Information retrieval has become an important field of study and research within computer science due to the explosive growth of information available in the form of full text, hypertext, administrative text, directories, and numeric or bibliographic data. Research is ongoing on various aspects of information retrieval systems in order to improve their efficiency and reliability. This paper presents a comprehensive survey discussing not only the emergence and evolution of information retrieval but also different information retrieval models and important aspects such as document representation, similarity measures and query expansion.
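The document representation and similarity measure the survey discusses are classically realized by the TF-IDF vector space model with cosine similarity; a minimal sketch over a toy corpus:

```python
# Minimal TF-IDF vector-space model with cosine similarity.
# The corpus is a toy illustration.

import math
from collections import Counter

def tfidf_vectors(docs):
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        # weight = term frequency * inverse document frequency
        vecs.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["search engines rank pages",
        "engines index pages",
        "cats chase mice"]
vecs = tfidf_vectors(docs)
sim01 = cosine(vecs[0], vecs[1])   # shares "engines", "pages"
sim02 = cosine(vecs[0], vecs[2])   # shares nothing
```

Documents sharing weighted terms score above zero, while documents with disjoint vocabularies score exactly zero, which is the behaviour query expansion then tries to soften.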

    Multi-Agent User-Centric Specialization and Collaboration for Information Retrieval

    The amount of information on the World Wide Web (WWW) is growing rapidly in both volume and topic diversity. This has made it increasingly difficult, and often frustrating, for information seekers to retrieve the content they are looking for, because information retrieval systems (e.g., search engines) cannot judge how relevant the retrieved information is to the seeker's actual need. This issue can be decomposed into two aspects: 1) variability of information relevance across information seekers. Different information seekers may enter the same search text, or keywords, but expect completely different results. It is therefore imperative that information retrieval systems be able to incorporate a model of the information seeker in order to estimate the relevance and context of use of information before presenting results. In this context, by a model we mean the capture of trends in the information seeker's search behaviour; this is what many researchers refer to as personalized search. 2) Information diversity. Information available on the World Wide Web today spans multitudes of inherently overlapping topics, and it is difficult for any information retrieval system to decide effectively on the relevance of the information retrieved in response to a seeker's query. For example, an information seeker who wishes to use the WWW to learn about a cure for a certain illness would receive a more relevant answer if the search engine were specialized in the corresponding topic domain. This is what is referred to in WWW nomenclature as 'specialized search'. This thesis maintains that the information seeker's search is not completely random and therefore tends to exhibit consistent patterns of behaviour. Nonetheless, this behaviour, despite being consistent, can be quite complex to capture.
To accomplish this goal the thesis proposes a Multi-Agent Personalized Information Retrieval with Specialization Ontology (MAPIRSO). MAPIRSO offers a complete learning framework that is able to model the end user's search behaviour and interests and to organize information into categorized domains so as to ensure maximum relevance of its responses to the end user's queries. Specialization and personalization are accomplished using a group of collaborative agents. Each agent employs a Reinforcement Learning (RL) strategy to capture the end user's behaviour and interests. Reinforcement learning allows the agents to evolve their knowledge of the end user's behaviour and interests as they serve him or her. Furthermore, RL allows each agent to adapt to changes in an end user's behaviour and interests. Specialization is the process by which new information domains are created based on existing information topics, allowing new kinds of content to be built exclusively for information seekers. One of the key characteristics of specialization domains is that they are seeker-centric, allowing intelligent agents to create new information based on information seekers' feedback and behaviour. Specialized domains are created by intelligent agents that collect information on a specific domain topic. The task of these specialized agents is to map the user's query to a repository of specific domains in order to present users with relevant information. Mapping users' queries to only relevant information is one of the fundamental challenges in Artificial Intelligence (AI) and machine learning research. Our approach employs intelligent cooperative agents that specialize in building personalized ontology information domains that pertain to each information seeker's specific needs.
Specializing and categorizing information into unique domains is a challenge that has been addressed before, and various proposed solutions have been evaluated and adopted to cope with growing information. However, categorizing information into unique domains does not satisfy each individual information seeker. Information seekers may search for similar topics, but each has different interests. For example, medical information in a specific medical domain has different importance to a doctor and to a patient. The thesis presents a novel solution that deals with growing and diverse information by building seeker-centric specialized information domains that are personalized through the information seekers' feedback and behaviour. To address this challenge, the research examines the fundamental components that constitute the specialized agent: an intelligent machine learning system, user input queries, an intelligent agent, and information resources constructed through specialized domains. Experimental work is reported to demonstrate the efficiency of the proposed solution in addressing overlapping information growth. The experimental work uses extensive user-centric specialized domain topics and employs personalized, collaborative multi-learning agents and ontology techniques, thereby enriching the user's queries and domains. The experiments show that building specialized ontology domains pertinent to the information seekers' needs is more precise and efficient than other information retrieval applications and existing search engines.
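As a rough illustration of how an agent could use a reinforcement-learning strategy to learn a seeker's preferred domain from click feedback, here is an epsilon-greedy bandit sketch. The domains and simulated rewards are illustrative assumptions, not MAPIRSO's actual algorithm:

```python
# Epsilon-greedy bandit: the agent balances exploring domains with
# exploiting the one whose estimated click-through value is highest,
# updating estimates incrementally from feedback.

import random

class DomainBandit:
    def __init__(self, domains, epsilon=0.1):
        self.epsilon = epsilon
        self.value = {d: 0.0 for d in domains}   # estimated preference
        self.count = {d: 0 for d in domains}

    def choose(self):
        if random.random() < self.epsilon:            # explore
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)    # exploit

    def update(self, domain, reward):
        """reward: 1.0 if the seeker clicked a result, else 0.0."""
        self.count[domain] += 1
        # incremental mean keeps the estimate adaptive
        self.value[domain] += (reward - self.value[domain]) / self.count[domain]

random.seed(0)
agent = DomainBandit(["medicine", "finance", "sports"])
for _ in range(300):
    d = agent.choose()
    clicked = 1.0 if d == "medicine" else 0.0   # simulated seeker
    agent.update(d, clicked)
best = max(agent.value, key=agent.value.get)
```

Because only "medicine" ever yields a click in this simulation, the agent's value estimates converge to prefer it, which is the adaptive behaviour the abstract attributes to RL-driven agents.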

    Professional Search in Pharmaceutical Research

    In the mid-90s, visiting libraries, as a means of retrieving the latest literature, was still a common necessity among professionals. Nowadays, professionals simply access information by 'googling'. Indeed, the name of the Web search engine market leader, "Google", has become a synonym for searching and retrieving information. Despite the increased popularity of search as a method for retrieving relevant information, search engines at the workplace still do not deliver satisfying results to professionals. Search engines, for instance, ignore that the relevance of answers (the satisfaction of a searcher's needs) depends not only on the query (the information request) and the document corpus, but also on the working context (the user's personal needs, education, etc.). In effect, an answer that is appropriate for one user might not be appropriate for another, even though the query and the document corpus are the same for both. Personalization services that address this context are therefore becoming more and more popular and are an active field of research. This is only one of several challenges encountered in 'professional search': How can the working context of the searcher be incorporated into the ranking process? How can unstructured free-text documents be enriched with semantic information so that the information need can be expressed precisely at query time? How, and to what extent, can a company's knowledge be exploited for search purposes? How should data from distributed sources be accessed through a single entry point? This thesis is devoted to 'professional search', i.e. search at the workplace, especially in industrial research and development. We contribute by compiling and developing several approaches to facing the challenges mentioned above.
The approaches are implemented in the prototype YASA (Your Adaptive Search Agent), which provides meta-search, adaptive ranking of search results, and guided navigation, and which uses domain knowledge to drive the search process. YASA is deployed in the pharmaceutical research department of Roche in Penzberg, a major pharmaceutical company, where the applied methods were empirically evaluated. Being confronted with mostly unstructured free-text documents and having barely any explicit metadata at hand, we faced a serious challenge: incorporating semantics (i.e. formal knowledge representation) into the search process can only be as good as the underlying data. Nonetheless, we are able to demonstrate that this issue can be largely compensated for by incorporating automatic metadata extraction techniques. The metadata we were able to extract automatically was not perfectly accurate, nor did the ontology we applied contain particularly rich semantics. Nonetheless, our results show that even the little semantics incorporated into the search process suffices to achieve a significant improvement in search and retrieval. We thus contribute to the research field of context-based search by incorporating the working context into the search process, an area that has not yet been well studied.
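Automatic metadata extraction of the kind the thesis relies on can be approximated, at its simplest, with pattern matching over free text plus a small controlled vocabulary. The field patterns and vocabulary below are hypothetical, not YASA's actual extractors:

```python
# Sketch of lightweight metadata extraction from free-text documents:
# regular expressions pull out simple fields, and a small controlled
# vocabulary tags department mentions. Patterns and vocabulary are
# illustrative assumptions.

import re

DEPARTMENTS = {"chemistry", "biology", "pharmacology"}  # hypothetical

def extract_metadata(text):
    meta = {}
    m = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", text)          # ISO date
    if m:
        meta["date"] = m.group(1)
    m = re.search(r"^Author:\s*(.+)$", text, re.MULTILINE)   # header field
    if m:
        meta["author"] = m.group(1).strip()
    found = {w for w in re.findall(r"[a-z]+", text.lower())
             if w in DEPARTMENTS}
    if found:
        meta["departments"] = sorted(found)
    return meta

doc = """Author: J. Doe
Report of the pharmacology screening run on 2009-03-17,
in cooperation with the biology group."""
meta = extract_metadata(doc)
```

Even such imperfect, rule-based metadata gives a ranker contextual signals (date, author, department) that plain full-text matching lacks, which is the point the abstract makes about "little semantics" sufficing.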

    Multi modal multi-semantic image retrieval

    The rapid growth in the volume of visual information, e.g. images and video, can overwhelm users' ability to find and access the specific visual information of interest to them. In recent years, ontology knowledge-based (KB) image information retrieval techniques have been adopted in an attempt to extract knowledge from these images and enhance retrieval performance. A KB framework is presented to promote semi-automatic annotation and semantic image retrieval using multimodal cues (visual features and text captions). In addition, a hierarchical structure for the KB allows metadata to be shared and supports multiple semantics (polysemy) for concepts. The framework builds up an effective knowledge base pertaining to a domain-specific image collection, e.g. sports, and is able to disambiguate and assign high-level semantics to 'unannotated' images. Local feature analysis of visual content, namely Scale Invariant Feature Transform (SIFT) descriptors, has been deployed in the 'Bag of Visual Words' (BVW) model as an effective method to represent visual content information and to enhance its classification and retrieval. Local features are more useful than global features, e.g. colour, shape or texture, as they are invariant to image scale, orientation and camera angle. An innovative approach is proposed for the representation, annotation and retrieval of visual content using a hybrid technique based on an unstructured visual-word model combined with a (structured) hierarchical ontology KB model. The structural model facilitates the disambiguation of unstructured visual words and a more effective classification of visual content than a vector space model, by exploiting local conceptual structures and their relationships.
The key contributions of this framework in using local features for image representation are as follows. First, a method to generate visual words using the semantic local adaptive clustering (SLAC) algorithm, which takes term weights and the spatial locations of keypoints into account; consequently, the semantic information is preserved. Second, a technique to detect domain-specific 'non-informative visual words', which are ineffective at representing the content of visual data and degrade its categorisation ability. Third, a method to combine an ontology model with a visual-word model to resolve synonym (visual heterogeneity) and polysemy problems. The experimental results show that this approach can discover semantically meaningful visual content descriptions and recognise specific events, e.g. sports events, depicted in images efficiently. Since discovering the semantics of an image is an extremely challenging problem, one promising way to enhance visual content interpretation is to use any associated textual information that accompanies an image as a cue to predict its meaning, by transforming this textual information into a structured annotation, e.g. using XML, RDF, OWL or MPEG-7. Although text and image are distinct types of information representation and modality, there are strong, invariant, implicit connections between images and any accompanying text. Semantic analysis of image captions can be used by image retrieval systems to retrieve selected images more precisely. To do this, Natural Language Processing (NLP) is first used to extract concepts from image captions; an ontology-based knowledge model is then deployed to resolve natural language ambiguities. To deal with the accompanying text, two methods to extract knowledge from textual information are proposed.
First, metadata can be extracted automatically from text captions and restructured with respect to a semantic model. Second, the use of Latent Semantic Indexing (LSI) in relation to a domain-specific ontology-based knowledge model enables the combined framework to tolerate ambiguities and variations (incompleteness) of metadata. The ontology-based knowledge model allows the system to find indirectly relevant concepts in image captions and thus leverage them to represent the semantics of images at a higher level. Experimental results show that the proposed framework significantly enhances image retrieval and narrows the semantic gap between lower-level machine-derived and higher-level human-understandable conceptualisation.
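The 'Bag of Visual Words' representation can be sketched as follows: toy 2-D vectors stand in for SIFT descriptors, k-means builds a small visual vocabulary, and an image becomes a histogram of nearest visual words. All data and the deterministic initialization are illustrative simplifications (the thesis's SLAC algorithm additionally weights terms and keypoint locations):

```python
# Bag-of-visual-words sketch: cluster local descriptors into a
# vocabulary with k-means, then represent an image as a histogram of
# its descriptors' nearest visual words. All data are toy values.

import math

def kmeans(points, k, iters=20):
    # deterministic init: evenly spaced seed points from the input
    step = max(1, len(points) // k)
    centers = points[::step][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[i].append(p)
        centers = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

def bvw_histogram(descriptors, vocabulary):
    hist = [0] * len(vocabulary)
    for d in descriptors:
        i = min(range(len(vocabulary)),
                key=lambda j: math.dist(d, vocabulary[j]))
        hist[i] += 1
    return hist

# two tight descriptor clusters yield a 2-word vocabulary
training = [(0.1, 0.1), (0.2, 0.0), (0.0, 0.2),
            (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
vocab = kmeans(training, k=2)
image = [(0.15, 0.1), (5.05, 5.0), (5.1, 5.1)]
hist = bvw_histogram(image, vocab)
```

The resulting histogram plays the role of a term-frequency vector, which is what lets standard text-retrieval machinery (and, in the thesis, the ontology layer on top) operate on images.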