3,360 research outputs found

    Towards improving web service repositories through semantic web techniques

    The success of Web services technology has brought topics such as software reuse and discovery once again onto the agenda of software engineers. While there are several efforts towards automating Web service discovery and composition, many developers still search for services via online Web service repositories and then combine them manually. However, our analysis of these repositories shows that, unlike traditional software libraries, they rely on little metadata to support service discovery. We believe that the major cause is the difficulty of automatically deriving metadata that would describe rapidly changing Web service collections. In this paper, we discuss the major shortcomings of state-of-the-art Web service repositories and, as a solution, we report on ongoing work and ideas on how to use techniques developed in the context of the Semantic Web (ontology learning, mapping, metadata-based presentation) to improve the current situation.
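
    As a rough illustration of the metadata-based techniques mentioned above, the following sketch annotates free-text service descriptions with concepts from a small hand-made ontology via simple term matching and inverts the result into a browsable index. The ontology, service entries, and matching rule are hypothetical placeholders rather than the methods developed in the paper.

```python
# Sketch: annotate free-text Web service descriptions with ontology concepts
# by term matching, then invert the annotations into a browsable index.
# The mini-ontology and service entries are hypothetical.
from collections import defaultdict

ONTOLOGY = {  # concept -> lexical cues (labels, synonyms)
    "Geolocation": {"geocode", "latitude", "longitude", "address lookup"},
    "Payment":     {"payment", "invoice", "credit card", "checkout"},
    "Weather":     {"forecast", "temperature", "weather"},
}

def annotate(description: str) -> set:
    """Return ontology concepts whose cues occur in the description."""
    text = description.lower()
    return {concept for concept, cues in ONTOLOGY.items()
            if any(cue in text for cue in cues)}

def build_index(services: dict) -> dict:
    """Invert per-service annotations into a concept -> services index."""
    index = defaultdict(set)
    for name, description in services.items():
        for concept in annotate(description):
            index[concept].add(name)
    return index

services = {  # hypothetical repository entries
    "GeoCoder API": "Resolve an address lookup to latitude/longitude pairs.",
    "PayFlow":      "Process credit card payments and send an invoice.",
}
print(dict(build_index(services)))
```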

    A Large Visual, Qualitative, and Quantitative Dataset for Web Intelligence Applications

    The Web is the communication platform and source of information par excellence. The volume and complexity of its content have grown enormously, with organizing, retrieving, and cleaning Web information becoming a challenge for traditional techniques. Web intelligence is a novel research area to improve Web-based services and applications using artificial intelligence and automatic learning algorithms, for which a large amount of Web-related data are essential. Current datasets are, however, limited and do not combine visual representation and attributes of Web pages. Our work provides a large dataset of 49,438 Web pages, composed of webshots, along with qualitative and quantitative attributes. This dataset covers all the countries in the world and a wide range of topics, such as art, entertainment, economics, business, education, government, news, media, science, and the environment, addressing different cultural characteristics and varied design preferences. We use this dataset to develop three Web Intelligence applications: knowledge extraction on Web design using statistical analysis, recognition of error Web pages using a customized convolutional neural network (CNN) to eliminate invalid pages, and Web categorization based solely on screenshots using a CNN with transfer learning to assist search engines, indexers, and Web directories. This work has been funded by the grant awarded by the Central University of Ecuador through budget certification No. 34 of March 25, 2022, for the development of the research project with code DOCT-DI-2020-37.
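
    To make the transfer-learning application concrete, here is a minimal sketch of topic categorization from screenshots alone, assuming PyTorch/torchvision, a ResNet-18 backbone, and a hypothetical webshots/train folder organized by category; the paper's actual network, classes, and hyperparameters may differ.

```python
# Sketch: categorize Web page screenshots with a pretrained CNN (transfer learning).
# Assumes PyTorch/torchvision and a hypothetical "webshots/train" ImageFolder layout.
import torch
from torch import nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 10  # hypothetical number of topic categories

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                     # CNN input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("webshots/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

model.train()
for epoch in range(3):                    # short demonstration run
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```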

    Extracting corpus specific knowledge bases from Wikipedia

    Thesauri are useful knowledge structures for assisting information retrieval. Yet their production is labor-intensive, and few domains have comprehensive thesauri that cover domain-specific concepts and contemporary usage. One approach, which has been attempted without much success for decades, is to seek statistical natural language processing algorithms that work on free text. Instead, we propose to replace costly professional indexers with thousands of dedicated amateur volunteers, namely those who are producing Wikipedia. This vast, open encyclopedia represents a rich tapestry of topics and semantics and a huge investment of human effort and judgment. We show how this can be directly exploited to provide WikiSauri: manually defined yet inexpensive thesaurus structures that are specifically tailored to expose the topics, terminology and semantics of individual document collections. We also offer concrete evidence of the effectiveness of WikiSauri for assisting information retrieval.
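
    The sketch below illustrates the general idea of deriving a corpus-specific thesaurus from Wikipedia: frequent corpus terms are matched against article titles and redirects, and inter-article links supply related terms. The dictionaries stand in for structures parsed from a Wikipedia dump and are not the authors' WikiSauri pipeline.

```python
# Sketch: derive a corpus-specific thesaurus from (pre-extracted) Wikipedia data.
# `redirects` and `links` stand in for structures parsed from a Wikipedia dump.
from collections import Counter

redirects = {            # hypothetical: surface form -> canonical article title
    "ir": "Information retrieval",
    "information retrieval": "Information retrieval",
    "thesaurus": "Thesaurus",
}
links = {                # hypothetical: article -> linked (related) articles
    "Information retrieval": {"Thesaurus", "Search engine indexing"},
    "Thesaurus": {"Controlled vocabulary", "Information retrieval"},
}

def build_thesaurus(corpus_terms):
    """Map corpus terms to Wikipedia concepts and attach related terms."""
    concept_freq = Counter()
    for term in corpus_terms:
        concept = redirects.get(term.lower())
        if concept:                       # keep only terms Wikipedia covers
            concept_freq[concept] += 1
    return {concept: {"corpus_frequency": freq,
                      "related": sorted(links.get(concept, ()))}
            for concept, freq in concept_freq.most_common()}

print(build_thesaurus(["IR", "information retrieval", "thesaurus", "widget"]))
```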

    The best of both worlds: highlighting the synergies of combining manual and automatic knowledge organization methods to improve information search and discovery.

    Research suggests organizations across all sectors waste a significant amount of time looking for information and often fail to leverage the information they have. In response, many organizations have deployed some form of enterprise search to improve the 'findability' of information. Debates persist as to whether thesauri and manual indexing or automated machine learning techniques should be used to enhance discovery of information. In addition, the extent to which a knowledge organization system (KOS) enhances discoveries or indeed blinds us to new ones remains a moot point. The oil and gas industry was used as a case study, focusing on a representative organization. Drawing on prior research, a theoretical model is presented which aims to overcome the shortcomings of each approach. This synergistic model could help to re-conceptualize the 'manual' versus 'automatic' debate in many enterprises, accommodating a broader range of information needs. This may enable enterprises to develop more effective information and knowledge management strategies and ease the tension between what are often perceived as mutually exclusive competing approaches. Certain aspects of the theoretical model may be transferable to other industries, which is an area for further research.
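
    One way to picture the proposed synergy is a retrieval pipeline in which a manually curated thesaurus expands the query while an automatic statistical model does the ranking. The sketch below uses a toy thesaurus and scikit-learn TF-IDF purely to illustrate the combination; it is not the model presented in the paper.

```python
# Sketch: combine a manual thesaurus (query expansion) with automatic ranking (TF-IDF).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

THESAURUS = {  # hypothetical manually curated synonym rings
    "well": ["borehole", "wellbore"],
    "seismic": ["geophysical survey"],
}

documents = [  # toy enterprise document collection
    "Borehole pressure report for field A",
    "Geophysical survey results, northern block",
    "Canteen menu for next week",
]

def expand(query):
    """Append thesaurus synonyms of query terms (the manual contribution)."""
    extra = [syn for term in query.lower().split()
             for syn in THESAURUS.get(term, [])]
    return " ".join([query] + extra)

def search(query):
    """Rank documents against the expanded query (the automatic contribution)."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform([expand(query)])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    return sorted(zip(documents, scores), key=lambda pair: -pair[1])

print(search("well pressure"))
```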

    Towards memory supporting personal information management tools

    In this article we discuss re-retrieving personal information objects and relate the task to recovering from lapses in memory. We propose that fundamentally it is lapses in memory that impede users from successfully re-finding the information they need. Our hypothesis is that by learning more about memory lapses in non-computing contexts and how people cope with and recover from these lapses, we can better inform the design of personal information management (PIM) tools and improve the user's ability to re-access and re-use objects. We describe a diary study that investigates the everyday memory problems of 25 people from a wide range of backgrounds. Based on the findings, we present a series of principles that we hypothesize will improve the design of PIM tools. This hypothesis is validated by an evaluation of a tool for managing personal photographs, which was designed with respect to our findings. The evaluation suggests that users' performance when re-finding objects can be improved by building PIM tools to support characteristics of human memory.
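
    As a purely illustrative sketch of the kind of memory-supporting design the findings point toward, the code below re-finds photos from partial contextual cues (an approximate date, a person, a place), tolerating imprecise recollections; it is not the photo-management tool evaluated in the article.

```python
# Sketch: re-find personal photos from partial contextual cues, tolerating
# the imprecise recollections typical of everyday memory lapses.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Photo:
    path: str
    taken: date
    people: set = field(default_factory=set)
    place: str = ""

def refind(photos, about_when=None, slack_days=30, with_person=None, near=None):
    """Return photos consistent with whichever cues the user still remembers."""
    hits = []
    for p in photos:
        if about_when and abs((p.taken - about_when).days) > slack_days:
            continue                      # tolerate an imprecise date memory
        if with_person and with_person not in p.people:
            continue
        if near and near.lower() not in p.place.lower():
            continue
        hits.append(p)
    return sorted(hits, key=lambda p: p.taken)

album = [  # illustrative photo metadata
    Photo("beach.jpg", date(2023, 7, 14), {"Ana"}, "Lisbon"),
    Photo("conf.jpg",  date(2023, 9, 2),  {"Ben"}, "Glasgow"),
]
print(refind(album, about_when=date(2023, 8, 1), with_person="Ana"))
```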

    Content personalization of news web portals using information extraction techniques and weighted Voronoi diagrams

    News web portals present information that covers all aspects of our daily lives, organized in a previously defined topic taxonomy and delivered in both multimedia and textual formats. The information presented has a high refresh rate and as such offers a local as well as a global snapshot of the world. This thesis presents information extraction techniques for web news portals and their use in standardizing categorization schemes and automatically classifying newly published content. Weighted Voronoi diagrams are proposed as the personalization method. The aim of the study is to create a virtual profile, at the individual level, based on the semantic value of the information in visited nodes (web pages formatted in HTML). The results can greatly contribute to the applicability of personalization data to specific information sources, including various web news portals. In addition, a publicly available collection of prepared data enables future research in this domain. The scientific contributions of this doctoral thesis are therefore: a universal classification scheme based on the ODP taxonomy is developed; a way of extracting information about user preferences from the analysis of user behavior while using the Web browser is defined; and a personalization system based on weighted Voronoi diagrams is implemented.
    One way to address the problems caused by the overproduction of information is to personalize information sources, in our case the WWW environment, by creating virtual profiles based on an analysis of users' behavioral characteristics, with the aim of grading the importance of information on an individual basis. Personalization itself is most widely used in the area of information retrieval. A review of previous research highlights several approaches used to personalize available content: ontology-based approaches, contextual models, and data mining; these are the most prevalent in the reviewed literature. The literature analysis also revealed the lack of a uniform taxonomy of terms used to annotate information nodes. The prevailing approach to annotation is the use of tagging systems based on user input. The reviewed studies indicate that, for popular annotations, users on different systems attach the same tags to the same and/or similar objects, that the synonym problem exists but is negligible given a sufficient amount of data, and that tags used by ordinary users and by domain experts overlap in 52% of cases. These findings point to the lack of a unified system for tagging information nodes. Tagging systems carry a large amount of "information noise" because tagging is individual in nature and directly tied to the user's knowledge of the information node's domain. As a potential remedy for this shortcoming, the use of existing taxonomies defined by web directories is proposed. Among several possible web directories, the reviewed literature most often cites the ODP web directory as the highest-quality taxonomy for hierarchical domain categorization of information nodes; the use of ODP as a taxonomy is mentioned in several papers examined during the preliminary research. Using the ODP taxonomy to classify information nodes makes it possible to determine domain membership, which in turn allows a membership value to be assigned to an information node for each domain.
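
    A minimal sketch of the classification step just described: ODP-style category paths assigned to a visited page are truncated to a fixed depth (the level-6 cutoff discussed below) and aggregated into normalized per-domain membership values. The paths, weights, and cutoff are illustrative stand-ins for the thesis's actual classifier.

```python
# Sketch: per-domain membership values from ODP-style category paths.
# The paths and weights are hypothetical; the thesis's classifier is more elaborate.
from collections import defaultdict

MAX_DEPTH = 6  # classify only down to level 6 of the hierarchy

def membership(classified_paths):
    """Aggregate (path, weight) pairs into normalized top-level domain scores."""
    scores = defaultdict(float)
    for path, weight in classified_paths:
        levels = path.split("/")[:MAX_DEPTH]   # truncate deep ODP paths
        scores[levels[1]] += weight            # levels[0] is the "Top" root
    total = sum(scores.values()) or 1.0
    return {domain: round(score / total, 3) for domain, score in scores.items()}

# Hypothetical classifier output for one visited news page.
page = [("Top/News/Politics/Europe", 0.7), ("Top/Business/Economics", 0.3)]
print(membership(page))   # {'News': 0.7, 'Business': 0.3}
```
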
    Given the complex structure of the ODP taxonomy (12 hierarchical levels and 17 top-level categories) and the large number of potential categories, the thesis proposes using the ODP taxonomy to classify an information node only down to level 6. Along with this guideline on how many hierarchical levels to use when analyzing the ODP structure, it also stresses the need for in-depth classification of documents. The literature analysis shows that personalization has been addressed primarily in the domain of information retrieval through WWW interfaces, while the personalization of information available through web portals remains poorly explored. Across the numerous papers consulted in preparing the preliminary research phase, various sources of information were used as data for analysis: server log files, personal browsing history from browser log files, applications that track the user's interaction with the system, cookies, and others. Data collected through one or more of these sources give insight into an individual user's movement within a defined information and time frame. In the reviewed literature, such data are used to personalize information, yet not at the individual level but by grouping users into thematically similar groups. The aim of this work is to test existing methods that have been recognized as useful for further work and to enhance them with weighted Voronoi diagrams in order to achieve personalization at the individual level. The use of weighted Voronoi diagrams has not previously been reported in the literature and thus represents an innovation in the field of information personalization. Work that focuses on recognizing usage patterns of information nodes, of which there is a significant body, will also be helpful in this process. The existence of a behavioral pattern linked to long-term and/or short-term data on the user's movement through the information space enables better filtering and personalization of the available information. Since the goal of this work is to demonstrate the possibility of individual personalization, the potential of weighted Voronoi diagrams has been recognized for building a virtual semantic profile and personalizing information.
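
    To give the weighted Voronoi idea a concrete shape, the sketch below assigns a newly published item to the profile site with the smallest multiplicatively weighted distance, where sites are domains placed in a feature space and weights encode the user's accumulated interest. The coordinates and weights are invented for illustration and do not reproduce the thesis's profile model.

```python
# Sketch: multiplicatively weighted Voronoi assignment for individual personalization.
# Sites = domains in a 2-D feature space; weights = the user's accumulated interest.
import math

profile_sites = {   # domain: ((x, y) position, interest weight)
    "News":     ((0.9, 0.1), 3.0),
    "Business": ((0.2, 0.8), 1.5),
    "Sport":    ((0.5, 0.5), 0.5),
}

def weighted_distance(point, site):
    """Multiplicatively weighted distance d(x, s) / w: higher interest shrinks distance."""
    (sx, sy), w = site
    return math.hypot(point[0] - sx, point[1] - sy) / w

def rank_domains(point):
    """Rank profile domains for an item; the first one owns the item's Voronoi cell."""
    return sorted(profile_sites,
                  key=lambda d: weighted_distance(point, profile_sites[d]))

item = (0.7, 0.3)            # feature-space position of a newly published article
print(rank_domains(item))    # ['News', 'Business', 'Sport']
```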