
    Term-Specific Eigenvector-Centrality in Multi-Relation Networks

    Fuzzy matching and ranking are two information retrieval techniques widely used in web search. Their application to structured data, however, remains an open problem. This article investigates how eigenvector-centrality can be used for approximate matching in multi-relation graphs, that is, graphs where connections of many different types may exist. Based on an extension of the PageRank matrix, eigenvectors representing the distribution of a term after propagating term weights between related data items are computed. The result is an index which takes the document structure into account and can be used with standard document retrieval techniques. As the scheme takes the shape of an index transformation, all necessary calculations are performed during index time.
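
    The abstract leaves the propagation step abstract; the following is a minimal sketch of the general idea, assuming a personalized-PageRank-style power iteration over per-relation adjacency matrices. The names, the toy data, and the equal-weight combination of relations are illustrative assumptions, not the authors' construction.

        import numpy as np

        def propagate_term_weights(relations, term_weights, damping=0.85, iters=50):
            # `relations` maps a relation name to a column-stochastic adjacency
            # matrix; `term_weights` holds the initial weight of one term per data
            # item and doubles as the teleport vector.
            M = sum(relations.values()) / len(relations)   # assumption: equal relation weights
            v = term_weights / term_weights.sum()
            x = v.copy()
            for _ in range(iters):
                x = damping * M @ x + (1 - damping) * v    # propagate, then teleport
            return x  # stationary distribution of the term over data items

        # Toy example: 3 items, two relation types, the term occurs only in item 0.
        cites = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
        links = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=float)
        print(propagate_term_weights({"cites": cites, "links": links},
                                     np.array([1.0, 0.0, 0.0])))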

    Conception of an E-learning scheme at the University of Algarve

    With the proliferation of Internet use, the number of e-learning courses has grown. It is no longer enough for universities to offer standard courses, because an increasing share of students choose their education according to their own objectives, styles, needs, and learning preferences (the student profile). Universities therefore face a new challenge: to offer, alongside the standard courses, modules specially tailored to the user's desires, based on the identification of the students' needs. In this paper, a model for distance education over the Internet, currently being developed at the University of Algarve, is discussed; it enables each individual to learn in accordance with his or her profile.

    Graph Summarization

    The continuous and rapid growth of highly interconnected datasets, which are both voluminous and complex, calls for the development of adequate processing and analytical techniques. One method for condensing and simplifying such datasets is graph summarization. It denotes a series of application-specific algorithms designed to transform graphs into more compact representations while preserving structural patterns, query answers, or specific property distributions. As this problem is common to several areas studying graph topologies, different approaches, such as clustering, compression, sampling, or influence detection, have been proposed, primarily based on statistical and optimization methods. The focus of our chapter is to pinpoint the main graph summarization methods, but especially to focus on the most recent approaches and novel research trends on this topic, not yet covered by previous surveys. Comment: To appear in the Encyclopedia of Big Data Technologies.
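
    As a concrete illustration of one family mentioned above (clustering-style summarization), the toy sketch below collapses nodes with identical neighbor sets into supernodes; the grouping rule and data are illustrative and not taken from the chapter.

        from collections import defaultdict

        def summarize_by_neighborhood(edges):
            # Group nodes that share the same neighbor set into supernodes and keep
            # one superedge per pair of distinct groups.
            neigh = defaultdict(set)
            for u, v in edges:
                neigh[u].add(v)
                neigh[v].add(u)
            groups = defaultdict(list)
            for node, ns in neigh.items():
                groups[frozenset(ns)].append(node)          # identical neighborhoods merge
            label = {n: i for i, members in enumerate(groups.values()) for n in members}
            superedges = {(min(label[u], label[v]), max(label[u], label[v]))
                          for u, v in edges if label[u] != label[v]}
            return list(groups.values()), sorted(superedges)

        # Nodes 1 and 2 have the same neighbors {3, 4}, so they form one supernode.
        print(summarize_by_neighborhood([(1, 3), (2, 3), (1, 4), (2, 4)]))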

    Semantic Web Personalization: A Survey

    With millions of pages available on the web, it has become difficult to access relevant information. One possible approach to this problem is web personalization, which is defined as any action that customizes the information or services provided by a web site to an individual. When personalization is applied to the semantic web, it offers many advantages over the traditional web, because the semantic web integrates semantics with the unstructured data on the web so that intelligent techniques can be applied to obtain more effective results. In this paper, we present the various approaches used for personalization in the semantic web. The core of the semantic web is ontologies, which are defined as explicit formalizations of a shared understanding of a conceptualization. We exploit the machine-understandable nature of the semantic web to devise strategies that perform effective personalization, such that the results returned to the user are more relevant to the user's goal. We also present a classification of the personalization techniques used for the semantic web. Keywords: semantic web, ontologies, personalization, recommendation, user profile.
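
    To make the ontology-based idea tangible, here is a minimal, hypothetical sketch of one such strategy: expanding a query with narrower ontology concepts that also appear in the user profile. The ontology, the profile, and the expansion rule are illustrative and not drawn from the survey.

        def expand_query(terms, ontology, profile):
            # `ontology` maps a concept to its narrower concepts; only concepts that
            # the user's profile mentions are added to the query.
            expanded = set(terms)
            for term in terms:
                for narrower in ontology.get(term, []):
                    if narrower in profile:
                        expanded.add(narrower)
            return expanded

        ontology = {"music": ["jazz", "opera", "rock"]}
        profile = {"jazz", "hiking"}
        print(expand_query({"music", "tickets"}, ontology, profile))
        # -> {'music', 'tickets', 'jazz'} (set order may vary)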

    Collaborative recommendations with content-based filters for cultural activities via a scalable event distribution platform

    Nowadays, most people have limited leisure time and the offer of (cultural) activities to spend this time is enormous. Consequently, picking the most appropriate events becomes increasingly difficult for end-users. This complexity of choice reinforces the necessity of filtering systems that assist users in finding and selecting relevant events. Whereas traditional filtering tools enable, e.g., keyword-based or filtered searches, innovative recommender systems draw on user ratings, preferences, and metadata describing the events. Existing collaborative recommendation techniques, developed for suggesting web-shop products or audio-visual content, have difficulties with sparse rating data and cannot cope at all with event-specific restrictions like availability, time, and location. Moreover, aggregating, enriching, and distributing these events are additional requisites for an optimal communication channel. In this paper, we propose a highly scalable event recommendation platform which considers event-specific characteristics. Personal suggestions are generated by an advanced collaborative filtering algorithm, which is made more robust on sparse data by extending user profiles with presumable future consumptions. The events, which are described using an RDF/OWL representation of the EventsML-G2 standard, are categorized and enriched via smart indexing and linked open data sets. This metadata model enables additional content-based filters, which consider event-specific characteristics, on the recommendation list. The integration of these different functionalities is realized by a scalable and extendable bus architecture. Finally, focus group conversations were organized with external experts, cultural mediators, and potential end-users to evaluate the event distribution platform and investigate the possible added value of recommendations for cultural participation.
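
    A minimal sketch of the two-stage idea described above: a simple user-based collaborative score followed by content-based filters on event date and location. The similarity measure, the data layout, and the example data are illustrative stand-ins, not the paper's algorithm or its EventsML-G2 model.

        from datetime import date

        def recommend(events, ratings, user, today, city, top_n=3):
            def similarity(a, b):
                # Fraction of commonly rated events on which two users agree.
                common = set(ratings[a]) & set(ratings[b])
                return sum(1 for e in common if ratings[a][e] == ratings[b][e]) / (len(common) or 1)

            scores = {}
            for other in ratings:
                if other == user:
                    continue
                sim = similarity(user, other)
                for event_id, rating in ratings[other].items():
                    if event_id not in ratings[user]:
                        scores[event_id] = scores.get(event_id, 0.0) + sim * rating

            # Content-based post-filter: only upcoming events in the user's city.
            feasible = [(e, s) for e, s in scores.items()
                        if events[e]["date"] >= today and events[e]["city"] == city]
            return sorted(feasible, key=lambda x: x[1], reverse=True)[:top_n]

        events = {
            "concert": {"date": date(2024, 7, 1), "city": "Ghent"},
            "expo":    {"date": date(2023, 1, 1), "city": "Ghent"},   # already past
        }
        ratings = {"alice": {"expo": 5}, "bob": {"expo": 5, "concert": 4}}
        print(recommend(events, ratings, "alice", date(2024, 1, 1), "Ghent"))
        # -> [('concert', 4.0)]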

    User Preference Web Search -- Experiments with a System Connecting Web and User

    We present models, methods, implementations, and experiments with a system enabling personalized web search for many users with different preferences. The system consists of a web information extraction part, a text search engine, a middleware supporting top-k answers, and a user interface for querying and evaluating search results. We integrate several tools (implementing our models and methods) into one framework connecting the user with the web. The model represents user preferences with fuzzy sets and fuzzy logic, here understood as a scoring that describes user satisfaction. This model can be acquired with explicit or implicit methods. The model-theoretic semantics is based on the fuzzy description logic f-EL. User preference learning is based on our model of fuzzy inductive logic programming. Our system works for both English and Slovak resources. The primary application domains are job offers and job search; however, we show an extension to mutual investment fund search and the possibility of extending the system to other application domains. Our top-k search is optimized with our own heuristics and a repository with special indexes. Our model was experimentally implemented; the integration was tested and is accessible on the web. We focus on experiments with several users and measure their satisfaction using correlation coefficients.
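
    The fuzzy-preference scoring can be pictured with a small sketch: trapezoidal membership functions express how satisfying an attribute value is, and a simple aggregation over attributes yields the score used for top-k retrieval. The membership functions, the averaging aggregation, and the job-offer data are assumptions for illustration, not the paper's f-EL model.

        import heapq

        def trapezoid(x, a, b, c, d):
            # Trapezoidal fuzzy membership: 0 outside (a, d), 1 on [b, c], linear in between.
            if x <= a or x >= d:
                return 0.0
            if b <= x <= c:
                return 1.0
            return (x - a) / (b - a) if x < b else (d - x) / (d - c)

        def top_k(offers, k=2):
            def satisfaction(offer):
                salary_pref = trapezoid(offer["salary"], 1500, 2500, 1e9, 1e9 + 1)  # higher is better
                commute_pref = trapezoid(-offer["km"], -60, -20, 0, 1)              # closer is better
                return (salary_pref + commute_pref) / 2                             # simple aggregation
            return heapq.nlargest(k, offers, key=satisfaction)

        offers = [
            {"id": "A", "salary": 2000, "km": 10},
            {"id": "B", "salary": 3000, "km": 70},
            {"id": "C", "salary": 1400, "km": 5},
        ]
        print([o["id"] for o in top_k(offers)])   # -> ['A', 'B']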

    Ontology-based Classification and Analysis of non-emergency Smart-city Events

    Several challenges are faced by citizens of urban centers while dealing with day-to-day events, and the absence of a centralised reporting mechanism makes event reporting and redressal a daunting task. With the push on information technology to adapt to the needs of smart cities and integrate urban civic services, the use of the Open311 architecture presents an interesting solution. In this paper, we present a novel approach that uses an existing Open311 ontology to classify and report non-emergency city events, as well as to guide the citizen to the points of redressal. The use of linked open data and the semantic model serves to provide contextual meaning and to make vast amounts of content hyper-connected and easily searchable. Such a one-size-fits-all model also ensures reusability and effective visualisation and analysis of data across several cities. By integrating urban services across various civic bodies, the proposed approach provides a single endpoint to the citizen, which is imperative for the smooth functioning of smart cities.
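
    A toy classifier in the spirit of this pipeline is sketched below: a citizen report is matched against category keywords and routed to a responsible department. The categories, keywords, and routing table are hypothetical placeholders, not the Open311 ontology.

        CATEGORIES = {
            "streetlight": {"keywords": {"streetlight", "lamp", "dark"}, "department": "Public Works"},
            "garbage":     {"keywords": {"garbage", "trash", "litter"},  "department": "Sanitation"},
            "pothole":     {"keywords": {"pothole", "road", "crack"},    "department": "Roads Authority"},
        }

        def classify_report(text):
            # Return (category, department) for a citizen report by keyword overlap.
            tokens = set(text.lower().split())
            category, info = max(CATEGORIES.items(),
                                 key=lambda item: len(tokens & item[1]["keywords"]))
            if not tokens & info["keywords"]:
                return None, None                      # nothing matched; leave unclassified
            return category, info["department"]

        print(classify_report("There is a large pothole on the main road"))
        # -> ('pothole', 'Roads Authority')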

    Enhanced Living Environments

    This open access book was prepared as the Final Publication of the COST Action IC1303 “Algorithms, Architectures and Platforms for Enhanced Living Environments (AAPELE)”. The concept of Enhanced Living Environments (ELE) refers to the area of Ambient Assisted Living (AAL) that is most closely related to Information and Communication Technologies (ICT). Effective ELE solutions require appropriate ICT algorithms, architectures, platforms, and systems, keeping in view the advance of science and technology in this area and the development of new and innovative solutions that can improve the quality of life for people in their homes and reduce the financial burden on the budgets of healthcare providers. The aim of this book is to serve as a state-of-the-art reference, discussing progress made as well as prompting future directions on theories, practices, standards, and strategies related to the ELE area. The book contains 12 chapters and can serve as a valuable reference for undergraduate students, post-graduate students, educators, faculty members, researchers, engineers, medical doctors, healthcare organizations, insurance companies, and research strategists working in this area.

    A finder and representation system for knowledge carriers based on granular computing

    In one of his publications, Aristotle states, “All human beings by their nature desire to know” [Kraut 1991]. This desire is initiated the day we are born and accompanies us for the rest of our lives. While at a young age our parents serve as one of the principal sources of knowledge, this changes over the course of time. Technological advances, and particularly the introduction of the Internet, have given us new possibilities to share and access knowledge from almost anywhere at any given time. Being able to access and share large collections of written-down knowledge is only one part of the equation. Just as important is its internalization, which in many cases can prove difficult to accomplish. Hence, being able to request assistance from someone who holds the necessary knowledge is of great importance, as it can positively stimulate the internalization procedure. However, digitalization does not only provide a larger pool of knowledge sources to choose from, but also more people who can potentially be activated to provide personalized assistance with a given problem statement or question. While this is beneficial, it raises the issue that it is hard to keep track of who knows what. For this task, so-called Expert Finder Systems have been introduced, which are designed to identify and suggest the most suited candidates to provide assistance. This Ph.D. thesis introduces a novel type of Expert Finder System that is capable of capturing the knowledge users within a community hold, from explicit and implicit data sources. This is accomplished with the use of granular computing, natural language processing, and a set of metrics that have been introduced to measure and compare the suitability of candidates. Furthermore, the knowledge requirements of a problem statement or question are assessed in order to ensure that only the most suited candidates are recommended to provide assistance.
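
    As a minimal illustration of the candidate-ranking step, the sketch below scores candidates by how well their topic weights cover the topics a question requires. The coverage metric and the profiles are illustrative; the thesis derives such profiles from explicit and implicit sources using granular computing and natural language processing.

        def rank_experts(question_topics, candidates):
            # `candidates` maps a user to a dict of topic -> expertise weight in [0, 1].
            def coverage(profile):
                covered = sum(profile.get(t, 0.0) for t in question_topics)
                return covered / len(question_topics)   # mean expertise over required topics
            return sorted(candidates, key=lambda name: coverage(candidates[name]), reverse=True)

        candidates = {
            "ana": {"databases": 0.9, "python": 0.7},
            "ben": {"databases": 0.2, "statistics": 0.8},
        }
        print(rank_experts({"databases", "python"}, candidates))   # -> ['ana', 'ben']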