
    A Web Smart Space Framework for Intelligent Search Engines

    A web smart space is an intelligent environment with the additional capability of searching information smartly and efficiently. New advancements such as dynamic web content generation have increased the size of web repositories. Among the many modern software analysis requirements, one is to search for information in a given repository, but extracting useful information is difficult because of the multilingual basis of web data collections. Semantic information searching is further complicated by inconsistencies and variations in the characteristics of the data. In this research, a web smart space framework is proposed that introduces front-end processing for a search engine to make the information retrieval process more intelligent and accurate. In conventional search architectures, searching is performed only by pattern matching, and consequently a large number of irrelevant results are generated. The proposed framework addresses this drawback and returns more relevant results: it takes text input from the user in the form of a complete question, interprets the input, and generates its meaning, and the search engine then searches on the basis of the information provided
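    The front-end processing this abstract describes, taking a full question and deriving expanded search terms from it, can be sketched minimally. This is an illustrative assumption, not the paper's actual framework; the stop-word list and synonym table below are made up for the example.

```python
# Hypothetical front-end "understanding" step for a search engine:
# strip question words, then expand the remaining terms with synonyms.
STOP_WORDS = {"what", "is", "the", "a", "an", "of", "how", "do", "i"}
SYNONYMS = {"car": ["automobile", "vehicle"], "buy": ["purchase"]}

def understand_query(question):
    """Turn a full question into an expanded bag of search terms."""
    terms = {w.strip("?.,!").lower() for w in question.split()}
    terms -= STOP_WORDS
    expanded = set(terms)
    for t in terms:
        expanded.update(SYNONYMS.get(t, []))  # add known synonyms
    return expanded

print(understand_query("How do I buy a car?"))
# contains 'buy' and 'car' plus 'purchase', 'automobile', 'vehicle'
```

    The expanded term set, rather than the raw question string, would then be handed to the pattern-matching back end.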

    A Multi-Agent Framework for Web Based Information Retrieval and Filtering

    Searching for information on the Web is a time-consuming task. To help users and to speed up the search for relevant documents, efficient retrieval and filtering techniques are needed. To increase the efficiency of information retrieval and filtering tasks, intelligent agents have been widely studied and deployed. In this paper, we present a general agent framework for retrieving and filtering relevant and irrelevant documents

    Binary Particle Swarm Optimization based Biclustering of Web Usage Data

    Web mining is the nontrivial process of discovering valid, novel, and potentially useful knowledge from web data using data mining techniques. It can yield information that is useful for improving the services offered by web portals and by information access and retrieval tools. With the rapid development of biclustering, more researchers have applied the technique to different fields in recent years. When biclustering is applied to web usage data, it automatically captures the hidden browsing patterns in the form of biclusters. In this work, a swarm intelligence technique is combined with a biclustering approach to propose an algorithm called Binary Particle Swarm Optimization (BPSO) based Biclustering for Web Usage Data. The main objective of this algorithm is to retrieve the globally optimal bicluster from the web usage data. These biclusters capture relationships between web users and web pages that are useful for e-commerce applications such as web advertising and marketing. Experiments are conducted on a real dataset to demonstrate the efficiency of the proposed algorithm
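    The abstract gives no pseudocode, so the following is only a plausible reading of BPSO-based biclustering: each particle's bit string selects a subset of users (rows) and pages (columns), and the classic Cheng-Church mean squared residue serves as the fitness to minimize. The encoding, the fitness choice, and the PSO parameters are all assumptions for illustration.

```python
import math
import random

def msr(matrix, rows, cols):
    """Mean squared residue (Cheng-Church) of the submatrix rows x cols."""
    sub = [[matrix[i][j] for j in cols] for i in rows]
    row_means = [sum(r) / len(cols) for r in sub]
    col_means = [sum(sub[i][j] for i in range(len(rows))) / len(rows)
                 for j in range(len(cols))]
    mean = sum(row_means) / len(rows)
    return sum((sub[i][j] - row_means[i] - col_means[j] + mean) ** 2
               for i in range(len(rows))
               for j in range(len(cols))) / (len(rows) * len(cols))

def bpso_bicluster(matrix, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Binary PSO over row/column membership bits; lower MSR is fitter."""
    n, m = len(matrix), len(matrix[0])
    dim = n + m  # one bit per row plus one bit per column

    def decode(pos):
        return ([i for i in range(n) if pos[i]],
                [j for j in range(m) if pos[n + j]])

    def fitness(pos):
        rows, cols = decode(pos)
        if len(rows) < 2 or len(cols) < 2:  # reject degenerate biclusters
            return float("inf")
        return msr(matrix, rows, cols)

    swarm = [[random.randint(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    pbest_f = [fitness(p) for p in swarm]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iters):
        for k in range(n_particles):
            for d in range(dim):
                vel[k][d] = (w * vel[k][d]
                             + c1 * random.random() * (pbest[k][d] - swarm[k][d])
                             + c2 * random.random() * (gbest[d] - swarm[k][d]))
                # Sigmoid transfer: velocity becomes the probability of bit = 1.
                swarm[k][d] = 1 if random.random() < 1 / (1 + math.exp(-vel[k][d])) else 0
            f = fitness(swarm[k])
            if f < pbest_f[k]:
                pbest[k], pbest_f[k] = swarm[k][:], f
                if f < gbest_f:
                    gbest, gbest_f = swarm[k][:], f
    return decode(gbest), gbest_f
```

    On a small user-page count matrix, the returned row/column index sets are the discovered bicluster, i.e. a group of users with coherent behaviour over a group of pages.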

    Investigating the use of semantic technologies in spatial mapping applications

    Semantic Web Technologies are ideally suited to building context-aware information retrieval applications. However, the geospatial aspect of context awareness presents unique challenges, such as the semantic modelling of geographical references for efficient handling of spatial queries, the reconciliation of heterogeneity at the semantic and geo-representation levels, maintaining the quality of service and scalability of communication, and the efficient rendering of spatial query results. In this paper, we describe the modelling decisions taken to solve these challenges by analysing our implementation of an intelligent planning and recommendation tool that provides location-aware advice for a specific application domain. This paper contributes to the methodology of integrating heterogeneous geo-referenced data into semantic knowledgebases, and also proposes mechanisms for efficient spatial interrogation of the semantic knowledgebase and for optimising the rendering of the dynamically retrieved context-relevant information on a web frontend
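    The kind of spatial interrogation the abstract mentions usually starts with a coarse geometric pre-filter before any semantic reasoning. The sketch below is an illustrative assumption, not the paper's mechanism: a plain bounding-box test over geo-referenced entries, of the sort used to keep spatial queries over a knowledge base tractable.

```python
# Illustrative bounding-box pre-filter over geo-referenced entries.
def in_bbox(lat, lon, bbox):
    """bbox = (min_lat, min_lon, max_lat, max_lon)."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

def spatial_filter(entries, bbox):
    """Keep only entries whose coordinates fall inside the bounding box."""
    return [e for e in entries if in_bbox(e["lat"], e["lon"], bbox)]

places = [
    {"name": "Colosseum", "lat": 41.890, "lon": 12.492},
    {"name": "Eiffel Tower", "lat": 48.858, "lon": 2.294},
]
rome_bbox = (41.8, 12.4, 42.0, 12.6)
print([e["name"] for e in spatial_filter(places, rome_bbox)])  # ['Colosseum']
```

    A real system would push such a filter into the triple store (e.g. via spatial query extensions) rather than scan results in application code.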

    SEMANTEXPLORER: a semantic web browser

    The Semantic Web will be the keystone in the creation of machine-accessible domains of information scattered around the globe. All information on the World Wide Web will be semantically enhanced with metadata that makes sense to both humans and intelligent information retrieval agents. For the Semantic Web to gain ground, it is therefore very important that users are able to easily browse through such metadata. In line with this philosophy, we present SemantExplorer, a Semantic Web browser that enables metadata browsing, provides visualization of different levels of metadata detail, and allows for the integration of multiple information sources to provide a more complex and complete view of Web resources

    Building Knowledge Management System for Researching Terrorist Groups on the Web

    Nowadays, terrorist organizations have found a cost-effective resource to advance their causes by posting high-impact Web sites on the Internet. This alternate side of the Web is referred to as the "Dark Web." While counterterrorism researchers seek to obtain and analyze information from the Dark Web, several problems prevent effective and efficient knowledge discovery: the dynamic and hidden character of terrorist Web sites, information overload, and language barriers. This study proposes an intelligent knowledge management system to support the discovery and analysis of multilingual terrorist-created Web data. We developed a systematic approach to identify, collect, and store up-to-date multilingual terrorist Web data. We also propose to build an intelligent Web-based knowledge portal integrated with advanced text and Web mining techniques, such as summarization, categorization, and cross-lingual retrieval, to facilitate knowledge discovery from Dark Web resources. We believe our knowledge portal will provide counterterrorism research communities with valuable datasets and tools for knowledge discovery and sharing

    Neural networks and spectral feature selection for retrieval of hot gases temperature profiles

    Proceedings of: International Conference on Computational Intelligence for Modelling, Control and Automation, 2005, and International Conference on Intelligent Agents, Web Technologies and Internet Commerce, Vienna, Austria, 28-30 Nov. 2005.
    Neural networks appear to be a promising tool for solving the so-called inverse problems aimed at retrieving certain physical properties related to the radiative transfer of energy. In this paper, the capability of neural networks to retrieve the temperature profile in a combustion environment is demonstrated. The temperature profile is retrieved from the measured spectral distribution of the energy radiated by the hot gases (combustion products) at infrared wavelengths. High spectral resolution is usually needed to achieve a certain accuracy in the retrieval process; however, this large amount of information makes a reduction of the problem's dimensionality mandatory, so a careful selection of wavelengths in the spectrum must be performed. For this purpose, principal component analysis is used to automatically determine the wavelengths in the spectrum that carry relevant information about the temperature distribution. A multilayer perceptron is then trained with the energies associated with the selected wavelengths. The results presented show that a multilayer perceptron combined with principal component analysis is a suitable alternative in this field.
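    The wavelength-selection step can be illustrated in miniature: compute the first principal component of a set of spectra and keep the wavelengths with the largest absolute loadings. This is a pure-Python sketch of the idea only; the paper's actual preprocessing, number of components, and MLP training are not reproduced here.

```python
import math

def first_pc_loadings(spectra, iters=200):
    """Loadings of the first principal component, via power iteration
    on the covariance matrix of the (samples x wavelengths) data."""
    n, d = len(spectra), len(spectra[0])
    means = [sum(row[j] for row in spectra) / n for j in range(d)]
    xc = [[row[j] - means[j] for j in range(d)] for row in spectra]
    cov = [[sum(xc[i][a] * xc[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):  # power iteration -> dominant eigenvector
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def select_wavelengths(spectra, k):
    """Indices of the k wavelengths with the largest |loading| on PC1."""
    v = first_pc_loadings(spectra)
    return sorted(range(len(v)), key=lambda j: -abs(v[j]))[:k]

# Toy spectra: channels 0 and 2 scale with a latent temperature t,
# channels 1 and 3 are flat, so PCA should pick channels 0 and 2.
spectra = [[2 * t, 0.1, 3 * t, 0.2] for t in (1, 2, 3, 4)]
print(select_wavelengths(spectra, 2))  # [2, 0]
```

    The energies at the selected wavelengths, rather than the full spectrum, would then form the reduced input vector for the multilayer perceptron.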

    Modeling intelligent agents for web-based information gathering

    The recent emergence of intelligent agent technology and advances in information gathering have been important steps toward efficiently managing and using the vast amount of information now available on the Web to make informed decisions. There are, however, still many problems to be overcome in information gathering research before the relevant information required by end users can be delivered. Good decisions cannot be made without sufficient, timely, and correct information. Traditionally it is said that knowledge is power; nowadays, however, sufficient, timely, and correct information is power, so gathering relevant information to meet user information needs is the crucial step in making good decisions. The ideal goal of information gathering is to obtain only the information that users need (no more and no less). However, the volume of available information, the diversity of its formats, its uncertainties, and its distributed locations (e.g., the World Wide Web) hinder the process of gathering the right information to meet user needs. Specifically, two fundamental issues regarding the efficiency of information gathering are mismatch and overload: mismatch means that some information that meets user needs has not been gathered (missed out), whereas overload means that some gathered information is not what users need. Traditional information retrieval has developed well over the past twenty years, but the introduction of the Web has changed people's perception of it. Usually, the task of information retrieval is considered to be leading the user to those documents that are relevant to his or her information needs; the related task of filtering out irrelevant documents is called information filtering. Research into traditional information retrieval has provided many retrieval models and techniques for representing documents and queries.
Nowadays, information is becoming highly distributed and increasingly difficult to gather, and user information needs have been found to contain many uncertainties. These factors motivate research into agent-based information gathering, and agent-based information systems have arisen in response. In such systems, intelligent agents take commitments from their users and act on the users' behalf to gather the required information; their intelligence, autonomy, and distribution allow them to retrieve relevant information from highly distributed, uncertain environments. Current research on agent-based information gathering systems is divided into single-agent and multi-agent gathering systems. In both areas there are still open problems to be solved before agent-based information gathering systems can retrieve uncertain information effectively from highly distributed environments. The aim of this thesis is to develop a theoretical framework for intelligent agents to gather information from the Web, integrating the areas of information retrieval and intelligent agents. The specific research areas of this thesis are the development of an information filtering model for single-agent systems and of a dynamic belief model for information fusion in multi-agent systems. The research results are also supported by the construction of real information gathering agents (e.g., a Job Agent) for the Internet that help users gather useful information stored in Web sites. In this framework, information gathering agents can describe (or learn) the user's information needs and act like users to retrieve, filter, and/or fuse the information. A rough set based information filtering model is developed to address the problem of overload.
The new approach allows users to describe their information needs on user concept spaces rather than on document spaces, and it views a user information need as a rough set over the document space. Rough set decision theory is used to classify new documents into three regions: the positive region, the boundary region, and the negative region. Two experiments are presented to verify this model, and they show that the rough set based model provides an efficient approach to the overload problem. This research also develops a dynamic belief model for information fusion in multi-agent environments. The model has polynomial time complexity, and it is proven that the fusion results are belief (mass) functions. Using this model, a collection fusion algorithm for information gathering agents is presented. The difficult case for this research is where collections may be used by more than one agent; the algorithm uses cooperation between agents to provide a solution to this problem in distributed information retrieval systems. This thesis presents solutions to the theoretical problems in agent-based information gathering systems, including information filtering models, agent belief modelling, and collection fusion. It also presents solutions to some of the technical problems in agent-based information systems, such as document classification, the architecture of agent-based information gathering systems, and decision making in multi-agent environments. Such information gathering agents will gather relevant information from highly distributed, uncertain environments.
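    The three-region classification described above can be sketched as a three-way decision over a membership value. The term-overlap membership function and the alpha/beta thresholds below are illustrative assumptions only, not the thesis's actual model, which defines rough membership over user concept spaces.

```python
def three_way_classify(doc_terms, need_terms, alpha=0.6, beta=0.2):
    """Assign a document to the positive, boundary, or negative region
    according to its membership value with respect to the user's need."""
    if not doc_terms:
        return "negative"
    mu = len(doc_terms & need_terms) / len(doc_terms)  # membership value
    if mu >= alpha:
        return "positive"   # accept: clearly relevant
    if mu <= beta:
        return "negative"   # reject: clearly irrelevant
    return "boundary"       # defer: needs further examination

need = {"agent", "information", "gathering", "web"}
print(three_way_classify({"agent", "web", "gathering"}, need))            # positive
print(three_way_classify({"cooking", "recipe"}, need))                    # negative
print(three_way_classify({"web", "cooking", "recipe", "history"}, need))  # boundary
```

    The boundary region is what distinguishes this from a plain relevant/irrelevant filter: documents there are neither accepted nor rejected outright, which is how the rough set model mitigates overload without discarding potentially useful results.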