9 research outputs found

    D-Fussion: a semantic selective dissemination of information service for the research community in digital libraries

    Introduction. In this paper we propose a multi-agent Selective Dissemination of Information service to improve the research community's access to digital library resources. The service also provides a new recommendation approach to satisfy researchers' specific information requirements. Method. The service model is developed by jointly applying Semantic Web technologies (used to define rich descriptions of resources and a concept scheme that helps in indexing and retrieval tasks), fuzzy linguistic modelling techniques (both ordinal and 2-tuple-based approaches, which allow us to flexibly represent and handle information that is subject to a certain degree of uncertainty), and content-based and collaborative filtering techniques. Analysis. An experiment has been carried out to test the performance of the proposed model using a prototype, and several experts have been asked to assess the recommendations provided by the system. Results. The outcomes of the experiment reveal that the proposed model is feasible and efficient in terms of precision and recall. Conclusions. Semantic Web technologies and fuzzy linguistic modelling provide the means to develop value-added services for digital libraries, which improve users' access to resources of interest to them. Furthermore, the recommendation approach proposed here allows researchers to satisfy specific information needs not covered by traditional recommender systems. The research reported here was supported by the Consejería de Innovación, Ciencia y Empresa, Junta de Andalucía, Spain (project SAINFOWEB - 00602) and the Ministerio de Educación y Ciencia, Spain (project FUZZYLING - TIN2007-61079).
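As an illustration of the 2-tuple fuzzy linguistic modelling the abstract refers to, the sketch below implements the standard translation functions between a numeric assessment and a (label, symbolic translation) pair. The five-label scale, label names, and sample ratings are invented for illustration and are not taken from the paper.

```python
# Sketch of the 2-tuple fuzzy linguistic representation: a numeric
# value beta in [0, g] is encoded as (s_i, alpha), where s_i is the
# closest label on the scale and alpha in [-0.5, 0.5) is the offset.
# The label set below is an assumed example, not the paper's scale.

LABELS = ["none", "low", "medium", "high", "perfect"]  # s_0 .. s_4

def to_two_tuple(beta):
    """Delta: map beta in [0, len(LABELS) - 1] to a (label, alpha) pair."""
    i = round(beta)
    return LABELS[i], beta - i

def from_two_tuple(label, alpha):
    """Delta^-1: recover the numeric value from a (label, alpha) pair."""
    return LABELS.index(label) + alpha

# Aggregating expert ratings without losing information: average in
# the numeric domain, then translate back to a 2-tuple.
ratings = [("high", 0.2), ("medium", -0.4), ("high", 0.0)]
mean = sum(from_two_tuple(l, a) for l, a in ratings) / len(ratings)
print(to_two_tuple(mean))
```

The point of the 2-tuple scheme is that aggregation results such as the mean above stay exactly representable instead of being rounded to the nearest label.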

    Semantic selective dissemination of information (SDI) service model for digital libraries

    We present the theoretical and methodological foundations for the development of a multi-agent SDI service model for specialized digital libraries, applying Semantic Web technologies that permit more efficient information management, improving agent-user communication processes, and facilitating accurate access to relevant resources. To do this, RSS feeds are used as "current awareness bulletins" to generate personalized bibliographic alerts. The SDI service model has an RSS feed management module and an information push module. In the first module, resources are represented as RSS feed items and are also semi-automatically assigned subject terms by matching their associated keywords against the terms of a SKOS Core format thesaurus. In the information push module, bibliographic alerts are customized according to the preferences defined in users' profiles.
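The semi-automatic subject assignment step described above can be sketched as a keyword match against thesaurus labels. The thesaurus entries and item keywords below are invented examples, and a real implementation would work over SKOS RDF data (prefLabel/altLabel properties) rather than a Python dict.

```python
# Illustrative sketch of assigning subject terms to an RSS feed item
# by matching its keywords against the preferred and alternative
# labels of a small SKOS-like thesaurus. All entries are invented.

thesaurus = {
    "information retrieval": {"prefLabel": "Information retrieval",
                              "altLabels": {"ir", "document retrieval"}},
    "digital libraries": {"prefLabel": "Digital libraries",
                          "altLabels": {"electronic libraries"}},
}

def assign_subjects(item_keywords):
    """Return the preferred labels of concepts matched by any keyword."""
    subjects = []
    for concept, entry in thesaurus.items():
        labels = {entry["prefLabel"].lower()} | entry["altLabels"]
        if any(kw.lower() in labels for kw in item_keywords):
            subjects.append(entry["prefLabel"])
    return subjects

print(assign_subjects(["IR", "metadata"]))  # → ['Information retrieval']
```

Matching on alternative labels as well as preferred ones is what makes the assignment robust to the varied keyword vocabularies of feed items.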

    Data broadcast scheduling: Models, algorithms, and analysis

    Inherent in the field of data broadcasting is a communication problem in which a server must transmit a subset of data items in response to requests received from clients. The server's intent is to optimize metrics quantifying the quality of service the system provides. This method of data dissemination has proved to be an efficient means of delivering information in asymmetric environments demanding massive scalability. Of critical importance in such a system is the algorithm used by the server to construct a schedule of item broadcasts. Due to the real-time nature of this problem, the performance of heuristics designed to construct such schedules depends heavily on request instances, so it is challenging to establish the quality of one algorithm over another. Though several scheduling methods have been developed, these algorithms have been studied with a reliance on probabilistic assumptions and little emphasis on analytical results. In contrast, we provide a formal treatment of the data broadcast scheduling problem in which analytical methods are applied, complemented by simulation experiments. Utilizing a worst-case technique known as competitive analysis, we establish bounds on the performance of various algorithms in the context of several different broadcast models. We describe results in three different settings. For minimizing the total wait time of all requests with a single channel and multiple database items, we establish that the competitive ratios of two well-known algorithms, First Come First Served (FCFS) and Most Requests First (MRF), are equal, and we provide a general lower bound for all algorithms in this context. We describe simulation results that indicate the superior performance of MRF over FCFS on average.
For minimizing two conflicting metrics, the total wait time and the total broadcast cost, with a single channel and a single database item, we develop two on-line algorithms, establish their competitive ratios, and provide an optimal off-line algorithm used to simulate the impact of various parameters on the performance of both on-line heuristics. Finally, we extend the previous model to include multiple database items and establish a lower bound for a greedy algorithm in this context.
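The single-channel, multiple-item setting with the two heuristics named above can be illustrated with a toy simulator. The request trace is invented, and this only reproduces the broadcast model (one item transmitted per slot, one broadcast serving all pending requests for that item), not the competitive-analysis bounds.

```python
# Toy single-channel broadcast scheduler contrasting First Come First
# Served (FCFS), which broadcasts the item with the oldest pending
# request, and Most Requests First (MRF), which broadcasts the item
# with the most pending requests. Ties fall to insertion order.

def total_wait(requests, policy):
    """requests: (arrival_time, item) pairs sorted by arrival time.
    One item is broadcast per time slot; a broadcast serves every
    pending request for that item. Wait is measured from arrival to
    the start of the serving broadcast slot. Returns the summed wait."""
    pending = {}              # item -> list of pending arrival times
    wait, t, i, served = 0, 0, 0, 0
    n = len(requests)
    while served < n:
        while i < n and requests[i][0] <= t:   # ingest new requests
            pending.setdefault(requests[i][1], []).append(requests[i][0])
            i += 1
        if pending:
            if policy == "MRF":
                item = max(pending, key=lambda k: len(pending[k]))
            else:                              # FCFS
                item = min(pending, key=lambda k: min(pending[k]))
            for a in pending.pop(item):        # one broadcast serves all
                wait += t - a
                served += 1
        t += 1
    return wait

trace = [(0, "A"), (0, "B"), (0, "B"), (0, "B")]
print(total_wait(trace, "FCFS"), total_wait(trace, "MRF"))  # → 3 1
```

On this trace MRF clears the popular item first and accumulates less total wait, matching the average-case advantage of MRF reported in the abstract.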

    Lookahead scheduling in a real-time context: Models, algorithms, and analysis

    Our research considers job scheduling, a special type of resource assignment problem. For example, at a cross-docking facility trucks must be assigned to doors where they will be unloaded. The cargo on each truck has various destinations within the facility, and the unloading time for a truck depends on the distance from the assigned door to these destinations. The goal is to assign the trucks to doors while minimizing the amount of time needed to unload all trucks. We study scheduling algorithms for problems like the cross-docking example that differ from traditional algorithms in two ways. First, they operate in real time: the algorithm executes at the same time as the jobs are handled. Because the time the algorithm spends making decisions cannot be used to complete a job, these decisions must be made quickly. Second, our algorithms utilize lookahead, or partial knowledge of jobs that will arrive in the future. The three goals of this research were to demonstrate that lookahead algorithms can be implemented effectively in a real-time context, to measure the amount of improvement gained by utilizing lookahead, and to explore the conditions in which lookahead is beneficial. We present a model suitable for representing problems that include lookahead in a real-time context. Using this model, we develop lookahead algorithms for two important job scheduling systems and argue that these algorithms make decisions efficiently. We then study the performance of lookahead algorithms using mathematical analysis and simulation. Our results provide a detailed picture of the behavior of lookahead algorithms in a real-time context. Our analytical study shows that lookahead algorithms produce schedules that are significantly better than those produced without lookahead. We also found that utilizing Lookahead-1, or knowledge of the next arriving job, produces substantial improvement while requiring the least design effort.
When more lookahead information is used, the solutions are better, but the improvement is not significantly larger than that of a Lookahead-1 algorithm. Further, algorithms utilizing more lookahead are more complex to design, implement, and analyze. We conclude that Lookahead-1 algorithms offer the best balance between improvement and design effort.
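A minimal sketch of the greedy-versus-Lookahead-1 comparison, using two machines and invented job sizes (the thesis's actual scheduling systems are richer than this). It shows that knowing just the next arriving job can already improve the schedule, in line with the abstract's finding.

```python
# Greedy list scheduling on two machines versus a Lookahead-1 variant
# that also sees the next job's processing time before committing.
# Job sizes are invented; makespan (latest finishing time) is the metric.

def greedy(jobs):
    """Assign each job to the least-loaded machine; return the makespan."""
    loads = [0, 0]
    for p in jobs:
        loads[loads.index(min(loads))] += p
    return max(loads)

def lookahead1(jobs):
    """Place each job so as to minimize the makespan after the *next*
    job is placed greedily (Lookahead-1); return the final makespan."""
    loads = [0, 0]
    for k, p in enumerate(jobs):
        nxt = jobs[k + 1] if k + 1 < len(jobs) else 0
        best = None
        for m in (0, 1):
            trial = loads[:]
            trial[m] += p
            trial[trial.index(min(trial))] += nxt  # next job, greedily
            if best is None or max(trial) < best[0]:
                best = (max(trial), m)
        loads[best[1]] += p
    return max(loads)

jobs = [2, 2, 3]
print(greedy(jobs), lookahead1(jobs))  # → 5 4
```

On this sequence the greedy rule splits the two small jobs across both machines and is then forced to stack the large job, while Lookahead-1 anticipates the large job and keeps a machine free for it.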

    Research in Mobile Database Query Optimization and Processing


    Um estudo das estratégias de replicação e reconciliação de banco de dados móveis em um ambiente wireless

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação. This work presents an analysis of different strategies for replicating and reconciling data in mobile databases, based on an experimental case study considering two conventional operating environments under the wireless communication paradigm. The results point to solutions for the development of mobile database systems that operate under conditions of intermittent connectivity.

    Pervasive handheld computing systems

    The technological role of handheld devices is fundamentally changing. Portable computers were traditionally application specific: they were designed and optimised to deliver a specific task. However, it is now commonly acknowledged that future handheld devices need to be multi-functional and capable of executing a range of high-performance applications. This thesis has coined the term pervasive handheld computing systems to refer to this type of mobile device. Portable computers face a number of constraints in trying to meet these objectives. They are physically constrained by their size, their computational power, their memory resources, their power usage, and their networking ability. These constraints challenge pervasive handheld computing systems in achieving their multi-functional and high-performance requirements. This thesis proposes a two-pronged methodology to enable pervasive handheld computing systems to meet their future objectives. The methodology is a fusion of two independent and yet complementary concepts. The first step utilises reconfigurable technology to enhance the physical hardware resources within the environment of a handheld device. This approach recognises that reconfigurable computing has the potential to dynamically increase the system functionality and versatility of a handheld device without major loss in performance. The second step of the methodology incorporates agent-based middleware protocols that help handheld devices effectively manage and utilise these reconfigurable hardware resources within their environment. The thesis asserts that the combined characteristics of reconfigurable computing and agent technology can meet the objectives of pervasive handheld computing systems.

    Sequencing geographical data for efficient query processing on air in mobile computing.

    Geographical data broadcasting is suitable for many large-scale dissemination-based applications because it is independent of the number of users, and thus it can serve as an important part of intelligent information infrastructures for modern cities. In broadcast systems, query response time is greatly affected by the order in which data items are broadcast. However, existing broadcast ordering techniques are not suitable for geographical data because of its multiple dimensions and rich semantics. This research develops cost models and methods for placing geographical data items in a broadcast channel based on their spatial semantics, to reduce response time and energy consumption when processing spatial queries on point data and graph data.
Three cost models are derived to measure the Data Broadcast Wait (DBW), the data access time in the multiplexing scheme (ATDataMul), where both data and indices are broadcast in the same channel, and the data access time in the separate channel scheme (ATDataSep), where data and indices are broadcast in two separate channels. Hypergraph representations are used to represent the spatial relationships of both point data and graph data, and the broadcast data placement problem is then converted to the graph layout problem. A framework for classifying ordering heuristics for different types of geographical data is presented. A low-polynomial-cost approximation graph layout method is used to solve the DBW minimization problem. Based on the proven monotonic relationship between ATDataSep and DBW, the same approximation method is also used for ATDataSep optimization. A novel method is developed to optimize ATDataMul. Experiments using both synthetic and real data are conducted to evaluate the performance of the ordering heuristics and optimization methods. The results show that the R-tree traversal ordering heuristic, in conjunction with the optimization methods, is effective for sequencing point data for spatial range query processing, while the graph partition tree traversal ordering heuristic, in conjunction with the optimization methods, is suitable for sequencing graph data for network path query processing over the air.
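The claim that broadcast order drives response time can be illustrated with a tiny model in the spirit of the Data Broadcast Wait cost: a client tuning in at a random slot of a cyclic schedule must wait until every item its multi-item query needs has gone by. The schedules and query below are invented examples, not the dissertation's cost models.

```python
# Average wait for a multi-item query over a cyclic broadcast schedule:
# a client tunes in at each possible slot with equal probability and
# listens until all needed items have been broadcast. Placing items
# that are queried together adjacently reduces the average wait.

def avg_wait(schedule, query):
    """Average number of slots, over all tune-in points, until every
    item in `query` has appeared in the cyclic `schedule`."""
    n = len(schedule)
    total = 0
    for start in range(n):
        need, wait = set(query), 0
        while need:
            need.discard(schedule[(start + wait) % n])
            wait += 1
        total += wait
    return total / n

# Two items accessed together: adjacent placement vs. separated.
print(avg_wait(["A", "B", "C", "D"], {"A", "B"}))  # → 3.25
print(avg_wait(["A", "C", "B", "D"], {"A", "B"}))  # → 3.5
```

Even in this four-item toy, keeping the co-accessed items adjacent lowers the mean wait, which is the intuition behind converting broadcast placement into a graph layout problem as described above.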