
    Special issue on soft computing applications to intelligent information retrieval on the Internet

    This special issue encompasses eleven papers devoted to recent developments in the application of soft computing (SC) techniques to information retrieval (IR), in both the text and Web retrieval areas. The seed of the current issue was a set of presentations made in two special sessions organized by the guest editors at two different conferences: the First Spanish Conference on Evolutionary and Bioinspired Algorithms (AEB’02), held in Mérida, Spain, in February 2002, and the Seventh International ISKO Conference (ISKO’02), held in Granada, Spain, in July 2002. The scopes of the two special sessions were closely related: in the former conference, the session topic was “Applications of Evolutionary Computation to Information Retrieval”, while in the latter the session was entitled “Artificial Intelligence Applications to Information Retrieval”.

    Bayesian Logic Programs

    Bayesian networks provide an elegant formalism for representing and reasoning about uncertainty using probability theory. They are a probabilistic extension of propositional logic and, hence, inherit some of its limitations, such as the difficulty of representing objects and relations. We introduce a generalization of Bayesian networks, called Bayesian logic programs, to overcome these limitations. In order to represent objects and relations, it combines Bayesian networks with definite clause logic by establishing a one-to-one mapping between ground atoms and random variables. We show that Bayesian logic programs combine the advantages of both definite clause logic and Bayesian networks. This includes the separation of the quantitative and qualitative aspects of the model. Furthermore, Bayesian logic programs generalize both Bayesian networks and logic programs. So, many ideas developed … Comment: 52 pages
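
    To make the ground-atom-to-random-variable mapping concrete, here is a minimal sketch in Python (a toy pedigree domain with invented predicate names, not the paper's notation or implementation): grounding a clause over the domain turns each ground atom into a node whose parents are the ground atoms in the matching clause body.

        # Minimal illustrative sketch, not the authors' system: ground a "Bayesian clause"
        # over a toy domain so that each ground atom becomes a random-variable node whose
        # parents are the ground atoms appearing in the matching clause body.
        persons = ["ann", "bob", "cid"]
        mother_of = {"bob": "ann", "cid": "ann"}    # logical facts: mother(ann,bob), mother(ann,cid)

        # Hypothetical clause: bloodtype(X) | bloodtype(M) whenever mother(M, X) holds;
        # persons without a recorded mother become root nodes with a prior.
        def ground_program():
            parents = {}                            # ground atom -> list of parent ground atoms
            for x in persons:
                head = ("bloodtype", x)
                if x in mother_of:
                    parents[head] = [("bloodtype", mother_of[x])]
                else:
                    parents[head] = []
            return parents

        if __name__ == "__main__":
            for atom, pa in sorted(ground_program().items()):
                print(atom, "<-", pa)

    Each node would then carry a conditional probability table, so the qualitative (logical) structure and the quantitative parameters stay separate, as the abstract emphasizes.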

    A Model-Based Approach to Impact Analysis Using Model Differencing

    Impact analysis is concerned with the identification of the consequences of changes and is therefore an important activity for software evolution. In model-based software development, models are core artifacts, which are often used to generate essential parts of a software system. Changes to a model can thus substantially affect different artifacts of a software system. In this paper, we propose a model-based approach to impact analysis, in which explicit impact rules can be specified in a domain-specific language (DSL). These impact rules define the consequences of designated UML class diagram changes for software artifacts and the need for dependent activities such as data evolution. The UML class diagram changes are identified automatically using model differencing. The advantage of using explicit impact rules is that they enable the formalization of knowledge about a product. By explicitly defining this knowledge, it is possible to create a checklist with hints about development steps that are (potentially) necessary to manage the evolution. To validate the feasibility of our approach, we provide the results of a case study. Comment: 16 pages, 5 figures. In: Proceedings of the 8th International Workshop on Software Quality and Maintainability (SQM), ECEASST Journal, vol. 65, 2014
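
    As a rough illustration of the kind of rule described above (the change kinds, rule syntax, and element names here are invented for the example, not the paper's DSL), an impact rule can map a detected class-diagram difference to a checklist hint:

        # Hypothetical impact rules keyed on model-difference kinds; each rule yields a
        # checklist hint. Illustrative sketch only, not the paper's DSL or tooling.
        from dataclasses import dataclass

        @dataclass
        class ModelChange:
            kind: str       # e.g. "attribute_removed", "class_renamed"
            element: str    # qualified name of the changed model element

        IMPACT_RULES = {
            "attribute_removed": lambda c: f"Plan data evolution: drop or migrate the column for {c.element}",
            "class_renamed":     lambda c: f"Update generated code and persistence mapping for {c.element}",
        }

        def impact_checklist(diff):
            """Turn a list of detected class-diagram changes into development hints."""
            return [IMPACT_RULES[c.kind](c) for c in diff if c.kind in IMPACT_RULES]

        if __name__ == "__main__":
            diff = [ModelChange("attribute_removed", "Customer.birthDate")]
            for hint in impact_checklist(diff):
                print(hint)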

    Toward Self-Organising Service Communities

    This paper discusses a framework in which catalog service communities are built, linked for interaction, and constantly monitored and adapted over time. A catalog service community (represented as a peer node in a peer-to-peer network) in our system can be viewed as a domain-specific data integration mediator representing the domain knowledge and the registry information. Query routing among communities is performed to identify a set of data sources that are relevant to answering a given query. The system monitors the interactions between the communities to discover patterns that may lead to restructuring of the network (e.g., irrelevant peers removed, new relationships created, etc.).
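
    As a loose sketch of the routing idea (the keyword-overlap scoring and community names below are assumptions for illustration, not the paper's algorithm), a query can be forwarded to the communities whose advertised domain terms best match it:

        # Illustrative sketch: route a query to the communities whose advertised domain
        # terms overlap most with the query terms. The scoring scheme is assumed here.
        communities = {
            "travel-catalogs": {"flight", "hotel", "booking", "itinerary"},
            "auto-catalogs":   {"car", "engine", "parts", "dealer"},
            "book-catalogs":   {"book", "author", "isbn", "publisher"},
        }

        def route(query, top_k=2):
            """Return up to top_k community names whose descriptions overlap the query."""
            terms = set(query.lower().split())
            scored = [(len(terms & desc), name) for name, desc in communities.items()]
            scored = [s for s in scored if s[0] > 0]
            return [name for _, name in sorted(scored, reverse=True)[:top_k]]

        if __name__ == "__main__":
            print(route("cheap hotel and flight booking"))   # -> ['travel-catalogs']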

    A Call to Arms: Revisiting Database Design

    Good database design is crucial to obtain a sound, consistent database, and - in turn - good database design methodologies are the best way to achieve the right design. These methodologies are taught to most Computer Science undergraduates as part of any Introduction to Databases class. They can be considered part of the "canon", and indeed, the overall approach to database design has been unchanged for years. Moreover, none of the major database research assessments identify database design as a strategic research direction. Should we conclude that database design is a solved problem? Our thesis is that database design remains a critical unsolved problem; hence, it should be the subject of more research. Our starting point is the observation that traditional database design is not used in practice - and if it were used, it would result in designs that are not well adapted to current environments. In short, database design has failed to keep up with the times. In this paper, we put forth arguments to support our viewpoint, analyze the root causes of this situation, and suggest some avenues of research. Comment: Removed spurious column break. Nothing else was changed

    Smart city for sustainable urban freight logistics

    International audience.

    Implementing infrastructures for managing learning objects

    Klemke, R., Ternier, S., Kalz, M., & Specht, M. (2010). Implementing infrastructures for managing learning objects. British Journal of Educational Technology, 41(6), 873-882. doi: 10.1111/j.1467-8535.2010.01127.x. Preprint version; original available at http://dx.doi.org/10.1111/j.1467-8535.2010.01127.x, retrieved October 20, 2010.
    Making learning objects available is critical for reusing learning resources. Making content transparently available and providing added value to different stakeholders is among the goals of the European Commission's eContentPlus programme. This article analyses standards and protocols relevant for making learning objects accessible in distributed data provider networks. Types of metadata associated with learning objects and methods for metadata generation are discussed. Experiences from European projects highlight problems in implementing infrastructures and mapping metadata types into common application profiles. The use of learning content and its associated metadata in different scenarios … (ICOPER, Share.TEC, OpenScout)
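
    As a very rough illustration of mapping heterogeneous metadata into a common application profile (the field names, providers, and target profile below are invented for the example, not the standards the article analyses), the harmonisation step might look like:

        # Illustrative sketch: project provider-specific learning-object metadata onto a
        # common application profile. Field names and the target profile are assumptions.
        COMMON_FIELDS = ("title", "description", "language", "rights")

        FIELD_MAPS = {
            "provider_a": {"title": "name", "description": "summary", "language": "lang", "rights": "license"},
            "provider_b": {"title": "title", "description": "abstract", "language": "language", "rights": "terms"},
        }

        def to_common_profile(provider, record):
            """Project a raw metadata record onto the common application profile."""
            mapping = FIELD_MAPS[provider]
            return {field: record.get(mapping[field], "") for field in COMMON_FIELDS}

        if __name__ == "__main__":
            raw = {"name": "Intro to Fractions", "summary": "A short lesson", "lang": "en", "license": "CC-BY"}
            print(to_common_profile("provider_a", raw))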

    Educational Considerations, vol. 28 (2) Full Issue

    Educational Considerations, vol. 28 (2), Spring 2001 - Full issue

    Multi-dimensional data indexing and range query processing via Voronoi diagram for internet of things

    In a typical Internet of Things (IoT) deployment such as smart cities and Industry 4.0, the amount of sensory data collected from the physical world is significant and wide-ranging. Processing large amounts of real-time data from diverse IoT devices is challenging. For example, in IoT environments, wireless sensor networks (WSNs) are typically used for monitoring and collecting data in some geographic area. Spatial range queries with location constraints are traditionally employed in such applications to facilitate data indexing, allowing the data to be queried and managed with an SQL-style structure. One particular challenge is to minimize the communication cost and storage requirements of multi-dimensional data indexing approaches. In this paper, we present an energy- and time-efficient multi-dimensional data indexing scheme designed to answer range queries. Specifically, we propose data indexing methods that utilize hierarchical indexing structures based on binary space partitioning (BSP), such as the kd-tree, quad-tree, k-means clustering, and Voronoi-based methods, to provide more efficient routing with lower latency. Simulation results demonstrate that the Voronoi diagram-based algorithm minimizes the average energy consumption and query response time.
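
    As a generic illustration of the hierarchical spatial indexing the abstract refers to (this is a plain 2-D kd-tree range query, not the paper's energy-aware scheme), a range query descends the tree and prunes subtrees that cannot intersect the query box:

        # Generic illustration (not the paper's scheme): a tiny 2-D kd-tree answering an
        # axis-aligned range query, the kind of hierarchical BSP index mentioned above.
        class Node:
            def __init__(self, point, axis, left=None, right=None):
                self.point, self.axis, self.left, self.right = point, axis, left, right

        def build(points, depth=0):
            """Build a kd-tree by splitting on the median along alternating axes."""
            if not points:
                return None
            axis = depth % 2
            points = sorted(points, key=lambda p: p[axis])
            mid = len(points) // 2
            return Node(points[mid], axis,
                        build(points[:mid], depth + 1),
                        build(points[mid + 1:], depth + 1))

        def range_query(node, lo, hi, out):
            """Collect points p with lo[d] <= p[d] <= hi[d] in both dimensions."""
            if node is None:
                return
            if all(lo[d] <= node.point[d] <= hi[d] for d in range(2)):
                out.append(node.point)
            d = node.axis
            if lo[d] <= node.point[d]:      # query box may reach the left subtree
                range_query(node.left, lo, hi, out)
            if hi[d] >= node.point[d]:      # query box may reach the right subtree
                range_query(node.right, lo, hi, out)

        if __name__ == "__main__":
            tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
            hits = []
            range_query(tree, lo=(3, 1), hi=(8, 5), out=hits)
            print(hits)   # points inside the box [3, 8] x [1, 5]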