    Facets and Typed Relations as Tools for Reasoning Processes in Information Retrieval

    Faceted arrangement of entities and typed relations for representing different associations between the entities are established tools in knowledge representation. In this paper, we discuss a proposal that combines both tools to draw inferences along relational paths. This approach may yield new benefits for information retrieval processes, especially when modeled for heterogeneous environments in the Semantic Web. Faceted arrangement can be used as a selection tool for the semantic knowledge modeled within the knowledge representation. Typed relations between the entities of different facets can be used as restrictions for selecting them across the facets.
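
    The following Python sketch illustrates the idea in miniature, under assumptions of our own: the entity names, facet labels and relation types are invented for illustration, not taken from the paper. Facets filter each hop's result set while typed relations define which hops are possible, so an inference can walk a relational path across facets.

        from collections import defaultdict

        # Entities grouped by facet (facet -> set of entity ids); illustrative data.
        facets = {
            "Person":   {"curie", "einstein"},
            "Topic":    {"radioactivity", "relativity"},
            "Document": {"paper1", "paper2"},
        }

        # Typed relations as (subject, relation, object) triples.
        triples = [
            ("curie", "researches", "radioactivity"),
            ("einstein", "researches", "relativity"),
            ("paper1", "isAbout", "radioactivity"),
            ("paper2", "isAbout", "relativity"),
        ]

        index = defaultdict(set)    # (subject, relation) -> objects
        inverse = defaultdict(set)  # (object, relation) -> subjects
        for s, r, o in triples:
            index[(s, r)].add(o)
            inverse[(o, r)].add(s)

        def follow(entities, relation, target_facet, inverted=False):
            """Follow one typed relation, restricting results to a facet."""
            table = inverse if inverted else index
            reached = set()
            for e in entities:
                reached |= table[(e, relation)]
            return reached & facets[target_facet]

        # Inference along a relational path: documents about topics researched
        # by a selected person -- each facet restricts that hop's results.
        topics = follow({"curie"}, "researches", "Topic")
        docs = follow(topics, "isAbout", "Document", inverted=True)
        print(docs)  # {'paper1'}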

    Towards an Intelligent Database System Founded on the SP Theory of Computing and Cognition

    The SP theory of computing and cognition, described in previous publications, is an attractive model for intelligent databases because it provides a simple but versatile format for different kinds of knowledge, it has capabilities in artificial intelligence, and it can also function like established database models when that is required. This paper describes how the SP model can emulate other models used in database applications and compares the SP model with those other models. The artificial intelligence capabilities of the SP model are reviewed and its relationship with other artificial intelligence systems is described. Also considered are ways in which current prototypes may be translated into an 'industrial strength' working system.
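
    As a toy illustration of why a single format can stand in for tables and records, the sketch below (our own construction, not the SP implementation) stores knowledge as flat sequences of symbols, loosely in the spirit of SP 'patterns', and retrieves by partial matching; all symbols and data are invented.

        # Knowledge held in one uniform format: flat symbol sequences.
        store = [
            ["person", "name", "Ada", "born", "1815", "field", "mathematics"],
            ["person", "name", "Alan", "born", "1912", "field", "computing"],
            ["book", "title", "Sketch", "author", "Ada"],
        ]

        def matches(query, pattern):
            """True if the query symbols occur, in order, within the pattern."""
            it = iter(pattern)
            return all(sym in it for sym in query)

        def retrieve(query):
            return [p for p in store if matches(query, p)]

        # A query is itself a partial pattern; the unmatched symbols in a
        # stored pattern play a role loosely like selected columns.
        print(retrieve(["person", "field", "computing"]))
        # [['person', 'name', 'Alan', 'born', '1912', 'field', 'computing']]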

    XML document design via GN-DTD

    Designing a well-structured XML document is important for the sake of readability and maintainability. More importantly, this will avoid data redundancies and update anomalies when maintaining a large quantity of XML-based documents. In this paper, we propose a method to improve XML structural design by adopting graphical notations for Document Type Definitions (GN-DTD), which is used to describe the structure of an XML document at the schema level. Multiple levels of normal forms for GN-DTD are proposed on the basis of conceptual-model approaches and theories of normalization. The normalization rules are applied to transform a poorly designed XML document into a well-designed one based on normalized GN-DTD, which is illustrated through examples.
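
    GN-DTD itself is a graphical notation, so the sketch below only mirrors its intent in ordinary DTD syntax: a hypothetical library schema in which author details are stored once and referenced by ID, avoiding the redundancy and update anomalies that repeating author details inside every paper element would cause. It uses the third-party lxml package for DTD validation.

        from io import StringIO
        from lxml import etree  # third-party; pip install lxml

        # Normalized design: authors factored out, papers refer to them by IDREF.
        normalized_dtd = etree.DTD(StringIO("""
        <!ELEMENT library (author*, paper*)>
        <!ELEMENT author (name, affiliation)>
        <!ATTLIST author id ID #REQUIRED>
        <!ELEMENT paper (title)>
        <!ATTLIST paper by IDREF #REQUIRED>
        <!ELEMENT name (#PCDATA)>
        <!ELEMENT affiliation (#PCDATA)>
        <!ELEMENT title (#PCDATA)>
        """))

        doc = etree.XML(b"""
        <library>
          <author id="a1"><name>Lee</name><affiliation>XYZ Univ</affiliation></author>
          <paper by="a1"><title>XML design</title></paper>
          <paper by="a1"><title>Schema normalization</title></paper>
        </library>
        """)

        # One author record serves many papers; updating the affiliation
        # touches a single element instead of every paper.
        print(normalized_dtd.validate(doc))  # True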

    Automating Fine Concurrency Control in Object-Oriented Databases

    Several proposals have been made to provide concurrency control adapted to object-oriented databases. However, most of these proposals miss the fact that considering solely read and write access modes on instances may lead to less parallelism than in relational databases! This paper copes with that issue, and the advantages are numerous: (1) commutativity of methods is determined a priori and automatically by the compiler, without measurable overhead; (2) run-time checking of commutativity is as efficient as for compatibility; (3) inverse operations need not be specified for recovery; (4) this scheme does not preclude more sophisticated approaches; and, last but not least, (5) relational and object-oriented concurrency control schemes with read and write access modes are subsumed under this proposition.
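
    A minimal sketch of the core idea follows, with invented method names and a hand-written table standing in for what the paper derives automatically at compile time: two method invocations on the same object may proceed concurrently exactly when the methods commute.

        # Commutativity table for a counter-like class: increments commute
        # with each other, reads commute with each other, but a read does
        # not commute with an increment.
        COMMUTES = {
            ("incr", "incr"): True,
            ("get", "get"): True,
            ("incr", "get"): False,
            ("get", "incr"): False,
        }

        class CommutativityLock:
            def __init__(self):
                self.held = []  # methods currently executing on this object

            def try_acquire(self, method):
                """Grant access iff `method` commutes with all held methods."""
                if all(COMMUTES[(method, h)] for h in self.held):
                    self.held.append(method)
                    return True
                return False  # caller must wait, as under ordinary locking

            def release(self, method):
                self.held.remove(method)

        lock = CommutativityLock()
        print(lock.try_acquire("incr"))  # True
        print(lock.try_acquire("incr"))  # True: two increments commute,
                                         # where a plain write lock would block
        print(lock.try_acquire("get"))   # False: get/incr do not commute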

    Database independent Migration of Objects into an Object-Relational Database

    This paper reports on the CERN-based WISDOM project, which is studying the serialisation and deserialisation of data to/from an object database (Objectivity) and Oracle 9i.
    Comment: 26 pages, 18 figures; CMS CERN Conference Report cr02_01
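
    The WISDOM code itself targeted C++ objects, Objectivity and Oracle 9i; the sketch below only illustrates the underlying task in Python terms, flattening objects into relational rows and rebuilding them, with SQLite from the standard library standing in for the target database.

        import sqlite3
        from dataclasses import dataclass

        @dataclass
        class Event:            # a stand-in for a persistent physics object
            event_id: int
            energy: float

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE event (event_id INTEGER PRIMARY KEY, "
                     "energy REAL)")

        def serialise(ev: Event) -> None:
            """Flatten the object into a relational row."""
            conn.execute("INSERT INTO event VALUES (?, ?)",
                         (ev.event_id, ev.energy))

        def deserialise(event_id: int) -> Event:
            """Rebuild the object from its stored row."""
            row = conn.execute(
                "SELECT event_id, energy FROM event WHERE event_id = ?",
                (event_id,)).fetchone()
            return Event(*row)

        serialise(Event(1, 97.3))
        print(deserialise(1))  # Event(event_id=1, energy=97.3)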

    Livelisystems: a conceptual framework integrating social, ecosystem, development and evolutionary theory

    Human activity poses multiple environmental challenges for ecosystems that have intrinsic value and also support that activity. Our ability to address these challenges is constrained, inter alia, by weaknesses in cross-disciplinary understandings of interactive processes of change in socio-ecological systems. This paper draws on complementary insights from the social and biological sciences to propose a ‘livelisystems’ framework of multi-scale, dynamic change across social and biological systems. This describes how material, informational and relational assets, asset services and asset pathways interact in systems with embedded and emergent properties undergoing a variety of structural transformations. Related characteristics of ‘higher’ (notably human) livelisystems and change processes are identified as the greater relative importance of (a) informational, relational and extrinsic (as opposed to material and intrinsic) assets, (b) teleological (as opposed to natural) selection, and (c) innovational (as opposed to mutational) change. The framework provides valuable insights into social and environmental challenges posed by global and local change, globalization, poverty, modernization, and growth in the Anthropocene. Its potential for improving inter-disciplinary and multi-scale understanding is discussed, notably through an examination of human adaptation to biodiversity and ecosystem-service change following the spread of Lantana camara in the Western Ghats, India.

    Analyzing high energy physics data using database computing: Preliminary report

    A proof-of-concept system is described for analyzing high energy physics (HEP) data using database computing. The system is designed to scale up to the size required for HEP experiments at the Superconducting Super Collider (SSC) laboratory. These experiments will require collecting and analyzing approximately 10 to 100 million 'events' per year from proton colliding-beam collisions. Each 'event' consists of a set of vectors with a total length of approximately one megabyte. This represents an increase of approximately 2 to 3 orders of magnitude in the amount of data accumulated by present HEP experiments. The system is called the HEPDBC System (High Energy Physics Database Computing System). At present, the Mark 0 HEPDBC System is complete and can analyze HEP experimental data approximately an order of magnitude faster than current production software on data sets of approximately 1 GB. The Mark 1 HEPDBC System is currently undergoing testing and is designed to analyze data sets 10 to 100 times larger.
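
    HEPDBC's internals are not given here, so the following is only a sketch of the general approach the abstract describes: expressing an analysis as set-oriented database operations (a selection 'cut' plus an aggregate over event records) rather than a hand-written event loop; the table layout and numbers are invented.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE event (event_id INTEGER, "
                     "n_tracks INTEGER, total_energy REAL)")
        conn.executemany(
            "INSERT INTO event VALUES (?, ?, ?)",
            [(1, 12, 480.0), (2, 3, 90.5), (3, 25, 1210.0)])

        # The physics 'cuts' become a WHERE clause the database engine can
        # plan and optimize across the whole event set, rather than logic
        # buried inside a per-event loop.
        cut = conn.execute(
            "SELECT COUNT(*), AVG(total_energy) FROM event "
            "WHERE n_tracks >= 10 AND total_energy > 400.0").fetchone()
        print(cut)  # (2, 845.0)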