Reasoning with Data Flows and Policy Propagation Rules
Data-oriented systems and applications are at the centre of current developments of the World Wide Web. In these scenarios, assessing which policies propagate from the licenses of data sources to the output of a given data-intensive system is an important problem. Both policies and data flows can be described with Semantic Web languages. Although it is possible to define Policy Propagation Rules (PPR) by associating policies with data flow steps, this activity results in a huge number of rules to be stored and managed. In a recent paper, we introduced strategies for reducing the size of a PPR knowledge base by using an ontology of the possible relations between data objects, the Datanode ontology, and applying the (A)AAAA methodology, a knowledge engineering approach that exploits Formal Concept Analysis (FCA). In this article, we investigate whether this reasoning is feasible and how it can be performed. For this purpose, we study the impact of compressing a rule base associated with an inference mechanism on the performance of the reasoning process. Moreover, we report on an extension of the (A)AAAA methodology that includes a coherency check algorithm, which makes this reasoning possible. We show how this compression, in addition to being beneficial to the management of the knowledge base, also has a positive impact on the performance and resource requirements of the reasoning process for policy propagation.
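To make the idea concrete, here is a minimal sketch, in Python, of how propagating policies along a data flow with a rule base of this kind could look. The relation labels, policy labels and dictionary encoding are illustrative assumptions for this example, not the actual Datanode relations or the PPR representation used in the paper.

```python
# Toy illustration of policy propagation over a data flow.
# All relation and policy names below are invented for this sketch.

# Data flow edges, read as (input node, relation, output node).
data_flow = [
    ("sensor-data", "refines", "cleaned-data"),
    ("cleaned-data", "aggregates-into", "city-report"),
]

# Policies attached to the original data sources.
policies = {"sensor-data": {"attribution-required", "no-commercial-use"}}

# PPR knowledge base: does a given policy propagate across a given relation?
ppr = {
    ("refines", "attribution-required"): True,
    ("refines", "no-commercial-use"): True,
    ("aggregates-into", "attribution-required"): True,
    ("aggregates-into", "no-commercial-use"): False,  # e.g. aggregation anonymises
}

def propagate(data_flow, policies, ppr):
    """Propagate policies along the data flow until a fixpoint is reached."""
    result = {node: set(p) for node, p in policies.items()}
    changed = True
    while changed:
        changed = False
        for src, relation, dst in data_flow:
            for policy in result.get(src, set()):
                if ppr.get((relation, policy)) and policy not in result.setdefault(dst, set()):
                    result[dst].add(policy)
                    changed = True
    return result

print(propagate(data_flow, policies, ppr))
# 'city-report' ends up with 'attribution-required' but not 'no-commercial-use'.
```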
Propagating Data Policies: a User Study
When publishing data, data licences are used to specify the actions that are permitted or prohibited, and the duties that target data consumers must comply with. However, in complex environments such as a smart city data portal, multiple data sources are constantly being combined, processed and redistributed. In such a scenario, deciding which policies apply to the output of a process, based on the licences attached to its input data, is a difficult, knowledge-intensive task. In this paper, we evaluate how automatic reasoning upon semantic representations of policies and of data flows could support decision making on policy propagation. We report on the results of a user study designed to assess both the accuracy and the utility of such a policy-propagation tool, in comparison to a manual approach.
An Ontological formalization of the planning task
In this paper we propose a generic task ontology, which formalizes the space of planning problems. Although planning is one of the oldest research areas in Artificial Intelligence and attempts have been made in the past at developing task ontologies for planning, these formalizations suffer from serious limitations: they do not exhibit the required level of formalization and precision, and they usually fail to include some of the key concepts required for specifying planning problems. In contrast with earlier proposals, our task ontology formalizes the nature of the planning task independently of any planning paradigm, specific domains, or applications, and provides a fine-grained, precise and comprehensive characterization of the space of planning problems. Finally, in addition to producing a formal specification, we have also operationalized the ontology into a set of executable definitions, which provide a concrete reusable resource for knowledge acquisition and system development in planning applications.
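As a rough illustration of what a paradigm-independent specification of a planning task involves, the sketch below renders the core notions (states, operators with preconditions and effects, initial state, goal) as plain Python data structures. The class and attribute names are assumptions made for this example and do not reproduce the ontology's actual vocabulary.

```python
from dataclasses import dataclass

# Illustrative, paradigm-neutral rendering of a planning task.
# Names (State, Operator, PlanningTask) are assumptions, not the ontology's terms.

State = frozenset  # a state is modelled as a set of ground facts

@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset
    add_effects: frozenset
    delete_effects: frozenset

    def applicable(self, state: State) -> bool:
        return self.preconditions <= state

    def apply(self, state: State) -> State:
        return (state - self.delete_effects) | self.add_effects

@dataclass
class PlanningTask:
    initial_state: State
    goal: frozenset
    operators: list

    def is_solution(self, plan: list) -> bool:
        """Check that a sequence of operators maps the initial state to a goal state."""
        state = self.initial_state
        for op in plan:
            if not op.applicable(state):
                return False
            state = op.apply(state)
        return self.goal <= state
```

On this reading, a solution is any operator sequence that is applicable step by step and ends in a state satisfying the goal, which mirrors the aim of characterising planning problems independently of how any particular planner searches for such a sequence.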
An ontology for the description of and navigation through philosophical resources
What does it mean for a student to come to an understanding of a philosophical standpoint, and can the explosion of resources now available on the web support this process, or is it inclined instead to create more confusion? We believe that a possible answer to the problem of finding a way through the morass of information on the web to the philosophical insights it conceals, and can be made to reveal, lies in the process of narrative pathway generation: the active linking of resources into a learning path that contextualizes them with respect to one another. This result can be achieved only if the content of the resources is indexed, not just their status as a text document, an image or a video. To this aim, we propose a formal conceptualization of the domain of philosophy, an ontology that would allow the categorization of resources according to a series of pre-agreed content descriptors. Within an e-learning scenario, a teacher could use a tool comprising such an ontology to annotate available philosophical materials at various levels of granularity, and let the students explore this semantic space in an unsupervised manner, according to pre-defined narrative pathways.
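As a toy illustration of narrative pathway generation over ontology-annotated resources, the sketch below orders resources by following an assumed "presupposes" link between the concepts they are annotated with. The concept names, resource names and the presupposes relation are invented for the example and are not taken from the proposed ontology.

```python
from graphlib import TopologicalSorter

# Hypothetical example: resources annotated with ontology concepts, and a
# narrative pathway derived from "presupposes" links between those concepts.

presupposes = {            # concept -> concepts it presupposes
    "utilitarianism": {"consequentialism"},
    "consequentialism": {"normative-ethics"},
    "normative-ethics": set(),
}

annotations = {            # resource -> concept it is mainly about
    "bentham-intro.pdf": "utilitarianism",
    "ethics-lecture-1.mp4": "normative-ethics",
    "mill-reader.html": "consequentialism",
}

def narrative_pathway(annotations, presupposes):
    """Order resources so that presupposed concepts are encountered first."""
    concept_order = list(TopologicalSorter(presupposes).static_order())
    rank = {c: i for i, c in enumerate(concept_order)}
    return sorted(annotations, key=lambda r: rank[annotations[r]])

print(narrative_pathway(annotations, presupposes))
# ['ethics-lecture-1.mp4', 'mill-reader.html', 'bentham-intro.pdf']
```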
Rexplore: unveiling the dynamics of scholarly data
Rexplore is a novel system that integrates semantic technologies, data mining techniques, and visual analytics to provide an innovative environment for making sense of scholarly data. Its functionalities include: i) a variety of views to make sense of important trends in research; ii) a novel semantic approach for characterising research topics; iii) a very fine-grained expert search with detailed multi-dimensional parameters; iv) an innovative graph view to relate a variety of academic entities; v) the ability to detect and explore the main communities within a research topic; vi) the ability to analyse research performance at different levels of abstraction, including individual researchers, organizations, countries, and research communities.
Semantic learning webs
By 2020, microprocessors will likely be as cheap and plentiful as scrap paper, scattered by the millions into the environment, allowing us to place intelligent systems everywhere. This will change everything around us, including the nature of commerce, the wealth of nations, and the way we communicate, work, play, and live. This will give us smart homes, cars, TVs, jewellery, and money. We will speak to our appliances, and they will speak back. Scientists also expect the Internet will wire up the entire planet and evolve into a membrane consisting of millions of computer networks, creating an “intelligent planet.” The Internet will eventually become a “Magic Mirror” that appears in fairy tales, able to speak with the wisdom of the human race.
Michio Kaku, Visions: How Science Will Revolutionize the Twenty-First Century, 1998
If the semantic web needed a symbol, a good one to use would be a Navaho dream-catcher: a small web, lovingly hand-crafted, [easy] to look at, and rumored to catch dreams; but really more of a symbol than a reality.
Pat Hayes, Catching the Dreams, 2002
Though it is almost impossible to envisage what the Web will be like by the end of the next decade, we can say with some certainty that it will have continued its seemingly unstoppable growth. Given the investment of time and money in the Semantic Web (Berners-Lee et al., 2001), we can also be sure that some form of semanticization will have taken place. This might be superficial, accomplished simply through the addition of loose forms of meta-data mark-up, or more principled, grounded in ontologies and formalised by means of emerging semantic web standards such as RDF (Lassila and Swick, 1999) or OWL (McGuinness and van Harmelen, 2003). Whatever the case, the addition of semantic mark-up will make at least part of the Web more readily accessible to humans and their software agents and will facilitate agent interoperability.
If current research is successful, there will also be a plethora of e-learning platforms making use of a varied menu of reusable educational material or learning objects. For the learner, the semanticized Web will, in addition, offer rich seams of diverse learning resources over and above the course materials (or learning objects) specified by course designers. For instance, the annotation registries, which provide access to marked up resources, will enable more focussed, ontologically-guided (or semantic) search. This much is already in development. But we can go much further. Semantic technologies make it possible not only to reason about the Web as if it is one extended knowledge base but also to provide a range of additional educational semantic web services such as summarization, interpretation or sense-making, structure-visualization, and support for argumentation.
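As a small, hypothetical example of the kind of ontologically-guided search mentioned above, the sketch below annotates a couple of learning resources with topic concepts in RDF (assuming the rdflib library is available) and retrieves them with a SPARQL query. The example.org namespace, the class and property names and the topics are all invented for illustration; only RDF and SPARQL themselves correspond to the standards discussed here.

```python
from rdflib import Graph, Namespace, RDF

# Invented namespace and vocabulary for a tiny annotation registry.
EX = Namespace("http://example.org/learning#")
g = Graph()

g.add((EX.lecture1, RDF.type, EX.LearningObject))
g.add((EX.lecture1, EX.covers, EX.OntologyEngineering))
g.add((EX.quiz3, RDF.type, EX.LearningObject))
g.add((EX.quiz3, EX.covers, EX.DescriptionLogics))
g.add((EX.OntologyEngineering, EX.broader, EX.SemanticWeb))
g.add((EX.DescriptionLogics, EX.broader, EX.SemanticWeb))

# Ontologically-guided search: every learning object about a topic narrower
# than "SemanticWeb", even though none is annotated with that term directly.
results = g.query("""
    PREFIX ex: <http://example.org/learning#>
    SELECT ?resource WHERE {
        ?resource a ex:LearningObject ;
                  ex:covers ?topic .
        ?topic ex:broader ex:SemanticWeb .
    }
""")
for row in results:
    print(row.resource)
```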
Language technologies and the evolution of the semantic web
The availability of huge amounts of semantic markup on the Web promises to enable a quantum leap in the level of support available to Web users for locating, aggregating, sharing, interpreting and customizing information. While we cannot claim that a large-scale Semantic Web already exists, a number of applications have been produced which generate and exploit semantic markup, to provide advanced search and querying functionalities, and to allow the visualization and management of heterogeneous, distributed data. While these tools provide evidence of the feasibility and tremendous potential value of the enterprise, they all suffer from major limitations, which have to do primarily with the limited degree of scale and heterogeneity of the semantic data they use. Nevertheless, we argue that we are at a key point in the brief history of the Semantic Web and that the very latest demonstrators already give us a glimpse of what future applications will look like. In this paper, we describe the already visible effects of these changes by analyzing the evolution of Semantic Web tools from smart databases towards applications that harness collective intelligence. We also point out that language technology plays an important role in making this evolution sustainable, and we highlight the need for improved support, especially in the area of large-scale linguistic resources.
Understanding research dynamics
Rexplore leverages novel solutions in data mining, semantic technologies and visual analytics, and provides an innovative environment for exploring and making sense of scholarly data. Rexplore allows users: 1) to detect and make sense of important trends in research; 2) to identify a variety of interesting relations between researchers, beyond the standard co-authorship relations provided by most other systems; 3) to perform fine-grained expert search with respect to detailed multi-dimensional parameters; 4) to detect and characterize the dynamics of interesting communities of researchers, identified on the basis of shared research interests and scientific trajectories; 5) to analyse research performance at different levels of abstraction, including individual researchers, organizations, countries, and research communities.
Semantic web technology to support learning about the semantic web
This paper describes ASPL, an Advanced Semantic Platform for Learning, designed using the Magpie framework with the aim of supporting students learning about the Semantic Web research area. We describe the evolution of ASPL and illustrate how we used the results from a formal evaluation of the initial system to re-design the user functionalities. The second version of ASPL semantically interprets the results provided by a non-semantic web mining tool and uses them to support various forms of semantics-assisted exploration, based on pedagogical strategies such as performing later reasoning steps and problem space filtering.