
    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author’s and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

    Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent. (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan’s predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge.
    The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central for the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
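
    To make the ontology-mapping task concrete, here is a minimal sketch of label-based matching between two hypothetical class lists. The class names and threshold are invented for illustration; AKT's actual mapping techniques are not given in this abstract and would draw on richer lexical and structural evidence.

```python
# Hypothetical sketch: propose ontology class mappings from label similarity.
# Class lists and the 0.7 threshold are invented for illustration; real
# mapping systems also use structural and semantic evidence.
from difflib import SequenceMatcher

ontology_a = ["Person", "Organisation", "Publication", "Project"]
ontology_b = ["Agent", "Organization", "Paper", "ResearchProject"]

def label_similarity(a: str, b: str) -> float:
    """Character-level similarity between two class labels, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.7  # assumed cut-off for proposing a candidate mapping
for a in ontology_a:
    best = max(ontology_b, key=lambda b: label_similarity(a, b))
    if label_similarity(a, best) >= THRESHOLD:
        print(f"{a} <-> {best} ({label_similarity(a, best):.2f})")
# -> Organisation <-> Organization (0.92)
```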

    Disaster Data Management in Cloud Environments

    Facilitating decision-making in a vital discipline such as disaster management requires information gathering, sharing, and integration on a global scale and across governments, industries, communities, and academia. A large quantity of immensely heterogeneous disaster-related data is available; however, current data management solutions offer few or no integration capabilities and limited potential for collaboration. Moreover, recent advances in cloud computing, Big Data, and NoSQL have opened the door for new solutions in disaster data management. In this thesis, a Knowledge as a Service (KaaS) framework is proposed for disaster cloud data management (Disaster-CDM) with the objectives of 1) facilitating information gathering and sharing, 2) storing large amounts of disaster-related data from diverse sources, and 3) facilitating search and supporting interoperability and integration. Data are stored in a cloud environment, taking advantage of NoSQL data stores. The proposed framework is generic, but this thesis focuses on the disaster management domain and the data formats commonly present in that domain, i.e., file-style formats such as PDF, text, MS Office files, and images. The framework component responsible for addressing simulation models is SimOnto. SimOnto, as proposed in this work, transforms domain simulation models into an ontology-based representation with the goal of facilitating integration with other data sources, supporting simulation model querying, and enabling rule and constraint validation. Two case studies presented in this thesis illustrate the use of Disaster-CDM on the data collected during the Disaster Response Network Enabled Platform (DR-NEP) project. The first case study demonstrates Disaster-CDM's integration capabilities through full-text search and querying services. In contrast to direct full-text search, Disaster-CDM full-text search also covers simulation model files and text contained in image files. Moreover, Disaster-CDM provides querying capabilities, and this case study demonstrates how file-style data can be queried by taking advantage of a NoSQL document data store. The second case study focuses on simulation models and uses SimOnto to transform proprietary simulation models into ontology-based models, which are then stored in a graph database. This case study demonstrates Disaster-CDM's benefits by showing how simulation models can be queried and how model compliance with rules and constraints can be validated.
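
    As a rough illustration of the document-store side of such a framework, the following sketch stores a file-style record and runs a full-text search, assuming a MongoDB document store; the database, collection, fields, and query term are hypothetical rather than taken from Disaster-CDM itself.

```python
# A sketch assuming a MongoDB document store; database, collection, fields,
# and the query term are illustrative, not Disaster-CDM's actual schema.
from pymongo import MongoClient, TEXT

client = MongoClient("mongodb://localhost:27017")
docs = client["disaster_cdm"]["documents"]

# File-style sources (PDF, text, office files, OCR'd images) are stored as
# documents whose extracted text goes into the "content" field.
docs.insert_one({
    "source": "response_plan.pdf",
    "format": "pdf",
    "content": "Evacuation routes for coastal flooding scenarios ...",
})

# A text index enables full-text search across everything stored so far.
docs.create_index([("content", TEXT)])
for doc in docs.find({"$text": {"$search": "flooding"}}):
    print(doc["source"])
```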

    Knowledge extraction from web based application source code: an approach to database reverse engineering for ontology development

    This paper presents a novel approach to extracting knowledge from web-based application source code to supplement and assist ontology development from database schemas. The structure of web-based application source code is defined in order to distinguish the different kinds of knowledge within the source code that are relevant to ontology development. The connections between the relevant parts of web application source code and the backend database schema, in their various forms, are explicitly specified in detail. A knowledge processing and integration model for extracting and integrating the knowledge embedded in the source code for ontology development is then proposed.
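
    A minimal sketch of the general idea, under heavy assumptions, follows: scan application source for SQL table references and emit candidate ontology classes. The regex, namespace URI, and sample snippet are invented; the paper's actual source-to-schema connections are specified in far more detail.

```python
# Hypothetical sketch: derive candidate ontology classes from table names
# referenced in web-application source code.
import re

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

source_code = '''
$result = mysql_query("SELECT name, address FROM customer");
$rows = mysql_query("SELECT total FROM purchase_order");
'''

EX = Namespace("http://example.org/ontology#")
g = Graph()
g.bind("ex", EX)

# Each table referenced in the source becomes a candidate ontology class.
for table in re.findall(r"FROM\s+(\w+)", source_code):
    cls = EX[table.title().replace("_", "")]
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.label, Literal(table)))

print(g.serialize(format="turtle"))
```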

    Collecting Open Source Intelligence via Tailored Information Delivery Systems

    The Internet offers a plethora of freely available information for possible use in Open Source Intelligence (OSINT) operations. However, along with this information come challenges in finding relevant information and overcoming information overload. This paper presents the results of ongoing research on a Tailored Information Delivery Services (TIDS) system that aids users in retrieving relevant information from various open intelligence sources. TIDS provides a semantics-based query constructor that operates in a “What You Get Is What You Need” (WYGIWYN™) fashion and builds on ontology-based information tagging, a theme extractor, and a contextual model.
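
    A hypothetical sketch of ontology-based tagging in the spirit of the TIDS theme extractor follows; the mini term ontology and the sample text are invented for illustration.

```python
# Invented mini-ontology and sample text, illustrating ontology-based
# tagging of open-source material against known themes.
ontology_terms = {
    "cyber": ["malware", "phishing", "botnet"],
    "infrastructure": ["pipeline", "power grid", "water supply"],
}

def tag_document(text: str) -> dict[str, list[str]]:
    """Return each ontology theme whose terms occur in the text."""
    lowered = text.lower()
    return {
        theme: hits
        for theme, terms in ontology_terms.items()
        if (hits := [t for t in terms if t in lowered])
    }

article = "Officials report a phishing campaign targeting the power grid."
print(tag_document(article))
# -> {'cyber': ['phishing'], 'infrastructure': ['power grid']}
```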

    Knowledge as a Service Framework for Disaster Data Management

    Each year, a number of natural disasters strike across the globe, killing hundreds and causing billions of dollars in property and infrastructure damage. Minimizing the impact of disasters is imperative in today’s society. As the capabilities of software and hardware evolve, so does the role of information and communication technology in disaster mitigation, preparation, response, and recovery. A large quantity of disaster-related data is available, including response plans, records of previous incidents, simulation data, social media data, and Web sites. However, current data management solutions offer few or no integration capabilities. Moreover, recent advances in cloud computing, big data, and NoSQL open the door for new solutions in disaster data management. In this paper, a Knowledge as a Service (KaaS) framework is proposed for disaster cloud data management (Disaster-CDM), with the objectives of 1) storing large amounts of disaster-related data from diverse sources, 2) facilitating search, and 3) supporting their interoperability and integration. Data are stored in a cloud environment using a combination of relational and NoSQL databases. The case study presented in this paper illustrates the use of Disaster-CDM on an example of simulation models.
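
    The simulation-model case study suggests graph storage; the sketch below, assuming a local Neo4j instance, stores two hypothetical model components and queries the flow between them. The node labels, properties, and Cypher statements are illustrative only, not the paper's actual schema.

```python
# A sketch assuming a local Neo4j instance; labels, properties, and the
# Cypher queries are invented for illustration.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))
with driver.session() as session:
    # Store two model components and the flow between them.
    session.run(
        "MERGE (p:Component {name: $pump}) "
        "MERGE (t:Component {name: $tank}) "
        "MERGE (p)-[:FEEDS]->(t)",
        pump="Pump1", tank="Reservoir",
    )
    # Query the model structure: everything downstream of the pump.
    result = session.run(
        "MATCH (:Component {name: $name})-[:FEEDS*]->(d) RETURN d.name",
        name="Pump1",
    )
    for record in result:
        print(record["d.name"])
driver.close()
```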

    A Novel Approach to Ontology Management

    The term ontology is defined as the explicit specification of a conceptualization. While much prior research has focused on the technical aspects of ontology management, little attention has been paid to investigating the issues that limit the widespread use of ontologies and to evaluating the effectiveness of ontologies in improving task performance. This dissertation addresses this void by developing approaches to ontology creation, refinement, and evaluation, following a multi-paper model with one study devoted to each. The first study develops and evaluates a method for ontology creation using knowledge available on the Web. The second study develops a methodology for ontology refinement through pruning and empirically evaluates the effectiveness of this method. The third study investigates the impact of an ontology on use case modeling, a complex, knowledge-intensive organizational task in the context of IS development. The three studies follow the design science research approach, and each builds and evaluates IT artifacts. Together they contribute to knowledge by developing solutions to three important issues in the effective development and use of ontologies.
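
    As an illustration of what pruning-based refinement can look like, here is a hypothetical sketch that drops rarely used concepts from a small class hierarchy; the hierarchy, usage counts, and threshold are invented and do not represent the dissertation's method.

```python
# Invented hierarchy, usage counts, and threshold, illustrating one
# frequency-based flavour of ontology pruning.
hierarchy = {  # child concept -> parent concept
    "Sedan": "Car", "Coupe": "Car", "Car": "Vehicle", "Unicycle": "Vehicle",
}
usage_counts = {"Sedan": 40, "Coupe": 12, "Car": 90, "Unicycle": 1}
THRESHOLD = 5  # concepts used fewer than this many times are pruned

def prune(hierarchy: dict, counts: dict) -> dict:
    """Drop rare concepts, re-linking their children to the grandparent."""
    pruned = dict(hierarchy)
    for concept, count in counts.items():
        if count < THRESHOLD and concept in pruned:
            parent = pruned.pop(concept)
            for child in pruned:
                if pruned[child] == concept:
                    pruned[child] = parent
    return pruned

print(prune(hierarchy, usage_counts))
# -> {'Sedan': 'Car', 'Coupe': 'Car', 'Car': 'Vehicle'}
```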

    Ontology and the Semantic Web

    This paper discusses the development of a new information representation system embodied in ontology and the Semantic Web. The new system differs from other representation systems in that it is based on a more sophisticated semantic representation of information, aims to go well beyond the document level, and is designed to be understood and processed by machines. The common theme underlying these three features, i.e., turning documents into meaningful, interchangeable data, reflects a rising expectation of use nurtured by modern technology and, at the same time, presents a unique challenge for its enabling technologies.
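
    A small sketch of what machine-processable representation means in practice: a fact stored as RDF data can be queried directly by software, independently of any document. The URIs and the fact itself are illustrative, with Python's rdflib standing in for whatever toolchain actually consumes the data.

```python
# Illustrative only: the URIs and the stored fact are invented.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.SemanticWeb, EX.extends, EX.WorldWideWeb))
g.add((EX.SemanticWeb, EX.proposedBy, Literal("Tim Berners-Lee")))

# A SPARQL query operates on the data itself, not on a document about it.
query = """
SELECT ?who WHERE {
  <http://example.org/SemanticWeb> <http://example.org/proposedBy> ?who
}
"""
for row in g.query(query):
    print(row.who)  # -> Tim Berners-Lee
```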

    Geographic Information Systems for Real-Time Environmental Sensing at Multiple Scales

    The purpose of this investigation was to design, implement, and apply a real-time geographic information system for data-intensive water resource research and management. The research presented is part of an ongoing, interdisciplinary research program supporting the development of the Intelligent RiverŸ observation instrument. The objectives of this research were to 1) design and describe software architecture for a streaming environmental sensing information system, 2) implement and evaluate the proposed information system, and 3) apply the information system for monitoring, analysis, and visualization of an urban stormwater improvement project located in the City of Aiken, South Carolina, USA. This research contributes to the fields of software architecture and urban ecohydrology. The first contribution is a formal architectural description of a streaming environmental sensing information system. This research demonstrates the operation of the information system and provides a reference point for future software implementations. Contributions to urban ecohydrology are in three areas. First, a characterization of soil properties for the study region of the City of Aiken, SC is provided. The analysis includes an evaluation of spatial structure for soil hydrologic properties. Findings indicate no detectable structure at the scales explored during the study. The second contribution to ecohydrology comes from a long-term, continuous monitoring program for bioinfiltration basin structures located in the study area. Results include an analysis of soil moisture dynamics based on data collected at multiple depths with high spatial and temporal resolution. A novel metric is introduced to evaluate the long-term performance of bioinfiltration basin structures based on soil moisture observation data. Findings indicate a decrease in basin performance over time for the monitored sites. The third contribution to the field of ecohydrology is the development and application of a spatially and temporally explicit rainfall infiltration and excess model. The model enables the simulation and visualization of bioinfiltration basin hydrologic response at within-catchment scales. The model is validated against observed soil moisture data. Results include visualizations and stormwater volume calculations based on measured versus predicted bioinfiltration basin performance over time.
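
    The abstract does not give the novel metric itself, so the following is only a hypothetical analogy: a drawdown-rate calculation over a soil moisture time series, of the general kind a soil-moisture-based performance metric might use. The readings and interval are invented.

```python
# Hypothetical analogy only: mean drawdown rate over hourly volumetric
# water content readings; the readings and interval are invented.
def drawdown_rate(moisture: list[float], interval_hours: float) -> float:
    """Average decrease in soil moisture per hour across declining steps."""
    drops = [a - b for a, b in zip(moisture, moisture[1:]) if a > b]
    return sum(drops) / (len(drops) * interval_hours) if drops else 0.0

# Hourly readings following a storm; a slower decline over repeated storms
# would suggest declining basin performance.
readings = [0.42, 0.39, 0.35, 0.33, 0.32, 0.32]
print(f"mean drawdown: {drawdown_rate(readings, 1.0):.3f} per hour")
# -> mean drawdown: 0.025 per hour
```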

    Proceedings of the Seventh Workshop on Knowledge Engineering and Software Engineering (KESE7) at the 14th Conference of the Spanish Association for Artificial Intelligence (CAEPIA 2011)

    Technical Report TR-2011/1, Department of Languages and Computation, University of Almería, November 2011. Joaquín Cañadas, Grzegorz J. Nalepa, Joachim Baumeister (Editors). The seventh workshop on Knowledge Engineering and Software Engineering (KESE7) was held at the Conference of the Spanish Association for Artificial Intelligence (CAEPIA 2011) in La Laguna (Tenerife), Spain, and brought together researchers and practitioners from the fields of software engineering and artificial intelligence. The intention was to give ample space for exchanging the latest research results as well as knowledge about practical experience. University of Almería, Almería, Spain; AGH University of Science and Technology, Kraków, Poland; University of Würzburg, Würzburg, Germany.