18 research outputs found

    Knowledge society arguments revisited in the semantic technologies era

    In the light of high-profile governmental and international efforts to realise the knowledge society, I review the arguments made for and against it from a technology standpoint. I focus on advanced knowledge technologies with applications on a large scale and in open-ended environments like the World Wide Web and its ambitious extension, the Semantic Web. I argue for a greater role for social networks in a knowledge society, explore recent developments in mechanised trust and knowledge certification, and speculate on their blending with traditional societal institutions. These form the basis of a sketched roadmap of enabling technologies for a knowledge society.

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the Semantic Web (SW), which foresaw much more intelligent manipulation and querying of knowledge.

    The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities. The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
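
    As a rough illustration of the ontology-merging problem mentioned in the abstract above (not AKT's actual tooling), the following sketch merges two toy ontologies represented as bare subclass maps and flags terms whose parents disagree; the data structures and names are invented for this example.

        # Minimal sketch of merging two toy ontologies, invented for illustration only;
        # each ontology maps a term to its parent term (a bare subclass hierarchy).
        def merge_ontologies(onto_a, onto_b):
            merged, conflicts = dict(onto_a), []
            for term, parent in onto_b.items():
                if term in merged and merged[term] != parent:
                    # The two sources disagree on where this term sits: a mapping or
                    # conflict-resolution step (human or automated) is needed here.
                    conflicts.append((term, merged[term], parent))
                else:
                    merged[term] = parent
            return merged, conflicts

        if __name__ == "__main__":
            a = {"Paper": "Document", "Report": "Document"}
            b = {"Paper": "Publication", "Thesis": "Publication"}
            merged, conflicts = merge_ontologies(a, b)
            print(merged)     # union of the two hierarchies, with ontology a winning on conflicts
            print(conflicts)  # [('Paper', 'Document', 'Publication')]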

    Integrating large knowledge repositories in multiagent ontologies

    Knowledge is a person's personal map of the world. Because knowledge differs between people, different groups may have different perceptions of the same reality. Each perception can be represented using an ontology. The research underlying this paper deals with multiple ontologies; in that context, each agent explores its own ontology. The goal of this research is to generate a common ontology, including a common set of terms, from the several ontologies available, so that the common terminology (set of terms) it implements can be shared between different communities. In this paper we present a real implementation of a system using those concepts. The paper provides a case study involving groups of people in different communities who manage data using different perceptions (terminologies) and different semantics to represent the same reality. Each user, belonging to a different community, uses a different terminology when collecting data and consequently also obtains different results from that exercise. This is not a problem as long as the results are used inside each community; it becomes one when people need to take data from other communities, sharing and collaborating with it to reach a more global solution.
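
    A minimal sketch of the shared-terminology idea described above (not the system presented in the paper): given each community's term set, terms used by every community become candidates for the common ontology, while community-specific terms remain local. The community names and terms are invented.

        # Toy sketch: derive a shared terminology from several community vocabularies.
        def common_terms(community_vocabularies):
            vocab_sets = [set(v) for v in community_vocabularies.values()]
            shared = set.intersection(*vocab_sets) if vocab_sets else set()
            local = {name: set(v) - shared for name, v in community_vocabularies.items()}
            return shared, local

        if __name__ == "__main__":
            vocabularies = {
                "foresters":  ["tree", "stand", "parcel"],
                "ecologists": ["tree", "habitat", "parcel"],
            }
            shared, local = common_terms(vocabularies)
            print(shared)  # candidate common terminology: tree and parcel
            print(local)   # community-specific terms left outside the common ontology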

    Construction Safety Ontology Development and Alignment with Industry Foundation Classes (IFC)

    A pronounced gap often exists between expected and actual safety performance in the construction industry. The multifaceted causes of this performance gap result from the misalignment between design assumptions and the actual construction processes that take place on-site. In general, critical factors are rooted in the lack of interoperability around building and work-environment information, owing to its heterogeneous nature. To overcome this interoperability challenge in safety management, this paper presents the development of an ontological model consisting of terms and the relationships between them, creating a conceptual information model for construction safety management and linking that ontology to IfcOWL. The developed ontology, named Safety and Health Exchange (SHE), comprises eight concepts, and the relationships between them, required to identify and manage safety risks in the design and planning stages. The main concepts of the ontology were identified by reviewing accident cases from 165 Reporting of Injuries, Diseases and Dangerous Occurrences Regulations (RIDDOR) reports and 31 press releases from the database of the Health and Safety Executive (HSE) in the United Kingdom. Subsequently, a semantic mapping between the developed ontology and IfcOWL (the most widely used ontology and schema for interoperability in the AEC sector) is proposed, and several SPARQL queries were developed and run to evaluate the semantic consistency of the ontology and the cross-mapping. The proposed ontology and cross-mapping gained recognition for their innovation in utilising OpenBIM and won the BuildingSMART professional research award 2020. This work could facilitate the development of a knowledge-based system in the BIM environment to assist designers in addressing health and safety issues during the design and planning phases in the construction sector.
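
    Consistency checks of the kind described above can in principle be run with standard RDF tooling; the sketch below uses Python's rdflib to query a tiny toy graph for safety concepts left without a cross-mapping. The namespaces, class names and triples are placeholders, not the actual SHE or IfcOWL vocabularies.

        # Hypothetical sketch (placeholder terms, not the real SHE or IfcOWL schemas):
        # load a small aligned graph and report hazards that lack a cross-mapping.
        from rdflib import Graph

        TOY_DATA = """
        @prefix she: <http://example.org/she#> .
        @prefix owl: <http://www.w3.org/2002/07/owl#> .
        @prefix ifc: <http://example.org/ifcowl#> .

        she:FallFromHeight a she:Hazard ;
            owl:equivalentClass ifc:IfcStair .
        she:StruckByObject a she:Hazard .
        """

        g = Graph()
        g.parse(data=TOY_DATA, format="turtle")

        # Hazards with no mapping to any other class are reported as alignment gaps.
        QUERY = """
        PREFIX she: <http://example.org/she#>
        PREFIX owl: <http://www.w3.org/2002/07/owl#>
        SELECT ?hazard WHERE {
            ?hazard a she:Hazard .
            FILTER NOT EXISTS { ?hazard owl:equivalentClass ?mapped . }
        }
        """
        for row in g.query(QUERY):
            print("Unmapped hazard:", row.hazard)  # only StruckByObject is reported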

    The evolution of ontology in AEC: A two-decade synthesis, application domains, and future directions

    Ontologies play a pivotal role in knowledge representation and are particularly beneficial for the Architecture, Engineering, and Construction (AEC) sector because of its inherent data diversity and intricacy. Despite the growing interest in ontology and data integration research, especially with the advent of knowledge graphs and digital twins, a noticeable lack of consolidated academic synthesis remains to be addressed. This review paper aims to bridge that gap, meticulously analysing 142 journal articles from 2000 to 2021 on the application of ontologies in the AEC sector. Through systematic evaluation, the research is segmented into ten application domains within the construction realm: process, cost, operation/maintenance, health/safety, sustainability, monitoring/control, intelligent cities, heritage building information modelling (HBIM), compliance, and miscellaneous. This categorisation aids in pinpointing ontologies suitable for various research objectives. Furthermore, the paper highlights prevalent limitations within current ontology studies in the AEC sector and offers strategic recommendations, presenting a well-defined path for future research to address these gaps.

    Formalizing ontology alignment and its operations with category theory

    An ontology alignment is the expression of relations between different ontologies. In order to view alignments independently of the language expressing the ontologies and of the techniques used for finding the alignments, we use a category-theoretical model in which ontologies are the objects. We introduce a categorical structure, called V-alignment, made of a pair of morphisms with a common domain, having the ontologies as codomains. This structure serves to design an algebra that formally describes ontology merging, alignment composition, union and intersection in terms of categorical constructions. This enables combining alignments of various provenance. Although the desirable properties of this algebra make such abstract manipulation of V-alignments very simple, it is, in practice, not well suited to expressing complex alignments: expressing subsumption between entities of two different ontologies demands the definition of non-standard categories of ontologies. We consider two approaches to solve this problem. The first one extends the notion of V-alignments to a more complex structure called W-alignments: a formalization of alignments relying on "bridge axioms". The second one relies on an elaborate concrete category of ontologies that offers high expressive power. We show that these two extensions have different advantages that may be exploited in different contexts.
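
    Read as a plain category-theoretic span, the V-alignment idea above can be sketched as follows; the notation is standard span notation chosen for illustration, not the paper's exact definitions.

        % A V-alignment between ontologies O_1 and O_2: a span whose apex A holds
        % the correspondences and whose legs land in the two ontologies.
        \[
          O_1 \xleftarrow{\ f\ } A \xrightarrow{\ g\ } O_2
        \]
        % Composing an alignment A (between O_1 and O_2) with an alignment B
        % (between O_2 and O_3) by the usual span composition, i.e. a pullback over O_2:
        \[
          B \circ A \;=\; A \times_{O_2} B
          \qquad\text{with legs}\qquad
          O_1 \longleftarrow A \times_{O_2} B \longrightarrow O_3 .
        \]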

    Adaptive personalized navigation driven by metadata

    The work presents a scheme for navigating a user based on metadata. We employ a concept space that describes the problem domain through concepts and the relations between them. An algorithm for evaluating the concept space was developed; it calculates an evaluation for each concept according to its position in the concept space and the evaluations of its related concepts. Each document presented to the user is linked to the concept space through its own set of concepts. For each user, an achieved-knowledge set is maintained, in which the user's current knowledge is recorded in the form of known concepts. The navigation scheme takes the user's achieved-knowledge set and the documents' concept sets and chooses the documents that will be most beneficial to the user in terms of gaining new knowledge. For selecting the documents we employ a number of metrics that implement different approaches to learning new knowledge, such as choosing documents in which the user already knows most of the concepts, or choosing documents according to the similarity of their concepts to the currently presented document. The chosen documents are presented to the user in the form of a menu, with the most suitable document at the top. The user's behavior and reactions to the presented documents help to shape the navigation prepared for them.
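
    As a rough illustration of one possible ranking metric of the kind described above (invented data, not the thesis's actual algorithm), the sketch below orders documents by how many of their concepts the user does not yet know, so the top menu entry promises the most new knowledge.

        # Toy sketch of metadata-driven navigation: rank documents for a user by the
        # number of concepts they would newly learn. Concepts and documents are invented.
        def rank_documents(documents, known_concepts):
            def new_knowledge(doc_concepts):
                return len(set(doc_concepts) - known_concepts)
            # Sort so the document offering the most unknown concepts comes first.
            return sorted(documents.items(),
                          key=lambda item: new_knowledge(item[1]),
                          reverse=True)

        if __name__ == "__main__":
            documents = {
                "intro.html":    {"graph", "node"},
                "advanced.html": {"graph", "traversal", "heuristic"},
            }
            user_knows = {"graph", "node"}
            for name, concepts in rank_documents(documents, user_knows):
                print(name, concepts)
            # advanced.html is ranked first: it introduces two unknown concepts.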

    A multi-matching technique for combining similarity measures in ontology integration

    Ontology matching is a challenging problem in many applications and a major issue for interoperability in information systems. It aims to find semantic correspondences between a pair of input ontologies, which remains a labor-intensive and expensive task. This thesis investigates the problem of ontology matching in both theoretical and practical aspects and proposes a solution methodology, called multi-matching. The methodology is validated using standard benchmark data and its performance is compared with available matching tools. The proposed methodology provides a framework for users to apply different individual matching techniques; it then searches over and combines the individual match results to provide a desired overall match result in reasonable time. In addition to existing applications for ontology matching, such as ontology engineering, ontology integration, and exploiting the Semantic Web, the thesis proposes a new approach to ontology integration as a backbone application for the proposed matching techniques. In terms of theoretical contributions, we introduce new search strategies and propose a structure similarity measure to match the structures of ontologies. In terms of practical contributions, we developed a research prototype called MLMAR (Multi-Level Matching Algorithm with Recommendation analysis technique), which implements the proposed multi-level matching technique and applies heuristics as optimization techniques. Experimental results show the practical merits and usefulness of MLMAR.
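
    A minimal sketch of the general idea of combining several similarity measures into one matcher (illustrative only; this is not the MLMAR algorithm): each candidate pair of entity labels is scored by a weighted sum of independent measures, and pairs above a threshold are kept as correspondences. Labels, weights and threshold are invented.

        # Toy sketch: combine two string-based similarity measures with fixed weights
        # and keep entity pairs whose aggregate score clears a threshold.
        from difflib import SequenceMatcher

        def edit_similarity(a, b):
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def token_similarity(a, b):
            ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
            return len(ta & tb) / len(ta | tb)

        def match(labels_a, labels_b, weights=(0.6, 0.4), threshold=0.5):
            correspondences = []
            for a in labels_a:
                for b in labels_b:
                    score = (weights[0] * edit_similarity(a, b)
                             + weights[1] * token_similarity(a, b))
                    if score >= threshold:
                        correspondences.append((a, b, round(score, 2)))
            return correspondences

        if __name__ == "__main__":
            # Pairs above the threshold are returned with their aggregate scores.
            print(match(["journal_paper", "author"], ["journal_article", "author"]))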

    Architecture of a collaborative business intelligence environment based on an ontology repository and data services

    Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2012. Business Intelligence (BI) refers to a set of methodologies, methods, tools and software used to provide system solutions that support information analysis. The specification and development of these system solutions are still limited to specific information domains. Furthermore, in conventional BI solutions it is necessary to perform massive loads of data provided by other organizations into local repositories. Such massive loads can make the information unavailable on time or cause errors due to misinterpretation of the received data. In this dissertation, a systemic architecture that seeks to address these limitations is proposed. The architecture is based on a centralized ontology repository and uses distributed data services to provide data to generic analytical queries. The proposal was validated by developing a proof-of-concept system that allows the architecture to be deployed in an operational environment and illustrates its value for several BI applications.
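
    A highly simplified sketch of the idea of answering an analytical query through a central ontology mapping over decentralized data services; the service names, field names and figures are invented, and the dissertation's architecture is considerably richer.

        # Toy sketch: a central "ontology repository" maps a shared analytical term to
        # the field each data service uses locally; the query layer fetches and aggregates.
        ONTOLOGY_MAPPING = {            # shared term -> per-service local field name
            "revenue": {"service_north": "sales_total", "service_south": "receita"},
        }

        LOCAL_DATA = {                  # stand-in for remote data services
            "service_north": [{"sales_total": 120.0}, {"sales_total": 80.0}],
            "service_south": [{"receita": 95.5}],
        }

        def analytical_query(term):
            """Sum a shared analytical term across all services that expose it."""
            total = 0.0
            for service, field in ONTOLOGY_MAPPING[term].items():
                rows = LOCAL_DATA[service]   # in practice: a remote data-service call
                total += sum(row[field] for row in rows)
            return total

        if __name__ == "__main__":
            print("revenue across services:", analytical_query("revenue"))  # 295.5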