
    Smart compliance checking frameworks for BIM standards

    As a process for generating and managing the digital representation of a built asset, building information modelling (BIM) has gradually been accepted, and standards for implementing BIM have been released. Compliance checking against BIM standards plays a significant role in the AEC (Architecture, Engineering and Construction) industry in ensuring quality, safety, and efficiency during project delivery. As the concept and technology of BIM have been accepted and popularised, BIM standards have undergone rapid development in recent years and have gradually formed comprehensive standard systems, which makes fast and accurate BIM compliance checking increasingly challenging. Although considerable effort has been made to address this issue over the past decades, existing checking methods are flawed in terms of granularity, comprehensiveness, and automation. Moreover, these methods suffer from systemic deficiencies that make them incapable of meeting the demands of checking compliance against complex BIM standard systems in a rapidly evolving environment. To address these gaps, this thesis develops a novel framework for BIM compliance checking, comprising an ontology-driven subjective checking framework and an evidence-driven objective checking framework. The ontological checking framework adopts a domain ontology as the knowledge model and bridges users and domain knowledge through an interactive web-based service, enabling comprehensive and flexible BIM compliance checking. The evidence-driven checking framework leverages advanced NLP (natural language processing) techniques, large language models, and graph learning to automatically extract information from regulations and project documents, convert it into knowledge graphs, and then assess compliance via graph alignment. The main outcomes of the research lie in the development of these two frameworks. The ontological checking framework separates domain knowledge and compliance assessment into two components, enabling flexible checking across multiple scenarios, and its adoption of ontology allows the accumulation and integration of domain knowledge. The evidence-driven checking framework implements fully automated BIM compliance checking. This research contributes significantly to information extraction, knowledge engineering, and BIM compliance checking in the AEC domain. The proposed ontological framework outperforms existing methods in terms of granularity and comprehensiveness and enables flexible compliance checking in various scenarios. As the first attempt of its kind, the evidence-driven checking framework fills the gap in fully automated BIM compliance checking. Owing to the generic development principles adopted in this research, the proposed method and developed framework can be extended to other research areas related to information extraction, knowledge extraction, and compliance checking.
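
    The pipeline described above (extract information into knowledge graphs, then assess compliance via graph alignment) can be illustrated with a minimal sketch. The triple representation, the exact-match alignment, and every name below are illustrative assumptions rather than the thesis's actual implementation, which relies on NLP, large language models, and graph learning.

```python
# Minimal sketch: a knowledge graph as a set of (subject, relation, object)
# triples, and compliance as alignment of requirement triples against the
# triples extracted from the project. All names are hypothetical.

def check_compliance(regulation_kg: set, project_kg: set):
    """Report whether every requirement triple in the regulation graph
    aligns with a triple in the project graph, and list the gaps."""
    missing = regulation_kg - project_kg
    return not missing, missing

# Hypothetical triples extracted from a BIM standard and a project model.
regulation = {("Wall", "hasProperty", "FireRating"),
              ("Door", "hasProperty", "Width")}
project = {("Wall", "hasProperty", "FireRating")}

compliant, gaps = check_compliance(regulation, project)
print(compliant)  # False: one requirement is unmatched
print(gaps)       # {('Door', 'hasProperty', 'Width')}
```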

    Timely and nonintrusive active document annotation via adaptive information extraction

    This work was carried out within the framework of the AKT project (Advanced Knowledge Technologies, http://www.aktors.org), an Interdisciplinary Research Collaboration (IRC) sponsored by the UK Engineering and Physical Sciences Research Council (grant GR/N15764/01). AKT involves the Universities of Aberdeen, Edinburgh, Sheffield, Southampton and the Open University, and is a multimillion-pound, six-year research project that started in 2000. The process of document annotation for the Semantic Web is complex and time-consuming, as it requires a great deal of manual annotation. Information extraction from texts (IE) is a technology used by some of the most recent systems to actively support users in the process and reduce the burden of annotation. The integration of IE systems in annotation tools is quite a new development, and in our opinion there is still a need to consider the impact of the IE system on the annotation process. In this paper we discuss two main requirements for active annotation: timeliness and tuning of intrusiveness. We then present and discuss a model of interaction that addresses these two issues, and Melita, an annotation framework that implements this methodology. Peer reviewed.
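
    A minimal sketch of the "tuning of intrusiveness" requirement: the IE system proposes annotations only when its confidence clears a user-set threshold, so that suggestions help rather than distract. The function, scoring model, and sample data are illustrative assumptions, not Melita's actual interface.

```python
# Minimal sketch of threshold-based suggestion filtering in active
# annotation. A higher threshold makes the system less intrusive at the
# cost of fewer suggestions. All names here are hypothetical.

def suggest_annotations(candidates, threshold=0.8):
    """Yield only the suggestions whose confidence clears the threshold."""
    for span, label, confidence in candidates:
        if confidence >= threshold:
            yield span, label, confidence

# Hypothetical output of a learned extractor over one document.
candidates = [("University of Sheffield", "ORG", 0.95),
              ("Melita", "PROJECT", 0.60)]

for span, label, conf in suggest_annotations(candidates):
    print(f"Suggest tagging '{span}' as {label} (p={conf:.2f})")
```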

    Development of an ontology for aerospace engine components degradation in service

    This paper presents the development of an ontology for component service degradation. Degradation mechanisms in gas turbine metallic components are used as a case study to explain how a taxonomy within an ontology can be validated. The validation method uses an iterative process and sanity checks. Data extracted from on-demand textual information are filtered and grouped into classes of degradation mechanisms, and various concepts are systematically and hierarchically arranged for use in the service maintenance ontology. Allocating the mechanisms to the AS-IS ontology provides a robust data collection hub, and data integrity is guaranteed when the TO-BE ontology is introduced to analyse processes relating to various failure events. The initial evaluation reveals improvement in the performance of the TO-BE domain ontology over iterations and updates with recognised mechanisms. The extracted information is needed to improve service knowledge and performance feedback, which are important for service engineers. Related research areas such as natural language processing, knowledge management, and information extraction are also examined.
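
    A minimal sketch of the kind of sanity check such taxonomy validation can involve: every class must point to a known parent, and the hierarchy must be acyclic. The class names and the check itself are illustrative assumptions, not the paper's actual ontology or validation method.

```python
# Minimal sketch of two taxonomy sanity checks: no orphan parents and
# no cycles in the class hierarchy. The mechanism names are illustrative.

taxonomy = {                     # child -> parent
    "Creep": "DegradationMechanism",
    "Oxidation": "DegradationMechanism",
    "ThermalFatigue": "Fatigue",
    "Fatigue": "DegradationMechanism",
}

def sanity_check(tax: dict, root: str = "DegradationMechanism") -> list:
    """Return a list of detected problems (empty means the checks pass)."""
    problems = []
    for child, parent in tax.items():
        if parent != root and parent not in tax:
            problems.append(f"unknown parent: {parent}")
        seen, node = {child}, parent  # walk upward looking for a cycle
        while node in tax:
            if node in seen:
                problems.append(f"cycle at: {node}")
                break
            seen.add(node)
            node = tax[node]
    return problems

print(sanity_check(taxonomy))    # [] -> the taxonomy passes both checks
```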

    Simulation in manufacturing and business: A review

    This paper reports the results of a review of simulation applications published in the peer-reviewed literature between 1997 and 2006, providing an up-to-date picture of the role of simulation techniques within manufacturing and business. The review is characterised by three factors: wide coverage, a broad scope of simulation techniques, and a focus on real-world applications. A structured methodology was followed to narrow the search down from around 20,000 papers to 281. The results reveal several trends and patterns. For instance, although discrete event simulation is the most popular technique, it has lower stakeholder engagement than other techniques, such as system dynamics or gaming; this is highly correlated with modelling lead time and purpose. Across application areas, modelling is most often used for scheduling. Finally, the review shows an increasing interest in hybrid modelling as an approach to coping with complex enterprise-wide systems.
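
    For readers unfamiliar with discrete event simulation, the technique the review finds most popular, a toy sketch of its event-queue core follows. The model and all names are illustrative assumptions; the studies surveyed use full simulation packages rather than hand-rolled loops.

```python
# Minimal sketch of the event-queue core of discrete event simulation:
# pop time-stamped events in order, and let each handler schedule successors.

import heapq
import itertools

counter = itertools.count()  # tie-breaker so the heap never compares callables

def schedule(events, time, name, action):
    """Add a time-stamped event to the priority queue."""
    heapq.heappush(events, (time, next(counter), name, action))

def run(events, horizon=10.0):
    """Process events in time order until the queue empties or time runs out."""
    while events:
        time, _, name, action = heapq.heappop(events)
        if time > horizon:
            break
        print(f"t={time:4.1f}  {name}")
        action(events, time)

# Hypothetical machine that completes one job every 2 time units.
def job_done(events, t):
    schedule(events, t + 2.0, "job_done", job_done)

events = []
schedule(events, 0.0, "job_done", job_done)
run(events)  # prints completions at t=0, 2, 4, ... up to the horizon
```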

    Ontology-driven conceptual modeling: A systematic literature mapping and review

    Ontology-driven conceptual modeling (ODCM) is still a relatively new research domain in the field of information systems, and there is still much discussion on how research in ODCM should be performed and what its focus should be. This article therefore critically surveys the existing literature in order to assess the kind of research that has been performed over the years, analyze the nature of the research contributions, and establish the current state of the art by positioning, evaluating, and interpreting relevant research related to ODCM. To identify gaps and research opportunities, our literature study comprises both a systematic mapping study and a systematic review study. The mapping study structures and classifies the area under investigation in order to give a general overview of the research performed in the field. The review study, on the other hand, is a more thorough and rigorous inquiry and provides recommendations based on the strength of the evidence found. Our results indicate several research gaps that should be addressed, and we further identify a number of research opportunities as possible areas for future research.

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided, e.g. more intelligent retrieval, put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed, along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that there will be standards for task (or service) specifications in the medium term. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
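
    One task the paper singles out, ontology mapping, can be illustrated with a deliberately naive sketch that aligns concepts from two ontologies by label similarity. The similarity measure, threshold, and sample labels are illustrative assumptions; AKT's actual mapping work is far more sophisticated than string matching.

```python
# Minimal sketch of label-based ontology mapping: propose a correspondence
# between two concepts when their labels are sufficiently similar.

from difflib import SequenceMatcher

def map_concepts(onto_a, onto_b, threshold=0.8):
    """Return (concept_a, concept_b, score) pairs above the threshold."""
    mappings = []
    for a in onto_a:
        for b in onto_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                mappings.append((a, b, round(score, 2)))
    return mappings

# Hypothetical concept labels drawn from two legacy ontologies.
print(map_concepts(["Researcher", "Publication"],
                   ["researcher", "academic-paper"]))
# [('Researcher', 'researcher', 1.0)]
```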