    Engineering brain: metaverse for future engineering

    The past decade has witnessed a notable transformation in the Architecture, Engineering and Construction (AEC) industry, with efforts in both academia and industry to improve the efficiency, safety and sustainability of civil projects. These advances have greatly contributed to a higher level of automation in the lifecycle management of civil assets within a digitalised environment. To integrate the achievements delivered so far and build on their progress, this study proposes a novel theory, the Engineering Brain, which adopts the Metaverse concept in the field of civil engineering. Specifically, the evolution of the Metaverse and its key supporting technologies are first reviewed; the Engineering Brain theory is then presented, including its theoretical background, key components and their interconnections. An outlook on implementing this theory within the AEC sector is offered as a description of the Metaverse of future engineering. A comparison between the proposed Engineering Brain theory and the Metaverse illustrates their relationship, and how the Engineering Brain may function as the Metaverse for future engineering is further explored. By providing an innovative insight into the future engineering sector, this study can potentially guide the entire industry towards a new era based on the Metaverse environment.

    Graph-based reasoning in collaborative knowledge management for industrial maintenance

    Capitalisation and sharing of lessons learned play an essential role in managing the activities of industrial systems. This is particularly the case for maintenance management, especially in distributed systems that often rely on collaborative decision-making. Our contribution focuses on formalising the expert knowledge that maintenance actors need, so that support tools can readily assist them in collaborative frameworks. To do this, we use the conceptual graph formalism and its reasoning operations to compare and integrate several conceptual graph rules corresponding to different experts' viewpoints. The proposed approach is applied to a case study on the maintenance management of a rotary machinery system.
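
    The following is a minimal, hypothetical sketch of the kind of rule comparison the abstract describes: conceptual graph rules from two experts are encoded as triples over a toy concept-type lattice, and a naive projection test checks whether one expert's rule generalises the other's. The types, relations and rules are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: conceptual graph rules as (concept, relation, concept)
# triples, compared with a naive projection (specialisation) test.

SUPERTYPES = {  # toy concept-type lattice: child -> parent
    "BearingFault": "MechanicalFault",
    "MechanicalFault": "Fault",
    "Vibration": "Symptom",
}

def is_subtype(child: str, parent: str) -> bool:
    """True if `child` equals `parent` or descends from it in the lattice."""
    while child is not None:
        if child == parent:
            return True
        child = SUPERTYPES.get(child)
    return False

def projects(general: set, specific: set) -> bool:
    """Naive projection: every triple of the general rule must be matched
    by some triple of the specific rule with compatible (sub)types."""
    return all(
        any(is_subtype(c1, g1) and r == gr and is_subtype(c2, g2)
            for (c1, r, c2) in specific)
        for (g1, gr, g2) in general
    )

# Two expert viewpoints on a diagnostic rule for a rotary machine:
expert_a = {("Symptom", "indicates", "Fault")}
expert_b = {("Vibration", "indicates", "BearingFault")}

print(projects(expert_a, expert_b))  # True: expert A's rule subsumes B's
```

    A full conceptual graph system would also handle relation hierarchies, individual markers and rule composition; the projection operation above is only the core comparison step on which viewpoint integration can be built.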

    An ontology-based concept search model for data repository with limited information

    Many of the problems in natural language processing (NLP) and information retrieval (IR) stem from the rich expressive power of natural language. The use of concept search to overcome the limitations of keyword search has been put forward as one of the motivations of the Semantic Web since its emergence in the late '90s. Considerable effort has gone into adapting concept search principles to improve the performance of information retrieval systems, but most approaches are designed for data repositories containing a large number of items, each with rich information. We propose a knowledge-based concept search method designed specifically for data repositories with limited information per item, narrowing a query down to a single concept. We also propose a framework that adapts this method to the practical problem of extracting information from a specification document into an existing unstructured database. As an important part of the framework, a concept selection method using genetic algorithms and semantic distance is proposed to filter out meaningless or less important information during query generation and matching. An application development process in the mould engineering domain is presented as a case study showing how to use this framework. The experimental results show that our concept search method outperforms classical keyword-based search, especially for documents with ambiguous words, and that the concept selection method can further improve it.
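
    As a rough illustration of the concept selection step, the sketch below uses a genetic algorithm to pick a subset of candidate concepts, scoring each subset by assumed importance weights minus a redundancy penalty derived from pairwise semantic distances. All concepts, weights, distances and GA parameters are invented placeholders rather than values from the paper.

```python
# Hypothetical sketch: GA-based concept selection with a semantic-distance
# redundancy penalty. Every constant below is an illustrative placeholder.
import random

CONCEPTS = ["mould", "cavity", "injection", "steel", "tolerance"]
IMPORTANCE = [0.9, 0.7, 0.8, 0.4, 0.5]   # assumed relevance of each concept
DIST = [[0, 2, 3, 4, 4],                 # assumed pairwise semantic distances
        [2, 0, 2, 3, 3],                 # (e.g. ontology path lengths)
        [3, 2, 0, 4, 4],
        [4, 3, 4, 0, 2],
        [4, 3, 4, 2, 0]]

def fitness(mask):
    chosen = [i for i, bit in enumerate(mask) if bit]
    if not chosen:
        return 0.0
    importance = sum(IMPORTANCE[i] for i in chosen)
    # penalise near-synonymous picks: small distance -> large penalty
    redundancy = sum(1.0 / (1 + DIST[i][j])
                     for i in chosen for j in chosen if i < j)
    return importance - 0.5 * redundancy

def evolve(pop_size=20, generations=50, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in CONCEPTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(CONCEPTS))  # one-point crossover
            child = [bit ^ (random.random() < p_mut)  # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return [c for c, bit in zip(CONCEPTS, best) if bit]

print(evolve())  # e.g. ['mould', 'cavity', 'injection']
```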

    A FRAMEWORK FOR BIOPROFILE ANALYSIS OVER GRID

    An important trend in modern medicine is towards the individualisation of healthcare, tailoring care to the needs of the individual. This makes it possible, for example, to personalise diagnosis and treatment to improve outcomes. However, these benefits can only be fully realised if healthcare and ICT resources are exploited, e.g. to provide access to relevant data, analysis algorithms, knowledge and expertise. Grid can potentially play an important role here by allowing the sharing of resources and expertise to improve the quality of care. The integration of grid and the new concept of the bioprofile represents a new topic in the healthgrid for the individualisation of healthcare. A bioprofile is a personal dynamic "fingerprint" that fuses together a person's current and past bio-history, biopatterns and prognosis. It combines not just data, but also analysis and predictions of future or likely susceptibility to disease, such as brain diseases and cancer. The creation and use of bioprofiles require the support of a number of healthcare and ICT technologies and techniques, such as medical imaging, electrophysiology and related facilities, analysis tools, data storage and computation clusters. The need to share clinical data, storage and computation resources between different bioprofile centres creates not only local problems but also global ones. Existing ICT technologies are inappropriate for bioprofiling because of the difficulties in using and managing heterogeneous IT resources at different bioprofile centres. Grid, as an emerging resource-sharing concept, fulfils the needs of bioprofiling in several respects, including the discovery, access, monitoring and allocation of distributed bioprofile databases, computation resources, bioprofile knowledge bases, etc. However, the challenge remains of integrating grid and bioprofile technologies to offer an advanced distributed bioprofile environment supporting individualised healthcare. The aim of this project is to develop a framework for one of the key meta-level bioprofile applications: bioprofile analysis over grid to support individualised healthcare. Bioprofile analysis is a critical part of bioprofiling (i.e. the creation, use and update of bioprofiles); it makes it possible, for example, to extract markers from data for diagnosis and to assess an individual's health status. The framework provides a basis for a grid-based solution to the challenge of distributed bioprofile analysis. The main contributions of the thesis are fourfold:
    A. An architecture for bioprofile analysis over grid. The design of a suitable architecture is fundamental to the development of any ICT system; this one creates a means for the categorisation, determination and organisation of core grid components to support the development and use of grid for bioprofile analysis.
    B. A service model for bioprofile analysis over grid, comprising a service design principle, a service architecture for bioprofile analysis over grid, and a distributed EEG analysis service model. The design principle addresses the main considerations behind the service model regarding usability, flexibility, extensibility, reusability, etc. The service architecture identifies the main categories of services and outlines an approach to organising services to realise the functionalities required by distributed bioprofile analysis applications. The EEG analysis service model demonstrates the utilisation and development of services to enable bioprofile analysis over grid.
    C. Two grid test-beds and a practical implementation of EEG analysis over grid. The two test-beds, the BIOPATTERN grid and PlymGRID, are built on existing grid middleware tools and provide essential experimental platforms for research in bioprofiling over grid. This work demonstrates how resources, grid middleware and services can be utilised, organised and implemented to support distributed EEG analysis for the early detection of dementia; the resulting distributed electroencephalography (EEG) analysis environment can support a variety of research activities in EEG analysis.
    D. A scheme for organising multiple (heterogeneous) descriptions of individual grid entities for knowledge representation of the grid. The scheme solves the compatibility and adaptability problems in managing heterogeneous descriptions (i.e. descriptions using different languages and schemas/ontologies) for the collaborative representation of a grid environment at different scales, and underpins bioprofile analysis over grid through knowledge-based global coordination between its components.
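
    To make the service-model idea (contribution B) concrete, here is a small hypothetical sketch: analysis services register a declared input and output type with a broker, which routes an EEG analysis request to the first capable service. The class names, type strings and the toy feature extractor are assumptions for illustration, not interfaces from the thesis.

```python
# Hypothetical sketch of service registration and dispatch for distributed
# EEG analysis; names and interfaces are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Service:
    name: str
    consumes: str                    # input type, e.g. "eeg/raw"
    produces: str                    # output type, e.g. "eeg/features"
    run: Callable[[list], object]    # the analysis itself

@dataclass
class Broker:
    services: List[Service] = field(default_factory=list)

    def register(self, svc: Service) -> None:
        self.services.append(svc)

    def dispatch(self, data_type: str, payload: list):
        for svc in self.services:    # pick the first capable service
            if svc.consumes == data_type:
                return svc.produces, svc.run(payload)
        raise LookupError(f"no service consumes {data_type}")

def mean_amplitude(samples: list) -> float:
    """Toy stand-in for a real EEG feature extractor."""
    return sum(samples) / len(samples)

broker = Broker()
broker.register(Service("eeg-features", "eeg/raw", "eeg/features",
                        mean_amplitude))
print(broker.dispatch("eeg/raw", [12.0, 15.0, 12.0]))
# -> ('eeg/features', 13.0)
```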

    Proceedings of the 1st Doctoral Consortium at the European Conference on Artificial Intelligence (DC-ECAI 2020)

    1st Doctoral Consortium at the European Conference on Artificial Intelligence (DC-ECAI 2020), 29-30 August 2020, Santiago de Compostela, Spain. The DC-ECAI 2020 provides a unique opportunity for PhD students who are close to finishing their doctoral research to interact with experienced researchers in the field. Senior members of the community are assigned as mentors for each group of students based on the students' research or similarity of research interests. The DC-ECAI 2020, held virtually this year, allows students from all over the world to present and discuss their ongoing research and career plans with their mentor, to network with other participants, and to receive training and mentoring on career planning and career options.

    Service-oriented architecture for device lifecycle support in industrial automation

    Dissertation for the degree of Doctor in Electrical and Computer Engineering, speciality: Robotics and Integrated Manufacturing. This thesis addresses device lifecycle support in the service-oriented industrial automation domain. This domain is known for its plethora of heterogeneous equipment encompassing distinct functions, form factors, network interfaces and I/O specifications, supported by dissimilar software and hardware platforms. There is thus an evident and growing need to take every device into account and to improve agility during the setup, control, management, monitoring and diagnosis phases. The service-oriented architecture (SOA) paradigm is currently a widely endorsed approach for both business and enterprise systems integration. SOA concepts and technology are continuously spreading across the layers of the enterprise organization, envisioning a unified interoperability solution. SOA promotes discoverability, loose coupling, abstraction, autonomy and composition of services relying on open web standards, features that can make an important contribution to the industrial automation domain. The present work examined industrial automation device-level requirements, constraints and needs to determine how and where SOA can be employed to solve some of the existing difficulties. Building on these outcomes, a reference architecture shaped by distributed, adaptive and composable modules is proposed. This architecture will assist and ease the role of systems integrators during reengineering interventions throughout the system lifecycle. In a converging direction, the present work also proposes a service-oriented device model that supports the architectural vision and goals by embedding added value in terms of service-oriented peer-to-peer discovery and identification, configuration, management, and agile customization of device resources. In this context, the implementation and validation work proved not simply the feasibility and fitness of the proposed solution on two distinct test-benches but also its relevance to the expanding domain of SOA applications supporting the device lifecycle in industrial automation.
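
    As a hypothetical sketch of the service-oriented device model, the snippet below puts heterogeneous devices behind one describe/invoke contract so an integrator can discover and drive dissimilar equipment uniformly. The interface and the example devices are invented for illustration; the thesis's actual model also covers peer-to-peer discovery, configuration and management.

```python
# Hypothetical sketch: a uniform service contract over dissimilar devices.
from typing import Callable, Dict

class DeviceService:
    def __init__(self, device_id: str, ops: Dict[str, Callable]):
        self.device_id = device_id
        self._ops = ops

    def describe(self) -> dict:
        """Self-description used for discovery and service composition."""
        return {"id": self.device_id, "operations": sorted(self._ops)}

    def invoke(self, operation: str, **kwargs):
        return self._ops[operation](**kwargs)

# Two dissimilar devices exposed through the same contract:
conveyor = DeviceService("conveyor-01", {
    "start": lambda speed: f"running at {speed} mm/s",
    "stop": lambda: "stopped",
})
drill = DeviceService("drill-07", {
    "spin_up": lambda rpm: f"spindle at {rpm} rpm",
})

for dev in (conveyor, drill):        # uniform discovery across devices
    print(dev.describe())
print(conveyor.invoke("start", speed=120))
```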

    CAPRI: A Common Architecture for Distributed Probabilistic Internet Fault Diagnosis

    PhD thesis. This thesis presents a new approach to root cause localization and fault diagnosis in the Internet, based on a Common Architecture for Probabilistic Reasoning in the Internet (CAPRI), in which distributed, heterogeneous diagnostic agents efficiently conduct diagnostic tests and communicate observations, beliefs, and knowledge to probabilistically infer the cause of network failures. Unlike previous systems that can only diagnose a limited set of network component failures using a limited set of diagnostic tests, CAPRI provides a common, extensible architecture for distributed diagnosis that allows experts to improve the system by adding new diagnostic tests and new dependency knowledge. To support distributed diagnosis using new tests and knowledge, CAPRI must overcome several challenges, including the extensible representation and communication of diagnostic information, the description of diagnostic agent capabilities, and efficient distributed inference. Furthermore, the architecture must scale to support the diagnosis of a large number of failures using many diagnostic agents. To address these challenges, this thesis presents a probabilistic approach to diagnosis based on an extensible, distributed component ontology to support the definition of new classes of components and diagnostic tests; a service description language for describing new diagnostic capabilities in terms of their inputs and outputs; and a message processing procedure for dynamically incorporating new information from other agents, selecting diagnostic actions, and inferring a diagnosis using Bayesian inference and belief propagation. To demonstrate the ability of CAPRI to support distributed diagnosis of real-world failures, I implemented and deployed a prototype network of agents on PlanetLab for diagnosing HTTP connection failures. Approximately 10,000 user agents and 40 distributed regional and specialist agents on PlanetLab collect information from over 10,000 users and diagnose over 140,000 failures using a wide range of active and passive tests, including DNS lookup tests, connectivity probes, Rockettrace measurements, and user connection histories. I show how to improve accuracy and cost by learning new dependency knowledge and introducing new diagnostic agents. I also show that agents can manage the cost of diagnosing many similar failures by aggregating related requests and caching observations and beliefs.
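
    The inference step can be illustrated with a toy single-agent example: given the outcomes of a DNS lookup test and a connectivity probe, Bayes' rule yields a posterior over candidate root causes. The priors and likelihoods below are invented for illustration; CAPRI itself distributes this reasoning across agents using belief propagation over a component ontology.

```python
# Toy sketch of probabilistic fault diagnosis; all probabilities invented.

PRIOR = {"dns_failure": 0.2, "server_down": 0.3, "path_failure": 0.5}

# P(test passes | cause) for two diagnostic tests.
LIKELIHOOD = {
    "dns_failure":  {"dns_ok": 0.05, "probe_ok": 0.90},
    "server_down":  {"dns_ok": 0.95, "probe_ok": 0.10},
    "path_failure": {"dns_ok": 0.90, "probe_ok": 0.05},
}

def posterior(observations: dict) -> dict:
    """observations maps test name -> bool, e.g. {'dns_ok': True}."""
    scores = {}
    for cause, prior in PRIOR.items():
        p = prior
        for test, passed in observations.items():
            p_pass = LIKELIHOOD[cause][test]
            p *= p_pass if passed else (1 - p_pass)
        scores[cause] = p
    total = sum(scores.values())
    return {cause: p / total for cause, p in scores.items()}

# DNS resolves but the connectivity probe fails: a path failure dominates.
print(posterior({"dns_ok": True, "probe_ok": False}))
# -> path_failure ~0.62, server_down ~0.37, dns_failure ~0.001
```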