Conceptual graph-based knowledge representation for supporting reasoning in African traditional medicine
Although African patients use both conventional (modern) and traditional healthcare simultaneously, it has been reported that 80% of people rely on African traditional medicine (ATM). ATM includes medical activities stemming from practices, customs and traditions which were integral to the distinctive African cultures. It is based mainly on the oral transfer of knowledge, with the risk of losing critical knowledge. Moreover, practices differ according to region and the availability of medicinal plants. It is therefore necessary to compile tacit, disseminated and complex knowledge from various Tradi-Practitioners (TPs) in order to identify effective patterns for treating a given disease. Knowledge engineering methods for traditional medicine are useful to suitably model complex information needs, formalize the knowledge of domain experts and highlight effective practices for their integration into conventional medicine. The work described in this paper presents an approach that addresses two issues. First, it proposes a formal representation model of ATM knowledge and practices to facilitate their sharing and reuse. Second, it provides a visual reasoning mechanism for selecting the best available procedures and medicinal plants to treat diseases. The approach is based on the Delphi method for capturing knowledge from various experts, which necessitates reaching a consensus. Conceptual graph formalism is used to model ATM knowledge with visual reasoning capabilities and processes. Nested conceptual graphs are used to visually express the semantic meaning of Computational Tree Logic (CTL) constructs, which are useful for the formal specification of temporal properties of ATM domain knowledge. Our approach has the advantage of mitigating knowledge loss with conceptual development assistance, improving both the quality of ATM care (medical diagnosis and therapeutics) and patient safety (drug monitoring).
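The core idea of a conceptual graph — typed concept nodes linked by named relations, queried by projection — can be illustrated with a minimal, flat sketch. The concept types, the plant name and the "treatedBy" relation below are illustrative assumptions, not taken from the paper, and the sketch omits the nesting and CTL constructs the authors describe:

```python
# Minimal sketch of a conceptual-graph style representation for ATM
# knowledge. Concept types, referents and relation names here are
# illustrative assumptions, not the paper's actual vocabulary.

class Concept:
    def __init__(self, ctype, referent):
        self.ctype = ctype          # e.g. "Disease", "Plant"
        self.referent = referent    # e.g. "Malaria"

class Relation:
    def __init__(self, name, source, target):
        self.name = name            # e.g. "treatedBy"
        self.source = source
        self.target = target

class ConceptualGraph:
    def __init__(self):
        self.relations = []

    def add(self, name, source, target):
        self.relations.append(Relation(name, source, target))

    def project(self, name, source_type):
        """Return target referents linked by `name` from concepts of
        `source_type` (a highly simplified form of graph projection)."""
        return [r.target.referent for r in self.relations
                if r.name == name and r.source.ctype == source_type]

# Example query: which plants treat a disease, per pooled TP knowledge?
g = ConceptualGraph()
g.add("treatedBy", Concept("Disease", "Malaria"),
      Concept("Plant", "Artemisia annua"))
print(g.project("treatedBy", "Disease"))  # ['Artemisia annua']
```

In the paper's setting, such graphs would be pooled from multiple practitioners and queried visually rather than programmatically; the sketch only shows the underlying data structure.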
A scientometric analysis and review of fall from height research in construction
Fall from height (FFH) in the construction industry has earned much attention among researchers in recent years. The present review-based study introduced a science mapping approach to evaluate FFH studies related to the construction industry. Through an extensive bibliometric and scientometric assessment, this study identified the most active journals, keywords and countries in the field of FFH studies since 2000. Analysis of the authors' keywords revealed the emerging research topics in the FFH research community. Recent studies were found to pay more attention to the application of Computer and Information Technology (CIT) tools, particularly building information modelling (BIM), in research related to FFH. Other emerging research areas in the FFH domain include rule checking and prevention through design. The findings summarized the mainstream research areas (e.g., safety management programs), discussed existing research gaps in the FFH domain (e.g., the adaptability of safety management systems), and suggested future directions in FFH research. The recommended future directions could contribute to improving safety for the FFH research community by evaluating existing fall prevention programs in different contexts; integrating multiple CIT tools across the entire project lifecycle; designing fall safety courses for workers associated with temporary agencies; and developing prototype safety knowledge tools. The current study was restricted to an FFH literature sample that included only journal articles published in English and indexed in Scopus.
An ontology-based semantic building post-occupancy evaluation framework and its application
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

Catering to sustainable development in the Architecture, Engineering and Construction (AEC) industry, many building performance evaluation (BPE) schemas have been developed to support building assessment and aim to narrow the performance gap. Post-Occupancy Evaluation (POE), viewed as a sub-process of BPE, is a systematic method to obtain feedback on building performance in use. However, building evaluation is a complex and knowledge-intensive process with scattered and fragmented knowledge, making it time-consuming and error-prone to acquire explicit knowledge.
Benefiting from the advantages of Semantic Web technology in knowledge conceptualization, ontology, as the core of the Semantic Web, has been widely taken as an effective method for knowledge management, information representation and extraction, and logical inference in the AEC industry, especially in the BPE field. However, most of the existing ontologies in the AEC industry are lightweight ontologies that mainly focus on building a structured system to represent the specific domain knowledge or information, without developing formal axioms and constraints to provide higher expressivity. Moreover, the research focus of ontology in building assessment is mainly on energy-related fields, and there is not a comprehensive POE ontology yet, especially with the focus on building occupant satisfaction, which is the starting point of this research.
This research develops an ontology-based post-occupancy evaluation framework dedicated to building performance assessment, with the ultimate aim of optimizing building operation and improving the quality of building occupants' use experience and their well-being. In the developed framework, a heavyweight ontology is developed to structure the fragmented building performance assessment knowledge in the POE domain. In the POE ontology, building occupants' needs for building performance are generalized and classified, and the corresponding building performance assessment knowledge is formalized. In addition, a set of SWRL (Semantic Web Rule Language) rules and SQWRL (Semantic Query-Enhanced Web Rule Language) query rules are developed based on the benchmarking evaluation axioms to enable automatic rule-based reasoning and querying in the different identified application scenarios. This ontology model enables effective retrieval and sharing of POE-related knowledge and promotes its implementation in the POE domain. To validate the developed framework, a case study facilitated by the Building Use Studies (BUS) methodology is carried out to illustrate its feasibility and effectiveness in different application scenarios. This research concludes that the proposed ontology-based POE framework is capable of conducting a multi-objective, multi-criteria POE assessment at the building operation stage and providing a multi-criteria optimised solution.
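The benchmarking evaluation that the SWRL rules encode — asserting whether a measured satisfaction score clears a benchmark for each criterion — can be sketched in plain Python. The criteria, benchmark values and survey scores below are illustrative assumptions, not the thesis's actual rules or data:

```python
# Hedged sketch of rule-based benchmarking evaluation in the spirit of
# the SWRL rules described above. Criterion names and benchmark values
# are illustrative assumptions.

BENCHMARKS = {
    "thermal_comfort": 4.5,
    "air_quality": 4.0,
    "lighting": 5.0,
}

def evaluate(scores):
    """Classify each occupant-satisfaction score against its benchmark,
    the way a SWRL rule would assert an above/below-benchmark fact."""
    result = {}
    for criterion, score in scores.items():
        benchmark = BENCHMARKS[criterion]
        result[criterion] = ("above benchmark" if score >= benchmark
                             else "below benchmark")
    return result

survey = {"thermal_comfort": 4.8, "air_quality": 3.2, "lighting": 5.1}
print(evaluate(survey))
```

In the actual framework these classifications are inferred over ontology individuals and then retrieved with SQWRL queries, rather than computed procedurally as here.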
Grand Challenges of Traceability: The Next Ten Years
In 2007, the software and systems traceability community met at the first
Natural Bridge symposium on the Grand Challenges of Traceability to establish
and address research goals for achieving effective, trustworthy, and ubiquitous
traceability. Ten years later, in 2017, the community came together to evaluate
a decade of progress towards achieving these goals. These proceedings document
some of that progress. They include a series of short position papers,
representing current work in the community organized across four process axes
of traceability practice. The sessions covered topics from Trace Strategizing,
Trace Link Creation and Evolution, Trace Link Usage, real-world applications of
Traceability, and Traceability Datasets and benchmarks. Two breakout groups
focused on the importance of creating and sharing traceability datasets within
the research community, and discussed challenges related to the adoption of
tracing techniques in industrial practice. Members of the research community
are engaged in many active, ongoing, and impactful research projects. Our hope
is that ten years from now we will be able to look back at a productive decade
of research and claim that we have achieved the overarching Grand Challenge of
Traceability, which seeks for traceability to be always present, built into the
engineering process, and for it to have "effectively disappeared without a
trace". We hope that others will see the potential that traceability has for
empowering software and systems engineers to develop higher-quality products at
increasing levels of complexity and scale, and that they will join the active
community of Software and Systems traceability researchers as we move forward
into the next decade of research.
Automated compliance checking in healthcare building design
Regulatory frameworks associated with building design are usually complex, comprising extensive sets of requirements. For healthcare projects in the UK, these include statutory and guidance documents. Existing research indicates that they contain subjective requirements, which challenge the practical adoption of automated compliance checking, leading to limited outcomes. This paper proposes recommendations for the adoption of automated compliance checking in the design of healthcare buildings. Design Science Research was used to gain a detailed understanding of how information from existing regulatory requirements affects automation, through an empirical study of the design of a primary healthcare facility. In this study, a previously proposed taxonomy was implemented and refined, resulting in the identification of different types of subjective requirements. Based on the empirical data emerging from the research, a set of recommendations was proposed, focusing on the revision of regulatory documents as well as on aiding designers in implementing automated compliance checking in practice.
Semantic framework for regulatory compliance support
Regulatory Compliance Management (RCM) is a management process which an organization implements to conform to regulatory guidelines. Two processes that contribute towards automating RCM are: (i) extraction of meaningful entities from regulatory text and (ii) mapping of regulatory guidelines to organisational processes. These processes help keep the RCM up to date as regulatory guidelines change. The update process is still manual, since there is comparatively little research in this direction. Semantic Web technologies are potential candidates for making the update process automatic. There are stand-alone frameworks that use Semantic Web technologies such as information extraction, ontology population, similarity computation and ontology mapping. However, the integration of these approaches into semantic compliance management has not yet been explored. Considering these two processes as crucial constituents, the aim of this thesis is to automate the processes of RCM. It proposes a framework called RegCMantic.
The proposed framework is designed and developed in two main phases. The first part of the framework extracts regulatory entities from the regulatory guidelines. Extracting meaningful entities from the guidelines helps relate them to organisational processes. The proposed framework identifies the document components and extracts the entities from them using four components: (i) a parser, (ii) definition terms, (iii) ontological concepts and (iv) rules. The parser breaks a sentence down into useful segments, and the extraction is then carried out over these segments using the definition terms, ontological concepts and rules. The extracted entities comprise core entities, such as subject, action and obligation, and aux-entities, such as time, place, purpose, procedure and condition.
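A toy version of this kind of rule-based extraction of core entities can be written with a single pattern over a regulatory sentence. The pattern, modal-verb list and sample sentence below are illustrative assumptions; the actual RegCMantic pipeline combines a parser, definition terms, ontological concepts and rules rather than one regular expression:

```python
import re

# Hedged sketch: extract core entities (subject, obligation, action)
# from a regulatory sentence with one illustrative pattern. Far
# simpler than the framework described above.

PATTERN = re.compile(
    r"^(?P<subject>.+?)\s+"
    r"(?P<obligation>shall|must|should)\s+"
    r"(?P<action>.+?)[.]?$",
    re.IGNORECASE,
)

def extract_core_entities(sentence):
    """Return a dict of subject / obligation / action, or None."""
    m = PATTERN.match(sentence.strip())
    return m.groupdict() if m else None

sentence = "The manufacturer shall record the batch number of each product."
print(extract_core_entities(sentence))
```

A real system would also need to recover the aux-entities (time, place, purpose, procedure, condition), which is where the parser segments and ontological concepts become essential.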
The second part of the framework relates the regulatory guidelines to organisational processes. The proposed framework uses a mapping algorithm which considers three types of entities in the regulatory domain and two types of entities in the process domain. In the regulatory domain, the considered entities are the regulation topic, core entities and aux-entities, whereas in the process domain the considered entities are subject and action. Using these entities, it computes an aggregate of three similarity scores: a topic-score, a core-score and an aux-score. The aggregate similarity score determines whether a regulatory guideline is related to an organisational process.
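The aggregation step can be sketched as a weighted combination of the three scores checked against a relatedness threshold. The weights and the threshold below are illustrative assumptions, not values from the thesis:

```python
# Hedged sketch of the aggregate similarity scoring described above.
# The weights and the relatedness threshold are illustrative
# assumptions.

WEIGHTS = {"topic": 0.3, "core": 0.5, "aux": 0.2}
THRESHOLD = 0.6

def aggregate_score(topic_score, core_score, aux_score):
    """Weighted aggregate of the three similarity scores, each in [0, 1]."""
    return (WEIGHTS["topic"] * topic_score
            + WEIGHTS["core"] * core_score
            + WEIGHTS["aux"] * aux_score)

def is_related(topic_score, core_score, aux_score):
    """A guideline maps to a process when the aggregate clears the threshold."""
    return aggregate_score(topic_score, core_score, aux_score) >= THRESHOLD

# High overlap on core entities (subject/action) dominates the decision.
print(is_related(0.8, 0.9, 0.4))  # True (aggregate 0.77 >= 0.6)
```

Weighting the core-score most heavily reflects the framework's emphasis on subject and action as the entities shared between the two domains; the real algorithm may combine the scores differently.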
The RegCMantic framework is validated through the development of a prototype system. The prototype system implements a case study involving regulatory guidelines governing the pharmaceutical industry in the UK. The evaluation of the results from the case study has shown improved accuracy in extracting regulatory entities and in relating regulatory guidelines to organisational processes. This research contributes to extracting meaningful entities from regulatory guidelines provided as unstructured text and to semantically mapping the regulatory guidelines to organisational processes.
Ontology as Product-Service System: Lessons Learned from GO, BFO and DOLCE
This paper defends a view of the Gene Ontology (GO) and of Basic Formal Ontology (BFO) as examples of what the manufacturing industry calls product-service systems. This means that they are products (the ontologies) bundled with a range of ontology services such as updates, training, help desk, and permanent identifiers. The paper argues that GO and BFO contrast in this respect with DOLCE, which approximates more closely to a scientific theory or a scientific publication. The paper provides a detailed overview of ontology services and concludes with a discussion of some implications of the product-service system approach for the understanding of the nature of applied ontology. Ontology developer communities are compared in this respect with developers of scientific theories and of standards (such as the W3C). For each of these we can ask: what kinds of products do they develop, and what kinds of services do they provide for the users of these products?
Development of an intelligent information resource model based on modern natural language processing methods
Currently, there is an avalanche-like growth in the need for automatic text processing and, accordingly, new effective methods and tools for processing natural-language texts are emerging. Although these methods, tools and resources are mostly available on the internet, many of them remain inaccessible to developers, since they are not systematized and are scattered across various directories or separate sites of both humanitarian and technical orientation. All this greatly complicates their discovery and practical use in computational-linguistics research and in the development of applied systems for natural-language text processing. This paper aims to address this need. Its goal is to develop a model of an intelligent information resource based on modern methods of natural language processing (IIR NLP). The main goal of IIR NLP is to give specialists in the field of computational linguistics convenient, valuable access to these resources. The originality of the proposed approach is that a developed ontology of the subject area "NLP" is used to systematize the knowledge, data and information resources described above and to organize meaningful access to them, with Semantic Web standards and technology tools serving as the software basis.