14 research outputs found

    Reusability ontology in business processes with similarity matching

    Working technology provides information and knowledge, and both can be developed in various ways, including by reusing existing technologies. In this study, ontologies of standard operating procedures (SOPs) were modelled using Protégé. Two ontologies, A and B, are matched to measure their similarity, so that shared elements can be reused to create a more optimal ontology. Matching is the process of comparing both ontologies to identify elements with the same value; the Jaro-Winkler distance is used to find commonality between them. The Jaro-Winkler distance yields values between 0 and 1, and matching produces values close to 0 or 1. Two matching tests were performed using a 40% SPARQL query, in which the Jaro-Winkler distance obtained was 0.67. This research yields the matching values shared by ontology A and ontology B, so that ontology reuse can be performed to produce a better ontology.
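As a hedged illustration of the matching step described above, the following Python sketch implements the standard Jaro-Winkler similarity and applies it to concept labels from two hypothetical ontologies. The label lists and the 0.6 threshold are illustrative assumptions, not values from the study, which matched Protégé-built SOP ontologies via SPARQL.

```python
def jaro(s1: str, s2: str) -> float:
    """Standard Jaro similarity between two strings, in [0, 1]."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    window = max(len1, len2) // 2 - 1
    matched1, matched2 = [False] * len1, [False] * len2
    m = 0
    for i, c in enumerate(s1):                      # count matches inside the window
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not matched2[j] and s2[j] == c:
                matched1[i] = matched2[j] = True
                m += 1
                break
    if m == 0:
        return 0.0
    t, k = 0, 0                                     # count transpositions
    for i in range(len1):
        if matched1[i]:
            while not matched2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (m / len1 + m / len2 + (m - t) / m) / 3.0

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Jaro similarity boosted by a common prefix of up to 4 characters."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1.0 - j)

def match(labels_a, labels_b, threshold=0.6):
    """Pair each label of ontology A with its best match in ontology B."""
    return [(a, max(labels_b, key=lambda b: jaro_winkler(a, b)))
            for a in labels_a
            if max(jaro_winkler(a, b) for b in labels_b) >= threshold]

# Hypothetical SOP concept labels extracted from two ontologies
print(match(["DocumentReview", "ApprovalStep"],
            ["DocumentRevision", "ApprovalStage", "Archive"]))
```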

    Metadata behind the Interoperability of Wireless Sensor Networks

    Wireless Sensor Networks (WSNs) produce changes of status that are frequent, dynamic and unpredictable, and cannot be represented using a linear cause-effect approach. Consequently, a new approach is needed to handle these changes in order to support dynamic interoperability. Our approach is to introduce the notion of context as an explicit representation of changes of WSN status inferred from metadata elements, which, in turn, leads towards a decision-making process about how to maintain dynamic interoperability. This paper describes the developed context model to represent and reason over different WSN statuses based on four types of contexts, which have been identified as sensing, node, network and organisational contexts. The reasoning has been addressed by developing contextualising and bridge rules. As a result, we were able to demonstrate how contextualising rules have been used to reason on changes of WSN status as a first step towards maintaining dynamic interoperability.
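The paper's four contexts and two rule types can be made concrete with a small sketch. The following Python is a minimal, hypothetical illustration of contextualising rules (metadata to per-context status) and a bridge rule (statuses to an interoperability decision); all field names, thresholds and actions are assumptions, not the paper's actual rules.

```python
from dataclasses import dataclass

# Hypothetical metadata snapshot for one sensor node; the paper's four
# context types are sensing, node, network and organisational.
@dataclass
class Metadata:
    battery_level: float      # node context, 0..1
    link_quality: float       # network context, 0..1
    reading_variance: float   # sensing context
    mission_priority: str     # organisational context: "routine" | "alert"

def contextualise(md: Metadata) -> dict:
    """Contextualising rules: map raw metadata onto per-context status labels."""
    return {
        "node": "low_power" if md.battery_level < 0.2 else "ok",
        "network": "degraded" if md.link_quality < 0.5 else "ok",
        "sensing": "unstable" if md.reading_variance > 1.0 else "stable",
        "organisational": md.mission_priority,
    }

def bridge(status: dict) -> str:
    """Bridge rule: combine context statuses into an interoperability decision."""
    if status["node"] == "low_power" and status["organisational"] == "routine":
        return "reduce_sampling_rate"
    if status["network"] == "degraded":
        return "reroute_via_neighbour"
    return "no_action"

print(bridge(contextualise(Metadata(0.15, 0.8, 0.3, "routine"))))
# -> reduce_sampling_rate
```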

    BIM : new rules of measurement ontology for construction cost estimation

    For generations, the process of cost estimation has been manual, time-consuming and error-prone. Emerging Building Information Modelling (BIM) can exploit standard measurement methods to automate the cost estimation process and reduce inaccuracies. Structuring standard measurement methods in an ontological, machine-readable format for BIM software can greatly facilitate the reduction of inaccuracies in cost estimation. This study explores the development of an ontology based on the New Rules of Measurement (NRM) for cost estimation during the tendering stages. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. To ensure the ontology is fit for purpose, cost estimation experts were employed to check the semantics, description logic-based reasoners were used to check the ontology syntactically, and a leading 4D BIM modelling software package was used on a case study building to test and validate the proposed ontology.
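A minimal sketch of the kind of machine-readable structuring and reasoner check the abstract describes, using the owlready2 Python library. The class names, IRI and unit rate are illustrative assumptions rather than the NRM ontology itself, and running the reasoner additionally requires a local Java runtime for HermiT.

```python
# pip install owlready2 ; reasoning needs a local Java runtime (HermiT)
from owlready2 import get_ontology, Thing, DataProperty, sync_reasoner

onto = get_ontology("http://example.org/nrm-costing.owl")  # hypothetical IRI

with onto:
    class WorkSection(Thing): pass            # e.g. NRM work sections
    class Superstructure(WorkSection): pass
    class CostItem(Thing): pass
    class hasUnitRate(DataProperty):
        domain = [CostItem]
        range = [float]

# Instantiate a cost item as it might be taken off from a BIM model
beam = CostItem("concrete_beam_takeoff")
beam.hasUnitRate = [142.50]  # hypothetical unit rate

# A description logic reasoner checks that the class hierarchy and
# property assertions are logically consistent before the ontology
# feeds a BIM-based costing workflow.
with onto:
    sync_reasoner()
print(list(onto.classes()))
```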

    GQ-BPAOntoSOA: A goal- and quality-based semantic framework for deriving software services from an organisation’s goals and Riva business process architecture

    Understanding a business organisation is a primary activity required for deriving service-oriented systems that assist in carrying out the organisation's business activities. Business-IT alignment is a prominent topic concerned with aligning business needs and system needs in order to keep an organisation competitive in its market. One example in this area is the BPAOntoSOA framework, which aligned business process architecture with the service-oriented model of computing. The BPAOntoSOA framework is a semantically enriched framework for deriving service-oriented architecture candidate software services from a Riva-based business process architecture; it was proposed in order to align the candidate software services with the business processes presented in a Riva business process architecture. The activities of the BPAOntoSOA framework are structured into two semantic-based layers formed in a top-down manner. The top layer, the BPAOnt ontology instantiation layer, is concerned with conceptualising the Riva business process architecture and the associated business process models. The bottom layer, the software service identification layer, is concerned with the semantic identification of the service-oriented architecture candidate software services and their associated capabilities. In this layer, RPA clusters were used to describe the derived candidate software services, and ontologies were used to support the semantic representation. However, the BPAOntoSOA framework has two limitations. First, the derived candidate software services are identified without considering the business goals. Second, the desired quality-of-service requirements that constrain the functionality of the software services are absent. This research is concerned with resolving these two limitations. The original BPAOntoSOA framework has been extended into the GQ-BPAOntoSOA framework: a new semantic-based layer, the GQOnt ontology instantiation layer, has been added to the two original layers in order to conceptualise the goal- and quality-oriented models whose absence limited the original framework. This extension highlighted the need to align the models within the original BPAOnt instantiation layer with those in the new layer, because BPAOnt was the basis for identifying the candidate software services and capabilities. A novel alignment approach has therefore been proposed to address this need, and the original service identification approach has been refined to accommodate the integration of goals and quality requirements. The GQ-BPAOntoSOA framework, a goal-based and quality-linked extension of BPAOntoSOA, has been evaluated using the Cancer Care Registration process, the same case study used in the evaluation of the original framework; this was required in order to investigate the implications of integrating goals and quality requirements into the pre-existing framework-driven candidate software services. The evaluation has shown that: (1) the GQOnt ontology not only extends the BPAOntoSOA framework but also provides a semantic representation of an organisation's business strategy view; it acts as an independent repository of knowledge enabling early agreement between stakeholders on business goals and quality requirements, and this semantic representation can be reused for different purposes as needed; (2) the alignment approach bridges the gap between goal-oriented models and Riva-based business process architectures; (3) the Riva business process architecture modelling method and business process models have been enriched with goals and quality requirements, providing a richer representation that reflects important information for the given organisation; (4) the service identification approach used in the original BPAOntoSOA framework has been enriched with goals and quality requirements, which affected the identification of candidate software services (clusters) and their capabilities, and the derived candidate software services conform to service-oriented architecture principles. Accordingly, this research has bridged the gap between the BPAOntoSOA framework and business goals and quality requirements, and is anticipated to lead to highly consistent, correct and complete software service specifications.
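A toy sketch of what the resulting semantic representation might enable: candidate services linked to the goals they satisfy and the quality requirements that constrain them, queried with SPARQL via rdflib. The namespace, property names and individuals are illustrative assumptions, not the actual GQOnt/BPAOnt vocabularies.

```python
# pip install rdflib
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/gq-bpaontosoa#")  # hypothetical namespace
g = Graph()

# Toy instantiation: one candidate service (an RPA cluster) linked to the
# business goal it satisfies and a quality requirement constraining it.
g.add((EX.RegisterPatientService, RDF.type, EX.CandidateService))
g.add((EX.RegisterPatientService, EX.satisfiesGoal, EX.AccurateCancerRegistration))
g.add((EX.RegisterPatientService, EX.constrainedBy, EX.ResponseTimeUnder2s))
g.add((EX.ResponseTimeUnder2s, RDF.type, EX.QualityRequirement))

q = """
PREFIX ex: <http://example.org/gq-bpaontosoa#>
SELECT ?service ?goal ?quality WHERE {
    ?service a ex:CandidateService ;
             ex:satisfiesGoal ?goal ;
             ex:constrainedBy ?quality .
}
"""
for service, goal, quality in g.query(q):
    print(service, goal, quality)
```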

    Database marketing intelligence methodology supported by ontologies and knowledge discovery in databases

    Doctoral thesis in Information Systems and Technologies. Nowadays, organisations operate in environments characterised by volatility, intense competition and pressure to develop new approaches to the market and to clients. In this context, access to information, decision support and knowledge sharing become essential for organisational performance. In the marketing domain, several approaches for exploring database content have emerged; one of the most successful has been the process of knowledge discovery in databases. At the same time, the need to represent and share knowledge has contributed to the growing development of ontologies in areas as diverse as medicine, aviation and safety. This work crosses several areas: information systems and technologies (specifically knowledge discovery in databases), marketing (specifically database marketing) and ontologies. The main goal of this investigation is the role of ontologies in supporting and assisting the knowledge discovery process in databases in a database marketing context. Through distinct approaches, two ontologies were created: an ontology for the knowledge discovery process in databases, and an ontology for the database marketing process supported by knowledge extraction from databases (reusing the former ontology). Knowledge elicitation and validation were based on the Delphi method (database marketing ontology) and on a literature review (knowledge discovery ontology). The construction of the ontologies followed two methodologies: Methontology for the knowledge discovery ontology, and Methodology 101 for the database marketing ontology. The latter methodology emphasises ontology reuse, which enabled the knowledge discovery ontology to be reused within the database marketing ontology. Both ontologies were developed with the Protege-OWL tool, which allows not only the creation of the full hierarchy of classes, properties and relationships, but also the execution of inference methods through Semantic Web rule-based languages. The ontologies were then tested in practical cases of knowledge extraction from marketing databases. The application of ontologies in this investigation represents a pioneering and innovative approach, since they are proposed to assist each phase of the knowledge extraction process in a database marketing context through inference methods. Through inference over the knowledge base, it was possible to assist the user in each phase of the database marketing process, for example in selecting marketing activities according to marketing objectives (e.g., customer profiling), in data selection (e.g., the types of data to use for a given activity), and in algorithm selection (e.g., inferring the type of algorithm to use for a defined objective, or the data pre-processing activities to develop given the types of data and attribute information). Integrating the two ontologies in a broader context allows a methodology to be proposed for the effective support of the database marketing process based on knowledge discovery in databases, named in this dissertation Database Marketing Intelligence. To demonstrate the viability of the proposed methodology, the action-research method was followed, through which the role of ontologies in supporting knowledge discovery in databases was observed and tested in a practical database marketing case. The practical work was carried out on a real database of a customer loyalty card from an oil company operating in Portugal. The results demonstrated the success of the proposed approach in two ways: on the one hand, it was possible to formalise and follow the whole knowledge discovery in databases process; on the other hand, it was possible to devise a methodology for a concrete domain supported by ontologies (decision support in the selection of methods and tasks) and by knowledge discovery in databases. Fundação para a Ciência e a Tecnologia (FCT) - SFRH/BD/36541/200
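A minimal sketch, assuming a simplified rule base, of the style of assistance described above: inferring a knowledge-discovery task and candidate algorithms from a marketing objective. The mappings are illustrative, not the dissertation's actual ontology rules, which were expressed with Protege-OWL and Semantic Web rule languages.

```python
# Hypothetical rule base: marketing objective -> (KDD task, candidate algorithms)
RULES = {
    "customer_profiling": ("clustering",     ["k-means", "SOM"]),
    "churn_prediction":   ("classification", ["decision tree", "logistic regression"]),
    "cross_selling":      ("association",    ["Apriori"]),
}

def recommend(objective: str) -> dict:
    """Emulate the inference step: look up the task and algorithms for an objective."""
    task, algorithms = RULES[objective]
    return {"objective": objective, "kdd_task": task, "algorithms": algorithms}

print(recommend("customer_profiling"))
# -> {'objective': 'customer_profiling', 'kdd_task': 'clustering',
#     'algorithms': ['k-means', 'SOM']}
```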

    Knowledge-based Methods for Integrating Carbon Footprint Prediction Techniques into New Product Designs and Engineering Changes.

    This dissertation presents research on the development of knowledge-based techniques for assessing the carbon footprint during new product creation. The research aims to transform the current time-consuming, off-line and reactive approach into an integrated proactive approach that relies on fast estimates of sustainability generated from past computations on similar products. The developed methods address multiple challenges by leveraging the latest advancements in open standards and software capabilities, from machine learning and data mining, to support integration and early decision-making using generic knowledge of the product development field. Life-Cycle Assessment (LCA)-based carbon footprint calculation typically starts by analysing the product functions. However, the lack of a semantically correct formal representation of product functions is a barrier to their effective capture and reuse. We first identified the advanced semantics that must be captured to ensure appropriate usability for reasoning with product functions, and captured them in a Function Semantics Representation that relies on the Semantic Web Rule Language, a proposed Semantic Web standard, to overcome limitations of the commonly used Web Ontology Language. Many products are developed as Engineering Changes (ECs) of previous products, but not enough data is available before their implementation to predict the carbon footprint. To use past EC knowledge for this purpose, we proposed an approach for computing similarity between ECs that overcomes the hierarchical nature of product knowledge by integrating an approach inspired by research in psychology with semantics specific to product development. We embedded this in a parallelised ant-colony-based clustering algorithm for faster retrieval of groups of similar ECs. We are not aware of prior approaches that predict the carbon footprint of an EC or a proposed design immediately after the proposal. To reuse carbon footprint information from past designs and engineering changes, key parameters were determined to represent lifecycle attributes, and the carbon footprint is predicted through a surrogate LCA technique developed using case-based reasoning and boosted learning.
Ph.D., Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78846/1/scyang_1.pd
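A hedged sketch of the two reuse mechanisms named above: retrieval of similar past ECs, and a surrogate model trained on past LCA computations, using scikit-learn's gradient boosting as a stand-in for the dissertation's boosted-learning technique. The lifecycle attributes and footprint values are invented toy data.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical lifecycle attributes for past engineering changes:
# [mass_kg, recycled_fraction, transport_km, energy_kwh]
X_past = np.array([
    [1.2, 0.10,  500, 3.0],
    [0.8, 0.30,  200, 1.5],
    [2.5, 0.05, 1200, 6.1],
    [1.0, 0.25,  400, 2.2],
])
y_past = np.array([4.1, 2.0, 9.5, 2.9])  # kg CO2e from full LCA runs (made up)

def retrieve(x_new: np.ndarray, k: int = 2) -> np.ndarray:
    """Case-based retrieval: indices of the k nearest past ECs (normalised distance)."""
    scale = X_past.max(axis=0) - X_past.min(axis=0)
    d = np.linalg.norm((X_past - x_new) / scale, axis=1)
    return np.argsort(d)[:k]

# Surrogate model trained on past computations, reused for fast estimates
surrogate = GradientBoostingRegressor(random_state=0).fit(X_past, y_past)

x_new = np.array([1.1, 0.20, 450, 2.5])
print("similar past ECs:", retrieve(x_new))
print("predicted footprint (kg CO2e):", surrogate.predict([x_new])[0])
```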

    Mobile sensor networks for environmental monitoring

    Vulnerability to natural disasters and human pressure on natural resources have increased the need for environmental monitoring. Proper decisions, based on real-time information gathered from the environment, are critical to protecting human lives and natural resources. To this end, mobile sensor networks, such as wireless sensor networks, are promising sensing systems for flexible and autonomous gathering of such information. Mobile sensor networks consist of sensors geographically deployed very close to a phenomenon of interest. The sensors are autonomous, self-configured, small, lightweight and low-powered, and they become mobile when attached to mobile objects such as robots, people or bikes. Research on mobile sensor networks has focused primarily on using sensor mobility to reduce the main sensor network limitations in terms of network topology, connectivity and energy conservation. However, how sensor mobility could improve environmental monitoring remains largely unexplored. Addressing this requires consideration of two main mobility aspects: sampling and mobility constraints. Sampling concerns where mobile sensors should be moved, while mobility constraints concern how such movements should be handled, considering the context in which the monitoring is carried out. This thesis explores approaches for sensor mobility within a wireless sensor network for use in environmental monitoring. To achieve this goal, four sub-objectives were defined: (1) explore the use of metadata to describe the dynamic status of sensor networks; (2) develop a mobility constraint model to infer mobile sensor behaviour; (3) develop a method to adapt spatial sampling using mobile, constrained sensors; and (4) extend the developed adaptive sampling method to monitoring highly dynamic environmental phenomena.

Chapter 2 explores the use of metadata to describe the dynamic status of sensor networks. A context model is proposed to describe the general situation in which a sensor network is monitoring. The model consists of four types of contexts: sensor, network, sensing and organisation, where each context describes the sensor network from a different perspective. Metadata, which are descriptors of observed data, sensor configurations and functionalities, are used as parameters to describe what is happening in the different contexts. The results reveal that metadata are suitable for describing sensor network status within different contexts and for reporting the status back to other components, systems or users. Chapter 3 develops a model that describes mobility constraints for inferring mobile sensor behaviour. The proposed mobility constraint model consists of three components: first, the context typology proposed in Chapter 2, to describe mobility constraints within the different contexts; second, a context graph, modelled as a Bayesian network, to encode dependencies of mobility constraints within the same or different contexts, as well as between mobility constraints and sensor behaviour; and third, contextual rules to encode how dependent mobility constraints are expected to constrain sensor behaviour. Metadata values for the monitored phenomenon and sensor properties feed the context graph; they are propagated through the graph structure, and the contextual rules are used to infer the most suitable behaviour. The model was used to simulate the behaviour of a mobile sensor network to obtain suitable spatial coverage in low and high fire risk scenarios. It was shown that the mobility constraint model successfully inferred behaviour such as putting sensors to sleep, moving sensors and deploying more sensors to enhance spatial coverage.

Chapter 4 develops a spatial sampling strategy for mobile, constrained sensors based on the expected value of information (EVoI) and mobility constraints. EVoI supports decisions about which location to observe: it minimises the expected costs of wrong predictions about a phenomenon using a spatially aggregated EVoI criterion. Mobility constraints support decisions about which sensor to move: a cost-distance criterion is used to minimise unwanted effects of sensor mobility on the sensor network itself, such as energy depletion. The method was assessed by comparing it with a random selection of sample locations combined with sensor selection based on a minimum Euclidean distance criterion. The results demonstrate that EVoI enables selection of the most informative locations, while mobility constraints provide the context needed for sensor selection. Chapter 5 extends the method developed in Chapter 4 to highly dynamic phenomena, developing a method for deciding when and where to sample a dynamic phenomenon using mobile sensors. The optimisation criterion is to maximise the EVoI from a new sensor deployment at each time step. The method was demonstrated in a scenario in which a simulated fire in a chemical factory released polluted smoke into the open air. The plume varied in space and time because of variations in atmospheric conditions and could be only partially predicted by a deterministic dispersion model; in-situ observations acquired by mobile sensors were used to improve predictions. A comparison with random sensor movements, and with the previous sensor deployment without sensor movements, shows that the optimised sensor mobility successfully reduced the risk caused by poor model predictions.

Chapter 6 synthesises the main findings and presents my reflections on their implications. Mobile sensors for environmental monitoring are relevant because they allow the selection of sampling locations that deliver the information that most improves the quality of decisions for protecting human lives and natural resources. Mobility constraints are relevant to managing sensor mobility within sampling strategies. The traditional treatment of mobility constraints in computer science mainly leads to sensor self-protection rather than to the protection of human beings and natural resources. By contrast, when used for environmental monitoring, mobile sensors should above all improve monitoring performance, even though this might have negative effects on coverage, connectivity or energy consumption; mobility constraints are thus useful for reducing such negative effects without constraining the sampling strategy. Although sensor networks are now a mature technology, they are not yet widely used by surveyors and environmental scientists, and their operational use in geo-information and environmental sciences needs to be further stimulated. Although this thesis focuses on wireless sensor networks, other types of informal sensor networks, such as smartphones, volunteer citizens and the sensor web, could also be relevant for environmental monitoring. Finally, the following recommendations are given for further research: extend the sampling strategy for dynamic phenomena to take account of mobility constraints; develop sampling strategies that take a decentralised approach; focus on mobility constraints related to connectivity and data transmission; elicit expert knowledge to reveal preferences for sensor mobility under mobility constraints within different types of environmental applications; and validate the proposed strategies in operational implementations.
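A minimal sketch of the Chapter 4 selection logic under strong simplifying assumptions: binary hazard predictions, an EVoI based on a perfect observation, and a linear mobility penalty standing in for the cost-distance criterion. Costs, probabilities and distances are illustrative, not taken from the thesis.

```python
# Each candidate location has probability p of exceeding a hazard threshold.
# Observing a location removes its uncertainty, so the expected value of
# information (EVoI) equals the expected cost of acting on the current guess.
C_FALSE_ALARM, C_MISS = 1.0, 10.0

def evoi(p: float) -> float:
    """Expected cost of the best unobserved decision; a perfect observation
    drives this cost to zero, so it equals the value of observing."""
    return min(p * C_MISS,              # predict "safe": miss with probability p
               (1 - p) * C_FALSE_ALARM) # predict "hazard": false alarm otherwise

def select(locations: dict, sensors: dict, weight: float = 0.1):
    """Pick the (location, sensor) pair maximising EVoI minus a mobility cost."""
    return max(
        ((loc, s, evoi(p) - weight * dist)
         for loc, p in locations.items()
         for s, dist in sensors[loc].items()),
        key=lambda t: t[2],
    )

locations = {"A": 0.45, "B": 0.02}          # P(exceeds threshold) per location
sensors = {"A": {"s1": 2.0, "s2": 6.0},     # cost-distance of moving each
           "B": {"s1": 5.0, "s2": 1.0}}     # sensor to each location
print(select(locations, sensors))           # best is ('A', 's1'), net value 0.35
```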

    Ontology for the semantic representation of performance indicators considering aspects of vagueness, temporality and relationships between indicators

    Doctoral thesis, Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia e Gestão do Conhecimento, Florianópolis, 2016. Indicators are widely used by organisations to assess, measure and classify organisational performance. As an integral part of performance evaluation systems, indicators are often shared or compared across different internal sectors or even with other organisations. However, some indicators carry a degree of vagueness and imprecision, and also lack semantics. This thesis therefore presents a knowledge model based on ontology and fuzzy logic to represent indicators semantically and generically, handling imprecision and vagueness and additionally incorporating temporality and the relationships between indicators. Following the Design Science Research Methodology, the model was considered appropriate, with interviews showing the importance of representing imprecision, vagueness, temporality and the relationships between different indicators in the context of performance evaluation.
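A minimal sketch of how fuzzy logic can represent the vagueness of an indicator, assuming a hypothetical "on-time delivery" indicator with triangular membership functions; the linguistic terms and breakpoints are illustrative, not the thesis's model.

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_delivery_rate(rate: float) -> dict:
    """Degrees of membership of an 'on-time delivery %' indicator value."""
    return {
        "poor": triangular(rate, 0, 60, 80),
        "fair": triangular(rate, 60, 80, 95),
        "good": triangular(rate, 80, 100, 120),  # right shoulder beyond 100
    }

# An 85% on-time delivery rate is simultaneously somewhat 'fair' and somewhat
# 'good', which a crisp >= 90% cut-off could not express.
print(classify_delivery_rate(85.0))
# -> {'poor': 0.0, 'fair': 0.667, 'good': 0.25} (approximately)
```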

    A Model for a Data Dictionary Supporting Multiple Definitions, Views and Contexts

    In the field of clinical trials, precise definitions of terms are extremely important to ensure objective data collection and analysis. They also enable external experts to correctly interpret and apply research results. However, many clinical trials show deficits in this respect: definitions are often imprecise or used only implicitly. Moreover, terms are frequently defined inconsistently, although standardised definitions are desirable with a view to a wider exchange of results. Against this background, the idea of the Data Dictionary arose. Its initial goal is to collect the alternative definitions of terms and make them available to clinical trials. In addition, it is intended to support the analysis of definitions with respect to their commonalities and differences, as well as their harmonisation. Standardised definitions are not enforced, however, since differences between definitions may be justified by content, e.g., by use in different disciplines, by trial-specific conditions, or by differing expert views. In this work, a model for the Data Dictionary is developed. The model follows the concept-based approach known from terminology science and extends it with the ability to represent alternative definitions. In particular, it aims to make the differences between definitions as explicit as possible, in order to distinguish between semantically different definition alternatives (e.g., contradictory expert opinions) and consistent variants of one semantic definition (e.g., different views, or translations into different languages). Several model elements are also devoted to making contextual information explicit (e.g., validity within organisations, or the domain to which a concept belongs) in order to support the selection and reuse of definitions. This information allows different views of the contents of the Data Dictionary, where a view is regarded as a coherent subset of the Data Dictionary comprising only those contents specified as relevant in the selected context.
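A minimal sketch of the concept-based model described above, with alternative definitions carrying explicit context and views computed as context-dependent subsets; all class and field names are assumptions for illustration, not the thesis's actual model elements.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    domain: str           # e.g. "cardiology"
    organisation: str     # e.g. "trial network X"

@dataclass
class Definition:
    text: str
    context: Context
    variant_of: "Definition | None" = None  # consistent variant (view/translation)

@dataclass
class Concept:
    name: str
    definitions: list[Definition] = field(default_factory=list)

def view(concepts: list[Concept], ctx: Context) -> list[tuple[str, str]]:
    """A view = a coherent subset of the dictionary relevant in one context."""
    return [(c.name, d.text)
            for c in concepts
            for d in c.definitions
            if d.context.domain == ctx.domain]

mi = Concept("myocardial infarction", [
    Definition("troponin rise with ischaemia symptoms", Context("cardiology", "A")),
    Definition("ICD-coded discharge diagnosis I21", Context("epidemiology", "B")),
])
print(view([mi], Context("cardiology", "any")))
# -> only the cardiology definition is visible in this view
```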