Quality of process modeling using BPMN: a model-driven approach
Dissertation submitted for the degree of Doctor in Informatics Engineering.
Context: The BPMN 2.0 specification contains the rules governing the correct usage of the language's constructs. Practitioners have also proposed best practices for producing better BPMN models. However, those rules are expressed in natural language, which sometimes yields ambiguous interpretations and, therefore, flaws in the produced BPMN models.
Objective: Ensuring the correctness of BPMN models is critical for the automation of processes. Hence, errors in the specification of BPMN models should be detected and corrected at design time, since faults detected at later stages of process development are more costly and harder to correct. We therefore need to assess the quality of BPMN models in a rigorous and systematic way.
Method: We follow a model-driven approach for formalization and empirical validation
of BPMN well-formedness rules and BPMN measures for enhancing the quality of
BPMN models.
Results: Rule mining of the BPMN specification, as well as of recently published BPMN works, allowed us to gather more than a hundred BPMN well-formedness and best-practice rules. Furthermore, we derived a set of BPMN measures aiming to provide process modelers with information regarding the correctness of BPMN models. Both the BPMN rules and the BPMN measures were empirically validated on samples of BPMN models.
Limitations: This work does not cover control-flow formal properties in BPMN models, since they were extensively discussed in other process modeling research works.
Conclusion: We intend to contribute to improving BPMN modeling tools through the formalization of well-formedness rules and BPMN measures to be incorporated into those tools, in order to enhance the quality of process modeling outcomes.
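The kind of well-formedness rule the abstract describes can be illustrated with a minimal, stdlib-only sketch. The two rules and the toy model below are illustrative only; the thesis formalizes over a hundred rules in a model-driven setting, not in plain Python.

```python
# Minimal sketch (not the thesis's formalization) of checking two BPMN
# well-formedness rules on a toy model: start events must have no incoming
# sequence flows, and end events must have no outgoing ones.

def check_well_formedness(nodes, flows):
    """nodes: {id: kind}; flows: [(source_id, target_id)]. Returns violations."""
    incoming = {n: 0 for n in nodes}
    outgoing = {n: 0 for n in nodes}
    for src, tgt in flows:
        outgoing[src] += 1
        incoming[tgt] += 1
    violations = []
    for node_id, kind in nodes.items():
        if kind == "startEvent" and incoming[node_id] > 0:
            violations.append(f"{node_id}: start event has incoming flow")
        if kind == "endEvent" and outgoing[node_id] > 0:
            violations.append(f"{node_id}: end event has outgoing flow")
    return violations

model_nodes = {"s": "startEvent", "t": "task", "e": "endEvent"}
model_flows = [("s", "t"), ("t", "e"), ("e", "t")]  # last flow is illegal
print(check_well_formedness(model_nodes, model_flows))
```

Such checks can run at design time inside a modeling tool, which is exactly where the thesis argues faults are cheapest to correct.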
Matching Weak Informative Ontologies
Most existing ontology matching methods utilize the literal information to
discover alignments. However, some literal information in ontologies may be
opaque and some ontologies may not have sufficient literal information. In this
paper, such ontologies are named weak informative ontologies (WIOs), and it
is challenging for existing methods to match WIOs. On one hand, string-based
and linguistic-based matching methods cannot work well for WIOs. On the other
hand, some matching methods use external resources to improve their
performance, but collecting and processing external resources is still
time-consuming. To address this issue, this paper proposes a practical method
for matching WIOs by employing the ontology structure information to discover
alignments. First, the semantic subgraphs are extracted from the ontology graph
to capture the precise meanings of ontology elements. Then, a new similarity
propagation model is designed for matching WIOs. Meanwhile, in order to avoid
meaningless propagation, the similarity propagation is constrained by semantic
subgraphs and other conditions. Consequently, the similarity propagation model
ensures a balance between efficiency and quality during matching. Finally, the
similarity propagation model uses a few credible alignments as seeds to find
more alignments, and some useful strategies are adopted to improve the
performance. This matching method for WIOs has been implemented in the ontology
matching system Lily. Experimental results on public OAEI benchmark datasets
demonstrate that Lily significantly outperforms most of the state-of-the-art
works in both WIO matching tasks and general ontology matching tasks. In
particular, Lily increases recall by a large margin while still obtaining high matching precision.
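A hedged, stdlib-only sketch of the constrained similarity-propagation idea: Lily's actual model operates on semantic subgraphs, whereas the decay factor, threshold, and toy graphs below are invented purely for illustration.

```python
# Structure-based similarity propagation (simplified): the similarity of a
# node pair reinforces its neighbouring pairs, starting from seed alignments.
# A threshold keeps low-confidence pairs from propagating meaninglessly.

def propagate(neigh1, neigh2, seeds, rounds=3, threshold=0.3, decay=0.5):
    """neigh1/neigh2: adjacency dicts of two ontologies; seeds: {(a, b): sim}."""
    sim = dict(seeds)
    for _ in range(rounds):
        updates = {}
        for (a, b), s in sim.items():
            if s < threshold:          # only credible pairs may propagate
                continue
            for na in neigh1.get(a, []):
                for nb in neigh2.get(b, []):
                    pair = (na, nb)
                    updates[pair] = max(updates.get(pair, 0.0), decay * s)
        for pair, s in updates.items():
            sim[pair] = max(sim.get(pair, 0.0), s)
    return sim

n1 = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
n2 = {"X": ["Y"], "Y": ["X", "Z"], "Z": ["Y"]}
result = propagate(n1, n2, {("A", "X"): 1.0})
print(result[("C", "Z")])  # reached via two propagation steps
```

Even with no literal information on C and Z, the pair acquires a similarity score purely from the structure around the single seed alignment, which is the point of matching WIOs this way.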
Using Natural Language Processing with Deep Learning to Explore Clinical Notes
In recent years, the deep learning community and its technology have grown substantially, both in research and in applications. However, some application areas have lagged behind. The medical domain is an example of a field with a lot of untapped potential, partly caused by complex issues related to privacy and ethics. Still, deep learning is a very powerful tool for utilizing structured and unstructured data, and it could help save lives. In this thesis, we use natural language processing to interpret clinical notes and predict the mortality of subjects. We explore whether language models trained on a specific domain become more performant, and we compare them to language models trained on an intermediate data set. We found that our language model trained on an intermediate data set with some resemblance to our target data set performed slightly better than its counterpart. We found that text classifiers built on top of the language models were capable of correctly predicting whether a subject would die. Furthermore, we extracted the free-text features from the text classifiers and combined them, using stacking, with heterogeneous data in an attempt to increase the efficacy of the classifiers and to explore the relative performance boost gained by including free-text features. We found a correlation between the quality of the text classifiers that produced the text features and the performance of the stacking classifiers: the classifier trained on a data set without text features performed the worst, and the classifier trained on a data set with the best text features performed the best. We also discuss the central concerns that come with applying deep learning in a medical domain with regard to privacy and ethics.
It is our intention that this thesis serves as a contribution to the advancement of deep learning within the medical domain, and as a testament to what can be achieved with today's technology.
Master's thesis in Software Development in collaboration with HVL (PROG399, MAMN-PRO).
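The stacking idea described above can be sketched in a deliberately trivial, stdlib-only form. The thesis's base learners are deep language models over real clinical notes; here both base learners, their keywords, and their weights are invented, and the "meta-learner" is just a fixed weighted vote, to show only the shape of the technique: base-learner outputs become the meta-level input.

```python
# Toy stacking sketch: one base learner over free text, one over structured
# features, combined by a meta-level rule. All scores/weights are invented.
import math

def text_learner(note):
    """Toy base learner: risk score from (hypothetical) alarming keywords."""
    keywords = {"sepsis", "intubated", "unresponsive"}
    hits = sum(word.strip(",.") in keywords for word in note.lower().split())
    return 1 / (1 + math.exp(-(hits - 1)))  # squash into (0, 1)

def tabular_learner(age, num_admissions):
    """Toy base learner over structured (heterogeneous) features."""
    return 1 / (1 + math.exp(-(0.03 * age + 0.2 * num_admissions - 3)))

def stacked_predict(note, age, num_admissions, w_text=0.6, w_tab=0.4):
    """Meta-learner: here just a fixed weighted vote over base outputs."""
    score = w_text * text_learner(note) + w_tab * tabular_learner(age, num_admissions)
    return score > 0.5

print(stacked_predict("patient unresponsive, suspected sepsis", 74, 3))  # True
print(stacked_predict("routine checkup", 30, 0))                          # False
```

In the thesis's setting, the meta-learner would itself be trained on held-out base-learner predictions rather than use fixed weights.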
Setting B2B digital marketing in artificial intelligence-based CRMs: A review and directions for future research
The new business challenges in the B2B sector are determined by connected ecosystems, where data-driven decision making is crucial for successful strategies. At the same time, the use of digital marketing as a communication and sales channel has led to the need for and use of Customer Relationship Management (CRM) systems to correctly manage company information. The use of CRMs that work with Artificial Intelligence (AI) in traditional B2B marketing strategies has been studied; however, research focused on the understanding and application of these technologies in B2B digital marketing is scarce. To cover this gap in the literature, this study develops a literature review of the main academic contributions in this area. To visualize the outcomes of the literature review, the results are analyzed using a statistical approach known as Multiple Correspondence Analysis (MCA) under the homogeneity analysis of variance by means of alternating least squares (HOMALS) framework, programmed in the R language. The research results classify the types of CRMs and their typologies and explore the main techniques and uses of AI-based CRMs in B2B digital marketing. In addition, a discussion, directions and propositions for future research are presented.
In gratitude to the Ministry of Science, Innovation and Universities and the European Regional Development Fund: RTI2018-096295-BC22.
Saura, JR.; Ribeiro-Soriano, D.; Palacios Marqués, D. (2021). Setting B2B digital marketing in artificial intelligence-based CRMs: A review and directions for future research. Industrial Marketing Management, 98:161-178. https://doi.org/10.1016/j.indmarman.2021.08.006
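The first step behind MCA/HOMALS is encoding categorical observations as a complete disjunctive (indicator) matrix, which the method then decomposes. The study itself uses R; the stdlib-only Python sketch below, with made-up variables and answer categories, shows only this encoding step.

```python
# Building the complete disjunctive (indicator) matrix that MCA decomposes.
# Variables and categories below are invented for illustration.

def indicator_matrix(records):
    """records: list of dicts {variable: category}. Returns 0/1 rows + column names."""
    columns = sorted({(var, val) for rec in records for var, val in rec.items()})
    matrix = [[1 if rec.get(var) == val else 0 for var, val in columns]
              for rec in records]
    return matrix, [f"{var}={val}" for var, val in columns]

answers = [
    {"crm_type": "ai", "channel": "email"},
    {"crm_type": "classic", "channel": "social"},
    {"crm_type": "ai", "channel": "social"},
]
matrix, names = indicator_matrix(answers)
print(names)
print(matrix)
```

Each row has exactly one 1 per variable; MCA then applies a weighted singular value decomposition to this matrix to place categories and observations in a low-dimensional map.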
Mining Scholarly Publications for Research Evaluation
Scientific research can lead to breakthroughs that revolutionise society by solving long-standing problems. However, investment of public funds into research requires the ability to clearly demonstrate beneficial returns, accountability, and good management. At the same time, with the amount of scholarly literature rapidly expanding, recognising key research that presents the most important contributions to science is becoming increasingly difficult and time-consuming. This creates a need for effective and appropriate research evaluation methods. However, the question of how to evaluate the quality of research outcomes is very difficult to answer and despite decades of research, there is still no standard solution to this problem.
Given this growing need for research evaluation, it is increasingly important to understand how research should be evaluated, and whether the existing methods meet this need. However, the current solutions, which are predominantly based on counting the number of interactions in the scholarly communication network, are insufficient for a number of reasons. In particular, they struggle to capture many aspects of academic culture and often lag significantly behind current developments.
This work focuses on the evaluation of research publications and aims at creating new methods which utilise publication content. It studies the concept of research publication quality, methods for assessing the performance of new publication evaluation methods, analyses and extends the existing methods, and, most importantly, presents a new class of metrics based on publication manuscripts. By bridging the fields of research evaluation and text and data mining, this work provides tools for analysing the outcomes of research and for relieving information overload in scholarly publishing.
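To make the idea of a manuscript-based indicator concrete, here is a deliberately simple, hypothetical example in the spirit described above. It is not one of the thesis's actual metrics; it merely measures the share of a publication's vocabulary unseen in an earlier corpus, as a crude proxy for lexical novelty.

```python
# Hypothetical manuscript-based indicator: vocabulary novelty against a prior
# corpus. Real content-based metrics would be far more sophisticated.

def vocabulary_novelty(manuscript, prior_corpus):
    """Fraction of the manuscript's distinct terms absent from the prior corpus."""
    terms = set(manuscript.lower().split())
    prior = set(" ".join(prior_corpus).lower().split())
    return len(terms - prior) / len(terms)

prior = ["graph neural networks", "neural ranking"]
print(vocabulary_novelty("quantum graph kernels", prior))  # 2 of 3 terms are new
```

Unlike interaction counts such as citations, an indicator of this kind is available the moment the manuscript exists, which is one motivation for content-based evaluation.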
Investigating the universality of a semantic web-upper ontology in the context of the African languages
Ontologies are foundational to, and upper ontologies provide semantic integration across, the Semantic Web. Multilingualism has been shown to be a key challenge to the development of the Semantic Web, and is a particular challenge to the universality requirement of upper ontologies. Universality implies a qualitative mapping from lexical ontologies, like WordNet, to an upper ontology, such as SUMO. Are a given natural language family's core concepts currently included
in an existing, accepted upper ontology? Does SUMO preserve an ontological non-bias with respect to the multilingual challenge, particularly in the context of the African languages? The approach to developing WordNets mapped to shared core concepts in the non-Indo-European language families has highlighted these challenges and this is examined in a unique new context: the Southern African
languages. This is achieved through a new mapping from African language core concepts to SUMO. It is shown that SUMO has no significant natural language ontology bias.
Computing, M.Sc. (Computer Science)
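The coverage question posed above can be sketched as a simple check of which core concepts already map to an upper-ontology term. The mini mapping below is made up (real mappings go from WordNet synsets to SUMO terms, and the SUMO labels here are illustrative only).

```python
# Sketch: what fraction of a language family's core concepts map to an
# upper-ontology term, and which concepts are left unmapped?

def mapping_coverage(core_concepts, concept_to_sumo):
    unmapped = [c for c in core_concepts if c not in concept_to_sumo]
    coverage = 1 - len(unmapped) / len(core_concepts)
    return coverage, unmapped

core = ["rain", "cattle", "ancestor", "ubuntu"]          # invented concept list
to_sumo = {"rain": "Raining", "cattle": "Cattle",        # invented mapping
           "ancestor": "FamilyRelation"}
coverage, unmapped = mapping_coverage(core, to_sumo)
print(coverage, unmapped)
```

Concepts that remain unmapped, such as culturally specific ones, are exactly where a universality (non-bias) claim about an upper ontology is tested.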
A Networked Perspective on the Engineering Design Process: At the Intersection of Process and Organisation Architectures
The design process of engineering systems frequently involves hundreds of activities and people over long periods of time and is implemented through complex networks of information exchanges. Such socio-technical complexity makes design processes hard to manage, and as a result, engineering design projects often fail to be on time, on budget, and meeting specifications. Despite the wealth of process models available, previous approaches have been insufficient to provide a networked perspective that allows the challenging combination of organisational and process complexity to unfold. The lack of a networked perspective also has limited the study of the relationships between process complexity and process performance. This thesis argues that to understand and improve design processes, we must look beyond the planned process and unfold the network structure and composition that actually implement the process. This combination of process structure—how people and activities are connected—and composition—the functional diversity of the groups participating in the process—is referred to as the actual design process architecture.
This thesis reports on research undertaken to develop, apply and test a framework that characterises the actual design process architecture of engineering systems as a networked process. Research described in this thesis involved literature reviews in Engineering Design, Engineering Systems, Complexity and applied Network Science, and two case studies at engineering design companies with the objective of iteratively developing the framework and providing a proof-of-concept of its use in a large engineering design project.
The developed Networked Process (NPr) Framework is composed of a conceptual model of the actual design process architecture, and an analytical method that allows the model and data-driven support to be quantified. The framework provides a networked perspective on three fundamental levels of analysis: 1) the activity-level, characterised as a network of people performing each activity, 2) the interface-level, characterised as a network of people interfacing between two interdependent activities, and 3) the whole process-level, characterised as a dynamic network of people and activities. The aim of the framework is to improve the design process of engineering systems through a more detailed overview of the actual design process, to support data-driven reflection of the relationship between process architecture and performance, and to provide the means to compare process plans against the actual process. The framework is based on a multi-domain network approach to process architecture and draws on previous research using matrix-based and graph-based process models.
The results of the NPr Framework’s application in two case studies showed that decision makers in engineering design projects were able to gain new insights into their complex design processes through the framework. Such insights allowed them to better support and manage design activities, process interfaces and the whole design process. The framework was also used to enrich project debriefing and lessons-learned sessions, to spot process anomalies, to improve design process planning, to examine process progress, and to identify relationships between process architecture and performance. Contributions to knowledge include: First, the development of a more complete model of the actual process architecture and concrete analytical methods to quantify the developed model. Second, the identification of key structural and compositional variables as well as tests to identify the relationship between those variables and performance metrics. Third, the creation of a platform for further research on the relationships between actual design process architecture, behaviour and performance.
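The first two levels of analysis named above can be sketched on invented data with plain Python sets: the people performing each activity, and the people who sit on the interface between two interdependent activities. The activities, names, and dependencies below are hypothetical, and the real framework quantifies far richer structural and compositional variables.

```python
# Activity-level data (who performs each activity) and the derived
# interface-level network (who bridges each activity dependency).

activity_people = {
    "define_requirements": {"ana", "bo", "chen"},
    "detail_design": {"bo", "dia"},
    "testing": {"dia", "eli"},
}
dependencies = [("define_requirements", "detail_design"),
                ("detail_design", "testing")]

def interface_network(activity_people, dependencies):
    """People appearing on both sides of each activity dependency."""
    return {(a, b): activity_people[a] & activity_people[b]
            for a, b in dependencies}

print(interface_network(activity_people, dependencies))
```

An empty intersection on a dependency would flag an interface with no shared participant, the kind of process anomaly the case studies used the framework to spot.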
Data Service Outsourcing and Privacy Protection in Mobile Internet
Mobile Internet data have the characteristics of large scale, varied patterns, and complex associations. On the one hand, efficient data processing models are needed to support data services; on the other hand, certain computing resources are needed to provide data security services. Due to the limited resources of mobile terminals, it is impossible to complete large-scale data computation and storage locally. However, outsourcing to third parties may pose risks to user privacy. This monograph focuses on key technologies of data service outsourcing and privacy protection, including existing methods of data analysis and processing, fine-grained data access control through effective user privacy protection mechanisms, and data sharing in the mobile Internet.
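Fine-grained access control of the kind discussed above can be sketched as an attribute-based policy check. Real outsourcing schemes enforce such policies cryptographically (for example with attribute-based encryption, so the outsourced party never sees plaintext); the stdlib-only sketch below, with invented attributes, models only the policy logic.

```python
# Attribute-based policy sketch: a record's policy is a disjunction of
# attribute sets; a user satisfying any one set in full is granted access.

def policy_allows(policy, user_attrs):
    """policy: list of required attribute sets; user_attrs: the user's attributes."""
    return any(required <= user_attrs for required in policy)

record_policy = [{"role:doctor", "dept:cardiology"}, {"role:auditor"}]
print(policy_allows(record_policy, {"role:doctor", "dept:cardiology", "site:hq"}))  # True
print(policy_allows(record_policy, {"role:doctor", "dept:oncology"}))               # False
```

Expressing access rights over attribute combinations, rather than per-user lists, is what makes the control fine-grained while keeping policies manageable at scale.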
Computational Methods for Medical and Cyber Security
Over the past decade, computational methods, including machine learning (ML) and deep learning (DL), have been growing exponentially in their development of solutions in various domains, especially medicine, cybersecurity, finance, and education. While these applications of machine learning algorithms have proven beneficial in various fields, many shortcomings have also been highlighted, such as the lack of benchmark datasets, the inability to learn from small datasets, the cost of architecture, adversarial attacks, and imbalanced datasets. On the other hand, new and emerging algorithms, such as deep learning, one-shot learning, continuous learning, and generative adversarial networks, have successfully solved various tasks in these fields. Therefore, applying these new methods to life-critical missions is crucial, as is measuring the success of these less-traditional algorithms when used in these fields.