Semantically valid integration of development processes and toolchains
As an indispensable component of today's world economy and an increasingly important success factor in production, other processes, and products, software must handle a growing number of specific requirements and influencing factors driven by globalization. Two common success factors in the domain of Software Systems Engineering are standardized software development processes and process-supported toolchains. Development processes should be formally integrated with toolchains, and the sequence and results of toolchains must be validated against the specifications of the development process on several levels. A conceptual deductive analysis shows that there is neither a formal general mapping nor a generally accepted validation mechanism for the challenges such an integrated concept faces. To close this research gap, this paper focuses on the core issue of integrating development processes and toolchains in order to create benefits for modeling and automation in the domain of systems engineering. It describes a self-developed integration approach based on the recently introduced prototypical technical implementation TOPWATER. A unified metamodel specifies how processes and toolchains are linked by a general mapping mechanism that considers test options on the structural, content, and semantic levels.
Concept Mining: A Conceptual Understanding based Approach
Due to the rapid daily growth of information, there is a considerable
need to extract and discover valuable knowledge from data sources such
as the World Wide Web. Most common text-mining techniques are based on
the statistical analysis of a term, either a word or a phrase. These
techniques treat documents as bags of words and pay no attention to
the meaning of the document content. Moreover, statistical analysis of
term frequency captures the importance of a term within a document
only. Two terms can have the same frequency in their documents, yet
one term may contribute more to the meaning of its sentences than the
other. Therefore, there is an intensive need for a model that captures
the meaning of linguistic utterances in a formal structure. The
underlying model should indicate the terms that capture the semantics
of the text. Such a model can identify the terms that represent the
concepts of a sentence, which in turn leads to discovering the topic
of the document.
A new concept-based model is introduced that analyzes terms on the
sentence, document, and corpus levels rather than relying on the
traditional document-level analysis only. The concept-based model can
effectively discriminate between terms that are unimportant to the
sentence semantics and terms that hold the concepts representing the
sentence meaning.
The proposed model consists of a concept-based statistical analyzer, a
conceptual ontological graph representation, a concept extractor, and
a concept-based similarity measure. Each term that contributes to the
sentence semantics is assigned two different weights, one by the
concept-based statistical analyzer and one by the conceptual
ontological graph representation. These two weights are combined into
a new weight, and the concepts with the maximum combined weights are
selected by the concept extractor. The similarity between documents is
calculated by a new concept-based similarity measure, which takes full
advantage of the concept analysis measures on the sentence, document,
and corpus levels.
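The two-weight idea can be sketched in a few lines of Python. The combination formula, the function names, and the particular definition of ctf used here are illustrative assumptions, not the thesis's exact definitions:

```python
# Sketch: combine a document-level term frequency with a sentence-level
# "conceptual" frequency, then keep the top-weighted terms as concepts.
from collections import Counter

def combined_weights(sentences):
    """sentences: list of token lists for one document."""
    doc_tf = Counter(t for s in sentences for t in s)
    # conceptual term frequency (assumed form): occurrences per sentence,
    # averaged over the sentences that contain the term
    ctf = {}
    for term in doc_tf:
        counts = [s.count(term) for s in sentences if term in s]
        ctf[term] = sum(counts) / len(counts)
    n = sum(doc_tf.values())
    # assumed combination: normalized tf scaled by ctf
    return {t: (doc_tf[t] / n) * ctf[t] for t in doc_tf}

def top_concepts(sentences, k=3):
    w = combined_weights(sentences)
    return [t for t, _ in sorted(w.items(), key=lambda kv: -kv[1])[:k]]
```

Terms that recur within sentences as well as across the document score highest, which mirrors the abstract's claim that sentence-level analysis separates concept-bearing terms from merely frequent ones.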
Large sets of experiments using the proposed concept-based model on
different datasets in text clustering, categorization, and retrieval
are conducted. The experiments provide an extensive comparison between
traditional weighting and the concept-based weighting obtained by the
proposed model. Experimental results in text clustering,
categorization, and retrieval demonstrate a substantial enhancement of
quality using: (1) the concept-based term frequency (tf), (2) the
conceptual term frequency (ctf), (3) the concept-based statistical
analyzer, (4) the conceptual ontological graph, and (5) the
concept-based combined model.
In text clustering, the evaluation relies on two quality measures: the
F-measure and the entropy. In text categorization, the evaluation
relies on three quality measures: the micro-averaged F1, the
macro-averaged F1, and the error rate. In text retrieval, the
evaluation relies on three quality measures: the precision at 10
retrieved documents P(10), the binary preference measure (bpref), and
the mean uninterpolated average precision (MAP). All of these quality
measures improve when the newly developed concept-based model is used
to enhance the quality of text clustering, categorization, and
retrieval.
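Two of the quoted categorization measures can be illustrated concisely, assuming binary per-category decisions: micro-averaged F1 pools true/false positives across categories, while macro-averaged F1 averages the per-category F1 scores. Function names here are illustrative, not from the thesis:

```python
# Minimal sketch of micro- vs. macro-averaged F1 over categories.
def f1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def micro_macro_f1(per_category):
    """per_category: list of (tp, fp, fn) tuples, one per category."""
    # micro: sum counts across categories, then compute one F1
    micro = f1(*(sum(x) for x in zip(*per_category)))
    # macro: compute F1 per category, then average
    macro = sum(f1(*c) for c in per_category) / len(per_category)
    return micro, macro
```

Micro-averaging is dominated by large categories, macro-averaging weights every category equally; the retrieval measures P(10), bpref, and MAP follow similar counting logic over ranked result lists.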
A model-driven approach to the conceptual modeling of situations : from specification to validation
The modeling of situation types for context-aware applications, also called
situation-aware
applications, is, on the one hand, a key task to the proper functioning of those
applications. On the other hand, it is also a hard task given the complexity and the
wide range of possible situation types. Aiming at facilitating the representation of
those types of situations at design-time, the Situation Modeling Language (SML) was
created. This language is based partially on rich ontological theories of conceptual
modeling and is accompanied by a platform for situation-detection at runtime.
Despite the benefits of this infrastructure, defining situation types
remains a non-trivial task and can still introduce problems that are
rarely detected by modelers through manual model inspection. This
thesis aims at improving and
facilitating the definition of situation types in SML by proposing: (i) the integration
between the language and the ontological theories of conceptual modeling by using
the OntoUML language, with the purpose of increasing the expressivity of situation
type models; and (ii) an approach for the validation of situation type models using a
lightweight formal method, aiming at increasing the correspondence between the
created models’ instances and the modeler’s intentions. Both the integration and the
validation are implemented in a tool for specification, verification and validation of
ontologically-enriched situation types.
Using ontology and semantic web services to support modeling in systems biology
This thesis addresses the problem of collaboration among experimental biologists and modelers in the study of systems biology by using ontology and Semantic Web Services techniques. Modeling in systems biology is concerned with using experimental information and mathematical methods to build quantitative models across different biological scales. This requires interoperation among various knowledge sources and services. Ontology and Semantic Web Services potentially provide an infrastructure to meet this requirement.
In our study, we propose an ontology-centered framework within the Semantic Web infrastructure that aims at standardizing various areas of knowledge involved in the biological modeling processes. In this framework, first we specify an ontology-based meta-model for building biological models. This meta-model supports using shared biological ontologies to annotate biological entities in the models, allows semantic queries and automatic discoveries, enables easy model reuse and composition, and serves as a basis to embed external knowledge. We also develop means of transforming biological data sources and data analysis methods into Web Services. These Web Services can then be composed together to perform parameterization in biological modeling. The knowledge of decision-making and workflow of parameterization processes are then recorded by the semantic descriptions of these Web Services, and embedded in model instances built on our proposed meta-model.
We use three cases of biological modeling to evaluate our framework. By examining our ontology-centered framework in practice, we conclude that by using ontology to represent biological models and using Semantic Web Services to standardize knowledge components in modeling processes, greater capabilities of knowledge sharing, reuse, and collaboration can be achieved. We also conclude that ontology-based biological models with formal semantics are essential to standardize knowledge in compliance with the Semantic Web vision.
Clifford Algebra: A Case for Geometric and Ontological Unification
Robert Batterman’s ontological insights (2002, 2004, 2005) are apt: Nature abhors singularities. “So should we,” responds the physicist. However, Batterman’s epistemic assessments of the matter prove to be less clear, for in the same vein he writes that singularities play an essential role in certain classes of physical theories referring to certain types of critical phenomena. I devise a procedure (“methodological fundamentalism”) which exhibits how singularities, at least in principle, may be avoided within the same classes of formalisms discussed by Batterman. I show that we need not accept some divergence between explanation and reduction (Batterman 2002), or between epistemological and ontological fundamentalism (Batterman 2004, 2005).
Though I remain sympathetic to the ‘principle of charity’ (Frisch (2005)), which appears to favor a pluralist outlook, I nevertheless call into question some of the forms such pluralist implications take in Robert Batterman’s conclusions. It is difficult to reconcile some of the pluralist assessments that he and some of his contemporaries advocate with what appears to be a countervailing trend in a burgeoning research tradition known as Clifford (or geometric) algebra.
In my critical chapters (2 and 3) I use some of the demonstrated formal unity of Clifford algebra to argue that Batterman (2002) conflates a physical theory’s ontology with its purely mathematical content. Carefully distinguishing the two and employing Clifford algebraic methods reveals a symmetry between reduction and explanation that Batterman overlooks. I refine this point by noting that geometric algebraic methods are an active area of research in computational fluid dynamics and, as applied to modeling droplet formation, appear to instantiate a “methodologically fundamental” approach.
I argue in my introductory and concluding chapters that the model of inter-theoretic reduction and explanation offered by Fritz Rohrlich (1988, 1994) provides the best framework for reconciling the burgeoning pluralism in philosophical studies of physics with the claims of formal unification exhibited by physicists’ choices of mathematical formalisms such as Clifford algebra. I show how Batterman’s insights can be reconstructed in Rohrlich’s framework, preserving his important philosophical work while discarding what I consider to be his incorrect conclusions.
A mathematical theory of semantic development in deep neural networks
An extensive body of empirical research has revealed remarkable regularities
in the acquisition, organization, deployment, and neural representation of
human semantic knowledge, thereby raising a fundamental conceptual question:
what are the theoretical principles governing the ability of neural networks to
acquire, organize, and deploy abstract knowledge by integrating across many
individual experiences? We address this question by mathematically analyzing
the nonlinear dynamics of learning in deep linear networks. We find exact
solutions to this learning dynamics that yield a conceptual explanation for the
prevalence of many disparate phenomena in semantic cognition, including the
hierarchical differentiation of concepts through rapid developmental
transitions, the ubiquity of semantic illusions between such transitions, the
emergence of item typicality and category coherence as factors controlling the
speed of semantic processing, changing patterns of inductive projection over
development, and the conservation of semantic similarity in neural
representations across species. Thus, surprisingly, our simple neural model
qualitatively recapitulates many diverse regularities underlying semantic
development, while providing analytic insight into how the statistical
structure of an environment can interact with nonlinear deep learning dynamics
to give rise to these regularities.
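For intuition, the kind of exact solution the abstract refers to can be sketched in the standard deep-linear-network notation (the symbols below follow that literature and are not defined in the abstract): for an input-output mode with singular value s, the effective mode strength a(t) evolves under gradient descent, from a small initial value a_0, as

```latex
% Exact mode-strength dynamics for a deep linear network (sketch):
% s = singular value of the input-output correlation matrix,
% tau = learning time constant, a_0 = small initial mode strength.
\tau \frac{da}{dt} = 2a\,(s - a), \qquad
a(t) = \frac{s\, e^{2st/\tau}}{e^{2st/\tau} - 1 + s/a_0}
```

The sigmoidal form, nearly flat while a(t) is small and then rising rapidly to s, is what produces the stage-like developmental transitions and semantic illusions between them that the abstract mentions.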
Extraction of ontology schema components from financial news
In this thesis we describe an incremental multi-layer rule-based methodology for the extraction of ontology schema components from German financial newspaper text. By extraction of ontology schema components we mean the detection of new concepts and of relations between these concepts for ontology building. The process of detecting concepts and the relations between them corresponds to the intensional part of an ontology and is often referred to as ontology learning. We present the process of rule generation for the extraction of ontology schema components as well as the application of the generated rules.
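The rule-based extraction idea can be illustrated with a single lexico-syntactic pattern, a "such as" rule in the style of Hearst patterns. The thesis's actual multi-layer rules for German financial text are far richer; the pattern and names below are purely hypothetical:

```python
# Hypothetical single-rule sketch: extract (subconcept, superconcept)
# pairs, i.e. candidate is-a relations, from a "X such as Y" pattern.
import re

ISA = re.compile(r"(\w+)\s+such as\s+(\w+)")

def extract_relations(text):
    """Return (subconcept, superconcept) pairs found by the pattern."""
    return [(m.group(2), m.group(1)) for m in ISA.finditer(text)]
```

A real ontology-learning pipeline would layer many such rules over linguistically annotated text and validate the candidate concepts and relations before adding them to the ontology schema.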