Knowledge discovery through creating formal contexts
Knowledge discovery is important for computationally intelligent systems
because it helps them learn and adapt to changing environments. By representing, in
a formal way, the context in which an intelligent system
operates, it is possible to discover knowledge through an
emerging data technology called Formal Concept Analysis
(FCA). This paper describes a tool called FcaBedrock that
converts data into Formal Contexts for FCA. The paper
describes how, through a process of guided automation,
data preparation techniques such as attribute exclusion and
value restriction allow data to be interpreted to meet the requirements
of the analysis. Creating Formal Contexts using
FcaBedrock is shown to be straightforward and versatile.
Large data sets are easily converted into a standard FCA format.
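A formal context is simply a binary incidence relation between objects and attributes, so the conversion can be sketched compactly. The Python below is a minimal illustration, not FcaBedrock itself: the sample records, the exclusion/restriction settings and the assumed Burmeister .cxt layout are all invented for the example.

    def to_formal_context(records, excluded=(), restricted=None):
        # Scale many-valued records into a binary (object x attribute) context,
        # applying attribute exclusion and value restriction along the way.
        restricted = restricted or {}
        def keep(k, v):
            return k not in excluded and (k not in restricted or v in restricted[k])
        attributes = sorted({f"{k}={v}" for r in records for k, v in r.items() if keep(k, v)})
        incidence = [{f"{k}={v}" for k, v in r.items() if keep(k, v)} for r in records]
        return attributes, incidence

    def write_cxt(path, objects, attributes, incidence):
        # Write a Burmeister-style .cxt file (layout assumed): header, counts,
        # object and attribute names, then one X/. row per object.
        with open(path, "w") as f:
            f.write(f"B\n\n{len(objects)}\n{len(attributes)}\n\n")
            f.writelines(name + "\n" for name in objects + attributes)
            for row in incidence:
                f.write("".join("X" if a in row else "." for a in attributes) + "\n")

    records = [{"habitat": "land", "legs": "4"}, {"habitat": "air", "legs": "2"}]
    attrs, inc = to_formal_context(records, excluded={"legs"})
    write_cxt("context.cxt", ["o1", "o2"], attrs, inc)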
Generic PLM system for SMEs: Application to an equipment manufacturer
For several years, digital engineering has played an increasingly important role in the strategic concerns of mechanical engineering companies. We propose an approach that enables technical data to be managed and used throughout the product life-cycle. This approach aims to provide assistance for costing, development and industrialization of the product, and for the capitalization, reuse and extension of fundamental knowledge. The approach has been tested in several companies. This paper presents a case study of a company that designs and produces families of ship equipment parts.
Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design
The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (vs. general-purpose) software design. Some general issues that cut across particular software design domains include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interfaces.
StackInsights: Cognitive Learning for Hybrid Cloud Readiness
Hybrid cloud is an integrated cloud computing environment utilizing a mix of
public cloud, private cloud, and on-premise traditional IT infrastructures.
Workload awareness, defined as a detailed, full-range understanding of each
individual workload, is essential in implementing the hybrid cloud. While it is
critical to perform an accurate analysis to determine which workloads are
appropriate for on-premise deployment versus which workloads can be migrated to
a cloud off-premise, the assessment is mainly performed by rule- or policy-based
approaches. In this paper, we introduce StackInsights, a novel cognitive system
to automatically analyze and predict the cloud readiness of workloads for an
enterprise. Our system harnesses the critical metrics across the entire stack:
1) infrastructure metrics, 2) data relevance metrics, and 3) application
taxonomy, to identify workloads that have characteristics of a) low sensitivity
with respect to business security, criticality and compliance, and b) low
response time requirements and access patterns. Since the capture of the data
relevance metrics involves an intrusive and in-depth scanning of the content of
storage objects, a machine learning model is applied to perform the business
relevance classification by learning from the meta-level metrics harnessed
across the stack. In contrast to traditional methods, StackInsights significantly
reduces the total time for hybrid cloud readiness assessment by orders of
magnitude.
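The core machine-learning step, classifying business relevance from meta-level metrics instead of scanning object contents, can be hedged into a small sketch. Everything below is illustrative: the feature set, the synthetic labels and the choice of a random forest are assumptions, not details from the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 500
    # Non-intrusive, meta-level features per storage object (all invented):
    # size in MB, days since last access, owner-department id, taxonomy id.
    X = np.column_stack([
        rng.lognormal(3, 1, n),
        rng.integers(0, 2000, n),
        rng.integers(0, 8, n),
        rng.integers(0, 5, n),
    ])
    # Ground truth would come from a sampled deep content scan; here we fake
    # it: recently touched objects in certain taxonomies count as relevant.
    y = ((X[:, 3] < 2) & (X[:, 1] < 700)).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())  # held-out accuracy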
Towards an ontology for process monitoring and mining
Business Process Analysis (BPA) aims at monitoring, diagnosing, simulating and mining enacted processes in order to support the analysis and enhancement of process models. An effective BPA solution must provide the means for analysing existing e-businesses at three levels of abstraction: the Business Level, the Process Level and the IT Level. BPA requires semantic information that spans these layers of abstraction and that should be easily retrieved from audit trails. To cater for this, we describe the Process Mining Ontology and the Events Ontology, which aim to support the analysis of enacted processes at different levels of abstraction, spanning from fine-grained technical details to coarse-grained aspects at the Business Level.
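As a rough illustration of the layering idea (not the actual Process Mining or Events Ontology, whose vocabularies the paper defines), one can imagine each audit-trail event carrying concepts at all three levels so that queries can be posed at any of them:

    from dataclasses import dataclass

    @dataclass
    class AnnotatedEvent:
        timestamp: str
        it_concept: str        # IT Level, e.g. a message or database action
        process_concept: str   # Process Level, e.g. an activity life-cycle step
        business_concept: str  # Business Level, e.g. the value chain it serves

    log = [
        AnnotatedEvent("2010-05-01T09:00", "HTTPRequest", "ReceiveOrder:started", "OrderFulfilment"),
        AnnotatedEvent("2010-05-01T09:05", "DBCommit", "ReceiveOrder:completed", "OrderFulfilment"),
    ]

    # A coarse-grained Business Level question answered from fine-grained data:
    print({e.business_concept for e in log})  # {'OrderFulfilment'}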
UML-F: A Modeling Language for Object-Oriented Frameworks
The paper presents the essential features of a new member of the UML language
family that supports working with object-oriented frameworks. This UML
extension, called UML-F, allows the explicit representation of framework
variation points. The paper discusses some of the relevant aspects of UML-F,
which is based on standard UML extension mechanisms. A case study shows how it
can be used to assist framework development. A discussion of additional tools
for automating framework implementation and instantiation rounds out the paper.
Comment: 22 pages, 10 figures
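UML-F's variation points annotate, at the design level, the hook structure that framework code exhibits. The sketch below is a generic code-level analogue in Python rather than anything from the paper: a frozen spot (invariant template method) delegating to a hot spot (hook) that each framework instantiation varies. All class names are invented.

    from abc import ABC, abstractmethod

    class ReportFramework(ABC):
        # Frozen spot: the workflow the framework fixes for all applications.
        def run(self):
            print(self.render(self.load()))

        # Optional hook with a default; applications may override it.
        def load(self):
            return ["alpha", "beta"]

        # Hot spot (variation point): every instantiation must supply this.
        @abstractmethod
        def render(self, data): ...

    class CsvReport(ReportFramework):   # one framework instantiation
        def render(self, data):
            return ",".join(data)

    CsvReport().run()                   # prints: alpha,beta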
A Factor Graph Approach to Automated Design of Bayesian Signal Processing Algorithms
The benefits of automating design cycles for Bayesian inference-based
algorithms are becoming increasingly recognized by the machine learning
community. As a result, interest in probabilistic programming frameworks has
increased considerably over the past few years. This paper explores a specific
probabilistic programming paradigm, namely message passing in Forney-style
factor graphs (FFGs), in the context of automated design of efficient Bayesian
signal processing algorithms. To this end, we developed "ForneyLab"
(https://github.com/biaslab/ForneyLab.jl) as a Julia toolbox for message
passing-based inference in FFGs. We show by example how ForneyLab enables
automatic derivation of Bayesian signal processing algorithms, including
algorithms for parameter estimation and model comparison. Crucially, due to the
modular makeup of the FFG framework, both the model specification and inference
methods are readily extensible in ForneyLab. In order to test this framework,
we compared variational message passing as implemented by ForneyLab with
automatic differentiation variational inference (ADVI) and Monte Carlo methods
as implemented by state-of-the-art tools "Edward" and "Stan". In terms of
performance, extensibility and stability issues, ForneyLab appears to enjoy an
edge relative to its competitors for automated inference in state-space models.
Comment: Accepted for publication in the International Journal of Approximate
Reasoning
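The flavour of what such toolboxes automate can be shown numerically. The snippet below hand-codes Gaussian message passing for a single step of a linear state-space model, the kind of derivation ForneyLab performs symbolically from a model specification; all values and function names are invented for the illustration.

    def through_transition(m, v, q):
        # Forward message through x1 = x0 + N(0, q): mean unchanged, variance grows.
        return m, v + q

    def combine(m1, v1, m2, v2):
        # Product of two Gaussian messages gives the posterior marginal.
        v = 1.0 / (1.0 / v1 + 1.0 / v2)
        return v * (m1 / v1 + m2 / v2), v

    m0, v0 = 0.0, 1.0      # prior on x0
    q, r = 0.5, 0.2        # process and observation noise variances
    y = 1.3                # observation of x1

    m_pred, v_pred = through_transition(m0, v0, q)   # prediction message
    m_post, v_post = combine(m_pred, v_pred, y, r)   # fuse with likelihood message
    print(m_post, v_post)  # matches the one-step Kalman filter update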
Theory and Practice of Data Citation
Citations are the cornerstone of knowledge propagation and the primary means
of assessing the quality of research, as well as directing investments in
science. Science is increasingly becoming "data-intensive", where large volumes
of data are collected and analyzed to discover complex patterns through
simulations and experiments, and most scientific reference works have been
replaced by online curated datasets. Yet, given a dataset, there is no
quantitative, consistent and established way of knowing how it has been used
over time, who contributed to its curation, what results have been yielded or
what value it has.
The development of a theory and practice of data citation is fundamental for
considering data as first-class research objects with the same relevance and
centrality of traditional scientific products. Many works in recent years have
discussed data citation from different viewpoints: illustrating why data
citation is needed, defining the principles and outlining recommendations for
data citation systems, and providing computational methods for addressing
specific issues of data citation.
The current panorama is many-faceted and an overall view that brings together
diverse aspects of this topic is still missing. Therefore, this paper aims to
describe the lay of the land for data citation, both from the theoretical (the
why and what) and the practical (the how) angle.Comment: 24 pages, 2 tables, pre-print accepted in Journal of the Association
for Information Science and Technology (JASIST), 201
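One concrete flavour of the computational methods this literature discusses is query-based citation of evolving datasets: pinning the exact subset, version and execution time so the citation stays verifiable as the data changes. The sketch below is illustrative only; the field names and checksum scheme are assumptions, not any standard.

    import hashlib, json
    from datetime import datetime, timezone

    def cite_query(dataset_pid, query, version):
        record = {
            "dataset": dataset_pid,   # persistent identifier of the dataset
            "query": query,           # the exact subset the work relied on
            "version": version,       # dataset version when the query ran
            "executed": datetime.now(timezone.utc).isoformat(),
        }
        # Fingerprint lets a reader verify a re-executed query against the citation.
        record["checksum"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()[:16]
        return record

    print(cite_query("doi:10.1234/demo", "SELECT * FROM obs WHERE year = 2015", "v7"))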