From RT-LOTOS to Time Petri Nets: new foundations for a verification platform
The formal description technique RT-LOTOS has been selected as an intermediate language to add formality to a real-time UML profile named TURTLE. To this end, an RT-LOTOS verification platform has been developed for the early detection of design errors in real-time system models. The paper discusses an extension of the platform through the inclusion of verification tools developed for Time Petri Nets (TPNs). The starting point is the definition of RT-LOTOS-to-TPN translation patterns. In particular, we introduce the concept of components embedding Time Petri Nets. The translation patterns are implemented in a prototype tool which takes an RT-LOTOS specification as input and outputs a TPN in the format accepted by the TINA tool. The efficiency of the proposed solution is demonstrated on various case studies.
Conceptual graph-based knowledge representation for supporting reasoning in African traditional medicine
Although African patients use both conventional (or modern) and traditional healthcare simultaneously, it has been shown that 80% of people rely on African traditional medicine (ATM). ATM includes medical activities stemming from practices, customs and traditions which were integral to the distinctive African cultures. It is based mainly on the oral transfer of knowledge, with the risk of losing critical knowledge. Moreover, practices differ according to the regions and the availability of medicinal plants. Therefore, it is necessary to compile tacit, disseminated and complex knowledge from various Tradi-Practitioners (TP) in order to determine interesting patterns for treating a given disease. Knowledge engineering methods for traditional medicine are useful to suitably model complex information needs, formalize the knowledge of domain experts and highlight effective practices for their integration into conventional medicine. The work described in this paper presents an approach which addresses two issues. First, it aims at proposing a formal representation model of ATM knowledge and practices to facilitate their sharing and reuse. Second, it aims at providing a visual reasoning mechanism for selecting the best available procedures and medicinal plants to treat diseases. The approach is based on the use of the Delphi method for capturing knowledge from various experts, which necessitates reaching a consensus. Conceptual graph formalism is used to model ATM knowledge with visual reasoning capabilities and processes. Nested conceptual graphs are used to visually express the semantic meaning of Computational Tree Logic (CTL) constructs that are useful for the formal specification of temporal properties of ATM domain knowledge. Our approach presents the advantage of mitigating knowledge loss with conceptual development assistance to improve the quality of ATM care (medical diagnosis and therapeutics), but also patient safety (drug monitoring).
Characterizing Geo-located Tweets in Brazilian Megacities
This work presents a framework for collecting, processing and mining
geo-located tweets in order to extract meaningful and actionable knowledge in
the context of smart cities. We collected and characterized more than 9M tweets
from the two biggest cities in Brazil, Rio de Janeiro and São Paulo. We
performed topic modeling using the Latent Dirichlet Allocation model to produce
an unsupervised distribution of semantic topics over the stream of geo-located
tweets as well as a distribution of words over those topics. We manually
labeled and aggregated similar topics, obtaining a total of 29 different topics
across both cities. Results showed similarities in the majority of topics for
both cities, reflecting similar interests and concerns among the population of
Rio de Janeiro and São Paulo. Nevertheless, some specific topics are more
predominant in one of the cities.
Automating the IEEE std. 1500 compliance verification for embedded cores
The IEEE 1500 standard for embedded core testing proposes a very effective solution for testing modern systems-on-chip (SoCs). It proposes a flexible hardware test wrapper architecture, together with a core test language (CTL) used to describe the implemented wrapper functionalities. Several IP providers have already announced compliance in both existing and future design blocks. In this paper we address the challenge of guaranteeing the compliance of a wrapper architecture and its CTL description with the IEEE std. 1500. This is a mandatory step to fully trust the wrapper functionalities in applying the test sequences to the core. The proposed solution aims at implementing a verification framework allowing core providers and/or integrators to automatically verify the compliance of their products (sold or purchased) with the standard.
Distilling Information Reliability and Source Trustworthiness from Digital Traces
Online knowledge repositories typically rely on their users or dedicated
editors to evaluate the reliability of their content. These evaluations can be
viewed as noisy measurements of both information reliability and information
source trustworthiness. Can we leverage these noisy evaluations, often biased,
to distill a robust, unbiased and interpretable measure of both notions?
In this paper, we argue that the temporal traces left by these noisy
evaluations give cues on the reliability of the information and the
trustworthiness of the sources. Then, we propose a temporal point process
modeling framework that links these temporal traces to robust, unbiased and
interpretable notions of information reliability and source trustworthiness.
Furthermore, we develop an efficient convex optimization procedure to learn the
parameters of the model from historical traces. Experiments on real-world data
gathered from Wikipedia and Stack Overflow show that our modeling framework
accurately predicts evaluation events, provides an interpretable measure of
information reliability and source trustworthiness, and yields interesting
insights about real-world events. (Comment: Accepted at the 26th World Wide Web Conference, WWW-17.)
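The temporal-trace idea above can be illustrated with a self-exciting temporal point process, where each past evaluation event raises the intensity of future events and that excitation decays over time. This is a generic Hawkes-style sketch under assumed parameter names (`mu`, `alpha`, `beta`) and toy event times, not the authors' actual model.

```python
# Sketch of a conditional intensity for a self-exciting point process:
# lambda(t) = mu + alpha * sum_i exp(-beta * (t - t_i)) over past events t_i.
# All parameter values and event times here are illustrative assumptions.
import math

def intensity(t, events, mu=0.5, alpha=0.8, beta=1.0):
    """Baseline rate mu plus exponentially decaying excitation from past events."""
    return mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events if ti < t)

events = [1.0, 2.5, 2.7]           # observed evaluation times
print(intensity(3.0, events))      # high: shortly after a burst of events
print(intensity(10.0, events))     # low: excitation has decayed toward mu
```

Fitting such a model to historical traces (as the paper does via convex optimization) amounts to choosing the parameters that maximize the likelihood of the observed event times.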
A System for Deduction-based Formal Verification of Workflow-oriented Software Models
The work concerns the formal verification of workflow-oriented software models
using a deductive approach. The formal correctness of a model's behaviour is
considered. Manually building logical specifications, understood as a set of
temporal logic formulas, is a significant obstacle for an inexperienced user
applying the deductive approach. A system, and its architecture, for the
deduction-based verification of workflow-oriented models is proposed. The
inference process is based on the semantic tableaux method, which has some
advantages over traditional deduction strategies. An algorithm for the
automatic generation of logical specifications is proposed. The generation
procedure is based on predefined workflow patterns for BPMN, which is the
standard and dominant notation for the modeling of business processes. The
main idea of the approach is to consider patterns, defined in terms of
temporal logic, as a kind of (logical) primitives which enable the
transformation of models into temporal logic formulas constituting a logical
specification. Automation of the generation process is crucial for bridging
the gap between the intuitiveness of deductive reasoning and the difficulty of
its practical application when logical specifications are built manually. This
approach goes some way towards supporting, and hopefully enhancing our
understanding of, the deduction-based formal verification of workflow-oriented
models. (Comment: International Journal of Applied Mathematics and Computer Science.)
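The semantic tableaux method mentioned above proves a formula by systematically expanding it and closing branches that contain a contradiction. The toy checker below illustrates the idea for classical propositional logic only; the paper's system works with temporal logic, so this is a simplified sketch of the method, not the system's algorithm. The tuple-based formula encoding is an assumption for illustration.

```python
# Tiny semantic tableaux satisfiability sketch for propositional logic.
# Formulas: ("var", name), ("not", f), ("and", f, g), ("or", f, g).
def satisfiable(branch):
    # Expand the first non-literal formula found on the branch.
    for i, f in enumerate(branch):
        rest = branch[:i] + branch[i + 1:]
        if f[0] == "and":                       # alpha rule: add both conjuncts
            return satisfiable(rest + [f[1], f[2]])
        if f[0] == "or":                        # beta rule: split the branch
            return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
        if f[0] == "not" and f[1][0] != "var":  # push negation inward
            g = f[1]
            if g[0] == "not":                   # double negation
                return satisfiable(rest + [g[1]])
            if g[0] == "and":                   # de Morgan
                return satisfiable(rest + [("or", ("not", g[1]), ("not", g[2]))])
            if g[0] == "or":
                return satisfiable(rest + [("and", ("not", g[1]), ("not", g[2]))])
    # Only literals remain: the branch is open iff no p and not-p coexist.
    pos = {f[1] for f in branch if f[0] == "var"}
    neg = {f[1][1] for f in branch if f[0] == "not"}
    return not (pos & neg)

p, q = ("var", "p"), ("var", "q")
print(satisfiable([("and", p, ("not", p))]))    # closed tableau: unsatisfiable
print(satisfiable([("or", p, q), ("not", p)]))  # open branch with q: satisfiable
```

A formula is valid exactly when its negation yields a fully closed tableau, which is how such a checker is used as a proof procedure.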
Making formal verification amenable to real-time UML practitioners
TTool, a real-time UML toolkit, offers user-friendly interfaces to formal
verification techniques such as reachability analysis, observer-based analysis, and the automatic generation of traceability matrices. Those techniques are surveyed in the paper.
Verifying Recursive Active Documents with Positive Data Tree Rewriting
This paper proposes a data tree-rewriting framework for modeling evolving
documents. The framework is close to Guarded Active XML, a platform used for
handling XML repositories evolving through web services. We focus on automatic
verification of properties of evolving documents that can contain data from an
infinite domain. We establish the boundaries of decidability, and show that
verification of a positive fragment that can handle recursive service
calls is decidable. We also consider bounded model checking in our data
tree-rewriting framework and show that it is NEXPTIME-complete.
A Framework to Specify System Requirements using Natural interpretation of UML/MARTE diagrams
The ever-increasing design complexity of embedded systems is constantly pressing the demand for more abstract design levels and possible methods for automatic verification and synthesis. Transforming a text-based user requirements document into semantically sound models is always difficult and error-prone, as these requirements are mostly vague and improperly documented. This paper presents a framework to specify textual requirements graphically, in standard modeling formalisms like UML and MARTE, in the form of temporal and logical patterns. The underlying formal semantics of these graphical models eliminate ambiguity in specifications and enable automatic design verification at different abstraction levels using these patterns. The semantics of these operators/patterns are presented formally as state automata, and a comparison is made to the existing CCSL relational operators. To reap the benefits of MDE, a software plugin, TemLoPAC, is presented as part of the framework to transform the graphical patterns into CCSL- and Verilog-based observers.
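A temporal pattern of the kind this abstract describes can be read as a small observer automaton that watches a trace of events and flags violations. The sketch below encodes one common pattern, precedence ("q must not occur before p"), as such an observer; the state names and event alphabet are illustrative assumptions, not the paper's operators.

```python
# Hedged sketch: a precedence pattern ("q only after p") as an observer
# automaton over an event trace. Events and states are toy assumptions.
def precedence_observer(trace):
    state = "waiting_p"             # q is forbidden until p has been seen
    for event in trace:
        if state == "waiting_p":
            if event == "q":
                return False        # violation: q occurred before p
            if event == "p":
                state = "p_seen"    # precedence established; q now allowed
    return True                     # no violation observed on this trace

print(precedence_observer(["a", "p", "q"]))  # pattern satisfied
print(precedence_observer(["q", "p"]))       # pattern violated
```

In an observer-based verification flow, such an automaton is composed with the design model and a verification tool checks whether its violation state is reachable.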