Test Generation Based on CLP
Functional ATPGs based on simulation are fast, but they are generally unable to cover corner cases and cannot prove untestability. Conversely, functional ATPGs that exploit formal methods, being exhaustive, do cover corner cases, but they tend to suffer from the state explosion problem when applied to large designs.
In this context, we have defined a functional ATPG that relies on the joint use of pseudo-deterministic simulation and Constraint Logic Programming (CLP) to generate high-quality test sequences for complex problems. Thus, the advantages of both simulation-based and static verification techniques are preserved, while their respective drawbacks are limited. In particular, CLP, a form of constraint programming in which logic programming is extended with concepts from constraint satisfaction, is well suited to joint use with simulation. In fact, information learned during design exploration by simulation can be effectively exploited to guide the search of a CLP solver towards DUV areas not yet covered. CLP techniques are employed in several phases of the test generation procedure.
The ATPG framework is composed of three functional
ATPG engines working on three different models of the
same DUV: the hardware description language (HDL)
model of the DUV, a set of concurrent EFSMs extracted
from the HDL description, and a set of logic constraints
modeling the EFSMs. The EFSM paradigm has been selected
since it allows a compact representation of the DUV
state space that limits the state explosion problem typical
of more traditional FSMs. The first engine is random-based, the second is transition-oriented, and the last is fault-oriented.
The test generation is guided by transition coverage and fault coverage. In particular, 100% transition coverage is desired as a necessary condition for fault detection, while the bit-coverage functional fault model is used to evaluate the effectiveness of the generated test patterns by measuring the related fault coverage.
A random engine is first used to explore the DUV state
space by performing a simulation-based random walk. This
allows us to quickly fire easy-to-traverse (ETT) transitions
and, consequently, to quickly cover easy-to-detect (ETD)
faults. However, the majority of hard-to-traverse (HTT) transitions generally remain uncovered.
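The effect of the random phase can be pictured with a toy EFSM in which one transition has a very narrow guard. The states, guards, and input width below are invented for illustration, not taken from the actual framework:

```python
import random

# Hypothetical EFSM: transitions are (source, destination, guard) triples,
# where the guard is a predicate over the primary-input (PI) value.
TRANSITIONS = [
    ("S0", "S1", lambda pi: True),               # ETT: always fires
    ("S1", "S0", lambda pi: pi % 2 == 0),        # ETT: fires half the time
    ("S1", "S2", lambda pi: pi == 0xDEADBEEF),   # HTT: a single magic value
]

def random_walk(steps, seed=0):
    """Fire random PIs and record which transitions were traversed."""
    rng = random.Random(seed)
    state, covered = "S0", set()
    for _ in range(steps):
        pi = rng.randrange(2**32)
        for idx, (src, dst, guard) in enumerate(TRANSITIONS):
            if src == state and guard(pi):
                covered.add(idx)
                state = dst
                break
    return covered

covered = random_walk(10_000)
# The ETT transitions are covered almost immediately, while the HTT guard
# pi == 0xDEADBEEF is essentially never hit by random inputs.
```

This is precisely the gap the deterministic engines close: a solver can derive the magic value directly from the guard, while random simulation practically cannot.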
Thus, a transition-oriented engine is applied to
cover the remaining HTT transitions by exploiting a
learning/backjumping-based strategy.
The ATPG works on a special kind of EFSM, called SSEFSM, whose transitions present the most uniformly distributed probability of being activated, and which can be effectively integrated with CLP, since it allows the ATPG to invoke the constraint solver when moving between EFSM states.
A CLP-based strategy is adopted to deterministically generate test vectors that satisfy the guards of the EFSM transitions selected to be traversed. Given a transition of the SSEFSM, the solver is required to generate suitable values for the primary inputs (PIs) that enable the SSEFSM to traverse that transition.
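In plain Python rather than ECLiPSe, the solver's contract at this step can be sketched as a deterministic search for PI values satisfying a guard; the guard, variable names, and domains below are invented:

```python
from itertools import product

def solve_guard(guard, domains):
    """Return the first PI assignment satisfying the guard, or None if
    the guard is unsatisfiable over the given finite domains."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if guard(assignment):
            return assignment
    return None

# Guard of a hypothetical transition over two PIs: a + b == 10 and a > b.
guard = lambda v: v["a"] + v["b"] == 10 and v["a"] > v["b"]
sol = solve_guard(guard, {"a": range(16), "b": range(16)})
# First satisfying assignment in enumeration order: {"a": 6, "b": 4}
```

A real CLP solver propagates constraints instead of enumerating, which is what lets the approach scale beyond toy domains; the point here is only the deterministic contract: either a witness is returned or unsatisfiability over the domains is proven.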
Moreover, backjumping, also known as nonchronological backtracking, is a backtracking strategy that rolls back from an unsuccessful situation directly to the cause of the failure. Thus, the transition-oriented engine deterministically backjumps to the source of the failure when a transition whose guard depends on previously set registers cannot be traversed. It then modifies the EFSM configuration to satisfy the condition on the registers and returns to the target state to activate the transition.
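Backjumping can be sketched on a small constraint-satisfaction problem; the variables, domains, and constraints below are invented, and this conflict-set formulation is only one simple way to realize nonchronological backtracking:

```python
def backjump_search(order, domains, constraints):
    """constraints: list of (scope, predicate); scope is a set of var names."""
    assignment = {}

    def violated_scope():
        # Scope of the first fully assigned, violated constraint, or None.
        for scope, pred in constraints:
            if scope <= assignment.keys() and not pred(assignment):
                return set(scope)
        return None

    def search(level):
        if level == len(order):
            return dict(assignment), set()
        var = order[level]
        conflicts = set()
        for value in domains[var]:
            assignment[var] = value
            scope = violated_scope()
            if scope is not None:
                conflicts |= scope - {var}
                del assignment[var]
                continue
            sol, child_conf = search(level + 1)
            if sol is not None:
                return sol, set()
            del assignment[var]
            if var not in child_conf:
                return None, child_conf   # backjump: our value is irrelevant
            conflicts |= child_conf - {var}
        return None, conflicts

    sol, _ = search(0)
    return sol

sol = backjump_search(
    order=["x", "y", "z"],
    domains={v: range(3) for v in ["x", "y", "z"]},
    constraints=[
        ({"x", "y"}, lambda a: a["x"] != a["y"]),
        ({"x", "z"}, lambda a: a["x"] + a["z"] == 4),
    ],
)
```

In the example, x + z == 4 is unsatisfiable for x < 2, so when every value of z fails the search jumps straight back over y (which plays no role in the conflict) to retry x, instead of chronologically enumerating the remaining values of y.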
The transition-oriented engine generally allows us to achieve 100% transition coverage. However, 100% transition coverage does not guarantee that all DUV corner cases are explored; thus, some hard-to-detect (HTD) faults can escape detection, preventing the achievement of 100% fault coverage. Therefore, the CLP-based fault-oriented engine is finally applied to target the remaining HTD faults.
The CLP solver is used to deterministically search for sequences that propagate the HTD faults observed, but not detected, by the random and transition-oriented engines. The fault-oriented engine needs a CLP-based representation of the DUV and some search functions to generate test sequences. The CLP-based representation is automatically derived from the SSEFSM models according to defined rules that follow the syntax of the ECLiPSe CLP solver. This is not a trivial task, since modeling the evolution in time of an EFSM with logic constraints is quite different from modeling the same behavior with a traditional HW description language. First, the concept of time steps, required to model the SSEFSM evolution over time via CLP, is introduced. Then, the study deals with the modeling of logical variables and constraints to represent the enabling and update functions of the SSEFSM.
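The time-step idea can be illustrated by unrolling a toy machine over k steps and searching for an input sequence that drives a register to a target value. The saturating-counter update function and the domains are invented, and the real encoding uses ECLiPSe constraints rather than explicit enumeration:

```python
from itertools import product

def step(reg, pi):
    """One time step: the enabling function (pi > 0) gates the update
    function (a saturating add), mimicking an EFSM transition."""
    return min(reg + pi, 7) if pi > 0 else reg

def find_sequence(target, k, pi_domain=range(4)):
    """Unroll k time steps and return a PI sequence reaching `target`."""
    for seq in product(pi_domain, repeat=k):
        reg = 0                      # register value at time step 0
        for pi in seq:               # reg_{t+1} = step(reg_t, pi_t)
            reg = step(reg, pi)
        if reg == target:
            return seq
    return None

seq = find_sequence(target=7, k=3)   # a three-step propagation sequence
```

In the CLP encoding, each time step gets its own copy of the logical variables, and the update functions become constraints linking step t to step t+1; the solver then reasons about all steps at once instead of simulating each candidate sequence.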
Formal tools that exhaustively search for a solution frequently run out of resources when the state space to be analyzed is too large. The same happens to the CLP solver when it is asked to find a propagation sequence on large sequential designs. Therefore, we have defined a set of strategies that prune the search space and keep the complexity manageable for the solver.
Worker education and oral health: an outreach initiative
Objective: To report an educational and preventive oral health action with construction workers in the municipality of Ponta Grossa/PR, carried out at the workplace itself with 27 workers. Report: The action was conducted by the outreach project 'Nós na Rede', devoted to educational practices in oral health and led by undergraduate Dentistry students. It lasted four hours and employed different educational methods and resources, such as banners with explanatory illustrations, oral macromodels and oral hygiene instruments for guidance, a conversation circle on health practices in the work environment, and a question-and-answer session; the booklet "Você sabia? 10 curiosidades sobre saúde bucal" and the pamphlet "Saúde bucal do trabalhador" were handed out and explained. At the end, an assessment of oral health status and a blood pressure measurement were performed. Conclusions: The action highlighted the workers' lack of information on preventive and curative aspects of oral health, as well as the relevance of expanding outreach settings for workers' oral health, thereby training students who are more critical of and sensitive to the health needs of the working class while acting on and benefiting health prevention and education for this population.
A functional ATPG as a bridge between functional verification and testing
The continuous growth in the complexity of embedded systems, combined with ever tighter time-to-market requirements, makes verification one of the most delicate and costly phases of design production. A verification phase is necessary after every step of the design flow to avoid the propagation of errors across abstraction levels. The first important phase is the functional validation of the design, in order to detect design errors. Afterwards, the produced design must be checked against its requirements: a low-level testing phase must therefore be carried out to identify production defects in the hardware components.
The research activity aims at defining a methodology for generating the production test of a device that reuses, as far as possible, all the test plans developed during its functional design. The central and innovative idea of the thesis is that techniques, and a large part of the results, can be shared between the functional validation of a device, needed to verify its functional correctness, and the production test, needed to verify its correct realization in silicon. All this has produced an automatic test pattern generator (ATPG) that, although working at the functional level, can orient its results towards the production test, anticipating the identification of the parts of the design that will be hard to test and simplifying the actual test generation phase.
The ATPG works on functional descriptions of the design and is conceived to generate sequences able to identify high-level faults. These sequences can then be re-applied at the gate level, since their generation is oriented to a high-level fault model, bit coverage, closely related to the gate-level stuck-at fault model. In this context, part of the thesis work has been devoted to fault modeling and to the definition of a methodology for mapping high-level faults onto gate-level faults.
The definition of a methodology to quantify the quality of the generated sequences was the first step, given the need for an adequate instrument to measure the effectiveness of the test sequences.
In this initial phase, the relationships between functional error models and fault models were investigated. The goal was to formally identify which production faults can be related to design errors, and how a functional test can be used to detect design errors while simultaneously identifying production faults. Exact relations and approximate relations, depending on the functional error model, were identified. This made it possible to restrict the model to the faults that have an equivalent at the level of physical defects, and hence of production faults. In particular, a methodology was defined to remove redundant faults from the list of high-level faults injected into a design description. Given an optimized fault list, a strategy was then defined to map the high-level faults onto gate-level faults, to which the test sequences generated at the high level were then applied.
In a second phase, the computational models potentially useful for test generation at the functional level were analyzed, and attention settled on extended finite state machines (EFSMs), a model that can be effectively applied to any automatically synthesizable description of a digital device. A methodology was defined to extract a particular kind of EFSM from the functional description of the design, and a manipulation technique oriented to ease of traversal was developed for test sequence generation, which made it possible to build an ATPG that identifies test sequences for most of the modeled functional errors. The ATPG exploits learning, backjumping and constraint logic programming (CLP) techniques, developed to navigate the EFSM state space uniformly and to test hard-to-test faults.
To maximize the coverage of these errors, formal techniques based on SAT and CLP were then studied. In particular, a methodology was defined to generate distinguishing sequences for EFSMs. This study led to the development of a CLP-based engine for fault-oriented test generation. This work presents an innovative method for modeling EFSMs in CLP, together with optimization strategies to manage the complexity typical of these formal techniques.
Embedded systems technology powers many of today's innovations and products.
Rapid technological advancement fuels the increasing chip complexity, which in turn enables the latest round of products. Embedded systems touch many aspects of everyday life, from the pocket-sized cell phone and digital camera to the high-end server that searches an online database, verifies credit card information, and sends the order to the warehouse for immediate delivery. Also, expectations for these chips grow at an equal rate, despite the additional complexity. For example, critical applications like chips that monitor the safety processes in cars require that the chip not fail during normal use of the vehicle. Also, it is not acceptable for a user to be denied access to an online brokerage account because the server is down.
Thus, an enormous amount of engineering goes into each of these chips, whether it is a microprocessor, a memory device or an entire system on a chip. All types of chips have a set of challenges that engineers must solve for the final product to be successful in the marketplace.
The verification flow has become a bottleneck in the development of today's digital systems. Chip complexity and market competitiveness have increased to the point that design teams are required to spend approximately 70% of their effort finding the bugs that lurk in their designs. In particular, two main phases of the verification flow are testing [1] and functional verification [2]. The latter aims to ensure that the design satisfies its specification before manufacturing, by detecting and removing design errors. Testing focuses on the detection of production defects as soon as chips come off the manufacturing line. Even though testing and functional verification are often grouped together, the two disciplines have little in common. A chip that successfully runs through testing may still add one to one and get a result of three if the design had poor functional verification. Testing only confirms that the manufactured chip is equivalent to the circuit design specified to the manufacturing process. It makes no statement about the logical functionality of the chip itself.
However, the design teams dealing with the verification of a system have to handle three constraints: schedule, cost and quality.
Because the success of digital systems depends heavily on hitting the marketplace at the right time, scheduling has become imperative. The use of automatic tools reduces both the verification time and the probability of committing errors. A valid solution is represented by dynamic verification, which exploits simulation-based techniques and automatic test pattern generators (ATPGs) to generate the required test sequences.
Customers expect delivered products to meet a standard of quality. This is especially true of critical applications. Furthermore, if the marketplace perceives that a product is of poor quality, it can have a devastating effect on the company.
Another critical constraint is cost. Cost drastically influences the different verification phases. The cost of undetected bugs grows exponentially over time. If a bug is detected early during verification, it is less expensive to fix: the designer needs only to rework the high-level design description and verify that the update fixed the original problem. A bug found in a system test, however, may cost hundreds of thousands of dollars: hardware must be refabricated, and there are additional time-to-market costs. Finally, one of the most costly types of bugs is one that the customer discovers. This not only incurs warranty replacement costs but may also tarnish the image of the company or its brand of products.
Functional verification is the biggest lever affecting all three constraints. A chip can be produced early if the verification team is able to remove design errors efficiently. The cost of re-fabricating a chip multiple times can drive development expenses to an unacceptable level and negatively affect the product schedule. Functional verification reduces the number of re-spins and removes latent problems, also avoiding quality problems in the developed products.
Another advantage of functional verification is that designers are able to work at a higher abstraction level, where design descriptions are more tractable than gate-level ones. On the other hand, efficient logic-level ATPGs for digital systems are available and already state of the art, while high-level functional ATPGs are still in a prototyping phase.
This thesis defines a methodology that exploits the positive aspects of both functional verification and logic-level testing, while also providing benefits of testing at both of these two verification levels.
Teaching embedded software design with radSUITE
This paper describes the radSUITE starter kit for embedded SW design, developed by three small-medium enterprises (SMEs) working on the development of embedded applications under the scientific guidance of the University of Verona. Even though radSUITE was primarily developed for commercial purposes, its features proved to be particularly well suited for educational purposes in university courses as well as in industrial training sessions related to the modelling and verification of embedded SW.
Model-Driven Design and Validation of Embedded Software
This paper presents a model-based framework for designing and validating embedded software (ESW). The design infrastructure is a rapid-application-development suite for ESW, i.e., radCASE, which provides the user with an off-the-shelf design environment based on the model-driven paradigm. The validation infrastructure, i.e., radCHECK, is based on a Property Editor. This editor simplifies the definition of PSL properties by exploiting PSL-based templates that can be automatically compiled into executable checkers by the integrated Checker Generator engine. Besides, radCHECK comprises a testcase generation infrastructure, i.e., Ulisse, which is based on a corner-case-oriented concolic approach for ESW and is thus able to simulate the ESW and the checkers with high-coverage testcases.