Test Generation Based on CLP
Functional ATPGs based on simulation are fast, but they generally fail to cover corner cases and cannot prove untestability. Conversely, functional ATPGs that exploit formal methods are exhaustive and do cover corner cases, but they tend to suffer from the state explosion problem when applied to large designs. In this context, we have defined a functional ATPG that relies on the joint use of pseudo-deterministic simulation and Constraint Logic Programming (CLP) to generate high-quality test sequences for complex problems. The advantages of both simulation-based and static verification techniques are thus preserved, while their respective drawbacks are limited. In particular, CLP, a form of constraint programming in which logic programming is extended with concepts from constraint satisfaction, is well suited to being combined with simulation: information learned during design exploration by simulation can be exploited to guide the search of a CLP solver towards DUV areas not yet covered. CLP techniques are therefore employed in several phases of the test generation procedure.
The ATPG framework is composed of three functional ATPG engines working on three different models of the same DUV: the hardware description language (HDL) model of the DUV, a set of concurrent EFSMs extracted from the HDL description, and a set of logic constraints modeling the EFSMs. The EFSM paradigm has been selected because it allows a compact representation of the DUV state space, which limits the state explosion problem typical of more traditional FSMs. The first engine is random-based, the second is transition-oriented, and the last is fault-oriented.
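As a rough sketch (the names below are illustrative, not the tool's actual data structures), the model the engines operate on can be pictured as an EFSM whose transitions carry a guard over registers and primary inputs plus an update function:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

State = str
Config = Dict[str, int]   # register (or primary-input) name -> current value

@dataclass
class Transition:
    src: State
    dst: State
    # enabling function: may the move from src to dst fire for these registers/PIs?
    guard: Callable[[Config, Config], bool]
    # update function: next register values after the transition fires
    update: Callable[[Config, Config], Config]

@dataclass
class EFSM:
    initial: State
    transitions: List[Transition] = field(default_factory=list)

    def enabled_from(self, state: State, regs: Config, pis: Config) -> List[Transition]:
        """Transitions leaving `state` whose guard holds for the given registers and PIs."""
        return [t for t in self.transitions if t.src == state and t.guard(regs, pis)]
```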
The test generation is guided by means of transition coverage and fault coverage. In particular, 100% transition
coverage is desired as a necessary condition for fault
detection, while the bit coverage functional fault model
is used to evaluate the effectiveness of the generated test
patterns by measuring the related fault coverage.
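Both metrics reduce to simple ratios over the EFSM transitions and the bit-coverage fault list; a minimal sketch (helper names are hypothetical) is:

```python
from typing import Set, Tuple

def transition_coverage(traversed: Set[Tuple[str, str]],
                        all_transitions: Set[Tuple[str, str]]) -> float:
    """Fraction of EFSM transitions fired at least once by the generated sequences."""
    return len(traversed & all_transitions) / len(all_transitions)

def fault_coverage(detected: Set[str], fault_list: Set[str]) -> float:
    """Fraction of bit-coverage faults (bits of the DUV description stuck at 0/1)
    detected by the generated test patterns."""
    return len(detected & fault_list) / len(fault_list)
```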
A random engine is first used to explore the DUV state
space by performing a simulation-based random walk. This
allows us to quickly fire easy-to-traverse (ETT) transitions
and, consequently, to quickly cover easy-to-detect (ETD)
faults. However, the majority of hard-to-traverse (HTT) transitions generally remain uncovered.
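Reusing the EFSM sketch above, the random engine can be pictured as the following loop (an illustration, not the actual implementation):

```python
import random
from typing import Callable, Set, Tuple

def random_walk(efsm: "EFSM", regs: "Config",
                gen_random_pis: Callable[[], "Config"], steps: int) -> Set[Tuple[str, str]]:
    """Simulation-based random walk: at every step draw a random primary-input
    vector, fire one randomly chosen enabled transition, and record the (src, dst)
    pairs that were traversed (the easy-to-traverse transitions)."""
    covered: Set[Tuple[str, str]] = set()
    state = efsm.initial
    for _ in range(steps):
        pis = gen_random_pis()
        candidates = efsm.enabled_from(state, regs, pis)
        if not candidates:
            continue                      # no transition enabled: retry with new PIs
        t = random.choice(candidates)
        regs = t.update(regs, pis)
        state = t.dst
        covered.add((t.src, t.dst))
    return covered
```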
Thus, a transition-oriented engine is applied to
cover the remaining HTT transitions by exploiting a
learning/backjumping-based strategy.
The ATPG works on a special kind of EFSM, called SSEFSM, whose transitions present the most uniformly distributed probability of being activated and which can be effectively integrated with CLP, since it allows the ATPG to invoke the constraint solver when moving between EFSM states. A CLP-based strategy is adopted to deterministically generate test vectors that satisfy the guards of the EFSM transitions selected for traversal. Given a transition of the SSEFSM, the solver is required to generate suitable values for the PIs that enable the SSEFSM to traverse that transition.
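In spirit, this guard-solving step looks like the enumeration below; the real engine hands the guard to a CLP solver rather than brute-forcing the PI domains (names are illustrative):

```python
from itertools import product
from typing import Callable, Dict, List, Optional

def solve_guard(guard: Callable[["Config", "Config"], bool],
                regs: "Config",
                pi_domains: Dict[str, List[int]]) -> Optional["Config"]:
    """Search for a primary-input assignment that satisfies the guard of the
    selected transition under the current register configuration. A CLP solver
    would do this deterministically via constraint propagation; plain
    enumeration is used here only to make the intent concrete."""
    names = list(pi_domains)
    for values in product(*(pi_domains[n] for n in names)):
        pis = dict(zip(names, values))
        if guard(regs, pis):
            return pis
    return None   # the guard cannot be enabled with the current registers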
Moreover, backjumping, also known as non-chronological backtracking, is a backtracking strategy that rolls back from an unsuccessful situation directly to the cause of the failure. Thus, when a transition whose guard depends on previously set registers cannot be traversed, the transition-oriented engine deterministically backjumps to the source of the failure. It then modifies the EFSM configuration to satisfy the condition on the registers and returns to the target state to activate the transition.
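A minimal sketch of the backjump step, assuming a caller-supplied `writes` function that reports which registers a transition's update assigns (both names are hypothetical):

```python
from typing import Callable, List, Set

def backjump_target(trace: List["Transition"],
                    guard_deps: Set[str],
                    writes: Callable[["Transition"], Set[str]]) -> int:
    """Non-chronological backtracking: instead of undoing only the last move,
    jump straight back to the most recent transition in the traversed trace
    whose update wrote one of the registers the failing guard depends on,
    i.e. the actual cause of the failure. Returns the trace index to resume from."""
    for i in range(len(trace) - 1, -1, -1):
        if writes(trace[i]) & guard_deps:
            return i
    return 0   # no writer found: restart from the initial configuration
```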
The transition-oriented engine generally allows us to achieve 100% transition coverage. However, 100% transition coverage does not guarantee that all DUV corner cases are explored, so some hard-to-detect (HTD) faults can escape detection and prevent 100% fault coverage from being reached. Therefore, the CLP-based fault-oriented engine is finally applied to target the remaining HTD faults.
The CLP solver is used to deterministically search for sequences that propagate the HTD faults observed, but not detected, by the random and transition-oriented engines. The fault-oriented engine needs a CLP-based representation of the DUV and a set of search functions to generate test sequences. The CLP-based representation is automatically derived from the SSEFSM models according to defined rules that follow the syntax of the ECLiPSe CLP solver. This is not a trivial task, since modeling the evolution in time of an EFSM with logic constraints is quite different from modeling the same behavior in a traditional hardware description language. First, the concept of time steps is introduced, which is required to model the SSEFSM evolution over time in CLP. Then, logical variables and constraints are defined to represent the enabling and update functions of the SSEFSM.
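The actual flow encodes this in ECLiPSe; purely as an illustration of the time-step idea, the sketch below unrolls a single invented register over a fixed number of steps using the Z3 solver as a stand-in constraint engine (assumes the `z3-solver` Python package; the register, input, and guard are made up for the example):

```python
# Time-step unrolling: one logic variable per register per step, plus one per
# primary input per step; constraints tie step t to step t+1 via the update
# function, and the enabling function of the target transition is imposed at
# the final step. Z3 is used here only as a stand-in for a CLP solver.
from z3 import Int, Solver, And, sat

K = 4                                            # number of unrolled time steps
cnt = [Int(f"cnt_{t}") for t in range(K + 1)]    # register value at each step
inc = [Int(f"inc_{t}") for t in range(K)]        # primary input at each step

s = Solver()
s.add(cnt[0] == 0)                               # initial register configuration
for t in range(K):
    s.add(And(inc[t] >= 0, inc[t] <= 1))         # PI domain constraint
    s.add(cnt[t + 1] == cnt[t] + inc[t])         # update function, step t -> t+1
s.add(cnt[K] == 3)                               # enabling function at the last step

if s.check() == sat:
    m = s.model()
    seq = [m.eval(inc[t], model_completion=True).as_long() for t in range(K)]
    print("input sequence enabling the guard:", seq)
```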
Formal tools that exhaustively search for a solution frequently run out of resources when the state space to be analyzed is too large. The same happens to the CLP solver when it is asked to find a propagation sequence on large sequential designs. Therefore, we have defined a set of strategies that prune the search space and manage the complexity problem for the solver.
Fast Ultrahigh-Density Writing of Low Conductivity Patterns on Semiconducting Polymers
The exceptional interest in overcoming the limitations of data storage, molecular electronics, and optoelectronics has promoted the development of an ever-increasing number of techniques for patterning polymers at the micro- and nanoscale. Most of them rely on Atomic Force Microscopy to thermally or electrostatically induce mass transport, thereby creating topographic features. Here we show that the mechanical interaction of the tip of the Atomic Force Microscope with the surface of a class of conjugated polymers produces a local increase of molecular disorder, inducing a localized lowering of the semiconductor conductivity that is not associated with detectable modifications of the surface topography. This phenomenon allows the swift production of low-conductivity patterns on the polymer surface at an unprecedented speed exceeding 20 ; the paths have a resolution on the order of the tip size (20 nm) and are detected by a Conducting Atomic Force Microscopy tip in
the conductivity maps.
A design methodology for compositional high-level synthesis of communication-centric SoCs
Systems-on-chip are increasingly designed at the system level by combining synthesizable IP components that operate concurrently while interacting through communication channels. CAD-tool vendors support this System-Level Design approach with high-level synthesis tools and libraries of interface primitives implementing the communication protocols. These interfaces absorb timing differences in the hardware-component implementations, thus enabling compositional design. However, they also introduce new challenges in terms of functional correctness and performance optimization. We propose a methodology that combines performance analysis and optimization algorithms to automatically address the issues that SoC designers may accidentally introduce when assembling components that are specified at the system level.
Micro-scale UHI risk assessment on the heat-health nexus within cities by looking at socio-economic factors and built environment characteristics: The Turin case study (Italy)
Today the most substantial threats facing cities relate to the impacts of climate change. Extreme temperatures, such as heat waves, and the occurrence of the Urban Heat Island (UHI) phenomenon present major challenges for urban planning and design. Climate deterioration exacerbates the already existing weaknesses in social systems, which have been created by changes such as population increases and urban sprawl. Despite numerous attempts by researchers to assess the risks associated with the heat-health nexus in urban areas, no common metrics have been defined yet. The objective of this study, therefore, is to provide an empirical example of a flexible and replicable methodology to estimate the micro-scale UHI risks within an urban context that takes into account all the relevant elements of the heat-health nexus. For this purpose, the city of Turin has been used as a case study. The methodological approach adopted is based on risk assessment guidelines suggested and approved by the most recent scientific literature. The risk framework presented here produces a quantitative estimate for each census tract within the city based on the interaction of three main factors: hazard, exposure, and vulnerability. Corresponding georeferenced maps for each indicator have been provided to increase local knowledge of the spatial distribution of vulnerability drivers. The proposed methodology and the related findings represent an initial stage of the urban risk investigation within the case study. This will include participatory processes with local policymakers and health stakeholders with a view to guiding the local planning agenda of climate change adaptation and resilience strategies in the City of Turin.
System-Level Optimization of Accelerator Local Memory for Heterogeneous Systems-on-Chip
In modern system-on-chip architectures, specialized accelerators are increasingly used to improve performance and energy efficiency. The growing complexity of these systems requires the use of system-level design methodologies featuring high-level synthesis (HLS) for generating these components efficiently. Existing HLS tools, however, have limited support for the system-level optimization of memory elements, which typically occupy most of the accelerator area. We present a complete methodology for designing the private local memories (PLMs) of multiple accelerators. Based on the memory requirements of each accelerator, our methodology automatically determines an area-efficient architecture for the PLMs to guarantee performance and reduce the memory cost based on technology-related information. We implemented a prototype tool, called Mnemosyne, that embodies our methodology within a commercial HLS flow. We designed 13 complex accelerators for selected applications from two recently released benchmark suites (Perfect and CortexSuite). With our approach we are able to reduce the memory cost of single accelerators by up to 45%. Moreover, when reusing memory IPs across accelerators, we achieve area savings that range between 17% and 55% compared to the case where the PLMs are designed separately.
Neural network accelerator for quantum control
Efficient quantum control is necessary for practical quantum computing
implementations with current technologies. Conventional algorithms for
determining optimal control parameters are computationally expensive, largely
excluding them from use outside of simulation. Existing hardware solutions
structured as lookup tables are imprecise and costly. By designing a machine
learning model to approximate the results of traditional tools, a more
efficient method can be produced. Such a model can then be synthesized into a
hardware accelerator for use in quantum systems. In this study, we demonstrate
a machine learning algorithm for predicting optimal pulse parameters. This
algorithm is lightweight enough to fit on a low-resource FPGA and perform
inference with a latency of 175 ns and a pipeline interval of 5 ns with 0.99
gate fidelity. In the long term, such an accelerator could be used near quantum
computing hardware where traditional computers cannot operate, enabling quantum
control at a reasonable cost at low latencies without incurring large data
bandwidths outside of the cryogenic environment.