Silicon compilation
Silicon compilation is a term used for many different purposes. In this paper we define silicon compilation as a mapping from some higher-level description into layout. We define the basic issues in structural and behavioral silicon compilation and some possible solutions to those issues. Finally, we define the concept of an intelligent silicon compiler, in which the compiler evaluates the quality of the generated design and attempts to improve it if it is not satisfactory.
The earlier the better: a theory of timed actor interfaces
Programming embedded and cyber-physical systems requires attention not only to functional behavior and correctness, but also to non-functional aspects, specifically timing and performance. A structured, compositional, model-based approach based on stepwise refinement and abstraction techniques can support the development process, increase its quality and reduce development time through automation of synthesis, analysis or verification. Toward this end, we introduce a theory of timed actors whose notion of refinement is based on the principle of worst-case design that permeates the world of performance-critical systems. This is in contrast with the classical behavioral and functional refinements based on restricting sets of behaviors. Our refinement allows time-deterministic abstractions to be made of time-non-deterministic systems, improving efficiency and reducing complexity of formal analysis. We show how our theory relates to, and can be used to reconcile, existing time and performance models and their established theories.
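The worst-case design principle behind this refinement ("the earlier the better") can be sketched very loosely as follows. This is not the paper's formalism; the function and timing values are hypothetical, illustrating only the idea that an implementation refines a specification when each of its worst-case output times is no later than the specification's bound.

```python
# Illustrative sketch of worst-case timed refinement (hypothetical data):
# an implementation refines a timing specification if every output is
# produced no later than the specification's worst-case bound.

def refines(impl_times, spec_times):
    """True if each worst-case output time of the implementation is
    no later than the corresponding specification bound."""
    return all(t_impl <= t_spec
               for t_impl, t_spec in zip(impl_times, spec_times))

spec = [2.0, 4.0, 6.0]   # worst-case bounds for outputs 1..3
fast = [1.5, 3.0, 5.5]   # always earlier: a valid refinement
slow = [2.5, 4.0, 6.0]   # violates the first bound

print(refines(fast, spec))  # True
print(refines(slow, spec))  # False
```

Note how the check is direction-sensitive: restricting behaviors (the classical notion) is replaced by bounding them from above in time.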
A Factor Graph Approach to Automated Design of Bayesian Signal Processing Algorithms
The benefits of automating design cycles for Bayesian inference-based
algorithms are becoming increasingly recognized by the machine learning
community. As a result, interest in probabilistic programming frameworks has
much increased over the past few years. This paper explores a specific
probabilistic programming paradigm, namely message passing in Forney-style
factor graphs (FFGs), in the context of automated design of efficient Bayesian
signal processing algorithms. To this end, we developed "ForneyLab"
(https://github.com/biaslab/ForneyLab.jl) as a Julia toolbox for message
passing-based inference in FFGs. We show by example how ForneyLab enables
automatic derivation of Bayesian signal processing algorithms, including
algorithms for parameter estimation and model comparison. Crucially, due to the
modular makeup of the FFG framework, both the model specification and inference
methods are readily extensible in ForneyLab. In order to test this framework,
we compared variational message passing as implemented by ForneyLab with
automatic differentiation variational inference (ADVI) and Monte Carlo methods
as implemented by state-of-the-art tools "Edward" and "Stan". In terms of
performance, extensibility and stability issues, ForneyLab appears to enjoy an
edge relative to its competitors for automated inference in state-space models.
Comment: Accepted for publication in the International Journal of Approximate Reasoning.
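The message-passing idea underlying FFG-based inference can be illustrated on the smallest possible example. The code below is not ForneyLab's API (ForneyLab is a Julia toolbox); it is a generic Python sketch of sum-product updates for a single Gaussian prior combined with one noisy observation, where messages in precision (inverse-variance) form simply add.

```python
# Generic sum-product sketch on a two-node Gaussian factor graph:
# prior x ~ N(m0, v0), observation y = x + N(0, vn).
# In precision form the incoming message precisions add, and the
# posterior mean is the precision-weighted combination.

def gaussian_posterior(m0, v0, y, vn):
    prec = 1.0 / v0 + 1.0 / vn          # precisions of the two messages add
    mean = (m0 / v0 + y / vn) / prec    # precision-weighted mean
    return mean, 1.0 / prec

mean, var = gaussian_posterior(m0=0.0, v0=1.0, y=2.0, vn=1.0)
print(mean, var)  # 1.0 0.5
```

In a full FFG toolbox the same local update rule is applied at every factor node, which is what makes the framework modular and the inference schedule derivable automatically.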
A framework for protein and membrane interactions
We introduce the BioBeta Framework, a meta-model for both protein-level and
membrane-level interactions of living cells. This formalism aims to provide a
formal setting where to encode, compare and merge models at different
abstraction levels; in particular, higher-level (e.g. membrane) activities can
be given a formal biological justification in terms of low-level (i.e.,
protein) interactions. A BioBeta specification provides a protein signature
together with a set of protein reactions, in the spirit of the kappa-calculus.
Moreover, the specification describes when a protein configuration triggers one
of the only two membrane interactions allowed, namely "pinch" and "fuse". In
this paper we define the syntax and semantics of BioBeta, analyse its
properties, give it an interpretation as biobigraphical reactive systems, and
discuss its expressivity by comparing with kappa-calculus and modelling
significant examples. Notably, BioBeta has been designed after a bigraphical
metamodel for the same purposes. Hence, each instance of the calculus
corresponds to a bigraphical reactive system, and vice versa (almost).
Therefore, we can inherit the rich theory of bigraphs, such as the automatic
construction of labelled transition systems and behavioural congruences.
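A very loose operational reading of the two membrane interactions can be sketched as follows. This is not BioBeta's syntax or semantics; membranes are modeled as protein multisets, the guard predicate and all protein names are made up, and the sketch only conveys that a protein configuration enables exactly one of "pinch" or "fuse".

```python
# Illustrative sketch (not BioBeta itself): membranes as protein
# multisets; "fuse" merges two membranes, "pinch" splits one, and a
# protein configuration acts as the guard that enables the step.
from collections import Counter

def can_trigger(membrane, required):
    """Hypothetical guard: the membrane holds the required proteins."""
    return all(membrane[p] >= n for p, n in required.items())

def fuse(m1, m2):
    return m1 + m2            # contents of both membranes merge

def pinch(m, part):
    return part, m - part     # membrane splits into two compartments

m1 = Counter({"A": 2, "B": 1})
m2 = Counter({"B": 1, "C": 3})
merged = fuse(m1, m2) if can_trigger(m1, {"A": 2}) else m1
print(dict(merged))                       # {'A': 2, 'B': 2, 'C': 3}

daughter, rest = pinch(merged, Counter({"C": 2}))
print(dict(daughter), dict(rest))         # {'C': 2} {'A': 2, 'B': 2, 'C': 1}
```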
You can't always sketch what you want: Understanding Sensemaking in Visual Query Systems
Visual query systems (VQSs) empower users to interactively search for line
charts with desired visual patterns, typically specified using intuitive
sketch-based interfaces. Despite decades of past work on VQSs, these efforts
have not translated to adoption in practice, possibly because VQSs are largely
evaluated in unrealistic lab-based settings. To remedy this gap in adoption, we
collaborated with experts from three diverse domains---astronomy, genetics, and
material science---via a year-long user-centered design process to develop a
VQS that supports their workflow and analytical needs, and evaluate how VQSs
can be used in practice. Our study results reveal that ad-hoc sketch-only
querying is not as commonly used as prior work suggests, since analysts are
often unable to precisely express their patterns of interest. In addition, we
characterize three essential sensemaking processes supported by our enhanced
VQS. We discover that participants employ all three processes, but in different
proportions, depending on the analytical needs in each domain. Our findings
suggest that all three sensemaking processes must be integrated in order to
make future VQSs useful for a wide range of analytical inquiries.
Comment: Accepted for presentation at IEEE VAST 2019, to be held October 20-25 in Vancouver, Canada. The paper will also be published in a special issue of IEEE Transactions on Visualization and Computer Graphics (TVCG), IEEE VIS (InfoVis/VAST/SciVis) 2019. ACM 2012 CCS: Human-centered computing, Visualization, Visualization design and evaluation methods.
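The core mechanic of sketch-based querying that the abstract discusses can be illustrated in a few lines. The chart data, the normalization, and the Euclidean distance choice below are all assumptions for illustration, not details from the paper's system.

```python
# Hedged sketch of the heart of a visual query system: rank candidate
# line charts by closeness to a user's sketched pattern, here with a
# simple Euclidean distance over equal-length series (illustrative only).

def distance(sketch, series):
    return sum((a - b) ** 2 for a, b in zip(sketch, series)) ** 0.5

def rank_charts(sketch, charts):
    """Return chart names sorted from most to least similar to the sketch."""
    return sorted(charts, key=lambda name: distance(sketch, charts[name]))

charts = {
    "rising":  [0.0, 0.25, 0.5, 0.75, 1.0],
    "falling": [1.0, 0.75, 0.5, 0.25, 0.0],
    "flat":    [0.5, 0.5, 0.5, 0.5, 0.5],
}
sketch = [0.0, 0.2, 0.4, 0.8, 1.0]   # user draws an upward trend
print(rank_charts(sketch, charts))   # ['rising', 'flat', 'falling']
```

The study's finding is precisely that this ad-hoc matching step alone is rarely enough: analysts struggle to express the pattern of interest as a single sketch, so the matching must be embedded in broader sensemaking processes.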
The Self-Organization of Meaning and the Reflexive Communication of Information
Following a suggestion of Warren Weaver, we extend the Shannon model of
communication piecemeal into a complex systems model in which communication is
differentiated both vertically and horizontally. This model enables us to
bridge the divide between Niklas Luhmann's theory of the self-organization of
meaning in communications and empirical research using information theory.
First, we distinguish between communication relations and correlations among
patterns of relations. The correlations span a vector space in which relations
are positioned and can be provided with meaning. Second, positions provide
reflexive perspectives. Whereas the different meanings are integrated locally,
each instantiation opens global perspectives--"horizons of meaning"--along
eigenvectors of the communication matrix. These next-order codifications of
meaning can be expected to generate redundancies when interacting in
instantiations. Increases in redundancy indicate new options and can be
measured as local reduction of prevailing uncertainty (in bits). The systemic
generation of new options can be considered as a hallmark of the
knowledge-based economy.
Comment: Accepted for publication in Social Science Information, March 21, 201
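The abstract's claim that redundancy can be "measured as local reduction of prevailing uncertainty (in bits)" admits a small numerical illustration. Below, redundancy is taken in the standard Shannon sense as R = H_max - H; the distributions are invented for illustration and are not from the paper.

```python
# Toy illustration: redundancy as unused information capacity,
# R = log2(N) - H(p), measured in bits. A more structured (less
# uniform) distribution leaves more capacity unused, i.e. more
# redundancy and less prevailing uncertainty.
import math

def entropy_bits(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def redundancy_bits(p):
    """Maximum entropy minus actual entropy, in bits."""
    return math.log2(len(p)) - entropy_bits(p)

uniform = [0.25, 0.25, 0.25, 0.25]   # maximal uncertainty, zero redundancy
skewed  = [0.7, 0.1, 0.1, 0.1]       # more structure, more redundancy

print(round(redundancy_bits(uniform), 3))  # 0.0
print(round(redundancy_bits(skewed), 3))   # 0.643
```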