Inadmissible Class of Boolean Functions under Stuck-at Faults
Many underlying structural and functional factors that determine the fault
behavior of a combinational network are not yet fully understood. In this
paper, we show that there exists a large class of Boolean functions, called
root functions, which can never appear as a faulty response in irredundant
two-level circuits, even when arbitrary multiple stuck-at faults are
injected. Conversely, we show that any other Boolean function can appear as a
faulty response from an irredundant realization of some root function under
certain stuck-at faults. We characterize this new class of functions and show
that for n variables, their number is exactly equal to the number of
independent dominating sets (Harary and Livingston, Appl. Math. Lett., 1993) in
a Boolean n-cube. We report some bounds and enumerate the total number of root
functions up to 6 variables. Finally, we point out several open problems and
possible applications of root functions in logic design and testing.
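The count stated above can be checked by brute force: the abstract equates the number of root functions on n variables with the number of independent dominating sets in the Boolean n-cube. A small illustrative enumerator (not from the paper) treats vertices as n-bit integers, with adjacency given by Hamming distance 1:

```python
from itertools import combinations

def independent_dominating_sets(n):
    """Count independent dominating sets in the Boolean n-cube Q_n."""
    V = range(1 << n)
    # neighbours of v differ from it in exactly one bit
    adj = {v: {v ^ (1 << i) for i in range(n)} for v in V}
    count = 0
    for r in range(1, (1 << n) + 1):
        for S in combinations(V, r):
            s = set(S)
            independent = all(adj[v].isdisjoint(s) for v in s)
            dominating = all(v in s or adj[v] & s for v in V)
            if independent and dominating:
                count += 1
    return count

# Q_1 = K_2 has two such sets ({0} and {1}); Q_2 = C_4 also has two ({00,11} and {01,10})
print(independent_dominating_sets(1), independent_dominating_sets(2))
```

This is exponential in 2^n and only practical for the small n the paper enumerates.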
The Design of a Relational Engine
The key design challenges in the construction of a SAT-based relational engine are described, and novel techniques are proposed to address them. An efficient engine must have a mechanism for specifying partial solutions, an effective symmetry detection and breaking scheme, and an economical translation from relational to boolean logic. These desiderata are addressed with three new techniques: a symmetry detection algorithm that works in the presence of partial solutions, a sparse-matrix representation of relations, and a compact representation of boolean formulas inspired by boolean expression diagrams and reduced boolean circuits. The presented techniques have been implemented and evaluated, with promising results.
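To illustrate the sparse-matrix idea (a hypothetical sketch, not the engine's actual code): a binary relation over a finite universe can be stored as a sparse map from tuples to boolean formulas, and relational join then becomes a sparse boolean matrix product, touching only nonzero entries:

```python
def join(r, s):
    """Sparse relational join: (a, c) holds iff OR over b of r(a, b) AND s(b, c).

    r and s map tuples to propositional formulas; absent tuples are false,
    so only the nonzero entries of the boolean matrices are ever visited.
    """
    out = {}
    for (a, b), f in r.items():
        for (b2, c), g in s.items():
            if b == b2:
                out.setdefault((a, c), []).append(('and', f, g))
    return {t: ('or', *alts) for t, alts in out.items()}

r = {(0, 1): 'r01', (1, 2): 'r12'}
s = {(1, 2): 's12'}
print(join(r, s))  # {(0, 2): ('or', ('and', 'r01', 's12'))}
```

A real engine would index the second relation by its first column rather than scanning it, but the sparsity payoff is the same.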
Human error in the design of a safety-critical system
From the introduction: This thesis is an investigation into some of the causes of, and possible remedies for, the problem of human error in a complex human-machine system. The system in question is engaged in the design of computer software for the control of railway signalling infrastructure. Error in its operation has the potential to be lethally destructive, a fact that provides not only the system's epithet but also the primary motivation and significance for its investigation.
Generation of Graph Classes with Efficient Isomorph Rejection
In this thesis, efficient isomorph-free generation of graph classes with the method of
generation by canonical construction path (GCCP) is discussed. The method GCCP
was invented by McKay in the 1980s. It is a general method to recursively generate
combinatorial objects avoiding isomorphic copies. In the introduction chapter, the
method of GCCP is discussed and is compared to other well-known methods of generation.
The generation of the class of quartic graphs is used as an example to explain
this method. Quartic graphs are simple regular graphs of degree four. The programs
we developed based on GCCP generate quartic graphs on 18 vertices more than
twice as efficiently as the well-known software GENREG.
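A much-simplified illustration of isomorph-free generation by vertex augmentation (using explicit canonical forms rather than McKay's full GCCP machinery, and practical only for tiny graphs):

```python
from itertools import chain, combinations, permutations

def canon(n, edges):
    """Canonical form: lexicographically least relabelled edge set."""
    return min(
        tuple(sorted(tuple(sorted((p[u], p[v])) ) for u, v in edges))
        for p in permutations(range(n))
    )

def generate(n):
    """One representative per isomorphism class of graphs on n vertices."""
    reps = [()]  # the single graph on one vertex, with no edges
    for k in range(2, n + 1):
        seen, nxt = set(), []
        for g in reps:
            # attach new vertex k-1 to every subset of the existing vertices
            for nbrs in chain.from_iterable(
                    combinations(range(k - 1), r) for r in range(k)):
                edges = g + tuple((u, k - 1) for u in nbrs)
                c = canon(k, edges)
                if c not in seen:      # reject isomorphic copies
                    seen.add(c)
                    nxt.append(edges)
        reps = nxt
    return reps

print(len(generate(4)))  # 11 graphs on four vertices, up to isomorphism
```

GCCP avoids the stored `seen` set entirely by accepting an augmentation only when it is the canonical way of building the new graph, which is what makes it scale to cases like the quartic graphs above.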
This thesis also demonstrates how the class of principal graph pairs can be generated
exhaustively in an efficient way using the method of GCCP. The definition and
importance of principal graph pairs come from the theory of subfactors where each
subfactor can be modelled as a principal graph pair. The theory of subfactors has
applications in the theory of von Neumann algebras, operator algebras, quantum algebras
and knot theory, as well as in the design of quantum computers. While it was
initially expected that the classification at index 3 + √5 would be very complicated,
using GCCP to exhaustively generate principal graph pairs was critical in completing
the classification of small index subfactors to index 5¼.
The other set of classes of graphs considered in this thesis contains graphs without
a given set of cycles. For a given set of graphs, H, the Turán number of H, ex(n,H),
is defined to be the maximum number of edges in a graph on n vertices without a
subgraph isomorphic to any graph in H. Denote by EX(n,H), the set of all extremal
graphs with respect to n and H, i.e., graphs with n vertices, ex(n,H) edges and no
subgraph isomorphic to any graph in H. We consider this problem when H is a set of
cycles. New results for ex(n, C) and EX(n, C) are introduced using a set of algorithms
based on the method of GCCP. Let K be an arbitrary subset of {C3, C4, C5, ..., C32}.
For given n and a set of cycles, C, these algorithms can be used to calculate ex(n, C)
and extremal graphs in EX(n, C) by recursively extending smaller graphs without any
cycle in C, where C = K or C = {C3, C5, C7, ...} ∪ K and n ≤ 64. These results
considerably extend the previous results of the many researchers who worked on
similar problems.
In the last chapter, a new class of canonical relabellings for graphs, hierarchical
canonical labelling, is introduced, in which if the vertices of a graph, G, are canonically
labelled by {1, ..., n}, then G\{n} is also canonically labelled. An efficient hierarchical
canonical labelling is presented, and the application of this labelling in the generation
of combinatorial objects is discussed.
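For very small n, the Turán numbers ex(n, C) discussed above can be checked by exhaustive search over all labelled graphs (an illustrative brute force, not the thesis's GCCP-based algorithms):

```python
from itertools import combinations

def has_cycle(adj, L):
    """Does the graph contain a cycle on exactly L distinct vertices?"""
    def dfs(start, v, visited):
        if len(visited) == L:
            return start in adj[v]          # close the cycle back to start
        return any(dfs(start, w, visited | {w})
                   for w in adj[v] if w not in visited)
    return any(dfs(s, s, {s}) for s in range(len(adj)))

def ex(n, cycle_lengths):
    """Max edges of an n-vertex graph with no cycle whose length is forbidden."""
    pairs = list(combinations(range(n), 2))
    best = 0
    for mask in range(1 << len(pairs)):     # every labelled graph on n vertices
        edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
        adj = [set() for _ in range(n)]
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        if not any(has_cycle(adj, L) for L in cycle_lengths):
            best = max(best, len(edges))
    return best

print(ex(5, [3]))     # 6: triangle-free maximum on 5 vertices (K_{2,3})
print(ex(5, [3, 4]))  # 5: girth >= 5 on five vertices (C_5 itself)
```

This enumerates 2^(n choose 2) graphs, which is exactly why the thesis's recursive extension of smaller extremal graphs is needed to reach n = 64.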
Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future
Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
The Bionic-Cybernetic System Model VSM by S. Beer
On the one hand, networks, as a newer organizational form in today's economic environment of ever-growing complexity, are credited by many with the potential to function more efficiently, more effectively, and faster than other structural models, and thus to enable the longer survival of the organization.
On the other hand, there is "bionics", an approach originating mainly in engineering that takes nature as its model. Organizational bionics attempts to study systems that themselves possess high complexity and act in environments of equally high complexity (e.g. ecosystems or organisms), and to derive correspondingly improved structural models for organizations. The Viable System Model (VSM), the model of a viable system, is a bionic-cybernetic structural model developed by Stafford Beer, intended as an ideal system model to serve as a template for complex organizations.
This thesis addresses the question of whether, and to what extent, the network model comes close to the VSM as an ideal organizational model; in other words, whether the network model is a "viable" organizational concept that fulfils the VSM criteria.
The first three chapters briefly introduce the Viable System Model (VSM), explain some important points from the fields of bionics, cybernetics, and systems theory, and describe the structure of the nervous system, which Beer used as the template for his model. The following seven chapters deal in more detail with the VSM and its five subsystems, presenting the system aspects underlying the model, the organization and management of the five subsystems, their mutual relationships, and some special VSM characteristics.
The next four chapters deal with the network model, more precisely with some theories of network formation, network typologies, and particular points in network theory.
The penultimate two chapters are devoted to the question raised at the outset: to what extent the network model resembles the VSM (Viable System Model) as an ideal organizational model and fulfils the criteria and requirements for a viable organizational concept. On this point, no very high overall agreement between the two models could be established, though with the caveat that certain network types could, under certain conditions, fulfil the VSM criteria of the ideal model considerably better.
A short biography of Stafford Beer concludes the thesis.
A constraint solver for software engineering : finding models and cores of large relational specifications
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 105-120).
Relational logic is an attractive candidate for a software description language, because both the design and implementation of software often involve reasoning about relational structures: organizational hierarchies in the problem domain, architectural configurations in the high level design, or graphs and linked lists in low level code. Until recently, however, frameworks for solving relational constraints have had limited applicability. Designed to analyze small, hand-crafted models of software systems, current frameworks perform poorly on specifications that are large or that have partially known solutions. This thesis presents an efficient constraint solver for relational logic, with recent applications to design analysis, code checking, test-case generation, and declarative configuration. The solver provides analyses for both satisfiable and unsatisfiable specifications--a finite model finder for the former and a minimal unsatisfiable core extractor for the latter. It works by translating a relational problem to a boolean satisfiability problem; applying an off-the-shelf SAT solver to the resulting formula; and converting the SAT solver's output back to the relational domain. The idea of solving relational problems by reduction to SAT is not new. The core contributions of this work, instead, are new techniques for expanding the capacity and applicability of SAT-based engines.
They include: a new interface to SAT that extends relational logic with a mechanism for specifying partial solutions; a new translation algorithm based on sparse matrices and auto-compacting circuits; a new symmetry detection technique that works in the presence of partial solutions; and a new core extraction algorithm that recycles inferences made at the boolean level to speed up core minimization at the specification level. By Emina Torlak, Ph.D.
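The "auto-compacting circuits" mentioned above are, at heart, boolean gates with structural sharing. A toy version of the idea (hypothetical, not the solver's actual implementation) hash-conses gates so that syntactically equal subformulas are stored exactly once:

```python
class Circuit:
    """Hash-consed boolean gates: equal subformulas share one node."""

    def __init__(self):
        self._cache = {}

    def gate(self, op, *args):
        if op in ('and', 'or'):
            # commutative ops: canonical argument order plus deduplication
            args = tuple(sorted(set(args), key=repr))
            if len(args) == 1:          # and(x) == or(x) == x
                return args[0]
        key = (op, args)
        if key not in self._cache:
            self._cache[key] = key      # a node is just its interned key here
        return self._cache[key]

c = Circuit()
a = c.gate('and', 'x', 'y')
b = c.gate('and', 'y', 'x')
print(a is b, len(c._cache))  # the two gates collapse to one shared node
```

Sharing keeps the translated formula compact, which is what lets a SAT-based engine scale to large specifications.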
On the performance and programming of reversible molecular computers
If the 20th century was known for the computational revolution, what will the 21st be known for? Perhaps the recent strides in the nascent fields of molecular programming and biological computation will help bring about the "Coming Era of Nanotechnology" promised in Drexler's "Engines of Creation". Though there is still far to go, there is much reason for optimism. This thesis examines the underlying principles needed to realise the computational aspects of such "engines" in a performant way. Its main body focusses on the ways in which thermodynamics constrains the operation and design of such systems, and it ends with the proposal of a model of computation appropriate for exploiting these constraints.
These thermodynamic constraints are approached from three different directions. The first considers the maximum possible aggregate performance of a system of computers of given volume, V, with a given supply of free energy. From this perspective, reversible computing is imperative in order to circumvent the Landauer limit. A result of Frank is refined and strengthened, showing that the adiabatic-regime performance of reversible computers is the best possible for any computer, quantum or classical. This therefore shows a universal scaling law governing the performance of compact computers of ~V^(5/6), compared to ~V^(2/3) for conventional computers. For the case of molecular computers, it is shown how to attain this bound. The second direction extends this performance analysis to the case where individual computational particles or sub-units can interact with one another. The third extends it to interactions with shared, non-computational parts of the system. It is found that accommodating these interactions in molecular computers imposes a performance penalty that undermines the earlier scaling result. Nonetheless, scaling superior to that of irreversible computers can be preserved, and appropriate mitigations and considerations are discussed. These analyses are framed in a context of molecular computation, but where possible more general computational systems are considered.
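To put numbers on the constraints discussed above: the Landauer limit sets the minimum energy cost of an irreversible bit erasure at kT·ln 2, and the scaling exponents quoted imply that the relative advantage of reversible computers grows as V^(1/6) (a back-of-the-envelope check, not the thesis's derivation):

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 300.0           # room temperature, K
landauer = kB * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer:.3g} J per bit erased")  # ~2.87e-21 J

# aggregate performance scaling: ~V^(5/6) reversible vs ~V^(2/3) conventional,
# so the relative advantage of reversibility grows like V^(1/6)
for V in (1.0, 1e6, 1e12):
    print(f"V = {V:g}: advantage factor {V ** (5 / 6) / V ** (2 / 3):g}")
```

Even a modest million-fold volume increase thus buys an order-of-magnitude relative advantage for the reversible design.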
The proposed model, the Ś-calculus, is appropriate for programming reversible molecular computers taking these constraints into account. A variety of examples and mathematical analyses accompany it. Moreover, abstract sketches of potential molecular implementations are provided. Developing these into viable schemes suitable for experimental validation will be a focus of future work.
Advances and Applications of Dezert-Smarandache Theory (DSmT) for Information Fusion (Collected works), Vol. 2
This second volume dedicated to Dezert-Smarandache Theory (DSmT) in Information Fusion brings in new fusion quantitative rules (such as the PCR1-6, where PCR5 for two sources does the most mathematically exact redistribution of conflicting masses to the non-empty sets in the fusion literature), qualitative fusion rules, and the Belief Conditioning Rule (BCR), which is different from the classical conditioning rule used by the fusion community working with the Mathematical Theory of Evidence.
Other fusion rules are constructed based on T-norm and T-conorm (hence using fuzzy logic and fuzzy set in information fusion), or more general fusion rules based on N-norm and N-conorm (hence using neutrosophic logic and neutrosophic set in information fusion), and an attempt to unify the fusion rules and fusion theories.
The known fusion rules are extended from the power set to the hyper-power set, and comparisons between rules are made on many examples.
One defines the degree of intersection of two sets, the degree of union of two sets, and the degree of inclusion of two sets, which all help in improving the existing fusion rules as well as the credibility, plausibility, and commonality functions.
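PCR5 for two sources, as mentioned above, redistributes each conflicting product m1(X)·m2(Y) back to X and Y in proportion to the masses that caused the conflict. A minimal sketch for frames of discernment given as Python frozensets (illustrative, following the published two-source PCR5 formula):

```python
def pcr5(m1, m2):
    """Combine two basic belief assignments with the PCR5 rule.

    m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    """
    out = {}
    for X, a in m1.items():
        for Y, b in m2.items():
            Z = X & Y
            if Z:
                out[Z] = out.get(Z, 0.0) + a * b           # conjunctive part
            elif a + b > 0:
                # conflicting mass a*b split proportionally between X and Y
                out[X] = out.get(X, 0.0) + a * a * b / (a + b)
                out[Y] = out.get(Y, 0.0) + a * b * b / (a + b)
    return out

A, B = frozenset({'A'}), frozenset({'B'})
m = pcr5({A: 0.6, B: 0.4}, {A: 0.7, B: 0.3})
print(round(m[A], 4), round(m[B], 4), round(sum(m.values()), 10))
```

Because each conflicting product a·b is split as a²b/(a+b) + ab²/(a+b) = ab, total mass is conserved without the normalization step of Dempster's rule.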
The book chapters are written by Frederic Dambreville, Milan Daniel, Jean Dezert, Pascal Djiknavorian, Dominic Grenier, Xinhan Huang, Pavlina Dimitrova Konstantinova, Xinde Li, Arnaud Martin, Christophe Osswald, Andrew Schumann, Tzvetan Atanasov Semerdjiev, Florentin Smarandache, Albena Tchamova, and Min Wang.