54 research outputs found

    The Complexity of Computing Optimal Assignments of Generalized Propositional Formulae

    We consider the problems of finding the lexicographically minimal (or maximal) satisfying assignment of propositional formulae for different restricted formula classes. It turns out that for each class from our framework, this problem is either polynomial-time solvable or complete for OptP. We also consider the problem of deciding whether, in the optimal assignment, the largest variable gets value 1. We show that this problem is either in P or complete for P^NP. Comment: 17 pages, 1 figure
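
    To make the optimization problem concrete, the following minimal Python sketch finds the lexicographically smallest satisfying assignment of a small formula by exhaustive search; the example formula, the variable order, and the encoding as a Python callable are illustrative assumptions, not the paper's polynomial-time algorithms for restricted classes.

        import itertools

        # Hedged sketch: brute-force search for the lexicographically minimal
        # satisfying assignment by enumerating assignments of (x1, ..., xn) in
        # lexicographic order. The example formula is an illustrative assumption.

        def lex_min_assignment(formula, num_vars):
            # formula: a callable mapping a tuple of 0/1 values to True/False
            for bits in itertools.product((0, 1), repeat=num_vars):
                if formula(bits):
                    return bits  # first hit in lexicographic order is the minimum
            return None  # unsatisfiable

        # Example: (x1 or x2) and (not x1 or x3)
        f = lambda b: (b[0] or b[1]) and ((not b[0]) or b[2])
        print(lex_min_assignment(f, 3))  # -> (0, 1, 0)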

    Complexity classifications for different equivalence and audit problems for Boolean circuits

    We study Boolean circuits as a representation of Boolean functions and consider different equivalence, audit, and enumeration problems. For a number of restricted sets of gate types (bases) we obtain efficient algorithms, while for all other gate types we show that these problems are at least NP-hard. Comment: 25 pages, 1 figure
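
    As a concrete reading of the equivalence problem, the sketch below decides whether two circuits compute the same function by enumerating all inputs, which is exponential in general and motivates the search for efficient algorithms over restricted bases; the encoding of circuits as Python callables and the example circuits are illustrative assumptions, not the paper's formal circuit model.

        import itertools

        # Hedged sketch: deciding equivalence of two Boolean circuits by enumerating
        # all inputs, which takes exponential time in the number of inputs. The
        # circuit representation (callables over bit tuples) is an illustrative
        # assumption, not the paper's model of circuits over a fixed gate basis.

        def equivalent(circuit_a, circuit_b, num_inputs):
            return all(circuit_a(bits) == circuit_b(bits)
                       for bits in itertools.product((0, 1), repeat=num_inputs))

        # Two circuits over the basis {AND, OR, NOT} computing the same function
        c1 = lambda b: not (b[0] and b[1])        # NAND built from AND and NOT
        c2 = lambda b: (not b[0]) or (not b[1])   # De Morgan equivalent
        print(equivalent(c1, c2, 2))  # -> True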

    On FPGA-based implementations of Grøstl

    The National Institute of Standards and Technology (NIST) has started a competition for a new secure hash standard. To make a significant comparison between the submitted candidates, third-party implementations of all proposed hash functions are needed. This is one of the reasons why the SHA-3 candidate Grøstl has been chosen for an FPGA-based implementation. Our work is mainly motivated by current and future developments of the automotive market (e.g. car-2-car communication systems), which will further increase the need for a suitable cryptographic infrastructure in modern vehicles (cf. the AUTOSAR project). One core component of such an infrastructure is a secure cryptographic hash function, which is used in many applications such as challenge-response authentication systems or digital signature schemes. Another motivation to evaluate Grøstl is its resemblance to AES. The automotive market demands, like any mass market, low-budget and therefore compact implementations, hence our evaluation of Grøstl focuses on area optimizations. It is shown that, while Grøstl is inherently quite large compared to AES, it is still possible to implement the Grøstl algorithm on small and low-budget FPGAs like the second smallest available Spartan-3, while maintaining a reasonably high throughput.
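
    The abstract names challenge-response authentication as one application of a cryptographic hash in such an infrastructure. The toy sketch below shows that pattern with HMAC-SHA-256 from the Python standard library as a stand-in primitive, since Grøstl itself is not available there; the key handling and message flow are illustrative assumptions only.

        import hmac, hashlib, os

        # Hedged sketch of hash-based challenge-response authentication.
        # HMAC-SHA-256 is used as a stand-in primitive; Grøstl is not part of
        # Python's standard library. The key handling below is illustrative only.

        shared_key = os.urandom(32)      # provisioned secret shared by both parties

        def respond(key, challenge):
            # prover: MAC the fresh challenge with the shared key
            return hmac.new(key, challenge, hashlib.sha256).digest()

        def verify(key, challenge, response):
            # verifier: recompute the expected response and compare in constant time
            return hmac.compare_digest(respond(key, challenge), response)

        challenge = os.urandom(16)       # fresh nonce sent by the verifier
        print(verify(shared_key, challenge, respond(shared_key, challenge)))  # -> True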

    On Optimized FPGA Implementations of the SHA-3 Candidate Groestl

    The National Institute of Standards and Technology (NIST) has started a competition for a new secure hash standard. In this context, third-party implementations of all proposed hash functions are regarded as an important part of the competition. We chose to implement the Groestl hash function for FPGAs because of its resemblance to AES. More precisely, we developed two optimized versions, one for throughput and one for area. Both implementations improve on the results and estimates presented in the original submission to the competition. The performance of both implementations may be improved further, thus Groestl seems to be a good candidate for implementations on medium-sized FPGAs. Besides that, it is shown that Groestl needs a significant amount of resources, which will hinder its use for automotive applications.

    On hybrid SIDH schemes using Edwards and Montgomery curve arithmetic

    Supersingular isogeny Diffie-Hellman (SIDH) is a proposal for a quantum-resistant key exchange. The state-of-the-art implementation works entirely with Montgomery curves and can basically be divided into elliptic curve arithmetic and isogeny arithmetic. It is well known that twisted Edwards curves can provide a more efficient elliptic curve arithmetic. Therefore, Costello and Hisil hinted that a speedup may be gained by using only Edwards curves for isogeny and curve arithmetic, or by a hybrid scheme that uses Edwards curve arithmetic and switches between the models whenever needed. Following the latter case, we investigated how to efficiently switch between Montgomery and twisted Edwards curves in SIDH, and how to insert Edwards arithmetic into the current state-of-the-art implementation. We did not gain a speedup compared to the results of Costello, Longa, and Naehrig, but in some cases the Edwards arithmetic is almost as fast. Thus, we suppose that a hybrid scheme does not improve the performance of SIDH, but it can still be interesting for platforms having special hardware acceleration for Edwards curves. However, a full Edwards SIDH version may give a speedup if fast Edwards isogeny formulas can be found.
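
    The switching step relies on the standard birational maps between a Montgomery curve B*v^2 = u^3 + A*u^2 + u and the twisted Edwards curve a*x^2 + y^2 = 1 + d*x^2*y^2 with a = (A+2)/B and d = (A-2)/B. The sketch below checks those maps over a tiny toy prime field; the prime, the curve coefficients, and the point search are illustrative assumptions and are unrelated to real SIDH parameters, which live over F_{p^2} with very large p.

        # Hedged sketch: the standard birational maps between a Montgomery curve
        # B*v^2 = u^3 + A*u^2 + u and the twisted Edwards curve
        # a*x^2 + y^2 = 1 + d*x^2*y^2, with a = (A+2)/B and d = (A-2)/B,
        # verified over a tiny toy prime field. All parameters are illustrative.

        p = 1009                      # toy prime, not a cryptographic field size

        def inv(z):
            return pow(z, p - 2, p)   # modular inverse via Fermat's little theorem

        def montgomery_to_edwards(u, v):
            return (u * inv(v) % p, (u - 1) * inv(u + 1) % p)  # (x, y) = (u/v, (u-1)/(u+1))

        def edwards_to_montgomery(x, y):
            u = (1 + y) * inv(1 - y) % p                       # u = (1+y)/(1-y)
            return (u, u * inv(x) % p)                         # v = u/x

        A, B = 6, 1                   # Montgomery coefficients of the toy curve
        a, d = (A + 2) * inv(B) % p, (A - 2) * inv(B) % p

        # find a generic affine point on the Montgomery curve, then check both maps
        for u in range(2, p - 1):
            rhs = (u * u * u + A * u * u + u) * inv(B) % p
            v = next((w for w in range(1, p) if w * w % p == rhs), None)
            if v:
                x, y = montgomery_to_edwards(u, v)
                assert (a * x * x + y * y) % p == (1 + d * x * x * y * y) % p
                assert edwards_to_montgomery(x, y) == (u, v)
                print("maps verified on point", (u, v), "<->", (x, y))
                break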

    Sichere IT ohne Schwachstellen und Hintertüren

    Our increasing dependence on information technology continuously raises the safety and security requirements for its use. Vulnerabilities in hardware and software are a central problem here. Market forces have so far been unable to fundamentally remedy this situation. A counter-strategy should therefore consider the following options: (1) private and governmental funding of open and secure IT production, (2) improved sovereign control over the production of all critical IT components within an economic area, and (3) improved and enforced regulation. This contribution analyses the advantages and disadvantages of these options. It is proposed to ensure the security of the key components of a supply chain through globally distributed, open and, where applicable, mathematically proven components. The described approach permits the use of existing and new proprietary components.

    Verallgemeinerte Erfüllbarkeitsprobleme

    In the last 40 years, complexity theory has grown into a rich and powerful field of theoretical computer science. The main task of complexity theory is the classification of problems with respect to their consumption of resources (e.g., running time or required memory). To study the computational complexity (i.e., consumption of resources) of problems, similar problems are grouped into so-called complexity classes. During the systematic study of numerous problems of practical relevance, no efficient algorithms were found for a great number of the studied problems, and it was unclear whether such algorithms exist at all. A major breakthrough in this situation was the introduction of the complexity classes P and NP and the identification of hardest problems in NP. These hardest problems of NP are nowadays known as NP-complete problems. One prominent example of an NP-complete problem is the satisfiability problem for propositional formulas (SAT): given a propositional formula as input, it must be decided whether an assignment of the propositional variables exists that satisfies the given formula. The intensive study of NP led to numerous related classes, e.g., the classes of the polynomial-time hierarchy PH, P, #P, PP, NL, L and #L. During the study of these classes, problems related to propositional formulas were often identified as complete problems for them. Hence some questions arise: Which properties of SAT make it one of the hardest problems in NP? Are there restrictions or generalizations of SAT which are complete for other well-known complexity classes? In the context of these questions, a result by E. Post is extremely useful. He identified and characterized all classes of Boolean functions that are closed under superposition. Using this result, it is possible to study problems connected to generalized propositional logics, which is what this thesis does. Many different problems connected to propositional logic are studied and classified with respect to their computational complexity, making the borderline between easy and hard problems visible.
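
    To illustrate the generalized setting described here, the sketch below decides satisfiability of formulas built from a restricted base B of Boolean connectives by brute force over all assignments; the chosen base, the formula encoding, and the example formula are illustrative assumptions, not the classification results of the thesis.

        from itertools import product

        # Hedged sketch of generalized satisfiability: formulas are built from a
        # restricted base B of Boolean connectives (here given as Python functions)
        # and satisfiability is decided by brute force over all assignments.

        B = {
            "and": lambda x, y: x & y,
            "xor": lambda x, y: x ^ y,   # an example base B = {and, xor}
        }

        def evaluate(formula, assignment):
            # formula: a variable name, or a tuple (connective, subformula, subformula)
            if isinstance(formula, str):
                return assignment[formula]
            op, lhs, rhs = formula
            return B[op](evaluate(lhs, assignment), evaluate(rhs, assignment))

        def satisfiable(formula, variables):
            return any(evaluate(formula, dict(zip(variables, bits)))
                       for bits in product((0, 1), repeat=len(variables)))

        # (x xor y) and x  is satisfied by x=1, y=0
        phi = ("and", ("xor", "x", "y"), "x")
        print(satisfiable(phi, ["x", "y"]))  # -> True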

    On High and Low Sets for the Boolean Hierarchy

    The polynomial-time hierarchy (PH) is central for many considerations of complexity theory. We call a set A ∈ NP high (low, resp.) for the class Σ^p_k of the polynomial-time hierarchy if Σ^p_k relativized to the oracle A yields Σ^p_{k+1} (Σ^p_k, resp.). This concept of high and low sets originates from recursion theory ([Coo74] and [Soa74]) and was translated into the terms of the polynomial-time hierarchy by Schöning ([Sch85]). Another important hierarchy in complexity theory is the Boolean hierarchy (BH) ([WW85]), which is located between the classes NP and Θ^p_2 of the polynomial-time hierarchy. In this paper we introduce a concept of highness and lowness for the Boolean hierarchy. Informally, a set A ∈ NP is high (low, resp.) for the class NP(k) of the Boolean hierarchy if certain Boolean combinations of A with NP sets yield NP(k+1) (co-NP(k), resp.). Using the technique from [CK96], we can show that every low set for NP(k) is low for Σ^p…
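
    Restating the first definition of this abstract in standard notation, the following LaTeX fragment is a sketch of the high/low conditions for level k of the polynomial-time hierarchy; the exact formalization used in the paper, in particular for the Boolean hierarchy case, may differ.

        % For A in NP (Schoening-style highness/lowness at level k of PH,
        % as described in the abstract):
        A \text{ is high for } \Sigma^p_k \iff \Sigma_k^{p,A} = \Sigma_{k+1}^{p},
        \qquad
        A \text{ is low for } \Sigma^p_k \iff \Sigma_k^{p,A} = \Sigma_k^{p}.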