7 research outputs found

    Faster fixed-parameter tractable algorithms for matching and packing problems

    Get PDF
    Abstract: We obtain faster algorithms for problems such as r-dimensional matching and r-set packing when the size k of the solution is considered a parameter. We first establish a general framework for finding and exploiting small problem kernels (of size polynomial in k). This technique lets us combine Alon, Yuster and Zwick's color-coding technique with dynamic programming to obtain faster fixed-parameter algorithms for these problems. Our algorithms run in time O(n + 2^{O(k)}), an improvement over previous algorithms for some of these problems, which run in time O(n + k^{O(k)}). The flexibility of our approach allows tuning of the algorithms to obtain smaller constants in the exponent.
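    The abstract names the Alon-Yuster-Zwick color-coding technique combined with dynamic programming. The sketch below is a hypothetical illustration of that general idea on the classic k-path problem, not the authors' matching/packing algorithms: a random k-coloring makes a fixed k-vertex solution colorful (all colors distinct) with probability k!/k^k >= e^{-k}, and a dynamic program over color subsets then searches for a colorful solution in O(2^k · poly) time per trial.

```python
import random

def has_colorful_k_path(adj, k, trials=200):
    """Color-coding search for a simple path on k vertices.

    adj: dict mapping each vertex to an iterable of neighbours.
    Each trial colors vertices with k colors uniformly at random; a fixed
    k-path is colorful with probability k!/k^k >= e^{-k}, so O(e^k) trials
    find an existing path with high probability. 'trials' tunes the
    failure probability of this one-sided randomized test.
    """
    vertices = list(adj)
    for _ in range(trials):
        color = {v: random.randrange(k) for v in vertices}
        # dp[v] = color subsets realizable by colorful paths ending at v
        dp = {v: {frozenset([color[v]])} for v in vertices}
        for _ in range(k - 1):  # extend all paths by one vertex
            nxt = {v: set() for v in vertices}
            for v in vertices:
                for used in dp[v]:
                    for u in adj[v]:
                        if color[u] not in used:
                            nxt[u].add(used | {color[u]})
            dp = nxt
        if any(dp[v] for v in vertices):  # some colorful k-vertex path found
            return True
    return False  # false negatives possible; reduce by raising 'trials'
```

    Each trial costs O(2^k · k · m) time on a graph with m edges, so the whole procedure is fixed-parameter tractable in k; the abstract's O(n + 2^{O(k)}) bound refines exactly this shape of running time via kernelization.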

    Beta Reduction is Invariant, Indeed (Long Version)

    Full text link
    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the λ-calculus a reasonable machine? Is there a way to measure the computational complexity of a λ-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the λ-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating the λ-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modelled after linear logic and proof-nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the λ-calculus. The size explosion problem seems to imply that this is not possible: having the same notion of normal form, evaluation in the LSC is exponentially longer than in the λ-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to β-redexes, i.e., the steps that cause the blow-up in size.
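    The size explosion problem can be made concrete with one standard term family (a hypothetical illustration, not necessarily the paper's own construction): with u = λx.λy. y x x, the term u(u(...(u z)...)) with n occurrences of u normalizes in n β-steps, yet its normal form has size Θ(2^n). A minimal executable sketch:

```python
from dataclasses import dataclass

# Minimal λ-term representation (a sketch, not the paper's formalism).
@dataclass(frozen=True)
class Var: name: str
@dataclass(frozen=True)
class Lam: var: str; body: "object"
@dataclass(frozen=True)
class App: fun: "object"; arg: "object"

def size(t):
    if isinstance(t, Var): return 1
    if isinstance(t, Lam): return 1 + size(t.body)
    return 1 + size(t.fun) + size(t.arg)

def subst(t, x, s):
    # naive substitution; capture-free here since s's only free variable is z
    if isinstance(t, Var): return s if t.name == x else t
    if isinstance(t, Lam): return t if t.var == x else Lam(t.var, subst(t.body, x, s))
    return App(subst(t.fun, x, s), subst(t.arg, x, s))

def normalize(t, count):
    # innermost-first normalization, counting β-steps in count[0]
    if isinstance(t, Var): return t
    if isinstance(t, Lam): return Lam(t.var, normalize(t.body, count))
    f, a = normalize(t.fun, count), normalize(t.arg, count)
    if isinstance(f, Lam):
        count[0] += 1
        return normalize(subst(f.body, f.var, a), count)
    return App(f, a)

u = Lam('x', Lam('y', App(App(Var('y'), Var('x')), Var('x'))))  # u = λx.λy. y x x

for n in range(1, 9):
    t = Var('z')
    for _ in range(n):
        t = App(u, t)             # t_n = u (u (... (u z)...))
    count = [0]
    nf = normalize(t, count)
    print(n, count[0], size(nf))  # n β-steps, normal form of size Θ(2^n)
```

    Any faithful unshared simulation must write these normal forms down in full, which is why the paper shifts to the LSC, where evaluation manipulates shared, explicitly substituted representations instead.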

    The Weak Call-By-Value λ-Calculus is Reasonable for Both Time and Space

    Full text link
    We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the natural measures for time and space -- the number of beta-reductions and the size of the largest term in a computation -- as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas [STOC '84]. More precisely, we show that, using those measures, Turing machines and the weak call-by-value λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in space for all computations that terminate in (encodings of) 'true' or 'false'. We consider this result a solution to the long-standing open problem, explicitly posed by Accattoli [ENTCS '18], of whether the natural measures for time and space of the λ-calculus are reasonable, at least in the case of weak call-by-value evaluation. Our proof relies on a hybrid of two strategies for simulating reductions of the weak call-by-value λ-calculus on Turing machines, each of which is insufficient on its own. The first strategy is the most naive one, in the sense that a reduction sequence is simulated precisely as given by the reduction rules; in particular, all substitutions are executed immediately. This simulation runs within a constant overhead in space, but the overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing, similar to existing compilers of eager functional languages. This strategy has only a polynomial overhead in time, but the space consumption might require an additional factor of log n, which is essentially due to the size of the pointers required by this strategy. Our main contribution is the construction and verification of a space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and a polynomial overhead in time.
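    The two measures the abstract fixes -- β-steps for time, largest intermediate term for space -- are easy to instrument. Below is a hypothetical sketch of a small-step weak call-by-value evaluator that reports both, reusing the Var/Lam/App, subst, and size definitions from the size-explosion sketch above; cbv_step and cbv_measures are illustrative names, not from the paper.

```python
def cbv_step(t):
    # One weak call-by-value step, left-to-right; None if t is a value.
    # Weak means we never reduce under a λ.
    if isinstance(t, App):
        f = cbv_step(t.fun)
        if f is not None:
            return App(f, t.arg)
        a = cbv_step(t.arg)
        if a is not None:
            return App(t.fun, a)
        if isinstance(t.fun, Lam):   # both sides are values: β-reduce
            return subst(t.fun.body, t.fun.var, t.arg)
    return None                      # Var, Lam, or a stuck application

def cbv_measures(t):
    """Run t to a value; return (value, time, space), where time is the
    number of β-steps and space is the size of the largest term seen."""
    steps, peak = 0, size(t)
    while (nxt := cbv_step(t)) is not None:
        t = nxt
        steps += 1
        peak = max(peak, size(t))
    return t, steps, peak
```

    On the size-exploding family above, for instance, the β-step count stays linear in n while the largest term grows exponentially, showing that the space measure is genuinely independent of the time measure.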

    (Leftmost-Outermost) Beta Reduction is Invariant, Indeed

    Get PDF
    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating the lambda-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notion of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e., the steps that cause the blow-up in size. The main technical contribution of the paper is the definition of useful reductions and the thorough analysis of their properties.
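    The shared normal forms that make useful evaluation possible can be pictured as DAGs: the exponential-size normal forms of the family above shrink to linear size once the two copies produced by each duplication are shared. A hypothetical sketch, again reusing the Var/Lam/App and size definitions from the size-explosion sketch; Python object aliasing stands in for the LSC's explicit substitutions:

```python
def shared_nf(n):
    # Build the normal form of t_n as a DAG: on each level the two
    # occurrences of the previous normal form are one shared object.
    t = Var('z')
    for _ in range(n):
        t = Lam('y', App(App(Var('y'), t), t))  # both children alias t
    return t

def dag_nodes(t, seen=None):
    # Count nodes with sharing: each distinct object is counted once.
    seen = set() if seen is None else seen
    if id(t) in seen:
        return 0
    seen.add(id(t))
    if isinstance(t, Var): return 1
    if isinstance(t, Lam): return 1 + dag_nodes(t.body, seen)
    return 1 + dag_nodes(t.fun, seen) + dag_nodes(t.arg, seen)

nf = shared_nf(20)
print(dag_nodes(nf))  # Θ(n) shared nodes, here 81
# size(nf), the unfolded tree, is Θ(2^n); avoid computing it for large n.
```

    Useful evaluation is, roughly, what keeps computation on such shared representations honest: it performs only the duplications that can contribute to beta-redexes and never unfolds the sharing merely to spell out the exponentially large result.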

    Confronting intractability via parameters

    Full text link

    Analysis of IT Applications by Means of Time Variation

    Get PDF
    Performance problems frequently occur in practice in IT applications, despite increasing hardware performance and a wide variety of approaches for developing performant software across the software life cycle. Model-based performance analyses make it possible to prevent performance problems on the basis of design artifacts; for existing or partially implemented IT applications, performance problems are typically addressed by scaling the hardware or tuning the code. Both approaches have drawbacks: model-based approaches are not in general use because of the high level of expertise they require, and after-the-fact optimization is an unsystematic and uncoordinated process. This dissertation proposes a new approach to performance analysis for a subsequent optimization. Performance interdependencies in the IT application are identified by means of an experiment. The basis of the experiment, the analysis instrument, is a targeted temporal variation of the start time, end time, or duration of processes in the IT application. This approach can be automated and applied in a structured way, without a steep learning curve, within the software development process. Using the Turing machine, it is proved that the temporal variation applied by the analysis instrument preserves the correctness of sequential computations; this result is extended to concurrent systems, and discussed, using a parallel register machine. With this practice-oriented machine model it is shown that the interdependencies discovered by the analysis instrument identify optimization candidates. A dedicated experimentation environment, in which the processes of a system consisting of software and hardware can be varied programmatically, is realized by means of a virtualization solution, and techniques for applying the analysis instrument through instrumentation are given. A method for determining the minimum hardware requirements of IT applications is presented and exemplified in the experimentation environment with two scenarios and the Android operating system. Several methods for deriving the system's optimization candidates from the observations of the experiment are presented, classified, and evaluated; the identification of optimization candidates and optimization potential is demonstrated in practice with these methods on illustrative scenarios and several large IT applications. As a consistent extension, a test method based on the analysis instrument is given for validating a system against errors that are not deterministically reproducible, arising from insufficient synchronization mechanisms (e.g., races) or from timing (e.g., Heisenbugs, aging-related faults).
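    To make the core idea concrete: the analysis instrument varies the start time and duration of an application's processes and observes how the system reacts. A minimal, hypothetical Python sketch of such an instrumentation follows; the names vary_timing, pre_delay, stretch, and jitter are illustrative and do not come from the thesis, which realizes the variation inside a virtualized environment rather than in application code.

```python
import functools
import random
import time

def vary_timing(pre_delay=0.0, stretch=1.0, jitter=0.0):
    """Decorator: shift a call's start time and stretch its duration.

    pre_delay -- seconds to wait before the call starts
    stretch   -- target duration as a multiple of the measured one
    jitter    -- extra random delay in [0, jitter] to perturb schedules
    """
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            time.sleep(pre_delay + random.uniform(0.0, jitter))  # move the start
            t0 = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = time.perf_counter() - t0
            time.sleep(max(0.0, elapsed * (stretch - 1.0)))      # stretch the runtime
            return result
        return inner
    return wrap

# Example: slow one component and watch whether another misbehaves or the
# end-to-end latency shifts, exposing hidden timing interdependencies
# (races, Heisenbugs) and thus optimization candidates.
@vary_timing(pre_delay=0.05, stretch=2.0, jitter=0.01)
def fetch_config():
    time.sleep(0.01)  # stands in for real work
    return {"ready": True}
```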