109 research outputs found

    Provably Correct Derivation of Algorithms Using FermaT

    The transformational programming method of algorithm derivation starts with a formal specification of the result to be achieved, plus some informal ideas as to what techniques will be used in the implementation. The formal specification is then transformed into an implementation by means of correctness-preserving refinement and transformation steps, guided by the informal ideas. The transformation process will typically include the following stages: (1) formal specification; (2) elaboration of the specification; (3) divide and conquer to handle the general case; (4) recursion introduction; (5) recursion removal, if an iterative solution is desired; (6) optimisation, if required. At any stage in the process, sub-specifications can be extracted and transformed separately. The main difference between this approach and the invariant-based programming approach (and similar stepwise refinement methods) is that loops can be introduced and manipulated while maintaining program correctness, with no need to derive loop invariants. Another difference is that at every stage in the process we are working with a correct program: there is never any need for a separate "verification" step. These factors help to ensure that the method is capable of scaling up to the development of large and complex software systems. The method is applied to the derivation of a complex linked-list algorithm and produces code which is over twice as fast as the code written by Donald Knuth to solve the same problem.
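    As a minimal sketch of these stages (not FermaT or WSL itself; the example and names are purely illustrative), the following Python fragment derives an iterative list reversal from its specification via recursion introduction and removal:

        # Stage (1): formal specification -- the result to be achieved,
        # written directly (used here only as a checking oracle).
        def spec_reverse(xs):
            return list(reversed(xs))

        # Stages (3)-(4): divide and conquer plus recursion introduction.
        # Reading: rec_reverse(xs, acc) = reverse(xs) followed by acc.
        def rec_reverse(xs, acc=()):
            if not xs:
                return list(acc)
            return rec_reverse(xs[1:], (xs[0],) + tuple(acc))

        # Stage (5): recursion removal -- the tail call becomes a loop.
        # Each transformation step preserves input/output behaviour, so no
        # separate verification pass (and no loop invariant) is needed.
        def iter_reverse(xs):
            acc = []
            while xs:
                xs, acc = xs[1:], [xs[0]] + acc
            return acc

        assert iter_reverse([1, 2, 3]) == spec_reverse([1, 2, 3]) == [3, 2, 1]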

    Termination, correctness and relative correctness

    Over the last decade, research in verification and formal methods has been the subject of increased interest with the need of more secure and dependable software. At the heart of software dependability is the concept of software fault, defined in the literature as the adjudged or hypothesized cause of an error. This definition, which lacks precision, presents at least two challenges with regard to using formal methods: (1) adjudging and hypothesizing are highly subjective human endeavors; (2) the concept of error is itself insufficiently defined, since it depends on a detailed characterization of correct system states at each stage of a computation (which is usually unavailable). In the process of defining what a software fault is, the concept of relative correctness, the property of a program to be more-correct than another with respect to a given specification, is discussed. Subsequently, a feature of a program is a fault (for a given specification) only because there exists an alternative to it that would make the program more-correct with respect to the specification. Furthermore, the implications and applications of relative correctness in various software engineering activities are explored. It is then illustrated that in many situations of software testing, fault removal and program repair, testing for relative correctness rather than absolute correctness leads to clearer conclusions and better outcomes. In particular, debugging without testing is introduced: a technique whereby a fault can be removed from a program and the new program proven to be more-correct than the original, all without any testing (and its associated uncertainties/imperfections). Given that there are orders of magnitude more incorrect programs than correct programs in use nowadays, this has the potential to expand the scope of proving methods significantly. Another technique, programming without refining, is also introduced. The most important advantage of program derivation by correctness enhancement is that it captures not only program construction from scratch, but also virtually all activities of software evolution. Given that nowadays most software is developed by evolving existing assets rather than producing new assets from scratch, the paradigm of software evolution by correctness enhancements stands to yield significant gains, if we can make it practical.
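    Under the common formal reading of relative correctness (illustrated here with hypothetical toy programs and a toy specification; the real definition is relational), one program is at least as correct as another when its "competence domain" -- the set of inputs on which it satisfies the specification -- contains the other's:

        def competence_domain(program, spec, domain):
            # Inputs on which the program's output satisfies the spec.
            return {x for x in domain if spec(x, program(x))}

        spec = lambda x, y: y == abs(x)   # specification: output is |input|

        p_old = lambda x: x               # satisfies the spec only for x >= 0
        p_new = lambda x: max(x, -x)      # satisfies it everywhere

        domain = range(-5, 6)
        cd_old = competence_domain(p_old, spec, domain)
        cd_new = competence_domain(p_new, spec, domain)

        # p_new is more-correct than p_old with respect to spec: it is
        # correct wherever p_old is, and on strictly more inputs -- a
        # comparison made from the spec alone, without a testing oracle.
        assert cd_old < cd_new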

    Sensor-based robot deployment algorithms

    In robot deployment problems, the fundamental issue is to optimize a steady state performance measure that depends on the spatial configuration of a group of robots. For such problems, a classical way of designing high-level feedback motion planners is to implement a gradient descent scheme on a suitably chosen objective function. This can lead to computationally expensive algorithms that may not be adaptive to uncertain dynamic environments. We address these challenges by showing that algorithms for a variety of deployment scenarios in uncertain stochastic environments and with noisy sensor measurements can be designed as stochastic gradient descent algorithms, and their convergence properties analyzed via the theory of stochastic approximations. This approach often yields surprisingly simple algorithms that can accommodate complicated objective functions and work without a detailed model of the environment. To illustrate the richness of the framework, we discuss two applications, namely source seeking with realistic stochastic wireless connectivity constraints, and coverage with heterogeneous sensors.
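    As a minimal sketch of this viewpoint (the field, noise model and gains below are illustrative assumptions, not the paper's specific setting), a single robot can seek the maximum of an unknown scalar field from noisy point measurements alone, with convergence governed by the standard stochastic approximation step-size conditions:

        import random

        def noisy_measurement(x, sigma=0.1):
            # The robot can only sample the unknown field through sensor noise.
            return -(x - 3.0) ** 2 + random.gauss(0.0, sigma)

        x, h = 0.0, 0.5                  # initial position, probe offset
        for k in range(1, 2001):
            a = 1.0 / k                  # Robbins-Monro gains: sum a_k diverges,
                                         # sum a_k**2 converges
            # Two-point stochastic estimate of the field gradient.
            g = (noisy_measurement(x + h) - noisy_measurement(x - h)) / (2 * h)
            x += a * g                   # ascend the estimated gradient

        print(f"estimated source location: {x:.2f} (true optimum at 3.0)")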

    A wide spectrum type system for transformation theory

    One of the most difficult tasks a programmer can be confronted with is the migration of a legacy system. Usually, these systems are unstructured, poorly documented and contain complex program logic. The reason for this, in most cases, is an emphasis on raw performance rather than on clean and structured code, as well as a long period of applying quick fixes and enhancements rather than carrying out a proper software reengineering process, including a full redesign during major enhancements. Nowadays, the old programming paradigms are becoming an increasingly serious problem. It has been identified that 90% of the costs of a typical software system arise in the maintenance phase. Many companies are simply too afraid of changing their software infrastructure and prefer to continue with principles like "never touch a running system". These companies experience growing pressure to migrate their legacy systems onto newer platforms, because the maintenance of such systems is expensive and dangerous as the risk of losing vital parts of the source code or its documentation increases drastically over time. The FermaT transformation system has shown the ability to automatically or semi-automatically restructure and abstract legacy code within a special intermediate language called WSL (Wide Spectrum Language). Unfortunately, the current transformation process only supports the migration of assembler, as WSL lacks the ability to handle data types properly. The data structures in assembler are currently translated directly into C data types, which involves many hard-coded conversions based on assumptions. The absence of an adequate type system for WSL has caused several flaws in the whole transformation process and limits its abilities significantly. The main aim of the presented research is to tackle these problems by investigating and formulating how a type system can contribute to a safe and reliable migration of legacy systems. The described research includes the definition of key aspects of type-related problems in the FermaT migration process and how to solve them with a suitable type system approach. Since software migration often includes a change in programming language, the type system for WSL has to be able to support various type system approaches, including the representation of all relevant details to avoid assumptions. This is especially difficult as most programming languages are designed for a special purpose, which means that their possible programming constructs and data types differ significantly. This ranges from languages with simple type systems, whose programs are prone to unintended side effects, to languages with strict type systems, which are constrained in their flexibility. It is important to include as many type-related details as necessary to avoid making assumptions during language-to-language translation. The result of the investigation is a novel multi-layered type system specifically designed to satisfy the needs of WSL for a sophisticated solution without imposing too many limitations on its abilities. The type system has an adjustable expressiveness, able to represent a wide spectrum of typing approaches, ranging from weak typing, which allows direct memory access and down-casting, via very strict typing with a high diversity of data types, to object-oriented typing, which supports encapsulation and data hiding. Looking at the majority of commercially relevant statically typed programming languages, two fundamental properties, type strictness and safety, can be identified. A type system can be either weakly or strongly typed and may or may not allow unsafe features such as direct memory access. Each layer of the Wide Spectrum Type System has a different combination of these properties. The approach also includes special Type System Transformations which can be used to move a given WSL program among these layers. Other emphasised key features are explicit typing and scalability. The whole approach is based on a sound mathematical foundation which assures correctness and integrates seamlessly into the present mathematical definition of WSL. The type system is formally introduced to WSL by constructing an attribute grammar for the language. Type checking and type inference are used to annotate the Abstract Syntax Tree of a given WSL program with type derivations, which can be used to reveal and indicate possible typing errors, or to infer types if the program did not feature explicit type declarations in the first place. Notable in this approach is also the fact that object orientation is introduced to a procedural programming language without the introduction of new semantics. It is shown that object orientation can be introduced just by adjusting type checking rules and adding some syntactical notations. The approach was implemented and tested on two case studies. The thesis describes and discusses both cases in detail and shows how a migration which ignores type systems could accidentally introduce errors due to assumptions made during translation. Both case studies use all important aspects of the approach, including type transformations and object identification. The thesis concludes by summarising the whole work, identifying its limitations, presenting future perspectives and drawing conclusions.
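    As a toy sketch of the kind of annotation the approach performs (the AST encoding and layer names here are hypothetical; WSL's attribute grammar is far richer), a checker can walk an expression tree, decorate each node with a type, and either reject or coerce depending on the active layer:

        def check(node, layer="strict"):
            # Annotate each AST node with a type; behaviour depends on the
            # strictness layer, echoing the adjustable expressiveness idea.
            kind = node["kind"]
            if kind == "int":
                node["type"] = "INT"
            elif kind == "str":
                node["type"] = "STRING"
            elif kind == "add":
                lt = check(node["left"], layer)
                rt = check(node["right"], layer)
                if lt == rt == "INT":
                    node["type"] = "INT"
                elif layer == "weak":
                    # A weakly typed layer coerces instead of rejecting.
                    node["type"] = "STRING"
                else:
                    raise TypeError(f"cannot add {lt} and {rt} in strict layer")
            return node["type"]

        expr = {"kind": "add",
                "left": {"kind": "int", "value": 1},
                "right": {"kind": "str", "value": "a"}}

        check(expr, layer="weak")     # annotates: expr["type"] == "STRING"
        # check(expr, layer="strict") # would raise TypeError -- a revealed flaw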

    Adaptive Robot Deployment Algorithms

    In robot deployment problems, the fundamental issue is to optimize a steady state performance measure that depends on the spatial configuration of a group of robots. For static deployment problems, a classical way of designing high-level feedback motion planners is to implement a gradient descent scheme on a suitably chosen objective function. This can lead to computationally expensive deployment algorithms that may not be adaptive to uncertain dynamic environments. We address this challenge by showing that algorithms for a variety of deployment scenarios in stochastic environments and with noisy sensor measurements can be designed as stochastic gradient descent algorithms, and their convergence properties analyzed via the theory of stochastic approximations. This approach often yields surprisingly simple algorithms that can accommodate complicated objective functions and adapt to slow temporal variations in environmental parameters. To illustrate the richness of the framework, we discuss several applications, including searching for field extrema, deployment with stochastic connectivity constraints, coverage, and vehicle routing scenarios.
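    One concrete instance of this recipe is the coverage scenario, sketched below under an illustrative uniform event distribution: the expected coverage cost E[min_i ||q - p_i||^2] over random event locations q admits a stochastic gradient step that moves only the robot nearest each sampled event, which amounts to online k-means:

        import random

        robots = [(random.random(), random.random()) for _ in range(4)]

        for k in range(1, 5001):
            # Sample an event location (its distribution is unknown to robots).
            q = (random.random(), random.random())
            # The nearest robot takes a stochastic gradient step toward it.
            i = min(range(len(robots)),
                    key=lambda j: (robots[j][0] - q[0])**2 + (robots[j][1] - q[1])**2)
            a = 1.0 / k  # decreasing gain; a small constant gain would instead
                         # track slow temporal variations in the event distribution
            px, py = robots[i]
            robots[i] = (px + a * (q[0] - px), py + a * (q[1] - py))

        print(robots)  # positions spread out to cover the unit square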

    Book Review: Inevitable randomness in discrete mathematics


    Studies on the Security of Selected Advanced Asymmetric Cryptographic Primitives

    The main goal of asymmetric cryptography is to provide confidential communication, which allows two parties to communicate securely even in the presence of adversaries. Ever since its invention in the seventies, asymmetric cryptography has been improved and developed further, and a formal security framework has been established around it. This framework includes different security goals, attack models, and security notions. As progress was made in the field, more advanced asymmetric cryptographic primitives were proposed, with other properties in addition to confidentiality. These new primitives also have their own definitions and notions of security. This thesis consists of two parts, where the first relates to the security of fully homomorphic encryption and related primitives. The second part presents a novel cryptographic primitive, and defines what security goals the primitive should achieve. The first part of the thesis consists of Articles I, II, and III, which all pertain to the security of homomorphic encryption schemes in one respect or another. Article I demonstrates that a particular fully homomorphic encryption scheme is insecure in the sense that an adversary with access only to the public material can recover the secret key. It is also shown that this insecurity mainly stems from the operations necessary to make the scheme fully homomorphic. Article II presents an adaptive key recovery attack on a leveled homomorphic encryption scheme. The scheme in question claimed to withstand precisely such attacks, and was the only scheme of its kind to do so at the time. This part of the thesis culminates with Article III, which is an overview article on the IND-CCA1 security of all acknowledged homomorphic encryption schemes. The second part of the thesis consists of Article IV, which presents Vetted Encryption (VE), a novel asymmetric cryptographic primitive. The primitive is designed to allow a recipient to vet who may send them messages, by setting up a public filter with a public verification key, and providing each vetted sender with their own encryption key. There are three different variants of VE, based on whether the sender is identifiable to the filter and/or the recipient. Security definitions, general constructions and comparisons to already existing cryptographic primitives are provided for all three variants.
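    To make the data flow of Vetted Encryption concrete, here is a deliberately insecure toy sketch with hypothetical names and no real cryptography: it only shows who holds which key and what the public filter can check, not the actual constructions from the thesis.

        import secrets

        class Recipient:
            def __init__(self):
                self._vetted = set()              # recipient's secret state
                self.verification_key = set()     # published to the filter

            def vet(self, sender_id):
                # Issue a per-sender encryption key (a random token here).
                token = secrets.token_hex(8)
                self._vetted.add(token)
                self.verification_key.add(token)  # in the real primitive, the
                                                  # variants differ in whether the
                                                  # filter/recipient can identify
                                                  # the sender from this material
                return token

            def decrypt(self, ct):
                token, msg = ct
                return msg if token in self._vetted else None

        def filter_accepts(verification_key, ct):
            # The public filter drops mail from unvetted senders; in the real
            # primitive it learns nothing about the plaintext (this toy hides
            # nothing and is trivially forgeable -- illustration only).
            token, _ = ct
            return token in verification_key

        alice = Recipient()
        key_bob = alice.vet("bob")
        ct = (key_bob, "hello")                   # toy "encryption"
        assert filter_accepts(alice.verification_key, ct)
        assert alice.decrypt(ct) == "hello"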

    p-adic methods for rational points on curves of genus g ≥ 2

    In 1922, Mordell conjectured that every curve of genus greater than or equal to 2 has only finitely many rational points. This was proved by Faltings in 1983, but before him Chabauty had shown, in 1941, that it holds under the additional hypothesis that the Mordell-Weil rank of the Jacobian variety over Q is strictly less than the genus of the curve. Although this does not prove Mordell's conjecture, these ideas have led to new methods for finding rational points on curves. Besides the theoretical aspects, applications to finding the solutions of certain Diophantine equations are also studied.
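    In standard notation (C a smooth projective curve over Q of genus g ≥ 2 with Jacobian J, and p a prime of good reduction), the Chabauty condition described above can be stated compactly in LaTeX as:

        \[
          r := \operatorname{rank} J(\mathbb{Q}) < g
          \;\Longrightarrow\;
          C(\mathbb{Q}) \subseteq C(\mathbb{Q}_p) \cap \overline{J(\mathbb{Q})},
        \]
        % where the closure of J(Q) is taken p-adically inside J(Q_p);
        % Chabauty proved this intersection is finite, and Coleman later
        % made the statement effective: for p > 2g of good reduction,
        \[
          \#C(\mathbb{Q}) \le \#C(\mathbb{F}_p) + 2g - 2.
        \]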