    Bibliographie

    Adapting to Computer Science

    Although I am not an engineer who adapted himself to computer science but a mathematician who did so, I am familiar enough with the development, concepts, and activities of this new discipline to venture an opinion of what must be adapted to in it. Computer and Information Science is known as Informatics on the European continent. It was born as a distinct discipline barely a generation ago. As a fresh young discipline, it is an effervescent mixture of formal theory, empirical applications, and pragmatic design. Mathematics was just such an effervescent mixture in western culture from the Renaissance to the middle of the twentieth century. It was then that the dynamic effect of high-speed, electronic, general-purpose computers accelerated the generalization of the meaning of the word computation. This caused the early computer science to recruit not only mathematicians but also philosophers (especially logicians), linguists, psychologists, even economists, as well as physicists and a variety of engineers. Thus we are, perforce, discussing the changes and adaptations of individuals to disciplines, and especially of people in one discipline to another. As we all know, the very word discipline indicates that there is an initial special effort by an individual to force himself or herself to change. The change involves adaptation of one's perceptions to a special way of viewing certain aspects of the world, and also of one's behavior in order to produce special results. For example, we are familiar with the enormous prosthetic devices that physicists have added to their natural sensors and perceptors in order to perceive minute particles and to smash atoms in order to do so (at, we might add, enormous expense, and enormous stretching of computational activity). We are also familiar with the enormously intricate prosthetic devices mathematicians added to their computational effectors: the general symbol manipulators called computers.

    Limits of Diagonalization and the Polynomial Hierarchy

    Determining the computational complexity of problems is a large area of study. It seeks to separate these problems into ones with efficient solutions and those with inefficient solutions. Of course, the stratification is much finer-grained than this. Of special interest are two classes of problems: P and NP. These have been of much interest to complexity theorists for quite some time, because both contain many instances of important real-world problems, and finding efficient solutions for those in NP would be beneficial for computing applications. Yet with all this attention, there are still important unanswered questions about the two classes. It is known that P ⊆ NP; however, it is still unknown whether P = NP or P ⊂ NP. Before we discuss why this problem is so crucial to complexity theory, an overview of P, NP, and coNP is necessary. The class P is a model of the notion of "efficiently solvable", and thus contains all languages (problems) that are decidable in deterministic polynomial time. This means that any language in P has a deterministic Turing machine (algorithm) that will either accept or reject any input in n^k steps, where n is the length of the input string and k is a constant. The class NP contains all languages that are decidable in nondeterministic polynomial time. A nondeterministic Turing machine is one that is allowed to guess the correct path of computation, and so may reach an accept or reject state faster than if it were forced to run deterministically. It is unknown whether NP is closed under complementation because of this nondeterminism. It is quite easy to show that a class of deterministically solvable languages (such as P) is closed under complementation: we simply swap the accept and reject states. This method is not viable for a nondeterministic machine, since swapping the accept and reject states results in a machine that computes a completely different language. Thus the class coNP is defined as containing the complement of every language in NP. In the rest of this paper we present structural definitions of P and NP, as well as example languages from each. These structural definitions give insight into the arrangement of the polynomial hierarchy, which is discussed in section 3. A diagonalization proof is presented in section 4, and an explanation of the general usage of diagonalization follows. In section 5, universal languages are defined and an important result from Kozen is given. In the final section, the limits of diagonalization as they pertain to P and NP are outlined, as well as the same limits for relativized classes.
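
    For orientation, the classes sketched in this abstract have compact textbook definitions; the following LaTeX formulation is the standard one, supplied here for reference rather than drawn from the paper itself:

    \begin{align*}
      \mathsf{P}    &= \bigcup_{k \geq 1} \mathsf{DTIME}(n^k), \\
      \mathsf{NP}   &= \bigcup_{k \geq 1} \mathsf{NTIME}(n^k), \\
      \mathsf{coNP} &= \{\, \overline{L} : L \in \mathsf{NP} \,\}.
    \end{align*}

    Under these definitions P ⊆ NP ∩ coNP, since P is closed under complementation, while both P = NP and NP = coNP remain open questions.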

    Complexity measures for object-oriented conceptual models of an application domain.

    According to Norman Fenton, little work has been done on measuring the complexity of the problems underlying software development. Nonetheless, it is believed that this attribute has a significant impact on software quality and development effort. A substantial portion of the underlying problems are captured in the conceptual model of the application domain. Based on previous work on conceptual modelling of application domains, the attribute 'complexity of a conceptual model' is formally defined in this paper using elementary concepts from measure theory. Moreover, a number of complexity measures are defined and validated against this complexity definition. It is argued and demonstrated that these problem domain measures are part of a solution to the problem outlined by Norman Fenton.
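
    The abstract does not reproduce the measures themselves; as a purely illustrative sketch (the representation and the count-based measure below are invented for this note, not taken from the paper), a conceptual model can be treated as a set of object types plus associations, with a complexity measure defined over it:

    from dataclasses import dataclass, field

    @dataclass
    class ConceptualModel:
        # Object types and binary associations of an application-domain model.
        object_types: set = field(default_factory=set)
        associations: set = field(default_factory=set)  # pairs of type names

        def complexity(self) -> int:
            # Naive count-based measure: total number of model elements.
            return len(self.object_types) + len(self.associations)

    model = ConceptualModel(
        object_types={"Customer", "Order", "Product"},
        associations={("Customer", "Order"), ("Order", "Product")},
    )
    print(model.complexity())  # 5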

    Towards Efficient Hardware Implementation of Elliptic and Hyperelliptic Curve Cryptography

    Implementation of elliptic and hyperelliptic curve cryptographic algorithms has been the focus of a great deal of recent research directed at increasing efficiency. Elliptic curve cryptography (ECC) was introduced independently by Koblitz and Miller in the 1980s. Hyperelliptic curve cryptography (HECC), a generalization of the elliptic curve case, allows a decreasing field size as the genus increases. The work presented in this thesis examines the problems created by limited area, power, and computation time when elliptic and hyperelliptic curves are integrated into constrained devices such as wireless sensor networks (WSNs) and smart cards. The lack of a battery in wireless sensor networks limits the processing power of these devices, but they still require security. It was widely believed that devices with such constrained resources cannot incorporate a strong HECC processor for performing cryptographic operations such as elliptic curve scalar multiplication (ECSM) or hyperelliptic curve divisor multiplication (HCDM). However, the work presented in this thesis has demonstrated the feasibility of integrating an HECC processor into such devices through the use of the proposed architecture synthesis and optimization techniques for several inversion-free algorithms. The goal of this work is to develop a hardware implementation of binary elliptic and hyperelliptic curves. The focus is on the modeling of three factors: register allocation, operation scheduling, and storage binding. These factors were then integrated into architecture synthesis and optimization techniques in order to determine the best overall implementation suitable for constrained devices. The main purpose of the optimization is to reduce area and power. Through analysis of the architecture optimization techniques for both datapath and control unit synthesis, the number of registers was reduced by an average of 30%. The use of the proposed efficient explicit formulas for the different algorithms also enabled a reduction in the number of read/write operations from/to the register file, which reduces the processing power consumption. As a result, an overall HECC processor requires from 1843 to 3595 slices on a Xilinx XC4VLX200, and the total computation time is limited to between 10.08 ms and 15.82 ms at a maximum frequency of 50 MHz for a variety of inversion-free coordinate systems in hyperelliptic curves. The value of the new model has been demonstrated with respect to its implementation in elliptic and hyperelliptic curve cryptographic algorithms, through both synthesis and simulations. In summary, a framework has been provided for consideration of interactions with synthesis and optimization through architecture modeling for constrained environments. Insights have also been presented with respect to improving the design process for cryptographic algorithms through datapath and control unit analysis.
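
    For context, the scalar multiplication (ECSM) mentioned above is, in its textbook form, a double-and-add loop. The Python sketch below is a generic affine-coordinate software illustration with invented toy parameters; it is not the thesis's inversion-free hardware formulation, which uses projective-style coordinates precisely to avoid the modular inversions this version performs:

    # Textbook double-and-add scalar multiplication on y^2 = x^3 + a*x + b (mod p).
    # Affine coordinates, so each step needs a modular inversion; the thesis's
    # inversion-free coordinate systems exist to avoid exactly this cost.

    def ec_add(P, Q, a, p):
        """Add two affine points; None represents the point at infinity."""
        if P is None:
            return Q
        if Q is None:
            return P
        x1, y1 = P
        x2, y2 = Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None  # P + (-P) = infinity
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p  # chord slope
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def ecsm(k, P, a, p):
        """Left-to-right double-and-add: returns k*P."""
        R = None  # accumulator starts at the point at infinity
        for bit in bin(k)[2:]:
            R = ec_add(R, R, a, p)  # double
            if bit == "1":
                R = ec_add(R, P, a, p)  # add
        return R

    # Toy curve (parameters invented for illustration): y^2 = x^3 + 2x + 3 over GF(97).
    print(ecsm(13, (3, 6), 2, 97))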

    Poincaré's philosophy of mathematics

    The primary concern of this thesis is to investigate the explicit philosophy of mathematics in the work of Henri Poincaré. In particular, I argue that there is a well-founded doctrine which grounds both Poincaré's negative thesis, which is based on constructivist sentiments, and his positive thesis, via which he retains a classical conception of the mathematical continuum. The doctrine which does so is one founded on the Kantian theory of synthetic a priori intuition. I begin, therefore, by outlining Kant's theory of the synthetic a priori, especially as it applies to mathematics. Then, in the main body of the thesis, I explain how the various central aspects of Poincaré's philosophy of mathematics - e.g. his theory of induction, his theory of the continuum, his views on impredicativity, and his theory of meaning - must, in general, be seen as an adaptation of Kant's position. My conclusion is that not only is there a well-founded philosophical core to Poincaré's philosophy, but also that such a core provides a viable alternative in contemporary debates in the philosophy of mathematics. That is, Poincaré's theory, which is secured by his doctrine of a priori intuitions, and which describes a position in between the two extremes of an "anti-realist" strict constructivism and a "realist" axiomatic set theory, may indeed be true.

    Towards the automation of mathematical reasoning

    On diagonal argument, Russell absurdities and an uncountable notion of lingua characterica

    There is an interesting connection between the cardinality of a language and the distinction of lingua characterica from calculus ratiocinator. Calculus-type languages have only a countable number of sentences, and only a single semantic valuation per sentence. By contrast, some of the sentences of a lingua have available an uncountable number of semantic valuations. Thus, the lingua-type of language appears to have a greater degree of semantic universality than that of a calculus. It is suggested that the present notion of lingua provides a platform for a theory of ambiguity, whereby single sentences may have multiply - indeed, uncountably - many semantic valuations. It is further suggested that this might lead to a pacification of paradox. This thesis involves Peter Aczel's notion of a universal syntax, Russell's question, Keith Simmons' theory of diagonal argument, Curry's paradox, and a 'Leibnizian' notion of language.
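
    The cardinality contrast driving the abstract can be stated in one line; the following sketch in standard set-theoretic notation is a paraphrase supplied here, not the author's own formulation (Σ is a finite alphabet, and V(φ) denotes the set of valuations available to a sentence φ):

    \[
      |\Sigma^{*}| = \aleph_{0},
      \qquad\text{whereas}\qquad
      |V(\varphi)| \leq 2^{\aleph_{0}}.
    \]

    That is, a calculus-type language pairs each of its countably many sentences with exactly one valuation, while a lingua-type language may attach to a single sentence a set of valuations as large as the continuum.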