10 research outputs found

    Independence of the Grossone-Based Infinity Methodology from Non-standard Analysis and Comments upon Logical Fallacies in Some Texts Asserting the Opposite

    This commentary considers non-standard analysis and a recently introduced computational methodology based on the notion of ① (a symbol called grossone). The latter approach was developed to allow one to work with infinities and infinitesimals numerically, in a unique computational framework, in all situations requiring these notions. Non-standard analysis is a classical, purely symbolic technique that works with ultrafilters, external and internal sets, standard and non-standard numbers, etc. The ①-based methodology, in its turn, does not use any of these notions; it proposes a more physical treatment of mathematical objects, separating the objects from the tools used to study them. It both offers the possibility to create new numerical methods using infinities and infinitesimals in floating-point computations and allows one to study certain mathematical objects dealing with infinity more accurately than is done traditionally. In these notes, we explain that even though both methodologies deal with infinities and infinitesimals, they are independent and represent two different philosophies of mathematics that are not in conflict. It is proved that the texts [Flunks; Gutman and Kutateladze, 2008; Kutateladze, 2011] asserting that the ①-based methodology is a part of non-standard analysis unfortunately contain several logical fallacies. Their attempt to prove that the ①-based methodology is a part of non-standard analysis is similar to trying to show that constructivism can be reduced to traditional mathematics. Comment: 19 pages, 1 Table

    Numerical infinities and infinitesimals: Methodology, applications, and repercussions on two Hilbert problems

    In this survey, a recent computational methodology paying special attention to the separation of mathematical objects from the numeral systems involved in their representation is described. It has been introduced to allow one to work with infinities and infinitesimals numerically, in a unique computational framework, in all situations requiring these notions. The methodology does not contradict Cantor's or non-standard analysis views and is based on Euclid's Common Notion no. 5, "The whole is greater than the part", applied to all quantities (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). The methodology uses a computational device called the Infinity Computer (patented in the USA and the EU) working numerically (recall that traditional theories work with infinities and infinitesimals only symbolically) with infinite and infinitesimal numbers that can be written in a positional numeral system with an infinite radix. It is argued that the numeral systems involved in computations limit our capabilities to compute and lead to ambiguities in theoretical assertions as well. The introduced methodology makes it possible to use the same numeral system for measuring infinite sets and for working with divergent series, probability, fractals, optimization problems, numerical differentiation, ODEs, etc. (recall that traditionally different numerals, such as the lemniscate ∞, aleph-zero ℵ₀, etc., are used in different situations related to infinity). Numerous numerical examples and theoretical illustrations are given. The accuracy of the achieved results is continuously compared with that obtained by traditional tools used to work with infinities and infinitesimals. In particular, it is shown that the new approach allows one to observe mathematical objects involved in the Continuum Hypothesis and the Riemann zeta function with higher accuracy than is possible with traditional tools. It is stressed that the hardness of both problems is not related to their nature but is a consequence of the weakness of the traditional numeral systems used to study them. It is shown that the introduced methodology and numeral system change our perception of the mathematical objects studied in these two problems.
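    The positional records with infinite radix mentioned above can be sketched in a few lines: a number is stored as a finite sum of grossone powers with finite coefficients. The toy implementation below is purely illustrative (it is not the patented Infinity Computer, and all names are invented for this sketch); it shows how such records add and multiply, and how the even and odd natural numbers, with ①/2 elements each, together yield exactly ① elements.

```python
# Toy model of positional records with infinite radix. Purely
# illustrative: this is NOT the patented Infinity Computer; the class and
# method names are invented for this sketch. A number is stored as
# {exponent: coefficient}, i.e. a finite sum c1*G^p1 + c2*G^p2 + ...,
# where G stands for grossone: exponent 0 is the finite part, positive
# exponents are infinite parts, negative exponents are infinitesimals.

class GrossNum:
    def __init__(self, parts):
        # Drop zero coefficients so equal numbers have equal records.
        self.parts = {p: c for p, c in parts.items() if c != 0}

    def __add__(self, other):
        out = dict(self.parts)
        for p, c in other.parts.items():
            out[p] = out.get(p, 0) + c
        return GrossNum(out)

    def __mul__(self, other):
        # G^p1 * G^p2 = G^(p1 + p2); coefficients multiply as usual.
        out = {}
        for p1, c1 in self.parts.items():
            for p2, c2 in other.parts.items():
                out[p1 + p2] = out.get(p1 + p2, 0) + c1 * c2
        return GrossNum(out)

    def __repr__(self):
        terms = sorted(self.parts.items(), reverse=True)
        return " + ".join(f"{c}*G^{p}" for p, c in terms) or "0"

# Euclid's Common Notion no. 5 in action: the even and the odd natural
# numbers have G/2 elements each, and together give exactly G elements.
evens, odds = GrossNum({1: 0.5}), GrossNum({1: 0.5})
print(evens + odds)                           # 1.0*G^1

# An infinitesimal times an infinite number can be finite: G^-1 * G = 1.
print(GrossNum({-1: 1}) * GrossNum({1: 1}))   # 1*G^0
```

    Because the arithmetic is plain coefficient bookkeeping, the same record format handles finite, infinite, and infinitesimal parts uniformly, which is the point the survey stresses about working with a single numeral system.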

    Design and Optimization of Micro-Machined Sierpinski Carpet Fractal Antenna Using Ant Lion Optimization

    This study investigates an optimized Sierpinski carpet fractal patch antenna and explores the possibility of integrating the proposed design with monolithic microwave integrated circuits. The optimization has been performed using the ant lion optimization algorithm to achieve the required operating frequency and impedance matching. Further, surface-wave excitation in the high-index substrates used for the antenna design degrades the antenna's performance. Therefore, a micro-machining process has been adopted to overcome this limitation. Micro-machining creates an air cavity underneath the patch, providing a low-index environment for the patch antenna, which markedly improves the performance parameters while preserving compatibility with monolithic microwave integrated circuits. The design shows multiple resonance frequencies in the X-band and Ku-band. The proposed micro-machined design resonates at 7.9 GHz, 9.6 GHz, 13.6 GHz, and 19 GHz with a maximum gain of 6 dBi.
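    The Sierpinski carpet geometry underlying such a patch can be generated with a short recursion: at each iteration the centre ninth of every remaining square is removed, becoming a slot in the metallisation. A minimal sketch, assuming a simple 0/1 cell grid and ignoring all electromagnetic details of the actual antenna (substrate, feed, cavity):

```python
# Illustrative sketch of the Sierpinski carpet pattern behind the patch
# geometry (0/1 cell grid only; substrate, feed and cavity details of the
# actual micro-machined antenna are not modelled here).

def sierpinski_carpet(order):
    """Return a 3^order x 3^order grid; 1 = metal kept, 0 = slot removed."""
    size = 3 ** order
    grid = [[1] * size for _ in range(size)]
    for r in range(size):
        for c in range(size):
            x, y = r, c
            while x > 0 or y > 0:
                # A cell is a slot if, at any scale, it falls in the
                # centre ninth of its 3x3 block.
                if x % 3 == 1 and y % 3 == 1:
                    grid[r][c] = 0
                    break
                x, y = x // 3, y // 3
    return grid

# Each iteration keeps 8 of 9 sub-squares, so order k keeps 8^k cells.
carpet = sierpinski_carpet(2)
print(sum(map(sum, carpet)))   # 64
```

    The self-similar slot pattern is what produces the multiple resonances reported for the design; the optimizer then tunes the physical dimensions, not the topology.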

    Some paradoxes of infinity revisited

    In this article, some classical paradoxes of infinity such as Galileo's paradox, Hilbert's paradox of the Grand Hotel, Thomson's lamp paradox, and the rectangle paradox of Torricelli are considered. In addition, three paradoxes regarding divergent series and a new paradox dealing with multiplication of elements of an infinite set are also described. It is shown that the surprising counting system of an Amazonian tribe, the Pirahã, working with only three numerals (one, two, many), can help us to change our perception of these paradoxes. A recently introduced methodology allowing one to work with finite, infinite, and infinitesimal numbers in a unique computational framework, not only theoretically but also numerically, is briefly described. This methodology is actively used nowadays in numerous applications in pure and applied mathematics and computer science, as well as in teaching. It is shown in the article that this methodology also allows one to consider the paradoxes listed above in a new constructive light.

    Windows on the infinite: constructing meanings in a Logo-based microworld.

    This thesis focuses on how people think about the infinite. A review of both the historical and psychological/educational literature reveals a complexity which sharpens the research questions and informs the methodology. Furthermore, the areas of mathematics where infinity occurs are those that have traditionally been presented to students mainly from an algebraic/symbolic perspective, which has tended to make it difficult to link formal and intuitive knowledge. The challenge is to create situations in which infinity can become more accessible. My theoretical approach follows the constructionist paradigm, adopting the position that the construction of meanings involves the use of representations; that representations are tools for understanding; and that the learning of a concept is facilitated when there are more opportunities for constructing and interacting with external representations of the concept, which are as diverse as possible. Based on this premise, I built a computational set of open tools (a microworld) which could simultaneously provide its users with insights into a range of infinity-related ideas and offer the researcher a window into the users' thinking about the infinite. The microworld provided a means for students to construct and explore different types of representations (symbolic, graphical and numerical) of infinite processes via programming activities. The processes studied were infinite sequences and the construction of fractals. The corpus of data is based on case studies of 8 individuals, whose ages ranged from 14 to the mid-thirties, interacting in pairs with the microworld. These case studies served as the basis for an analysis of the ways in which the tools of the microworld structured, and were structured by, the activities. The findings indicate that the environment and its tools shaped students' understandings of the infinite in rich ways, allowing them to discriminate subtle process-oriented features of infinite processes, and permitted the students to deal with the complexity of the infinite by assisting them in coordinating the different epistemological elements present. On a theoretical level, the thesis elaborates and refines the notion of situated abstraction and introduces the idea of "situated proof".
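    The kind of numerical window on an infinite process that such a microworld offers can be illustrated in a few lines. The sketch below (Python rather than Logo, and not taken from the thesis's actual tools) tabulates the partial sums of 1/2 + 1/4 + 1/8 + ..., letting a learner watch the process approach, without ever reaching, its limit of 1.

```python
# Illustrative sketch (not the thesis's Logo microworld): a numerical
# representation of the infinite process 1/2 + 1/4 + 1/8 + ...

def partial_sums(n):
    """Return the first n partial sums of sum over k >= 1 of (1/2)^k."""
    total, sums = 0.0, []
    for k in range(1, n + 1):
        total += 0.5 ** k
        sums.append(total)
    return sums

# The process gets ever closer to 1, but no finite step reaches it.
print(partial_sums(5))   # [0.5, 0.75, 0.875, 0.9375, 0.96875]
```

    Juxtaposing such a numerical representation with the symbolic limit and a graphical plot of the terms is the kind of multi-representation activity the thesis describes.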

    Computation of higher order Lie derivatives on the Infinity Computer

    In this paper, we deal with the computation of Lie derivatives, which are required, for example, in some numerical methods for the solution of differential equations. One common way of computing them is symbolic computation. Computer algebra software, however, may fail if the function is complicated, and cannot be used at all if an explicit formulation of the function is not available and we have only an algorithm for its computation. An alternative way to address the problem is automatic differentiation. In this case, we only need an implementation, in a programming language, of the algorithm that evaluates the function in terms of its analytic expression; but this cannot be used either if we have only a compiled version of the function. In this paper, we present a novel approach, based on the Infinity Computer arithmetic, for calculating the Lie derivative of a function even in the case where its analytical expression is not available. A comparison with symbolic and automatic differentiation shows the potential of the proposed technique.
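    For context, the first-order Lie derivative of a scalar field h along a vector field f is the directional derivative L_f h(x) = ∇h(x)·f(x), which forward-mode automatic differentiation obtains by seeding the derivative parts with f(x). The sketch below uses ordinary dual numbers over floats as a stand-in for the infinitesimal-based arithmetic the paper builds on (all names are illustrative; this is not the paper's Infinity Computer implementation).

```python
# Sketch: first-order Lie derivative via forward-mode automatic
# differentiation with dual numbers. A plain-float stand-in, not the
# paper's Infinity Computer arithmetic; all names are illustrative.

class Dual:
    """Number of the form val + der * eps, with eps^2 = 0."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def lie_derivative(h, f, x):
    """L_f h(x) = grad(h)(x) . f(x): seed the dual parts with f(x)."""
    fx = f(x)
    return h([Dual(xi, fi) for xi, fi in zip(x, fx)]).der

# Example: f(x, y) = (y, -x) generates rotations; h(x, y) = x^2 + y^2 is
# invariant along the flow of f, so L_f h vanishes everywhere.
f = lambda x: [x[1], -x[0]]
h = lambda x: x[0] * x[0] + x[1] * x[1]
print(lie_derivative(h, f, [3.0, 4.0]))   # 0.0
```

    Higher-order Lie derivatives, the paper's actual subject, iterate this construction (L_f^{k+1} h = L_f(L_f^k h)) and need truncated Taylor-series or infinitesimal arithmetic rather than first-order duals.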

    Logic and intuition in architectural modelling: philosophy of mathematics for computational design

    This dissertation investigates the relationship between the shift in the focus of architectural modelling from object to system and philosophical shifts in the history of mathematics that are relevant to that change. Particularly in the wake of the adoption of digital computation, design model spaces are more complex, multidimensional, arguably more logical, less intuitive spaces to navigate, less accessible to perception and visual comprehension. Such spatial issues were encountered much earlier in mathematics than in architectural modelling, with the growth of analytical geometry and the transition from classical axiomatic proofs in geometry as the basis of mathematics to analysis as the underpinning of geometry. Can the computational design modeller learn from the changing modern history, philosophy and psychology of mathematics about the construction and navigation of computational geometrical architectural system model space? The research is conducted through a review of recent architectural project examples and reference to three more detailed architectural modelling case studies. The spatial questions these examples and case studies raise are examined in the context of selected historical writing in the history, philosophy and psychology of mathematics and space. This leads to conclusions about changes in the relationship of architecture and mathematics, and reflections on the opportunities and limitations for architectural system models using computational geometry in the light of this historical survey. This line of questioning was motivated as a response to the experience of constructing digital associative geometry models and encountering the apparent limits of their flexibility as the graph of dependencies grew and the messiness of the digital modelling space increased.
    The questions were inspired particularly by working on the Narthex model for the Sagrada Família church, which extends to many tens of thousands of relationships and constraints, and which was modelled and repeatedly partially remodelled over a very long period. This experience led to the realisation that the limitations of the model were not necessarily the consequence of poor logical schema definition, but could be inevitable limitations of the geometry as defined, regardless of the means of defining it: the 'shape' of the multidimensional space being created. This led to more fundamental questions about the nature of space, its relationship to geometry, and the extent to which the latter can be considered simply as an operational and notational system. This dissertation offers a purely inductive journey, presenting evidence through very selective examples in architecture, architectural modelling and the philosophy of mathematics. The journey starts with questions about the tendency of the model space to break out and exhibit unpredictable, not always desirable, behaviour; whether geometrical construction offers opportunities to resolve these questions is not conclusively answered. Many very productive questions about computational architectural modelling are raised in the process of looking for answers.

    The Significance of Evidence-based Reasoning for Mathematics, Mathematics Education, Philosophy and the Natural Sciences

    In this multi-disciplinary investigation we show how an evidence-based perspective of quantification, in terms of algorithmic verifiability and algorithmic computability, admits evidence-based definitions of well-definedness and effective computability, which yield two unarguably constructive interpretations of the first-order Peano Arithmetic PA, over the structure N of the natural numbers, that are complementary, not contradictory. The first yields the weak, standard, interpretation of PA over N, which is well-defined with respect to assignments of algorithmically verifiable Tarskian truth values to the formulas of PA under the interpretation. The second yields a strong, finitary, interpretation of PA over N, which is well-defined with respect to assignments of algorithmically computable Tarskian truth values to the formulas of PA under the interpretation. We situate our investigation within a broad analysis of quantification vis-à-vis:
    * Hilbert's epsilon-calculus
    * Gödel's omega-consistency
    * The Law of the Excluded Middle
    * Hilbert's omega-Rule
    * An Algorithmic omega-Rule
    * Gentzen's Rule of Infinite Induction
    * Rosser's Rule C
    * Markov's Principle
    * The Church-Turing Thesis
    * Aristotle's particularisation
    * Wittgenstein's perspective of constructive mathematics
    * An evidence-based perspective of quantification
    By showing how these are formally inter-related, we highlight the fragility of both the persisting, theistic, classical/Platonic interpretation of quantification grounded in Hilbert's epsilon-calculus, and the persisting, atheistic, constructive/Intuitionistic interpretation of quantification rooted in Brouwer's belief that the Law of the Excluded Middle is non-finitary. We then consider some consequences, for mathematics, mathematics education, philosophy, and the natural sciences, of an agnostic, evidence-based, finitary interpretation of quantification that challenges classical paradigms in all these disciplines.

    The Significance of Evidence-based Reasoning in Mathematics, Mathematics Education, Philosophy, and the Natural Sciences

    In this multi-disciplinary investigation we show how an evidence-based perspective of quantification, in terms of algorithmic verifiability and algorithmic computability, admits evidence-based definitions of well-definedness and effective computability, which yield two unarguably constructive interpretations of the first-order Peano Arithmetic PA, over the structure N of the natural numbers, that are complementary, not contradictory. The first yields the weak, standard, interpretation of PA over N, which is well-defined with respect to assignments of algorithmically verifiable Tarskian truth values to the formulas of PA under the interpretation. The second yields a strong, finitary, interpretation of PA over N, which is well-defined with respect to assignments of algorithmically computable Tarskian truth values to the formulas of PA under the interpretation. We situate our investigation within a broad analysis of quantification vis-à-vis:
    * Hilbert's epsilon-calculus
    * Gödel's omega-consistency
    * The Law of the Excluded Middle
    * Hilbert's omega-Rule
    * An Algorithmic omega-Rule
    * Gentzen's Rule of Infinite Induction
    * Rosser's Rule C
    * Markov's Principle
    * The Church-Turing Thesis
    * Aristotle's particularisation
    * Wittgenstein's perspective of constructive mathematics
    * An evidence-based perspective of quantification
    By showing how these are formally inter-related, we highlight the fragility of both the persisting, theistic, classical/Platonic interpretation of quantification grounded in Hilbert's epsilon-calculus, and the persisting, atheistic, constructive/Intuitionistic interpretation of quantification rooted in Brouwer's belief that the Law of the Excluded Middle is non-finitary. We then consider some consequences, for mathematics, mathematics education, philosophy, and the natural sciences, of an agnostic, evidence-based, finitary interpretation of quantification that challenges classical paradigms in all these disciplines.