13 research outputs found
Pseudo-factorials, elliptic functions, and continued fractions
This study presents miscellaneous properties of pseudo-factorials, which are
numbers whose recurrence relation is a twisted form of that of usual
factorials. These numbers are associated with special elliptic functions, most
notably, a Dixonian and a Weierstrass function, which parametrize the Fermat
cubic curve and are relative to a hexagonal lattice. A continued fraction
expansion of the ordinary generating function of pseudo-factorials, first
discovered empirically, is established here. This article also provides a
characterization of the associated orthogonal polynomials, which appear to form
a new family of "elliptic polynomials", as well as various other properties of
pseudo-factorials, including a hexagonal lattice sum expression and elementary
congruences.

Comment: 24 pages; with correction of typos and minor revision. To appear in The Ramanujan Journal.
Some exactly solvable models of urn process theory
We establish a fundamental isomorphism between discrete-time balanced urn processes and certain ordinary differential systems, which are nonlinear, autonomous, and of a simple monomial form. As a consequence, all balanced urn processes with balls of two colours are proved to be analytically solvable in finite terms. The corresponding generating functions are expressed in terms of certain Abelian integrals over curves of the Fermat type (which are also hypergeometric functions), together with their inverses. A consequence is the unification of the analyses of many classical models, including those related to the coupon collector's problem, particle transfer (the Ehrenfest model), Friedman's "adverse campaign" and Pólya's contagion model, as well as the OK Corral model (a basic case of Lanchester's theory of conflicts). In each case, it is possible to quantify very precisely the probable composition of the urn at any discrete instant. We study here in detail "semi-sacrificial" urns, for which the following are obtained: a Gaussian limiting distribution with speed-of-convergence estimates, as well as a characterization of the large and extreme large deviation regimes. We also work out explicitly the case of 2-dimensional triangular models, where local limit laws of the stable type are obtained. A few models of dimension three or greater, e.g., "autistic" (generalized Pólya), cyclic chambers (generalized Ehrenfest), generalized coupon-collector, and triangular urns, are also shown to be exactly solvable.
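The Ehrenfest particle-transfer model mentioned above is easy to make concrete. The following is a minimal simulation sketch (the function names and parameter choices are illustrative, not taken from the paper): n balls of two colours, and at each step one ball drawn uniformly at random is recoloured, corresponding to the balanced replacement matrix ((-1,1),(1,-1)). Over many steps, the composition concentrates near n/2, in line with the binomial(n, 1/2) stationary profile.

```python
import random

def ehrenfest_step(x, n):
    """One draw: pick a ball uniformly at random; if it is of the first
    colour (probability x/n), it switches colour, and vice versa."""
    return x - 1 if random.random() < x / n else x + 1

def simulate(n=20, steps=10_000, seed=1):
    """Run the chain from the all-first-colour state and tally how often
    each composition x (number of first-colour balls) is visited."""
    random.seed(seed)
    x = n
    counts = [0] * (n + 1)
    for _ in range(steps):
        x = ehrenfest_step(x, n)
        counts[x] += 1
    return counts

counts = simulate()
peak = max(range(len(counts)), key=counts.__getitem__)
print(peak)  # the most-visited composition lies near n/2
```

This only samples the process; the paper's point is that such urns can be solved exactly, so the full distribution at any discrete instant is available in closed form rather than by simulation.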
Introduction to the Gopakumar-Vafa Large N Duality
Gopakumar-Vafa large N duality is a correspondence between Chern-Simons
invariants of a link in a 3-manifold and relative Gromov-Witten invariants of a
6-dimensional symplectic manifold relative to a Lagrangian submanifold. We
address the correspondence between the Chern-Simons free energy of S^3 with no
link and the Gromov-Witten invariant of the resolved conifold in great detail.
This case avoids mathematical difficulties in formulating a definition of
relative Gromov-Witten invariants, but includes all of the important ideas.
There is a vast amount of background material related to this duality. We make
a point of collecting all of the background material required to check this
duality in the case of the 3-sphere, and we have tried to present the material
in a way complementary to the existing literature. This paper contains a large
section on Gromov-Witten theory and a large section on quantum invariants of
3-manifolds. It also includes some physical motivation, but for the most part
it avoids physical terminology.

Comment: This is the version published by Geometry & Topology Monographs on 21 September 200
Learning and understanding in abstract algebra
Students' learning and understanding in an undergraduate abstract algebra class were described using Tall and Vinner's notion of a concept image, which is the entire cognitive structure associated with a concept, including examples, nonexamples, definitions, representations, and results. Prominent features and components of students' concept images were identified for concepts of elementary group theory, including group, subgroup, isomorphism, coset, and quotient group.
Analysis of interviews and written work from five students provided insight into their concept images, revealing ways they understood the concepts. Because many issues were related to students' uses of language and notation, the analysis was essentially semiotic, using the linguistic, notational, and representational distinctions that the students made to infer their conceptual understandings and the distinctions they were and were not making among concepts. Attempting to explain and synthesize the results of the analysis became a process of theory generation, from which two themes emerged: making distinctions and managing abstraction.
The students often made nonstandard linguistic and notational distinctions. For example, some students used the term coset to describe not only individual cosets but also the set of all cosets. This kind of understanding was characterized as being immersed in the process of generating all of the cosets of a subgroup, a characterization that described and explained several instances of the phenomenon of failing to distinguish between a set and its elements.
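The distinction the students blurred, between an individual coset and the set of all cosets, can be made concrete in a few lines. The sketch below (helper name and example group chosen for illustration, not drawn from the study) uses Z6 with the subgroup {0, 3}: each coset is a set of elements, while the quotient is a set of those sets.

```python
def cosets(group, subgroup, op):
    """Return the set of (left) cosets of `subgroup` in `group`.
    Each coset is itself a set of elements; the returned value is
    a set of sets, which is a different kind of object."""
    return {frozenset(op(g, h) for h in subgroup) for g in group}

Z6 = range(6)
H = {0, 3}
add_mod6 = lambda a, b: (a + b) % 6

all_cosets = cosets(Z6, H, add_mod6)
# {1, 4} is one coset; the quotient Z6/H is the three-element
# set {{0,3}, {1,4}, {2,5}} -- a set of cosets, not a coset.
print(sorted(sorted(c) for c in all_cosets))
```

Running this prints [[0, 3], [1, 4], [2, 5]], making visible the two levels (elements, cosets, set of cosets) that the students' single word "coset" conflated.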
The students managed their relationships with abstract ideas through metaphor, process and object conceptions, and proficiency with concepts, examples, and representations. For example, some students understood a particular group by relying upon its operation table, which they sometimes took to be the group itself rather than a representation. The operation table supported an object conception even when a student had a fragile understanding of the processes used in forming the group.
Making distinctions and managing abstraction are elaborated as fundamental characteristics of mathematical activity. Mathematics thereby becomes a dialectic between precision and abstraction, between logic and intuition, which has important implications for teaching, teacher education, and research.
Probabilistic Arguments in Mathematics
This thesis addresses a question that emerges naturally from some observations about contemporary mathematical practice. Firstly, mathematicians always demand proof for the acceptance of new results. Secondly, the ability of mathematicians to tell if a discourse gives expression to a proof is less than perfect, and the computers they use are subject to a variety of hardware and software failures. So false results are sometimes accepted, despite insistence on proof. Thirdly, over the past few decades, researchers have also developed a variety of methods that are probabilistic in nature. Even if carried out perfectly, these procedures only yield a conclusion that is very likely to be true. In some cases, these chances of error are precisely specifiable and can be made as small as desired. The likelihood of an error arising from the inherently uncertain nature of these probabilistic algorithms can therefore be made vanishingly small in comparison to the chances of an error arising when implementing an equivalent deductive algorithm. Moreover, the structure of probabilistic algorithms tends to minimise these Implementation Errors too. So overall, probabilistic methods are sometimes more reliable than deductive ones. This invites the question: ‘Are mathematicians rational in continuing to reject these probabilistic methods as a means of establishing mathematical claims?’
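A standard example of the kind of method described, where the error probability is precisely specifiable and can be made as small as desired, is the Miller–Rabin primality test (offered here as an illustration of the genre, not as the thesis's own example). If n is composite, each independent round exposes it with probability at least 3/4, so after k rounds the chance of wrongly declaring n prime is at most 4**(-k).

```python
import random

def miller_rabin(n, rounds=20, rng=random):
    """Probabilistic primality test.  A 'True' answer is wrong with
    probability at most 4**(-rounds); 'False' is always correct."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)          # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is composite
    return True                   # probably prime
```

With rounds=20 the residual error bound is 4**(-20), far below any realistic estimate of hardware failure or human checking error, which is exactly the comparison the thesis presses.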