
    Graph Theory

    [No abstract available]

    Field theoretic formulation and empirical tracking of spatial processes

    Spatial processes are attacked on two fronts. On the one hand, tools from theoretical and statistical physics can be used to understand behaviour in complex, spatially-extended multi-body systems. On the other hand, computer vision and statistical analysis can be used to study 4D microscopy data to observe and understand real spatial processes in vivo. On the first of these fronts, analytical models are developed for abstract processes, which can be simulated on graphs and lattices before considering real-world applications in fields such as biology, epidemiology or ecology. In the field theoretic formulation of spatial processes, techniques originating in quantum field theory, such as canonical quantisation and the renormalization group, are applied to reaction-diffusion processes by analogy. These techniques are combined in the study of critical phenomena or critical dynamics. At this level, one is often interested in the scaling behaviour: how the correlation functions scale for different dimensions in geometric space. This can lead to a better understanding of how macroscopic patterns relate to microscopic interactions. In this vein, the trace of a branching random walk on various graphs is studied. In the thesis, a distinctly abstract approach is emphasised in order to support an algorithmic approach to parts of the formalism. A model of self-organised criticality, the Abelian sandpile model, is also considered. By exploiting a bijection between recurrent configurations and spanning trees, an efficient Monte Carlo algorithm is developed to simulate sandpile processes on large lattices. On the second front, two case studies are considered: migratory patterns of leukaemia cells and mitotic events in Arabidopsis roots. In the first case, tools from statistical physics are used to study the spatial dynamics of different leukaemia cell lineages before and after a treatment. One key result is that we can discriminate between migratory patterns in response to treatment, classifying cell motility in terms of sub-/super-/diffusive regimes. For the second case study, a novel algorithm is developed to process a 4D light-sheet microscopy dataset. The combination of transient fluorescent markers and a poorly localised specimen in the field of view leads to a challenging tracking problem. A fuzzy registration-tracking algorithm is developed to track mitotic events so as to understand their spatiotemporal dynamics under normal conditions and after tissue damage.
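    As a point of reference for the Abelian sandpile model mentioned above, the following is a minimal direct-toppling simulation on an open square lattice (toppling threshold of four, grains dissipated at the boundary). This naive sketch is not the spanning-tree-based Monte Carlo algorithm developed in the thesis; the grid size, driving scheme and function names are illustrative choices only.

        import numpy as np

        def relax(grid):
            # Topple every unstable site (>= 4 grains) until the configuration
            # is stable; grains pushed over the open boundary are lost.
            rows, cols = grid.shape
            while True:
                unstable = np.argwhere(grid >= 4)
                if unstable.size == 0:
                    return grid
                for i, j in unstable:
                    grid[i, j] -= 4
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols:
                            grid[ni, nj] += 1

        def drive(grid, n_grains, rng):
            # Slow driving: drop one grain at a random site, relax fully, repeat.
            rows, cols = grid.shape
            for _ in range(n_grains):
                grid[rng.integers(rows), rng.integers(cols)] += 1
                relax(grid)
            return grid

        pile = drive(np.zeros((20, 20), dtype=int), n_grains=2000,
                     rng=np.random.default_rng(0))

    Because the model is Abelian, the order in which unstable sites are toppled does not affect the final stable configuration, which is what makes more sophisticated sampling schemes (such as the spanning-tree bijection referred to above) possible.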

    The Strong Perfect Graph Conjecture: 40 years of Attempts, and its Resolution

    The Strong Perfect Graph Conjecture (SPGC) was certainly one of the most challenging conjectures in graph theory. During more than four decades, numerous attempts were made to solve it, by combinatorial methods, by linear algebraic methods, or by polyhedral methods. The first of these three approaches yielded the first (and to date only) proof of the SPGC; the other two remain promising to consider in attempting an alternative proof. This paper is an unbalanced survey of the attempts to solve the SPGC; unbalanced, because (1) we devote a significant part of it to the 'primitive graphs and structural faults' paradigm which led to the Strong Perfect Graph Theorem (SPGT); (2) we briefly present the other "direct" attempts, that is, the ones for which results exist showing one (possible) way to the proof; (3) we ignore entirely the "indirect" approaches whose aim was to get more information about the properties and structure of perfect graphs, without a direct impact on the SPGC. Our aim in this paper is to trace the path that led to the proof of the SPGT as completely as possible. Of course, this implies large overlaps with the recent book on perfect graphs [J.L. Ramirez-Alfonsin and B.A. Reed, eds., Perfect Graphs (Wiley & Sons, 2001)], but it also implies a deeper analysis (with additional results) and another viewpoint on the topic.

    Acta Scientiarum Mathematicarum: Tomus 55. Fasc. 1-2.


    Mixing graph colourings

    This thesis investigates some problems related to graph colouring, or, more precisely, graph re-colouring. Informally, the basic question addressed can be phrased as follows. Suppose one is given a graph G whose vertices can be properly k-coloured, for some k ≄ 2. Is it possible to transform any k-colouring of G into any other by recolouring vertices of G one at a time, making sure a proper k-colouring of G is always maintained? If the answer is in the affirmative, G is said to be k-mixing. The related problem of deciding whether, given two k-colourings of G, it is possible to transform one into the other by recolouring vertices one at a time, always maintaining a proper k-colouring of G, is also considered (see the sketch after this abstract). These questions have a bearing on certain mathematical and ‘real-world’ problems. In particular, being able to recolour any colouring of a given graph to any other colouring is a necessary prerequisite for the method of sampling colourings known as Glauber dynamics. The results presented in this thesis may also find application in the context of frequency reassignment: given that the problem of assigning radio frequencies in a wireless communications network is often modelled as a graph colouring problem, the task of re-assigning frequencies in such a network can be thought of as a graph recolouring problem. Throughout the thesis, the emphasis is on the algorithmic aspects and the computational complexity of the questions described above. In other words, how easily, in terms of computational resources used, can they be answered? Strong results are obtained for the k = 3 case of the first question, where a characterisation theorem for 3-mixing graphs is given. For the second question, a dichotomy theorem for the complexity of the problem is proved: the problem is solvable in polynomial time for k ≀ 3 and PSPACE-complete for k ≄ 4. In addition, the possible length of a shortest sequence of recolourings between two colourings is investigated, and an interesting connection between the tractability of the problem and its underlying structure is established. Some variants of the above problems are also explored.
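    To make the second question above concrete, the following is a brute-force sketch that decides, for a tiny graph, whether one proper k-colouring can be transformed into another by recolouring one vertex at a time. It simply runs a breadth-first search over the space of proper colourings, which is exponential in general (consistent with the PSPACE-completeness for k ≄ 4 mentioned above), and is not one of the algorithms developed in the thesis; the graph, names and example instance are illustrative only.

        from collections import deque

        def is_proper(colouring, edges):
            # A colouring is proper if no edge joins two equally coloured vertices.
            return all(colouring[u] != colouring[v] for u, v in edges)

        def recolouring_path_exists(edges, n, k, start, target):
            # Breadth-first search over proper k-colourings, one vertex changed per step.
            start, target = tuple(start), tuple(target)
            assert is_proper(start, edges) and is_proper(target, edges)
            seen, queue = {start}, deque([start])
            while queue:
                current = queue.popleft()
                if current == target:
                    return True
                for v in range(n):
                    for c in range(k):
                        if c == current[v]:
                            continue
                        nxt = current[:v] + (c,) + current[v + 1:]
                        if nxt not in seen and is_proper(nxt, edges):
                            seen.add(nxt)
                            queue.append(nxt)
            return False

        # A path on three vertices with k = 3 colours.
        print(recolouring_path_exists([(0, 1), (1, 2)], n=3, k=3,
                                      start=[0, 1, 0], target=[1, 2, 1]))

    For this toy instance the answer is True, for example via (0, 1, 0) → (0, 2, 0) → (1, 2, 0) → (1, 2, 1).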

    Students' language in computer-assisted tutoring of mathematical proofs

    Truth and proof are central to mathematics. Proving (or disproving) seemingly simple statements often turns out to be one of the hardest mathematical tasks. Yet, doing proofs is rarely taught in the classroom. Studies on cognitive difficulties in learning to do proofs have shown that pupils and students not only often do not understand or cannot apply basic formal reasoning techniques and do not know how to use formal mathematical language, but, at a far more fundamental level, they also do not understand what it means to prove a statement or even do not see the purpose of proof at all. Since insight into the importance of proof and doing proofs as such cannot be learnt other than by practice, learning support through individualised tutoring is in demand. This volume presents a part of an interdisciplinary project, set at the intersection of pedagogical science, artificial intelligence, and (computational) linguistics, which investigated issues involved in provisioning computer-based tutoring of mathematical proofs through dialogue in natural language. The ultimate goal in this context, addressing the above-mentioned need for learning support, is to build intelligent automated tutoring systems for mathematical proofs. The research presented here has focused on the language that students use while interacting with such a system: its linguistic properties and computational modelling. Contributions are made at three levels: first, an analysis of language phenomena found in students' input to a (simulated) proof tutoring system is conducted and the variety of students' verbalisations is quantitatively assessed; second, a general computational processing strategy for informal mathematical language and methods of modelling prominent language phenomena are proposed; and third, the prospects for natural language as an input modality for proof tutoring systems are evaluated based on the collected corpora.

    New Foundation in the Sciences: Physics without sweeping infinities under the rug

    It is widely known at the frontiers of physics that the practice of "sweeping under the rug" has been the norm rather than the exception. In other words, the leading paradigms have a strong tendency to be hailed as the only game in town. For example, renormalization group theory was hailed as the cure for the infinity problem in QED. A quote concerning Richard Feynman goes as follows: "What the three Nobel Prize winners did, in the words of Feynman, was to get rid of the infinities in the calculations. The infinities are still there, but now they can be skirted around . . . We have designed a method for sweeping them under the rug." [1] And Paul Dirac wrote in a similar tone: "Hence most physicists are very satisfied with the situation. They say: Quantum electrodynamics is a good theory, and we do not have to worry about it any more. I must say that I am very dissatisfied with the situation, because this so-called good theory does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small—not neglecting it just because it is infinitely great and you do not want it!" [2] Similarly, dark matter and dark energy were elevated as plausible ways to resolve the crisis in the prevalent Big Bang cosmology. That is why we choose the theme New Foundations in the Sciences, in order to emphasize the necessity of introducing a new set of approaches in the Sciences, be it Physics, Cosmology, Consciousness, etc.

    Eigenvalues and low energy eigenvectors of quantum many-body systems

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 211-221). I first give an overview of the thesis and the Matrix Product States (MPS) representation of quantum spin systems on a line, with an improvement on the notation. The rest of this thesis is divided into two parts. The first part is devoted to eigenvalues of quantum many-body systems (QMBS). I introduce Isotropic Entanglement (IE) and show that the eigenvalue distribution of QMBS with generic interactions can be accurately obtained using IE. Next, I discuss the eigenvalue distribution of the one-particle hopping random Schrödinger operator in one dimension from free probability theory in the context of the Anderson model. The second part is devoted to ground states and gaps of QMBS. I first give the necessary background on frustration-free Hamiltonians, real- and imaginary-time evolution of quantum spin systems on a line within the MPS representation, and the numerical implementation. I then prove the degeneracy and unfrustration condition for quantum spin chains with generic local interactions. Following this, I summarize my efforts in proving lower bounds for the entanglement of the ground states, which include partial results, with the hope that they will inspire future work resulting in a solution of the given conjecture. Next I discuss two interesting measure-zero examples where the Hamiltonians are carefully constructed to give unique ground states with high entanglement. This includes exact calculations of Schmidt numbers, entanglement entropies and a novel technique for calculating the gap. The last chapter elaborates on one of the measure-zero examples (i.e., d = 3), which is the first example of a frustration-free translation-invariant spin-1 chain that has a unique highly entangled ground state and exhibits signatures of critical behavior. by Ramis Movassagh. Ph.D.
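    To make the Matrix Product States representation mentioned above concrete, the following is a minimal sketch that stores a state on a line as one rank-3 tensor per site and contracts it against a computational-basis string. The random tensors, bond dimension and function names are illustrative assumptions and do not reflect the thesis's notation or algorithms.

        import numpy as np

        def random_mps(n_sites, phys_dim=2, bond_dim=4, seed=0):
            # One (left_bond, physical, right_bond) tensor per site, open boundaries.
            rng = np.random.default_rng(seed)
            dims = [1] + [bond_dim] * (n_sites - 1) + [1]
            return [rng.standard_normal((dims[i], phys_dim, dims[i + 1]))
                    for i in range(n_sites)]

        def amplitude(mps, basis_state):
            # Fixing the physical index at each site leaves a bond matrix;
            # the left-to-right product of these matrices is a 1x1 number.
            mat = np.eye(1)
            for tensor, s in zip(mps, basis_state):
                mat = mat @ tensor[:, s, :]
            return mat[0, 0]

        psi = random_mps(6)
        print(amplitude(psi, [0, 1, 0, 0, 1, 1]))

    The point of the representation is that the cost of such contractions grows with the bond dimension rather than with the full 2^n-dimensional Hilbert space.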

    Simulating the interaction of galaxies and the intergalactic medium

    The co-evolution of galaxies and the intergalactic medium as a function of environment is studied using hydrodynamic simulations of the ΛCDM cosmogony. It is demonstrated with non-radiative calculations that, in the absence of non-gravitational mechanisms, dark matter haloes accrete a near-universal fraction (~ 0.9 Ω_b/Ω_m) of baryons. The absence of a mass or redshift dependence of this fraction augurs well for parameter tests that use X-ray clusters as cosmological probes. Moreover, this result indicates that non-gravitational processes must efficiently regulate the formation of stars in dark matter haloes if the halo mass function is to be reconciled with the observed galaxy luminosity function. Simulations featuring stellar evolution and non-gravitational feedback mechanisms (photo-heating by the ultraviolet background, and thermal and kinetic supernovae feedback) are used to follow the evolution of star formation, and the thermo- and chemo-dynamical evolution of baryons. The observed star formation history of the Universe is reproduced, except at low redshift, where it is overestimated by a factor of a few, possibly indicating the need for feedback from active galactic nuclei to quench cooling flows around massive galaxies. The simulations more accurately reproduce the observed abundance of galaxies with late-type morphologies than has been reported elsewhere. The unique initial conditions of these simulations, based on the Millennium Simulation, allow an unprecedented study of the role of large-scale environment to be conducted. The cosmic star formation rate density is found to vary by an order of magnitude across the extremes of environment expected in the local Universe. The mass fraction of baryons in the observationally elusive warm-hot intergalactic medium (WHIM), and the volume filling factor that this gas occupies, are also shown to vary by a factor of a few across such environments. This variation is attributed to differences in the halo mass functions of the environments. Finally, we compare the X-ray properties of haloes from the simulations with the predictions of the White and Frenk (1991) analytic galaxy formation model, and demonstrate that deviations from the analytic prediction arise from the assumptions (i) that haloes retain their cosmic share of baryons, and (ii) that their gas follows an isothermal density profile. The simulations indicate that a significant fraction of gas is ejected from low-mass haloes by galactic superwinds, leading to a significant increase in their cooling time profiles and an associated drop in their soft X-ray luminosities relative to the analytic model. Simulated X-ray luminosities remain greater than present observational upper limits, but it is argued that the observations provide only weak constraints and may suffer from a systematic bias, such that the mass of the halo hosting a given galaxy is overestimated. This bias also follows from the assumption that haloes exhibit isothermal density profiles.
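    For a rough sense of scale of the near-universal baryon fraction quoted above, using illustrative present-day cosmological parameter values (not values taken from the thesis):

        f_b \approx 0.9\,\frac{\Omega_b}{\Omega_m} \approx 0.9 \times \frac{0.049}{0.31} \approx 0.14

    i.e. in the absence of non-gravitational feedback, roughly 14 per cent of a halo's total mass would be in baryons.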