"Magic" numbers in Smale's 7th problem
Smale's 7th problem concerns N-point configurations on the 2-dimensional sphere
which minimize the logarithmic pair-energy V_0(r) = -ln r averaged over the
pairs in a configuration; here, r is the chordal distance between the points
forming a pair. More generally, V_0(r) may be replaced by the standardized
Riesz pair-energy V_s(r)= (r^{-s} -1)/s, which becomes - ln r in the limit s to
0, and the sphere may be replaced by other compact manifolds. This paper
inquires into the concavity of the map from the integers N>1 into the minimal
average standardized Riesz pair-energies v_s(N) of the N-point configurations
on the 2-sphere for various real s. It is known that v_s(N) is strictly
increasing for each real s, and for s<2 also bounded above, hence "overall
concave." It is (easily) proved that v_{-2}(N) is even locally strictly
concave, and that so is v_s(2n) for s<-2. By analyzing computer-experimental
data of putatively minimal average Riesz pair-energies v_s^x(N) for s in
{-1,0,1,2,3} and N in {2,...,200}, it is found that v_{-1}^x(N) is locally
strictly concave, while v_s^x(N) is not always locally strictly concave for s
in {0,1,2,3}: concavity defects occur whenever N in C^{x}_+(s) (an s-specific
empirical set of integers). It is found that the empirical map C^{x}_+(s), with
s in {-2,-1,0,1,2,3}, is set-theoretically increasing; moreover, the percentage
of odd numbers in C^{x}_+(s), s in {0,1,2,3}, is found to increase with s. The
integers in C^{x}_+(0) are few and far between, forming a curious sequence of
numbers, reminiscent of the "magic numbers" in nuclear physics. It is
conjectured that the "magic numbers" in Smale's 7th problem are associated
with optimally symmetric optimal-energy configurations.
Comment: 109 pages, of which 30 are numerical data tables. Thoroughly revised version, to appear in J. Stat. Phys. under the different title: "Optimal N-point configurations on the sphere: 'Magic' numbers and Smale's 7th problem".
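The standardized Riesz pair-energy and its s → 0 limit are easy to sketch numerically. The following is a minimal illustration of the definitions in the abstract; the function and variable names are ours, not the paper's, and no attempt is made to minimize the energy:

```python
import numpy as np

def riesz_energy(r, s):
    # Standardized Riesz pair-energy V_s(r) = (r^{-s} - 1)/s, with V_0(r) = -ln r.
    if s == 0:
        return -np.log(r)
    return (r ** (-s) - 1.0) / s

def avg_pair_energy(points, s):
    # Average of V_s over all pairs of a configuration (rows are points in R^3).
    n = len(points)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    total = sum(riesz_energy(np.linalg.norm(points[i] - points[j]), s)  # chordal distance
                for i, j in pairs)
    return total / len(pairs)

# The limit s -> 0 recovers the logarithmic energy -ln r:
r = 0.7
assert abs(riesz_energy(r, 1e-8) - (-np.log(r))) < 1e-6

# Two antipodal points on the unit 2-sphere form one pair at chordal distance 2:
points = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
assert abs(avg_pair_energy(points, 0) - (-np.log(2.0))) < 1e-12
```

The quantity v_s(N) studied in the paper is the minimum of `avg_pair_energy` over all N-point configurations on the sphere, which requires a global optimization step not shown here.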
Decidable fragments of first-order logic and of first-order linear arithmetic with uninterpreted predicates
First-order logic is one of the most prominent formalisms in computer science and mathematics. Since there is no algorithm capable of solving its satisfiability problem, first-order logic is said to be undecidable. The classical decision problem is the quest for a delineation between the decidable and the undecidable parts. The results presented in this thesis shed more light on the boundary and open new perspectives on the landscape of known decidable fragments. In the first part we focus on the new concept of separateness of variables and explore its applicability to the classical decision problem and beyond. Two disjoint sets of first-order variables are separated in a given formula if none of its atoms contains variables from both sets. This notion facilitates the definition of decidable extensions of many well-known decidable first-order fragments. We demonstrate this for several prefix fragments, several guarded fragments, the two-variable fragment, and for the fluted fragment. Although the extensions exhibit the same expressive power as the respective originals, certain logical properties can be expressed much more succinctly. In two cases the succinctness gap cannot be bounded using elementary functions. This fact already hints at computationally hard satisfiability problems. Indeed, we derive non-elementary lower bounds for the separated fragment, an extension of the Bernays-Schönfinkel-Ramsey fragment (∃*∀*-prefix sentences). On the semantic level, separateness of quantified variables may lead to weaker dependences than we encounter in general. We investigate this property in the context of model-checking games. The focus of the second part of the thesis is on linear arithmetic with uninterpreted predicates. Two novel decidable fragments are presented, both based on the Bernays-Schönfinkel-Ramsey fragment.
On the negative side, we identify several small fragments of the language for which satisfiability is undecidable.
The study of first-order logic looks back on a long tradition. It is well known that its satisfiability problem cannot, in general, be solved algorithmically; the logic is therefore called undecidable. This observation highlights the principal limits of the capabilities of computers in general and of automated reasoning in particular. Hilbert's Entscheidungsproblem is today understood as the exploration of the boundary between the decidable and undecidable parts of first-order logic, where the investigated fragments are described by clearly specified and computable syntactic properties. Many researchers have contributed to this endeavour and have discovered and studied numerous decidable and undecidable fragments. The present dissertation continues this tradition with a series of mainly positive results and opens new perspectives on a number of fragments that have been investigated over the past hundred years. The first part of the thesis centres on the syntactic concept of separateness of variables, and its applicability to the decision problem and beyond is explored. Two sets of individual variables count as separated with respect to a given formula if, in every atom of the formula, variables from at most one of the two sets occur. With this easily understood notion, many well-known decidable fragments of first-order logic can be extended to larger classes of formulas that are still decidable. This approach is presented in detail for nine fragments, among them several prefix fragments, the two-variable fragment, and the so-called guarded and fluted fragments.
It turns out that all the extended fragments also contain the monadic first-order fragment without equality. Although in the cases considered the extended syntax does not come with greater expressive power, certain relationships can be formulated much more concisely in it. In at least two cases this discrepancy cannot be bounded by an elementary function. This gives a first indication that solving the satisfiability problem algorithmically for the extended fragments requires very high computational effort. Indeed, a non-elementary lower bound on the corresponding running time is derived for the so-called separated fragment, an extension of the well-known Bernays-Schönfinkel-Ramsey fragment. Moreover, the influence of separateness of individual variables is investigated on the semantic level, where dependences between quantified variables can be strongly weakened by their separateness. For a more precise formal treatment of such dependences, termed weak, so-called Hintikka games are employed. The second part of the thesis focuses on the decision problem for linear arithmetic over the rational numbers combined with uninterpreted predicates. Two previously unknown decidable fragments of this language are presented, both building on the Bernays-Schönfinkel-Ramsey fragment. Furthermore, new negative results are developed and several undecidable fragments are presented that require only a very restricted part of the language.
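The syntactic check behind separateness is simple enough to state in a few lines. The sketch below is our illustration of the definition in the abstract, not code from the thesis; it assumes each atom is represented by the set of variable names occurring in it:

```python
def separated(atoms, xs, ys):
    # Two disjoint variable sets xs, ys are separated in a formula iff
    # no atom of the formula contains variables from both sets.
    return all(not (atom & xs and atom & ys) for atom in atoms)

# Atoms of P(x1, x2) and Q(y1): no atom mixes the two sets.
assert separated([{"x1", "x2"}, {"y1"}], {"x1", "x2"}, {"y1"})

# The atom R(x1, y1) mixes them, so the sets are not separated.
assert not separated([{"x1", "y1"}], {"x1", "x2"}, {"y1"})
```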
Numerical methods for the interpolation and approximation of data by spline functions
It is often important in practice to obtain approximate representations of physical data by relatively simple mathematical functions. The approximating functions are usually required to meet certain criteria relating to accuracy and smoothness. In the past, polynomials have frequently been used for this task, but it has long been recognised that there are many types of data set for which polynomial approximations are unsatisfactory in that a very high degree may be required to achieve the required accuracy. Moreover, even if such a polynomial can be computed, it frequently tends to exhibit spurious oscillations not present in the data itself.
In an attempt to overcome these difficulties attention has turned in recent years to the use of piecewise polynomials or spline functions. A spline function, or simply a spline, is composed of a set of polynomial arcs, usually of low degree, joined end to end in such a way as to form a smooth function. Splines tend to have greater flexibility than polynomials in the approximation of physical data and much attention has been devoted in the last decade to the theory of splines. The development of robust numerical methods for computing with splines has, however, lagged somewhat behind the theory. The main objective of this work is the construction and analysis of such methods. In order to obtain efficient and stable methods a representation of splines that is well-conditioned and that results in fast computational schemes is required. Representations in terms of B-splines prove to be eminently suitable and accordingly we study B-splines in some detail and give various algorithms for calculations in which they are involved.
When B-splines are used as a basis for interpolation or least-squares data fitting, the resulting linear algebraic systems to be solved for the spline coefficients have a special structure. Stable numerical methods that exploit this structure to the full are presented.
Our algorithms are used to obtain spline approximations to a variety of data sets drawn from practical applications. Their performance on these problems illustrates the power of splines over more conventional approximating functions.
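As a small illustration of why B-splines make a convenient basis, the classical Cox-de Boor recursion below evaluates a single B-spline; local support and the partition-of-unity property, which underlie the well-conditioning and the banded linear systems mentioned above, fall out directly. This is a generic textbook sketch, not the thesis's own algorithms:

```python
def bspline(i, k, t, knots):
    # Cox-de Boor recursion: the i-th B-spline of order k (degree k - 1)
    # on the knot vector `knots`, evaluated at t.
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k - 1] > knots[i]:
        left = (t - knots[i]) / (knots[i + k - 1] - knots[i]) * bspline(i, k - 1, t, knots)
    if knots[i + k] > knots[i + 1]:
        right = (knots[i + k] - t) / (knots[i + k] - knots[i + 1]) * bspline(i + 1, k - 1, t, knots)
    return left + right

# On a uniform knot vector the quadratic (order-3) B-splines sum to 1 where they
# overlap (partition of unity), and each is nonzero on only k knot spans.
knots = [0, 1, 2, 3, 4, 5, 6]
total = sum(bspline(i, 3, 2.5, knots) for i in range(4))
assert abs(total - 1.0) < 1e-12
assert bspline(0, 3, 5.5, knots) == 0.0  # support of the first spline is [0, 3)
```

Because each basis function overlaps only its k - 1 neighbours, the interpolation and least-squares matrices built from these values are banded, which is exactly the special structure the stable methods above exploit.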
Errata and Addenda to Mathematical Constants
We humbly and briefly offer corrections and supplements to Mathematical
Constants (2003) and Mathematical Constants II (2019), both published by
Cambridge University Press. Comments are always welcome.
Comment: 162 pages.
- …