212,615 research outputs found
Foundations of Inference
We present a simple and clear foundation for finite inference that unites and significantly extends the approaches of Kolmogorov and Cox. Our approach is based on quantifying lattices of logical statements in a way that satisfies general lattice symmetries. With other applications such as measure theory in mind, our derivations assume minimal symmetries, relying on neither negation nor continuity nor differentiability. Each relevant symmetry corresponds to an axiom of quantification, and these axioms are used to derive a unique set of quantifying rules that form the familiar probability calculus. We also derive a unique quantification of divergence, entropy, and information.
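For reference, the "familiar probability calculus" that such symmetry arguments recover consists of the sum and product rules; a minimal sketch of the resulting rules (notation mine, not taken from the abstract):

```latex
\begin{align*}
  p(x \lor y \mid t)  &= p(x \mid t) + p(y \mid t) - p(x \land y \mid t) && \text{(sum rule)} \\
  p(x \land y \mid t) &= p(x \mid t)\, p(y \mid t \land x)               && \text{(product rule)}
\end{align*}
```

Bayes' theorem then follows from applying the product rule in both orders and equating the results.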
A model for Smarandache's Anti-Geometry
David Hilbert's Foundations of Geometry (1899) contains nineteen statements, labelled axioms, from which every theorem in Euclid's Elements can be derived by deductive inference, according to the classical rules of logic.
Mill on logic
Working within the broad lines of general consensus that mark out the core features of John Stuart Mill’s (1806–1873) logic, as set forth in his A System of Logic (1843–1872), this chapter provides an introduction to Mill’s logical theory by reviewing his position on the relationship between induction and deduction, and the role of general premises and principles in reasoning. Induction, understood as a kind of analogical reasoning from particulars to particulars, is located as the basic form of inference that is both free-standing and the sole load-bearing structure in Mill’s logic, and on this basis the foundations of Mill’s logical system are briefly inspected. Several naturalistic features are identified, including its subject matter, human reasoning; its empiricism, which requires that only particular, experiential claims can function as basic reasons; and its ultimate foundation in ‘spontaneous’ inference. The chapter concludes by comparing Mill’s naturalized logic to Russell’s (1907) regressive method for identifying the premises of mathematics.
Community detection and graph partitioning
Many methods have been proposed for community detection in networks. Some of the most promising are methods based on statistical inference, which rest on solid mathematical foundations and return excellent results in practice. In this paper we show that two of the most widely used inference methods can be mapped directly onto versions of the standard minimum-cut graph partitioning problem, which allows us to apply any of the many well-understood partitioning algorithms to the solution of community detection problems. We illustrate the approach by adapting the Laplacian spectral partitioning method to perform community inference, testing the resulting algorithm on a range of examples, including computer-generated and real-world networks. Both the quality of the results and the running time rival the best previous methods. Comment: 5 pages, 2 figures
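The Laplacian spectral partitioning step mentioned in the abstract can be sketched in a few lines: compute the graph Laplacian, take the eigenvector belonging to the second-smallest eigenvalue (the Fiedler vector), and split the nodes by its sign. This is a minimal illustration of generic spectral bisection, not the authors' adapted inference algorithm; the example graph is hypothetical.

```python
import numpy as np

def spectral_bisection(adj):
    """Split a graph into two groups by the sign of the Fiedler vector.

    adj: symmetric adjacency matrix as a numpy array.
    Returns a boolean array of community labels, one per node.
    """
    degrees = adj.sum(axis=1)
    laplacian = np.diag(degrees) - adj
    # eigh returns eigenvalues of a symmetric matrix in ascending order,
    # so column 1 is the eigenvector of the second-smallest eigenvalue.
    _, vecs = np.linalg.eigh(laplacian)
    fiedler = vecs[:, 1]
    return fiedler >= 0

# Toy example: two triangles (nodes 0-2 and 3-5) joined by one bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
labels = spectral_bisection(A)
```

On this toy graph the sign split recovers the two triangles as the two communities (which group gets `True` depends on the eigenvector's arbitrary sign).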
CSM-397 - The Foundations of Specification II
In the Foundations of Specification I, we developed a specification theory (CST) to serve as a vehicle for exploring the mathematical foundations of specification. A further aspect of this concerns type inference. In this paper we develop and explore a type inference system for CST and use it to illustrate the foundational issues that arise with such systems.