The Computational Complexity of Propositional Cirquent Calculus
Introduced in 2006 by Japaridze, cirquent calculus is a refinement of sequent
calculus. The advent of cirquent calculus arose from the need for a deductive
system with a more explicit ability to reason about resources. Unlike the more
traditional proof-theoretic approaches that manipulate tree-like objects
(formulas, sequents, etc.), cirquent calculus is based on circuit-style
structures called cirquents, in which different "peer" (sibling, cousin, etc.)
substructures may share components. It is this resource sharing mechanism to
which cirquent calculus owes its novelty (and its virtues). From its inception,
cirquent calculus has been paired with an abstract resource semantics. This
semantics allows for reasoning about the interaction between a resource
provider and a resource user, where resources are understood in their most
general and intuitive sense. Interpreting resources in a more restricted
computational sense has made cirquent calculus instrumental in axiomatizing
various fundamental fragments of Computability Logic, a formal theory of
(interactive) computability. The so-called "classical" rules of cirquent
calculus, in the absence of the particularly troublesome contraction rule,
produce a sound and complete system CL5 for Computability Logic. In this paper,
we investigate the computational complexity of CL5, showing that it is
$\Sigma_2^p$-complete. We also show that CL5 without the duplication rule has
polynomial-size proofs and is NP-complete.
Philosophy of Computer Science: An Introductory Course
There are many branches of philosophy called “the philosophy of X,” where X = disciplines ranging from history to physics. The philosophy of artificial intelligence has a long history, and there are many courses and texts with that title. Surprisingly, the philosophy of computer science is not nearly as well-developed. This article proposes topics that might constitute the philosophy of computer science and describes a course covering those topics, along with suggested readings and assignments.
Tourism and Simulacrum: The Computational Economy of Algorithmic Destinations
The paper establishes a conceptual and methodological link between destinations and simulacrum through gamified tourism. As a paradigm, gamified tourism provides a rationale and a setting within which to apply computational economics to tourism, an approach amounting to tourism computability. Algorithmic destinations serve as “petri dishes” for real destinations. Utilizing rule sets that embody destination growth dynamics and visitor behavioural norms, seeding points in a cellular automata (CA) model were grown into algorithmic destinations. This is followed by a morphological transformation of geo-tagged satellite images into spatial points. The overlap of this additive and subtractive approach is at the core of tourism computability. Finally, the spatio-temporal dynamics of economic resilience were traced out through a visual phenomenology of algorithmic destinations. The gamification of tourism should be embraced, as it holds up a flicker of hope for mature destinations amidst the onset of museumification and increased commoditization of heritage sites. Gamification is treated as part of the reflexive cycle for destination authenticity, a notion that Cohen (1988) alluded to in his discussion of emergent authenticity in destination image formation. Seen in this light, the museumification of Venice and the proliferation of its simulacrum, such as the Venetian Hotel in Macao and Venice-themed hotels across the globe, are prefigures and archetypes of a glorious age of gamified tourism.
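The abstract does not specify the CA rule set; as a minimal illustrative sketch (not the authors' model) of growing a destination from a seeding point, one might use a Moore-neighbourhood threshold rule. The grid size, threshold, and step count below are assumptions.

```python
import numpy as np

def grow_destination(grid, steps, threshold=3):
    """Grow developed cells (1) from seeds: an empty cell develops when
    at least `threshold` of its 8 neighbours are already developed."""
    g = grid.copy()
    for _ in range(steps):
        # Count developed neighbours via shifted copies (Moore neighbourhood).
        n = sum(np.roll(np.roll(g, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
        g = np.where((g == 0) & (n >= threshold), 1, g)
    return g

seed = np.zeros((21, 21), dtype=int)
seed[9:12, 9:12] = 1                # a 3x3 seeding point
town = grow_destination(seed, steps=5)
print(town.sum())                   # developed area after 5 growth steps
```

Different rule sets (thresholds, neighbourhoods, stochastic development) would encode different growth dynamics and visitor behavioural norms.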
Polynomial Linear Programming with Gaussian Belief Propagation
Interior-point methods are state-of-the-art algorithms for solving linear
programming (LP) problems with polynomial complexity. Specifically, the
Karmarkar algorithm typically solves LP problems in time O(n^{3.5}), where n
is the number of unknown variables. Karmarkar's celebrated algorithm is known
to be an instance of the log-barrier method using the Newton iteration. The
main computational overhead of this method is in inverting the Hessian matrix
of the Newton iteration. In this contribution, we propose the application of
the Gaussian belief propagation (GaBP) algorithm as part of an efficient and
distributed LP solver that exploits the sparse and symmetric structure of the
Hessian matrix and avoids the need for direct matrix inversion. This approach
shifts the computation from the realm of linear algebra to that of probabilistic
inference on graphical models, thus applying GaBP as an efficient inference
engine. Our construction is general and can be used for any interior-point
algorithm which uses the Newton method, including non-linear program solvers.
Comment: 7 pages, 1 figure, appeared in the 46th Annual Allerton Conference on
Communication, Control and Computing, Allerton House, Illinois, Sept. 2008.
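The abstract's core idea, replacing direct inversion of the Newton-step matrix with inference on a graphical model, can be sketched with scalar Gaussian belief propagation on a small symmetric system Ax = b. This is a rough illustration, not the authors' implementation; the function name, synchronous message schedule, and iteration count are assumptions.

```python
import numpy as np

def gabp_solve(A, b, iters=60):
    """Solve A x = b via scalar Gaussian belief propagation.
    A must be symmetric; convergence holds e.g. for diagonally dominant A."""
    n = len(b)
    nbrs = [[j for j in range(n) if j != i and A[i, j] != 0] for i in range(n)]
    P = np.zeros((n, n))   # P[i, j]: precision of message i -> j
    U = np.zeros((n, n))   # U[i, j]: mean of message i -> j
    for _ in range(iters):
        P_new, U_new = P.copy(), U.copy()
        for i in range(n):
            for j in nbrs[i]:
                # Combine node potential with all incoming messages except j's.
                p = A[i, i] + sum(P[k, i] for k in nbrs[i] if k != j)
                num = b[i] + sum(P[k, i] * U[k, i] for k in nbrs[i] if k != j)
                P_new[i, j] = -A[i, j] ** 2 / p
                U_new[i, j] = num / A[i, j]
        P, U = P_new, U_new
    # Final node marginals give the solution estimate.
    x = np.empty(n)
    for i in range(n):
        p = A[i, i] + sum(P[k, i] for k in nbrs[i])
        x[i] = (b[i] + sum(P[k, i] * U[k, i] for k in nbrs[i])) / p
    return x

A = np.array([[3., 1., 0.], [1., 3., 1.], [0., 1., 3.]])
b = np.array([4., 5., 4.])
print(gabp_solve(A, b))   # ≈ [1. 1. 1.]
```

Only nonzero entries of A induce edges, so a sparse Hessian yields a sparse message graph, which is what the distributed LP solver exploits.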
Freedom, Anarchy and Conformism in Academic Research
In this paper I attempt to make a case for promoting the courage of rebels within the citadels of orthodoxy in academic research environments. Wicksell in Macroeconomics, Brouwer in the Foundations of Mathematics, Turing in Computability Theory, Sraffa in the Theories of Value and Distribution are, in my own fields of research, paradigmatic examples of rebels, adventurers and non-conformists of the highest caliber in scientific research within University environments. In what sense, and how, can such rebels, adventurers and non-conformists be fostered in the current University research environment dominated by the cult of 'picking winners'? This is the motivational question lying behind the historical outlines of the work of Brouwer, Hilbert, Bishop, Veronese, Gödel, Turing and Sraffa that I describe in this paper. The debate between freedom in research and teaching, and the naked imposition of 'correct' thinking, on potential dissenters of the mind, is of serious concern in this age of austerity of material facilities. It is a debate that has occupied some of the finest minds working at the deepest levels of foundational issues in mathematics, metamathematics and economic theory. By making some of the issues explicit, I hope it is possible to encourage dissenters to remain courageous in the face of current dogmas.
Keywords: non-conformist research, economic theory, mathematical economics, 'Hilbert's Dogma', Hilbert's Program, computability theory
Disaster Recovery Services in Intercloud using Genetic Algorithm Load Balancer
The paradigm needs to shift from cloud computing to the intercloud for disaster recovery, since disasters can break out anytime and anywhere. Natural disaster treatment involves a radically high volume of impatient job requests demanding immediate attention. Under such disequilibrium, the intercloud is the more practical and functional option. Protocols such as quality of service, service-level agreements and disaster recovery pacts need to be discussed and clarified during the initial setup to fast-track the distress scenario. Orchestration of resources in a large-scale distributed system, with multi-objective optimization of resources, minimum energy consumption, maximum throughput, load balancing and minimum carbon footprint altogether, is quite challenging. The intercloud, where resources of different clouds are aligned, plays a crucial role in resource mapping. The objective of this paper is to improve and fast-track the mapping procedures in the cloud platform, addressing impatient job requests in a balanced and efficient manner. Genetic-algorithm-based resource allocation is proposed, using Pareto-optimal mapping of resources to keep a high utilization rate of processors, high throughput and a low carbon footprint. Decision variables include utilization of processors, throughput, locality cost and real-time deadline. Simulation results of load balancers using first-in-first-out and the genetic algorithm are compared under similar circumstances.
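The abstract does not give the encoding or operators; as a minimal sketch of genetic-algorithm job-to-processor mapping (not the paper's implementation), a chromosome can assign each job a processor, with a weighted score standing in for the multi-objective fitness. The job sizes, processor speeds, weights and GA parameters below are all illustrative assumptions.

```python
import random

JOBS  = [4, 7, 2, 9, 5, 3, 8, 6]      # job sizes (arbitrary units)
PROCS = [1.0, 1.5, 2.0]               # processor speeds

def fitness(mapping):
    # Per-processor load -> makespan (throughput proxy) and imbalance
    # (utilization proxy); lower score is better.
    load = [0.0] * len(PROCS)
    for job, p in zip(JOBS, mapping):
        load[p] += job / PROCS[p]
    makespan = max(load)
    imbalance = makespan - min(load)
    return makespan + 0.5 * imbalance  # weighted multi-objective score

def evolve(pop_size=40, gens=60, pmut=0.2):
    random.seed(0)
    pop = [[random.randrange(len(PROCS)) for _ in JOBS]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(JOBS))
            child = a[:cut] + b[cut:]         # one-point crossover
            if random.random() < pmut:        # mutation
                child[random.randrange(len(JOBS))] = random.randrange(len(PROCS))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print(fitness(best))
```

A FIFO baseline, as in the paper's comparison, would simply assign jobs to processors in arrival order; the GA score can be compared against that of such a fixed mapping.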