
    Two Sets of Simple Formulae to Estimating Fractal Dimension of Irregular Boundaries

    Irregular boundary lines can be characterized by fractal dimension, which provides important information for spatial analysis of complex geographical phenomena such as cities. However, it is difficult to compute the fractal dimension of boundaries systematically when image data are limited. An approximate estimation formula for boundary dimension based on a square reference shape is widely applied in urban and ecological studies, but it sometimes overestimates the boundary dimension. This paper is devoted to developing a series of practicable formulae for boundary dimension estimation using ideas from fractals. A number of regular figures are employed as reference shapes, from which the corresponding geometric measure relations are constructed; from these measure relations, two sets of fractal dimension estimation formulae are derived for describing fractal-like boundaries. Correspondingly, a group of shape indexes can be defined. One finding is that different formulae have different merits and spheres of application, and that the second set of boundary dimensions is a function of the shape indexes. Under conditions of data shortage, these formulae can be used to estimate boundary dimension values rapidly. Moreover, the relationships between boundary dimension and shape indexes are instructive for understanding the associations and differences between characteristic scales and scaling. The formulae may be useful for pre-fractal studies in geography, geomorphology, ecology, landscape science, and especially urban science.
    Comment: 28 pages, 2 figures, 9 tables
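    A minimal sketch of the square-based approximation the abstract refers to: the widely used perimeter-area estimate D ≈ 2 ln(P/4) / ln(A), with a square as the reference shape. This illustrates the conventional formula being critiqued, not the paper's new formulae; the function name is ours.

```python
import math

def boundary_dimension_square(perimeter, area):
    """Square-based perimeter-area estimate of the boundary fractal dimension."""
    return 2 * math.log(perimeter / 4) / math.log(area)

# Sanity check: for an exact square of side s, P = 4s and A = s^2, so the
# estimate is 2*ln(s)/ln(s^2) = 1, the dimension of a smooth boundary.
print(boundary_dimension_square(40.0, 100.0))  # -> 1.0
```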

    Equivalence-Checking on Infinite-State Systems: Techniques and Results

    The paper presents a selection of recently developed and/or used techniques for equivalence-checking on infinite-state systems, together with an up-to-date overview of existing results (as of September 2004).
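    As a finite-state illustration of the kind of equivalence being checked, the sketch below decides strong bisimilarity on a small labelled transition system by naive fixpoint refinement. The survey's subject is infinite-state systems, where such exhaustive enumeration does not apply; this is only meant to make the equivalence notion concrete, and all names are ours.

```python
from collections import defaultdict

def bisimilar(states, trans, s, t):
    """Decide strong bisimilarity of s and t by naive fixpoint refinement."""
    succ = defaultdict(set)                 # (state, label) -> successor set
    labels = set()
    for p, a, q in trans:
        succ[(p, a)].add(q)
        labels.add(a)
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for p, q in list(rel):
            # transfer property: each move of p is matched by q, and vice versa
            fwd = all(any((p2, q2) in rel for q2 in succ[(q, a)])
                      for a in labels for p2 in succ[(p, a)])
            bwd = all(any((p2, q2) in rel for p2 in succ[(p, a)])
                      for a in labels for q2 in succ[(q, a)])
            if not (fwd and bwd):
                rel.discard((p, q))
                changed = True
    return (s, t) in rel

# a.b + a.c versus a.(b + c): trace-equivalent, but not bisimilar.
T = {("u0", "a", "u1"), ("u0", "a", "u2"), ("u1", "b", "u3"), ("u2", "c", "u4"),
     ("v0", "a", "v1"), ("v1", "b", "v2"), ("v1", "c", "v3")}
S = {p for (p, _, _) in T} | {q for (_, _, q) in T}
print(bisimilar(S, T, "u0", "v0"))  # -> False
```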

    Complexity of Non-Monotonic Logics

    Over the past few decades, non-monotonic reasoning has developed into one of the most important topics in computational logic and artificial intelligence. Different ways to introduce non-monotonic aspects into classical logic have been considered, e.g., extension with default rules, extension with modal belief operators, or modification of the semantics. In this survey we consider a logical formalism from each of these possibilities, namely Reiter's default logic, Moore's autoepistemic logic, and McCarthy's circumscription. Additionally, we consider abduction, where one is not interested in inferences from a given knowledge base but in computing possible explanations for an observation with respect to a given knowledge base. Complexity results for different reasoning tasks for propositional variants of these logics were already studied in the nineties. In recent years, however, a renewed interest in complexity issues can be observed. One current focal approach is to consider parameterized problems and identify reasonable parameters that allow for FPT algorithms. In another approach, the emphasis lies on identifying fragments, i.e., restrictions of the logical language, that allow more efficient algorithms for the most important reasoning tasks. In this survey we focus on this second aspect. We describe complexity results for fragments of logical languages obtained either by restricting the allowed set of operators (e.g., by forbidding negation one might consider only monotone formulae) or by considering only formulae in conjunctive normal form but with generalized clause types. The algorithmic problems we consider are suitable variants of satisfiability and implication in each of the logics, but also counting problems, where one is not only interested in the existence of certain objects (e.g., models of a formula) but asks for their number.
    Comment: To appear in the Bulletin of the EATCS
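    Of the three formalisms, circumscription is the easiest to make concrete: it restricts reasoning to models that are minimal on the circumscribed atoms. The sketch below brute-forces this semantics for propositional formulae; it is a toy illustration under our own naming, not an algorithm from the survey.

```python
from itertools import product

def models(formula, atoms):
    """Yield satisfying assignments of `formula` as frozensets of true atoms."""
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if formula(v):
            yield frozenset(a for a in atoms if v[a])

def circumscribe(formula, atoms, minimized):
    """Keep models minimal w.r.t. set inclusion on `minimized` (others vary)."""
    ms = list(models(formula, atoms))
    smaller = lambda m, n: (m & minimized) < (n & minimized)
    return [m for m in ms if not any(smaller(n, m) for n in ms)]

# Classic example: birds fly unless abnormal.  Minimizing 'ab' leaves only
# the model in which the bird flies.
atoms = ["bird", "ab", "flies"]
phi = lambda v: v["bird"] and (not v["bird"] or v["ab"] or v["flies"])
print(circumscribe(phi, atoms, frozenset({"ab"})))  # [frozenset({'bird', 'flies'})]
```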

    Measuring the Pro-Activity of Software Agents

    Despite having well-defined characteristics, software agents do not have a developed set of measures defining their quality. Attempts at evaluating software agent quality have focused on some agent aspects, like the development process, whereas others, focusing on the agent as a software product, have basically adopted measures associated with other software paradigms, like procedural and object-oriented concepts. Here we propose a set of measures for evaluating software agent pro-activity: the software agent's goal-driven behavioral ability to take the initiative and satisfy its goals.
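    The measures themselves are not given in this abstract, so the following Python sketch is purely hypothetical: it invents one simple ratio (self-initiated, goal-directed actions over all actions) only to show the shape such a pro-activity measure could take. `Action` and `proactivity_ratio` are our names, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class Action:
    goal_directed: bool   # the action served one of the agent's own goals
    self_initiated: bool  # taken on the agent's initiative, not as a reaction

def proactivity_ratio(actions):
    """Hypothetical measure: fraction of self-initiated, goal-directed actions."""
    if not actions:
        return 0.0
    hits = sum(1 for a in actions if a.goal_directed and a.self_initiated)
    return hits / len(actions)

log = [Action(True, True), Action(True, False), Action(False, False)]
print(proactivity_ratio(log))  # -> 0.333...
```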

    What does social semiotics have to offer mathematics education research?


    Spectral Simplicity of Apparent Complexity, Part I: The Nondiagonalizable Metadynamics of Prediction

    Virtually all questions that one can ask about the behavioral and structural complexity of a stochastic process reduce to a linear algebraic framing of a time evolution governed by an appropriate hidden-Markov process generator. Each type of question---correlation, predictability, predictive cost, observer synchronization, and the like---induces a distinct generator class. Answers are then functions of the class-appropriate transition dynamic. Unfortunately, these dynamics are generically nonnormal, nondiagonalizable, singular, and so on. Tractably analyzing these dynamics relies on adapting the recently introduced meromorphic functional calculus, which specifies the spectral decomposition of functions of nondiagonalizable linear operators, even when the function poles and zeros coincide with the operator's spectrum. Along the way, we establish special properties of the projection operators that demonstrate how they capture the organization of subprocesses within a complex system. Circumventing the spurious infinities of alternative calculi, this leads in the sequel, Part II, to the first closed-form expressions for complexity measures, couched either in terms of the Drazin inverse (negative-one power of a singular operator) or the eigenvalues and projection operators of the appropriate transition dynamic.
    Comment: 24 pages, 3 figures, 4 tables; current version always at http://csc.ucdavis.edu/~cmg/compmech/pubs/sdscpt1.ht
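    The Drazin inverse mentioned in the abstract can be illustrated numerically. The sketch below uses the known identity A^D = A^k (A^(2k+1))^+ A^k, valid for any k at least the index of A (here k = n, a safe bound), with the Moore-Penrose pseudoinverse; this is a generic numerical recipe, not the paper's spectral-projection construction.

```python
import numpy as np

def drazin(A):
    """Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k with k = n >= index(A)."""
    k = A.shape[0]                  # the index of an n x n matrix is at most n
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])         # a singular example matrix
AD = drazin(A)
k = A.shape[0]
# Defining properties of the Drazin inverse:
assert np.allclose(AD @ A @ AD, AD)          # AD A AD = AD
assert np.allclose(A @ AD, AD @ A)           # AD commutes with A
assert np.allclose(np.linalg.matrix_power(A, k + 1) @ AD,
                   np.linalg.matrix_power(A, k))  # A^(k+1) AD = A^k
print(AD)
```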

    Incremental Recompilation of Knowledge

    Approximating a general formula from above and below by Horn formulas (its Horn envelope and Horn core, respectively) was proposed by Selman and Kautz (1991, 1996) as a form of "knowledge compilation," supporting rapid approximate reasoning; on the negative side, this scheme is static in that it supports no updates, and it has certain complexity drawbacks pointed out by Kavvadias, Papadimitriou and Sideri (1993). On the other hand, the many frameworks and schemes proposed in the literature for theory update and revision are plagued by serious complexity-theoretic impediments, even in the Horn case, as was pointed out by Eiter and Gottlob (1992) and is further demonstrated in the present paper. More fundamentally, these schemes are not inductive, in that they may lose in a single update any positive properties of the represented sets of formulas (small size, Horn structure, etc.). In this paper we propose a new scheme, incremental recompilation, which combines Horn approximation and model-based updates; this scheme is inductive and very efficient, free of the problems facing its constituents. A set of formulas is represented by an upper and a lower Horn approximation. To update, we replace the upper Horn formula by the Horn envelope of its minimum-change update, and similarly the lower one by the Horn core of its update; the key fact which enables this scheme is that Horn envelopes and cores are easy to compute when the underlying formula is the result of a minimum-change update of a Horn formula by a clause. We conjecture that efficient algorithms are possible for more complex updates.
    Comment: See http://www.jair.org/ for any accompanying files
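    The model-theoretic fact that makes Horn envelopes well defined is that a propositional theory is Horn-expressible exactly when its set of models is closed under intersection, so the envelope's models are the intersection closure of the original models (Selman and Kautz). A minimal sketch of that closure, at toy scale:

```python
def horn_envelope_models(ms):
    """Close a set of models (frozensets of true atoms) under intersection."""
    closed = set(ms)
    frontier = set(ms)
    while frontier:
        new = {m & n for m in frontier for n in closed} - closed
        closed |= new
        frontier = new
    return closed

# {a,b} and {a,c} are models; the Horn envelope adds their intersection {a}.
ms = {frozenset({"a", "b"}), frozenset({"a", "c"})}
print(sorted(sorted(m) for m in horn_envelope_models(ms)))
# -> [['a'], ['a', 'b'], ['a', 'c']]
```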