    Notes on Feynman Integrals and Renormalization

    I review various aspects of Feynman integrals, regularization, and renormalization. Following Bloch, I focus on a linear-algebraic approach to the Feynman rules, and I try to bring together several renormalization methods found in the literature from a unifying point of view, using resolutions of singularities. In the second part of the paper, I briefly sketch the work of Belkale and Brosnan, and of Bloch, Esnault, and Kreimer, on the motivic nature of Feynman integrals.
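    As a concrete illustration of the kind of divergence and regularization the abstract refers to (a standard textbook computation, not taken from the paper itself): the logarithmically divergent one-loop bubble integral becomes finite in dimensional regularization by working in $d = 4 - \epsilon$ dimensions, where the divergence appears as a pole in $\epsilon$:

    $$\mu^{\epsilon} \int \frac{d^d k}{(2\pi)^d}\, \frac{1}{(k^2 + m^2)^2} \;=\; \frac{1}{(4\pi)^2} \left( \frac{2}{\epsilon} - \gamma_E + \ln \frac{4\pi \mu^2}{m^2} \right) + O(\epsilon).$$

    Renormalization then subtracts the $2/\epsilon$ pole (plus scheme-dependent constants) before taking $\epsilon \to 0$.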

    Quantum Spacetime: a Disambiguation

    We review an approach to non-commutative geometry in which models are constructed by quantisation of the coordinates. In particular we focus on the full DFR model and its irreducible components; the (arbitrary) restriction to a particular irreducible component is often referred to as the "canonical quantum spacetime". The aim is to distinguish and compare these approaches from various points of view, including motivations, prescriptions for quantisation, the choice of mathematical objects and concepts, and approaches to dynamics and to covariance.
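    For orientation, the standard DFR commutation relations (quoted from the general literature, not verbatim from this review) quantise the coordinates as self-adjoint operators $q^\mu$ satisfying

    $$[q^\mu, q^\nu] = i \lambda_P^2\, Q^{\mu\nu},$$

    where $\lambda_P$ is the Planck length and the operators $Q^{\mu\nu}$ are central. Restricting $Q^{\mu\nu}$ to a fixed antisymmetric matrix $\theta^{\mu\nu}$ selects one irreducible component, the "canonical quantum spacetime" with $[q^\mu, q^\nu] = i \theta^{\mu\nu}$.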

    Modeling and Analysis of Software Product Line Variability in Clafer

    Both feature and class modeling are used in Software Product Line (SPL) engineering to model variability. Feature models are used primarily to represent user-visible characteristics (i.e., features) of products, whereas class models are often used to model types of components and connectors in a product-line architecture. Previous work has explored using a single language to express both configurations of features and components; the goal was to simplify the definition and analysis of feature-to-component mappings and to allow modeling component options as features. A prominent example of this approach is cardinality-based feature modeling, which extends feature models with multiple instantiation and references to express component-like, replicated features. Another example is supporting feature modeling in a class modeling language, such as UML or MOF, using their profiling mechanisms and a stylized use of composition. Both examples have notable drawbacks: cardinality-based feature modeling lacks a constraint language and a well-defined semantics, while encoding feature models as class models, and evolving those encodings, adds extra complexity. This dissertation presents Clafer (class, feature, reference), a class modeling language with first-class support for feature modeling. Clafer can express rich structural models augmented with complex constraints: domain, variability, and component models, as well as meta-models. Clafer supports (i) class-based meta-models, (ii) object models (with uncertainty, if needed), (iii) feature models with attributes and multiple instantiation, (iv) configurations of feature models, (v) mixtures of meta- and feature models and model templates, and (vi) first-order logic constraints. Clafer also makes it possible to arrange models into multiple specialization and extension layers via constraints and inheritance. At the same time, we wanted Clafer to build upon as few concepts as possible and to be easy to learn; we therefore unify the basic modeling constructs into a single concept, called a clafer, so that Clafer is not a hybrid language. The language is supported by tools for SPL verification and optimization. We identify several key mechanisms that allow a class modeling language to express feature models concisely, and we give Clafer a formal semantics built in a novel, structurally explicit way. As Clafer subsumes cardinality-based feature modeling with attributes, references, and constraints, we are the first to precisely define the semantics of such models. We also explore the notion of partial instantiation, which allows modeling with uncertainty and variability. We show that object-oriented modeling languages with no direct support for partial instances can support them via class modeling, using subclassing and the strengthening of multiplicity constraints, and we make this encoding precise and general. Clafer uses this encoding and pushes the idea further: it provides a syntactic unification of types and (partial) instances via subclassing and redefinition. We evaluate Clafer analytically and experimentally. The analytical evaluation shows that Clafer can concisely express feature and meta-models via a uniform syntax and unified semantics. The experimental evaluation shows that (1) Clafer can express a variety of realistic rich structural models with complex constraints, such as variability models, meta-models, model templates, and domain models; and (2) useful analyses can be performed within seconds.
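    As a rough illustration of what configurations of a feature model with constraints look like (a toy brute-force sketch in Python; Clafer's actual syntax differs, and its tooling delegates such analyses to constraint solvers rather than enumeration), consider a small product line with an xor feature group and one cross-tree constraint:

        # Toy feature model: an xor engine group plus a cross-tree constraint.
        # Valid configurations are enumerated by brute force, purely to
        # illustrate the notion of "configurations of a feature model".
        from itertools import product

        FEATURES = ["engine_gas", "engine_electric", "heated_seats", "solar_roof"]

        def valid(cfg):
            # xor group: exactly one engine must be selected
            if cfg["engine_gas"] == cfg["engine_electric"]:
                return False
            # cross-tree constraint: solar_roof requires engine_electric
            if cfg["solar_roof"] and not cfg["engine_electric"]:
                return False
            return True

        for bits in product([False, True], repeat=len(FEATURES)):
            cfg = dict(zip(FEATURES, bits))
            if valid(cfg):
                print(sorted(f for f in FEATURES if cfg[f]))

    Each printed feature set is one valid product of the model; the analyses evaluated above answer such questions (consistency, instance finding, optimization) without exhaustive enumeration.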

    The Jones polynomial: quantum algorithms and applications in quantum complexity theory

    We analyze relationships between quantum computation and a family of generalizations of the Jones polynomial. Extending recent work by Aharonov et al., we give efficient quantum circuits for implementing the unitary Jones-Wenzl representations of the braid group. We use these to provide new quantum algorithms for approximately evaluating a family of specializations of the HOMFLYPT two-variable polynomial of trace closures of braids. We also give algorithms for approximating the Jones polynomial of a general class of closures of braids at roots of unity. Next we provide a self-contained proof of a result of Freedman et al. that any quantum computation can be replaced by an additive approximation of the Jones polynomial, evaluated at almost any primitive root of unity. Our proof encodes two-qubit unitaries into the rectangular representation of the eight-strand braid group. We then give QCMA-complete and PSPACE-complete problems that are based on braids. We conclude with direct proofs that evaluating the Jones polynomial of the plat closure at most primitive roots of unity is a #P-hard problem, while learning its most significant bit is PP-hard, circumventing the usual route through the Tutte polynomial and graph coloring.
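    For reference, one standard route to the Jones polynomial (under a common convention; not excerpted from the paper) is via the Kauffman bracket, defined by the skein relations

    $$\langle \bigcirc \rangle = 1, \qquad \langle L \sqcup \bigcirc \rangle = (-A^{2} - A^{-2})\, \langle L \rangle, \qquad \langle L \rangle = A\, \langle L_{0} \rangle + A^{-1}\, \langle L_{\infty} \rangle,$$

    followed by a writhe correction for an oriented diagram $L$:

    $$V_{L}(t) = \left. (-A)^{-3 w(L)}\, \langle L \rangle \right|_{A = t^{-1/4}}.$$

    The additive-approximation results above concern estimating such values at roots of unity, where exact evaluation is #P-hard.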

    Regularity Properties and Pathologies of Position-Space Renormalization-Group Transformations

    We reconsider the conceptual foundations of the renormalization-group (RG) formalism, and prove some rigorous theorems on the regularity properties and possible pathologies of the RG map. Regarding regularity, we show that the RG map, defined on a suitable space of interactions (= formal Hamiltonians), is always single-valued and Lipschitz continuous on its domain of definition. This rules out a recently proposed scenario for the RG description of first-order phase transitions. On the pathological side, we make rigorous some arguments of Griffiths, Pearce and Israel, and prove in several cases that the renormalized measure is not a Gibbs measure for any reasonable interaction. This means that the RG map is ill-defined, and that the conventional RG description of first-order phase transitions is not universally valid. For decimation or Kadanoff transformations applied to the Ising model in dimension $d \ge 3$, these pathologies occur in a full neighborhood $\{ \beta > \beta_0 ,\, |h| < \epsilon(\beta) \}$ of the low-temperature part of the first-order phase-transition surface. For block-averaging transformations applied to the Ising model in dimension $d \ge 2$, the pathologies occur at low temperatures for arbitrary magnetic-field strength. Pathologies may also occur in the critical region for Ising models in dimension $d \ge 4$. We discuss in detail the distinction between Gibbsian and non-Gibbsian measures, and give a rather complete catalogue of the known examples. Finally, we discuss the heuristic and numerical evidence on RG pathologies in the light of our rigorous theorems.
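    For contrast with these pathologies, the textbook case where a decimation map is exact and unproblematic (the one-dimensional Ising model, not a result of this paper): summing out every other spin at coupling $K = \beta J$ and zero field gives the well-defined recursion

    $$e^{2K'} = \cosh(2K), \qquad \text{equivalently} \qquad \tanh K' = \tanh^{2} K,$$

    acting on the single coupling $K$. The theorems above show that in dimension $d \ge 3$, at low temperature, no analogous renormalized interaction need exist.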

    Students’ Development of Geometric Reasoning About the Derivative of Complex-Valued Functions

    The purpose of this study was to explore the nature of students’ reasoning about the derivative of a complex-valued function, and to study ways in which they developed this reasoning while working with Geometer’s Sketchpad (GSP). The participants in this study were four students from one undergraduate complex analysis class. The development of participants’ reasoning about the derivative of a complex-valued function was captured via video-recording and screen-capture software in a four-day interview sequence consisting of a two-hour-long interview each day. This reasoning was interpreted through the theoretical perspective of embodied cognition. The findings indicated that students manifested embodied reasoning through gesture and speech, through algebraic and geometric inscriptions, and through interaction with the physical environment and the virtual environment provided by GSP. The findings further indicated that students needed to advance their geometric reasoning about the derivative of a complex-valued function in three essential ways in order to reason geometrically about the derivative as a local linear approximation. First, with help from gesture and speech, they recognized that they did not know how to characterize a linear complex-valued function. Second, with help from algebraic and geometric inscriptions, they reasoned that a linear complex-valued function f(z) rotates and dilates every circle by the same amounts, Arg(f'(z)) and |f'(z)| respectively. Finally, through embodied reasoning in both the virtual and physical environments, students recognized the need to focus on how a complex-valued function rotates and dilates small circles only. These findings suggest that one approach to improving student learning about the derivative of a complex-valued function is to highlight these three geometric aspects of the derivative, and to offer students opportunities to reason about this geometry in the embodied ways listed above.
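    A worked instance of the target geometric reasoning (an illustration, not an episode from the study): near $z_{0}$, an analytic function is locally linear, $f(z) \approx f(z_{0}) + f'(z_{0})(z - z_{0})$, so a small circle centred at $z_{0}$ is rotated by $\mathrm{Arg}\, f'(z_{0})$ and dilated by $|f'(z_{0})|$. For $f(z) = z^{2}$ at $z_{0} = 1 + i$,

    $$f'(z_{0}) = 2z_{0} = 2 + 2i, \qquad |f'(z_{0})| = 2\sqrt{2}, \qquad \mathrm{Arg}\, f'(z_{0}) = \frac{\pi}{4},$$

    so sufficiently small circles around $1 + i$ are approximately dilated by $2\sqrt{2}$ and rotated by $45°$.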

    Semantic Similarity of Spatial Scenes

    The formalization of similarity in spatial information systems can unleash their functionality and contribute technology that is not only useful but also desirable to broad groups of users. As a paradigm for information retrieval, similarity supersedes tedious querying techniques and unveils novel ways for user-system interaction by naturally supporting modalities such as speech and sketching. As a tool within the scope of a broader objective, it can facilitate such diverse tasks as data integration, landmark determination, and prediction-making. This potential has motivated the development of several similarity models within the geospatial and computer science communities. Despite the merit of these studies, their cognitive plausibility can be limited by neglect of well-established psychological principles about the properties and behaviors of similarity. Moreover, such approaches are typically guided by experience, intuition, and observation, and thereby often rely on narrow perspectives or restrictive assumptions that produce inflexible and incompatible measures. This thesis consolidates such fragmentary efforts and integrates them, along with novel formalisms, into a scalable, comprehensive, and cognitively-sensitive framework for similarity queries in spatial information systems. Three conceptually different similarity queries, at the levels of attributes, objects, and scenes, are distinguished. An analysis of the relationship between similarity and change provides a unifying basis for the approach and a theoretical foundation for measures satisfying important similarity properties such as asymmetry and context dependence. The classification of attributes into categories with common structural and cognitive characteristics drives the implementation of a small core of generic functions able to perform any type of attribute-value assessment. Appropriate techniques combine such atomic assessments to compute similarities at the object level and to handle more complex inquiries with multiple constraints. These techniques, along with a solid graph-theoretical methodology adapted to the particularities of the geospatial domain, provide the foundation for reasoning about scene similarity queries. Provisions are made so that all methods comply with major psychological findings about people’s perceptions of similarity. An experimental evaluation supplies the main result of this thesis, which separates psychological findings with a major impact on the results from those that can be safely incorporated into the framework through computationally simpler alternatives.
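    To make the asymmetry property concrete (a minimal sketch assuming Tversky's ratio model from the psychological literature; the thesis's own measures are not reproduced here):

        # Tversky's ratio model over feature sets: alpha weights the features
        # of a that are absent from b, beta weights the features of b absent
        # from a. With alpha != beta the measure is asymmetric, matching the
        # psychological finding that similarity is direction-dependent.
        def tversky(a: set, b: set, alpha: float = 0.8, beta: float = 0.2) -> float:
            common = len(a & b)
            return common / (common + alpha * len(a - b) + beta * len(b - a))

        sketch = {"road", "lake"}                            # a sparse user sketch
        scene = {"road", "lake", "building", "park"}         # a richer stored scene

        print(tversky(sketch, scene))  # ~0.833: sketch omissions penalized lightly
        print(tversky(scene, sketch))  # ~0.556: the reverse direction scores lower

    The direction dependence is useful for sketch-based retrieval, where a sparse sketch should match a richer scene without being penalized for what the user left out.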

    Modeling economic systems as locally-constructive sequential games

    Real-world economies are open-ended dynamic systems consisting of heterogeneous interacting participants. Human participants are decision-makers who strategically take into account the past actions and potential future actions of other participants. All participants are forced to be locally constructive, meaning that their actions at any given time must be based on their local states; and participant actions at any given time affect future local states. Taken together, these essential properties imply that real-world economies are locally-constructive sequential games. This paper discusses a modeling approach, Agent-based Computational Economics (ACE), that permits researchers to study economic systems from this point of view. ACE modeling principles and objectives are first concisely presented and explained. The remainder of the paper then highlights challenging issues and edgier explorations that ACE researchers are currently pursuing.
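    A skeletal illustration of locally-constructive agents (a minimal sketch of the general idea, not code from the paper): each agent's act() reads only its own local state, and each round's outcome feeds back into future local states:

        # Minimal agent-based simulation loop: agents choose actions from
        # local state only (local constructivity), and actions taken now
        # shape future local states (sequentiality).
        import random

        class Trader:
            def __init__(self):
                self.cash = 100.0          # local state
                self.last_price = 10.0     # local observation of the market

            def act(self):
                # Decision uses only local state: bid near the last price
                # seen, never global information about other agents.
                return min(self.cash, self.last_price * random.uniform(0.9, 1.1))

        def step(agents):
            bids = [a.act() for a in agents]           # simultaneous moves
            clearing_price = sum(bids) / len(bids)     # toy market clearing
            for a, bid in zip(agents, bids):           # update local states
                a.cash += clearing_price - bid
                a.last_price = clearing_price

        agents = [Trader() for _ in range(5)]
        for _ in range(3):
            step(agents)
        print([round(a.cash, 2) for a in agents])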