
    Structural matching by discrete relaxation

    This paper describes a Bayesian framework for performing relational graph matching by discrete relaxation. Our basic aim is to draw on this framework to provide a comparative evaluation of a number of contrasting approaches to relational matching. Broadly speaking, there are two main aspects to this study. First, we focus on the issue of how relational inexactness may be quantified. We illustrate that several popular relational distance measures can be recovered as specific limiting cases of the Bayesian consistency measure. The second aspect of our comparison concerns the way in which structural inexactness is controlled. We investigate three different realizations of the matching process which draw on contrasting control models. The main conclusion of our study is that the active process of graph-editing outperforms the alternatives in terms of its ability to effectively control a large population of contaminating clutter.
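
    To make the consistency idea concrete, the sketch below (Python) scores one candidate neighbourhood labelling against a dictionary of structure-preserving mappings and then updates labels by a greedy discrete-relaxation loop. This is a minimal sketch assuming the Hamming-distance dictionary formulation commonly associated with this framework; the function names, the error probability `p_err`, and the greedy control loop are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def consistency(assignment, dictionary, p_err=0.1):
    """Score one super-clique labelling against a dictionary of
    structure-preserving mappings; exponential in the Hamming distance."""
    k = np.log((1.0 - p_err) / p_err)          # assumed label-error constant
    scores = [np.exp(-k * sum(a != b for a, b in zip(assignment, item)))
              for item in dictionary]
    return sum(scores) / len(dictionary)

def discrete_relaxation(labels, assignment, neighbours, dictionaries, iters=10):
    """Greedy relaxation: re-label each data-graph node with the model-graph
    label that maximises the consistency of its neighbourhood."""
    for _ in range(iters):
        for node, neigh in neighbours.items():
            assignment[node] = max(
                labels,
                key=lambda lab: consistency(
                    (lab,) + tuple(assignment[m] for m in neigh),
                    dictionaries[node]))
    return assignment
```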

    Structural graph matching using the EM algorithm and singular value decomposition

    This paper describes an efficient algorithm for inexact graph matching. The method is purely structural, that is, it uses only the edge or connectivity structure of the graph and does not draw on node or edge attributes. We make two contributions: 1) commencing from a probability distribution for matching errors, we show how the problem of graph matching can be posed as maximum-likelihood estimation using the apparatus of the EM algorithm; and 2) we cast the recovery of correspondence matches between the graph nodes in a matrix framework. This allows one to efficiently recover correspondence matches using the singular value decomposition. We experiment with the method on both real-world and synthetic data. Here, we demonstrate that the method offers comparable performance to more computationally demanding methods.
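
    As a rough illustration of the matrix framework, the sketch below (Python/NumPy) alternates between computing a structural-agreement benefit matrix from the two adjacency matrices and projecting it onto an assignment with the singular value decomposition. It is a simplified stand-in for the paper's maximum-likelihood updates, not the exact EM equations; the greedy discretisation step in particular is an assumption.

```python
import numpy as np

def svd_assignment(benefit):
    """Project a benefit matrix onto the nearest orthogonal matrix via SVD,
    then binarise greedily into a one-to-one match matrix."""
    u, _, vt = np.linalg.svd(benefit)
    p = u @ np.eye(*benefit.shape) @ vt        # orthogonal projection
    match = np.zeros_like(p)
    for i in np.argsort(-p, axis=None):        # greedy one-to-one picks
        r, c = np.unravel_index(i, p.shape)
        if not match[r].any() and not match[:, c].any():
            match[r, c] = 1.0
    return match

def em_structural_match(A_data, A_model, iters=20):
    """Alternate between (E) measuring structural agreement under the current
    correspondences and (M) recovering hard matches by SVD."""
    n, m = len(A_data), len(A_model)
    S = np.full((n, m), 1.0 / m)               # soft correspondence matrix
    for _ in range(iters):
        benefit = A_data @ S @ A_model         # E-step: structural agreement
        S = svd_assignment(benefit)            # M-step: SVD-based assignment
    return S
```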

    An Adynamical, Graphical Approach to Quantum Gravity and Unification

    We use graphical field gradients in an adynamical, background-independent fashion to propose a new approach to quantum gravity and unification. Our proposed reconciliation of general relativity and quantum field theory is based on a modification of their graphical instantiations, i.e., Regge calculus and lattice gauge theory, respectively, which we assume are fundamental to their continuum counterparts. Accordingly, the fundamental structure is a graphical amalgam of space, time, and sources (in the parlance of quantum field theory) called a "spacetimesource element." These are fundamental elements of space, time, and sources, not source elements in space and time. The transition amplitude for a spacetimesource element is computed using a path integral with discrete graphical action. The action for a spacetimesource element is constructed from a difference matrix K and source vector J on the graph, as in lattice gauge theory. K is constructed from graphical field gradients so that it contains a non-trivial null space, and J is then restricted to the row space of K, so that it is divergence-free and represents a conserved exchange of energy-momentum. This construct of K and J represents an adynamical global constraint between sources, the spacetime metric, and the energy-momentum content of the element, rather than a dynamical law for time-evolved entities. We use this approach via modified Regge calculus to correct proper distance in the Einstein-de Sitter cosmology model, yielding a fit of the Union2 Compilation supernova data that matches LambdaCDM without having to invoke accelerating expansion or dark energy. A similar modification to lattice gauge theory results in an adynamical account of quantum interference.
    Comment: 47 pages of text, 14 figures; revised per recent results, e.g., the dark energy result.
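
    A toy numerical illustration of the stated constraint (Python/NumPy): build a first-difference matrix K on a small oriented graph, confirm it has a non-trivial null space, and restrict a source vector J to the row space of K so that the retained component is orthogonal to that null space. The four-node cycle, the particular J, and the finite-difference stencil are arbitrary assumptions for illustration, not the paper's Regge-calculus or lattice-gauge construction.

```python
import numpy as np

# Oriented edges of a 4-node cycle; K is the discrete gradient on this graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n_nodes = 4
K = np.zeros((len(edges), n_nodes))
for row, (i, j) in enumerate(edges):
    K[row, i], K[row, j] = -1.0, 1.0          # finite-difference stencil

# Non-trivial null space: constant node functions (rank deficiency of 1).
rank = np.linalg.matrix_rank(K)
print("rank", rank, "nullity", n_nodes - rank)   # -> rank 3, nullity 1

# Restrict an arbitrary source vector J to the row space of K.
J = np.array([2.0, -1.0, 0.5, 3.0])
row_basis = np.linalg.svd(K)[2][:rank]           # orthonormal row-space basis
J_restricted = row_basis.T @ (row_basis @ J)
print(J_restricted.sum())                        # ~0: component along the
                                                 # constant null vector removed
```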

    Graph matching using position coordinates and local features for image analysis

    Finding the correspondences between two images is a crucial problem in the fields of computer vision and pattern recognition. It is relevant to a wide range of purposes, from object-recognition applications in the areas of biometrics, document analysis, and shape analysis, to applications related to multiple-view geometry such as pose recovery, structure from motion, and localization and mapping. Most existing techniques approach this problem either using local image features or using point-set registration methods (or a mixture of both). In the former, a sparse set of features is first extracted from the images and then characterized in the form of descriptor vectors using local image evidence; features are matched according to the similarity of their descriptors. In the latter, the feature sets are regarded as point sets, which are matched using nonlinear optimization techniques; these are iterative procedures that estimate the correspondence and alignment parameters in alternating steps. Graphs are representations that account for binary relations between features. Taking binary relations into account in the correspondence problem often leads to the so-called graph-matching problem. A number of methods exist in the literature aimed at finding approximate solutions to different instances of the graph-matching problem, which in most cases is NP-hard. Part of our work is devoted to investigating the benefits of cross-bin measures for comparing local image features. The main body of this thesis is devoted to formulating both the image-feature matching and the point-set registration problems as instances of the graph-matching problem. In all cases we propose approximate algorithms to solve these problems and compare them with a number of existing methods belonging to different areas, such as outlier rejectors, point-set registration methods, and other graph-matching methods. The experiments show that in most cases the proposed methods outperform the rest. Occasionally the proposed methods either share the best performance with a competing method or obtain slightly worse results; in these cases, the proposed methods usually exhibit lower computational times.
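
    As a generic illustration of how position coordinates and local features can be combined in a single graph-matching objective, the sketch below (Python/NumPy) follows the well-known spectral-relaxation recipe of Leordeanu and Hebert: descriptor similarity enters on the diagonal of an affinity matrix and preservation of pairwise point distances enters off-diagonal. This is an illustrative baseline only, not one of the algorithms proposed in the thesis, and the parameter names (`sigma_d`, `sigma_g`) are assumptions.

```python
import numpy as np

def spectral_graph_match(pts1, desc1, pts2, desc2, sigma_d=0.2, sigma_g=0.1):
    """Generic spectral-relaxation matcher combining point coordinates and
    local descriptors in one pairwise affinity matrix."""
    n1, n2 = len(pts1), len(pts2)
    cands = [(i, a) for i in range(n1) for a in range(n2)]
    W = np.zeros((len(cands), len(cands)))
    for p, (i, a) in enumerate(cands):
        # unary affinity from local-feature (descriptor) similarity
        W[p, p] = np.exp(-np.linalg.norm(desc1[i] - desc2[a]) ** 2 / sigma_d)
        for q, (j, b) in enumerate(cands):
            if i != j and a != b:
                # pairwise affinity from preservation of point distances
                d = np.linalg.norm(pts1[i] - pts1[j]) - np.linalg.norm(pts2[a] - pts2[b])
                W[p, q] = np.exp(-d ** 2 / sigma_g)
    # principal eigenvector of W scores the candidate assignments
    _, vecs = np.linalg.eigh(W)
    score = np.abs(vecs[:, -1])
    # greedy discretisation respecting one-to-one constraints
    matches, used_i, used_a = [], set(), set()
    for p in np.argsort(-score):
        i, a = cands[p]
        if i not in used_i and a not in used_a:
            matches.append((i, a))
            used_i.add(i); used_a.add(a)
    return matches
```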

    A lightweight, graph-theoretic model of class-based similarity to support object-oriented code reuse.

    The work presented in this thesis is principally concerned with the development of a method and set of tools designed to support the identification of class-based similarity in collections of object-oriented code. Attention is focused on enhancing the potential for software reuse in situations where a reuse process is either absent or informal, and the characteristics of the organisation are unsuitable, or resources unavailable, to promote and sustain a systematic approach to reuse. The approach builds on the definition of a formal, attributed, relational model that captures the inherent structure of class-based, object-oriented code. Based on code-level analysis, it relies solely on the structural characteristics of the code and the peculiarly object-oriented features of the class as an organising principle: classes, those entities comprising a class, and the intra- and inter-class relationships existing between them, are significant factors in defining a two-phase similarity measure as a basis for the comparison process. Established graph-theoretic techniques are adapted and applied via this model to the problem of determining similarity between classes. This thesis illustrates a successful transfer of techniques from the domains of molecular chemistry and computer vision. Both domains provide an existing template for the analysis and comparison of structures as graphs. The inspiration for representing classes as attributed relational graphs, and the application of graph-theoretic techniques and algorithms to their comparison, arose out of a well-founded intuition that a common basis in graph theory was sufficient to enable a reasonable transfer of these techniques to the problem of determining similarity in object-oriented code. The practical application of this work relates to the identification and indexing of instances of recurring, class-based, common structure present in established and evolving collections of object-oriented code. A classification so generated additionally provides a framework for class-based matching over an existing code-base, both from the perspective of newly introduced classes, and search "templates" provided by those incomplete, iteratively constructed and refined classes associated with current and on-going development. The tools and techniques developed here provide support for enabling and improving shared awareness of reuse opportunity, based on analysing structural similarity in past and ongoing development; tools and techniques that can in turn be seen as part of a process of domain analysis, capable of stimulating the evolution of a systematic reuse ethic.
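
    The thesis defines its own two-phase similarity measure; purely to illustrate the representation it builds on, the sketch below (Python, using networkx) encodes two toy classes as attributed relational graphs, with nodes for the class and its members and edges labelled by relationship kind, and compares them with a generic attributed graph edit distance. The attribute names and the toy classes are assumptions for illustration, not the thesis's model.

```python
import networkx as nx

def class_graph(name, members, relations):
    """Attributed relational graph for one class: nodes for the class and its
    members (methods/fields), edges labelled with the relationship kind."""
    g = nx.Graph(name=name)
    g.add_node(name, kind="class")
    for member, kind in members:
        g.add_node(member, kind=kind)
        g.add_edge(name, member, rel="declares")
    for a, b, rel in relations:
        g.add_edge(a, b, rel=rel)
    return g

# Two toy classes with overlapping structure.
g1 = class_graph("Stack", [("push", "method"), ("pop", "method"), ("items", "field")],
                 [("push", "items", "uses"), ("pop", "items", "uses")])
g2 = class_graph("Queue", [("enqueue", "method"), ("dequeue", "method"), ("items", "field")],
                 [("enqueue", "items", "uses"), ("dequeue", "items", "uses")])

# One generic graph-theoretic comparison: attributed graph edit distance,
# matching nodes by kind and edges by relationship label.
dist = nx.graph_edit_distance(
    g1, g2,
    node_match=lambda a, b: a["kind"] == b["kind"],
    edge_match=lambda a, b: a["rel"] == b["rel"])
print(dist)   # -> 0.0 here: the toy classes are structurally identical
```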

    Univariate Versus Multivariate Modeling of Panel Data: Model Specification and Goodness-of-Fit Testing

    Two approaches are commonly in use for analyzing panel data: the univariate, which arranges data in long format and estimates just one regression equation; and the multivariate, which arranges data in wide format and simultaneously estimates a set of regression equations. Although technical articles relating the two approaches exist, they do not seem to have had an impact in organizational research. This article revisits the connection between the univariate and multivariate approaches, elucidating conditions under which they yield the same—or similar—results, and discusses their complementarity. The article is addressed to applied researchers. For those familiar only with the univariate approach, it contributes conceptual simplicity in goodness-of-fit testing and a variety of tests for misspecification (Hausman test, heteroscedasticity, autocorrelation, etc.), and simplifies expanding the model to time-varying parameters, dynamics, measurement error, and so on. For all practitioners, the comparative, side-by-side analyses of the two approaches on two data sets—demonstration data and empirical data with missing values—contribute to broadening their perspective of panel data modeling and expanding their tools for analysis. Both univariate and multivariate analyses are performed in Stata and R.
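
    The article's analyses are carried out in Stata and R; as a minimal sketch of the long-versus-wide distinction in Python, the fragment below fits a single pooled equation with unit dummies on long-format data (the univariate arrangement) and then reshapes the same panel to wide format, where each wave could be modelled as its own equation in a multivariate (e.g. SEM) setup. The variable names and values are hypothetical, used only to show the reshaping, and are not data from the article.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical balanced panel in long format: one row per (unit, year).
long_df = pd.DataFrame({
    "unit": [1, 1, 2, 2, 3, 3],
    "year": [2001, 2002] * 3,
    "y":    [2.0, 2.5, 1.0, 1.4, 3.1, 3.6],
    "x":    [1.0, 1.2, 0.4, 0.5, 1.8, 2.0],
})

# Univariate approach: stack all waves and estimate one equation
# (here a fixed-effects specification via unit dummies).
fe = smf.ols("y ~ x + C(unit)", data=long_df).fit()
print(fe.params["x"])

# Multivariate approach: reshape to wide format, one column per wave,
# so each wave can be modelled as its own equation.
wide_df = long_df.pivot(index="unit", columns="year", values=["y", "x"])
print(wide_df)
```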

    Knowledge Worker Behavioral Responses and Job Outcomes in Mandatory Enterprise System Use Contexts

    The three essays that comprise my dissertation are drawn from a longitudinal field study of the work process innovation of sourcing professionals at a large multinational paper products and related chemicals manufacturing firm. The focus of this study is an examination of how characteristics of the work process innovation context impact enterprise system (ES) acceptance, rich ES use behavior, and the resulting individual-level job outcomes realized by knowledge workers in a strategic business process. The ES, an enterprise sourcing application, was introduced to innovate the work processes of employees who perform the sourcing business process. Over a period of 12 months, we collected survey data at four points in time (pre-implementation; immediately following training on the new system; following six months of use; and following 12 months of use) to trace the innovation process as it unfolded. The three essays focus on three key gaps in understanding and make three corresponding key contributions. The first research essay focuses on the transition from an emphasis on behavioral intention to mental acceptance in mandatory use environments. This essay contributes to the technology acceptance literature by finding that work process characteristics and implementation characteristics are exogenous to beliefs about the technology, and that these beliefs are important to understanding mental acceptance in mandatory use contexts as well. The second and third research essays emphasize the transition from lean use concepts to conceptualizing, defining, and measuring rich use behaviors, and show that use must be captured and elaborated on in context. This is pursued through the development of two rich use constructs reflective of the sourcing work context and the complementary finding of countervailing factors in the work process that may impede the positive impact of rich use behaviors on job benefits.

    Shaping and Signaling Mathematics: Examining Cases of Beginning Middle School Mathematics Teachers’ Instructional Development

    How learners understand content is interwoven with the practices in which they engage. Classroom experiences of how students engage with mathematical ideas and problems shape the mathematics that is learned (Boaler, 2002; Franke, Kazemi, & Battey, 2007), affecting the mathematical learning opportunities and the ways in which learners may view the subject and their own knowledge and capability. Consequently, teaching mathematics necessitates attention and sensitivity both to content and to students, and it involves managing dilemmas while maintaining productive relationships (Lampert, 2001; Potari & Jaworski, 2002; Brodie, 2010). For novice teachers navigating multiple demands and expectations, the period of teacher induction (the first years of a teaching career) marks a unique time of teacher learning, when new teachers try, take up, modify, and discard instructional practices based on perceived effectiveness. The induction years are a time of rehearsal, formation, and evolution of teaching practice. This dissertation presents a close study of instruction over time to illuminate the ways that normative practices may shape mathematical learning opportunities and signal messages about mathematics. The study examined the instructional practice of six novice middle school mathematics teachers teaching in a district with multiple ongoing initiatives to support mathematics instruction with an emphasis on rich tasks and discourse and to support new teachers' learning. Applying an instrumental case study approach, the study used observation and interview data, analyzed with a grounded theory approach, to answer the research questions. The analysis illuminated multiple strands of normative practices that, when interwoven, composed instruction and shaped mathematical learning opportunities in either capped or promising ways. Over time these patterns tended to take hold, with certain practices amplified, supported by both contextual and individual factors. In attending to the nature and qualities of instruction of novice teachers in the induction years, the study bridges math education and teacher education to provide insights into how teachers' actions shape what it means to do math in classrooms, what those actions signal about the discipline and what it means to know math, and what opportunities exist to support teacher capacity around teaching mathematics in a connected and relevant way.