Structural matching by discrete relaxation
This paper describes a Bayesian framework for performing relational graph matching by discrete relaxation. Our basic aim is to draw on this framework to provide a comparative evaluation of a number of contrasting approaches to relational matching. Broadly speaking, there are two main aspects to this study. Firstly, we focus on the issue of how relational inexactness may be quantified. We illustrate that several popular relational distance measures can be recovered as specific limiting cases of the Bayesian consistency measure. The second aspect of our comparison concerns the way in which structural inexactness is controlled. We investigate three different realisations of the matching process which draw on contrasting control models. The main conclusion of our study is that the active process of graph-editing outperforms the alternatives in terms of its ability to effectively control a large population of contaminating clutter.
Structural graph matching using the EM algorithm and singular value decomposition
This paper describes an efficient algorithm for inexact graph matching. The method is purely structural, that is, it uses only the edge or connectivity structure of the graph and does not draw on node or edge attributes. We make two contributions: 1) commencing from a probability distribution for matching errors, we show how the problem of graph matching can be posed as maximum-likelihood estimation using the apparatus of the EM algorithm; and 2) we cast the recovery of correspondence matches between the graph nodes in a matrix framework. This allows one to efficiently recover correspondence matches using the singular value decomposition. We experiment with the method on both real-world and synthetic data. Here, we demonstrate that the method offers comparable performance to more computationally demanding methods.
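As a rough illustration of the SVD step described above, the following numpy sketch shows how the closest orthogonal matrix to a noisy correspondence matrix recovers the node matches. The 4-node example, the noise level, and the argmax read-off are illustrative assumptions, not the paper's full EM iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A noisy estimate of node-correspondence strengths between two 4-node
# graphs (a permutation matrix plus noise; in the paper this matrix
# would come from the E-step, built from the edge structure alone).
perm = [2, 0, 3, 1]
M = np.eye(4)[perm] + 0.05 * rng.standard_normal((4, 4))

# M-step analogue: the orthogonal matrix maximising tr(P.T @ M) is
# U @ Vt from the SVD of M (i.e. replace the singular values by ones).
U, _, Vt = np.linalg.svd(M)
P = U @ Vt

recovered = P.argmax(axis=1)   # read off the correspondences; recovers perm
print(recovered.tolist())
```

Because the SVD is cubic in the number of nodes, this step is cheap compared with combinatorial search over correspondences, which is the efficiency argument the abstract makes.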
An Adynamical, Graphical Approach to Quantum Gravity and Unification
We use graphical field gradients in an adynamical, background independent
fashion to propose a new approach to quantum gravity and unification. Our
proposed reconciliation of general relativity and quantum field theory is based
on a modification of their graphical instantiations, i.e., Regge calculus and
lattice gauge theory, respectively, which we assume are fundamental to their
continuum counterparts. Accordingly, the fundamental structure is a graphical
amalgam of space, time, and sources (in parlance of quantum field theory)
called a "spacetimesource element." These are fundamental elements of space,
time, and sources, not source elements in space and time. The transition
amplitude for a spacetimesource element is computed using a path integral with
discrete graphical action. The action for a spacetimesource element is
constructed from a difference matrix K and source vector J on the graph, as in
lattice gauge theory. K is constructed from graphical field gradients so that
it contains a non-trivial null space and J is then restricted to the row space
of K, so that it is divergence-free and represents a conserved exchange of
energy-momentum. This construct of K and J represents an adynamical global
constraint between sources, the spacetime metric, and the energy-momentum
content of the element, rather than a dynamical law for time-evolved entities.
We use this approach via modified Regge calculus to correct proper distance in
the Einstein-deSitter cosmology model yielding a fit of the Union2 Compilation
supernova data that matches LambdaCDM without having to invoke accelerating
expansion or dark energy. A similar modification to lattice gauge theory
results in an adynamical account of quantum interference.
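The construct of K and J can be illustrated with a toy linear-algebra sketch, assuming K is a Laplacian-like difference matrix built from an incidence matrix of graphical field gradients on a 4-vertex graph; the graph and the source values are illustrative, not drawn from the paper.

```python
import numpy as np

# Edges of a small graph on 4 vertices (an assumed toy example).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
B = np.zeros((len(edges), 4))          # incidence matrix of field gradients
for k, (i, j) in enumerate(edges):
    B[k, i], B[k, j] = -1.0, 1.0

K = B.T @ B                            # difference matrix (graph Laplacian)

# K annihilates the constant vector, so it has a non-trivial null space.
ones = np.ones(4)
assert np.allclose(K @ ones, 0)

# Restricting J to the row space of K removes its component along the
# null space: the entries of J then sum to zero ("divergence-free",
# i.e. a conserved exchange between the sources).
J_raw = np.array([3.0, 1.0, -2.0, 4.0])
J = J_raw - J_raw.mean()               # projection onto the row space of K
print(J.sum())
```

This is the sense in which the abstract's global constraint ties K and J together: any J with a nonzero net component is simply inexpressible once it is confined to the row space of K.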
Graph matching using position coordinates and local features for image analysis
Finding the correspondences between two images is a crucial problem in the fields of computer vision and pattern recognition. It is relevant to a wide range of purposes, from object recognition applications in the areas of biometrics, document analysis and shape analysis, to applications related to multiple-view geometry such as pose recovery, structure from motion, and localisation and mapping.
Most existing techniques approach this problem either using local image features or using point-set registration methods (or a mixture of both). In the former, a sparse set of features is first extracted from the images and then characterised in the form of descriptor vectors using local image evidence. Features are associated according to the similarity between their descriptors. In the latter, the feature sets are regarded as point sets, which are associated using nonlinear optimisation techniques. These are iterative procedures that estimate the correspondence and alignment parameters in alternating steps.
Graphs are representations that account for binary relations between features. Introducing binary relations into the correspondence problem often leads to the so-called graph-matching problem. A number of methods exist in the literature aimed at finding approximate solutions to different instances of the graph-matching problem, which in most cases is NP-hard.
One part of our work is devoted to investigating the benefits of cross-bin measures for the comparison of local image features. The remainder, and main body of this thesis, is devoted to formulating both the image-feature association and point-set registration problems as instances of the graph-matching problem. In all cases we propose approximate algorithms to solve these problems and compare them with a number of existing methods belonging to different areas, such as outlier rejectors, point-set registration methods and other graph-matching methods.
The experiments show that in most cases the proposed methods outperform the rest. Occasionally the proposed methods either share the best performance with some competing method or obtain slightly worse results; in these cases, the proposed methods usually exhibit lower computational times.
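The benefit of cross-bin measures mentioned in the abstract can be sketched in a few lines: for 1-D normalised histograms the Earth Mover's Distance reduces to the L1 distance between the cumulative histograms, and unlike a bin-to-bin measure it grows with the size of a shift between bins. The histograms below are illustrative assumptions, not data from the thesis.

```python
import numpy as np

def emd_1d(h1, h2):
    # For 1-D normalised histograms the Earth Mover's Distance reduces
    # to the L1 distance between the cumulative histograms.
    return np.abs(np.cumsum(h1) - np.cumsum(h2)).sum()

# Three histograms whose mass sits in a single bin (assumed data):
# b is a one-bin shift of a, c is a three-bin shift of a.
a = np.array([1.0, 0.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0, 0.0])
c = np.array([0.0, 0.0, 0.0, 1.0])

# A bin-to-bin measure such as L1 cannot tell the two shifts apart...
assert np.abs(a - b).sum() == np.abs(a - c).sum() == 2.0
# ...whereas the cross-bin EMD grows with the size of the shift.
print(emd_1d(a, b), emd_1d(a, c))   # 1.0 3.0
```

This sensitivity to nearby-bin displacements is exactly what makes cross-bin measures attractive for comparing local image descriptors, where small appearance changes shift mass between neighbouring bins.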
A lightweight, graph-theoretic model of class-based similarity to support object-oriented code reuse.
The work presented in this thesis is principally concerned with the development of a method and set of tools designed to support the identification of class-based similarity in collections of object-oriented code. Attention is focused on enhancing the potential for software reuse in situations where a reuse process is either absent or informal, and the characteristics of the organisation are unsuitable, or resources unavailable, to promote and sustain a systematic approach to reuse. The approach builds on the definition of a formal, attributed, relational model that captures the inherent structure of class-based, object-oriented code. Based on code-level analysis, it relies solely on the structural characteristics of the code and the peculiarly object-oriented features of the class as an organising principle: classes, those entities comprising a class, and the intra- and inter-class relationships existing between them, are significant factors in defining a two-phase similarity measure as a basis for the comparison process. Established graph-theoretic techniques are adapted and applied via this model to the problem of determining similarity between classes. This thesis illustrates a successful transfer of techniques from the domains of molecular chemistry and computer vision. Both domains provide an existing template for the analysis and comparison of structures as graphs. The inspiration for representing classes as attributed relational graphs, and the application of graph-theoretic techniques and algorithms to their comparison, arose out of a well-founded intuition that a common basis in graph theory was sufficient to enable a reasonable transfer of these techniques to the problem of determining similarity in object-oriented code. The practical application of this work relates to the identification and indexing of instances of recurring, class-based, common structure present in established and evolving collections of object-oriented code.
A classification so generated additionally provides a framework for class-based matching over an existing code-base, both from the perspective of newly introduced classes, and search "templates" provided by those incomplete, iteratively constructed and refined classes associated with current and on-going development. The tools and techniques developed here provide support for enabling and improving shared awareness of reuse opportunity, based on analysing structural similarity in past and ongoing development; tools and techniques that can in turn be seen as part of a process of domain analysis, capable of stimulating the evolution of a systematic reuse ethic.
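The idea of treating a class as an attributed relational graph can be sketched minimally as follows. The node/edge encoding, the example classes, and the Jaccard-style overlap are illustrative assumptions, not the thesis's two-phase similarity measure.

```python
# Minimal sketch: a class as a small attributed relational graph,
# with edges of the form (source, relation, target).
def class_graph(name, methods, fields, parent=None):
    edges = {(name, "has-method", m) for m in methods}
    edges |= {(name, "has-field", f) for f in fields}
    if parent:
        edges.add((name, "inherits", parent))
    return edges

def relation_similarity(g1, g2):
    # Compare only the relational structure: the class's own name is
    # abstracted away, so similarity rests on shared members/relations.
    a = {(rel, dst) for _, rel, dst in g1}
    b = {(rel, dst) for _, rel, dst in g2}
    return len(a & b) / len(a | b)    # Jaccard overlap of edge sets

# Two hypothetical container classes that share part of their interface.
stack = class_graph("Stack", {"push", "pop", "peek"}, {"items"})
queue = class_graph("Queue", {"push", "pop", "front"}, {"items"})
print(relation_similarity(stack, queue))   # 0.6
```

A graph-matching formulation, as used in the thesis, would go further and align the member nodes themselves rather than intersecting labelled edge sets, but the representation step is the same.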
Univariate Versus Multivariate Modeling of Panel Data: Model Specification and Goodness-of-Fit Testing
Two approaches are commonly in use for analyzing panel data: the univariate, which arranges data in long format and estimates just one regression equation; and the multivariate, which arranges data in wide format and simultaneously estimates a set of regression equations. Although technical articles relating the two approaches exist, they do not seem to have had an impact in organizational research. This article revisits the connection between the univariate and multivariate approaches, elucidating conditions under which they yield the same—or similar—results, and discusses their complementarity. The article is addressed to applied researchers. For those familiar only with the univariate approach, it contributes conceptual simplicity in goodness-of-fit testing and a variety of tests for misspecification (Hausman test, heteroscedasticity, autocorrelation, etc.), and simplifies expanding the model to time-varying parameters, dynamics, measurement error, and so on. For all practitioners, the comparative and side-by-side analyses of the two approaches on two data sets—demonstration data and empirical data with missing values—contribute to broadening their perspective of panel data modeling and expanding their tools for analysis. Both univariate and multivariate analyses are performed in Stata and R.
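The long-versus-wide distinction at the heart of the article can be sketched with a toy reshape in plain Python (the panel values are assumed): the univariate approach fits one equation to the long records, while the multivariate approach works from the wide layout, with one equation per wave.

```python
# Toy panel: 3 subjects observed at 2 time points (assumed data).
long = [
    {"id": 1, "time": 1, "y": 10}, {"id": 1, "time": 2, "y": 12},
    {"id": 2, "time": 1, "y": 9},  {"id": 2, "time": 2, "y": 11},
    {"id": 3, "time": 1, "y": 14}, {"id": 3, "time": 2, "y": 15},
]

# Univariate approach: keep the long format and estimate ONE equation,
# with id and time entering as indices or dummy variables.
# Multivariate approach: reshape to wide -- one row per subject, one
# column (and one simultaneously estimated equation) per wave.
wide = {}
for row in long:
    wide.setdefault(row["id"], {})[row["time"]] = row["y"]

print(wide[1])   # {1: 10, 2: 12}: subject 1's outcomes across waves
```

Missing waves show up differently in the two layouts: a long file simply has fewer rows for that subject, whereas the wide row gains a missing cell, which is one reason the article's empirical example with missing values is instructive.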
Knowledge Worker Behavioral Responses and Job Outcomes in Mandatory Enterprise System Use Contexts
The three essays that comprise my dissertation are drawn from a longitudinal field study of the work process innovation of sourcing professionals at a large multinational paper products and related chemicals manufacturing firm. The focus of this study is an examination of how characteristics of the work process innovation context impact enterprise system (ES) acceptance, rich ES use behavior and the resulting individual-level job outcomes realized by knowledge workers in a strategic business process. The ES, an enterprise sourcing application, was introduced to innovate the work processes of employees who perform the sourcing business process.
Over a period of 12 months, we collected survey data at four points in time (pre-implementation; immediately following training on the new system; following six months of use; and following 12 months of use) to trace the innovation process as it unfolded. The three essays that comprise my dissertation focus on three key gaps in understanding and make three corresponding key contributions.
The first research essay focuses on the transition from an emphasis on behavioral intention to mental acceptance in mandatory use environments. This essay contributes to the technology acceptance literature by finding that work process characteristics and implementation characteristics are exogenous to beliefs about the technology, and that these beliefs are important to understanding mental acceptance in mandatory use contexts as well. The second and third research essays emphasize the transition from lean use concepts to conceptualizing, defining and measuring rich use behaviors, and show that use must be captured and elaborated on in context. This is pursued through the development of two rich use constructs reflective of the sourcing work context and the complementary finding of countervailing factors in the work process that may impede the positive impact of rich use behaviors on job benefits.
Shaping and Signaling Mathematics: Examining Cases of Beginning Middle School Mathematics Teachers’ Instructional Development
How learners understand content is interwoven with the practices in which they engage. Classroom experiences of how students engage with mathematical ideas and problems shape the mathematics that is learned (Boaler, 2002; Franke, Kazemi, & Battey, 2007), affecting the mathematical learning opportunities and the ways in which learners may view the subject and their own knowledge and capability. Consequently, teaching mathematics necessitates attention and sensitivity both to content and to students, and it involves managing dilemmas while maintaining productive relationships (Lampert, 2001; Potari & Jaworski, 2002; Brodie, 2010). For novice teachers navigating multiple demands and expectations, the period of teacher induction (the first years of a teaching career) marks a unique time of teacher learning, when new teachers try, take up, modify, and discard instructional practices, based on perceived effectiveness. The induction years are a time of rehearsal, formation, and evolution of teaching practice.
This dissertation presents a close study of instruction over time to illuminate the ways that normative practices may shape mathematical learning opportunities and signal messages about mathematics. The study examined the instructional practice of six novice middle school mathematics teachers teaching in a district with multiple ongoing initiatives to support mathematics instruction with an emphasis on rich tasks and discourse and new teachers’ learning. Applying an instrumental case study approach, the study used observation and interview data, analyzed with a grounded theory approach, to answer the research questions. The analysis illuminated multiple strands of normative practices that, when interwoven, composed instruction and shaped mathematical learning opportunities in either capped or promising ways. Over time these patterns tended to take hold, with certain practices amplified, supported by both contextual and individual factors.
In attending to the nature and qualities of instruction of novice teachers in the induction years, the study bridges math education and teacher education to provide insights into how teachers' actions shape what it means to do math in classrooms, what those actions signal about the discipline and what it means to know math, and what opportunities exist to support teacher capacity around teaching mathematics in a connected and relevant way.