
    The equidistant dimension of graphs

    A subset S of vertices of a connected graph G is a distance-equalizer set if, for every two distinct vertices x, y ∈ V(G) \ S, there is a vertex w ∈ S such that the distances from x and y to w are the same. The equidistant dimension of G is the minimum cardinality of a distance-equalizer set of G. This paper introduces this parameter and explores its properties and applications to other mathematical problems, not necessarily in the context of graph theory. Concretely, we first establish some bounds concerning the order, the maximum degree, the clique number, and the independence number, and characterize all graphs attaining some extremal values. We then study the equidistant dimension of several families of graphs (complete and complete multipartite graphs, bistars, paths, cycles, and Johnson graphs), proving that, in the case of paths and cycles, this parameter is related to 3-AP-free sets. Subsequently, we show the usefulness of distance-equalizer sets for constructing doubly resolving sets.
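    As a minimal illustration of the definition above, the following sketch (not from the paper; the function names and the C4 example are my own) checks whether a given vertex subset S is a distance-equalizer set, using BFS to obtain the shortest-path distances:

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Shortest-path distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_distance_equalizer(adj, S):
    """True iff every pair x, y outside S has some w in S with d(x, w) == d(y, w)."""
    dist = {w: bfs_distances(adj, w) for w in S}
    outside = [v for v in adj if v not in S]
    for x, y in combinations(outside, 2):
        if not any(dist[w][x] == dist[w][y] for w in S):
            return False
    return True

# Hypothetical example: the 4-cycle C4.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(is_distance_equalizer(C4, {0, 2}))  # True: opposite vertices equalize the rest
print(is_distance_equalizer(C4, {0}))     # False: pair (1, 2) has no equalizer in S
```

    Note that a single vertex of C4 never suffices, since the two remaining vertices at distances 1 and 2 from it are not equalized, so the equidistant dimension of C4 is 2.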

    Spokane Intercollegiate Research Conference 2021


    Acta Universitatis Sapientiae - Informatica 2021


    The metric dimension for resolving several objects

    A set of vertices S is a resolving set in a graph if each vertex has a unique array of distances to the vertices of S. The natural problem of finding the smallest cardinality of a resolving set in a graph has been widely studied over the years. In this paper, we wish to resolve a set of vertices (up to l vertices) instead of just one vertex with the aid of the array of distances. The smallest cardinality of a set S resolving at most l vertices is called the l-set-metric dimension. We study the l-set-metric dimension problem in two infinite classes of graphs, namely, the two-dimensional grid graphs and the n-dimensional binary hypercubes.
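    A brute-force check of this definition fits in a few lines. The sketch below is my own illustration, assuming the usual convention that the distance from a probe vertex w to a vertex set X is the minimum over x in X of d(w, x); S then resolves sets of size at most l when all such distance arrays are pairwise distinct:

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Shortest-path distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def resolves_up_to_l(adj, S, l):
    """True iff distinct non-empty vertex sets of size <= l receive distinct
    distance arrays to S, where d(w, X) = min over x in X of d(w, x)."""
    probes = sorted(S)
    dist = {w: bfs_distances(adj, w) for w in probes}
    seen = set()
    for k in range(1, l + 1):
        for X in combinations(adj, k):
            sig = tuple(min(dist[w][x] for x in X) for w in probes)
            if sig in seen:
                return False
            seen.add(sig)
    return True

# Hypothetical example: the path P4 = 0-1-2-3.
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(resolves_up_to_l(P4, {0}, 1))     # True: one endpoint resolves single vertices
print(resolves_up_to_l(P4, {0}, 2))     # False: {0} and {0, 1} share the array (0,)
print(resolves_up_to_l(P4, {0, 3}, 2))  # True: both endpoints separate all sets
```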

    Maximizing the Benefits of Collaborative Learning in the College Classroom

    This study tested the effects of two kinds of cognitive, domain-based preparation tasks on learning outcomes after engaging in a collaborative activity with a partner. The collaborative learning method of interest was termed "preparing-to-interact," and is supported in theory by the Preparation for Future Learning (PFL) paradigm and the Interactive-Constructive-Active-Passive (ICAP) framework. The current work combined these two cognitive-based approaches to design collaborative learning activities that can serve as alternatives to existing methods, which carry limitations and challenges. The "preparing-to-interact" method avoids the need for training students in specific collaboration skills or guiding/scripting their dialogic behaviors, while providing the opportunity for students to acquire the necessary prior knowledge for maximizing their discussions towards learning. The study used a 2 × 2 experimental design, investigating the factors of Preparation (No Prep and Prep) and Type of Activity (Active and Constructive) on deep and shallow learning. The sample was community college students in introductory psychology classes; the domain tested was "memory," in particular, concepts related to the process of remembering/forgetting information. Results showed that Preparation was a significant factor affecting deep learning, while shallow learning was not affected differently by the interventions. Essentially, with time-on-task and content equalized across all conditions, preparing individually on the task and then discussing the content with a partner produced deeper learning than engaging in the task jointly for the duration of the learning period. Type of Activity was not a significant factor in learning outcomes; however, exploratory analyses showed evidence of Constructive-type behaviors leading to deeper learning of the content.
    Additionally, a novel method of multilevel analysis (MLA) was used to examine the data to account for the dependency between partners within dyads. This work showed that "preparing-to-interact" is a way to maximize the benefits of collaborative learning. When students are first cognitively prepared, they seem to make the most efficient use of discussion towards learning and engage more deeply with the content, which leads to deeper knowledge of the content. Additionally, in using MLA to account for subject nonindependency, this work introduces new questions about the validity of statistical analyses for dyadic data.

    Codes from uniform subset graphs and cycle products

    In this thesis only binary codes are studied. Firstly, the codes over the field GF(2) generated by the adjacency matrix of the complement of the triangular graph T(n) are examined. It is shown that the code obtained is the full space F_2^(n choose 2) when n ≡ 0 (mod 4), and the dual of the code generated by the all-ones vector j when n ≡ 2 (mod 4). The codes from the other two cases are less trivial: when n ≡ 1 (mod 4) the code is a [(n choose 2), (n choose 2) − n + 1, 3] code, and when n ≡ 3 (mod 4) it is a [(n choose 2), (n choose 2) − n, 4] code.
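    The stated dimensions can be spot-checked computationally. The sketch below is my own illustration, not from the thesis: it builds the adjacency matrix of the complement of T(n), whose vertices are the 2-subsets of an n-set with two vertices adjacent exactly when the subsets are disjoint, and computes the GF(2) rank of its row span by bitmask elimination. For n = 4 this recovers the full 6-dimensional space, and for n = 5 (where the graph is the Petersen graph) the rank matches the stated dimension (5 choose 2) − 5 + 1 = 6:

```python
from itertools import combinations

def complement_triangular_rows(n):
    """Rows of the adjacency matrix of the complement of the triangular graph
    T(n): vertices are 2-subsets of {0..n-1}, adjacent iff disjoint.
    Each row is packed into an integer bitmask (a GF(2) vector)."""
    verts = list(combinations(range(n), 2))
    rows = []
    for u in verts:
        bits = 0
        for i, v in enumerate(verts):
            if not set(u) & set(v):
                bits |= 1 << i
        rows.append(bits)
    return rows

def gf2_rank(rows):
    """Rank over GF(2) via elimination on integer-encoded row vectors."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot  # lowest set bit serves as the pivot column
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

print(gf2_rank(complement_triangular_rows(4)))  # 6: full space, (4 choose 2)
print(gf2_rank(complement_triangular_rows(5)))  # 6: (5 choose 2) - 5 + 1
```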

    Symmetry in Graph Theory

    This book contains the successful invited submissions to a Special Issue of Symmetry on the subject of "Graph Theory". Although symmetry has always played an important role in Graph Theory, in recent years, this role has increased significantly in several branches of this field, including but not limited to Gromov hyperbolic graphs, the metric dimension of graphs, domination theory, and topological indices. This Special Issue includes contributions addressing new results on these topics, both from a theoretical and an applied point of view.

    Finding structure in language

    Since the Chomskian revolution, it has become apparent that natural language is richly structured: it is naturally represented hierarchically, and requires complex context-sensitive rules to define regularities over these representations. It is widely assumed that the richness of the posited structure has strong nativist implications for mechanisms which might learn natural language, since it seemed unlikely that such structures could be derived directly from the observation of linguistic data (Chomsky 1965).

    This thesis investigates the hypothesis that simple statistics of a large, noisy, unlabelled corpus of natural language can be exploited to discover some of the structure which exists in natural language automatically. The strategy is to initially assume no knowledge of the structures present in natural language, save that they might be found by analysing statistical regularities which pertain between a word and the words which typically surround it in the corpus.

    To achieve this, various statistical methods are applied to define similarity between statistical distributions, and to infer a structure for a domain given knowledge of the similarities which pertain within it. Using these tools, it is shown that it is possible to form a hierarchical classification of many domains, including words in natural language. When this is done, it is shown that all the major syntactic categories can be obtained, and the classification is both relatively complete and very much in accord with a standard linguistic conception of how words are classified in natural language.

    Once this has been done, the categorisation derived is used as the basis of a similar classification of short sequences of words. If these are analysed in a similar way, then several syntactic categories can be derived, including simple noun phrases, various tensed forms of verbs, and simple prepositional phrases. The same technique can then be applied one level higher, and at this level simple sentences and verb phrases, as well as more complicated noun phrases and prepositional phrases, are shown to be derivable.
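    The core idea, that words occurring in similar surrounding-word contexts belong to the same syntactic category, can be sketched in a few lines. This is a toy illustration of distributional similarity on an invented corpus, not the thesis's actual method or data:

```python
from collections import Counter
import math

def context_vectors(tokens):
    """Count, for each word, its immediate left and right neighbours."""
    vecs = {}
    for i, w in enumerate(tokens):
        ctx = vecs.setdefault(w, Counter())
        if i > 0:
            ctx[("L", tokens[i - 1])] += 1
        if i + 1 < len(tokens):
            ctx[("R", tokens[i + 1])] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented toy corpus: the two nouns share all their contexts.
corpus = ("the cat sat on the mat the dog sat on the rug "
          "a cat ran on a mat a dog ran on a rug").split()
vecs = context_vectors(corpus)
print(cosine(vecs["cat"], vecs["dog"]))  # 1.0: identical neighbour profiles
print(cosine(vecs["cat"], vecs["sat"]))  # 0.0: noun vs. verb contexts disjoint
```

    Feeding such pairwise similarities into an agglomerative clustering procedure yields the kind of hierarchical word classification the thesis describes, with syntactically alike words merging first.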