
    Characterization of Randomly k-Dimensional Graphs

    For an ordered set W = {w_1, w_2, ..., w_k} of vertices and a vertex v in a connected graph G, the ordered k-vector r(v|W) := (d(v,w_1), d(v,w_2), ..., d(v,w_k)) is called the (metric) representation of v with respect to W, where d(x,y) is the distance between the vertices x and y. The set W is called a resolving set for G if distinct vertices of G have distinct representations with respect to W. A minimum resolving set for G is a basis of G, and its cardinality is the metric dimension of G. The resolving number of a connected graph G is the minimum k such that every k-set of vertices of G is a resolving set. A connected graph G is called randomly k-dimensional if each k-set of vertices of G is a basis. In this paper, along with some properties of randomly k-dimensional graphs, we prove that a connected graph G with at least two vertices is randomly k-dimensional if and only if G is the complete graph K_{k+1} or an odd cycle.
    Comment: 12 pages, 3 figures
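    As a quick illustration of the "odd cycle" case of the theorem (our own worked example, not taken from the paper), one can check directly that the 5-cycle C_5 is randomly 2-dimensional:

```latex
% Worked example (illustrative, not from the paper): C_5 is randomly 2-dimensional.
\[
\begin{aligned}
&\text{Label the vertices of } C_5 \text{ as } v_1,\dots,v_5 \text{ with }
  d(v_i,v_j)=\min\bigl(|i-j|,\,5-|i-j|\bigr), \text{ and take } W=\{v_1,v_2\}:\\
&r(v_1\mid W)=(0,1),\quad r(v_2\mid W)=(1,0),\quad r(v_3\mid W)=(2,1),\\
&r(v_4\mid W)=(2,2),\quad r(v_5\mid W)=(1,2).
\end{aligned}
\]
% All five representations are distinct, so W is a resolving set; since the
% metric dimension of C_5 is 2, W is a basis. The same computation works for
% every 2-set of C_5, so each 2-set is a basis, as the theorem predicts for
% odd cycles.
```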

    Advances in Engineering Software for Multicore Systems

    The vast amounts of data to be processed by today’s applications demand higher computational power. To meet application requirements and achieve reasonable application performance, it becomes increasingly profitable, or even necessary, to exploit any available hardware parallelism. For both new and legacy applications, however, successful parallelization often comes at a high cost. This chapter proposes an optimistic semi-automatic approach, including an LLVM-based tool, that helps programmers identify the most promising parallelization targets and understand the key types of parallelism, thereby reducing the manual effort needed for parallelization. One contribution of this work is an efficient profiling method that determines the control and data dependences needed for parallelism discovery and other kinds of code analysis. Another contribution is a method for detecting code sections where parallel design patterns might be applicable and for suggesting relevant code transformations. The approach efficiently reports detailed runtime data dependences, accurately identifies opportunities for parallelism, and indicates the appropriate type of parallelism to use, such as task-based or loop-based parallelism.
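    The abstract describes the dependence profiling only at a high level. The following is a minimal, self-contained sketch (our own illustration under assumed data structures, not the chapter's LLVM-based tool) of the underlying idea: record per-iteration reads and writes, detect loop-carried read-after-write dependences, and use their absence as a hint that a loop is a candidate for loop-level parallelism rather than a task-based scheme.

```python
# Toy dynamic dependence profiler (illustrative sketch, not the chapter's tool).
# It replays a per-iteration access trace and reports loop-carried RAW
# dependences: reads that consume a value written in an earlier iteration.

def profile_loop(trace):
    """trace: list of per-iteration access lists, each entry ('R'|'W', addr)."""
    last_writer = {}      # addr -> iteration index of the most recent write
    carried_deps = []     # (reader_iteration, writer_iteration, addr)
    for it, accesses in enumerate(trace):
        for kind, addr in accesses:
            if kind == 'R':
                w = last_writer.get(addr)
                if w is not None and w < it:
                    carried_deps.append((it, w, addr))
            else:  # 'W'
                last_writer[addr] = it
    return carried_deps

def trace_independent(n):
    # a[i] = b[i] + 1 : each iteration reads b[i] and writes a[i];
    # no value produced in one iteration is consumed in a later one.
    return [[('R', ('b', i)), ('W', ('a', i))] for i in range(n)]

def trace_recurrence(n):
    # a[i] = a[i-1] + 1 : each iteration reads the value written by the
    # previous one, so the loop carries a dependence across iterations.
    return [[('R', ('a', i - 1)), ('W', ('a', i))] for i in range(1, n)]

for name, trace in [('independent', trace_independent(8)),
                    ('recurrence', trace_recurrence(8))]:
    deps = profile_loop(trace)
    verdict = 'loop-parallel candidate' if not deps else 'loop-carried dependences found'
    print(f'{name}: {verdict} ({len(deps)} carried dependences)')
```

    Running the sketch reports the first loop as a loop-parallel candidate and the second as carrying cross-iteration dependences, which is the kind of classification a dependence profiler feeds into the decision between loop-based and task-based parallelization.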