11,783 research outputs found

    A new theory of space syntax

    Relations between different components of urban structure are often measured in a literal manner, along streets for example, the usual representation being routes between junctions which form the nodes of an equivalent planar graph. A popular variant on this theme, space syntax, treats these routes as streets containing one or more junctions, with the equivalent graph representation being more abstract, based on relations between the streets, which themselves are treated as nodes. In this paper, we articulate space syntax as a specific case of relations between any two sets, in this case streets and their junctions, from which we derive two related representations. The first or primal problem is traditional space syntax based on relations between streets through their junctions; the second or dual problem is the more usual morphological representation of relations between junctions through their streets. The unifying framework that we propose suggests we shift our focus from the primal problem, where accessibility or distance is associated with lines or streets, to the dual problem, where accessibility is associated with points or junctions. This traditional representation of accessibility between points rather than between lines is easier to understand and makes more sense visually. Our unifying framework enables us to shift easily from the primal problem to the dual and back, thus providing a much richer interpretation of the syntax. We develop an appropriate algebra which provides a clearer approach to connectivity and distance in the equivalent graph representations, and we then demonstrate these variants for the primal and dual problems in one of the first space syntax street network examples, the French village of Gassin. An immediate consequence of our analysis is that we show how the direct connectivity of streets (or junctions) to one another is highly correlated with the distance measures used. This suggests that a simplified form of syntax can be operationalized through counts of streets and junctions in the original street network.
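    The primal/dual construction described above can be made concrete with a small incidence-matrix computation. The following Python sketch is illustrative rather than the paper's algebra: the toy network is invented, and it relies only on the standard fact that, for a binary street-junction incidence matrix B, the product B Bᵀ relates streets through shared junctions (the primal problem) while Bᵀ B relates junctions through shared streets (the dual problem).

        import numpy as np

        # Hypothetical toy network: 3 streets (rows) x 4 junctions (columns).
        # B[s, j] = 1 if street s passes through junction j.
        B = np.array([
            [1, 1, 0, 0],  # street 0 passes through junctions 0 and 1
            [0, 1, 1, 0],  # street 1 passes through junctions 1 and 2
            [0, 0, 1, 1],  # street 2 passes through junctions 2 and 3
        ])

        # Primal problem: streets related through their junctions.
        # Off-diagonal entries count junctions two streets share;
        # the diagonal counts junctions per street.
        primal = B @ B.T

        # Dual problem: junctions related through their streets.
        dual = B.T @ B

        # Direct connectivity of each street (how many other streets it meets),
        # the quantity the abstract reports as highly correlated with distance.
        street_connectivity = (primal > 0).sum(axis=1) - 1

        print(primal, dual, street_connectivity, sep="\n")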

    Multiclass Data Segmentation using Diffuse Interface Methods on Graphs

    We present two graph-based algorithms for multiclass segmentation of high-dimensional data. The algorithms use a diffuse interface model based on the Ginzburg-Landau functional, related to total variation compressed sensing and image processing. A multiclass extension is introduced using the Gibbs simplex, with the functional's double-well potential modified to handle the multiclass case. The first algorithm minimizes the functional using a convex splitting numerical scheme. The second algorithm uses a graph adaptation of the classical numerical Merriman-Bence-Osher (MBO) scheme, which alternates between diffusion and thresholding. We demonstrate the performance of both algorithms experimentally on synthetic data, grayscale and color images, and several benchmark data sets such as MNIST, COIL and WebKB. We also make use of fast numerical solvers for finding the eigenvectors and eigenvalues of the graph Laplacian, and take advantage of the sparsity of the matrix. Experiments indicate that the results are competitive with or better than the current state-of-the-art multiclass segmentation algorithms. Comment: 14 pages
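    As a rough illustration of the diffusion-thresholding alternation behind the second algorithm, here is a minimal graph-MBO sketch in Python/NumPy. It is a simplified reading of the scheme, not the paper's implementation: it omits the fidelity term that anchors known labels, uses the unnormalized graph Laplacian, and all parameter names and defaults are placeholders.

        import numpy as np

        def graph_mbo(W, u0, n_iters=20, dt=0.1, n_eig=20, inner_steps=3):
            """Alternate graph diffusion with thresholding to the nearest
            Gibbs-simplex corner. W: symmetric affinity matrix (n x n);
            u0: one-hot initial labels (n x k). Simplified sketch only."""
            n, k = u0.shape
            L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian
            # Spectral truncation: diffuse in the span of the n_eig
            # smallest-eigenvalue eigenvectors (sparse solvers in practice).
            vals, vecs = np.linalg.eigh(L)
            vals, vecs = vals[:n_eig], vecs[:, :n_eig]
            u = u0.astype(float)
            for _ in range(n_iters):
                # Diffusion: a few implicit-Euler steps of du/dt = -L u,
                # done cheaply in the truncated eigenbasis.
                c = vecs.T @ u
                for _ in range(inner_steps):
                    c /= (1.0 + dt * vals)[:, None]
                u = vecs @ c
                # Thresholding: snap each node to the closest class vertex.
                u = np.eye(k)[u.argmax(axis=1)]
            return u.argmax(axis=1)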

    P?=NP as minimization of degree 4 polynomial, integration or Grassmann number problem, and new graph isomorphism problem approaches

    While the P vs NP problem is mainly approached from the point of view of discrete mathematics, this paper proposes reformulations in the fields of abstract algebra, geometry, Fourier analysis and continuous global optimization, whose advanced tools might bring new perspectives and approaches to this question. The first is the equivalence of satisfiability of the 3-SAT problem with the question of reaching zero of a nonnegative degree-4 multivariate polynomial (a sum of squares), which could be tested from the perspective of algebra by using the discriminant. It could also be approached as a continuous global optimization problem inside $[0,1]^n$, for example in physical realizations like adiabatic quantum computers. However, the number of local minima usually grows exponentially. Reducing to a degree-2 polynomial plus the constraint of being in $\{0,1\}^n$, we get geometric formulations as the question of whether a plane or sphere intersects with $\{0,1\}^n$. Some non-standard perspectives on Subset-Sum are also presented, such as through the convergence of a series, or the vanishing of the Fourier-type integral $\int_0^{2\pi} \prod_i \cos(\varphi k_i)\, d\varphi$ for some natural $k_i$. The last approach discussed uses anti-commuting Grassmann numbers $\theta_i$, making $(A \cdot \mathrm{diag}(\theta_i))^n$ nonzero only if $A$ has a Hamilton cycle. Hence, the P≠NP assumption implies exponential growth of matrix representations of Grassmann numbers. Finally, a promising-looking algebraic/geometric approach to the graph isomorphism problem is discussed, tested to successfully distinguish strongly regular graphs with up to 29 vertices. Comment: 19 pages, 8 figures
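    The Fourier-type integral criterion for Subset-Sum can be checked numerically on tiny instances. Expanding each cosine into complex exponentials shows that $\int_0^{2\pi} \prod_i \cos(\varphi k_i)\, d\varphi = 2\pi \cdot \#\{\varepsilon \in \{\pm 1\}^n : \sum_i \varepsilon_i k_i = 0\} / 2^n$, so the integral is nonzero exactly when the $k_i$ can be split into two subsets of equal sum. The Python sketch below (with arbitrarily chosen instances) compares a simple Riemann sum against brute force.

        import itertools
        import numpy as np

        def fourier_integral(ks, n_samples=200_000):
            """Riemann-sum estimate of the integral of prod_i cos(phi * k_i)
            over [0, 2*pi]."""
            phi = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
            integrand = np.prod(np.cos(np.outer(phi, ks)), axis=1)
            return integrand.mean() * 2.0 * np.pi

        def has_balanced_split(ks):
            """Brute force: can signs eps_i make sum(eps_i * k_i) vanish?"""
            return any(sum(e * k for e, k in zip(eps, ks)) == 0
                       for eps in itertools.product((1, -1), repeat=len(ks)))

        for ks in [(3, 5, 8), (3, 5, 7)]:
            print(ks, round(fourier_integral(ks), 4), has_balanced_split(ks))
        # (3, 5, 8): integral = pi/2 (two of the eight sign vectors balance);
        # (3, 5, 7): integral = 0, and indeed no balanced split exists.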

    Thematically Reinforced Explicit Semantic Analysis

    We present an extended, thematically reinforced version of Gabrilovich and Markovitch's Explicit Semantic Analysis (ESA), where we obtain thematic information through the category structure of Wikipedia. For this we first define a notion of categorical tfidf which measures the relevance of terms in categories. Using this measure as a weight, we calculate a maximal spanning tree of the Wikipedia corpus considered as a directed graph of pages and categories. This tree provides us with a unique path of "most related categories" between each page and the top of the hierarchy. We reinforce the tfidf of words in a page by aggregating it with the categorical tfidfs of the nodes on these paths, and define a thematically reinforced ESA semantic relatedness measure which is more robust than standard ESA and less sensitive to noise caused by out-of-context words. We apply our method to the French Wikipedia corpus, evaluate it through text classification on a 37.5 MB corpus of 20 French newsgroups, and obtain a precision increase of 9-10% compared with standard ESA. Comment: 13 pages, 2 figures, presented at CICLing 201
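    The reinforcement step can be sketched in a few lines of Python: a word's page-level tfidf is combined with the categorical tfidfs of the categories on the page's path toward the top of the hierarchy. The geometrically decaying aggregation and the decay parameter below are invented for illustration; the paper defines its own aggregation scheme.

        def reinforced_tfidf(page_tfidf, path_ctfidfs, decay=0.5):
            """Combine a word's tfidf on a page with the categorical tfidfs
            of the categories on the page's path to the top of the hierarchy.
            The geometric decay is an illustrative assumption, not the
            paper's formula."""
            boost = sum(decay ** (depth + 1) * c
                        for depth, c in enumerate(path_ctfidfs))
            return page_tfidf + boost

        # Hypothetical example: tfidf 0.12 on the page; categorical tfidfs
        # 0.30, 0.10, 0.05 along the path of "most related categories".
        print(reinforced_tfidf(0.12, [0.30, 0.10, 0.05]))  # -> 0.30125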