
    Separating Hierarchical and General Hub Labelings

    In the context of distance oracles, a labeling algorithm computes vertex labels during preprocessing. An $s,t$ query then computes the corresponding distance from the labels of $s$ and $t$ only, without looking at the input graph. Hub labels are a class of labels that has been extensively studied. Performance of a hub label query depends on the label size. Hierarchical labels are a natural special kind of hub labels; they are related to other problems and can be computed more efficiently. This raises a natural question about the quality of hierarchical labels. We show that there is a gap: optimal hierarchical labels can be polynomially bigger than general hub labels. To prove this result, we give tight upper and lower bounds on the size of hierarchical and general labels for hypercubes.
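
    To make the query model concrete, here is a minimal sketch of a hub-label distance query in Python. The label layout (a per-vertex dict from hubs to distances) and the toy labels are assumptions for illustration, not the paper's construction.

    ```python
    # Minimal sketch of a hub-label distance query (illustrative, not the paper's code).
    # Each vertex v has a label: a dict mapping each hub h in S_v to dist(v, h).
    # The s,t distance is recovered from the two labels alone, without the graph.

    def hub_distance(label_s, label_t):
        """Return min over common hubs h of dist(s,h) + dist(h,t), or None if no hub is shared."""
        best = None
        for hub, d_sh in label_s.items():
            d_ht = label_t.get(hub)
            if d_ht is not None and (best is None or d_sh + d_ht < best):
                best = d_sh + d_ht
        return best

    # Toy labels for the path graph a - b - c - d (hypothetical example); every pair of
    # vertices shares a hub lying on a shortest path between them.
    labels = {
        "a": {"a": 0, "b": 1, "c": 2},
        "b": {"b": 0, "c": 1},
        "c": {"c": 0},
        "d": {"c": 1, "d": 0},
    }
    print(hub_distance(labels["a"], labels["d"]))  # prints 3
    ```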

    Hardness of Exact Distance Queries in Sparse Graphs Through Hub Labeling

    A distance labeling scheme is an assignment of bit-labels to the vertices of an undirected, unweighted graph such that the distance between any pair of vertices can be decoded solely from their labels. An important class of distance labeling schemes is that of hub labelings, where a node $v \in G$ stores its distances to the so-called hubs $S_v \subseteq V$, chosen so that for any $u,v \in V$ there is $w \in S_u \cap S_v$ belonging to some shortest $uv$ path. Notice that for most existing graph classes, the best known distance labeling constructions use a hub labeling scheme at some point, at least as a key building block. Our interest lies in hub labelings of sparse graphs, i.e., those with $|E(G)| = O(n)$, for which we show a lower bound of $\frac{n}{2^{O(\sqrt{\log n})}}$ on the average size of the hub sets. Additionally, we show a hub labeling construction for sparse graphs of average size $O(\frac{n}{RS(n)^{c}})$ for some $0 < c < 1$, where $RS(n)$ is the so-called Ruzsa-Szemerédi function, linked to the structure of induced matchings in dense graphs. This implies that further improving the lower bound on hub labeling size to $\frac{n}{2^{(\log n)^{o(1)}}}$ would require a breakthrough in the study of lower bounds on $RS(n)$, which have resisted substantial improvement in the last 70 years. For general distance labeling of sparse graphs, we show a lower bound of $\frac{1}{2^{O(\sqrt{\log n})}} SumIndex(n)$, where $SumIndex(n)$ is the communication complexity of the Sum-Index problem over $Z_n$. Our results suggest that the best achievable hub label size and distance label size in sparse graphs may be $\Theta(\frac{n}{2^{(\log n)^c}})$ for some $0 < c < 1$.
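
    As a small aid for the definitions above, the following sketch checks the hub (cover) property of a candidate labeling on a tiny unweighted graph and reports the average hub set size, the quantity whose bounds the abstract discusses. The graph, labeling, and helper names are illustrative, not taken from the paper.

    ```python
    # Hedged sketch: verify that a candidate hub labeling satisfies the cover property
    # (for every pair u,v some common hub lies on a shortest u-v path) and report the
    # average hub set size. Toy graph and labeling below are illustrative only.
    from collections import deque
    from itertools import combinations

    def bfs_dist(adj, src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist

    def check_hub_labeling(adj, hubs):
        dist = {v: bfs_dist(adj, v) for v in adj}          # all-pairs distances via BFS
        for u, v in combinations(adj, 2):
            common = hubs[u] & hubs[v]
            if not any(dist[u][w] + dist[w][v] == dist[u][v] for w in common):
                return False, None                         # cover property violated
        avg = sum(len(hubs[v]) for v in adj) / len(adj)
        return True, avg

    # Star graph with center "a"; every vertex uses {center, itself} as its hubs.
    adj = {"a": ["b", "c", "d"], "b": ["a"], "c": ["a"], "d": ["a"]}
    hubs = {v: {"a", v} for v in adj}
    print(check_hub_labeling(adj, hubs))                   # (True, 1.75)
    ```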

    Pruning based Distance Sketches with Provable Guarantees on Random Graphs

    Measuring the distances between vertices of a graph is one of the most fundamental components of network analysis. Since finding shortest paths requires traversing the graph, it is challenging to obtain distance information on large graphs very quickly. In this work, we present a preprocessing algorithm that is able to create landmark-based distance sketches efficiently, with strong theoretical guarantees. When evaluated on a diverse set of social and information networks, our algorithm significantly improves over existing approaches by reducing the number of landmarks stored, the preprocessing time, or the stretch of the estimated distances. On Erdős-Rényi graphs and random power-law graphs with degree distribution exponent $2 < \beta < 3$, our algorithm outputs an exact distance data structure with space between $\Theta(n^{5/4})$ and $\Theta(n^{3/2})$ depending on the value of $\beta$, where $n$ is the number of vertices. We complement the algorithm with tight lower bounds for Erdős-Rényi graphs and the case when $\beta$ is close to two.
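
    For context, here is a minimal sketch of the generic landmark idea that such sketches build on: store each vertex's distances to a few landmarks and answer queries with triangle-inequality bounds. The uniform landmark choice and toy graph are assumptions; this is not the paper's pruning-based construction.

    ```python
    # Hedged sketch of a generic landmark-based distance sketch (classic triangle-inequality
    # estimates; illustrative only, not the paper's algorithm).
    import random
    from collections import deque

    def bfs_dist(adj, src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist

    def build_sketch(adj, num_landmarks, seed=0):
        rng = random.Random(seed)
        landmarks = rng.sample(sorted(adj), num_landmarks)      # hypothetical: uniform choice
        return {ell: bfs_dist(adj, ell) for ell in landmarks}   # one BFS per landmark

    def estimate(sketch, u, v):
        # Upper bound: d(u,l) + d(l,v); lower bound: |d(u,l) - d(l,v)|.
        # The estimate is exact when some landmark lies on a shortest u-v path.
        upper = min(d[u] + d[v] for d in sketch.values())
        lower = max(abs(d[u] - d[v]) for d in sketch.values())
        return lower, upper

    # Toy example on a 6-cycle (hypothetical graph); true distance between 0 and 3 is 3.
    adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    sketch = build_sketch(adj, num_landmarks=2)
    print(estimate(sketch, 0, 3))
    ```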

    Beyond Highway Dimension: Small Distance Labels Using Tree Skeletons

    The goal of a hub-based distance labeling scheme for a network G = (V, E) is to assign a small subset S(u) ⊆ V to each node u ∈ V, in such a way that for any pair of nodes u, v, the intersection of hub sets S(u) ∩ S(v) contains a node on a shortest uv-path. The existence of small hub sets, and consequently of efficient shortest path processing algorithms, for road networks is an empirical observation. A theoretical explanation for this phenomenon was proposed by Abraham et al. (SODA 2010) through a network parameter they called highway dimension, which captures the size of a hitting set for the collection of shortest paths of length at least r intersecting a given ball of radius 2r. In this work, we revisit this explanation, introducing a more tractable (and directly comparable) parameter based solely on the structure of shortest-path spanning trees, which we call skeleton dimension. We show that skeleton dimension admits an intuitive definition for both directed and undirected graphs, provides a way of computing labels more efficiently than by using highway dimension, and leads to comparable or stronger theoretical bounds on hub set size.
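
    The display below restates the highway dimension condition quoted in the abstract as a formula. This is a hedged paraphrase; the exact radius and the strictness of the length bound vary between papers.

    ```latex
    % Hedged paraphrase of the highway dimension definition quoted above; exact constants
    % and strict vs. non-strict inequalities differ across formulations.
    \[
      \mathrm{hd}(G) \;=\; \min\Bigl\{\, h \;:\; \forall r > 0,\ \forall u \in V,\
        \exists H \subseteq V,\ |H| \le h,\ \text{such that every shortest path } P
        \text{ with length} \ge r \text{ and } P \cap B_{2r}(u) \ne \emptyset
        \text{ satisfies } P \cap H \ne \emptyset \,\Bigr\}
    \]
    ```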

    PLoS One

    Combinatorial therapies using voluntary exercise and diet supplementation with polyunsaturated fatty acids have synergistic effects benefiting brain function and behavior. Here, we assessed the effects of voluntary exercise on anxiety-like behavior and on total fatty acid (FA) accumulation within three brain regions: cortex, hippocampus, and cerebellum of running versus sedentary young adult male C57BL/6J mice. The running group was subjected to one month of voluntary exercise in their home cages, while the sedentary group was kept in their home cages without access to a running wheel. Elevated plus maze (EPM) behavior, several behavioral postures, and two risk assessment behaviors (RABs) were then measured in both animal groups, followed immediately by blood sampling for assessment of corticosterone levels. Brains were then dissected for non-targeted lipidomic analysis of selected brain regions using gas chromatography coupled to mass spectrometry (GC/MS). Results showed that mice in the running group, when examined in the EPM, displayed significantly lower anxiety-like behavior and higher exploratory and risky behaviors compared to sedentary mice. Notably, we found no differences in blood corticosterone levels between the two groups, suggesting that the different EPM and RAB behaviors were not related to reduced physiological stress in the running mice. Lipidomic analysis revealed a region-specific cortical decrease of the saturated FA palmitate (C16:0) and a concomitant increase of the polyunsaturated FAs arachidonic acid (AA, omega-6, C20:4) and docosahexaenoic acid (DHA, omega-3, C22:6) in running mice compared to sedentary controls. Finally, we found that running mice, as opposed to sedentary animals, showed significantly enhanced cortical expression of phospholipase A2 (PLA2) protein, a signaling molecule required in the production of both AA and DHA. In summary, our data support the anxiolytic effects of exercise and provide insights into the molecular processes modulated by exercise that may lead to its beneficial effects on mood.

    Identifying power relationships in dialogues

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 175-179). Understanding power relationships is an important step towards building computers that can understand human social relationships. Power relationships can arise due to differences in the roles of the speakers, as between bosses and employees. Power can also affect the manner of communication between social equals, as between friends or acquaintances. There are numerous potential uses for an automatic system that can understand power relationships. These include: the analysis of the organizational structure of formal and ad-hoc groups, the profiling of influential individuals within a group, or identifying aggressive or power-inappropriate language in email or other Internet media. In this thesis, we explore the problem of engineering effective power identification systems. We show methods for constructing an effective ground-truth corpus for analyzing power. We focus on three areas of modeling that help in improving the prediction of power relationships. 1) Utterance Level Language Cues - patterns of language use can help distinguish the speech of leaders or followers. We show a set of effective syntactic/semantic features that best capture these linguistic manifestations of power. 2) Dialog Level Interactions - the manner of interaction between speakers can inform us about the underlying power dynamics. We use Hidden Markov Models to organize and model the information from these interaction-based cues. 3) Social Conventions - speaker behavior is influenced by their background knowledge, in particular, conventional rules of communication. We use a generative hierarchical Bayesian framework to model dialogs as mental processes; then we extend these models to include components that encode basic social conventions such as politeness. We apply our integrated system, PRISM, to the Nixon Watergate transcripts to demonstrate that our system can perform robustly on real-world data. by Yuan Kui Shen. Ph.D.
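
    To illustrate the dialog-level modeling idea (point 2), here is a toy two-state HMM over utterance-level cues decoded with Viterbi. The states, observation categories, and probabilities are hypothetical and are not the features or parameters of PRISM.

    ```python
    # Hedged toy sketch: a two-state HMM over utterance-level cues, decoded with Viterbi.
    # All states, observation symbols, and probabilities are illustrative assumptions.

    states = ["dominant", "deferential"]

    start = {"dominant": 0.5, "deferential": 0.5}
    trans = {
        "dominant":    {"dominant": 0.8, "deferential": 0.2},
        "deferential": {"dominant": 0.2, "deferential": 0.8},
    }
    emit = {
        "dominant":    {"directive": 0.6, "question": 0.2, "acknowledgement": 0.2},
        "deferential": {"directive": 0.1, "question": 0.4, "acknowledgement": 0.5},
    }

    def viterbi(observations):
        """Return the most likely hidden state sequence for a list of observation symbols."""
        prob = {s: start[s] * emit[s][observations[0]] for s in states}
        path = {s: [s] for s in states}
        for o in observations[1:]:
            new_prob, new_path = {}, {}
            for s in states:
                best_prev = max(states, key=lambda p: prob[p] * trans[p][s])
                new_prob[s] = prob[best_prev] * trans[best_prev][s] * emit[s][o]
                new_path[s] = path[best_prev] + [s]
            prob, path = new_prob, new_path
        best = max(states, key=lambda s: prob[s])
        return path[best]

    print(viterbi(["directive", "directive", "acknowledgement", "question"]))
    ```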

    Joint Inference in Weakly-Annotated Image Datasets via Dense Correspondence

    We present a principled framework for inferring pixel labels in weakly-annotated image datasets. Most previous example-based approaches to computer vision rely on a large corpus of densely labeled images; however, for large, modern image datasets, such labels are expensive to obtain and are often unavailable. We establish a large-scale graphical model spanning all labeled and unlabeled images, then solve it to infer pixel labels jointly for all images in the dataset while enforcing consistent annotations over similar visual patterns. This model requires significantly less labeled data and assists in resolving ambiguities by propagating inferred annotations from images with stronger local visual evidence to images with weaker local evidence. We apply our proposed framework to two computer vision problems, namely image annotation with semantic segmentation, and object discovery and co-segmentation (segmenting multiple images containing a common object). Extensive numerical evaluations and comparisons show that our method consistently outperforms the state of the art in automatic annotation and semantic labeling, while requiring significantly less labeled data. In contrast to previous co-segmentation techniques, our method manages to discover and segment objects well even in the presence of substantial amounts of noise images (images not containing the common object), as is typical for datasets collected from Internet search.
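
    As a rough illustration of the propagation idea, the sketch below runs a standard label-propagation iteration on a small similarity graph, pushing label information from strongly labeled nodes to unlabeled ones. This is a generic technique on toy data, not the paper's dense-correspondence graphical model.

    ```python
    # Hedged toy sketch: semi-supervised label propagation over a similarity graph
    # (generic technique, illustrative data; not the paper's model).
    import numpy as np

    def propagate_labels(W, Y, labeled_mask, alpha=0.9, iters=100):
        """W: symmetric similarity matrix; Y: one-hot labels (zero rows for unlabeled nodes)."""
        d = W.sum(axis=1)
        S = W / np.sqrt(np.outer(d, d))          # symmetrically normalized similarities
        F = Y.astype(float).copy()
        for _ in range(iters):
            F = alpha * S @ F + (1 - alpha) * Y  # spread labels, stay anchored to seeds
            F[labeled_mask] = Y[labeled_mask]    # clamp the labeled nodes
        return F.argmax(axis=1)

    # Toy example: 4 nodes on a chain-like similarity graph; nodes 0 and 3 are labeled.
    W = np.array([[0, 1, 0.2, 0], [1, 0, 1, 0.2], [0.2, 1, 0, 1], [0, 0.2, 1, 0]], float)
    Y = np.zeros((4, 2))
    Y[0, 0] = 1
    Y[3, 1] = 1
    print(propagate_labels(W, Y, labeled_mask=np.array([True, False, False, True])))
    ```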