5,131 research outputs found
Nodes and Arcs: Concept Map, Semiotics, and Knowledge Organization.
Purpose – The purpose of the research reported here is to improve comprehension of the socially-negotiated identity of concepts in the domain of knowledge organization. Because knowledge organization as a domain has as its focus the order of concepts, both from a theoretical perspective and from an applied perspective, it is important to understand how the domain itself understands the meaning of a concept.
Design/methodology/approach – The paper provides an empirical demonstration of how the domain itself understands the meaning of a concept. It employs content analysis to demonstrate the ways in which concepts are portrayed in KO concept maps as signs, which are then subjected to evaluative semiotic analysis as a way to understand their meaning. The frame was the entire population of formal proceedings in knowledge organization – all proceedings of the
International Society for Knowledge Organization's international conferences (1990-2010) and those of the annual classification workshops of the Special Interest Group for Classification Research of the American Society for Information Science and Technology (SIG/CR).
Findings – A total of 344 concept maps were analyzed. There was no discernible chronological pattern. Most concept maps were created by authors who were professors from the USA, Germany, France, or Canada. Roughly half were judged to contain semiotic content. Peirceian semiotics predominated, and tended to convey greater granularity and complexity in conceptual terminology.
Nodes could be identified as anchors of conceptual clusters in the domain; the arcs were identifiable as verbal relationship indicators. Saussurian concept maps were more applied than theoretical; Peirceian concept maps had more theoretical content.
Originality/value – The paper demonstrates important empirical evidence about the coherence of the domain of knowledge organization. Core values are conveyed across time through the concept maps in this population of conference papers.
Bounds for graph regularity and removal lemmas
We show, for any positive integer k, that there exists a graph in which any
equitable partition of its vertices into k parts has at least ck^2/\log^* k
pairs of parts which are not \epsilon-regular, where c,\epsilon>0 are absolute
constants. This bound is tight up to the constant c and addresses a question of
Gowers on the number of irregular pairs in Szemerédi's regularity lemma.
In order to gain some control over irregular pairs, another regularity lemma,
known as the strong regularity lemma, was developed by Alon, Fischer,
Krivelevich, and Szegedy. For this lemma, we prove a lower bound of
wowzer-type, which is one level higher in the Ackermann hierarchy than the
tower function, on the number of parts in the strong regularity lemma,
essentially matching the upper bound. On the other hand, for the induced graph
removal lemma, the standard application of the strong regularity lemma, we find
a different proof which yields a tower-type bound.
We also discuss bounds on several related regularity lemmas, including the
weak regularity lemma of Frieze and Kannan and the recently established regular
approximation theorem. In particular, we show that a weak partition with
approximation parameter \epsilon may require as many as
2^{\Omega(\epsilon^{-2})} parts. This is tight up to the implied constant and
solves a problem studied by Lovász and Szegedy.
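For context, the statements above rest on a few standard definitions, summarized here in the notation of the abstract (a textbook formulation, not quoted from the paper):

    For disjoint vertex sets X, Y, write d(X,Y) = e(X,Y)/(|X||Y|) for the
    edge density between them. A pair (X,Y) is \epsilon-regular if for all
    X' \subseteq X and Y' \subseteq Y with |X'| \ge \epsilon|X| and
    |Y'| \ge \epsilon|Y|, one has |d(X',Y') - d(X,Y)| \le \epsilon. The
    tower function is T(1) = 2, T(i) = 2^{T(i-1)}; the wowzer function, one
    level higher in the Ackermann hierarchy, is W(1) = 2, W(i) = T(W(i-1)).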
L-selectin mediated leukocyte tethering in shear flow is controlled by multiple contacts and cytoskeletal anchorage facilitating fast rebinding events
L-selectin mediated tethers result in leukocyte rolling only above a
threshold in shear. Here we present biophysical modeling based on recently
published data from flow chamber experiments (Dwir et al., J. Cell Biol. 163:
649-659, 2003) which supports the interpretation that L-selectin mediated
tethers below the shear threshold correspond to single L-selectin carbohydrate
bonds dissociating on the time scale of milliseconds, whereas L-selectin
mediated tethers above the shear threshold are stabilized by multiple bonds and
fast rebinding of broken bonds, resulting in tether lifetimes on the timescale
of seconds. Our calculations for cluster dissociation suggest that
the single molecule rebinding rate is of the order of 10^4 Hz. A similar
estimate results if increased tether dissociation for tail-truncated L-selectin
mutants above the shear threshold is modeled as diffusive escape of single
receptors from the rebinding region due to increased mobility. Using computer
simulations, we show that our model yields first order dissociation kinetics
and exponential dependence of tether dissociation rates on shear stress. Our
results suggest that multiple contacts, cytoskeletal anchorage of L-selectin
and local rebinding of ligand play important roles in L-selectin tether
stabilization and progression of tethers into persistent rolling on endothelial
surfaces.
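The stabilization mechanism described here (load shared over multiple bonds plus fast rebinding) is commonly explored with stochastic simulations of a small adhesion cluster. Below is a minimal Gillespie-type sketch of such a cluster with Bell-model kinetics; the function name and all parameter values are illustrative placeholders, not quantities fitted in the paper.

    import numpy as np

    def simulate_cluster(N_total=5, k0=1.0, k_on=1.0, f=2.0, rng=None):
        """Lifetime of a cluster starting with N_total closed bonds.

        k0   : unstressed single-bond off-rate (1/s)
        k_on : rebinding rate per open bond (1/s)
        f    : total force in units of the Bell force, shared equally
               among the currently closed bonds.
        """
        rng = rng or np.random.default_rng()
        N, t = N_total, 0.0
        while N > 0:
            # Bell kinetics: off-rate grows exponentially with force per bond.
            rate_off = N * k0 * np.exp(f / N)
            rate_on = (N_total - N) * k_on
            total = rate_off + rate_on
            t += rng.exponential(1.0 / total)     # waiting time to next event
            if rng.random() < rate_off / total:   # one bond breaks ...
                N -= 1
            else:                                 # ... or one open bond rebinds
                N += 1
        return t

    lifetimes = [simulate_cluster() for _ in range(1000)]
    print(np.mean(lifetimes))  # mean tether lifetime under these toy parameters

In this toy model, raising k_on moves lifetimes from the short single-bond regime toward the long-lived multi-bond regime, mirroring the rebinding-driven stabilization the abstract describes.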
Heavy Hitters and the Structure of Local Privacy
We present a new locally differentially private algorithm for the heavy
hitters problem which achieves optimal worst-case error as a function of all
standardly considered parameters. Prior work obtained error rates which depend
optimally on the number of users, the size of the domain, and the privacy
parameter, but depend sub-optimally on the failure probability.
We strengthen existing lower bounds on the error to incorporate the failure
probability, and show that our new upper bound is tight with respect to this
parameter as well. Our lower bound is based on a new understanding of the
structure of locally private protocols. We further develop these ideas to
obtain the following general results beyond heavy hitters.
Advanced Grouposition: In the local model, group privacy for k users degrades proportionally to \sqrt{k}, instead of linearly in k as in the central model. Stronger group privacy yields improved max-information
guarantees, as well as stronger lower bounds (via "packing arguments"), over
the central model.
Building on a transformation of Bassily and Smith (STOC 2015), we
give a generic transformation from any non-interactive approximate-private
local protocol into a pure-private local protocol. Again in contrast with the
central model, this shows that we cannot obtain more accurate algorithms by
moving from pure to approximate local privacy.
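For background, the simplest building block of locally private protocols is randomized response, which already exhibits the structure of a pure \epsilon-private local randomizer. The sketch below is the generic textbook mechanism for a single bit, not the paper's heavy-hitters protocol.

    import math
    import random

    def randomized_response(bit, eps):
        """Report the true bit with probability e^eps / (e^eps + 1)."""
        p_truth = math.exp(eps) / (math.exp(eps) + 1.0)
        return bit if random.random() < p_truth else 1 - bit

    def estimate_mean(reports, eps):
        """Debias aggregated reports to estimate the true fraction of 1s."""
        p = math.exp(eps) / (math.exp(eps) + 1.0)
        raw = sum(reports) / len(reports)
        return (raw - (1 - p)) / (2 * p - 1)

    # Example: 10,000 users, 30% of whom hold bit 1, at eps = 1.
    bits = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
    reports = [randomized_response(b, eps=1.0) for b in bits]
    print(estimate_mean(reports, eps=1.0))  # close to 0.3 in expectation

Each user's report distribution changes by at most a factor of e^eps whether their bit is 0 or 1, which is exactly the pure local privacy constraint that the lower-bound arguments above exploit.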
Compressibility and probabilistic proofs
We consider several examples of probabilistic existence proofs using
compressibility arguments, including some results that involve the Lovász local lemma.
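As a flavor of how compressibility yields existence proofs, here is the standard counting step (a textbook argument, not a result from the paper):

    There are fewer than \sum_{i < n-c} 2^i = 2^{n-c} - 1 descriptions of
    length less than n - c bits, so fewer than 2^{n-c} strings in \{0,1\}^n
    can be compressed by c or more bits. Hence a uniformly random
    x \in \{0,1\}^n is incompressible by c bits with probability greater
    than 1 - 2^{-c}. To prove an object exists, one shows that its absence
    would let us describe a random string in fewer than n - c bits,
    contradicting incompressibility for most strings.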
Detecting and Characterizing Small Dense Bipartite-like Subgraphs by the Bipartiteness Ratio Measure
We study the problem of finding and characterizing subgraphs with small
\textit{bipartiteness ratio}. We give a bicriteria approximation algorithm
\verb|SwpDB| such that if there exists a subset of volume at most k and bipartiteness ratio \theta, then for any 0 < \epsilon < 1/2, it finds a set of volume at most 2k^{1+\epsilon} and bipartiteness ratio at most 4\sqrt{\theta/\epsilon}. By combining a truncation operation, we give a local
algorithm \verb|LocDB|, which has asymptotically the same approximation
guarantee as the algorithm \verb|SwpDB| on both the volume and bipartiteness
ratio of the output set, and runs in time independent of the size of the
graph. Finally, we give a spectral characterization of the small dense
bipartite-like subgraphs by using the k-th \textit{largest} eigenvalue of the Laplacian of the graph.
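For reference, the \textit{bipartiteness ratio} of a vertex set, in the sense introduced by Trevisan, can be stated as follows (a standard definition, added here for context):

    For S \subseteq V split into two sides (L, R), \beta(L, R) =
    (2e(L) + 2e(R) + e(S, V \setminus S)) / vol(S), where e(L) and e(R)
    count edges inside each side, e(S, V \setminus S) counts edges leaving
    S, and vol(S) is the total degree of S. A small value of \beta
    certifies that S is nearly bipartite (with parts L and R) and weakly
    connected to the rest of the graph.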
Efficiently decoding Reed-Muller codes from random errors
Reed-Muller codes encode an m-variate polynomial of degree r by evaluating it on all points in \{0,1\}^m. We denote this code by RM(m,r). The minimal distance of RM(m,r) is 2^{m-r} and so it cannot correct more
than half that number of errors in the worst case. For random errors one may
hope for a better result.
In this work we give an efficient algorithm (in the block length n = 2^m) for
decoding random errors in Reed-Muller codes far beyond the minimal distance.
Specifically, for low rate codes (of degree ) we can correct a
random set of errors with high probability. For high rate codes
(of degree for ), we can correct roughly
errors.
More generally, for any integer r, our algorithm can correct any error pattern in RM(m, m-(2r+2)) for which the same erasure pattern can be corrected in RM(m, m-(r+1)). The results above are obtained by applying recent results
of Abbe, Shpilka and Wigderson (STOC, 2015), Kumar and Pfister (2015) and
Kudekar et al. (2015) regarding the ability of Reed-Muller codes to correct
random erasures.
The algorithm is based on solving a carefully defined set of linear equations
and thus it is significantly different than other algorithms for decoding
Reed-Muller codes that are based on the recursive structure of the code. It can
be seen as a more explicit proof of a result of Abbe et al. that shows a
reduction from correcting erasures to correcting errors, and it also bears some
similarities with the famous Berlekamp-Welch algorithm for decoding
Reed-Solomon codes.
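The abstract's remark that decoding reduces to solving a carefully defined set of linear equations reflects the linear-algebra view of Reed-Muller codes: codewords are evaluations of low-degree multilinear polynomials, so erasures can be filled by solving a system over GF(2). The sketch below illustrates only this generic erasure-decoding view; it is not the paper's error-decoding algorithm, and the code length and erasure pattern are arbitrary.

    import itertools
    import numpy as np

    def rm_generator_matrix(m, r):
        """Rows = evaluations of the multilinear monomials of degree <= r
        on all 2^m points of {0,1}^m."""
        points = list(itertools.product([0, 1], repeat=m))
        monomials = [s for d in range(r + 1)
                     for s in itertools.combinations(range(m), d)]
        return np.array([[int(all(p[i] for i in mono)) for p in points]
                         for mono in monomials], dtype=np.uint8)

    def solve_gf2(A, b):
        """Gaussian elimination over GF(2); returns one solution or None."""
        A, b = A.copy() % 2, b.copy() % 2
        n_rows, n_cols = A.shape
        pivots, row = [], 0
        for col in range(n_cols):
            hits = [r for r in range(row, n_rows) if A[r, col]]
            if not hits:
                continue
            A[[row, hits[0]]] = A[[hits[0], row]]  # swap pivot row into place
            b[[row, hits[0]]] = b[[hits[0], row]]
            for r in range(n_rows):
                if r != row and A[r, col]:
                    A[r] ^= A[row]                 # eliminate column col
                    b[r] ^= b[row]
            pivots.append(col)
            row += 1
        if any(b[row:]):                           # inconsistent equations left
            return None
        x = np.zeros(n_cols, dtype=np.uint8)
        for i, col in enumerate(pivots):
            x[col] = b[i]
        return x

    # Erasure decoding: restrict to unerased coordinates, solve for the
    # message, then re-encode to recover the erased positions.
    m, r = 4, 1
    G = rm_generator_matrix(m, r)
    msg = np.random.randint(0, 2, G.shape[0]).astype(np.uint8)
    word = msg @ G % 2
    erased = {0, 3, 7}                             # an arbitrary erasure pattern
    keep = [i for i in range(2 ** m) if i not in erased]
    recovered = solve_gf2(G[:, keep].T, word[keep])
    assert recovered is not None and np.array_equal(recovered @ G % 2, word)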
- …