
    L-selectin mediated leukocyte tethering in shear flow is controlled by multiple contacts and cytoskeletal anchorage facilitating fast rebinding events

    L-selectin mediated tethers result in leukocyte rolling only above a threshold in shear. Here we present biophysical modeling based on recently published data from flow chamber experiments (Dwir et al., J. Cell Biol. 163: 649-659, 2003) which supports the interpretation that L-selectin mediated tethers below the shear threshold correspond to single L-selectin carbohydrate bonds dissociating on the time scale of milliseconds, whereas L-selectin mediated tethers above the shear threshold are stabilized by multiple bonds and fast rebinding of broken bonds, resulting in tether lifetimes on the time scale of $10^{-1}$ seconds. Our calculations for cluster dissociation suggest that the single molecule rebinding rate is of the order of $10^4$ Hz. A similar estimate results if increased tether dissociation for tail-truncated L-selectin mutants above the shear threshold is modeled as diffusive escape of single receptors from the rebinding region due to increased mobility. Using computer simulations, we show that our model yields first order dissociation kinetics and an exponential dependence of tether dissociation rates on shear stress. Our results suggest that multiple contacts, cytoskeletal anchorage of L-selectin and local rebinding of ligand play important roles in L-selectin tether stabilization and progression of tethers into persistent rolling on endothelial surfaces.

    Comment: 9 pages, RevTeX, 4 PostScript figures included
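The cluster picture described in this abstract (multiple load-sharing bonds plus fast rebinding of broken bonds) can be illustrated with a minimal Gillespie-type stochastic simulation of Bell-model bonds. All rates and forces below are illustrative placeholders, not the fitted values from the paper:

```python
import math
import random

def tether_lifetime(n_bonds=2, force=100.0, k0=10.0, f_bell=50.0,
                    k_on=1.0e4, rng=random):
    """Gillespie-type simulation of a small adhesion-bond cluster.

    Closed bonds share the applied force equally and break at the
    Bell rate k0 * exp(f / f_bell); open bonds rebind at rate k_on
    while at least one bond still anchors the tether.  Returns the
    time at which the last bond breaks (tether dissociation).
    """
    closed = n_bonds
    t = 0.0
    while closed > 0:
        r_off = closed * k0 * math.exp(force / closed / f_bell)
        r_on = (n_bonds - closed) * k_on
        total = r_off + r_on
        t += rng.expovariate(total)          # waiting time to next event
        if rng.random() < r_off / total:     # one bond breaks
            closed -= 1
        else:                                # one broken bond rebinds
            closed += 1
    return t

lifetimes = [tether_lifetime() for _ in range(500)]
print(sum(lifetimes) / len(lifetimes))       # mean tether lifetime, s
```

With fast rebinding, escape requires a rapid cascade of breaking events, which is how a cluster of millisecond-scale bonds can sustain a much longer-lived tether.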

    Compressibility and probabilistic proofs

    We consider several examples of probabilistic existence proofs using compressibility arguments, including some results that involve the Lov\'asz local lemma.

    Comment: Invited talk for CiE 2017 (full version)

    Nodes and Arcs: Concept Map, Semiotics, and Knowledge Organization.

    Purpose – The purpose of the research reported here is to improve comprehension of the socially-negotiated identity of concepts in the domain of knowledge organization. Because knowledge organization as a domain has as its focus the order of concepts, both from a theoretical perspective and from an applied perspective, it is important to understand how the domain itself understands the meaning of a concept.

    Design/methodology/approach – The paper provides an empirical demonstration of how the domain itself understands the meaning of a concept. It employs content analysis to demonstrate the ways in which concepts are portrayed in KO concept maps as signs, which are then subjected to evaluative semiotic analysis as a way to understand their meaning. The frame was the entire population of formal proceedings in knowledge organization – all proceedings of the International Society for Knowledge Organization’s international conferences (1990-2010) and those of the annual classification workshops of the Special Interest Group for Classification Research of the American Society for Information Science and Technology (SIG/CR).

    Findings – A total of 344 concept maps were analyzed. There was no discernible chronological pattern. Most concept maps were created by authors who were professors from the USA, Germany, France, or Canada. Roughly half were judged to contain semiotic content. Peirceian semiotics predominated, and tended to convey greater granularity and complexity in conceptual terminology. Nodes could be identified as anchors of conceptual clusters in the domain; the arcs were identifiable as verbal relationship indicators. Saussurian concept maps were more applied than theoretical; Peirceian concept maps had more theoretical content.

    Originality/value – The paper demonstrates important empirical evidence about the coherence of the domain of knowledge organization. Core values are conveyed across time through the concept maps in this population of conference papers.

    Detecting and Characterizing Small Dense Bipartite-like Subgraphs by the Bipartiteness Ratio Measure

    We study the problem of finding and characterizing subgraphs with small \textit{bipartiteness ratio}. We give a bicriteria approximation algorithm \verb|SwpDB| such that if there exists a subset $S$ of volume at most $k$ and bipartiteness ratio $\theta$, then for any $0<\epsilon<1/2$, it finds a set $S'$ of volume at most $2k^{1+\epsilon}$ and bipartiteness ratio at most $4\sqrt{\theta/\epsilon}$. By combining a truncation operation, we give a local algorithm \verb|LocDB|, which has asymptotically the same approximation guarantee as the algorithm \verb|SwpDB| on both the volume and bipartiteness ratio of the output set, and runs in time $O(\epsilon^2\theta^{-2}k^{1+\epsilon}\ln^3 k)$, independent of the size of the graph. Finally, we give a spectral characterization of the small dense bipartite-like subgraphs by using the $k$th \textit{largest} eigenvalue of the Laplacian of the graph.

    Comment: 17 pages; ISAAC 201
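As a concrete anchor for the quantity being optimized: one common definition (following Trevisan's spectral work on bipartiteness, which this line of algorithms builds on) scores a candidate bipartition (L, R) of a subset S by the fraction of edge endpoints in S that fail to either cross between L and R or stay inside S's complement. The exact definition below is an assumption for illustration, not a quote from the paper:

```python
def bipartiteness_ratio(adj, left, right):
    """Bipartiteness ratio of S = left ∪ right for the candidate
    bipartition (left, right):
        beta = (2*e(left) + 2*e(right) + e(S, V \\ S)) / vol(S),
    where e(A) counts edges inside A, e(S, V \\ S) counts edges
    leaving S, and vol(S) is the sum of degrees over S.  beta = 0
    means every edge touching S crosses between left and right.
    adj maps each vertex to a list of its neighbors."""
    S = left | right
    bad = 0          # edge endpoints violating bipartiteness
    vol = 0
    for u in S:
        for v in adj[u]:
            vol += 1
            if v not in S:
                bad += 1                       # edge leaving S
            elif (u in left) == (v in left):
                bad += 1                       # edge within one side
    return bad / vol

# a 4-cycle is perfectly bipartite along its alternating partition
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(bipartiteness_ratio(adj, {0, 2}, {1, 3}))   # → 0.0
```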

    Bounds for graph regularity and removal lemmas

    We show, for any positive integer $k$, that there exists a graph in which any equitable partition of its vertices into $k$ parts has at least $ck^2/\log^* k$ pairs of parts which are not $\epsilon$-regular, where $c,\epsilon>0$ are absolute constants. This bound is tight up to the constant $c$ and addresses a question of Gowers on the number of irregular pairs in Szemer\'edi's regularity lemma. In order to gain some control over irregular pairs, another regularity lemma, known as the strong regularity lemma, was developed by Alon, Fischer, Krivelevich, and Szegedy. For this lemma, we prove a lower bound of wowzer-type, which is one level higher in the Ackermann hierarchy than the tower function, on the number of parts in the strong regularity lemma, essentially matching the upper bound. On the other hand, for the induced graph removal lemma, the standard application of the strong regularity lemma, we find a different proof which yields a tower-type bound. We also discuss bounds on several related regularity lemmas, including the weak regularity lemma of Frieze and Kannan and the recently established regular approximation theorem. In particular, we show that a weak partition with approximation parameter $\epsilon$ may require as many as $2^{\Omega(\epsilon^{-2})}$ parts. This is tight up to the implied constant and solves a problem studied by Lov\'asz and Szegedy.

    Comment: 62 pages
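The tower and wowzer functions mentioned above have simple recursive definitions (base-case conventions vary; the sketch below takes tower(0) = wowzer(0) = 1):

```python
def tower(k):
    """Height-k tower of 2s: tower(0) = 1, tower(k) = 2^tower(k-1)."""
    t = 1
    for _ in range(k):
        t = 2 ** t
    return t

def wowzer(k):
    """One level up the Ackermann hierarchy: wowzer(0) = 1,
    wowzer(k) = tower(wowzer(k-1)), i.e. an iterated tower."""
    w = 1
    for _ in range(k):
        w = tower(w)
    return w

print(tower(3), wowzer(3))   # → 16 65536
```

Already wowzer(4) = tower(65536) is a tower of 65536 twos, which is the sense in which a wowzer-type lower bound dwarfs a tower-type one.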

    Fouling mechanisms in constant flux crossflow ultrafiltration

    Four fouling models due to Hermia (complete pore blocking, intermediate pore blocking, cake filtration, and standard pore blocking) have long been used to describe membrane filtration and fouling in constant transmembrane pressure (ΔP) operation of membranes. A few studies apply these models to constant flux dead-end filtration systems. However, these models have not been reported for constant flux crossflow filtration, despite the frequent use of this mode of membrane operation in practical applications. We report derivation of these models for constant flux crossflow filtration. Of the four models, complete pore blocking and standard pore blocking were deemed inapplicable due to contradicting assumptions and relevance, respectively. Constant flux crossflow fouling experiments with dilute latex bead suspensions and soybean oil emulsions were conducted on commercial poly(ether sulfone) flat sheet ultrafiltration membranes to explore the models’ abilities to describe such data. A model combining intermediate pore blocking and cake filtration gave the best agreement with the experimental data. Below the threshold flux, both the intermediate pore blocking model and the combined model fit the data well. As permeate flux approached and passed the threshold flux, the combined model was required for accurate fits. Based on this observation, a physical interpretation of the threshold flux is proposed: the threshold flux is the flux below which cake buildup is negligible and above which cake filtration becomes the dominant fouling mechanism.
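The resistance-in-series picture behind constant flux analyses can be sketched numerically: at constant flux J, transmembrane pressure is ΔP(t) = μ·J·(R_m + R_f(t)), and under the cake filtration mechanism the fouling resistance R_f grows linearly with deposited mass, giving a linear ΔP rise. All numbers below are illustrative placeholders, not fitted parameters from the study:

```python
# Resistance-in-series model of constant flux filtration (illustrative).
MU = 1e-3       # permeate viscosity, Pa*s (assumed: water)
J = 2.8e-5      # constant permeate flux, m/s (about 100 L/m^2/h)
R_M = 1e12      # clean membrane resistance, 1/m (illustrative)

def tmp_cake(t, k_cake=1e9):
    """Transmembrane pressure (Pa) under cake filtration at constant
    flux; k_cake (1/(m*s)) lumps cake specific resistance and the
    deposition rate and is purely illustrative."""
    return MU * J * (R_M + k_cake * t)

# linear growth: equal time steps give equal TMP increments
d1 = tmp_cake(200.0) - tmp_cake(100.0)
d2 = tmp_cake(300.0) - tmp_cake(200.0)
print(d1, d2)
```

Against such a baseline, a pore-blocking term would bend the early-time ΔP curve, which is why a combined model can separate the two mechanisms around the threshold flux.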

    Derandomized Construction of Combinatorial Batch Codes

    Combinatorial Batch Codes (CBCs), a replication-based variant of Batch Codes introduced by Ishai et al. in STOC 2004, abstract the following data distribution problem: $n$ data items are to be replicated among $m$ servers in such a way that any $k$ of the $n$ data items can be retrieved by reading at most one item from each server, with the total amount of storage over the $m$ servers restricted to $N$. Given parameters $m$, $c$, and $k$, where $c$ and $k$ are constants, one of the challenging problems is to construct $c$-uniform CBCs (CBCs where each data item is replicated among exactly $c$ servers) which maximize the value of $n$. In this work, we present an explicit construction of $c$-uniform CBCs with $\Omega(m^{c-1+\frac{1}{k}})$ data items. The construction has the property that the servers are almost regular, i.e., the number of data items stored in each server is in the range $[\frac{nc}{m}-\sqrt{\frac{n}{2}\ln(4m)}, \frac{nc}{m}+\sqrt{\frac{n}{2}\ln(4m)}]$. The construction is obtained through better analysis and derandomization of the randomized construction presented by Ishai et al. The analysis reveals the almost regularity of the servers, an aspect that so far has not been addressed in the literature. The derandomization leads to explicit constructions for a wide range of values of $c$ (for given $m$ and $k$) where no other explicit construction with similar parameters, i.e., with $n = \Omega(m^{c-1+\frac{1}{k}})$, is known. Finally, we discuss the possibility of parallel derandomization of the construction.
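The retrieval requirement in the CBC definition amounts to Hall's condition: every set of $k$ requested items must admit a system of distinct representatives among the servers storing them. A small brute-force checker for this property (illustrative only; it is exponential in $n$ and is not the paper's construction):

```python
from itertools import combinations

def is_cbc(placement, k):
    """Check the combinatorial batch code retrieval property by brute
    force: every k-subset of items must admit a system of distinct
    representatives, i.e. one distinct server per requested item.
    placement[i] = set of servers storing item i (the code is
    c-uniform when every such set has size c)."""
    items = list(placement)

    def has_sdr(subset, used=frozenset()):
        # try to assign a fresh server to the first item, recurse
        if not subset:
            return True
        first, rest = subset[0], subset[1:]
        return any(has_sdr(rest, used | {s})
                   for s in placement[first] - used)

    return all(has_sdr(list(sub)) for sub in combinations(items, k))

# 2-uniform placement of 3 items on 3 servers (each pair of servers
# shares one item): any k = 2 items can be read from distinct servers
placement = {0: {0, 1}, 1: {1, 2}, 2: {0, 2}}
print(is_cbc(placement, 2))   # → True
```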