210 research outputs found

    On Deterministic Sketching and Streaming for Sparse Recovery and Norm Estimation

    Full text link
    We study classic streaming and sparse recovery problems using deterministic linear sketches, including l1/l1 and linf/l1 sparse recovery problems (the latter also being known as l1-heavy hitters), norm estimation, and approximate inner product. We focus on devising a fixed matrix A in R^{m x n} and a deterministic recovery/estimation procedure which work for all possible input vectors simultaneously. Our results improve upon existing work, the following being our main contributions:

    * A proof that linf/l1 sparse recovery and inner product estimation are equivalent, and that incoherent matrices can be used to solve both problems. Our upper bound for the number of measurements is m = O(eps^{-2} * min{log n, (log n / log(1/eps))^2}). We can also obtain fast sketching and recovery algorithms by making use of the Fast Johnson-Lindenstrauss transform. Both our running times and number of measurements improve upon previous work. We can also obtain better error guarantees than previous work in terms of a smaller tail of the input vector.

    * A new lower bound for the number of linear measurements required to solve l1/l1 sparse recovery. We show Omega(k/eps^2 + k log(n/k)/eps) measurements are required to recover an x' with |x - x'|_1 <= (1+eps)|x_{tail(k)}|_1, where x_{tail(k)} is x projected onto all but its largest k coordinates in magnitude.

    * A tight bound of m = Theta(eps^{-2} log(eps^2 n)) on the number of measurements required to solve deterministic norm estimation, i.e., to recover |x|_2 +/- eps|x|_1.

    For all the problems we study, tight bounds are already known for the randomized complexity from previous work, except in the case of l1/l1 sparse recovery, where a nearly tight bound is known. Our work thus aims to study the deterministic complexities of these problems.
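    As context for the incoherent-matrix claim above, here is a minimal Python sketch of the linf/l1 point-query primitive: if A has unit-norm columns whose pairwise inner products are at most eps, then estimating every x_i as <A e_i, Ax> is off by at most eps*|x|_1. The Gaussian matrix below is only a stand-in that is incoherent with high probability; the paper's point is a fixed deterministic A, and the dimensions here are illustrative.

        import numpy as np

        def unit_column_matrix(m, n, seed=0):
            # Stand-in for an incoherent matrix: random Gaussian columns,
            # normalized to unit length (incoherent only with high
            # probability, not deterministically as in the paper).
            A = np.random.default_rng(seed).standard_normal((m, n))
            return A / np.linalg.norm(A, axis=0)

        n, m = 10_000, 2_000
        A = unit_column_matrix(m, n)

        x = np.zeros(n)
        x[[3, 17, 42]] = [5.0, -2.0, 1.0]   # sparse input vector
        y = A @ x                           # the linear sketch (m numbers)

        x_hat = A.T @ y                     # point queries, all coordinates
        # l_inf error is bounded by (coherence of A) * |x|_1
        print(np.max(np.abs(x_hat - x)), np.abs(x).sum())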

    Deterministic Sampling and Range Counting in Geometric Data Streams

    Get PDF
    We present memory-efficient deterministic algorithms for constructing epsilon-nets and epsilon-approximations of streams of geometric data. Unlike probabilistic approaches, these deterministic samples provide guaranteed bounds on their approximation factors. We show how our deterministic samples can be used to answer approximate online iceberg geometric queries on data streams. We use these techniques to approximate several robust statistics of geometric data streams, including Tukey depth, simplicial depth, regression depth, the Theil-Sen estimator, and the least median of squares. Our algorithms use only a polylogarithmic amount of memory, provided the desired approximation factors are inverse-polylogarithmic. We also include a lower bound for non-iceberg geometric queries. Comment: 12 pages, 1 figure
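    The paper's constructions are geometric, but the core idea of a deterministic sample with a guaranteed error bound is easy to see in one dimension, where taking every k-th element of the sorted data is an epsilon-approximation for interval range counting with eps = k/n. The Python sketch below (with illustrative data, not the paper's algorithm) answers any interval count within about eps*n, with no probabilistic failure mode:

        def deterministic_sample(points, eps):
            # Every k-th element of the sorted order; each kept point
            # stands for k original points.
            pts = sorted(points)
            k = max(1, int(eps * len(pts)))
            return pts[::k], k

        def approx_range_count(sample, k, lo, hi):
            # Scale the sample count back up; error is O(eps * n).
            return k * sum(lo <= p <= hi for p in sample)

        data = [(i * i) % 997 for i in range(5000)]
        sample, k = deterministic_sample(data, eps=0.01)
        print(approx_range_count(sample, k, 100, 300),
              sum(100 <= p <= 300 for p in data))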

    Dual-modality, fluorescent, PLGA encapsulated bismuth nanoparticles for molecular and cellular fluorescence imaging and computed tomography

    Get PDF
    Reports of molecular and cellular imaging using computed tomography (CT) are rapidly increasing. Many of these reports use gold nanoparticles. Bismuth has CT contrast properties similar to gold while being approximately 1000-fold less expensive. Herein we report the design, fabrication, characterization, and CT and fluorescence imaging properties of a novel, dual-modality, fluorescent, polymer-encapsulated bismuth nanoparticle construct for computed tomography and fluorescence imaging. We also report on cellular internalization and preliminary in vitro and in vivo toxicity effects of these constructs. 40 nm bismuth(0) nanocrystals were synthesized and encapsulated within 120 nm poly(DL-lactic-co-glycolic acid) (PLGA) nanoparticles by oil-in-water emulsion methodologies. Coumarin-6 was co-encapsulated to impart fluorescence. High encapsulation efficiency was achieved (∼70% bismuth w/w). Particles were shown to internalize within cells following incubation in culture. Bismuth nanocrystals and PLGA-encapsulated bismuth nanoparticles exhibited >90% and >70% degradation, respectively, within 24 hours in acidic media mimicking the lysosomal environment, and both remained nearly 100% stable in media mimicking cytosolic/extracellular fluid. μCT and clinical CT imaging were performed at multiple X-ray tube voltages to measure concentration-dependent attenuation rates as well as to establish the ability to detect the nanoparticles in an ex vivo biological sample. Dual fluorescence and CT imaging is demonstrated as well. In vivo toxicity studies in rats revealed neither clinically apparent side effects nor major alterations in serum chemistry and hematology parameters. Calculations on minimal detection requirements for in vivo targeted imaging using these nanoparticles are presented. Indeed, our results indicate that these nanoparticles may serve as a platform for sensitive and specific targeted molecular CT and fluorescence imaging.

    Superselectors: Efficient Constructions and Applications

    Full text link
    We introduce a new combinatorial structure: the superselector. We show that superselectors subsume several important combinatorial structures used in the past few years to solve problems in group testing, compressed sensing, multi-channel conflict resolution, and data security. We prove close upper and lower bounds on the size of superselectors and we provide efficient algorithms for their construction. Although our bounds are very general, when they are instantiated on the combinatorial structures that are particular cases of superselectors (e.g., (p,k,n)-selectors, (d,\ell)-list-disjunct matrices, MUT_k(r)-families, FUT(k,a)-families, etc.), they match the best known bounds in terms of the size of the structures (the relevant parameter in the applications). For appropriate values of the parameters, our results also provide the first efficient deterministic algorithms for the construction of such structures.
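    One of the structures the abstract mentions, list-disjunct matrices, is a workhorse of non-adaptive group testing; the Python sketch below shows the standard decoding rule such matrices support (eliminate every item that appears in a negative test, often called COMP). The random Bernoulli design here is an illustrative stand-in, not the deterministic construction the paper provides:

        import numpy as np

        def run_tests(M, defectives):
            # A pooled test is positive iff it contains a defective item.
            return M[:, defectives].any(axis=1)

        def comp_decode(M, outcomes):
            # Keep exactly the items that never appear in a negative test.
            in_negative_test = M[~outcomes].any(axis=0)
            return np.flatnonzero(~in_negative_test)

        rng = np.random.default_rng(1)
        n_items, n_tests, d = 200, 60, 3
        M = rng.random((n_tests, n_items)) < 1.0 / d  # Bernoulli design
        defectives = [7, 50, 123]
        print(comp_decode(M, run_tests(M, defectives)))

    With a design this small the decoder may keep a few false positives alongside the true defectives; a matrix of sufficient disjunctness makes the output exact, which is what the size bounds in the abstract quantify.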

    Topical Ferumoxytol Nanoparticles Disrupt Biofilms and Prevent Tooth Decay in Vivo Via Intrinsic Catalytic Activity

    Get PDF
    Ferumoxytol is a nanoparticle formulation approved by the U.S. Food and Drug Administration for systemic use to treat iron deficiency. Here, we show that, in addition, ferumoxytol disrupts intractable oral biofilms and prevents tooth decay (dental caries) via intrinsic peroxidase-like activity. Ferumoxytol binds within the biofilm ultrastructure and generates free radicals from hydrogen peroxide (H2O2), causing in situ bacterial death via cell membrane disruption and extracellular polymeric substances matrix degradation. In combination with low concentrations of H2O2, ferumoxytol inhibits biofilm accumulation on natural teeth in a human-derived ex vivo biofilm model, and prevents acid damage of the mineralized tissue. Topical oral treatment with ferumoxytol and H2O2 suppresses the development of dental caries in vivo, preventing the onset of severe tooth decay (cavities) in a rodent model of the disease. Microbiome and histological analyses show no adverse effects on oral microbiota diversity, and gingival and mucosal tissues. Our results reveal a new biomedical application for ferumoxytol as a topical treatment of a prevalent and costly biofilm-induced oral disease.

    Fading histograms in detecting distribution and concept changes

    Get PDF
    The remarkable number of real applications under dynamic scenarios is driving a novel ability to generate and gather information. Nowadays, a massive amount of information is generated at a high-speed rate, known as data streams. Moreover, data are collected under evolving environments. Due to memory restrictions, data must be promptly processed and discarded immediately. Therefore, dealing with evolving data streams raises two main questions: (i) how to remember discarded data? and (ii) how to forget outdated data? To maintain an updated representation of the time-evolving data, this paper proposes fading histograms. Regarding the dynamics of nature, changes in data are detected through a windowing scheme that compares data distributions computed by the fading histograms: the adaptive cumulative windows model (ACWM). The online monitoring of the distance between data distributions is evaluated using a dissimilarity measure based on the asymmetry of the Kullback–Leibler divergence. The experimental results support the ability of fading histograms to provide an updated representation of data. This property works in favor of detecting distribution changes with a smaller detection delay when compared with standard histograms. With respect to the detection of concept changes, the ACWM is compared with three known algorithms from the literature, using artificial data and public data sets, and presents better results. Furthermore, the proposed method was extended to multidimensional settings, and the experiments performed show the ability of the ACWM to detect distribution changes in these settings.
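    A minimal Python sketch of the fading-histogram idea as the abstract describes it: every bin count decays by a fading factor before each new observation is counted, so outdated data is forgotten gradually rather than all at once. The bin layout, fading factor, and the symmetrized Kullback–Leibler comparison below are illustrative choices, not the exact ACWM parameters:

        import math

        class FadingHistogram:
            def __init__(self, lo, hi, bins=20, alpha=0.999):
                self.lo, self.hi, self.alpha = lo, hi, alpha
                self.counts = [0.0] * bins

            def update(self, x):
                # Fade all bins, then count the new observation.
                self.counts = [c * self.alpha for c in self.counts]
                width = (self.hi - self.lo) / len(self.counts)
                b = min(len(self.counts) - 1,
                        max(0, int((x - self.lo) / width)))
                self.counts[b] += 1.0

            def pmf(self, smooth=1e-9):
                # Normalize (with smoothing) so histograms are comparable.
                total = sum(self.counts) + smooth * len(self.counts)
                return [(c + smooth) / total for c in self.counts]

        def symmetric_kl(p, q):
            # Symmetrized Kullback-Leibler divergence between two pmfs.
            kl = lambda a, b: sum(x * math.log(x / y) for x, y in zip(a, b))
            return 0.5 * (kl(p, q) + kl(q, p))

    In a windowing scheme like the ACWM, a histogram summarizing a reference window is compared against a fading histogram of recent data, and a change is flagged when the divergence between their pmf() outputs crosses a threshold.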

    Emergence of Superlattice Dirac Points in Graphene on Hexagonal Boron Nitride

    Get PDF
    The Schr\"odinger equation dictates that the propagation of nearly free electrons through a weak periodic potential results in the opening of band gaps near points of the reciprocal lattice known as Brillouin zone boundaries. However, in the case of massless Dirac fermions, it has been predicted that the chirality of the charge carriers prevents the opening of a band gap and instead new Dirac points appear in the electronic structure of the material. Graphene on hexagonal boron nitride (hBN) exhibits a rotation dependent Moir\'e pattern. In this letter, we show experimentally and theoretically that this Moir\'e pattern acts as a weak periodic potential and thereby leads to the emergence of a new set of Dirac points at an energy determined by its wavelength. The new massless Dirac fermions generated at these superlattice Dirac points are characterized by a significantly reduced Fermi velocity. The local density of states near these Dirac cones exhibits hexagonal modulations indicating an anisotropic Fermi velocity.Comment: 16 pages, 6 figure

    Fingerprints in Compressed Strings

    Get PDF
    The Karp-Rabin fingerprint of a string is a type of hash value that, due to its strong properties, has been used in many string algorithms. In this paper we show how to construct a data structure for a string S of size N, compressed by a context-free grammar of size n, that answers fingerprint queries. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLPs) we get O(log N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log log N) query time. Hence, our data structures have the same time and space complexity as random access in SLPs. We utilize the fingerprint data structures to solve the longest common extension (LCE) problem in query time O(log N log l) and O(log l log log l + log log N) for SLPs and Linear SLPs, respectively. Here, l denotes the length of the LCE.
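    The primitive behind these queries is the algebraic composition property of Karp-Rabin fingerprints, sketched below in Python for the uncompressed case: with prefix fingerprints of S stored as polynomial hashes in a base R modulo a prime P, any substring fingerprint follows from two prefix fingerprints and one precomputed power. The paper's contribution is computing the same quantity from the grammar in O(n) space, without storing all N prefix values; the base and modulus here are illustrative choices.

        P, R = (1 << 61) - 1, 131          # prime modulus and base

        def prefix_fingerprints(s):
            # F[j] is the fingerprint of the prefix s[0 .. j-1].
            F = [0]
            for ch in s:
                F.append((F[-1] * R + ord(ch)) % P)
            return F

        def substring_fingerprint(F, pw, i, j):
            # Fingerprint of s[i .. j] (inclusive) via the composition
            # rule f(s[0..j]) = f(s[0..i-1]) * R^(j-i+1) + f(s[i..j]).
            return (F[j + 1] - F[i] * pw[j + 1 - i]) % P

        s = "abracadabra"
        F = prefix_fingerprints(s)
        pw = [pow(R, e, P) for e in range(len(s) + 1)]
        # "abra" occurs at positions 0..3 and 7..10:
        assert substring_fingerprint(F, pw, 0, 3) == \
               substring_fingerprint(F, pw, 7, 10)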

    Quantum Algorithms for the Most Frequently String Search, Intersection of Two String Sequences and Sorting of Strings Problems

    Full text link
    We study algorithms for solving three problems on strings. The first is the Most Frequently String Search Problem: given a sequence of n strings of length k, find the string that occurs in the sequence most often. We propose a quantum algorithm with query complexity Õ(n√k). This algorithm shows a speed-up over the deterministic algorithm, which requires Ω(nk) queries. The second is searching for the intersection of two sequences of strings. All strings have the same length k; the size of the first set is n and the size of the second set is m. We propose a quantum algorithm with query complexity Õ((n+m)√k), a speed-up over the deterministic algorithm, which requires Ω((n+m)k) queries. The third problem is sorting n strings of length k. On the one hand, it is known that quantum algorithms cannot sort arbitrary objects asymptotically faster than classical ones. On the other hand, we focus on sorting strings, which are not arbitrary objects. We propose a quantum algorithm with query complexity O(n (log n)^2 √k). This algorithm shows a speed-up over the deterministic algorithm (radix sort), which requires Ω((n+d)k) queries, where d is the size of the alphabet. Comment: The paper was presented at TPNC 201