    Trees and Markov convexity

    We show that an infinite weighted tree admits a bi-Lipschitz embedding into Hilbert space if and only if it does not contain arbitrarily large complete binary trees with uniformly bounded distortion. We also introduce a new metric invariant called Markov convexity, and show how it can be used to compute the Euclidean distortion of any metric tree up to universal factors.
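    For reference, the notions used above have standard definitions (the paper's exact normalizations may differ); a bi-Lipschitz embedding and its distortion are usually defined as follows:

        % Standard definition of distortion (normalizations vary by author;
        % not taken from this paper). f : X -> Y is a map between metric
        % spaces (X, d_X) and (Y, d_Y).
        \[
          \mathrm{dist}(f) \;=\;
          \Bigl( \sup_{x \neq y} \frac{d_Y(f(x), f(y))}{d_X(x, y)} \Bigr)
          \cdot
          \Bigl( \sup_{x \neq y} \frac{d_X(x, y)}{d_Y(f(x), f(y))} \Bigr)
        \]
        % f is bi-Lipschitz iff dist(f) is finite; the Euclidean distortion
        % c_2(X) is the infimum of dist(f) over all f : X -> \ell_2.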

    Measured descent: A new embedding method for finite metrics

    We devise a new embedding technique, which we call measured descent, based on decomposing a metric space locally, at varying speeds, according to the density of some probability measure. This provides a refined and unified framework for the two primary methods of constructing Fréchet embeddings for finite metrics, due to [Bourgain, 1985] and [Rao, 1999]. We prove that any n-point metric space (X,d) embeds in Hilbert space with distortion O(\sqrt{\alpha_X \log n}), where \alpha_X is a geometric estimate on the decomposability of X. As an immediate corollary, we obtain an O(\sqrt{(\log \lambda_X) \log n}) distortion embedding, where \lambda_X is the doubling constant of X. Since \lambda_X \le n, this result recovers Bourgain's theorem, but when the metric X is, in a sense, "low-dimensional," improved bounds are achieved. Our embeddings are volume-respecting for subsets of arbitrary size. One consequence is the existence of (k, O(\log n)) volume-respecting embeddings for all 1 \le k \le n, which is best possible and answers positively a question posed by U. Feige. Our techniques are also used to answer positively a question of Y. Rabinovich, showing that any weighted n-point planar graph embeds in l_\infty^{O(\log n)} with O(1) distortion. The O(\log n) bound on the dimension is optimal, and improves upon the previously known bound of O((\log n)^2).
    Comment: 17 pages. No figures. Appeared in FOCS '04. To appear in Geometric & Functional Analysis. This version fixes a subtle error in Section 2.
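    To make the classical side of this concrete, here is a minimal sketch of a Bourgain-style Fréchet embedding, in which each coordinate of a point is its distance to a randomly sampled subset. This illustrates the baseline method the paper refines, not measured descent itself; the function name and parameters are illustrative.

        import math
        import random

        def frechet_embedding(D, reps=5, seed=0):
            """Bourgain-style Frechet embedding (illustrative sketch only).

            D: n x n list-of-lists of pairwise distances.
            Each coordinate of point x is d(x, S) = min_{s in S} D[x][s]
            for a random subset S. Sampling subsets at densities 2^-j for
            j = 1..O(log n), each repeated O(log n) times, gives expected
            distortion O(log n) in the classical analysis.
            """
            rng = random.Random(seed)
            n = len(D)
            scales = max(1, math.ceil(math.log2(n)))
            coords = [[] for _ in range(n)]
            for j in range(1, scales + 1):
                for _ in range(reps):
                    # include each point in S independently with prob 2^-j
                    S = [v for v in range(n) if rng.random() < 2.0 ** (-j)]
                    if not S:
                        S = [rng.randrange(n)]
                    for x in range(n):
                        coords[x].append(min(D[x][s] for s in S))
            return coords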

    Calcium-rich gap transients in the remote outskirts of galaxies

    From the first two seasons of the Palomar Transient Factory, we identify three peculiar transients (PTF09dav, PTF10iuv, PTF11bij) with five distinguishing characteristics: peak luminosity in the gap between novae and supernovae (M_R ≈ -15.5 to -16.5), rapid photometric evolution (t_rise ≈ 12-15 days), large photospheric velocities (≈6000 to 11000 km s^(-1)), early spectroscopic evolution into the nebular phase (≈1 to 3 months) and peculiar nebular spectra dominated by calcium. We also culled the extensive decade-long Lick Observatory Supernova Search database and identified an additional member of this group, SN 2007ke. Our choice of photometric and spectroscopic properties was motivated by SN 2005E (Perets et al. 2010). To our surprise, as in the case of SN 2005E, all four members of this group are also clearly offset from the bulk of their host galaxy. Given the well-sampled early- and late-time light curves, we derive ejecta masses in the range of 0.4-0.7 M_⊙. Spectroscopically, we find that there may be diversity in the photospheric phase, but the commonality is in the unusual nebular spectra. Our extensive follow-up observations rule out standard thermonuclear and standard core-collapse explosions for this class of "Calcium-rich gap" transients. If the progenitor is a white dwarf, we are likely seeing a detonation of the white dwarf core and perhaps even shock-front interaction with a previously ejected nova shell. In the less likely scenario of a massive star progenitor, a very non-standard channel specific to a low-metallicity environment needs to be invoked (e.g., ejecta fallback leading to black hole formation). Detection (or lack thereof) of a faint underlying host (dwarf galaxy, cluster) will provide a crucial and decisive diagnostic for choosing between these alternatives.
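    To put the quoted peak magnitudes in physical units, the standard magnitude-to-luminosity conversion can be applied (the solar R-band value of roughly 4.4 is a conventional figure, not taken from this paper):

        % Magnitude-to-luminosity conversion (illustrative worked example).
        \[
          \frac{L_R}{L_\odot} = 10^{-0.4\,(M_R - M_{\odot,R})},
          \qquad
          M_R = -16,\; M_{\odot,R} \approx 4.4
          \;\Rightarrow\;
          L_R \approx 10^{8.2}\, L_\odot
        \]
        % i.e. far brighter than novae but roughly an order of magnitude
        % fainter than typical supernovae, hence a "gap" transient.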

    Wear rate-state interactions within a multi-component system: a study of a gearbox accelerated life testing platform

    The degradation process of complex multi-component systems is highly stochastic in nature. A major side effect of this complexity is that components of such systems may suffer unexpectedly reduced life, as well as faults and failures that decrease the reliability of multi-component systems in industrial environments. In this work we provide maintenance practitioners with an explanation of the nature of some of these unpredictable events, namely the degradation interactions that take place between components. We begin by presenting a general wear model where the degradation process of a component may depend on the operating conditions, the component’s own state, and the state of the other components. We then present our methodology for extracting accurate health indicators from multi-component systems by means of a time-frequency domain analysis. Finally, we present a multi-component system degradation analysis of experimental data generated by a gearbox accelerated life testing platform. In so doing, we demonstrate the importance of modelling the interactions between the system components by showing their effect on component lifetime reduction.
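    As one concrete, purely illustrative example of a time-frequency health indicator, the energy of a vibration signal in a narrow band around a component's characteristic frequency can be tracked over time. The sketch below uses SciPy's spectrogram on synthetic data and is not the paper's actual extraction method; all names and parameters are assumptions.

        import numpy as np
        from scipy.signal import spectrogram

        def band_energy_indicator(signal, fs, f_lo, f_hi):
            """Energy in [f_lo, f_hi] Hz per time slice (illustrative)."""
            f, t, Sxx = spectrogram(signal, fs=fs, nperseg=1024)
            band = (f >= f_lo) & (f <= f_hi)
            return t, Sxx[band, :].sum(axis=0)

        # Synthetic vibration: a 150 Hz "gear-mesh tone" whose amplitude
        # grows with time (mimicking wear), buried in broadband noise.
        fs = 10_000
        tt = np.arange(0.0, 10.0, 1.0 / fs)
        sig = (1 + 0.2 * tt) * np.sin(2 * np.pi * 150 * tt)
        sig += np.random.default_rng(0).standard_normal(tt.size)
        t, indicator = band_energy_indicator(sig, fs, 140.0, 160.0)
        # 'indicator' should trend upward, tracking the simulated wear.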

    Reduction of quantum noise in optical interferometers using squeezed light

    We study the photon counting noise in optical interferometers used for gravitational wave detection. In order to reduce quantum noise, a squeezed vacuum state is injected into the usually unused input port. Here, we specifically investigate the so-called 'dark port case', when the beam splitter is oriented close to 90° to the incoming laser beam, such that nearly all photons go to one output port of the interferometer and only a small fraction of photons is seen in the other port (the 'dark port'). For this case it had been suggested that signal amplification is possible without concurrent noise amplification [R. Barak and Y. Ben-Aryeh, J. Opt. Soc. Am. B 25, 361 (2008)]. We show that by injection of a squeezed vacuum state into the second input port, counting noise is reduced for large values of the squeezing factor; however, the signal is not amplified. The signal strength depends only on the intensity of the laser beam.
    Comment: 8 pages, 1 figure
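    The textbook relation behind this kind of noise reduction (conventions vary; the normalization of the vacuum quadrature variance to 1/4 is a choice made here, not taken from the paper) is:

        % Quadrature variances of squeezed vacuum with squeezing parameter r.
        \[
          \Delta^2 X_1 = \tfrac{1}{4}\, e^{-2r},
          \qquad
          \Delta^2 X_2 = \tfrac{1}{4}\, e^{+2r}
        \]
        % The noise in the squeezed quadrature shrinks by e^{-2r} while the
        % conjugate quadrature grows, preserving the uncertainty bound
        % \Delta X_1 \Delta X_2 >= 1/4.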

    A coral-on-a-chip microfluidic platform enabling live-imaging microscopy of reef-building corals

    Coral reefs, and the unique ecosystems they support, are facing severe threats from human activities and climate change. Our understanding of these threats is hampered by the lack of robust approaches for studying the micro-scale interactions between corals and their environment. Here we present an experimental platform, coral-on-a-chip, combining micropropagation and microfluidics to allow direct microscopic study of live coral polyps. The small and transparent coral micropropagates are ideally suited for live-imaging microscopy, while the microfluidic platform facilitates long-term visualization under controlled environmental conditions. We demonstrate the usefulness of this approach by imaging coral micropropagates at previously unattainable spatio-temporal resolutions, providing new insights into several micro-scale processes, including coral calcification, coral-pathogen interaction and the loss of algal symbionts (coral bleaching). Coral-on-a-chip thus provides a powerful method for studying coral physiology in vivo at the micro-scale, opening new vistas in coral biology.

    Microstructural parameter estimation in vivo using diffusion MRI and structured prior information.

    Diffusion MRI has recently been used with detailed models to probe tissue microstructure. Much of this work has been performed ex vivo with powerful scanner hardware, to gain sensitivity to parameters such as axon radius. By contrast, performing microstructure imaging on clinical scanners is extremely challenging.

    Analytical evaluation of the output variability in production systems with general Markovian structure

    Performance evaluation models are used by companies to design, adapt, manage and control their production systems. In the literature, most of the effort has been dedicated to the development of efficient methodologies to estimate first-moment performance measures of production systems, such as the expected production rate, the buffer levels and the mean completion time. However, there is industrial evidence that the variability of the production output may drastically impact the capability of managing the system operations, causing the observed system performance to be highly different from what was expected. This paper presents a general methodology to analyze the variability of the output of unreliable single machines and small-scale multi-stage production systems modeled as general Markovian structures. The generality of the approach allows modeling and studying performance measures such as the variance of the cumulated output and the variance of the inter-departure time under many system configurations within a unified framework. The proposed method is based on the characterization of the autocorrelation structure of the system output. The impact of different system parameters on the output variability is investigated and characterized. Moreover, managerial actions that reduce the output variability are identified. The computational complexity of the method is studied on an extensive set of computer experiments. Finally, the limits of this approach in studying long multi-stage production lines are highlighted.
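    The paper's analysis is analytical, but the quantities involved are straightforward to estimate by simulation. Below is a minimal Monte Carlo sketch for the simplest special case of a general Markovian structure: a single unreliable machine with geometric up and down times. All parameter values are illustrative, not taken from the paper.

        import random
        import statistics

        def simulate_cumulated_output(p, r, T, runs=10_000, seed=0):
            """Mean and variance of the cumulated output over T slots.

            Geometric machine model: when up, the machine produces one
            part per slot and fails with probability p; when down, it
            produces nothing and is repaired with probability r.
            """
            rng = random.Random(seed)
            totals = []
            for _ in range(runs):
                up, produced = True, 0
                for _ in range(T):
                    if up:
                        produced += 1
                        up = rng.random() >= p  # stays up unless it fails
                    else:
                        up = rng.random() < r   # repaired with prob r
                totals.append(produced)
            return statistics.mean(totals), statistics.variance(totals)

        mean_out, var_out = simulate_cumulated_output(p=0.01, r=0.1, T=1000)
        # Long-run throughput is r / (p + r); the variance of the cumulated
        # output grows linearly in T at a rate set by the autocorrelation
        # of the up/down process.
        print(mean_out / 1000, var_out / 1000)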