    Bright tripartite entanglement in triply concurrent parametric oscillation

    We show that a novel optical parametric oscillator, based on concurrent χ^(2) nonlinearities, can produce, above threshold, bright output beams of macroscopic intensity that exhibit strong tripartite continuous-variable entanglement. We also show that there are two ways in which the system can exhibit a new three-mode form of the Einstein-Podolsky-Rosen paradox, and we calculate the extra-cavity fluctuation spectra that may be measured to verify our predictions.
    Comment: title change, expanded intro and discussion of experimental aspects, 1 new figure. Conclusions unaltered.
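
    One standard way to quantify a multimode Einstein-Podolsky-Rosen paradox of this kind is through Reid-type inferred variances, here written for mode 1 inferred from joint measurements on modes 2 and 3. This is a generic illustration rather than the exact combinations used in the paper, in the quadrature convention $[\hat{x},\hat{p}]=2i$ where the Heisenberg bound on the variance product is 1:

    V_{\mathrm{inf}}(\hat{x}_1) = V\big(\hat{x}_1 - g_x(\hat{x}_2 + \hat{x}_3)\big), \qquad V_{\mathrm{inf}}(\hat{p}_1) = V\big(\hat{p}_1 + g_p(\hat{p}_2 + \hat{p}_3)\big),

    with the gains g_x and g_p chosen to minimise each inferred variance. An EPR paradox is demonstrated whenever V_{\mathrm{inf}}(\hat{x}_1)\, V_{\mathrm{inf}}(\hat{p}_1) < 1.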

    Temporal simulations and stability analyses of elastic splitter plates interacting with cylinder wake flow

    Instabilities developing in a configuration consisting of an elastic plate clamped behind a rigid cylinder are analysed in this paper. The interaction between the wake flow generated by the cylinder and the elastic plate leads to self-developing vortex-induced vibrations. Depending on the stiffness of the elastic plate, the plate may oscillate about a non-deviated or a deviated mean transverse position. After presenting non-linear results computed with time-marching simulations, we analyse the instabilities in terms of a fully coupled fluid-structure eigenvalue analysis. We show that the linear stability analysis is able to predict the unstable regions and provides a good prediction of the unstable vibration frequencies. The mean deviation is characterized by a steady divergence mode in the eigenvalue spectrum, while unstable, unsteady vortex-induced vibration modes show lock-in phenomena.
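
    As a minimal illustration of how such a coupled eigenvalue analysis classifies modes, the sketch below solves a generalized eigenvalue problem A q = λ B q for a toy linearized system; the matrices are placeholders, not the actual discretized fluid-structure operators.

    # Hedged sketch: linear stability of a coupled fluid-structure system,
    # written as the generalized eigenvalue problem  A q = lambda B q.
    # A and B below are small placeholder matrices, not real discretizations.
    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(0)
    n = 6                             # size of the toy coupled state vector
    A = rng.standard_normal((n, n))   # placeholder linearized dynamics operator
    B = np.eye(n)                     # placeholder mass matrix

    eigvals, eigvecs = eig(A, B)      # solve A q = lambda B q

    # A mode is linearly unstable if Re(lambda) > 0; Im(lambda) gives its frequency.
    for lam in eigvals[eigvals.real > 0]:
        print(f"unstable mode: growth rate {lam.real:.3f}, frequency {abs(lam.imag):.3f}")
    # A purely real unstable eigenvalue (zero frequency) corresponds to a steady
    # divergence mode, like the one associated with the mean plate deviation.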

    The analyticity region of the hard sphere gas. Improved bounds

    We find an improved estimate of the radius of analyticity of the pressure of the hard-sphere gas in d dimensions. The estimates are determined by the volumes of multidimensional regions that can be computed numerically. For d = 2, for instance, our estimate is about 40% larger than the classical one.
    Comment: 4 pages, to appear in Journal of Statistical Physics.
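
    As a generic stand-in for the kind of numerical volume computation mentioned above (the actual regions in the paper may be different), the sketch below uses Monte Carlo sampling to estimate the volume of the intersection of two d-dimensional unit balls, an overlap volume of the type that appears in hard-sphere calculations.

    # Hedged illustration: Monte Carlo estimate of the volume of a
    # multidimensional region -- here, the overlap of two unit balls in R^d
    # whose centres are a distance r apart.
    import numpy as np

    def overlap_volume(d: int, r: float, samples: int = 200_000, seed: int = 0) -> float:
        rng = np.random.default_rng(seed)
        # Sample uniformly in the bounding box of the first ball, centred at the origin.
        pts = rng.uniform(-1.0, 1.0, size=(samples, d))
        box_volume = 2.0 ** d
        centre2 = np.zeros(d)
        centre2[0] = r                # centre of the second ball
        inside = (np.sum(pts**2, axis=1) <= 1.0) & (np.sum((pts - centre2) ** 2, axis=1) <= 1.0)
        return box_volume * inside.mean()

    print(overlap_volume(d=2, r=0.5))  # area of the lens-shaped overlap in 2D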

    RL-LIM: Reinforcement Learning-based Locally Interpretable Modeling

    Understanding black-box machine learning models is important for their widespread adoption. However, developing globally interpretable models that explain the behavior of an entire model is challenging. An alternative approach is to explain black-box models by explaining individual predictions with a locally interpretable model. In this paper, we propose a novel method for locally interpretable modeling: Reinforcement Learning-based Locally Interpretable Modeling (RL-LIM). RL-LIM employs reinforcement learning to select a small number of samples and distill the black-box model's predictions into a low-capacity, locally interpretable model. Training is guided by a reward obtained directly by measuring the agreement of the predictions of the locally interpretable model with those of the black-box model. RL-LIM nearly matches the overall prediction performance of black-box models while yielding human-like interpretability, and it significantly outperforms state-of-the-art locally interpretable models in terms of overall prediction performance and fidelity.
    Comment: 18 pages, 7 figures, 7 tables.
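
    The sketch below illustrates the general idea under several simplifying assumptions (the selector, reward, and hyperparameters are placeholders, not the authors' design): a REINFORCE-style selector scores training instances for a given test point, a small ridge model is fit on the selected instances against the black-box predictions, and the agreement between the local and black-box predictions is the reward that trains the selector.

    # Hedged toy sketch of RL-based instance selection for local distillation.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))

    def black_box(x):
        # Stand-in for an opaque model whose predictions we want to explain locally.
        return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

    y_bb = black_box(X)
    w = np.zeros(X.shape[1] + 1)          # selector parameters (linear in |x - x_test|, plus bias)

    def selection_probs(x_test):
        feats = np.hstack([np.abs(X - x_test), np.ones((len(X), 1))])
        return 1.0 / (1.0 + np.exp(-(feats @ w))), feats

    def local_prediction(idx, x_test):
        # Fit a ridge model on the selected instances against the black-box labels.
        Xs = np.hstack([X[idx], np.ones((len(idx), 1))])
        beta = np.linalg.solve(Xs.T @ Xs + 1e-2 * np.eye(Xs.shape[1]), Xs.T @ y_bb[idx])
        return np.append(x_test, 1.0) @ beta

    lr = 0.05
    for step in range(300):
        x_test = rng.normal(size=5)
        probs, feats = selection_probs(x_test)
        chosen = rng.random(len(X)) < probs            # sample a selection mask
        if chosen.sum() < 6:
            continue
        reward = -abs(local_prediction(np.where(chosen)[0], x_test)
                      - black_box(x_test[None, :])[0])  # fidelity-style reward
        # REINFORCE: move selection log-probabilities in the direction of the reward.
        w += lr * (feats.T @ ((chosen.astype(float) - probs) * reward)) / len(X)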

    The final destination of French households' financial investments

    French households have changed the structure of their portfolios over the past fifteen years in favour of life-insurance contracts. They invest a growing share of their savings abroad, via financial intermediaries.
    Keywords: households, financial wealth, savings, intermediation, deposits, loans, debt securities, marketable securities, equities, mutual funds (OPCVM), insurance companies, financial depth, securities-holding databases.

    Quadripartite continuous-variable entanglement via quadruply concurrent downconversion

    We investigate an intra-cavity coupled down-conversion scheme to generate quadripartite entanglement using concurrently resonant nonlinearities. We verify that quadripartite entanglement is present in this system by calculating the output fluctuation spectra and then considering violations of optimized inequalities of the van Loock-Furusawa type. The entanglement characteristics both above and below the oscillation threshold are considered. We also present analytic solutions for the quadrature operators and the van Loock-Furusawa correlations in the undepleted pump approximation.
    Comment: 9 pages, 5 figures.
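
    For context, inequalities of the van Loock-Furusawa type take the following generic quadripartite form (shown here in the convention $[\hat{x},\hat{p}]=2i$, where the separability bound is 4; other conventions rescale it, and the exact combinations used in the paper may differ):

    V(\hat{x}_1 - \hat{x}_2) + V(\hat{p}_1 + \hat{p}_2 + g_3\hat{p}_3 + g_4\hat{p}_4) \geq 4,
    V(\hat{x}_2 - \hat{x}_3) + V(\hat{p}_2 + \hat{p}_3 + g_1\hat{p}_1 + g_4\hat{p}_4) \geq 4,
    V(\hat{x}_3 - \hat{x}_4) + V(\hat{p}_3 + \hat{p}_4 + g_1\hat{p}_1 + g_2\hat{p}_2) \geq 4,

    with the gains g_k optimized to minimise the left-hand sides. Simultaneous violation of a suitable set of such inequalities rules out every bipartite splitting of the four modes and hence certifies genuine quadripartite entanglement.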

    Search-Adaptor: Text Embedding Customization for Information Retrieval

    Text embeddings extracted by pre-trained Large Language Models (LLMs) have significant potential to improve information retrieval and search. Beyond the zero-shot setup in which they are conventionally used, exploiting the information in relevant query-corpus paired data can further boost LLM capabilities. In this paper, we propose a novel method, Search-Adaptor, for customizing LLMs for information retrieval in an efficient and robust way. Search-Adaptor modifies the original text embeddings generated by pre-trained LLMs and can be integrated with any LLM, including those only available via APIs. On multiple real-world English and multilingual retrieval datasets, we show consistent and significant performance benefits for Search-Adaptor -- e.g., more than 5.2% improvement over the Google Embedding APIs in nDCG@10 averaged over 13 BEIR datasets.
    Comment: 9 pages, 2 figures.
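
    A minimal sketch of this kind of embedding customization is shown below; the adapter architecture, loss, and hyperparameters are assumptions for illustration, not the paper's exact design. A small residual network is trained on top of frozen query/document embeddings (for example, embeddings fetched from an API) using query-positive pairs and an in-batch contrastive ranking loss.

    # Hedged sketch: a residual adapter on top of frozen text embeddings.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Adaptor(nn.Module):
        def __init__(self, dim: int, hidden: int = 256):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

        def forward(self, emb):
            # Residual update keeps the adapted embedding close to the original one.
            return F.normalize(emb + self.net(emb), dim=-1)

    dim = 768                                            # assumed embedding size
    adaptor = Adaptor(dim)
    opt = torch.optim.Adam(adaptor.parameters(), lr=1e-4)

    # q_emb[i] and d_emb[i] form a relevant (query, document) pair of frozen embeddings.
    q_emb = F.normalize(torch.randn(32, dim), dim=-1)    # placeholder data
    d_emb = F.normalize(torch.randn(32, dim), dim=-1)

    for _ in range(100):
        q, d = adaptor(q_emb), adaptor(d_emb)
        scores = q @ d.T / 0.05                          # cosine similarity / temperature
        loss = F.cross_entropy(scores, torch.arange(len(q)))  # in-batch negatives
        opt.zero_grad()
        loss.backward()
        opt.step()

    # At retrieval time, documents are ranked by cosine similarity of the adapted embeddings.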

    LANISTR: Multimodal Learning from Structured and Unstructured Data

    Multimodal large-scale pretraining has shown impressive performance for unstructured data, including language, image, audio, and video. However, a prevalent real-world scenario involves the combination of structured data types (tabular, time-series) with unstructured data, which has so far been understudied. To bridge this gap, we propose LANISTR, an attention-based framework to learn from LANguage, Image, and STRuctured data. The core of LANISTR's methodology is rooted in masking-based training applied across both unimodal and multimodal levels. In particular, we introduce a new similarity-based multimodal masking loss that enables it to learn cross-modal relations from large-scale multimodal data with missing modalities. On two real-world datasets, MIMIC-IV (healthcare) and Amazon Product Review (retail), LANISTR demonstrates remarkable absolute improvements of 6.6% (AUROC) and up to 14% (accuracy) when fine-tuned on 0.1% and 0.01% of labeled data, respectively, compared to state-of-the-art alternatives. Notably, these improvements are observed even in the presence of considerable missingness ratios of 35.7% and 99.8% in the respective datasets.
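
    The sketch below illustrates one plausible reading of a similarity-based multimodal masking objective; it is an assumption-laden toy, not the paper's exact loss. One modality's embedding is masked out, and the fused embedding of the masked input is pulled towards the fused embedding of the full input with a cosine-similarity term, which is what lets the model cope with missing modalities.

    # Hedged sketch: similarity-based masking across modalities.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    dim = 128
    # Toy fusion head; in practice this would be an attention-based multimodal encoder.
    fuse = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def fused(text, image, tabular):
        return F.normalize(fuse(torch.cat([text, image, tabular], dim=-1)), dim=-1)

    # Placeholder unimodal embeddings; these would come from per-modality encoders.
    text, image, tabular = (torch.randn(16, dim) for _ in range(3))

    full = fused(text, image, tabular)
    masked = fused(text, torch.zeros_like(image), tabular)   # image modality masked out

    # Similarity loss: the masked-input embedding should stay close to the full one.
    sim_loss = 1.0 - F.cosine_similarity(masked, full.detach(), dim=-1).mean()
    sim_loss.backward()   # combined with unimodal masking losses during training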