    Dark Energy and Dark Matter Interaction: Kernels of Volterra Type and Coincidence Problem

    We study a new exactly solvable model of coupling between Dark Energy and Dark Matter, in which the kernel of the non-gravitational interaction is represented by an integral Volterra-type operator well known from the classical theory of fading memory. Exact solutions of this isotropic homogeneous cosmological model are classified with respect to the sign of the discriminant of the cubic characteristic polynomial associated with the key equation of the model. The energy-density scalars of the Dark Energy and Dark Matter, the Hubble function, and the acceleration parameter are presented explicitly; the scale factor is found in quadratures. Asymptotic analysis of the exact solutions shows that the Big Rip, Little Rip, and Pseudo Rip regimes can be realized for specific choices of the guiding parameters of the model. We show that the Coincidence problem can be solved if we take into account the memory effect associated with interactions in the Dark Sector of the Universe.
    Comment: 15 pages, 0 figures. Invited paper for the Special Issue "Cosmological Inflation, Dark Matter and Dark Energy" of the journal Symmetry (MDPI); Special Issue Editor: Kazuharu Bamba
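    For orientation, a minimal sketch of a fading-memory coupling of the kind described here (the balance equations are standard; the exponential kernel K, memory time T, and coupling constant λ are illustrative assumptions of this sketch, not the paper's exact model):

    % Illustrative Volterra-type dark-sector coupling; the exponential kernel,
    % memory time T, and coupling constant \lambda are assumptions of this sketch.
    \begin{align}
      \dot{\rho}_{\mathrm{DM}} + 3H\rho_{\mathrm{DM}} &= -Q(t), \qquad
      \dot{\rho}_{\mathrm{DE}} + 3H(1+w)\rho_{\mathrm{DE}} = +Q(t), \\
      Q(t) &= \lambda \int_{t_0}^{t} K(t-\tau)\,\rho_{\mathrm{DM}}(\tau)\,d\tau,
      \qquad K(s) = \frac{1}{T}\,e^{-s/T}.
    \end{align}

    With an exponential kernel, differentiating the integral term can convert the coupled system into linear differential equations of third order, whose characteristic polynomial is cubic; this is how a discriminant-based classification of the kind mentioned above can arise.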

    Relativistic Neutron Stars: Rheological Type Extensions of the Equations of State

    Based on the Rheological Paradigm, we extend the equations of state for relativistic spherically symmetric static neutron stars by taking into consideration the derivative of the matter pressure along the so-called director four-vector. The modified equations of state are applied to a model of a zero-temperature neutron condensate. This model includes one new parameter with the dimensionality of length, which describes the rheological-type screening inside the neutron star. As an illustration of the new approach, we consider the rheological-type generalization of the non-relativistic Lane-Emden theory and find numerical profiles of the pressure for a number of values of the new guiding parameter. We find that the rheological-type self-interaction makes the neutron star more compact: the radius of the star, identified with the first zero of the pressure profile, decreases as the modulus of the rheological-type guiding parameter grows.
    Comment: 14 pages, 1 figure, 1 table
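    As background for the Lane-Emden part, here is a minimal sketch of the classical (non-rheological) Lane-Emden integration; the polytropic index n and integration bounds are illustrative choices, and the paper's rheological correction term is not reproduced:

    import numpy as np
    from scipy.integrate import solve_ivp

    def lane_emden(n=1.5):
        """Integrate theta'' + (2/xi) theta' + theta^n = 0, theta(0)=1, theta'(0)=0."""
        def rhs(xi, y):
            theta, dtheta = y
            # clip theta at zero so theta**n stays real past the surface
            return [dtheta, -2.0 / xi * dtheta - max(theta, 0.0) ** n]

        surface = lambda xi, y: y[0]   # event fires where theta = 0 (the stellar surface)
        surface.terminal = True
        sol = solve_ivp(rhs, (1e-6, 20.0), [1.0, 0.0], events=surface,
                        max_step=0.01, rtol=1e-8)
        return sol.t, sol.y[0], sol.t_events[0][0]

    xi, theta, xi1 = lane_emden(n=1.5)
    print(f"first zero of the profile (stellar radius) at xi_1 ~ {xi1:.3f}")

    For n = 1.5 this reproduces the textbook value xi_1 ≈ 3.65; in the paper's extension, the analogous first zero moves inward as the rheological parameter grows.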

    Practical approaches to principal component analysis in the presence of missing values

    Principal component analysis (PCA) is a classical data analysis technique that finds linear transformations of the data that retain the maximal amount of variance. We study the case where some of the data values are missing and show that this problem has many features usually associated with nonlinear models, such as overfitting and bad locally optimal solutions. A probabilistic formulation of PCA provides a good foundation for handling missing values, and we introduce formulas for doing so. In the case of high-dimensional and very sparse data, overfitting becomes a severe problem and traditional algorithms for PCA are very slow. We introduce a novel fast algorithm and extend it to variational Bayesian learning. Different versions of PCA are compared in artificial experiments, demonstrating the effects of regularization and modeling of posterior variance. The scalability of the proposed algorithm is demonstrated by applying it to the Netflix problem.
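    As a concrete illustration of the missing-value setting, here is a minimal EM-style sketch that alternates imputation and refitting (our simplification for exposition; it is not the paper's probabilistic or variational Bayesian algorithm, and all names are ours):

    import numpy as np

    def pca_with_missing(X, rank=2, n_iter=50):
        """Iteratively impute missing entries from a low-rank PCA model."""
        X = X.astype(float)
        missing = np.isnan(X)
        X_hat = np.where(missing, np.nanmean(X, axis=0), X)  # start from column means
        for _ in range(n_iter):
            mu = X_hat.mean(axis=0)
            U, s, Vt = np.linalg.svd(X_hat - mu, full_matrices=False)
            low_rank = mu + (U[:, :rank] * s[:rank]) @ Vt[:rank]
            X_hat[missing] = low_rank[missing]   # re-impute only the missing cells
        return X_hat, Vt[:rank]                  # completed data, principal axes

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 10))  # rank-3 data
    X[rng.random(X.shape) < 0.2] = np.nan                     # 20% missing at random
    X_filled, components = pca_with_missing(X, rank=3)

    When the data are very sparse, this plain alternation overfits the observed entries, which is precisely the failure mode the abstract addresses through regularization and modeling of posterior variance.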

    Many-photon scattering and entangling in a waveguide with a Λ-type atom

    We develop an analytical theory that describes the simultaneous transmission of several photons through a waveguide coupled to a Λ-type atom. We show that after transmission of a short few-photon pulse, the final state of the atom and all the photons is a genuine multipartite entangled state belonging to the W class. The parameters of the input pulse are optimized to maximize the efficiency of three- and four-partite W-state production.
    Comment: 9 pages, 5 figures
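    For reference, the W class referred to above, in its standard three-qubit form together with the N-partite generalization (a single excitation shared symmetrically):

    % Standard definition of the W states; 1_k marks the excited qubit at position k.
    \begin{equation}
      |W_3\rangle = \frac{1}{\sqrt{3}}\left(|100\rangle + |010\rangle + |001\rangle\right),
      \qquad
      |W_N\rangle = \frac{1}{\sqrt{N}} \sum_{k=1}^{N} |0\cdots 0\, 1_k\, 0\cdots 0\rangle.
    \end{equation}

    W states remain entangled after the loss of any single party, which makes them natural targets for schemes that entangle itinerant photons with an atom.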

    Hierarchical Imitation Learning with Vector Quantized Models

    The ability to plan actions on multiple levels of abstraction enables intelligent agents to solve complex tasks effectively. However, learning the models for both low- and high-level planning from demonstrations has proven challenging, especially with higher-dimensional inputs. To address this issue, we propose to use reinforcement learning to identify subgoals in expert trajectories by associating the magnitude of the rewards with the predictability of low-level actions given the state and the chosen subgoal. We build a vector-quantized generative model for the identified subgoals to perform subgoal-level planning. In experiments, the algorithm excels at solving complex, long-horizon decision-making problems, outperforming the state of the art. Because of its ability to plan, our algorithm can find better trajectories than the ones in the training set.
    Comment: To appear at ICML 2023
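    To make the vector-quantization step concrete, a minimal nearest-codebook lookup (illustrative only; the codebook size, dimensionality, and names are our assumptions, and the paper's generative model and training procedure are not reproduced):

    import numpy as np

    def quantize(z, codebook):
        """Snap each row of z to the index and vector of its nearest codebook entry."""
        # squared Euclidean distance between every embedding and every code
        d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        idx = d2.argmin(axis=1)
        return idx, codebook[idx]

    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(16, 8))   # a vocabulary of 16 discrete subgoals, dim 8
    z = rng.normal(size=(5, 8))           # 5 continuous subgoal embeddings
    idx, z_q = quantize(z, codebook)
    print(idx)                            # the discrete subgoal ids to plan over

    Discretizing subgoals this way is what makes subgoal-level planning tractable: the planner searches over a finite code vocabulary instead of a continuous embedding space.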

    New Antituberculosis Drug FS-1

    The new iodine complex (FS-1), comprising molecular iodine coordinated by lithium and magnesium halides and bioorganic ligands, possesses high bactericidal activity against various microorganisms, including Mycobacterium sp., Staphylococcus aureus (MRSA and MSSA), Escherichia coli, Pseudomonas aeruginosa, and others. FS-1 acts synergistically with a broad class of antibiotics. In an experimental model of tuberculosis in guinea pigs caused by clinical multidrug-resistant strains of Mycobacterium tuberculosis, FS-1 shows antituberculosis, immunomodulatory, and anti-inflammatory activity. FS-1 is characterized by low acute toxicity and a lack of genotoxicity and mutagenicity. FS-1 is well distributed to organs and tissues, and its pharmacokinetics is linear. The maximum nontoxic dose is 100 mg/kg for rats after 28-day oral administration and 30 mg/kg for rabbits after 180-day oral administration.

    Hybrid Search for Efficient Planning with Completeness Guarantees

    Solving complex planning problems has been a long-standing challenge in computer science. Learning-based subgoal search methods have shown promise in tackling these problems, but they often lack completeness guarantees, meaning that they may fail to find a solution even if one exists. In this paper, we propose an efficient approach to augmenting a subgoal search method to achieve completeness in discrete action spaces. Specifically, we augment the high-level search with low-level actions to execute a multi-level (hybrid) search, which we call complete subgoal search. This solution achieves the best of both worlds: the practical efficiency of high-level search and the completeness of low-level search. We apply the proposed method to a recently introduced subgoal search algorithm and evaluate the algorithm, trained on offline data, on complex planning problems. We demonstrate that our complete subgoal search not only guarantees completeness but can even improve performance in terms of search expansions on instances that the high-level search could solve without low-level augmentation. Our approach makes it possible to apply subgoal-level planning to systems where completeness is a critical requirement.
    Comment: NeurIPS 2023 Poster
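    A schematic sketch of the hybrid idea as plain breadth-first search (our reading of the abstract, not the authors' implementation; `propose_subgoals` and `low_level_actions` are hypothetical callables, and subgoal proposals are assumed to be directly reachable states):

    from collections import deque

    def hybrid_search(start, is_goal, low_level_actions, propose_subgoals):
        """Breadth-first hybrid search over subgoal proposals plus primitive actions."""
        frontier, seen = deque([(start, [])]), {start}
        while frontier:
            state, plan = frontier.popleft()
            if is_goal(state):
                return plan
            # Expand learned high-level proposals (efficiency) together with the
            # primitive actions (completeness): the primitives alone already
            # cover the whole discrete action space.
            successors = propose_subgoals(state) + [a(state) for a in low_level_actions]
            for nxt in successors:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [nxt]))
        return None  # the search is exhaustive, so None means no solution exists

    Completeness holds because the primitive successors are always generated: in the worst case the procedure degenerates to exhaustive low-level search, while the subgoal proposals only add long-range shortcuts.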

    Advanced source separation methods with applications to spatio-temporal datasets

    Latent variable models are useful tools for statistical data analysis in many applications. Examples of popular models include factor analysis, state-space models, and independent component analysis. These types of models can be used to solve the source separation problem, in which the latent variables should have a meaningful interpretation and represent the actual sources generating the data. Source separation methods are the main focus of this work. Bayesian statistical theory provides a principled way to learn latent variable models and therefore to solve the source separation problem. The first part of this work studies variational Bayesian methods and their application to different latent variable models. The properties of variational Bayesian methods are investigated both theoretically and experimentally using linear source separation models. A new nonlinear factor analysis model which restricts the generative mapping to the practically important case of post-nonlinear mixtures is presented. The variational Bayesian approach to learning nonlinear state-space models is studied as well. This method is applied to the practical problem of detecting changes in the dynamics of complex nonlinear processes. The main drawback of Bayesian methods is their high computational burden. This complicates their use for exploratory data analysis, in which observed data regularities often suggest what kinds of models could be tried. Therefore, the second part of this work proposes several faster source separation algorithms implemented in a common algorithmic framework. The proposed approaches separate the sources by analyzing their spectral contents, decoupling their dynamic models, or optimizing their prominent variance structures. These algorithms are applied to spatio-temporal datasets containing global climate measurements from a long period of time.
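    As one concrete instance of separating sources through their temporal structure, a minimal AMUSE-style sketch (a classical second-order method, shown for illustration; it is not claimed to be one of the algorithms proposed in this work, and the lag is an arbitrary choice):

    import numpy as np

    def amuse(X, lag=1):
        """Separate sources by diagonalizing a time-lagged covariance (AMUSE)."""
        X = X - X.mean(axis=1, keepdims=True)      # rows are channels, columns are time
        d, E = np.linalg.eigh(np.cov(X))           # whitening from zero-lag covariance
        Z = (E @ np.diag(1.0 / np.sqrt(d)) @ E.T) @ X
        C = Z[:, :-lag] @ Z[:, lag:].T / (Z.shape[1] - lag)
        C = (C + C.T) / 2                          # symmetrized lagged covariance
        _, V = np.linalg.eigh(C)                   # its eigenbasis is the unmixing rotation
        return V.T @ Z                             # estimated sources

    t = np.linspace(0.0, 10.0, 2000)
    S = np.vstack([np.sin(2 * np.pi * 1.0 * t),            # fast sinusoid
                   np.sign(np.sin(2 * np.pi * 0.3 * t))])  # slow square wave
    X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S             # mixed observations
    S_hat = amuse(X, lag=50)                               # recovered up to order and sign

    The method works exactly when the sources have distinct lagged autocorrelations, i.e., distinct spectral contents, which is the property the faster algorithms described above exploit.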