1,807 research outputs found

    Possible solution of the Coriolis attenuation problem

    The most consistently useful simple model for the study of odd deformed nuclei, the particle-rotor model (the strong-coupling limit of the core-particle coupling model), has nevertheless been beset by a long-standing problem: in many cases it is necessary to introduce an ad hoc parameter that reduces the size of the Coriolis interaction coupling the collective and single-particle motions. Of the numerous suggestions put forward for the origin of this supplementary interaction, none of those actually tested by calculations has been accepted as the solution of the problem. In this paper we seek a solution of the difficulty within the framework of a general formalism that starts from the spherical shell model and is capable of treating an arbitrary linear combination of multipole and pairing forces. Restricting the interaction to the familiar sum of a quadrupole multipole force and a monopole pairing force, we have previously studied a semi-microscopic version of the formalism whose framework is nevertheless more comprehensive than any previously applied to the problem. We obtained solutions for low-lying bands of several strongly deformed odd rare-earth nuclei and found good agreement with experiment, except for an exaggerated staggering of levels for K=1/2 bands, which can be understood as a manifestation of the Coriolis attenuation problem. We argue that within the formalism utilized, the only way to improve the physics is to add interactions to the model Hamiltonian. We verify that by adding a magnetic dipole interaction of essentially fixed strength, we can fit the K=1/2 bands without destroying the agreement with other bands. In addition, we show that our solution also fits 163Er, a classic test case of Coriolis attenuation that we had not previously studied.
    Comment: revtex, including 7 figures (postscript), submitted to Phys. Rev.
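
    For reference, a standard form of the particle-rotor Hamiltonian behind the discussion above, with the conventional ad hoc attenuation factor; this is common textbook notation, not necessarily the authors' own:

        % Strong-coupling particle-rotor Hamiltonian; xi = 1 is the
        % unattenuated limit, xi < 1 the ad hoc Coriolis attenuation.
        H = H_{\mathrm{intr}} + \frac{\hbar^{2}}{2\mathcal{J}}\,(\vec{I}-\vec{j})^{2},
        \qquad
        H_{\mathrm{Cor}} = -\,\xi\,\frac{\hbar^{2}}{2\mathcal{J}}\,(I_{+}j_{-}+I_{-}j_{+})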

    Automatic Classification of Variable Stars in Catalogs with missing data

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks, probabilistic graphical models that allow us to perform inference to predict missing values given observed data and the dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that combines sampling methods with expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model we use three catalogs with missing data (SAGE, 2MASS and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from the missing-data catalogs is included, how our method compares to traditional missing-data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent overall and by 15% for quasar detection, while the computational cost stays the same.
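
    A minimal sketch of the EM-style imputation idea, assuming a multivariate Gaussian stands in for the paper's Bayesian network (the full method also learns discrete dependency structure); X is a 2-D numpy array of catalog features with NaNs for missing entries:

        import numpy as np

        def em_impute(X, n_iter=50):
            """Fill NaNs by iterating: estimate mean/covariance from the
            current completion, then replace each row's missing entries
            with their conditional expectation given the observed ones."""
            X = X.copy()
            missing = np.isnan(X)
            # crude start: column means
            col_means = np.nanmean(X, axis=0)
            X[missing] = np.take(col_means, np.where(missing)[1])
            for _ in range(n_iter):
                mu = X.mean(axis=0)
                cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
                for i in range(X.shape[0]):
                    m = missing[i]
                    if not m.any():
                        continue
                    o = ~m
                    # Gaussian conditional mean of missing given observed
                    cov_mo = cov[np.ix_(m, o)]
                    cov_oo = cov[np.ix_(o, o)]
                    X[i, m] = mu[m] + cov_mo @ np.linalg.solve(cov_oo, X[i, o] - mu[o])
            return X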

    Time-varying Multi-regime Models Fitting by Genetic Algorithms

    Many time series exhibit both nonlinearity and nonstationarity. Though both features have often been taken into account separately, few attempts have been made to model them simultaneously. We consider threshold models and present a general model allowing for different regimes both in time and in levels, where regime transitions may happen according to self-exciting, smoothly varying, or piecewise linear threshold modeling. Since fitting such a model involves choosing a large number of structural parameters, we propose a procedure based on genetic algorithms that evaluates models by means of a generalized identification criterion. The performance of the proposed procedure is illustrated with a simulation study and applications to some real data.
    Keywords: Nonlinear time series; Nonstationary time series; Threshold model
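
    A minimal sketch of the GA-over-structural-parameters idea, for a plain self-exciting threshold (SETAR) model with AIC as a stand-in for the paper's generalized identification criterion; the real procedure also searches time-varying regime boundaries. Here y is a 1-D numpy array:

        import numpy as np

        rng = np.random.default_rng(0)

        def aic_setar(y, r, d, p=1):
            # Fit an AR(p) per regime, regimes split by the threshold
            # y[t-d] <= r; return an AIC-style identification score.
            n = len(y)
            rss, k = 0.0, 0
            for low_regime in (True, False):
                idx = [t for t in range(max(p, d), n) if (y[t - d] <= r) == low_regime]
                if len(idx) < p + 2:
                    return np.inf                    # degenerate split
                X = np.column_stack([np.ones(len(idx))] +
                                    [[y[t - j] for t in idx] for j in range(1, p + 1)])
                beta, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
                rss += float(((y[idx] - X @ beta) ** 2).sum())
                k += p + 1
            return n * np.log(rss / n) + 2 * k

        def ga_fit(y, pop_size=30, gens=40):
            lo, hi = np.quantile(y, [0.1, 0.9])      # plausible threshold range
            pop = [(rng.uniform(lo, hi), int(rng.integers(1, 5))) for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=lambda g: aic_setar(y, *g))
                elite = pop[: pop_size // 2]          # keep the fitter half
                children = []
                while len(elite) + len(children) < pop_size:
                    r, d = elite[rng.integers(len(elite))]   # mutate a random parent
                    children.append((r + rng.normal(0, 0.1 * (hi - lo)),
                                     int(np.clip(d + rng.integers(-1, 2), 1, 4))))
                pop = elite + children
            return min(pop, key=lambda g: aic_setar(y, *g))  # (threshold, delay)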

    Clustering Based Feature Learning on Variable Stars

    The success of automatic classification of variable stars strongly depends on the lightcurve representation. Usually, lightcurves are represented as a vector of many statistical descriptors designed by astronomers, called features. These descriptors commonly demand significant computational power to calculate, require substantial research effort to develop, and do not guarantee good performance on the final classification task. Today, lightcurve representation is not entirely automatic; algorithms that extract lightcurve features are designed by humans and must be manually tuned for every survey. The vast amounts of data that will be generated in future surveys such as LSST mean astronomers must develop analysis pipelines that are both scalable and automated. Recently, substantial efforts have been made in the machine learning community to develop methods that replace expert-designed, manually tuned features with features that are automatically learned from data. In this work we present what is, to our knowledge, the first unsupervised feature learning algorithm designed for variable stars. Our method first extracts a large number of lightcurve subsequences from a given set of photometric data, which are then clustered to find common local patterns in the time series. Representatives of these patterns, called exemplars, are then used to transform the lightcurves of a labeled set into a new representation that can be used to train an automatic classifier. The proposed algorithm learns the features from both labeled and unlabeled lightcurves, overcoming the bias introduced when the learning process uses labeled data alone. We test our method on the MACHO and OGLE datasets; the results show that the classification performance we achieve is as good as, and in some cases better than, the performance achieved using traditional features, while the computational cost is significantly lower.
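
    A minimal sketch of the subsequence-clustering pipeline, assuming evenly sampled (e.g. interpolated) lightcurves as 1-D numpy arrays; the window length and cluster count below are illustrative choices, not values from the paper:

        import numpy as np
        from sklearn.cluster import KMeans

        def subsequences(lc, w=32, step=8):
            """Extract z-normalized sliding windows from one lightcurve."""
            subs = np.array([lc[i:i + w] for i in range(0, len(lc) - w + 1, step)])
            return (subs - subs.mean(axis=1, keepdims=True)) / \
                   (subs.std(axis=1, keepdims=True) + 1e-8)

        def learn_exemplars(lightcurves, k=50, w=32):
            # cluster the pooled subsequences; centroids act as exemplars
            pool = np.vstack([subsequences(lc, w) for lc in lightcurves])
            return KMeans(n_clusters=k, n_init=10).fit(pool).cluster_centers_

        def encode(lc, exemplars, w=32):
            """New representation: min distance from the lightcurve's
            windows to each exemplar (one feature per exemplar)."""
            subs = subsequences(lc, w)
            d = np.linalg.norm(subs[:, None, :] - exemplars[None, :, :], axis=2)
            return d.min(axis=0)

    The encoded vectors can then be fed to any off-the-shelf classifier, which is what makes the representation survey-agnostic.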

    Coevolutionary Genetic Algorithms for Establishing Nash Equilibrium in Symmetric Cournot Games

    We use co-evolutionary genetic algorithms to model the players' learning process in several Cournot models and evaluate them in terms of their convergence to the Nash Equilibrium. The "social-learning" versions of the two co-evolutionary algorithms we introduce establish Nash Equilibrium in those models, in contrast to the "individual-learning" versions, which, as we show here, do not imply convergence of the players' strategies to the Nash outcome. When players use "canonical co-evolutionary genetic algorithms" as learning algorithms, the process of the game is an ergodic Markov chain, and we therefore analyze the simulation results using the relevant methodology. We find that in the "social" case, states leading to NE play are highly frequent at the stationary distribution of the chain, whereas in the "individual-learning" case NE is not reached at all in our simulations; that the expected Hamming distance of the states at the limiting distribution from the "NE state" is significantly smaller in the "social" than in the "individual-learning" case; we estimate the expected time the "social" algorithms need to reach the "NE state" and verify their robustness; and finally we show that a large fraction of the games played are indeed at the Nash Equilibrium.
    Keywords: Genetic Algorithms, Cournot oligopoly, Evolutionary Game Theory, Nash Equilibrium
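
    A hedged sketch of what "social learning" means here, for a symmetric linear Cournot game: all players draw strategies from one shared population that evolves by fitness-proportional selection and mutation. The parameters below are illustrative; the analytic Nash output per firm is q* = (a - c) / (b * (n + 1)):

        import numpy as np

        rng = np.random.default_rng(1)
        a, b, c, n = 100.0, 1.0, 10.0, 3           # demand P = a - b*Q, unit cost c
        q_nash = (a - c) / (b * (n + 1))

        pop = rng.uniform(0, a / b, size=40)        # shared (social) pool of quantities
        for gen in range(300):
            # each candidate's fitness: its profit when the other n-1 firms
            # play quantities sampled from the same shared pool
            rivals = rng.choice(pop, size=(len(pop), n - 1))
            Q = pop + rivals.sum(axis=1)
            profit = (a - b * Q - c) * pop
            fit = profit - profit.min() + 1e-9      # shift to nonnegative weights
            parents = rng.choice(pop, size=len(pop), p=fit / fit.sum())
            pop = np.clip(parents + rng.normal(0, 0.5, len(pop)), 0, a / b)

        print(f"mean strategy {pop.mean():.2f} vs Nash {q_nash:.2f}")

    An "individual-learning" variant would instead evolve one separate population per player, with fitness evaluated only against the co-evolving opponents; the abstract's point is that this variant need not converge to the Nash outcome.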

    Determination of sequential best replies in n-player games by Genetic Algorithms

    An iterative algorithm for establishing the Nash Equilibrium in pure strategies (NE) is proposed and tested on Cournot game models. The algorithm is based on the convergence of sequential best responses and the use of a genetic algorithm to determine each player's best response to a given strategy profile of its opponents. An extra outer loop is used to address the problem of finite accuracy, which is inherent in genetic algorithms, since the set of feasible values in such an algorithm is finite. The algorithm is tested on five Cournot models: three with a convergent best-replies sequence, one with divergent sequential best replies, and one with "local NE traps" (Son and Baldick 2004), where classical local search algorithms fail to identify the Nash Equilibrium. After a series of simulations, we conclude that the proposed algorithm converges to the Nash Equilibrium, with any level of accuracy needed, in all cases except the one where the sequential best-replies process diverges.
    Keywords: Genetic Algorithms, Cournot oligopoly, Best Response, Nash Equilibrium
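
    A minimal sketch of the outer structure, assuming a linear Cournot payoff (chosen because the analytic best response exists, making the GA's answer easy to check); the mutation scheme and population sizes are illustrative, not the paper's:

        import numpy as np

        rng = np.random.default_rng(2)
        a, b, c, n = 100.0, 1.0, 10.0, 3

        def profit(q_i, q_others_sum):
            return (a - b * (q_i + q_others_sum) - c) * q_i

        def ga_best_response(q_others_sum, pop=30, gens=60):
            # tiny GA: keep the fitter half, refill with mutated copies
            P = rng.uniform(0, a / b, pop)
            for _ in range(gens):
                P = P[np.argsort(-profit(P, q_others_sum))]
                kids = P[:pop // 2] + rng.normal(0, 1.0, pop // 2)
                P = np.clip(np.concatenate([P[:pop // 2], kids]), 0, a / b)
            return P[np.argmax(profit(P, q_others_sum))]

        q = np.full(n, 10.0)                        # arbitrary initial profile
        for sweep in range(20):                     # sequential best-reply loop
            for i in range(n):
                q[i] = ga_best_response(q.sum() - q[i])

        print(q, "vs analytic NE", (a - c) / (b * (n + 1)))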

    Perturbative study of multiphoton processes in the tunneling regime

    A perturbative study of the Schr\"{o}dinger equation in a strong electromagnetic field, in the dipole approximation, is carried out in the Kramers-Henneberger frame. A proof that only odd harmonics appear in the spectrum for a linearly polarized laser field is given, assuming that the atomic radius is much smaller than the free-electron quiver-motion amplitude. Within this approximation a perturbation series in the Keldysh parameter is obtained, giving a description of multiphoton processes in the tunneling regime. The theory is applied to the case of hydrogen-like atoms: the spectrum of higher-order harmonics and the above-threshold ionization rate are derived. The ionization rate computed in this way determines the amplitudes of the harmonics. The wave function of the atom proves to be rigid with respect to the perturbation, so that the effect of the laser field on the Coulomb potential can be neglected as a first approximation in the computation of the probability amplitudes: this approximation improves as the ratio between the amplitude of the electron's quiver motion and the atomic radius becomes larger. The semiclassical description currently adopted for harmonic generation is thus rederived by solving the Schr\"{o}dinger equation perturbatively.
    Comment: Latex, 11 pages. To appear in Phys. Lett.
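
    For context, the standard definition of the Keldysh parameter organizing this perturbation series, where I_p is the ionization potential and E_0, omega the laser field amplitude and frequency (textbook notation, not taken from the paper):

        % U_p = e^2 E_0^2 / (4 m \omega^2) is the ponderomotive energy
        \gamma = \frac{\omega\sqrt{2 m I_p}}{e E_0} = \sqrt{\frac{I_p}{2U_p}},
        \qquad \gamma \ll 1:\ \text{tunneling regime},
        \qquad \gamma \gg 1:\ \text{multiphoton regime}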

    An Algorithm for the Visualization of Relevant Patterns in Astronomical Light Curves

    In recent years, the classification of variable stars with machine learning has become a mainstream area of research. Visualization of time series is also attracting more attention in data science as a tool to help scientists visually recognize significant patterns in complex dynamics. Within the machine learning literature, dictionary-based methods have been widely used to encode relevant parts of image data. These methods intrinsically assign a degree of importance to patches in pictures according to their contribution to the image reconstruction. Inspired by dictionary-based techniques, we present an approach that naturally provides the visualization of salient parts in astronomical light curves, drawing an analogy between image patches and relevant pieces of time series. Our approach encodes the most meaningful patterns such that we can approximately reconstruct light curves using just the encoded information. We test our method on light curves from the OGLE-III and StarLight databases. Our results show that the proposed model delivers an automatic and intuitive visualization of relevant light curve parts, such as local peaks and drops in magnitude.
    Comment: Accepted 2019 January 8. Received 2019 January 8; in original form 2018 January 29. 7 pages, 6 figures.
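
    A minimal sketch of the dictionary-based idea, using scikit-learn's generic sparse dictionary learner as a stand-in for the paper's encoder; the window length, atom count, and salience score below are illustrative assumptions:

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning

        def windows(lc, w=32, step=4):
            return np.array([lc[i:i + w] for i in range(0, len(lc) - w + 1, step)])

        def salient_windows(lightcurves, lc, w=32, n_atoms=30):
            # learn a dictionary of local patterns over pooled subsequences
            train = np.vstack([windows(x, w) for x in lightcurves])
            dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                               transform_algorithm="omp",
                                               transform_n_nonzero_coefs=3).fit(train)
            W = windows(lc, w)
            codes = dico.transform(W)               # sparse code per window
            recon = codes @ dico.components_        # reconstruction from few atoms
            # heuristic salience: strong codes with small residual mark
            # windows that carry the dictionary's "relevant patterns"
            score = np.abs(codes).sum(axis=1) / (np.linalg.norm(W - recon, axis=1) + 1e-8)
            return np.argsort(-score)               # window indices, most salient first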