196 research outputs found

    Artificial Intelligence Approach to the Determination of Physical Properties of Eclipsing Binaries. I. The EBAI Project

    Achieving maximum scientific results from the overwhelming volume of astronomical data to be acquired over the next few decades will demand novel, fully automatic methods of data analysis. Artificial intelligence approaches hold great promise in contributing to this goal. Here we apply neural network learning technology to the specific domain of eclipsing binary (EB) stars, of which only some hundreds have been rigorously analyzed, but whose numbers will reach millions in a decade. Well-analyzed EBs are a prime source of astrophysical information whose growth rate is at present limited by the need for human interaction with each EB data-set, principally in determining a starting solution for subsequent rigorous analysis. We describe an artificial neural network (ANN) approach that is able to surmount this human bottleneck and permit EB-based astrophysical information to keep pace with future data rates. The ANN, following training on a sample of 33,235 model light curves, outputs a set of approximate model parameters (T2/T1, (R1+R2)/a, e sin(omega), e cos(omega), and sin i) for each input light curve data-set. The whole sample is processed in just a few seconds on a single 2 GHz CPU. The obtained parameters can then be readily passed to sophisticated modeling engines. We also describe polyfit, a novel method for pre-processing observational light curves before their data are input to the ANN, and present the results and analysis of testing the approach on synthetic data and on real data, including fifty binaries from the Catalog and Atlas of Eclipsing Binaries (CALEB) database and 2580 light curves from OGLE survey data. [abridged] Comment: 52 pages, accepted to Ap
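
    The mapping the ANN learns can be sketched in a few lines. The example below is a hypothetical illustration (not the EBAI code) using scikit-learn's MLPRegressor; random placeholder arrays stand in for the phased, polyfit-processed light curves and the five target parameters, purely to show the shape of the regression.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Placeholder training set: in the paper the inputs are 33,235 model light
        # curves and the targets are (T2/T1, (R1+R2)/a, e sin(omega), e cos(omega), sin i).
        rng = np.random.default_rng(0)
        n_train, n_points = 2000, 201               # reduced, fake sample sizes
        X_train = rng.random((n_train, n_points))   # phased, normalized fluxes (fake)
        y_train = rng.random((n_train, 5))          # five principal parameters (fake)

        ann = MLPRegressor(hidden_layer_sizes=(40,), max_iter=300)
        ann.fit(X_train, y_train)

        # Observed light curves, pre-processed and resampled to n_points phase points,
        # are mapped to approximate parameters in a single forward pass per curve.
        X_obs = rng.random((100, n_points))
        estimates = ann.predict(X_obs)              # shape (100, 5)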

    Monetary Policy Rules and Directions of Causality: a Test for the Euro Area

    Using a VAR model in first differences with quarterly data for the euro zone, the study aims to ascertain whether decisions on monetary policy can be interpreted in terms of a “monetary policy rule” with specific reference to the so-called nominal GDP targeting rule (Hall and Mankiw, 1994; McCallum, 1988; Woodford, 2012). The results obtained indicate a causal relation proceeding from the deviation between the growth rates of nominal gross domestic product (GDP) and target GDP to variation in the three-month market interest rate. The same analyses do not, however, appear to confirm the existence of a significant inverse causal relation from variation in the market interest rate to the deviation between the nominal and target GDP growth rates. Similar results were obtained on replacing the market interest rate with the European Central Bank refinancing interest rate. This confirmation of only one of the two directions of causality does not support an interpretation of monetary policy based on the nominal GDP targeting rule and raises more general doubts about the applicability of the Taylor rule and all the conventional rules of monetary policy to the case in question. The results appear instead to be more in line with other possible approaches, such as those based on post-Keynesian analyses of monetary theory and policy and, more specifically, the so-called solvency rule (Brancaccio and Fontana, 2013, 2015). These lines of research challenge the simplistic argument that the purpose of monetary policy is the stabilization of inflation, real GDP, or nominal income around a “natural equilibrium” level. Rather, they suggest that central banks actually pursue a more complex objective: the political regulation of the financial system, with particular reference to the relations between creditors and debtors and the related solvency of economic units
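
    The direction-of-causality exercise can be illustrated with a short, hypothetical sketch using statsmodels; this is not the authors' code, and the series below are random placeholders for the euro-area GDP-growth deviation and the differenced three-month rate.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        # Random placeholder quarterly series; the column names are illustrative only.
        rng = np.random.default_rng(1)
        data = pd.DataFrame({
            "gdp_gap": rng.normal(size=80),   # nominal GDP growth minus target growth
            "d_rate": rng.normal(size=80),    # first difference of the 3-month rate
        })

        res = VAR(data).fit(4)                # fixed lag order of four quarters (illustrative)

        # Does the GDP-growth deviation Granger-cause interest-rate changes?
        print(res.test_causality("d_rate", ["gdp_gap"], kind="f").summary())
        # ...and the reverse direction, which the study fails to confirm.
        print(res.test_causality("gdp_gap", ["d_rate"], kind="f").summary())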

    Collisional kinetics of non-uniform electric field, low-pressure, direct-current discharges in H2

    A model of the collisional kinetics of energetic hydrogen atoms, molecules, and ions in pure H2 discharges is used to predict Hα emission profiles and spatial distributions of emission from the cathode regions of low-pressure, weakly-ionized discharges for comparison with a wide variety of experiments. Positive and negative ion energy distributions are also predicted. The model, developed for spatially uniform electric fields and current densities less than 10^-3 A/m^2, is extended to non-uniform electric fields, current densities of 10^3 A/m^2, and electric field to gas density ratios E/N = 1.3 MTd at 0.002 to 5 Torr pressure (1 Td = 10^-21 V m^2 and 1 Torr = 133 Pa). The observed far-wing Doppler broadening and spatial distribution of the Hα emission are consistent with reactions among H^+, H2^+, H3^+, and H^- ions, fast H atoms, and fast H2 molecules, and with reflection, excitation, and attachment to fast H atoms at surfaces. The Hα excitation and H^- formation occur principally by collisions of fast H, fast H2, and H^+ with H2. Simplifications include using a one-dimensional geometry, a multi-beam transport model, and the average cathode-fall electric field. The Hα emission is linear with current density over eight orders of magnitude. The calculated ion energy distributions agree satisfactorily with experiment for H2^+ and H3^+, but are only in qualitative agreement for H^+ and H^-. The experiments successfully modeled range from short-gap, parallel-plane glow discharges to beam-like, electrostatic-confinement discharges. Comment: Submitted to Plasma Sources Science and Technology 8/18/201
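
    The unit conventions quoted above can be made concrete with a short bookkeeping sketch (not from the paper); it uses the stated definitions of the townsend and the torr, plus the ideal-gas law at an assumed 300 K, to translate a reduced field of 1.3 MTd at 1 Torr into a number density and an absolute field.

        # Unit bookkeeping only; the 300 K gas temperature is an assumption.
        k_B  = 1.380649e-23          # Boltzmann constant, J/K
        torr = 133.0                 # Pa per Torr (definition used in the abstract)
        td   = 1e-21                 # V m^2 per townsend (definition used in the abstract)

        p, T = 1.0 * torr, 300.0     # 1 Torr at room temperature
        N = p / (k_B * T)            # gas number density, about 3.2e22 m^-3
        E = 1.3e6 * td * N           # field implied by E/N = 1.3 MTd, about 4e7 V/m
        print(f"N = {N:.2e} m^-3, E = {E:.2e} V/m")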

    Dichromatic state sum models for four-manifolds from pivotal functors

    A family of invariants of smooth, oriented four-dimensional manifolds is defined via handle decompositions and the Kirby calculus of framed link diagrams. The invariants are parametrised by a pivotal functor from a spherical fusion category into a ribbon fusion category. A state sum formula for the invariant is constructed via the chain-mail procedure, so a large class of topological state sum models can be expressed as link invariants. Most prominently, the Crane-Yetter state sum over an arbitrary ribbon fusion category is recovered, including the nonmodular case. It is shown that the Crane-Yetter invariant for nonmodular categories is stronger than signature and Euler invariant. A special case is the four-dimensional untwisted Dijkgraaf-Witten model. Derivations of state space dimensions of TQFTs arising from the state sum model agree with recent calculations of ground state degeneracies in Walker-Wang models. Relations to different approaches to quantum gravity such as Cartan geometry and teleparallel gravity are also discussed

    Women’s Empowerment Mitigates the Negative Effects of Low Production Diversity on Maternal and Child Nutrition in Nepal

    We use household survey data from Nepal to investigate how women’s empowerment in agriculture and production diversity relate to maternal and child dietary diversity and anthropometric outcomes. Production diversity is positively associated with maternal and child dietary diversity and with weight-for-height z-scores. Women’s group membership, control over income, reduced workload, and overall empowerment are positively associated with better maternal nutrition. Control over income is positively associated with height-for-age z-scores (HAZ), and a lower gender parity gap improves children’s diets and HAZ. Women’s empowerment mitigates the negative effect of low production diversity on maternal and child dietary diversity and HAZ
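
    A mitigation effect of this kind is typically captured with an interaction term. The sketch below is a hypothetical illustration with statsmodels and simulated data, not the authors' specification; the variable names stand in for the survey measures.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated placeholder household data; columns are illustrative stand-ins.
        rng = np.random.default_rng(2)
        df = pd.DataFrame({
            "haz": rng.normal(size=500),                # child height-for-age z-score
            "prod_div": rng.integers(1, 10, size=500),  # production diversity score
            "empowerment": rng.uniform(0, 1, size=500), # women's empowerment index
        })

        # The coefficient on prod_div:empowerment indicates whether empowerment
        # dampens the penalty associated with low production diversity.
        m = smf.ols("haz ~ prod_div * empowerment", data=df).fit()
        print(m.summary().tables[1])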

    Does working memory training have to be adaptive?

    This study tested the common assumption that, to be most effective, working memory (WM) training should be adaptive (i.e., task difficulty is adjusted to individual performance). Indirect evidence for this assumption stems from studies comparing adaptive training to a condition in which tasks are practiced only at the easiest level of difficulty [cf. Klingberg (Trends Cogn Sci 14:317-324, 2010)], which, however, confounds adaptivity with exposure to varying task difficulty. For a more direct test of this hypothesis, we randomly assigned 130 young adults to one of three WM training procedures (adaptive, randomized, or self-selected change in training task difficulty) or to an active control group. Despite large performance increases on the trained WM tasks, we observed neither transfer to untrained, structurally dissimilar WM tasks nor far transfer to reasoning. Surprisingly, neither training nor transfer effects were modulated by training procedure, indicating that exposure to varying levels of task difficulty is sufficient for inducing training gains
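
    The three procedures differ only in how task difficulty is scheduled from trial to trial; the toy function below (hypothetical, not the study's training software) makes that contrast concrete.

        import random

        def next_level(level, correct, procedure, max_level=20):
            """Return the difficulty level for the next trial (illustrative only)."""
            if procedure == "adaptive":
                # staircase rule: step up after a correct trial, down after an error
                return min(level + 1, max_level) if correct else max(level - 1, 1)
            if procedure == "randomized":
                return random.randint(1, max_level)
            if procedure == "self_selected":
                return level    # placeholder: the participant chooses the next level
            raise ValueError(f"unknown procedure: {procedure}")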

    The Society for Immunotherapy of Cancer statement on best practices for multiplex immunohistochemistry (IHC) and immunofluorescence (IF) staining and validation.

    OBJECTIVES: The interaction between the immune system and tumor cells is an important feature for the prognosis and treatment of cancer. Multiplex immunohistochemistry (mIHC) and multiplex immunofluorescence (mIF) analyses are emerging technologies that can be used to help quantify immune cell subsets, their functional state, and their spatial arrangement within the tumor microenvironment. METHODS: The Society for Immunotherapy of Cancer (SITC) convened a task force of pathologists and laboratory leaders from academic centers as well as experts from pharmaceutical and diagnostic companies to develop best practice guidelines for the optimization and validation of mIHC/mIF assays across platforms. RESULTS: Representative outputs and the advantages and disadvantages of mIHC/mIF approaches, such as multiplexed chromogenic IHC, multiplexed immunohistochemical consecutive staining on single slide, mIF (including multispectral approaches), tissue-based mass spectrometry, and digital spatial profiling are discussed. CONCLUSIONS: mIHC/mIF technologies are becoming standard tools for biomarker studies and are likely to enter routine clinical practice in the near future. Careful assay optimization and validation will help ensure outputs are robust and comparable across laboratories as well as potentially across mIHC/mIF platforms. Quantitative image analysis of mIHC/mIF output and data management considerations will be addressed in a complementary manuscript from this task force

    The long-run behaviour of the terms of trade between primary commodities and manufactures: a panel data approach

    This paper examines the Prebisch and Singer hypothesis using a panel of twenty-four commodity prices from 1900 to 2010. The modelling approach stems from the need to meet two key concerns: (i) the presence of cross-sectional dependence among commodity prices; and (ii) the identification of potential structural breaks. To address these concerns, the Hadri and Rao (Oxf Bull Econ Stat 70:245–269, 2008) test is employed. The findings suggest that all commodity prices exhibit a structural break whose location differs across series, and that support for the Prebisch and Singer hypothesis is mixed. Once the breaks are removed from the underlying series, the persistence of commodity price shocks is shorter than that obtained in other studies using alternative methodologies.
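
    As a single-series illustration of what locating a break involves (this is not the Hadri and Rao panel test used in the paper), one can grid-search for the break date that minimizes the residual sum of squares of a linear-trend regression with a level shift.

        import numpy as np
        import statsmodels.api as sm

        # Fake annual (log) commodity price index, 1900-2010, with a built-in break.
        rng = np.random.default_rng(3)
        T = 111
        t = np.arange(T)
        y = 0.01 * t - 0.5 * (t > 60) + rng.normal(scale=0.2, size=T)

        def ssr_with_break(tb):
            # constant + linear trend + level-shift dummy switching on after tb
            X = sm.add_constant(np.column_stack([t, (t > tb).astype(float)]))
            return sm.OLS(y, X).fit().ssr

        candidates = range(10, T - 10)            # trim the sample ends
        tb_hat = min(candidates, key=ssr_with_break)
        print("estimated break year:", 1900 + tb_hat)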