
    The wild bootstrap for multilevel models

    In this paper we study the performance of the most popular bootstrap schemes for multilevel data. We also propose a modified version of the wild bootstrap procedure for hierarchical data structures. The wild bootstrap does not require homoscedasticity or assumptions on the distribution of the error processes; hence, it is a valuable tool for robust inference in a multilevel framework. We assess the finite-sample performance of the schemes through a Monte Carlo study. The results show that for large sample sizes it always pays off to adopt an agnostic approach, as the wild bootstrap outperforms the other techniques.
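    As a point of reference, the sketch below shows the standard wild bootstrap for an ordinary linear model, using Rademacher multipliers on the residuals so that no homoscedasticity assumption is needed. It is only a generic illustration; the authors' modified scheme for hierarchical data is not reproduced here, and the function name and defaults are assumptions.

```python
import numpy as np

def wild_bootstrap_ols(y, X, n_boot=999, rng=None):
    """Standard wild bootstrap for OLS coefficients (Rademacher weights).

    Generic illustration only; the paper's modified scheme for
    hierarchical (multilevel) data is not reproduced here.
    """
    rng = np.random.default_rng() if rng is None else rng
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    draws = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        v = rng.choice([-1.0, 1.0], size=len(y))  # Rademacher multipliers
        y_star = X @ beta_hat + v * resid         # resample without assuming constant error variance
        draws[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return beta_hat, draws                        # point estimate and bootstrap distribution
```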

    When orthography is not enough: the effect of lexical stress in lexical decision.

    Three lexical decision experiments were carried out in Italian in order to verify whether stress dominance (the most frequent stress type) and consistency (the proportion and number of existing words sharing orthographic ending and stress pattern) had an effect on polysyllabic word recognition. Two factors were manipulated: whether the target word carried stress on the penultimate (dominant; graNIta, seNIle 'slush, senile') or on the antepenultimate (non-dominant) syllable (MISsile, BIbita 'missile, drink'), and whether the stress neighborhood was consistent (graNIta, MISsile) or inconsistent (seNIle, BIbita) with the word's stress pattern. In Experiment 1 words were mixed with nonwords sharing the word endings, which made words and nonwords more similar to each other. In Experiment 2 words and nonwords were presented in lists blocked for stress pattern. In Experiment 3 we used a new set of nonwords, which included endings with (stress-)ambiguous neighborhoods and/or a low number of neighbors, and which were overall less similar to words. In all three experiments there was an advantage for words with penultimate (dominant) stress, and no main effect of stress neighborhood. However, the dominant stress advantage decreased in Experiments 2 and 3. Finally, in Experiment 4 the same materials used in Experiment 1 were also used in a reading aloud task, showing a significant consistency effect but no dominant stress advantage. The influence of stress information in Italian word recognition is discussed.

    Multilevel Models with Stochastic Volatility for Repeated Cross-Sections: an Application to Tribal Art Prices

    In this paper we introduce a multilevel specification with stochastic volatility for repeated cross-sectional data. Modelling the time dynamics in repeated cross-sections requires a suitable adaptation of the multilevel framework, where the individuals/items are modelled at the first level and the time component appears at the second level. We perform maximum likelihood estimation by means of a nonlinear state space approach combined with Gauss-Legendre quadrature methods to approximate the likelihood function. We apply the model to the first database of tribal art items sold in the most important auction houses worldwide. The model properly accounts for the heteroscedastic and autocorrelated volatility observed and has superior forecasting performance. It also provides valuable information on market trends and on the predictability of prices that can be used by art market stakeholders.
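    The core numerical device named in the abstract is Gauss-Legendre quadrature of an integral over a latent variable. The sketch below shows only that generic building block for a one-dimensional Gaussian marginal; the paper's nonlinear state space model and likelihood are not reproduced, and the function name and parameters are assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.stats import norm

def gl_marginal_likelihood(y, sigma_obs, prior_mean, prior_sd, n_nodes=30, half_width=6.0):
    """Approximate p(y) = integral of N(y | z, sigma_obs) * N(z | prior_mean, prior_sd) dz
    by Gauss-Legendre quadrature on a truncated interval around the prior mean.

    Illustration of the quadrature step only; the paper's nonlinear state
    space model is not reproduced here.
    """
    nodes, weights = leggauss(n_nodes)               # nodes and weights on [-1, 1]
    a = prior_mean - half_width * prior_sd           # truncation bounds for the latent z
    b = prior_mean + half_width * prior_sd
    z = 0.5 * (b - a) * nodes + 0.5 * (b + a)        # map the quadrature nodes to [a, b]
    integrand = norm.pdf(y, loc=z, scale=sigma_obs) * norm.pdf(z, loc=prior_mean, scale=prior_sd)
    return 0.5 * (b - a) * np.sum(weights * integrand)
```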

    What Regulates Galaxy Evolution? Open Questions in Our Understanding of Galaxy Formation and Evolution

    In April 2013, a workshop entitled "What Regulates Galaxy Evolution?" was held at the Lorentz Center. The aim of the workshop was to bring together the observational and theoretical communities working on galaxy evolution, to discuss in depth the current problems in the subject, and to review the most recent observational constraints. A total of 42 astrophysicists attended the workshop. A significant fraction of the time was devoted to identifying the most interesting "open questions" in the field and to discussing how progress can be made. This review discusses the four questions (one for each day of the workshop) that, in our opinion, were the focus of the most intense debate. We present each question in its context and close with a discussion of what future directions should be pursued in order to make progress on these problems. Comment: 36 pages, 6 figures, submitted to New Astronomy Reviews.

    On the scatter in the relation between stellar mass and halo mass: random or halo formation time dependent?

    The empirical HOD model of Wang et al. 2006 fits, by construction, both the stellar mass function and the correlation function of galaxies in the local Universe. In contrast, the semi-analytical models of De Lucia & Blaizot 2007 (DLB07) and Guo et al. 2011 (Guo11), built on the same dark matter halo merger trees as the empirical model, still have difficulties in reproducing these observational data simultaneously. We compare the relations between the stellar mass of galaxies and their host halo mass in the three models, and find that they are different. When the relations are rescaled to have the same median values and the same scatter as in Wang et al., the rescaled DLB07 model can fit both the measured galaxy stellar mass function and the correlation functions measured in different galaxy stellar mass bins. In contrast, the rescaled Guo11 model still over-predicts the clustering of low-mass galaxies. This indicates that the details of how galaxies populate the scatter in the stellar mass -- halo mass relation do play an important role in determining the correlation functions of galaxies. While the stellar mass of galaxies in the Wang et al. model depends only on halo mass and is randomly distributed within the scatter, galaxy stellar mass also depends on the halo formation time in the semi-analytical models. At fixed infall mass, galaxies that lie above the median stellar mass -- halo mass relation reside in haloes that formed earlier, while galaxies that lie below the median relation reside in haloes that formed later. This effect is much stronger in Guo11 than in DLB07, which explains the over-clustering of low-mass galaxies in Guo11. Our results illustrate that the assumption of random scatter in the relation between stellar and halo mass, as employed by current HOD and abundance matching models, may be problematic if significant assembly bias exists in the real Universe. Comment: 10 pages, 6 figures, published in MNRAS.
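    The contrast drawn in the abstract, random scatter versus formation-time-dependent scatter at fixed halo mass, can be summarised in a toy assignment like the one below. The median relation, the numbers, and the rank-ordering trick are illustrative assumptions, not the Wang et al., DLB07, or Guo11 models.

```python
import numpy as np
from scipy.stats import norm as gaussian

def assign_stellar_mass(log_mhalo, z_form, scatter_dex=0.2, coupled=False, rng=None):
    """Toy assignment of log10 stellar mass given log10 halo mass.

    coupled=False: scatter around the median relation is purely random
                   (the HOD/abundance-matching style assumption).
    coupled=True : scatter is rank-ordered by formation redshift, so
                   earlier-forming haloes host more massive galaxies
                   (a crude stand-in for assembly bias).
    The median relation and numbers are illustrative, not the paper's models.
    """
    rng = np.random.default_rng() if rng is None else rng
    median = 10.5 + 1.5 * (log_mhalo - 12.0)                      # toy median M*(Mh) relation
    if not coupled:
        offset = rng.normal(0.0, scatter_dex, size=log_mhalo.shape)
    else:
        ranks = (z_form.argsort().argsort() + 0.5) / len(z_form)  # ranks in (0, 1)
        offset = scatter_dex * gaussian.ppf(ranks)                # same marginal scatter, ordered by z_form
    return median + offset
```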

    A first thermodynamic interpretation of the technology transfer activities

    In recent years, new interdisciplinary approaches to economics and social science have been developed. A thermodynamic approach to socio-economics has led to a new interdisciplinary scientific field called econophysics. Why thermodynamics? Thermodynamics is a statistical theory of large atomic systems under energy constraints [1], and the economy can be considered a large system governed by complex rules. The present work proposes a new application, starting from econophysics and passing through the laws of thermodynamics to interpret and describe Technology Transfer (TT) activities. Using the definition of economy (i.e., economy [dictionary def.] = the process or system by which goods and services are produced, sold, and bought in a country or region), TT can be considered an important sub-domain of the economy and a transversal new area of scientific research. TT is the process of transferring knowledge, which uses the results of research to produce innovation and to ensure that scientific and technological developments become accessible to a wider range of users. Starting from important universities (MIT, Stanford, Oxford, etc.), TT is nowadays assuming a central role: it is called the third mission, together with education and research. Providing new theories and tools to describe TT activities and their behaviour is considered fundamental to support the rapid social evolution that is involving TT offices. The present work applies thermodynamic theory to Technology Transfer, starting from the concepts of entropy, exergy, and anergy. The resulting analysis should help decision-making aimed at improving TT activities and at a better employment of resources.
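    For reference, the standard thermodynamic relations behind the three concepts named above can be written as follows; how the paper maps these quantities onto technology transfer activities is its own contribution and is not reproduced here.

```latex
\begin{align}
  E &= Ex + An, \\                                   % energy = exergy (usable part) + anergy (unusable part)
  Ex_{\mathrm{destroyed}} &= T_0 \, S_{\mathrm{gen}} % Gouy--Stodola: exergy destruction equals ambient temperature times entropy generated
\end{align}
```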

    Hierarchization of the Italian regions on the strength of their agricultural mechanization through clustering analysis

    The aim of this paper was to study the organization of Italian agricultural enterprises through a cluster analysis. Starting from statistical data, the Italian Regions were classified into homogeneous groups according to the size of the farms, their agricultural mechanization level, and the manpower employed. The suitability of this arrangement was supported by the variability among the groups, which was greater than that within the groups. Generally, each group is formed by both adjacent and non-adjacent Regions, including Regions that are geographically distant from one another. A concise but clear picture of the different structures of Italian farms was thus pointed out.
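    As a rough illustration of the workflow described above, the sketch below standardises a regional indicator matrix (farm size, mechanization level, manpower) and groups the regions with k-means. The choice of k-means, the number of clusters, and the placeholder data are assumptions; the paper only reports that a cluster analysis of official statistics was performed.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Toy sketch of the general workflow: standardise regional indicators and
# group regions into homogeneous clusters. Placeholder data stands in for
# the official statistics used in the paper.
rng = np.random.default_rng(0)
n_regions = 20                                     # Italy has 20 regions
X = rng.random((n_regions, 3))                     # placeholder matrix: regions x (farm size, mechanization, manpower)

X_std = StandardScaler().fit_transform(X)          # put the indicators on a common scale
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_std)
print(km.labels_)                                  # homogeneous group assigned to each region
```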