2 research outputs found

    Determinanty ceny złota w okresie długim (Determinants of the Gold Price in the Long Term)

    No full text
    The aim of the paper is to characterize and assess the impact of the most important long-term drivers of gold prices: world population, investment demand, the volume of mine production, and the commodity cycle. Together, these factors shape both supply and demand in the gold market and, consequently, its price. Understanding the strength and direction of their impact on the gold price is of significant importance for strategic investment. After presenting the specificity of gold as both a metal and a financial asset, the paper analyzes the effect of each of these factors separately. Depending on the character of a given factor and the available empirical data, the assessment of its influence on price changes uses basic descriptive statistics, graphical representation, and descriptive analysis. Different aspects of the factors' impact on the price are evaluated. The analysis indicates that an increase in gold prices is highly probable in the long run.
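    A minimal sketch, assuming pandas, of the kind of descriptive analysis the abstract describes; the series names and values below are illustrative placeholders, not the paper's data:

        # Illustrative sketch only: placeholder values, not the paper's data.
        import pandas as pd

        # Hypothetical annual observations of the long-term factors examined.
        data = pd.DataFrame({
            "gold_price_usd_oz":   [1200, 1350, 1500, 1770, 1800],
            "world_population_bn": [7.3, 7.5, 7.7, 7.8, 7.9],
            "investment_demand_t": [900, 1100, 1270, 1770, 1000],
            "mine_production_t":   [3200, 3300, 3450, 3400, 3560],
        })

        # Basic descriptive statistics, as in a factor-by-factor analysis.
        print(data.describe())

        # Pairwise correlations suggest the direction of each factor's association
        # with the gold price (association only, not causal impact).
        print(data.corr()["gold_price_usd_oz"])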

    Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability

    No full text
    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
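    A small worked example of the effect-size comparison the abstract reports, using only the two medians it gives; note that the abstract's 78% figure averages per-study reductions (not given here), which need not equal the ratio of the medians:

        # Worked example: how much smaller the replication effects are than the originals.
        median_original = 0.37    # median original effect size (reported in the abstract)
        median_cumulative = 0.07  # median effect size across cumulative evidence

        # Ratio of the two medians: cumulative estimates are about 81% smaller.
        print(f"{1 - median_cumulative / median_original:.0%} smaller")
        # The abstract's "78% smaller, on average" is an average of per-study
        # reductions, so it differs slightly from this median-based figure.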
