5 research outputs found

    Tracking Errors of Exchange Traded Funds in Bursa Malaysia

    This study measures the tracking errors of exchange traded funds (ETFs) listed on Bursa Malaysia. Five measures of tracking error are estimated for the seven ETFs involved. Overall, the best ETF is METFAPA, with the least tracking error. The ranking of the remaining ETFs, in ascending order of tracking error, is MYETFID, METFSID, MYETFDJ, CIMC50, FBMKLCI-EA and CIMBA40 (highest tracking error). The findings of this study are expected to provide guidance for passive institutional and retail investors in their selection of ETFs to mimic the portfolio of the desired underlying assets. Moreover, it is anticipated that these findings will motivate improvements in the tracking ability of the existing ETFs and solicit more follow-up studies to encourage the development of new ETFs and increase the participation of investors.
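
    The abstract does not enumerate the five tracking-error measures it uses. As a minimal sketch, the Python snippet below illustrates three estimators commonly found in the ETF literature (mean absolute return difference, standard deviation of return differences, and the standard error of a regression of ETF returns on benchmark returns); the return series and parameter values are hypothetical and are not taken from the study.

        import numpy as np

        def tracking_error_measures(etf_returns, index_returns):
            """Illustrative tracking-error estimators (not necessarily the five used in the study)."""
            etf = np.asarray(etf_returns, dtype=float)
            idx = np.asarray(index_returns, dtype=float)
            diff = etf - idx                            # period-by-period return differences

            te_mean_abs = np.mean(np.abs(diff))         # mean absolute return difference
            te_std = np.std(diff, ddof=1)               # standard deviation of return differences

            # Standard error of residuals from regressing ETF returns on benchmark returns
            beta, alpha = np.polyfit(idx, etf, 1)
            residuals = etf - (alpha + beta * idx)
            te_regression = np.sqrt(np.sum(residuals**2) / (len(etf) - 2))

            return {"mean_abs_diff": te_mean_abs,
                    "std_of_diff": te_std,
                    "regression_se": te_regression}

        # Hypothetical example: simulated daily returns for one ETF and its benchmark
        rng = np.random.default_rng(0)
        index = rng.normal(0.0004, 0.01, 250)
        etf = index + rng.normal(0.0, 0.001, 250)       # ETF tracks the index with small noise
        print(tracking_error_measures(etf, index))

    Applying estimators of this kind to each of the seven ETFs and sorting the resulting statistics would reproduce the sort of ranking reported above.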

    Segmentation Labels for Emergency Response Imagery from Hurricane Barry, Delta, Dorian, Florence, Isaias, Laura, Michael, Sally, Zeta, and Tropical Storm Gordon

    No full text
    The zip file here contains 1,179 pairs of human-generated segmentation labels and images from Emergency Response Imagery collected by the US National Oceanic and Atmospheric Administration (NOAA) after Hurricanes Barry, Delta, Dorian, Florence, Ida, Laura, Michael, Sally, Zeta, and Tropical Storm Gordon. A total of 1,054 unique images were labeled: 946 images were annotated by a single labeler, 95 images by two labelers, 11 images by three labelers, and 2 images by five labelers. All authors contributed to labeling, and all labeling was done with an open-source labeling tool (Buscombe et al., 2022). All pixels in each image are labeled with one of four classes: 0 (water), 1 (bare sand), 2 (vegetation, both sparse and dense), or 4 (the built environment: buildings, roads, parking lots, boats, etc.). The csv file provided here lists each image file name (which includes the anonymized labeler ID), the name of the image without the labeler ID, the name of the corresponding NOAA jpg, the NOAA flight name, the storm name, the latitude and longitude of the image, and a column stating whether the image has been labeled multiple times. Images labeled here correspond to multiple NOAA flights, all listed in the csv file for each jpeg image. These jpeg images can be downloaded directly from NOAA (https://storms.ngs.noaa.gov/) or using Moretz et al. (2020a, 2020b). The images included in this data release correspond to original NOAA images that have been resized and then split into quadrants (using ImageMagick). The naming convention corresponds to the image quadrant: *-0.jpg is the upper left, *-1.jpg is the upper right, *-2.jpg is the lower left, and *-3.jpg is the lower right.
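
    The data release states that the original NOAA images were resized and split into quadrants with ImageMagick. The sketch below reproduces the described quadrant naming (*-0.jpg upper left through *-3.jpg lower right) using Pillow rather than ImageMagick; the input file name and resize dimensions are assumptions for illustration, as the actual resize target is not stated here.

        from pathlib import Path
        from PIL import Image

        def split_into_quadrants(jpg_path, out_dir, resize_to=(1024, 768)):
            """Resize an image and save its four quadrants using the release's naming scheme:
            *-0.jpg upper left, *-1.jpg upper right, *-2.jpg lower left, *-3.jpg lower right.
            (The resize dimensions used for the actual release are an assumption here.)"""
            img = Image.open(jpg_path).resize(resize_to)
            w, h = img.size
            boxes = [
                (0, 0, w // 2, h // 2),        # 0: upper left
                (w // 2, 0, w, h // 2),        # 1: upper right
                (0, h // 2, w // 2, h),        # 2: lower left
                (w // 2, h // 2, w, h),        # 3: lower right
            ]
            stem = Path(jpg_path).stem
            out_dir = Path(out_dir)
            out_dir.mkdir(parents=True, exist_ok=True)
            for i, box in enumerate(boxes):
                img.crop(box).save(out_dir / f"{stem}-{i}.jpg")

        # Hypothetical usage with a NOAA jpeg downloaded from https://storms.ngs.noaa.gov/
        split_into_quadrants("noaa_flight_image.jpg", "quadrants")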

    Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability

    No full text
    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
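
    The abstract summarizes its results as medians of per-study effect sizes (r) and as an average percentage reduction relative to the original studies. The sketch below shows how summary statistics of this kind can be computed from a set of per-study correlations; the numbers are placeholders for illustration only and are not the Many Labs 5 data.

        import statistics

        # Hypothetical per-study effect sizes (correlations); NOT the Many Labs 5 data.
        original    = [0.19, 0.25, 0.28, 0.31, 0.33, 0.37, 0.40, 0.42, 0.45, 0.50]
        replication = [0.00, 0.03, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.12, 0.15]

        median_original = statistics.median(original)
        median_replication = statistics.median(replication)

        # Average per-study shrinkage relative to the original effect, as a percentage
        shrinkage = [100 * (o - r) / o for o, r in zip(original, replication)]

        print(f"median original r = {median_original:.2f}, "
              f"median replication r = {median_replication:.2f}")
        print(f"average reduction = {statistics.mean(shrinkage):.0f}%")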
