    Performance analysis of ISC journals using Scopus and ISC indicators

    Journal performance metrics are designed to help users identify the best journals in different scientific fields. In this article, journals indexed in the ISC database are compared using Scopus and ISC performance indicators. The Scopus journal analyzer uses SNIP and SJR as alternatives to the impact factor (IF); these newer indicators treat citation analysis differently and account for differences in citation behavior across research fields. ISC performance indicators are a new feature of the ISC database, and the results show their efficiency in evaluating Persian scientific journals.
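The indicators compared above are all citation-per-item ratios at heart. As a minimal sketch (the function name and figures are illustrative, not taken from the article), the classic two-year impact factor that SNIP and SJR refine can be computed as:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 480 citations in 2023 to its 2021-2022 papers,
# which numbered 200 citable items.
print(impact_factor(480, 200))  # 2.4
```

SNIP and SJR modify this ratio by, respectively, normalizing for field citation potential and weighting citations by the prestige of the citing journal, which is why they compare more fairly across fields than the raw IF.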

    Examining Facebook practice : the case of New Zealand provincial rugby : a thesis presented in partial fulfilment of the requirements for the degree of Masters in Sport and Exercise at Massey University, Palmerston North, New Zealand

    Social media have become a defining feature of 21st-century communications. Conceived in 2004, Facebook has risen from relative obscurity to become the most visited website in the world. While social media use has grown exponentially, so too has its influence. Sport organisations were quick to capitalise on Facebook’s popularity, particularly after the introduction of brand pages in 2010. The trend is no different in New Zealand Rugby’s (NZR) National Provincial Championship (NPC). However, recent research indicates a lack of understanding and consistency in evaluating effectiveness within the context of Facebook, and scholars have acknowledged a need to move beyond simple metrics as measures of performance. Using a mixed-method approach, this case study of four NPC rugby teams investigated the understanding of effective Facebook practice. Thematic analysis of qualitative questionnaires completed by each page’s main administrator explored their understanding of effective Facebook practice. The researcher also used an auto-ethnographic journal to document his own experience of managing one of the participating brand pages. Page performance was investigated through analysis of Facebook Insights data to establish how it may be more accurately interpreted to inform best practice. Results reveal that administrators perceive lack of control, maintaining credibility, guaranteeing reach, and resource allocation to be the most prominent challenges faced by these brand pages. Such issues create further tensions when attempting to justify social media use and effectiveness within sport organisations. Furthermore, teams face commercial obligations to post sponsor content that may negatively impact user engagement. In addition, findings suggest that, contrary to popular belief, greater total network size does not guarantee greater reach and engagement. It is proposed that teams consider proportional measures when seeking to evaluate Facebook performance. Holistically, the research sets a platform that future studies can use to tangibly connect Facebook effectiveness to organisational strategy and objectives.
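A proportional measure of the kind the thesis proposes can be sketched as follows (the function and the Insights figures are illustrative assumptions, not the study's actual data): engagement is expressed as a share of the users reached, so a small-network post can outperform a large-network one.

```python
def engagement_rate(engaged_users, reach):
    """Proportional performance measure: the share of users reached
    who engaged, rather than raw engagement counts or total page likes."""
    return engaged_users / reach if reach else 0.0

# Hypothetical Facebook Insights figures for two posts: the post with
# the smaller reach is proportionally the more effective one.
print(engagement_rate(120, 1000))   # 0.12
print(engagement_rate(300, 10000))  # 0.03
```

This is consistent with the finding that total network size does not guarantee engagement: dividing by reach removes the advantage a larger page gets from raw counts alone.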

    Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics

    Dozens of new models for fixation prediction are published every year and compared on open benchmarks such as MIT300 and LSUN. However, progress in the field can be difficult to judge because models are compared using a variety of inconsistent metrics. Here we show that no single saliency map can perform well under all metrics. Instead, we propose a principled approach to the benchmarking problem by separating the notions of saliency models, maps, and metrics. Inspired by Bayesian decision theory, we define a saliency model to be a probabilistic model of fixation density prediction and a saliency map to be a metric-specific prediction derived from the model density that maximizes the expected performance on that metric given the model density. We derive these optimal saliency maps for the most commonly used saliency metrics (AUC, sAUC, NSS, CC, SIM, KL-Div) and show that they can be computed analytically or approximated with high precision. We show that this leads to consistent rankings across all metrics and avoids the penalties of using one saliency map for all metrics. Our method allows researchers to have their model compete on many different metrics against the state of the art in those metrics: "good" models will perform well in all metrics. Comment: published at ECCV 201
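To make the metric-specific evaluation concrete, here is a minimal sketch of one of the metrics listed above, NSS (Normalized Scanpath Saliency). The standard definition z-scores the saliency map and averages the normalized values at human fixation locations; the function and toy data below are illustrative, not the paper's code.

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: z-score the map to zero mean and
    unit standard deviation, then average the normalized values at the
    human fixation locations."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    rows, cols = zip(*fixations)
    return s[rows, cols].mean()

# Toy example: a map sharply peaked where the fixation lands scores high.
density = np.zeros((5, 5))
density[2, 2] = 1.0
print(nss(density, [(2, 2)]))  # ~4.9
```

Because NSS rewards high z-scored values at fixations while, say, SIM compares normalized distributions, the same underlying density yields different optimal maps per metric, which is exactly the distinction between model and map that the paper formalizes.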