
    An objective and subjective quality assessment for passive gaming video streaming

    Gaming video streaming has become increasingly popular in recent times. Along with the rise of cloud gaming services and e-sports, passive gaming video streaming services such as Twitch.tv and YouTube Gaming, where viewers watch the gameplay of other gamers, have seen increasing acceptance. Twitch.tv alone has over 2.2 million monthly streamers and 15 million daily active users, with almost a million concurrent users on average, making it the 4th biggest internet traffic generator, just after Netflix, YouTube and Apple. Despite the increasing importance and popularity of such live gaming video streaming services, they have until recently escaped the attention of the quality assessment research community. For the continued success of such services, it is imperative to maintain and satisfy the end-user Quality of Experience (QoE), which can be measured using various Video Quality Assessment (VQA) methods. Gaming videos are synthetic and artificial in nature and have different streaming requirements compared to traditional non-gaming content. While many subjective and objective studies exist in the field of quality assessment of Video-on-Demand (VOD) streaming services such as Netflix and YouTube, along with the design of many VQA metrics, no prior work had addressed the quality assessment of live passive gaming video streaming applications. The research work in this thesis addresses this gap through various subjective and objective quality assessment studies. A codec comparison using the three most popular and widely used compression standards is performed to determine their compression efficiency. Furthermore, a subjective and objective comparative study is carried out to determine the difference between gaming and non-gaming videos in terms of the trade-off between quality and data rate after compression.
This is followed by the creation of an open-source gaming video dataset, which is then used for a performance evaluation study of the eight most popular VQA metrics. Different temporal pooling strategies and content-based classification approaches are evaluated to assess their effect on the VQA metrics. Finally, due to the low performance of existing No-Reference (NR) VQA metrics on gaming video content, two machine-learning-based NR models are designed using NR features and existing NR metrics; these are shown to outperform existing NR metrics while performing on par with state-of-the-art Full-Reference (FR) VQA metrics.
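Temporal pooling collapses per-frame quality scores into a single per-video score, and the choice of pooling strategy can change how well a VQA metric tracks perceived quality. The minimal Python sketch below illustrates three generic pooling strategies of the kind such an evaluation compares; the strategies and the example scores are illustrative assumptions, not the specific methods evaluated in the thesis.

```python
import numpy as np

def pool_scores(per_frame_scores, strategy="mean"):
    """Collapse per-frame VQA scores into one per-video score.

    Illustrative strategies:
    - "mean":       simple average (the usual default)
    - "harmonic":   harmonic mean, which penalises low-quality frames more
    - "percentile": mean of the worst 10% of frames, reflecting the idea
      that quality dips dominate the viewer's overall impression
    """
    s = np.asarray(per_frame_scores, dtype=float)
    if strategy == "mean":
        return float(s.mean())
    if strategy == "harmonic":
        return float(len(s) / np.sum(1.0 / s))
    if strategy == "percentile":
        k = max(1, int(0.1 * len(s)))      # worst 10% (at least one frame)
        return float(np.sort(s)[:k].mean())
    raise ValueError(f"unknown strategy: {strategy}")

frames = [80, 82, 35, 79, 81]  # one severe quality dip mid-sequence
print(pool_scores(frames, "mean"))        # 71.4
print(pool_scores(frames, "harmonic"))    # lower: the dip is penalised
print(pool_scores(frames, "percentile"))  # 35.0: only the worst frame counts
```

Note how the same per-frame scores yield very different per-video scores: mean pooling largely hides the quality dip, while worst-percentile pooling is dominated by it.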

    New media practices in India: bridging past and future, markets and development

    This article reviews the academic and popular literature on new media practices in India, focusing on young Indians' use of mobile phones and the Internet, as well as new media prosumption. One particular feature of the Indian case is the confluence of the commercial exploitation of new media technologies and their application for development purposes in initiatives that aim to bring these technologies to marginalized segments of the Indian population. Technology usage is in turn shaped by the socioeconomic location of the user, especially with regard to gender and caste. The potential of new media technologies to subvert such social stratifications and their associated norms has inspired much public debate, often carried out on the Internet itself, giving rise to an online public sphere. In all of the writings reviewed here, the tension surrounding new media technologies as a meeting place of the old and the new in India is paramount.

    No-reference video quality estimation based on machine learning for passive gaming video streaming applications

    Recent years have seen increasing growth and popularity of gaming services, both interactive and passive. While interactive gaming video streaming applications have received much attention, passive gaming video streaming, despite its huge success and growth in recent years, has seen much less interest from the research community. For the continued growth of such services, it is imperative that the end-user gaming Quality of Experience (QoE) be estimated so that it can be controlled and maximized to ensure user acceptance. Previous quality assessment studies have shown unsatisfactory performance of existing No-Reference (NR) Video Quality Assessment (VQA) metrics on gaming content. Given the inherent nature and different requirements of gaming video streaming applications, and the fact that gaming videos are perceived differently from non-gaming content (they are usually computer generated and contain artificial, synthetic imagery), there is a need for application-specific, light-weight, no-reference gaming video quality prediction models. In this paper, we present two NR machine-learning-based quality estimation models for gaming video streaming, NR-GVSQI and NR-GVSQE, using NR features such as bitrate, resolution and blockiness. We evaluate their performance on different gaming video datasets and show that the proposed models outperform the current state-of-the-art no-reference metrics, while also reaching a prediction accuracy comparable to the best known full-reference metric.
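To illustrate the general shape of an NR model built from stream-level features, the sketch below fits a simple linear mapping from feature values to a Mean Opinion Score (MOS). The feature names (bitrate, resolution, blockiness) follow the abstract, but the training data, the standardisation step, and the least-squares model are illustrative stand-ins, not the actual NR-GVSQI/NR-GVSQE designs.

```python
import numpy as np

# Hypothetical training data: per-video NR features and subjective MOS.
# Columns: bitrate (kbps), vertical resolution (px), blockiness score.
X = np.array([
    [ 600,  480, 0.80],
    [1200,  720, 0.55],
    [2000,  720, 0.40],
    [4000, 1080, 0.20],
    [6000, 1080, 0.10],
], dtype=float)
mos = np.array([1.8, 2.6, 3.3, 4.2, 4.6])

# Standardise features, then fit ordinary least squares as a stand-in
# for the learned mapping (the actual models are more sophisticated).
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma
A = np.column_stack([Xs, np.ones(len(Xs))])  # add intercept column
w, *_ = np.linalg.lstsq(A, mos, rcond=None)

def predict_mos(bitrate, height, blockiness):
    """Predict MOS for one video from its NR features."""
    x = (np.array([bitrate, height, blockiness]) - mu) / sigma
    return float(np.append(x, 1.0) @ w)

print(predict_mos(3000, 1080, 0.3))  # predicted MOS for a mid-range stream
```

The design choice worth noting is that all three features are cheap to obtain client-side without access to the reference video, which is what makes an NR model deployable in a live streaming setting.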

    A low-complexity psychometric curve-fitting approach for the objective quality assessment of streamed game videos

    The increasing popularity of video gaming competitions, the so-called eSports, has contributed to the rise of a new type of end user: the passive game video streaming (GVS) user. This user acts as a passive spectator of the gameplay rather than actively interacting with the content. The content, streamed over the Internet, can suffer from disturbing network and encoding impairments. Assessing the user's perceived quality, i.e. the Quality of Experience (QoE), in real time therefore becomes fundamental. For natural video content, several approaches already exist that tackle client-side real-time QoE evaluation. The intrinsically different expectations of the passive GVS user, however, call for new real-time quality models for these streaming services. This paper therefore presents a real-time Reduced-Reference (RR) quality assessment framework based on a low-complexity psychometric curve-fitting approach. The proposed solution selects the most relevant low-complexity objective feature. Afterwards, the relationship between this feature and the ground-truth quality is modelled on the psychometric perception of the human visual system (HVS). The approach is validated on a publicly available dataset of streamed game videos and is benchmarked against both subjective scores and objective models. As a side contribution, a thorough accuracy analysis of existing Objective Video Quality Metrics (OVQMs) applied to passive GVS is provided. This analysis has led to interesting insights into the accuracy of low-complexity client-based metrics, as well as to the creation of a new Full-Reference (FR) objective metric for GVS, the Game Video Streaming Quality Metric (GVSQM).
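The psychometric curve-fitting idea can be sketched as fitting a saturating logistic function between one objective feature and subjective quality, mirroring how the human visual system's response saturates at both extremes. The Python/SciPy sketch below uses bitrate as the feature and invented MOS values; the paper's actual feature selection and fitted model are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(f, q_min, q_max, slope, midpoint):
    """4-parameter logistic: quality saturates at low and high
    feature values, a common model of the HVS response."""
    return q_min + (q_max - q_min) / (1.0 + np.exp(-slope * (f - midpoint)))

# Hypothetical ground truth: bitrate (Mbps) vs. subjective MOS for one
# content class; a real deployment would fit per feature and content type.
bitrate = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
mos     = np.array([1.4, 1.9, 2.8, 3.9, 4.4, 4.6])

params, _ = curve_fit(psychometric, bitrate, mos,
                      p0=[1.0, 5.0, 1.0, 3.0], maxfev=10000)

# Client-side RR use: map the received stream's feature value straight
# to a predicted MOS, with no access to the pristine reference video.
print(psychometric(3.0, *params))
```

Because evaluation reduces to a single feature measurement plus one closed-form function, the per-frame cost at the client is negligible, which is the "low-complexity" property the abstract emphasises.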

    Refining Measures for Assessing Problematic/Addictive Digital Gaming Use in Clinical and Research Settings

    Problematic or addictive digital gaming (across all types of electronic devices) can have, and has had, extremely adverse impacts on the lives of many individuals around the world. The understanding of this phenomenon, and the effectiveness of treatment design and monitoring, can be improved considerably by the continuing refinement of assessment tools. The present article briefly reviews tools designed to measure problematic or addictive use of digital gaming, the vast majority of which are founded on the Diagnostic and Statistical Manual of Mental Disorders (DSM) criteria for other addictive disorders, such as pathological gambling. Although adapting DSM content and strategies for measuring problematic digital gaming has proven valuable, this approach has some potential issues. We discuss the strengths and limitations of current methods for measuring problematic or addictive gaming and provide various recommendations that might help in enhancing or supplementing existing tools, or in developing new and even more effective tools.

    What is the evidence for harm minimisation measures in gambling venues?

    What techniques are currently being used for electronic gambling machine harm minimisation, and do they work?
    Overview: The use of electronic gambling machines (EGMs) in Australia and New Zealand constitutes the largest sector of the gambling industry. The costs arising from the harms of gambling detract significantly from its benefits, and in all Australian jurisdictions various policy measures have been implemented to reduce these harms. If successful, these would maximise the net benefits associated with EGM gambling. This article reviews the available evidence for a range of these practices, particularly those implemented within EGM venues via ‘codes of practice’. These codes of practice are intended to give effect to the principles of ‘responsible gambling’ within EGM venues. These measures are: self-exclusion, signage, messages, interaction with gamblers, the removal of ATMs from gambling venues, and ‘responsible gambling’ assessed overall in a venue context. In addition, we review the evidence in support of two major recommendations of the Productivity Commission’s 2010 report into gambling, pre-commitment and one-dollar maximum wagers. We conclude that there is a modest level of evidence supporting some measures, notably self-exclusion and, to a greater extent, the removal of ATMs. There is also some evidence that ‘responsible gambling’ measures have, collectively, reduced the harms associated with gambling. However, there is limited evidence available to confirm the effectiveness of most individual ‘responsible gambling’ measures actually implemented in venues. Further, policy measures implemented outside the control of venues (such as ATM removal, reduction in bet limits, and the prohibition of smoking) appear to be associated with more significant effects, based on analysis of EGM revenue data in Victoria.
The evidence for prospective measures is necessarily limited since the ultimate test is post-implementation efficacy, but there is growing evidence to suggest that pre-commitment, one-dollar maximum bets or other machine design changes may yield significantly more effective harm minimisation effects than in-venue practices such as signage or, indeed, self-exclusion. In considering evidence about the effects of existing or prospective measures it is important to emphasise that packages of measures might be more effective than single ones, and that an inability to confirm a statistically significant effect does not mean that no effect exists.
    Evidence Base, issue 2, 201

    The pros and cons of the use of altmetrics in research assessment

    © 2020 The Authors. Published by Levy Library Press. This is an open access article available under a Creative Commons licence. The published version can be accessed on the publisher’s website: http://doi.org/10.29024/sar.10
    Many indicators derived from the web have been proposed to supplement citation-based indicators in support of research assessments. These indicators, often called altmetrics, are available commercially from Altmetric.com and Elsevier’s Plum Analytics, or can be collected directly. These organisations can also deliver altmetrics to support institutional self-evaluations. The potential advantages of altmetrics for research evaluation are that they may reflect important non-academic impacts and may appear before citations when an article is published, thus providing earlier impact evidence. Their disadvantages include susceptibility to gaming, data sparsity, and difficulties translating the evidence into specific types of impact. Despite these limitations, altmetrics have been widely adopted by publishers, apparently to give authors, editors and readers insights into the level of interest in recently published articles. This article summarises the evidence for and against extending the adoption of altmetrics to research evaluations. It argues that whilst systematically gathered altmetrics are inappropriate for important formal research evaluations, they can play a role in some other contexts. They can be informative when evaluating research units that rarely produce journal articles, when seeking evidence of novel types of impact during institutional or other self-evaluations, and when selected by individuals or groups to support narrative-based non-academic claims. In addition, Mendeley reader counts are uniquely valuable as early (mainly scholarly) impact indicators to replace citations when gaming is not possible and early impact evidence is needed.
Organisations using alternative indicators need to recruit or develop in-house expertise to ensure that the indicators are not misused, however.