
    Large-Scale Study of Perceptual Video Quality

    The great variations in videographic skill, camera design, compression and processing protocols, and displays lead to an enormous variety of video impairments. Current no-reference (NR) video quality models are unable to handle this diversity of distortions. This is true in part because available video quality assessment databases contain very limited content at fixed resolutions, were captured using a small number of camera devices by a few videographers, and have been subjected to only a modest number of distortions. As such, these databases fail to adequately represent real-world videos, which contain very different kinds of content obtained under highly diverse imaging conditions and are subject to authentic, often commingled distortions that are impossible to simulate. As a result, NR video quality predictors tested on real-world video data often perform poorly. Towards advancing NR video quality prediction, we constructed a large-scale video quality assessment database containing 585 videos of unique content, captured by a large number of users, with a wide range of complex, authentic distortions. We collected a large number of subjective video quality scores via crowdsourcing. A total of 4,776 unique participants took part in the study, yielding more than 205,000 opinion scores and an average of 240 recorded human opinions per video. We demonstrate the value of the new resource, which we call the LIVE Video Quality Challenge Database (LIVE-VQC), by conducting a comparison of leading NR video quality predictors on it. This study is the largest video quality assessment study ever conducted along several key dimensions: number of unique contents, capture devices, distortion types and combinations of distortions, study participants, and recorded subjective scores. The database is available for download at: http://live.ece.utexas.edu/research/LIVEVQC/index.html
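    As an illustration of the aggregation step such studies describe, the sketch below computes a Mean Opinion Score (MOS) per video from crowdsourced ratings. This is not the authors' code, and the video names and scores are invented for the example; it only shows the standard averaging of per-subject opinion scores.

    ```python
    # Minimal sketch: aggregating crowdsourced subjective ratings into a
    # Mean Opinion Score (MOS) per video. Data below is hypothetical.
    from statistics import mean

    # video_id -> list of per-subject opinion scores (e.g. on a 0-100 scale)
    ratings = {
        "video_A": [72, 68, 75, 80, 66],
        "video_B": [35, 40, 28, 33, 45],
    }

    def mos(scores):
        """Mean Opinion Score: the average of all subject ratings for one video."""
        return mean(scores)

    for vid, scores in ratings.items():
        print(f"{vid}: MOS = {mos(scores):.1f} (n = {len(scores)})")
    ```

    In practice, large crowdsourced studies also apply subject screening and score normalization before averaging; the plain mean above is the simplest defensible aggregate.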

    Virtual Globes for UAV-based data integration: Sputnik GIS and Google Earth™ applications

    This is an Accepted Manuscript of an article published by Taylor & Francis in the International Journal of Digital Earth on 03 May 2018, available online: https://www.tandfonline.com/doi/abs/10.1080/17538947.2018.1470205. The integration of local measurements and monitoring via global-scale Earth observations has become a new challenge in digital Earth science. The increasing accessibility and ease of use of virtual globes (VGs) are primary enablers of this integration, and the digital Earth scientific community has adopted this technology as one of the main methods for disseminating the results of scientific studies. In this study, the best VG software for the dissemination and analysis of high-resolution UAV (Unmanned Aerial Vehicle) data is identified for global and continuous geographic scope support. The VGs Google Earth and Sputnik Geographic Information System (GIS) are selected and compared for this purpose. Google Earth is a free platform and one of the most widely used VGs; one of its best features is its ability to provide users with high-quality visual results. The proprietary software Sputnik GIS more closely approximates the analytical capacity of a traditional GIS and provides outstanding advantages, such as DEM overlapping and visualization for its dissemination. This work was supported by Xunta de Galicia under the grant “Financial aid for the consolidation and structure of competitive units of investigation in the universities of the Galician University System (2016-18)” (Ref. ED431B 2016/030 and Ref. ED341D R2016/023). The authors also acknowledge support provided by “Realización de vuelos virtuales en las parcelas del proyecto Green deserts LIFE09 / ENV/ES / 000447”.

    Crowdsourced intuitive visual design feedback

    For many people, images are a medium preferable to text, and yet, with the exception of star ratings, most formats for conventional computer-mediated feedback focus on text. This thesis develops a new method of crowd feedback for designers based on images. Visual summaries are generated from a crowd’s feedback images chosen in response to a design. The summaries provide the designer with impressionistic and inspiring visual feedback. The thesis sets out the motivation for this new method and describes the development of perceptually organised image sets and a summarisation algorithm to implement it. Evaluation studies are reported which, through a mixed-methods approach, provide evidence of the validity and potential of the new image-based feedback method. It is concluded that the visual feedback method would be more appealing than text for the section of the population with a visual cognitive style. Indeed, the evaluation studies provide evidence that such users believe images are as good as text when communicating their emotional reaction to a design. Designer participants reported being inspired by the visual feedback where, comparably, they were not inspired by text. They also reported that the feedback can represent the perceived mood in their designs, and that they would be enthusiastic users of a service offering this new form of visual design feedback.