
    A statistical reduced-reference method for color image quality assessment

    Although color is a fundamental feature of human visual perception, it remains largely unexplored in reduced-reference (RR) image quality assessment (IQA) schemes. In this paper, we propose a natural scene statistics (NSS) method that efficiently uses this information. It is based on the statistical deviation between the steerable pyramid coefficients of the reference color image and those of the degraded one. We propose and analyze the multivariate generalized Gaussian distribution (MGGD) to model the underlying statistics. To quantify the degradation, we develop and evaluate two measures based respectively on the geodesic distance between two MGGDs and on the closed form of the Kullback-Leibler divergence (KLD). We performed an extensive evaluation of both metrics in various color spaces (RGB, HSV, CIELAB and YCrCb) using the TID 2008 benchmark and the FRTV Phase I validation process. Experimental results demonstrate that the proposed framework achieves good consistency with human visual perception. Furthermore, the best configuration is obtained with the CIELAB color space combined with the KLD-based deviation measure.
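    For reference, one common parameterization of a zero-mean MGGD on R^p with scatter matrix Σ and shape parameter β (a standard form from the literature; the paper's exact notation may differ) is sketched below; β = 1 recovers the multivariate Gaussian. The RR deviation measure then compares the MGGD fitted to the reference subband coefficients with the one fitted to the degraded image, e.g. via a symmetrized Kullback-Leibler divergence; the closed-form KLD expression used in the paper is not reproduced here.

```latex
% One common MGGD parameterization (assumed here for illustration):
p(\mathbf{x}) \;=\;
  \frac{\beta\,\Gamma(p/2)}
       {\pi^{p/2}\,\Gamma\!\bigl(\tfrac{p}{2\beta}\bigr)\,2^{p/(2\beta)}\,|\boldsymbol{\Sigma}|^{1/2}}
  \exp\!\Bigl(-\tfrac{1}{2}\bigl(\mathbf{x}^{\top}\boldsymbol{\Sigma}^{-1}\mathbf{x}\bigr)^{\beta}\Bigr),
\qquad
D_{\mathrm{sym}}(p_r, p_d) \;=\; \mathrm{KL}(p_r \,\|\, p_d) + \mathrm{KL}(p_d \,\|\, p_r).
```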

    Stereoscopic image quality assessment method based on binocular combination saliency model

    The objective quality assessment of stereoscopic images plays an important role in three-dimensional (3D) technologies. In this paper, we propose an effective method to evaluate the quality of stereoscopic images afflicted by symmetric distortions. The major technical contribution of this paper is that both binocular combination behaviours and human 3D visual saliency characteristics are considered. In particular, a new 3D saliency map is developed, which not only greatly reduces the computational complexity by avoiding the calculation of depth information, but also assigns appropriate weights to the image contents. Experimental results indicate that the proposed metric not only significantly outperforms conventional 2D quality metrics, but also achieves higher performance than existing 3D quality assessment models.
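    As a rough illustration of the two ingredients named in the abstract (binocular combination and saliency-based weighting), the following Python sketch combines left/right feature maps by their local energy and pools a local quality map with a saliency map as weights. The helper names and the specific weighting rule are illustrative assumptions, not the authors' model.

```python
import numpy as np

def binocular_combine(feat_left, feat_right, energy_left, energy_right, eps=1e-12):
    """Energy-weighted combination of left/right feature maps, a common
    simplification of binocular-combination behaviour (illustrative only)."""
    w_left = energy_left / (energy_left + energy_right + eps)
    return w_left * feat_left + (1.0 - w_left) * feat_right

def saliency_weighted_score(local_quality, saliency_3d, eps=1e-12):
    """Pool a per-pixel quality map into one score, letting a 3D saliency map
    assign larger weights to salient image content."""
    weights = saliency_3d / (saliency_3d.sum() + eps)
    return float((weights * local_quality).sum())
```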

    Perceptual modelling for 2D and 3D

    Deliverable D1.1 of the ANR PERSEE project. This report was produced as part of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D1.1 of the project.

    Functional diversity enhances detection of ecosystem stability and resolution of predator-prey interactions within a multitrophic community

    Habitat fragmentation and loss are principal factors contributing to the decline of biodiversity, which in turn has a negative impact on ecosystem function. There has been growing interest in understanding diversity’s role in the mechanisms behind ecosystem resilience, with much attention focusing on how functional diversity, or the range of species’ ecological roles in a community, impacts ecosystem function. Under the functional insurance hypothesis, stability in ecosystems is maintained by species that perform similar functions but have asynchronous responses to disturbance. Three stability mechanisms have been proposed to operate through species’ asynchronous responses: cross-scale resilience, response diversity, and density compensation. My objective in this study was to examine change in functional diversity resulting from habitat fragmentation and to detect ecological stability mechanisms in a multitrophic community consisting of longhorned beetles and their beetle predators. I also considered predator-prey interactions between beetles and their insectivorous avian predators at the community level. To meet my objectives, I developed new functional traits to better capture beetle species’ functional roles and new methodology for examining change in functional diversity across trophic levels. I also expanded methodology to better detect one ecological stability mechanism, cross-scale resilience; here, cross-scale resilience was considered to be operating if species with similar functions also showed different landscape response trends. In addition, I developed a new way to assess predator-prey interactions in a multitrophic community by using avian visual perception of the visual contrasts of beetle prey. This approach allowed me to directly examine changes in avian predator and beetle prey abundance. I found that prey functional diversity was more negatively impacted by habitat fragmentation than predator functional diversity. I detected two ecological stability mechanisms, cross-scale resilience and response diversity, which may have provided the beetle community greater resilience to habitat fragmentation. With respect to the interactions between avian predators and beetle prey, variation in the visual contrasts of beetles moderated the degree to which the abundance of birds in some functional groups impacted beetle abundance. A “functional link” may also be important for better resolving the relationships between predator and prey abundance. I suggest that future studies investigate how vision-mediated predator-prey interactions may simultaneously impact the functional diversity of these trophic levels. In addition, assessing three-dimensional surfaces of functional diversity could reveal the best landscapes for promoting the functional diversity of ecosystem service providers in local landscapes.

    Media aesthetics based multimedia storytelling.

    Since the earliest of times, humans have been interested in recording their life experiences, for future reference and for storytelling purposes. This task of recording experiences --i.e., both image and video capture-- has never before in history been as easy as it is today. This is creating a digital information overload that is becoming a great concern for people trying to preserve their life experiences. As high-resolution digital still and video cameras become increasingly pervasive, unprecedented amounts of multimedia are being downloaded to personal hard drives and uploaded to online social networks on a daily basis. The work presented in this dissertation is a contribution to the area of multimedia organization, as well as automatic selection of media for storytelling purposes, which eases the human task of summarizing a collection of images or videos to be shared with other people. As opposed to some prior art in this area, we take an approach in which neither user-generated tags nor comments --that describe the photographs, either in their local or online repositories-- are taken into account, and no user interaction with the algorithms is expected. We take an image analysis approach where both the context images --e.g., images from the online social networks to which the image stories are going to be uploaded-- and the collection images --i.e., the collection of images or videos that needs to be summarized into a story-- are analyzed using image processing algorithms. This allows us to extract relevant metadata that can be used in the summarization process. Multimedia storytellers usually follow three main steps when preparing their stories: they first choose the main story characters, then the main events to describe, and finally, from these media sub-groups, they choose the media based on its relevance to the story as well as on its aesthetic value. Therefore, one of the main contributions of our work is the design of computational models --both regression based and classification based-- that correlate well with human perception of the aesthetic value of images and videos. These computational aesthetics models have been integrated into automatic selection algorithms for multimedia storytelling, which are another important contribution of our work. A human-centric approach has been used in all experiments where it was feasible, and also to assess the final summarization results, i.e., humans are always the final judges of our algorithms, either by inspecting the aesthetic quality of the media or by inspecting the final story generated by our algorithms. We are aware that a perfect automatically generated story summary is very hard to obtain, given the many subjective factors that play a role in such a creative process; rather, the presented approach should be seen as a first step in the storytelling creative process which removes some of the groundwork that would be tedious and time consuming for the user. Overall, the main contributions of this work can be summarized in three: (1) new media aesthetics models for both images and videos that correlate with human perception, (2) new scalable multimedia collection structures that ease the process of media summarization, and (3) new media selection algorithms that are optimized for multimedia storytelling purposes.
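    As a minimal sketch of the selection step described above, the following Python snippet greedily picks k items by trading off a precomputed aesthetic score against redundancy with items already selected. The inputs (an aesthetic score per item and a pairwise content-similarity matrix) are assumed to come from the aesthetics models and content analysis; the greedy trade-off itself is an illustrative stand-in, not the dissertation's exact algorithm.

```python
import numpy as np

def select_story_media(aesthetic, similarity, k, alpha=0.7):
    """Greedily select k media items, balancing aesthetic value (aesthetic, shape (n,))
    against redundancy with already-selected items (similarity, shape (n, n))."""
    selected, remaining = [], set(range(len(aesthetic)))
    while remaining and len(selected) < k:
        best, best_score = None, -np.inf
        for i in remaining:
            # redundancy = similarity to the most similar item already chosen
            redundancy = max((similarity[i, j] for j in selected), default=0.0)
            score = alpha * aesthetic[i] - (1.0 - alpha) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected
```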

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    In this work, spectrally coded multispectral light fields, as captured by a light field camera with a spectrally coded microlens array, are investigated. Two methods are developed for the reconstruction of the coded light fields: one based on the principles of compressed sensing and one deep learning method. The proposed reconstruction approaches are evaluated in detail using novel synthetic as well as real-world datasets.

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    In this work, spatio-spectrally coded multispectral light fields, as taken by a light field camera with a spectrally coded microlens array, are investigated. For the reconstruction of the coded light fields, two methods are developed: one based on the principles of compressed sensing and one deep learning approach. Using novel synthetic as well as real-world datasets, the proposed reconstruction approaches are evaluated in detail.
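    As a minimal sketch of the compressed-sensing side of such a reconstruction, the snippet below runs plain ISTA to recover sparse coefficients x from coded measurements y = Phi @ x, assuming the coding operator and sparsifying basis have been folded into a single matrix Phi. This is generic l1 minimization for illustration, not the reconstruction pipeline developed in the thesis.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam=0.1, n_iter=500):
    """Solve min_x 0.5*||Phi @ x - y||_2^2 + lam*||x||_1 via iterative
    shrinkage-thresholding, where y are the coded measurements."""
    step = 1.0 / (np.linalg.norm(Phi, 2) ** 2)   # 1 / Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)             # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, lam * step)
    return x
```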

    Occlusion: Creating Disorientation, Fugue, and Apophenia in an Art Game

    Occlusion is a procedurally randomized interactive art experience which uses the motifs of repetition, isolation, incongruity and mutability to develop an experience of a Folie à Deux: a madness shared by two. It draws from traditional video game forms, development methods, and tools to situate itself in context with games as well as other forms of interactive digital media. In this way, Occlusion approaches the making of game-like media from the art criticism perspective of Materiality, and the written work accompanying the prototype discusses critical aesthetic concerns for Occlusion both as an art experience borrowing from games and as a text that can be academically understood in relation to other practices of media making. In addition to the produced software artifact and written analysis, this thesis includes primary research in the form of four interviews with artists, authors, game makers and game critics concerning Materiality and dissociative themes in game-like media. The written work first introduces Occlusion in context with other approaches to procedural remixing, Glitch Art, net.art, and analogue and digital collage and décollage, with special attention to recontextualization and apophenia. The experience, visual, and audio design approach of Occlusion is reviewed through a discussion of explicit design choices which define generative space. Development process, release process, post-release distribution, testing, and maintenance are reviewed, and the paper concludes with a description of future work and a post-mortem discussion. Included as appendices are a full specification document, script, and transcripts of all interviews.

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    In this work, spectrally coded multispectral light fields, as captured by a light field camera with a spectrally coded microlens array, are investigated. Two methods are developed for the reconstruction of the coded light fields and evaluated in detail. First, a full reconstruction of the spectral light field based on the principles of compressed sensing is developed. To represent the spectral light fields sparsely, 5D DCT bases as well as a dictionary learning approach are investigated. The conventional vectorized dictionary learning approach is generalized to a tensor notation in order to factorize the light field dictionary tensorially. Due to the reduced number of parameters to be learned, this approach enables larger effective atom sizes. Second, a deep-learning-based reconstruction of the spectral central view and the corresponding disparity map from the coded light field is developed, estimating the desired information directly from the coded measurements. Different strategies for the corresponding multi-task training are compared. To further improve the reconstruction quality, a novel method for incorporating auxiliary losses based on their respective normalized gradient similarity is developed and shown to outperform previous adaptive methods. To train and evaluate the different reconstruction approaches, two datasets are created. First, a large synthetic spectral light field dataset with available ground-truth disparity is created using a ray tracer. This dataset, containing roughly 100k spectral light fields with corresponding disparity, is split into a training, validation, and test set. To assess quality further, seven hand-crafted scenes, so-called dataset challenges, are created. Finally, a real spectral light field dataset is captured with a custom-built spectral light field reference camera. The radiometric and geometric calibration of the camera is discussed in detail. Using the new datasets, the proposed reconstruction approaches are evaluated in detail. Different coding masks are investigated: random, regular, as well as end-to-end optimized coding masks generated with a novel differentiable fractal generation. Furthermore, additional investigations are carried out, for example regarding the dependence on noise, angular resolution, or depth. Overall, the results are convincing and show high reconstruction quality. The deep-learning-based reconstruction, in particular when trained with adaptive multi-task and auxiliary loss strategies, outperforms the compressed-sensing-based reconstruction with subsequent state-of-the-art disparity estimation.
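    One plausible reading of the auxiliary-loss strategy mentioned above is to weight each auxiliary loss by how well its gradient agrees with the main-task gradient. The PyTorch sketch below uses a clipped cosine similarity between flattened gradients as the (normalized) similarity; the exact normalization and update rule of the thesis are not given in the abstract, so this is an assumption for illustration only.

```python
import torch

def gradient_similarity_weights(main_loss, aux_losses, shared_params):
    """Return one non-negative weight per auxiliary loss, computed from the
    cosine similarity between its gradient and the main-task gradient
    (illustrative sketch, not necessarily the thesis's exact scheme)."""
    def flat_grad(loss):
        grads = torch.autograd.grad(loss, shared_params,
                                    retain_graph=True, allow_unused=True)
        return torch.cat([g.flatten() for g in grads if g is not None])

    g_main = flat_grad(main_loss)
    weights = []
    for aux in aux_losses:
        g_aux = flat_grad(aux)
        cos = torch.dot(g_main, g_aux) / (g_main.norm() * g_aux.norm() + 1e-8)
        weights.append(torch.clamp(cos, min=0.0).detach())
    return weights

# usage (hypothetical): total_loss = main_loss + sum(w * l for w, l in zip(weights, aux_losses))
```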