8 research outputs found

    Leveraging progressive model and overfitting for efficient learned image compression

    Deep learning has been overwhelmingly dominant in computer vision and image/video processing for the last decade. For image and video compression, however, it still lags behind traditional techniques based on the discrete cosine transform (DCT) and linear filters. Built on top of an autoencoder architecture, learned image compression (LIC) systems have drawn enormous attention in recent years. Nevertheless, the proposed LIC systems remain inferior to state-of-the-art traditional techniques, for example the Versatile Video Coding (VVC/H.266) standard, in either compression performance or decoding complexity. Although claimed to outperform VVC/H.266 on a limited bit rate range, some proposed LIC systems take over 40 seconds to decode a 2K image on a GPU system. In this paper, we introduce a powerful and flexible LIC framework with a multi-scale progressive (MSP) probability model and a latent representation overfitting (LOF) technique. With different predefined profiles, the proposed framework can achieve various balance points between compression efficiency and computational complexity. Experiments show that the proposed framework achieves 2.5%, 1.0%, and 1.3% Bjontegaard delta bit rate (BD-rate) reduction over the VVC/H.266 standard on three benchmark datasets over a wide bit rate range. More importantly, the decoding complexity is reduced from O(n) to O(1) compared to many other LIC systems, resulting in an over 20-fold speedup when decoding 2K images.
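    The BD-rate metric cited above compares two rate-distortion curves. As a minimal sketch (not the paper's code), the standard Bjontegaard procedure fits a cubic polynomial to log-rate as a function of quality (e.g., PSNR) and integrates both fits over the overlapping quality range:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta bit rate (%) between two RD curves.

    Fits a cubic polynomial to log(rate) as a function of PSNR for each
    codec, averages both fits over the overlapping PSNR range, and
    returns the relative rate difference of the test codec vs. anchor.
    """
    lr_a = np.log(np.asarray(rate_anchor, dtype=float))
    lr_t = np.log(np.asarray(rate_test, dtype=float))
    p_a = np.polyfit(psnr_anchor, lr_a, 3)   # cubic fit: anchor curve
    p_t = np.polyfit(psnr_test, lr_t, 3)     # cubic fit: test curve
    lo = max(min(psnr_anchor), min(psnr_test))   # overlapping quality range
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(p_a), np.polyint(p_t)    # antiderivatives
    avg_a = (np.polyval(ia, hi) - np.polyval(ia, lo)) / (hi - lo)
    avg_t = (np.polyval(it, hi) - np.polyval(it, lo)) / (hi - lo)
    return (np.exp(avg_t - avg_a) - 1) * 100     # percent rate change
```

    A negative result means the test codec needs fewer bits for the same quality; a codec that uniformly spends 10% less rate at every PSNR point yields a BD-rate of -10%.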

    Efficient Topology Coding and Payload Partitioning Techniques for Neural Network Compression (NNC) Standard

    A Neural Network Compression (NNC) standard aims to define a set of coding tools for efficient compression and transmission of neural networks. This paper addresses the high-level syntax (HLS) of NNC and proposes three HLS techniques for network topology coding and payload partitioning. Our first technique provides an efficient way to code pruning topology information. It removes redundancy in the bitmask and thereby improves coding efficiency by 4–99% over existing approaches. The second technique processes bitmasks in larger chunks instead of one bit at a time. It is shown to reduce the computational complexity of NNC encoding by 63% and NNC decoding by 82%. Our third technique makes use of partial data counters to partition an NNC bitstream into uniformly sized units for more efficient data transmission. Even though the smaller partition sizes introduce some overhead, our network simulations show better throughput due to lower packet retransmission rates. To our knowledge, this is the first work to address the practical implementation aspects of HLS. The proposed techniques can be seen as key enabling factors for efficient adaptation and economical deployment of the NNC standard in a plurality of next-generation industrial and academic applications.
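    The second technique's idea of consuming a pruning bitmask in chunks rather than bit by bit can be illustrated with a toy example (the NNC bitstream syntax itself is not reproduced here). Counting pruned weights with a per-byte lookup table touches each byte once instead of iterating over all eight bits:

```python
def count_pruned_bitwise(mask: bytes) -> int:
    """Baseline: inspect the pruning bitmask one bit at a time."""
    n = 0
    for byte in mask:
        for i in range(8):
            n += (byte >> i) & 1
    return n

# Precomputed popcount for every possible byte value (256 entries).
_POPCOUNT = [bin(b).count("1") for b in range(256)]

def count_pruned_chunked(mask: bytes) -> int:
    """Chunked variant: one table lookup handles 8 bits at once."""
    return sum(_POPCOUNT[b] for b in mask)
```

    Both functions return the same count; the chunked version simply does an eighth of the loop iterations, which is the flavor of complexity reduction the abstract reports for encoding and decoding.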

    Learned Enhancement Filters for Image Coding for Machines

    Machine-to-machine (M2M) communication applications and use cases, such as object detection and instance segmentation, are becoming mainstream nowadays. As a consequence, the majority of multimedia content is likely to be consumed by machines in the coming years. This opens up new challenges for efficient compression of this type of data. Two main directions are being explored in the literature: one based on existing traditional codecs, such as the Versatile Video Coding (VVC) standard, that are optimized for human-targeted use cases, and another based on end-to-end trained neural networks. However, traditional codecs have significant benefits over end-to-end learned codecs in terms of interoperability, real-time decoding, and availability of hardware implementations. Therefore, in this paper, we propose learned post-processing filters targeted at enhancing the performance of machine vision tasks on images reconstructed by the VVC codec. The proposed enhancement filters provide significant improvements on the target tasks compared to VVC-coded images. The conducted experiments show that the proposed post-processing filters provide about 45% and 49% Bjontegaard Delta Rate gains over VVC in instance segmentation and object detection tasks, respectively.

    Visual saliency and eye movement: modeling and applications

    Abstract Humans are capable of narrowing their focus onto the highlights of visual information in a fraction of time in order to handle an enormous mass of data. Akin to humans, computers should deal with a tremendous amount of visual information. To replicate such a focusing mechanism, computer vision relies on techniques that filter out redundant information. Consequently, saliency has recently been a popular subject of discussion in the computer vision community, though it is an old subject matter in the cognitive sciences rather than computer science. The reputation of saliency techniques, particularly in the computer vision domain, is largely due to their inexpensive and fast computation, which facilitates their use in many computer vision applications, e.g., image/video compression, object recognition, and tracking. This study investigates visual saliency modeling, which is the transformation of an image into a salience map such that the identified conspicuousness agrees with the statistics of human eye movements. It explores the extent of image and video processing to develop saliency techniques suitable for computer vision; e.g., it adopts a sparse sampling scheme and kernel density estimation to introduce a saliency measure for images. It also studies the role of eye movement in salience modeling. To this end, it introduces a particle-filter-based framework of saccade generation incorporated into a salience model. Moreover, eye movements and salience are exploited in several applications.
The contributions of this study lie in the proposal of a number of salience models for image and video stimuli, a framework to incorporate a model of eye movement generation in salience modeling, and the investigation of the application of salience models and eye movements in tracking, background subtraction, scene recognition, and valence recognition.
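    The sparse-sampling-plus-KDE idea mentioned in the abstract can be sketched in a few lines. This is an illustrative toy, not the thesis's exact measure: features are sampled from the image, their density is estimated with a Gaussian kernel, and rare (low-density) features are deemed salient:

```python
import numpy as np

def kde_saliency(features, bandwidth=1.0):
    """Toy KDE-based saliency over sampled feature vectors.

    Estimates the density of each feature under a Gaussian kernel fitted
    to all samples; saliency is taken as inverse density, so features
    that are rare within the image score highest.
    """
    X = np.asarray(features, dtype=float)                 # (n, d) samples
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    K = np.exp(-d2 / (2.0 * bandwidth ** 2))              # Gaussian kernel matrix
    density = K.mean(axis=1)                              # KDE value per sample
    return 1.0 / (density + 1e-12)                        # saliency ~ rarity
```

    Sparse sampling enters by evaluating this on a subset of image patches rather than every pixel, which is what keeps such measures computationally cheap.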

    Towards Cycle-Consistent Models for Text and Image Retrieval

    Cross-modal retrieval has recently become a research hot-spot, thanks to the development of deep learnable architectures. Such architectures generally learn a joint multi-modal embedding space into which text and images can be projected and compared. Here we investigate a different approach and reformulate the problem of cross-modal retrieval as that of learning a translation between the textual and visual domains. In particular, we propose an end-to-end trainable model which can translate text into image features and vice versa, and which regularizes this mapping with a cycle-consistency criterion. Preliminary experimental evaluations show promising results with respect to ordinary visual-semantic models.
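    The cycle-consistency criterion can be sketched with stand-in linear maps (the actual model uses trained neural translators; the names below are hypothetical): translate text features to the image space and back, then penalize the round-trip reconstruction error:

```python
import numpy as np

def cycle_consistency_loss(x_text, W_t2i, W_i2t):
    """Toy cycle-consistency loss for cross-modal translation.

    Maps text features to the image feature space with W_t2i, maps them
    back with W_i2t, and returns the mean squared round-trip error.
    A perfect inverse pair of translators gives zero loss.
    """
    x_img_hat = x_text @ W_t2i     # text -> image features
    x_back = x_img_hat @ W_i2t     # image features -> text
    return float(np.mean((x_back - x_text) ** 2))
```

    During training, this term is added to the retrieval objective so the two translation directions stay mutually consistent rather than drifting apart.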

    Paying Attention to Descriptions Generated by Image Captioning Models

    To bridge the gap between humans and machines in image understanding and describing, we need further insight into how people describe a perceived scene. In this paper, we study the agreement between bottom-up saliency-based visual attention and object referrals in scene description constructs. We investigate the properties of human-written descriptions and machine-generated ones. We then propose a saliency-boosted image captioning model in order to investigate benefits from low-level cues in language models. We learn that (1) humans mention more salient objects earlier than less salient ones in their descriptions, (2) the better a captioning model performs, the better attention agreement it has with human descriptions, (3) the proposed saliency-boosted model, compared to its baseline form, does not improve significantly on the MS COCO database, indicating that explicit bottom-up boosting does not help when the task is well learned and tuned on a dataset, and (4) better generalization is, however, observed for the saliency-boosted model on unseen data.

    AI4TV 2020: 2nd International Workshop on AI for Smart TV Content Production, Access and Delivery

    Technological developments in comprehensive video understanding - detecting and identifying visual elements of a scene, combined with audio understanding (music, speech), aligned with textual information such as captions and subtitles, and supported by background knowledge - have been undergoing a significant revolution in recent years. The workshop brings together experts from academia and industry to discuss the latest progress in artificial intelligence research on topics related to multimodal information analysis, and in particular the semantic analysis of video, audio, and textual information for smart digital TV content production, access and delivery.