225 research outputs found

    How to foster innovation in the social sciences? Qualitative evidence from focus group workshops at Oxford University

    Full text link
    This report addresses challenges and opportunities for innovation in the social sciences at the University of Oxford. It summarises findings from two focus group workshops with innovation experts from the University ecosystem; the experts included successful social science entrepreneurs and professional service staff from the University. The workshops focused on four dimensions of innovative activity and commercialisation. The findings reveal several challenges at the institutional and individual level, together with features of the social scientific discipline, that impede innovation in the social sciences. Based on these challenges, we present potential solutions and ways forward identified in the focus group discussions to foster social science innovation. The report aims to illustrate the potential of innovation and commercialisation of social scientific research for both researchers and the University.

    Image interpolation using Shearlet based iterative refinement

    Get PDF
    This paper proposes an image interpolation algorithm exploiting sparse representation for natural images. It involves three main steps: (a) obtaining an initial estimate of the high-resolution image using linear methods like FIR filtering, (b) promoting sparsity in a selected dictionary through iterative thresholding, and (c) extracting high-frequency information from the approximation to refine the initial estimate. For the sparse modeling, a shearlet dictionary is chosen to yield a multiscale directional representation. The proposed algorithm is compared to several state-of-the-art methods to assess its objective as well as subjective performance. Compared to the cubic spline interpolation method, an average PSNR gain of around 0.8 dB is observed over a dataset of 200 images.
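    To make the three-step scheme concrete, the sketch below implements it with the shearlet transform abstracted behind a user-supplied forward/inverse pair, so any multiscale directional dictionary (e.g. from a shearlet toolbox) can be plugged in. The function names, the soft-thresholding rule, and the back-projection consistency step are illustrative assumptions, not the exact algorithm of the paper.

    import numpy as np
    from scipy.ndimage import zoom

    def soft_threshold(c, t):
        # shrink coefficients towards zero to promote sparsity
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

    def sparse_interpolate(lr, scale, forward, inverse, n_iter=20, thr=0.05):
        """lr      : low-resolution image (2-D array)
        scale   : integer upsampling factor
        forward : image -> coefficient array in the chosen dictionary
        inverse : coefficient array -> image (assumed to invert `forward`)
        """
        # (a) initial high-resolution estimate with a linear interpolator
        hr = zoom(lr, scale, order=3)
        for _ in range(n_iter):
            # (b) promote sparsity in the dictionary via iterative thresholding
            approx = inverse(soft_threshold(forward(hr), thr))
            # (c) keep the approximation consistent with the observed image and
            # take the high-frequency detail from the sparse approximation
            residual = lr - zoom(approx, 1.0 / scale, order=3)
            hr = approx + zoom(residual, scale, order=3)
        return hr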

    A highly efficient multiplication-free binary arithmetic coder and its application in video coding. In: ICIP

    Get PDF
    Arithmetic coding has attracted growing attention in the past years; recently developed image coding standards such as JBIG-2, JPEG-LS and JPEG2000 are examples of this trend. Binary arithmetic coding is based on the principle of recursive interval subdivision, which involves the following elementary multiplication. Suppose that an estimate of the probability p_LPS of the least probable symbol (LPS) is given and that the current coding interval is represented by its lower bound (base) L and its width (range) R. Based on these settings, the interval is subdivided into two sub-intervals: one of width R_LPS = R × p_LPS, associated with the LPS, and the complementary interval of width R − R_LPS, assigned to the most probable symbol (MPS).
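    The small sketch below illustrates how multiplication-free coders of this kind avoid the product R × p_LPS: it is replaced by a lookup indexed by a probability-state index and a quantised range. The table values and parameter names are made up for illustration and are not the entries standardised for any particular coder; the base update and renormalisation of a complete coder are omitted.

    # hypothetical LPS range table: rows = probability states, columns = quantised R
    RANGE_TAB_LPS = [
        [128, 167, 197, 227],   # state 0: p_LPS close to 0.5
        [ 95, 116, 137, 158],   # state 1
        [ 64,  75,  86,  97],   # state 2
        [ 33,  41,  50,  59],   # state 3: small p_LPS
    ]

    def subdivide(r, state, bin_is_lps):
        """One interval subdivision step without a multiplication.
        r      : current interval width R, kept in [256, 512) by renormalisation
        state  : probability-state index selecting a row of the table
        Returns the width of the sub-interval chosen by the coded bin.
        """
        q = (r >> 6) & 3                 # quantise R into one of 4 cells
        r_lps = RANGE_TAB_LPS[state][q]  # table lookup approximates R * p_LPS
        return r_lps if bin_is_lps else r - r_lps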

    Dacron Patch Infection After Carotid Angioplasty. A Report of 6 Cases

    Get PDF
    Objective: We describe our experience with Dacron patch infections after carotid endarterectomy (CEA). Report: Of 633 patients undergoing carotid endarterectomy with Dacron patching, six re-presented with prosthetic infections. In 3 of the 6 cases a neck haematoma had necessitated surgical revision after the original carotid surgery. Five patients underwent interposition vein grafting and 1 vein patch angioplasty. Postoperatively, 2 patients developed a repeat infection, including the 1 patient with patch angioplasty. All patients were free of infection and neurological symptoms after a maximum follow-up of 56.5 months. Conclusion: Following the development of haemorrhage or wound complications, careful clinical surveillance should be carried out after carotid reconstruction.

    Region-Based Template Matching Prediction for Intra Coding

    Get PDF
    Copy prediction is a well-known category of prediction techniques in video coding in which the current block is predicted by copying the samples from a similar block that is present somewhere in the already decoded stream of samples. Motion-compensated prediction, intra block copy, and template matching prediction are examples. While the displacement information of the similar block is transmitted to the decoder in the bit-stream in the first two approaches, it is derived at the decoder in the last one by repeating the same search algorithm that was carried out at the encoder. Region-based template matching is a recently developed prediction algorithm that is an advanced form of standard template matching. In this method, the reference area is partitioned into multiple regions, and the region to be searched for the similar block(s) is conveyed to the decoder in the bit-stream. Further, the final prediction signal is a linear combination of already decoded similar blocks from the given region. Previous publications demonstrated that region-based template matching can achieve coding efficiency improvements for intra- as well as inter-picture coding with considerably less decoder complexity than conventional template matching. In this paper, a theoretical justification for region-based template matching prediction, supported by experimental data, is presented. Additionally, test results for the method on the latest H.266/Versatile Video Coding (VVC) test model (VTM-14.0) yield average Bjøntegaard-Delta (BD) bit-rate savings of −0.75% in the all intra (AI) configuration, with 130% encoder run-time and 104% decoder run-time for a particular parameter selection.
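    As an illustration of the decoder-side part of the method, the sketch below takes a search region (whose index an encoder would signal), finds the candidate blocks whose L-shaped template best matches the template of the current block, and averages the best candidates into the prediction. The function name, the SAD matching criterion, the uniform averaging weights and all parameters are illustrative assumptions, not the scheme specified in the paper.

    import numpy as np

    def rbtm_predict(recon, cur_y, cur_x, bs, region, tpl=2, n_best=2):
        """Sketch of region-based template matching prediction.
        recon   : 2-D array of already reconstructed samples
        cur_y/x : top-left position of the block to be predicted
        bs      : block size
        region  : (y0, y1, x0, x1) search window inside the reconstructed area,
                  i.e. the region an encoder would signal in the bit-stream
        tpl     : thickness of the L-shaped template (rows above, columns left)
        n_best  : number of best candidates averaged into the prediction
        Assumes cur_y >= tpl and cur_x >= tpl so the template exists.
        """
        recon = recon.astype(np.int64)
        # template of the current block: tpl rows above and tpl columns to the left
        cur_top = recon[cur_y - tpl:cur_y, cur_x - tpl:cur_x + bs]
        cur_left = recon[cur_y:cur_y + bs, cur_x - tpl:cur_x]

        y0, y1, x0, x1 = region
        candidates = []                      # (template SAD, candidate block)
        for y in range(max(y0, tpl), y1 - bs + 1):
            for x in range(max(x0, tpl), x1 - bs + 1):
                cand_top = recon[y - tpl:y, x - tpl:x + bs]
                cand_left = recon[y:y + bs, x - tpl:x]
                sad = (np.abs(cand_top - cur_top).sum()
                       + np.abs(cand_left - cur_left).sum())
                candidates.append((sad, recon[y:y + bs, x:x + bs]))

        candidates.sort(key=lambda c: c[0])  # best-matching templates first
        best = [blk for _, blk in candidates[:n_best]]
        # final prediction: linear combination (here a plain average) of the
        # best candidate blocks found inside the signalled region
        return np.mean(best, axis=0)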

    HEVC performance and complexity for 4K video

    Get PDF
    The recently finalized High Efficiency Video Coding (HEVC) standard was jointly developed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) to improve the compression performance over current video coding standards by 50%. Especially when it comes to transmitting high-resolution video such as 4K over the internet or in broadcast, this 50% bitrate reduction is essential. This paper shows that real-time decoding of 4K video with a frame-level parallel decoding approach using four desktop CPU cores is feasible.
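    A minimal sketch of such frame-level parallelism is given below: each frame is handed to one of four worker threads and starts decoding as soon as all of its reference frames are available. The decode_frame callable, the dependency dictionary and the thread-pool structure are illustrative assumptions, not the decoder architecture evaluated in the paper.

    from concurrent.futures import ThreadPoolExecutor

    def decode_in_parallel(frames, deps, decode_frame, workers=4):
        """frames       : list of frame ids in decoding order
        deps         : dict frame id -> list of reference frame ids; references
                       must precede the frame in decoding order
        decode_frame : callable(frame_id) -> decoded picture
        """
        futures = {}
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for fid in frames:
                ref_futures = [futures[r] for r in deps[fid]]

                def task(fid=fid, refs=ref_futures):
                    for r in refs:            # wait until all references are decoded
                        r.result()
                    return decode_frame(fid)  # then decode this frame on a worker

                futures[fid] = pool.submit(task)
            # collect decoded pictures once every frame has finished
            return {fid: fut.result() for fid, fut in futures.items()}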

    DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks

    Full text link
    The field of video compression has developed some of the most sophisticated and efficient compression algorithms known in the literature, enabling very high compressibility for little loss of information. Whilst some of these techniques are domain specific, many of their underlying principles are universal in that they can be adapted and applied to compressing different types of data. In this work we present DeepCABAC, a compression algorithm for deep neural networks that is based on one of these state-of-the-art video coding techniques. Concretely, it applies the Context-based Adaptive Binary Arithmetic Coder (CABAC), originally designed for the H.264/AVC video coding standard and since established as the state of the art for lossless compression, to the network's parameters. Moreover, DeepCABAC employs a novel quantization scheme that minimizes the rate-distortion function while simultaneously taking the impact of quantization on the accuracy of the network into account. Experimental results show that DeepCABAC consistently attains higher compression rates than previously proposed coding techniques for neural network compression. For instance, it compresses the VGG16 ImageNet model by a factor of 63.6 with no loss of accuracy, representing the entire network with merely 8.7 MB. The source code for encoding and decoding can be found at https://github.com/fraunhoferhhi/DeepCABAC
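    To give a flavour of rate-distortion driven quantization, the sketch below maps each weight to the grid point minimising squared distortion plus lambda times an estimated bit cost. The uniform grid, the simple -log2(p) rate estimate and all parameter names are illustrative assumptions; the method in the paper additionally accounts for the impact on network accuracy and uses CABAC to obtain the actual bit costs.

    import numpy as np

    def rd_quantize(weights, step, lam):
        """Assign each weight the level minimising (w - level)^2 + lam * rate(level)."""
        w = np.asarray(weights, dtype=np.float64)
        # candidate quantisation levels: a uniform grid covering the weight range
        k_max = int(np.ceil(np.abs(w).max() / step))
        levels = np.arange(-k_max, k_max + 1) * step

        # crude rate estimate: -log2 of the empirical level probabilities obtained
        # from a plain nearest-neighbour assignment
        nearest = np.round(w / step).astype(int) + k_max
        counts = np.bincount(nearest, minlength=levels.size).astype(np.float64)
        probs = (counts + 1e-3) / (counts.sum() + 1e-3 * levels.size)
        rate = -np.log2(probs)                       # approximate bits per level

        # RD decision per weight: argmin over levels of distortion + lam * rate
        dist = (w[:, None] - levels[None, :]) ** 2
        best = (dist + lam * rate[None, :]).argmin(axis=1)
        return levels[best], best - k_max            # dequantised weights, integer indices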