
    Predicting and Optimizing Image Compression

    Image compression is a core task for mobile devices, social media and cloud storage backend services. Key evaluation criteria for compression are the quality of the output, the compression ratio achieved, and the computational time (and energy) expended. Predicting the effectiveness of standard compression implementations like libjpeg and WebP on a novel image is challenging, and often leads to suboptimal compression. This paper presents a machine learning-based technique to accurately model the outcome of image compression for arbitrary new images in terms of quality and compression ratio, without requiring significant additional computational time and energy. Using this model, we can actively adapt the aggressiveness of compression on a per-image basis to accurately fit user requirements, leading to compression closer to the optimum.
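
    As a concrete illustration of the approach (the paper's features and model are not given in this abstract), the sketch below trains a regressor to predict JPEG output size from cheap image statistics plus a candidate quality setting, then uses the model to pick the highest quality setting whose predicted size fits a byte budget, without running the codec at every setting.

```python
# A hypothetical sketch of per-image compression prediction, not the
# paper's implementation: cheap features + candidate quality -> predicted
# compressed size, used to adapt compression aggressiveness per image.
import io

import numpy as np
from PIL import Image
from sklearn.ensemble import GradientBoostingRegressor

def features(img: Image.Image, quality: int) -> list:
    """Cheap per-image statistics plus the candidate quality setting."""
    gray = np.asarray(img.convert("L"), dtype=np.float32)
    return [
        gray.std(),                             # global contrast
        np.abs(np.diff(gray, axis=1)).mean(),   # horizontal edge energy
        np.abs(np.diff(gray, axis=0)).mean(),   # vertical edge energy
        float(quality),
    ]

def jpeg_bytes(img: Image.Image, quality: int) -> int:
    """Ground truth: actually compress and measure the output size."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    return buf.getbuffer().nbytes

def train(corpus) -> GradientBoostingRegressor:
    """Offline: learn size = f(features, quality) from a training corpus."""
    X, y = [], []
    for img in corpus:
        for q in (30, 50, 70, 90):
            X.append(features(img, q))
            y.append(jpeg_bytes(img, q))
    return GradientBoostingRegressor().fit(X, y)

def pick_quality(model, img: Image.Image, byte_budget: int) -> int:
    """Online: highest quality predicted to fit the budget, codec-free."""
    for q in range(95, 10, -5):
        if model.predict([features(img, q)])[0] <= byte_budget:
            return q
    return 10
```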

    Depth map compression via 3D region-based representation

    In 3D video, view synthesis is used to create new virtual views between encoded camera views. Errors in the coding of the depth maps introduce geometry inconsistencies in synthesized views. In this paper, a new 3D plane representation of the scene is presented which improves the performance of current standard video codecs in the view synthesis domain. Two image segmentation algorithms are proposed for generating a color and depth segmentation. Using both partitions, depth maps are segmented into regions free of sharp discontinuities, without having to explicitly signal all depth edges. The resulting regions are represented using a planar model in the 3D world scene. This 3D representation allows an efficient encoding while preserving the 3D characteristics of the scene. The 3D planes open up the possibility to code multiview images with a unique representation.
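
    The core geometric step is fitting a plane to each segmented depth region. As a rough illustration (not the authors' codec), the sketch below fits z = ax + by + c to the depth samples of one region by least squares, so that three coefficients can stand in for all of the region's depth values; the segmentation itself is assumed given.

```python
# A rough illustration, not the authors' codec: represent one segmented
# depth region by a least-squares plane z = a*x + b*y + c.
import numpy as np

def fit_plane(depth: np.ndarray, mask: np.ndarray):
    """depth: HxW depth map; mask: boolean HxW region from segmentation."""
    ys, xs = np.nonzero(mask)
    A = np.column_stack([xs, ys, np.ones_like(xs)]).astype(np.float64)
    z = depth[ys, xs].astype(np.float64)
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return a, b, c

def planar_approximation(depth: np.ndarray, mask: np.ndarray):
    """Replace the region's depth samples with the fitted plane."""
    a, b, c = fit_plane(depth, mask)
    ys, xs = np.nonzero(mask)
    approx = depth.astype(np.float64).copy()
    approx[ys, xs] = a * xs + b * ys + c       # 3 coefficients per region
    max_error = np.abs(depth.astype(np.float64) - approx)[mask].max()
    return approx, max_error
```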

    Systematic review and meta-analysis of the diagnostic accuracy of ultrasonography for deep vein thrombosis

    Background: Ultrasound (US) has largely replaced contrast venography as the definitive diagnostic test for deep vein thrombosis (DVT). We aimed to derive a definitive estimate of the diagnostic accuracy of US for clinically suspected DVT and to identify study-level factors that might predict accuracy.

    Methods: We undertook a systematic review, meta-analysis and meta-regression of diagnostic cohort studies that compared US to contrast venography in patients with suspected DVT. We searched Medline, EMBASE, CINAHL, Web of Science, Cochrane Database of Systematic Reviews, Cochrane Controlled Trials Register, Database of Reviews of Effectiveness, the ACP Journal Club, and citation lists (1966 to April 2004). Random effects meta-analysis was used to derive pooled estimates of sensitivity and specificity. Random effects meta-regression was used to identify study-level covariates that predicted diagnostic performance.

    Results: We identified 100 cohorts comparing US to venography in patients with suspected DVT. Overall sensitivity (95% confidence interval) was 94.2% (93.2 to 95.0) for proximal DVT and 63.5% (59.8 to 67.0) for distal DVT; specificity was 93.8% (93.1 to 94.4). Duplex US had pooled sensitivity of 96.5% (95.1 to 97.6) for proximal DVT and 71.2% (64.6 to 77.2) for distal DVT, with specificity of 94.0% (92.8 to 95.1). Triplex US had pooled sensitivity of 96.4% (94.4 to 97.1) for proximal DVT and 75.2% (67.7 to 81.6) for distal DVT, with specificity of 94.3% (92.5 to 95.8). Compression US alone had pooled sensitivity of 93.8% (92.0 to 95.3) for proximal DVT and 56.8% (49.0 to 66.4) for distal DVT, with specificity of 97.8% (97.0 to 98.4). Sensitivity was higher in more recently published studies and in cohorts with a higher prevalence of DVT and more proximal DVT, and was lower in cohorts that reported interpretation by a radiologist. Specificity was higher in cohorts that excluded patients with previous DVT. No studies were identified that compared repeat US to venography in all patients. Repeat US appears to have a positive yield of 1.3%, with 89% of these being confirmed by venography.

    Conclusion: Combined colour-Doppler US techniques have optimal sensitivity, while compression US has optimal specificity for DVT. However, all estimates are subject to substantial unexplained heterogeneity. The role of repeat scanning is very uncertain and based upon limited data.
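
    The abstract does not spell out the pooling model, but a standard choice for random effects meta-analysis of proportions is DerSimonian-Laird pooling on the logit scale. The sketch below illustrates that calculation for per-study sensitivities; it is a generic textbook method, not a reproduction of the review's analysis.

```python
# Generic DerSimonian-Laird random-effects pooling of logit sensitivities;
# one common way pooled estimates like those above are computed, not a
# reproduction of the review's analysis.
import numpy as np

def pool_sensitivity(tp: np.ndarray, fn: np.ndarray):
    """tp, fn: true positives and false negatives per study."""
    tp = tp + 0.5                              # continuity correction
    fn = fn + 0.5
    logit = np.log(tp / fn)                    # log-odds of sensitivity
    var = 1.0 / tp + 1.0 / fn                  # within-study variance
    w = 1.0 / var                              # fixed-effect weights
    fixed = np.sum(w * logit) / np.sum(w)
    q = np.sum(w * (logit - fixed) ** 2)       # Cochran's Q heterogeneity
    df = len(tp) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (var + tau2)                  # random-effects weights
    mu = np.sum(w_re * logit) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    inv = lambda x: 1.0 / (1.0 + np.exp(-x))   # back to proportion scale
    return inv(mu), (inv(mu - 1.96 * se), inv(mu + 1.96 * se))
```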

    An automatic technique for visual quality classification for MPEG-1 video

    The Centre for Digital Video Processing at Dublin City University developed Fischlar [1], a web-based system for recording, analysis, browsing and playback of digitally captured television programs. One major issue for Fischlar is the automatic evaluation of video quality, in order to avoid processing and storing corrupted data. In this paper we propose an automatic classification technique that assesses the quality of the video content, providing a decision criterion for the processing and storage stages.
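
    The paper's actual features and classifier are not described in this abstract, so the sketch below is purely hypothetical: it flags a decoded frame as likely corrupted when discontinuities at 8x8 block boundaries greatly exceed those inside blocks, one simple symptom of MPEG bitstream errors. The blockiness measure and threshold are illustrative assumptions.

```python
# Purely hypothetical corruption check; the paper's actual features are
# not given in the abstract. High discontinuity at 8x8 block boundaries
# relative to in-block discontinuity is one symptom of MPEG errors.
import numpy as np

def blockiness(gray: np.ndarray, block: int = 8) -> float:
    """gray: HxW luminance plane of a decoded frame."""
    jumps = np.abs(np.diff(gray.astype(np.float32), axis=1))
    at_borders = jumps[:, block - 1::block].mean()     # across block edges
    inside = np.delete(jumps, np.s_[block - 1::block], axis=1).mean()
    return float(at_borders / (inside + 1e-6))

def looks_corrupted(gray: np.ndarray, threshold: float = 2.0) -> bool:
    """threshold is an assumed value; it would be tuned on labelled data."""
    return blockiness(gray) > threshold
```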

    The Cerevoice Blizzard Entry 2007: Are Small Database Errors Worse than Compression Artifacts?

    In commercial systems the memory footprint of unit selection systems is often a key issue. This is especially true for PDAs and other embedded devices. In this year's Blizzard entry, CereProc® set itself the criterion that the full-database system entered would have a smaller memory footprint than either of the two smaller-database entries. This was accomplished by applying Speex speech compression to the full-database entry. In turn, the set of small-database techniques used to improve the quality of small-database systems in last year's entry was extended. Finally, for all systems, two quality control methods were applied to the underlying database to improve the match of the lexicon and transcriptions to the underlying data. Results suggest that the mild audio quality artifacts introduced by lossy compression have almost as much impact on MOS perceived quality as the concatenation errors introduced by sparse data in the smaller systems with bulked diphones. Index Terms: speech synthesis, unit selection.
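
    As a rough sketch of the footprint trade-off described above, the snippet below compresses every waveform in a directory with the Speex command-line encoder and reports the resulting fraction of the original size. It assumes speexenc is installed; the directory layout and quality setting are placeholders, not CereProc's actual pipeline.

```python
# Illustrative only: compress a directory of waveforms with the Speex
# command-line encoder and report the size reduction. Assumes speexenc
# is installed; paths and quality are placeholders, not CereProc's setup.
import subprocess
from pathlib import Path

def compress_database(wav_dir: str, out_dir: str, quality: int = 6) -> float:
    src, dst = Path(wav_dir), Path(out_dir)
    dst.mkdir(parents=True, exist_ok=True)
    before = after = 0
    for wav in sorted(src.glob("*.wav")):
        spx = dst / (wav.stem + ".spx")
        subprocess.run(
            ["speexenc", "--quality", str(quality), str(wav), str(spx)],
            check=True,
        )
        before += wav.stat().st_size
        after += spx.stat().st_size
    return after / before    # compressed fraction of the original footprint
```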

    A New Method For Digital Watermarking Based on Combination of DCT and PCA

    In DCT-based digital watermarking, the watermark is embedded within a range of DCT coefficients of the cover image. In this paper, a new method that exploits the low-frequency band is proposed, using a combination of the DCT and PCA transforms. Compared with other DCT-based methods, the proposed method is robust, preserves the quality of the cover image, and increases the capacity of the watermarking.
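
    A simplified sketch of the combined transform is given below: each 8x8 block is DCT-transformed, PCA is run over the low-frequency coefficients of all blocks, and a ±1 watermark sequence is added to the first principal component. The band size, embedding strength and marked component are assumptions for illustration; the paper's exact scheme may differ.

```python
# Simplified sketch of DCT + PCA embedding; band size, strength and the
# marked component are illustrative assumptions, not the paper's scheme.
import numpy as np
from scipy.fft import dctn, idctn

def embed(cover: np.ndarray, wm_bits: np.ndarray, alpha: float = 2.0):
    """cover: HxW grayscale, H and W multiples of 8;
    wm_bits: one +/-1 value per 8x8 block."""
    h, w = cover.shape
    blocks = cover.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    coeffs = dctn(blocks, axes=(2, 3), norm="ortho")   # per-block DCT
    low = coeffs[:, :, :2, :2].reshape(-1, 4)          # low-frequency band
    mean = low.mean(axis=0)
    _, _, vt = np.linalg.svd(low - mean, full_matrices=False)  # PCA axes
    scores = (low - mean) @ vt.T
    scores[:, 0] += alpha * wm_bits                    # mark 1st component
    coeffs[:, :, :2, :2] = (scores @ vt + mean).reshape(h // 8, w // 8, 2, 2)
    marked = idctn(coeffs, axes=(2, 3), norm="ortho")
    return marked.swapaxes(1, 2).reshape(h, w)
```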

    Evolutionary strategy based improved motion estimation technique for H.264 video coding

    In this paper we propose an improved motion estimation algorithm based on an evolutionary strategy (ES) for the H.264 video codec. The proposed technique performs a parallel local search across macroblocks. For this purpose, a (μ+λ) ES is used with an initial population of heuristically and randomly generated motion vectors. Experimental results show that the proposed scheme can reduce the computational complexity of the motion estimation algorithm used in the H.264 reference codec by up to 50% at the same picture quality. The proposed algorithm therefore provides a significant improvement in motion estimation for the H.264 video codec.
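
    As an illustration of the idea (rather than the paper's H.264 integration), the toy search below seeds a population with predictor motion vectors plus random ones, mutates them, and keeps the μ best candidates by sum of absolute differences (SAD) in each generation.

```python
# Toy (mu+lambda) ES motion search for a single 16x16 macroblock; it
# illustrates the idea rather than the paper's H.264 integration.
import numpy as np

def sad(cur, ref, bx, by, mv, n=16):
    """Sum of absolute differences for motion vector mv = (dx, dy)."""
    x, y = bx + mv[0], by + mv[1]
    if x < 0 or y < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
        return np.inf
    return np.abs(cur[by:by + n, bx:bx + n].astype(int)
                  - ref[y:y + n, x:x + n].astype(int)).sum()

def es_search(cur, ref, bx, by, predictors, mu=4, lam=12, gens=8, seed=0):
    """predictors: heuristic seed vectors (e.g. neighbouring blocks' MVs)."""
    rng = np.random.default_rng(seed)
    pop = list(predictors) + [tuple(rng.integers(-8, 9, 2)) for _ in range(mu)]
    pop = sorted(pop, key=lambda mv: sad(cur, ref, bx, by, mv))[:mu]
    for _ in range(gens):
        children = [tuple(np.add(pop[rng.integers(mu)], rng.integers(-2, 3, 2)))
                    for _ in range(lam)]
        pop = sorted(pop + children,              # (mu + lambda) selection
                     key=lambda mv: sad(cur, ref, bx, by, mv))[:mu]
    return pop[0]    # best motion vector found
```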

    Micro computed tomography based finite element models of calcium phosphate scaffolds for bone tissue engineering

    Bone is a living tissue that is able to regenerate by itself. However, when severe bone defects occur, natural regeneration may be impaired. In these cases, bone graft substitutes can be used to induce the natural healing process. As scaffolds for tissue engineering, these bone graft substitutes have to meet specific requirements: among others, the material must be biocompatible and biodegradable, and it must have a porous structure to allow vascularization, cell migration and the formation of new bone. Additionally, the mechanical properties of the scaffold have to resemble those of native tissue. The goal of this project is to create a computational model of the calcium phosphate scaffolds produced by rapid prototyping by the Biomaterials, Biomechanics, and Tissue Engineering group at the Technical University of Catalonia. These models are based on finite element analysis and micro computed tomography images, so that the actual architecture of the scaffolds is taken into account. The generated FE models allow the computation both of local strains, which act as mechanical stimuli on attached cells, and of the behaviour of the entire scaffold. With this information, the scaffold can be optimized for tissue differentiation by tuning both the scaffold architecture and the bulk properties of the scaffold material.
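
    A minimal sketch of the image-to-model step under stated assumptions: threshold the micro-CT stack to find scaffold material, then convert every solid voxel into an 8-node hexahedral element that shares nodes with its neighbours. Solver input formats, boundary conditions and material assignment are omitted.

```python
# Minimal voxel-to-mesh sketch; solver input format, boundary conditions
# and material assignment are omitted.
import numpy as np

# Local corner offsets (dx, dy, dz) of an 8-node hexahedral element.
HEX_CORNERS = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
               (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]

def voxel_mesh(ct: np.ndarray, threshold: float, voxel_size: float):
    """ct: 3D micro-CT stack indexed [z, y, x];
    returns node coordinates (Nx3) and element connectivity (Mx8)."""
    solid = ct > threshold                 # segment scaffold material
    node_id, nodes, elements = {}, [], []
    for z, y, x in zip(*np.nonzero(solid)):
        elem = []
        for dx, dy, dz in HEX_CORNERS:
            key = (x + dx, y + dy, z + dz)
            if key not in node_id:         # share nodes between voxels
                node_id[key] = len(nodes)
                nodes.append([c * voxel_size for c in key])
            elem.append(node_id[key])
        elements.append(elem)
    return np.array(nodes), np.array(elements)
```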