
    Multidimensional model of estimated resource usage for multimedia NoC QoS

    Multiprocessor systems are rapidly entering various high-performance computing segments, such as multimedia processing. Instead of increasing the processor clock frequency, the current trend is to employ multiple cores for processing, e.g. dual or quadruple CPUs in one subsystem. In this contribution, we address the problem of modeling the resource requirements of multimedia applications for distributed computation on a multiprocessor system. This paper shows that estimating resource requirements from the input data enables the dynamic activation of tasks and the run-time redistribution of application tasks. We also formally specify the optimal selection of co-executed applications, with the aim of providing the best end results for such streaming applications within one network-on-chip (NoC) system. We present a new concept for system optimization that involves the major system parameters and resource usage. Experimental results are based on mapping an arbitrary-shaped MPEG-4 video decoder onto a multiprocessor NoC.
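    As a loose illustration of the idea of input-driven resource estimation feeding a run-time task redistribution, the following Python sketch uses an assumed linear cost model and a greedy least-loaded mapping; all names and constants are hypothetical and do not reflect the paper's actual model.

    # Hypothetical sketch: estimate per-task workload from input-data statistics,
    # then redistribute tasks over cores at run time. The linear cost model and
    # all constants are assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class FrameInfo:
        macroblocks: int   # number of coded macroblocks in the frame
        bits: int          # compressed size of the frame in bits

    def estimate_cycles(frame: FrameInfo,
                        cycles_per_mb: float = 4500.0,
                        cycles_per_bit: float = 1.2) -> float:
        """Estimate decoding workload from input-data statistics (assumed linear model)."""
        return frame.macroblocks * cycles_per_mb + frame.bits * cycles_per_bit

    def redistribute(frames: list[FrameInfo], num_cores: int) -> list[list[int]]:
        """Greedy run-time redistribution: assign each task to the least-loaded core."""
        loads = [0.0] * num_cores
        mapping: list[list[int]] = [[] for _ in range(num_cores)]
        for i, frame in enumerate(frames):
            core = loads.index(min(loads))
            loads[core] += estimate_cycles(frame)
            mapping[core].append(i)
        return mapping

    if __name__ == "__main__":
        frames = [FrameInfo(396, 80_000), FrameInfo(396, 120_000), FrameInfo(99, 30_000)]
        print(redistribute(frames, num_cores=2))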

    Hybrid compression of video with graphics in DTV communication systems

    Advanced broadcast manipulation of TV sequences and enhanced user interfaces for TV systems have resulted in an increased amount of pre- and post-editing of video sequences, where graphical information is inserted. However, in the current broadcasting chain, there are no provisions for enabling an efficient transmission/storage of these mixed video and graphics signals and, at this emerging stage of DTV systems, introducing new standards is not desired. Nevertheless, in the professional video communication chain between content provider and broadcaster, and locally in the DTV receiver, proprietary video-graphics compression schemes can be used to enable more efficient transmission/storage of mixed video and graphics signals. For example, in the DTV receiver case this leads to a significant memory-cost reduction. To preserve a high overall image quality, the video and graphics data require independent coding systems, matched with their specific visual and statistical properties. We introduce various efficient algorithms that support both the lossless (contour, run-length and arithmetic coding) and the lossy (block predictive coding) compression of graphics data. If the graphics data are a priori mixed with video and the graphics position is unknown at compression time, an accurate detection mechanism is applied to distinguish the two signals, such that independent coding algorithms can be employed for each data type. In the DTV memory-reduction scenario, an overall bit-rate control completes the system, ensuring a fixed compression factor of 2-3 per frame without sacrificing the quality of the graphics.
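    To make the lossless path more concrete, here is a minimal run-length encoder/decoder for a graphics-like pixel row in Python; it is only an illustration of one of the named techniques, not the proprietary codec described above (which also combines contour and arithmetic coding with bit-rate control).

    # Minimal run-length coding of graphics-like pixel rows with long constant runs.
    from itertools import groupby

    def rle_encode(pixels):
        """Return (value, run_length) pairs for a sequence of pixel values."""
        return [(value, sum(1 for _ in run)) for value, run in groupby(pixels)]

    def rle_decode(pairs):
        """Reconstruct the original pixel sequence from (value, run_length) pairs."""
        return [value for value, count in pairs for _ in range(count)]

    row = [0, 0, 0, 0, 255, 255, 0, 0, 0, 128]
    encoded = rle_encode(row)
    assert rle_decode(encoded) == row
    print(encoded)  # [(0, 4), (255, 2), (0, 3), (128, 1)]

    Graphics overlays typically contain long runs of identical pixel values, which is why run-length coding (followed by an entropy coder such as arithmetic coding) is a natural fit for this data type.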

    Grain size effect on the switching current in soft ferroelectric lead zirconate titanate

    Recently, we reported the appearance of a double peak in the switching current during reverse poling. In the present paper, switching-current measurements have been carried out on a soft lead zirconate titanate as a function of grain size. While small grains show only a small single switching peak, large grains show double-peak switching, as commonly observed in this material. The pyroelectric coefficient curves show a trend consistent with the switching curves. This behavior is attributed to non-180° domain switching during reverse poling as a result of residual stresses developed during forward poling. ©2007 American Institute of Physics.

    The scaling dimension of low lying Dirac eigenmodes and of the topological charge density

    As a quantitative measure of localization, the inverse participation ratio of low-lying Dirac eigenmodes and of the topological charge density is calculated on quenched lattices over a wide range of lattice spacings and volumes. Since different topological objects (instantons, vortices, monopoles, and artifacts) have different co-dimensions, a scaling analysis provides information on the amount of each present and their correlation with the localization of the low-lying eigenmodes. Comment: Lattice2004 (topology), Fermilab, June 21-26, 2004; 3 pages, 3 figures.
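    As background for the scaling analysis above, the following is the standard lattice definition of the inverse participation ratio (IPR), assuming a unit-normalized eigenmode density; the exact normalization used in the paper may differ.

    % Eigenmode density on a lattice of V sites, normalized to one:
    \[
      \rho(x) = \psi^{\dagger}(x)\,\psi(x), \qquad \sum_{x} \rho(x) = 1,
    \]
    % Inverse participation ratio:
    \[
      I = V \sum_{x} \rho(x)^{2}.
    \]

    With this normalization, I is approximately 1 for a mode spread uniformly over all V sites and approximately V for a mode concentrated on a single site, so the dependence of I on lattice volume and spacing carries the localization (and hence co-dimension) information exploited in the scaling analysis.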

    Formation of a theoretical model of geopolitical discourse in domestic political thought of the late 20th to early 21st century

    The article examines the emergence of a new research tradition of geopolitical discourse within the subject matter of domestic political thought of the 20th to the early 21st century. The positions of leading domestic scholars on the formation of the empirical, ideological and theoretical basis for establishing this tradition of political research are outlined.

    Triplet network for classification of benign and pre-malignant polyps

    Colorectal polyps are critical indicators of colorectal cancer (CRC). Classification of polyps during colonoscopy is still a challenge for which many medical experts have come up with visual models, albeit with limited success. An early detection of CRC prevents further complications in the colon, which makes identification of abnormal tissue a crucial step during routine colonoscopy. In this paper, a classification approach is proposed to differentiate between benign and pre-malignant polyps using features learned from a Triplet Network architecture. The study includes a total of 154 patients, with 203 different polyps. For each polyp, an image is acquired with White Light (WL), and additionally with two recent endoscopic modalities: Blue Laser Imaging (BLI) and Linked Color Imaging (LCI). The network is trained with the associated triplet loss, allowing the learning of non-linear features that form a highly discriminative embedding and lead to excellent results with simple linear classifiers. Additionally, the acquisition of each polyp with WL, BLI and LCI enables the combination of the posterior probabilities, yielding a more robust classification result. Threefold cross-validation is employed as the validation method, and accuracy, sensitivity, specificity and area under the curve (AUC) are computed as evaluation metrics. While our approach achieves a classification performance similar to state-of-the-art methods, it has a much lower inference time (from hours to seconds, on a single GPU). The increased robustness and much faster execution facilitate future advances towards patient safety and may avoid time-consuming and costly histological assessment.
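    For context, a minimal PyTorch sketch of a triplet-loss setup on learned embeddings is given below; the toy encoder, margin and squared-Euclidean distance are assumptions for illustration and not the network architecture used in the paper.

    # Minimal triplet-loss sketch: a toy encoder maps images to L2-normalized
    # embeddings, and the loss enforces anchor-positive pairs to be closer than
    # anchor-negative pairs by a margin. All hyperparameters are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EmbeddingNet(nn.Module):
        """Toy encoder mapping an image to an L2-normalized embedding vector."""
        def __init__(self, dim: int = 64):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, dim),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return F.normalize(self.backbone(x), dim=1)

    def triplet_loss(anchor, positive, negative, margin: float = 0.2):
        """Hinge on the gap between anchor-positive and anchor-negative distances."""
        d_ap = (anchor - positive).pow(2).sum(dim=1)
        d_an = (anchor - negative).pow(2).sum(dim=1)
        return F.relu(d_ap - d_an + margin).mean()

    net = EmbeddingNet()
    a, p, n = (torch.randn(8, 3, 64, 64) for _ in range(3))
    loss = triplet_loss(net(a), net(p), net(n))
    loss.backward()

    In line with the abstract, the frozen embeddings could then be fed to a simple linear classifier, whose per-modality posterior probabilities (WL, BLI, LCI) are combined for the final decision.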

    Investigating and Improving Latent Density Segmentation Models for Aleatoric Uncertainty Quantification in Medical Imaging

    Data uncertainties, such as sensor noise or occlusions, can introduce irreducible ambiguities in images, which result in varying, yet plausible, semantic hypotheses. In machine learning, this ambiguity is commonly referred to as aleatoric uncertainty. Latent density models can be utilized to address this problem in image segmentation. The most popular approach is the Probabilistic U-Net (PU-Net), which uses latent Normal densities to optimize the conditional data log-likelihood Evidence Lower Bound. In this work, we demonstrate that the PU-Net latent space is severely inhomogeneous. As a result, the effectiveness of gradient descent is inhibited and the model becomes extremely sensitive to the localization of the latent-space samples, resulting in defective predictions. To address this, we present the Sinkhorn PU-Net (SPU-Net), which uses the Sinkhorn Divergence to promote homogeneity across all latent dimensions, effectively improving gradient-descent updates and model robustness. Our results on public datasets covering various clinical segmentation problems show that the SPU-Net achieves up to 11% performance gains over preceding latent-variable models for probabilistic segmentation on the Hungarian-Matched metric. The results indicate that encouraging a homogeneous latent space can significantly improve latent density modeling for medical image segmentation. Comment: 12 pages incl. references, 11 figures.
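    As an indication of how a Sinkhorn divergence between two sets of latent samples can be computed, the sketch below implements a log-domain entropic Sinkhorn iteration in PyTorch; the squared-Euclidean cost, regularization strength and iteration count are assumptions for illustration, and this is not the SPU-Net implementation.

    # Entropy-regularized Sinkhorn divergence between two batches of latent samples.
    # The debiased form S(x, y) = OT(x, y) - (OT(x, x) + OT(y, y)) / 2 is used.
    import math
    import torch

    def sinkhorn_cost(x: torch.Tensor, y: torch.Tensor,
                      eps: float = 0.1, iters: int = 100) -> torch.Tensor:
        """Entropic OT cost between uniform empirical measures on x and y (log-domain Sinkhorn)."""
        n, m = x.shape[0], y.shape[0]
        cost = torch.cdist(x, y, p=2).pow(2)            # pairwise squared distances
        log_a = torch.full((n,), -math.log(n))          # log of uniform weights on x
        log_b = torch.full((m,), -math.log(m))          # log of uniform weights on y
        f = torch.zeros(n)
        g = torch.zeros(m)
        for _ in range(iters):                          # alternating dual potential updates
            f = -eps * torch.logsumexp((g[None, :] - cost) / eps + log_b[None, :], dim=1)
            g = -eps * torch.logsumexp((f[:, None] - cost) / eps + log_a[:, None], dim=0)
        plan = torch.exp((f[:, None] + g[None, :] - cost) / eps
                         + log_a[:, None] + log_b[None, :])
        return (plan * cost).sum()

    def sinkhorn_divergence(x, y, eps: float = 0.1) -> torch.Tensor:
        """Debiased divergence: zero when the two empirical measures coincide."""
        return sinkhorn_cost(x, y, eps) - 0.5 * (sinkhorn_cost(x, x, eps) + sinkhorn_cost(y, y, eps))

    latent_samples = torch.randn(32, 8)   # e.g. samples from the posterior network
    prior_samples = torch.randn(32, 8)    # e.g. samples from the prior network
    print(sinkhorn_divergence(latent_samples, prior_samples).item())

    In a latent-variable model such as the PU-Net, a term of this kind would be added to the training objective between posterior and prior latent samples, which is roughly how the abstract describes the homogenizing effect on the latent space.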