
    Voronoi-Based Compact Image Descriptors: Efficient Region-of-Interest Retrieval With VLAD and Deep-Learning-Based Descriptors

    We investigate the problem of image retrieval based on visual queries when the latter comprise arbitrary regions-of-interest (ROI) rather than entire images. Our proposal is a compact image descriptor that combines the state-of-the-art in content-based descriptor extraction with a multi-level, Voronoi-based spatial partitioning of each dataset image. The proposed multi-level Voronoi-based encoding uses a spatial hierarchical K-means over interest-point locations and computes a content-based descriptor over each cell. In order to reduce the matching complexity with minimal or no sacrifice in retrieval performance: (i) we utilize the tree structure of the spatial hierarchical K-means to perform a top-to-bottom pruning for local similarity maxima; (ii) we propose a new image similarity score that combines relevant information from all partition levels into a single similarity measure; (iii) we combine our proposal with a novel and efficient approach for optimal bit allocation within quantized descriptor representations. By deriving both a Voronoi-based VLAD descriptor (termed Fast-VVLAD) and a Voronoi-based deep convolutional neural network (CNN) descriptor (termed Fast-VDCNN), we demonstrate that our Voronoi-based framework is agnostic to the descriptor basis and can easily be slotted into existing frameworks. Via a range of ROI queries in two standard datasets, it is shown that the Voronoi-based descriptors achieve comparable or higher mean Average Precision against conventional grid-based spatial search, while offering more than a two-fold reduction in complexity. Finally, beyond ROI queries, we show that Voronoi partitioning improves the geometric invariance of compact CNN descriptors, thereby resulting in competitive performance against the current state-of-the-art on whole-image retrieval.
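    A minimal sketch of the spatial-partitioning idea, assuming scikit-learn's KMeans and plain mean pooling as a stand-in for the paper's VLAD/CNN per-cell descriptors; the tree construction and the greedy top-to-bottom pruning below are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: hierarchical K-means over interest-point LOCATIONS
# builds a tree of Voronoi cells; a descriptor is aggregated per cell.
# Mean pooling here is a simplified stand-in for VLAD/CNN descriptors.
import numpy as np
from sklearn.cluster import KMeans

def build_voronoi_tree(points, descs, branching=4, depth=2):
    """Recursively split keypoint locations into Voronoi cells."""
    node = {"descriptor": descs.mean(axis=0), "children": []}
    if depth == 0 or len(points) < branching:
        return node
    labels = KMeans(n_clusters=branching, n_init=10).fit_predict(points)
    for c in range(branching):
        mask = labels == c
        if mask.any():
            node["children"].append(
                build_voronoi_tree(points[mask], descs[mask],
                                   branching, depth - 1))
    return node

def best_cell_similarity(node, query, sim):
    """Greedy top-to-bottom descent to a local similarity maximum."""
    score = sim(node["descriptor"], query)
    while node["children"]:
        best = max(node["children"],
                   key=lambda n: sim(n["descriptor"], query))
        child_score = sim(best["descriptor"], query)
        if child_score <= score:   # prune: no child improves the score
            break
        node, score = best, child_score
    return score

# toy usage with random keypoint locations and 128-D descriptors
rng = np.random.default_rng(0)
pts, ds = rng.random((500, 2)), rng.random((500, 128))
tree = build_voronoi_tree(pts, ds)
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
print(best_cell_similarity(tree, rng.random(128), cos))
```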

    Deep perceptual preprocessing for video coding

    We introduce the concept of rate-aware deep perceptual preprocessing (DPP) for video encoding. DPP makes a single pass over each input frame in order to enhance its visual quality when the video is to be compressed with any codec at any bitrate. The resulting bitstreams can be decoded and displayed at the client side without any post-processing component. DPP comprises a convolutional neural network that is trained via a composite set of loss functions incorporating: (i) a perceptual loss based on a trained no-reference image quality assessment model; (ii) a reference-based fidelity loss expressing L1 and structural similarity aspects; (iii) a motion-based rate loss via block-based transform, quantization and entropy estimates, which converts the essential components of standard hybrid video encoder designs into a trainable framework. Extensive testing using multiple quality metrics and AVC, AV1 and VVC encoders shows that DPP+encoder reduces, on average, the bitrate of the corresponding encoder by 11%. This marks the first time a server-side neural processing component achieves such savings over the state-of-the-art in video coding.
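    A hedged sketch of a composite loss in the spirit of the abstract, written in PyTorch. The weights, the global (non-windowed) SSIM stand-in, the crude rate proxy, and the `nr_iqa` quality model are all assumptions for illustration, not the paper's actual components.

```python
# Illustrative composite training loss: perceptual + L1/SSIM fidelity
# + a rate proxy. Everything below is a sketch, not the DPP design.
import torch
import torch.nn.functional as F

def ssim_simple(x, y, c1=0.01**2, c2=0.03**2):
    """Global SSIM stand-in; a real SSIM uses local windows."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def rate_proxy(x):
    """Crude rate estimate from horizontal residual energy; the paper
    uses block-based transforms, quantization and entropy estimates."""
    dx = x[..., :, 1:] - x[..., :, :-1]
    return dx.abs().mean()

def dpp_loss(pre, ref, nr_iqa, w_perc=1.0, w_fid=1.0, w_rate=0.1):
    perceptual = -nr_iqa(pre)                       # higher quality -> lower loss
    fidelity = F.l1_loss(pre, ref) + (1 - ssim_simple(pre, ref))
    return w_perc * perceptual + w_fid * fidelity + w_rate * rate_proxy(pre)

# toy usage: identity "preprocessing" and a dummy no-reference model
frame = torch.rand(1, 3, 64, 64)
print(float(dpp_loss(frame, frame, nr_iqa=lambda t: t.mean())))
```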

    A reduced semantics for deciding trace equivalence using constraint systems

    Many privacy-type properties of security protocols can be modelled using trace equivalence properties in suitable process algebras. It has been shown that such properties can be decided for interesting classes of finite processes (i.e., without replication) by means of symbolic execution and constraint solving. However, this does not suffice to obtain practical tools: current prototypes suffer from a classical combinatorial explosion problem caused by the exploration of many interleavings in the behaviour of processes. Mödersheim et al. have tackled this problem for reachability properties using partial order reduction techniques. We revisit their work, generalize it and adapt it for equivalence checking. We obtain an optimization in the form of a reduced symbolic semantics that eliminates redundant interleavings on the fly.
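    A minimal illustration of the interleaving explosion and of the partial-order-style pruning idea. The toy processes and the blanket independence assumption below are made up; the paper's reduced semantics operates on symbolic constraint systems, not concrete traces.

```python
# Two processes of two actions each already yield C(4,2) = 6 interleavings;
# a reduced semantics keeps one representative per equivalence class of
# traces that differ only by swapping adjacent independent actions.

P = ["a1", "a2"]          # actions of process P
Q = ["b1", "b2"]          # actions of process Q

def interleavings(p, q):
    """All shuffles of p and q preserving each process's own order."""
    if not p: return [q]
    if not q: return [p]
    return ([[p[0]] + t for t in interleavings(p[1:], q)] +
            [[q[0]] + t for t in interleavings(p, q[1:])])

all_traces = interleavings(P, Q)
print(len(all_traces))    # 6 traces to explore naively

# If every P-action is independent of every Q-action, all 6 traces are
# equivalent, so exploring a single canonical representative suffices,
# e.g. the trace where P's actions come first.
canonical = [t for t in all_traces if t[:len(P)] == P]
print(canonical)          # [['a1', 'a2', 'b1', 'b2']]
```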

    Flow cytometric DNA ploidy analysis of ovarian granulosa cell tumors

    The nuclear DNA content of 50 ovarian tumors initially diagnosed as granulosa cell tumors was measured by flow cytometry using paraffin-embedded archival material. The follow-up period of the patients ranged from 4 months to 19 years. Thirty-eight tumors were diploid or near-diploid, while five were aneuploid; DNA profiles of the remaining seven tumors could not be evaluated. All 50 tumors were immunohistochemically tested for expression of the intermediate filament proteins vimentin and cytokeratin and of epithelial membrane antigen. The cells of all but three tumors expressed vimentin. These three vimentin-negative tumors were positive for cytokeratin and epithelial membrane antigen. They were highly aneuploid and, though originally diagnosed as granulosa cell tumors, most likely represent undifferentiated carcinomas. Hence, only two typical granulosa cell tumors were aneuploid. In addition, frozen tissue samples from 9 of 10 granulosa cell tumors showed a DNA diploid content. Our results indicate that granulosa cell tumors tend to be diploid or to have only minor ploidy abnormalities, which is in line with their relatively benign character. An undifferentiated carcinoma should be considered in the differential diagnosis of tumors with a high DNA index.
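    For readers unfamiliar with the DNA index used above, a tiny sketch of how it is computed in flow-cytometric ploidy analysis: the ratio of the tumor G0/G1 peak position to that of a diploid reference. The peak channel numbers below are made-up values for illustration.

```python
# DNA index (DI): tumor G0/G1 peak channel over diploid reference channel.
def dna_index(tumor_g1_channel, diploid_g1_channel):
    """DI = 1.0 indicates diploidy; DI != 1.0 suggests aneuploidy."""
    return tumor_g1_channel / diploid_g1_channel

print(dna_index(200, 200))   # 1.0  -> diploid
print(dna_index(310, 200))   # 1.55 -> markedly aneuploid
```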

    Neuromorphic Vision Sensing for CNN-based Action Recognition

    Neuromorphic vision sensing (NVS) hardware is now gaining traction as a low-power/high-speed visual sensing technology that circumvents the limitations of conventional active pixel sensing (APS) cameras. While object detection and tracking models have been investigated in conjunction with NVS, there is currently little work on NVS for higher-level semantic tasks such as action recognition. Contrary to recent work that considers homogeneous transfer between flow domains (optical flow to motion vectors), we propose to embed an NVS emulator into a multi-modal transfer learning framework that carries out heterogeneous transfer from optical flow to NVS. The potential of our framework is showcased by the fact that, for the first time, our NVS-based results achieve action recognition performance comparable to motion-vector or optical-flow based methods (i.e., accuracy on UCF-101 within 8.8% of I3D with optical flow), with the NVS emulator and NVS camera hardware offering three to six orders of magnitude faster frame generation (respectively) compared to standard Brox optical flow. Beyond this significant advantage, our CNN processing is found to have the lowest total GFLOP count among all competing methods (up to 7.7 times complexity saving compared to I3D with optical flow).
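    A hedged sketch of the usual DVS/NVS emulation principle: emit ON/OFF events where the log-intensity change between consecutive frames crosses a contrast threshold. The threshold value and frame source are assumptions; the paper's emulator sits inside a transfer-learning pipeline rather than this standalone form.

```python
# Toy NVS emulator: threshold crossings of log-intensity differences.
import numpy as np

def emulate_events(prev_frame, frame, theta=0.15, eps=1e-3):
    """Return (y, x, polarity) tuples for contrast-threshold crossings."""
    dlog = np.log(frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(dlog) >= theta)
    return [(y, x, 1 if dlog[y, x] > 0 else -1) for y, x in zip(ys, xs)]

# toy usage on two random grayscale frames with values in [0, 1]
rng = np.random.default_rng(1)
f0, f1 = rng.random((4, 4)), rng.random((4, 4))
print(emulate_events(f0, f1)[:5])
```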

    Estimating the global costs of hearing loss

    Objective: To estimate the global costs of hearing loss in 2019. Design: Prevalence-based costing model. Study sample: Hearing loss data from the 2019 Global Burden of Disease study. Additional non-hearing-related health care costs, educational support, exclusion from the labour force in countries with full employment, and societal costs posed by lost quality of life were determined. All costs were reported in 2019 purchasing power parity (PPP) adjusted international dollars. Results: Total global economic costs of hearing loss exceeded $981 billion. 47% of costs were related to quality of life losses, with 32% due to additional costs of poor health in people with hearing loss. 57% of costs were outside of high-income countries. 6.5% of costs were for children aged 0–14. In scenario analysis, a 5% reduction in prevalence of hearing loss would reduce global costs by $49 billion. Conclusion: This analysis highlights the major economic consequences of not taking action to address hearing loss worldwide. Small reductions in prevalence and/or severity of hearing loss could avert substantial economic costs to society. These cost estimates can also be used to help model the cost-effectiveness of interventions to prevent/tackle hearing loss and strengthen the case for investment.
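    A back-of-envelope check of the scenario analysis: scaling the $981 billion total proportionally by a 5% prevalence reduction reproduces roughly the quoted $49 billion saving. Proportional scaling is an assumption for illustration; the study's prevalence-based model is more detailed.

```python
# Sanity check: 5% of the $981 billion total is about $49 billion.
total_cost_billion = 981
reduction = 0.05
print(total_cost_billion * reduction)   # ~49.05 billion int'l dollars
```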