2,190 research outputs found

    VXA: A Virtual Architecture for Durable Compressed Archives

    Full text link
    Data compression algorithms change frequently, and obsolete decoders do not always run on new hardware and operating systems, threatening the long-term usability of content archived using those algorithms. Re-encoding content into new formats is cumbersome, and highly undesirable when lossy compression is involved. Processor architectures, in contrast, have remained comparatively stable over recent decades. VXA, an archival storage system designed around this observation, archives executable decoders along with the encoded content it stores. VXA decoders run in a specialized virtual machine that implements an OS-independent execution environment based on the standard x86 architecture. The VXA virtual machine strictly limits access to host system services, making decoders safe to run even if an archive contains malicious code. VXA's adoption of a "native" processor architecture instead of type-safe language technology allows reuse of existing "hand-optimized" decoders in C and assembly language, and permits decoders access to performance-enhancing architecture features such as vector processing instructions. The performance cost of VXA's virtualization is typically less than 15% compared with the same decoders running natively. The storage cost of archived decoders, typically 30-130KB each, can be amortized across many archived files sharing the same compression method.

    Comment: 14 pages, 7 figures, 2 tables
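
    The amortization claim above is easy to model: the archive stores one decoder per compression method, and every file encoded with that method references the same stored copy. Below is a minimal Python sketch of this idea; the class and method names are hypothetical and not VXA's actual interfaces.

        import hashlib

        class VxaLikeArchive:
            """Toy model of decoder amortization (names hypothetical)."""

            def __init__(self):
                self.decoders = {}   # decoder hash -> decoder binary
                self.entries = []    # (file name, encoded bytes, decoder hash)

            def add(self, name, encoded_bytes, decoder_binary):
                # Deduplicate the decoder: all files sharing a compression
                # method reference a single archived copy.
                key = hashlib.sha256(decoder_binary).hexdigest()
                self.decoders.setdefault(key, decoder_binary)
                self.entries.append((name, encoded_bytes, key))

            def decoder_overhead(self):
                # Total bytes spent on archived decoders (typically
                # 30-130KB each, per the abstract), independent of how
                # many files reference each decoder.
                return sum(len(d) for d in self.decoders.values())

    With, say, a thousand files sharing one 100KB decoder, the per-file overhead works out to about 100 bytes.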

    A framework and user interface for automatic region based segmentation algorithms

    Get PDF
    In this paper we describe a framework and tool developed for running and evaluating automatic region based segmentation algorithms. The tool was designed to allow simple integration of existing and future segmentation algorithms, both single-image algorithms and those that operate on video data. Our framework supports plug-in segmenters, media decoders, and region-map codecs. We provide several sophisticated implementations of these plug-ins, including a video decoder capable of frame-accurate decoding of a large variety of video formats, an image decoder which also handles a comprehensive collection of formats, and an efficient implementation of a region-map codec. The tool includes both a graphical user interface to allow users to browse, visually inspect, and evaluate the algorithm output, and a batch processing interface for segmentation of large data collections. The application allows researchers to focus more on the development and evaluation of segmentation methods, relying on the framework for encoding/decoding input and output, and on the front end for visualization.
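
    The abstract does not give the plug-in API, but the three plug-in roles it names (segmenters, media decoders, region-map codecs) suggest interfaces along the following lines. This is a hypothetical Python sketch; all class and method names are assumptions, not the tool's actual API.

        from abc import ABC, abstractmethod

        class MediaDecoder(ABC):
            """Plug-in that yields frames from an image or video file."""
            @abstractmethod
            def frames(self, path):
                ...

        class Segmenter(ABC):
            """Plug-in that maps a frame to a region map (a 2-D label array)."""
            @abstractmethod
            def segment(self, frame):
                ...

        class RegionMapCodec(ABC):
            """Plug-in that serializes region maps for storage and evaluation."""
            @abstractmethod
            def encode(self, region_map):
                ...

            @abstractmethod
            def decode(self, data):
                ...

        def run_batch(decoder, segmenter, codec, paths):
            # Batch-processing interface: segment every frame of every
            # input, leaving decoding and encoding to the plug-ins.
            for path in paths:
                for frame in decoder.frames(path):
                    yield codec.encode(segmenter.segment(frame))

    A design like this lets a researcher swap in a new segmenter without touching the decoding or visualization code, which matches the paper's stated goal.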

    The Físchlár digital video recording, analysis, and browsing system

    Get PDF
    In the digital video indexing research area, an important technique is shot boundary detection, which automatically segments long video material into camera shots using content-based analysis of the video. We have been working on developing various shot boundary detection and representative frame selection techniques to automatically index encoded video streams and provide end users with video browsing/navigation features. In this paper we describe a demonstrator digital video system that allows the user to record a TV broadcast programme to MPEG-1 file format and to easily browse and play back the file content online. The system incorporates the shot boundary detection and representative frame selection techniques we have developed and has become a full-featured digital video system that not only demonstrates any further techniques we develop, but also captures users' video browsing behaviour. At the moment the system has a real user base of about a hundred people, and we are closely monitoring how they use the video browsing/navigation features the system provides.
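
    The abstract does not specify which detection method the system uses, but a common baseline for shot boundary detection on decoded frames is histogram differencing: declare a cut wherever the grey-level histogram changes sharply between consecutive frames. A minimal Python sketch under that assumption follows; the threshold and bin count are illustrative, not the system's tuned values.

        import numpy as np

        def shot_boundaries(frames, threshold=0.35, bins=64):
            # Flag a cut where the normalized grey-level histogram of
            # consecutive frames changes sharply (threshold illustrative).
            cuts = []
            prev_hist = None
            for i, frame in enumerate(frames):
                hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
                hist = hist / max(hist.sum(), 1)   # normalize per frame
                if prev_hist is not None:
                    # L1 distance between normalized histograms, in [0, 2]
                    if np.abs(hist - prev_hist).sum() > threshold:
                        cuts.append(i)             # new shot starts here
                prev_hist = hist
            return cuts

    Each detected cut is then a natural place to pick a representative frame for the browsing interface.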

    Low complexity object detection with background subtraction for intelligent remote monitoring

    Get PDF

    Extraction of Projection Profile, Run-Histogram and Entropy Features Straight from Run-Length Compressed Text-Documents

    Full text link
    Document Image Analysis, like any Digital Image Analysis, requires identification and extraction of proper features, which are generally extracted from uncompressed images, though in reality images are made available in compressed form for reasons such as transmission and storage efficiency. This implies that the compressed image must first be decompressed, which demands additional computing resources. This limitation motivates research into extracting features directly from the compressed image. In this research, we propose to extract essential features such as the projection profile, run-histogram, and entropy for text document analysis directly from run-length compressed text documents. The experiments illustrate that the features are extracted directly from the compressed image, without going through the stage of decompression, which reduces the computing time. The feature values so extracted are exactly identical to those extracted from uncompressed images.

    Comment: Published by IEEE in Proceedings of ACPR-2013. arXiv admin note: text overlap with arXiv:1403.778
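
    The key observation is that run-length data already aggregates pixels, so per-row statistics fall out without expanding any run: a row's projection profile value is the sum of its foreground run lengths, and the run-histogram is a frequency count over run lengths. A minimal Python sketch, assuming each compressed row is a list of (pixel_value, run_length) pairs with 1 for foreground; the paper's exact compressed layout may differ.

        from collections import Counter

        def features_from_rle(rle_rows):
            # rle_rows: one list of (pixel_value, run_length) pairs per
            # image row, pixel_value 1 = foreground (assumed layout).
            profile = []         # projection profile: foreground pixels per row
            fg_runs = Counter()  # run-histogram of foreground run lengths
            bg_runs = Counter()  # run-histogram of background run lengths
            for row in rle_rows:
                profile.append(sum(n for v, n in row if v == 1))
                for v, n in row:
                    (fg_runs if v == 1 else bg_runs)[n] += 1
            return profile, fg_runs, bg_runs

    Because no run is ever expanded, the cost is proportional to the number of runs rather than the number of pixels, which is where the reported savings in computing time come from.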