
    Screen Content Image Segmentation Using Sparse-Smooth Decomposition

    Sparse decomposition has been used extensively in applications including signal compression, denoising, and document analysis. In this paper, sparse decomposition is used for image segmentation. The proposed algorithm separates the background and foreground using a sparse-smooth decomposition technique such that the smooth and sparse components correspond to the background and foreground, respectively. The algorithm is tested on several test images from HEVC test sequences and is shown to outperform other methods, such as the hierarchical k-means clustering used in DjVu. The segmentation algorithm can also be used for text extraction, video compression and medical image segmentation. Comment: Asilomar Conference on Signals, Systems and Computers, IEEE, 2015 (to appear).
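    To make the idea concrete, here is a minimal Python sketch of a sparse-smooth split, assuming a low-order polynomial model for the smooth background and soft-thresholding for the sparse foreground. The function name, basis choice and parameters (degree, lam, n_iter) are illustrative and are not taken from the paper.

        import numpy as np

        def sparse_smooth_decompose(block, degree=2, lam=10.0, n_iter=20):
            """Illustrative split of a greyscale block into a smooth background
            (low-order 2D polynomial fit) and a sparse foreground
            (soft-thresholded residual)."""
            h, w = block.shape
            yy, xx = np.mgrid[0:h, 0:w]
            xs = xx.ravel() / max(w - 1, 1)
            ys = yy.ravel() / max(h - 1, 1)
            # Low-order polynomial basis models the smooth background.
            basis = np.stack([xs**i * ys**j
                              for i in range(degree + 1)
                              for j in range(degree + 1 - i)], axis=1)
            b = block.ravel().astype(float)
            sparse = np.zeros_like(b)
            for _ in range(n_iter):
                # Fit the smooth part to whatever the sparse part does not explain.
                coef, *_ = np.linalg.lstsq(basis, b - sparse, rcond=None)
                smooth = basis @ coef
                # Soft-threshold the residual: only large deviations stay in the foreground.
                r = b - smooth
                sparse = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)
            return smooth.reshape(h, w), sparse.reshape(h, w)

        # Toy example: dark "text" strokes on a light background.
        block = np.full((16, 16), 200.0)
        block[4:8, 4:12] = 30.0
        bg, fg = sparse_smooth_decompose(block)
        foreground_mask = np.abs(fg) > 0   # pixels with a non-zero sparse component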

    Parallel Rendering and Large Data Visualization

    We are living in the big data age: an ever-increasing amount of data is being produced through data acquisition and computer simulations. While large-scale analysis and simulations have received significant attention in cloud and high-performance computing, software to efficiently visualise large data sets is struggling to keep up. Visualization has proven to be an efficient tool for understanding data; in particular, visual analysis is a powerful way to gain intuitive insight into the spatial structure and relations of 3D data sets. Large-scale visualization setups are becoming ever more affordable, and high-resolution tiled display walls are within reach even for small institutions. Virtual reality has arrived in the consumer space, making it accessible to a large audience. This thesis addresses these developments by advancing the field of parallel rendering. We formalise the design of system software for large data visualization through parallel rendering, provide a reference implementation of a parallel rendering framework, introduce novel algorithms to accelerate the rendering of large amounts of data, and validate this research and development with new applications for large data visualization. Applications built using our framework enable domain scientists and large data engineers to better extract meaning from their data, making it feasible to explore more data and enabling the use of high-fidelity visualization installations to see more detail of the data. Comment: PhD thesis.

    Optimum Implementation of Compound Compression of a Computer Screen for Real-Time Transmission in Low Network Bandwidth Environments

    Remote working has become increasingly prevalent in recent times. A large part of remote working involves sharing computer screens between servers and clients. The image content presented when sharing computer screens consists of both natural camera-captured image data and computer-generated graphics and text. The attributes of natural camera-captured image data differ greatly from those of computer-generated image data. An image containing a mixture of both is known as a compound image. The research presented in this thesis focuses on the challenge of constructing a compound compression strategy that applies the ‘best fit’ compression algorithm to the mixed content found in a compound image. The research also involves analysis and classification of the types of data a given compound image may contain. While researching optimal types of compression, consideration is given to the computational overhead of a given algorithm, because the research is aimed at real-time systems such as cloud computing services, where latency has a detrimental impact on the end-user experience. Previous and current state-of-the-art video codecs have been studied, along with many recent publications from academia, to design and implement a novel low-complexity compound compression algorithm suitable for real-time transmission. The compound compression algorithm utilises a mixture of lossless and lossy compression algorithms, with parameters that can be used to control its performance. An objective image quality assessment is needed to determine whether the proposed algorithm produces an acceptable image quality after processing. Traditional metrics such as Peak Signal-to-Noise Ratio (PSNR) are used alongside the more modern Structural Similarity Index (SSIM) to measure the quality of the decompressed image. Finally, the compression strategy is tested on a set of generated compound images. Using open-source software, the same images are compressed with previous and current state-of-the-art video codecs to compare the three main metrics: compression ratio, computational complexity and objective image quality.
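    As a small illustration of the objective quality assessment mentioned above, the sketch below computes PSNR with NumPy and SSIM with scikit-image on a pair of hypothetical images; the random test data and helper names are placeholders and do not come from the thesis.

        import numpy as np
        from skimage.metrics import structural_similarity as ssim  # requires scikit-image

        def psnr(original, decoded, peak=255.0):
            """Peak Signal-to-Noise Ratio in dB between the original and the
            decompressed image (arrays of the same shape)."""
            mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        # Hypothetical before/after pair: the "decoded" image adds small coding noise.
        original = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        noise = np.random.randint(-3, 4, original.shape)
        decoded = np.clip(original.astype(int) + noise, 0, 255).astype(np.uint8)

        print("PSNR:", psnr(original, decoded))
        print("SSIM:", ssim(original, decoded, data_range=255))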

    Block-based Classification Method for Computer Screen Image Compression

    In this paper, a high-accuracy, reduced-processing-time block-based classification method for computer screen images is presented. This method classifies blocks into five types: smooth, sparse, fuzzy, text and picture blocks. In a computer screen compression application, the choice of block compression algorithm is made based on these block types. The classification method presented has four novel features. The first novel feature is the combination of Discrete Wavelet Transform (DWT) and colour-counting classification methods. In previous publications, each of these methods has been used only in isolation for computer image compression, but this paper shows that combining them yields more accurate results overall. The second novel feature is the classification of the image blocks into five block types. The addition of the fuzzy and sparse block types makes it possible to use optimum compression methods for these blocks. The third novel feature is block type prediction. The prediction algorithm is applied to a current block when the blocks above and to the left of the current block are text blocks or smooth blocks. This new algorithm is designed to exploit the correlation of adjacent blocks and reduces the overall classification processing time by 33%. The fourth novel feature is down-sampling of the pixels in each block, which reduces the classification processing time by 62%. When both block prediction and down-sampling are enabled, the classification time is reduced by 74% overall. The overall classification accuracy is 98.46%.
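    A minimal Python sketch of how colour counting and DWT energy could be combined to label a greyscale block follows, assuming a one-level Haar transform and illustrative thresholds; the actual decision rules and tuned thresholds in the paper may differ.

        import numpy as np

        def haar_highband_energy(block):
            """One-level Haar DWT: fraction of energy in the high-frequency
            sub-bands (large for text/picture blocks, small for smooth ones)."""
            b = block.astype(float)
            h, w = b.shape[0] // 2 * 2, b.shape[1] // 2 * 2
            b = b[:h, :w]
            lo_r = (b[:, 0::2] + b[:, 1::2]) / 2.0     # average / difference along rows,
            hi_r = (b[:, 0::2] - b[:, 1::2]) / 2.0     # then along columns
            ll = (lo_r[0::2] + lo_r[1::2]) / 2.0
            lh = (lo_r[0::2] - lo_r[1::2]) / 2.0
            hl = (hi_r[0::2] + hi_r[1::2]) / 2.0
            hh = (hi_r[0::2] - hi_r[1::2]) / 2.0
            total = (ll**2).sum() + (lh**2).sum() + (hl**2).sum() + (hh**2).sum()
            high = (lh**2).sum() + (hl**2).sum() + (hh**2).sum()
            return high / total if total > 0 else 0.0

        def classify_block(block, colour_thresholds=(1, 8, 64), energy_threshold=0.02):
            """Illustrative combination of colour counting and DWT energy;
            thresholds and rules are placeholders, not the paper's values."""
            colours = len(np.unique(block))             # colour-counting step
            if colours <= colour_thresholds[0]:
                return "smooth"
            if colours <= colour_thresholds[1]:
                return "sparse"
            if haar_highband_energy(block) > energy_threshold:
                # Many colours plus strong high-frequency content.
                return "text" if colours <= colour_thresholds[2] else "picture"
            return "fuzzy"

        flat = np.full((16, 16), 128)
        print(classify_block(flat))                     # single colour -> 'smooth'
        noisy = np.random.randint(0, 256, (16, 16))
        print(classify_block(noisy))                    # many colours, high DWT energy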

    A Semantic-Based Middleware for Multimedia Collaborative Applications

    The growth of the Internet and the increasing performance of desktop computers have enabled large-scale distributed multimedia applications. They are expected to grow in demand and services, and their traffic volume will dominate. Real-time delivery, scalability and heterogeneity are some of the requirements of these applications that have motivated a revision of the traditional Internet services, operating system structures, and the software systems supporting application development. This work proposes a Java-based lightweight middleware for the development of large-scale multimedia applications. The middleware offers four services for multimedia applications. First, it provides two scalable lightweight protocols for floor control. One follows a centralized model that integrates easily with centralized resources such as a shared tool, and the other is a distributed protocol targeted at distributed resources such as audio. Scalability is achieved by periodically multicasting a heartbeat that conveys state information used by clients to request the resource via temporary TCP connections. Second, it supports intra- and inter-stream synchronization algorithms and policies. We introduce the concept of the virtual observer, which perceives the session as if it were in the same room as a sender. We avoid the need for globally synchronized clocks by introducing the concept of a user's multimedia presence, which defines a new manner of combining streams coming from multiple sites. It includes a novel algorithm for estimation and removal of clock skew. In addition, it supports event-driven asynchronous message reception, quality-of-service measures, and traffic rate control. Finally, the middleware provides support for data sharing via a resilient and scalable protocol for transmission of images that can dynamically change in content and size. The effectiveness of the middleware components is shown with the implementation of Odust, a prototypical sharing-tool application built on top of the middleware.
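    The clock-skew step can be illustrated with a small sketch (in Python rather than the middleware's Java, for brevity) that fits receiver timestamps against sender timestamps by least squares. This is a common textbook approach shown only as an illustration; it is not claimed to be the algorithm actually used in the middleware.

        import numpy as np

        def estimate_skew(send_ts, recv_ts):
            """Least-squares fit of receiver time against sender time.
            The slope minus 1 is the relative clock skew."""
            send_ts = np.asarray(send_ts, dtype=float)
            recv_ts = np.asarray(recv_ts, dtype=float)
            slope, intercept = np.polyfit(send_ts, recv_ts, 1)
            return slope - 1.0, intercept

        def remove_skew(send_ts, recv_ts):
            """Map receiver timestamps back onto the sender's clock rate so
            that inter-stream synchronisation can compare them directly."""
            skew, intercept = estimate_skew(send_ts, recv_ts)
            recv_ts = np.asarray(recv_ts, dtype=float)
            return (recv_ts - intercept) / (1.0 + skew)

        # Example: receiver clock runs 0.01% fast, offset by 5 s, with small jitter.
        send = np.arange(0.0, 10.0, 0.1)
        recv = send * 1.0001 + 5.0 + np.random.normal(0, 0.0005, send.shape)
        skew, _ = estimate_skew(send, recv)             # approximately 1e-4
        aligned = remove_skew(send, recv)               # close to the sender timeline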

    Constructive 3D Visualization Techniques on Mobile Platform - Empirical Analysis

    3D visualization on mobile devices follows two broad approaches: local and remote. Technological advances in mobile devices make it possible to handle some complex data locally and visualize it on the device, but managing real-world entities locally on mobile devices remains challenging. Remote visualization, in which the data comes from a server, therefore plays a vital role in 3D visualization on the mobile platform. The remote approach comprises various techniques, and a critical analysis of these techniques is the focus of this paper, with particular attention to network aspects.

    Air Force research in optical processing

    Optical and hybrid optical-electronic processing, especially in the application area of image processing, is emphasized. Real-time pattern recognition processors for airborne missions such as target recognition, tracking, and terminal guidance are studied.