
    Adapting Stream Processing Framework for Video Analysis

    Stream processing (SP) became relevant mainly due to the inexpensive and hence ubiquitous deployment of sensors in many domains (e.g., environmental monitoring, battlefield monitoring). Other continuous data generators (surveillance, traffic data) have also prompted the processing and analysis of these streams for applications such as traffic congestion/accident detection and personalized marketing. Image processing has been researched for several decades. Recently, there has been an emphasis on video stream analysis for situation monitoring, owing to the ubiquitous deployment of video cameras and unmanned aerial vehicles for security and other applications. This paper elaborates on the research and development issues that need to be addressed to extend the traditional stream processing framework for video analysis, especially for situation awareness. This entails extensions to the data model, to the operators and language for expressing complex situations, and to QoS (quality of service) specifications and the algorithms needed to satisfy them. Specifically, the paper demonstrates the inadequacy of current data representations (e.g., relation and arrable) and querying capabilities, and from this infers long-term research and development issues.
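    To make the idea of a windowed continuous query over video-derived events concrete, here is a minimal illustrative sketch, not the framework proposed in the paper: frame-level detections are treated as a tuple stream, and a sliding frame-count window flags a hypothetical "congestion" situation. The Detection record, window size, and threshold are assumptions for illustration only.

    # Minimal sketch of a sliding-window continuous query over video-derived
    # detections (illustrative only; not the paper's data model or operators).
    from collections import deque, namedtuple

    # Hypothetical tuple produced by an upstream object detector: one record
    # per detected object in a frame.
    Detection = namedtuple("Detection", ["frame_id", "camera_id", "label"])

    def congestion_monitor(stream, window_frames=30, vehicle_threshold=50):
        """Yield (frame_id, count) whenever the number of 'vehicle' detections
        seen in the last `window_frames` frames exceeds the threshold."""
        window = deque()  # detections currently inside the sliding window
        for det in stream:
            window.append(det)
            # Evict detections that fall outside the frame-based window.
            while window and window[0].frame_id <= det.frame_id - window_frames:
                window.popleft()
            count = sum(1 for d in window if d.label == "vehicle")
            if count > vehicle_threshold:
                yield det.frame_id, count

    # Example usage with a synthetic stream: 60 frames, 3 vehicles per frame.
    if __name__ == "__main__":
        synthetic = (Detection(f, "cam-1", "vehicle")
                     for f in range(60) for _ in range(3))
        for frame_id, count in congestion_monitor(synthetic, 30, 50):
            print(f"possible congestion at frame {frame_id}: {count} vehicles")
            break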

    Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding

    Hyperspectral images present some specific characteristics that should be exploited by an efficient compression system. In compression, wavelets have shown good adaptability to a wide range of data while being of reasonable complexity. Some wavelet-based compression algorithms have been successfully used for hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance while being preferable in terms of complexity. This decomposition significantly improves on the classical isotropic decomposition. One of its most useful properties is that it allows the use of zerotree algorithms. Various tree structures, creating relationships between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six areas with different statistical properties.
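    As a rough illustration of what an anisotropic 3-D decomposition looks like in practice, the sketch below decomposes the spectral axis to a deeper level than the two spatial axes. It assumes the PyWavelets package, an arbitrary db2 wavelet, arbitrary level counts, and a random cube; it is not the optimized decomposition derived in the paper.

    # Sketch of an anisotropic 3-D wavelet decomposition of a hyperspectral cube:
    # more decomposition levels along the spectral axis than along the spatial
    # axes. Illustrative only; assumes NumPy and PyWavelets (pywt).
    import numpy as np
    import pywt

    # Hypothetical cube: 64 spectral bands of 128 x 128 pixels.
    cube = np.random.rand(64, 128, 128).astype(np.float32)

    # 1-D multilevel transform along the spectral axis (axis 0).
    spectral_subbands = pywt.wavedec(cube, "db2", level=4, axis=0)

    # 2-D multilevel transform on each spectral subband, along the spatial axes.
    anisotropic_coeffs = [
        pywt.wavedec2(sb, "db2", level=2, axes=(1, 2))
        for sb in spectral_subbands
    ]

    # Report the size of the coarsest subband as a quick sanity check.
    coarsest = anisotropic_coeffs[0][0]
    print("coarsest subband shape (bands, rows, cols):", coarsest.shape)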

    Quality of experience driven control of interactive media stream parameters

    In recent years, cloud computing has led to many new kinds of services. One popular service is cloud gaming, which delivers the entire game experience to users remotely from a server; other applications are provided in a similar manner. In this paper, we focus on the option of rendering the application in the cloud and delivering its graphical output to the user as a video stream. In more general terms, an interactive media stream is set up over the network between the user's device and the cloud server. The main issue with this approach lies in the network, which currently gives few guarantees on quality of service in terms of parameters such as available bandwidth, latency, or packet loss. For interactive media streaming, however, the user is interested only in the perceived quality, regardless of the underlying network situation. In this paper, we present an adaptive control mechanism that optimizes the quality of experience for the use case of a racing game by trading off visual quality against frame rate as a function of the available bandwidth. Practical experiments verify that QoE-driven adaptation leads to an improved user experience compared to systems that take only network characteristics into account.
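    The trade-off can be pictured with a small controller sketch, assuming a fixed ladder of quality/frame-rate operating points, a simple per-frame bit estimate, and hypothetical QoE scores; this is not the controller evaluated in the paper. Given a bandwidth estimate, it picks the feasible operating point with the highest assumed QoE score.

    # Sketch of a QoE-driven parameter selector for an interactive media stream.
    # Operating points, bit-rate model, and QoE scores are illustrative
    # assumptions, not the model used in the paper.
    from dataclasses import dataclass

    @dataclass
    class OperatingPoint:
        quantizer: int        # lower = better visual quality
        frame_rate: int       # frames per second
        kbits_per_frame: float
        qoe_score: float      # hypothetical subjective score in [0, 5]

        @property
        def kbps(self) -> float:
            return self.frame_rate * self.kbits_per_frame

    # A small ladder of candidate settings (illustrative numbers).
    LADDER = [
        OperatingPoint(quantizer=22, frame_rate=60, kbits_per_frame=120, qoe_score=4.6),
        OperatingPoint(quantizer=22, frame_rate=30, kbits_per_frame=120, qoe_score=4.1),
        OperatingPoint(quantizer=30, frame_rate=60, kbits_per_frame=60,  qoe_score=3.9),
        OperatingPoint(quantizer=30, frame_rate=30, kbits_per_frame=60,  qoe_score=3.2),
        OperatingPoint(quantizer=38, frame_rate=30, kbits_per_frame=30,  qoe_score=2.4),
    ]

    def select_settings(available_kbps: float) -> OperatingPoint:
        """Return the feasible operating point with the highest QoE score."""
        feasible = [p for p in LADDER if p.kbps <= available_kbps]
        if not feasible:  # degrade gracefully to the cheapest point
            return min(LADDER, key=lambda p: p.kbps)
        return max(feasible, key=lambda p: p.qoe_score)

    if __name__ == "__main__":
        for bw in (8000, 4000, 1500, 500):
            p = select_settings(bw)
            print(f"{bw:>5} kbps -> QP {p.quantizer}, {p.frame_rate} fps "
                  f"(est. {p.kbps:.0f} kbps, QoE {p.qoe_score})")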

    Dynamic Adaptation on Non-Stationary Visual Domains

    Domain adaptation aims to learn models on a supervised source domain that perform well on an unsupervised target. Prior work has examined domain adaptation in the context of stationary domain shifts, i.e., static datasets. However, with large-scale or dynamic data sources, data from a defined domain is not usually available all at once. For instance, in a streaming data scenario, dataset statistics effectively become a function of time. We introduce a framework for adaptation over non-stationary distribution shifts that is applicable to large-scale and streaming data scenarios. The model is adapted sequentially over incoming unsupervised streaming data batches. This enables improvements over several batches without the need for any additional annotated data. To demonstrate the effectiveness of the proposed framework, we modify associative domain adaptation to work well on source and target data batches with unequal class distributions. We apply our method to several adaptation benchmark datasets for classification and show improved classifier accuracy not only for the currently adapted batch, but also when applied to future stream batches. Furthermore, we show the applicability of our associative learning modifications to semantic segmentation, where we achieve competitive results.
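    The sequential aspect can be sketched as follows. This is a simplified stand-in, not the associative domain adaptation used in the paper: a toy nearest-centroid classifier trained on labeled source data updates its class centroids from pseudo-labeled, unlabeled target batches as they arrive, so later batches benefit from earlier adaptation. The synthetic data, momentum value, and update rule are all illustrative assumptions.

    # Sketch of sequential adaptation over a stream of unlabeled target batches.
    # Toy nearest-centroid classifier with pseudo-label updates; a simplified
    # stand-in for the associative domain adaptation described in the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_batch(shift, n=200):
        """Two 2-D Gaussian classes whose means drift by `shift` (synthetic)."""
        x0 = rng.normal([0 + shift, 0], 0.5, size=(n // 2, 2))
        x1 = rng.normal([3 + shift, 3], 0.5, size=(n // 2, 2))
        y = np.array([0] * (n // 2) + [1] * (n // 2))
        return np.vstack([x0, x1]), y

    def predict(centroids, x):
        d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        return d.argmin(axis=1)

    # Fit centroids on a labeled source batch (no shift).
    xs, ys = make_batch(shift=0.0)
    centroids = np.stack([xs[ys == c].mean(axis=0) for c in (0, 1)])

    # Adapt sequentially on unlabeled target batches with growing shift.
    momentum = 0.7
    for t, shift in enumerate([0.5, 1.0, 1.5, 2.0], start=1):
        xt, yt_true = make_batch(shift)        # yt_true used only for reporting
        pseudo = predict(centroids, xt)        # pseudo-labels from current model
        for c in (0, 1):                       # moving-average centroid update
            if np.any(pseudo == c):
                centroids[c] = momentum * centroids[c] + \
                               (1 - momentum) * xt[pseudo == c].mean(axis=0)
        acc = (predict(centroids, xt) == yt_true).mean()
        print(f"batch {t}: shift={shift:.1f}, accuracy after adaptation={acc:.2f}")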

    Effect of oil palm empty fruit bunch (OPEFB) fibers on the compressive strength and water absorption of concrete

    The growing popularity of environmentally friendly, low-cost, and lightweight building materials in the construction industry has created a need to examine how these characteristics can be achieved while benefiting the environment and still meeting the required material standards. Recycling waste generated from industrial and agricultural activities into building materials is not only a viable solution to the pollution problem but also a way to produce an economical building design.

    Adaptive Temporal Compressive Sensing for Video

    This paper introduces the concept of adaptive temporal compressive sensing (CS) for video. We propose a CS algorithm that adapts the compression ratio based on the scene's temporal complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal adaptivity is realized by manipulating the integration time of the camera, opening the possibility of real-time implementation. The proposed algorithm is a generalized temporal CS approach that can be incorporated with a diverse set of existing hardware systems. Comment: IEEE International Conference on Image Processing (ICIP), 201
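    A toy numerical sketch of the adaptation idea follows; the complexity proxy, thresholds, synthetic scene, and masks are illustrative assumptions and this is not the authors' algorithm. It simulates coded temporal integration of a short clip, estimates temporal complexity from successive per-frame-normalized measurements, and lengthens or shortens the next integration window accordingly.

    # Toy sketch of adaptive temporal compressive sensing: the number of frames
    # integrated into one coded measurement is adapted to a rough estimate of
    # the scene's temporal complexity. Illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(1)
    H = W = 32

    def coded_measurement(frames):
        """Sum of frames, each modulated by a random binary mask (temporal CS)."""
        masks = rng.integers(0, 2, size=frames.shape)
        return (masks * frames).sum(axis=0)

    def synthetic_clip(n_frames, motion):
        """Bright 8x8 square moving `motion` pixels per frame on a dark background."""
        clip = np.zeros((n_frames, H, W))
        for t in range(n_frames):
            x = (5 + int(t * motion)) % (W - 8)
            clip[t, 12:20, x:x + 8] = 1.0
        return clip

    def next_window(prev_norm, curr_norm, window, lo=0.02, hi=0.05):
        """Grow the integration window for static scenes, shrink it for dynamic
        ones, based on the change between successive normalized measurements."""
        complexity = np.abs(curr_norm - prev_norm).mean()
        if complexity < lo:
            return min(window * 2, 32)
        if complexity > hi:
            return max(window // 2, 2)
        return window

    window, prev_norm = 8, None
    for motion in (0.1, 0.1, 2.0, 2.0):            # slow scene, then fast scene
        meas = coded_measurement(synthetic_clip(window, motion))
        curr_norm = meas / window                  # normalize by integration time
        if prev_norm is not None:
            window = next_window(prev_norm, curr_norm, window)
        prev_norm = curr_norm
        print(f"motion={motion}: next integration window = {window} frames")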