
    Deblurring by Solving a TVp

    Image deblurring is formulated as an unconstrained minimization problem whose penalty function is the sum of an error term and TVp-regularizers with 0 < p < 1. Although the TVp-regularizer is a powerful tool that strongly promotes sparsity of the image gradients, it is neither convex nor smooth, which makes the resulting optimization problem harder to solve. To solve this minimization problem efficiently, it is first reformulated as an equivalent constrained minimization problem by introducing new variables and constraints. The split Bregman method then splits the constrained problem into subproblems, and an efficient method is applied to each subproblem to guarantee a closed-form solution. In simulated experiments, the proposed algorithm and several state-of-the-art algorithms are applied to restore three types of blurred noisy images. The restored results show that the proposed algorithm is effective for image deblurring and outperforms the other algorithms in the experiments.
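
    The subproblem tied to the nonconvex TVp term is often handled elementwise by a generalized shrinkage operator. A minimal sketch of one such operator follows; the paper's exact closed-form update is not reproduced here, and the magnitude-dependent threshold below is an assumption:

```python
import numpy as np

def p_shrink(x, lam, p):
    """Generalized shrinkage: approximate minimizer of
    lam*|y|**p + 0.5*(y - x)**2 for 0 < p < 1, applied elementwise.
    Uses a soft threshold whose level lam*|x|**(p-1) grows as |x| -> 0,
    so small gradients are zeroed aggressively (sparsity promotion)."""
    mag = np.abs(x)
    thr = lam * np.power(np.maximum(mag, 1e-12), p - 1.0)
    return np.sign(x) * np.maximum(mag - thr, 0.0)

x = np.array([-2.0, -0.1, 0.0, 0.1, 2.0])
y = p_shrink(x, lam=0.5, p=0.5)  # small entries vanish, large ones shrink
```

Note how, unlike plain soft thresholding (p = 1), the effective threshold here depends on the magnitude of the input, which mimics the stronger sparsity of 0 < p < 1.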

    A Natural Image Pointillism with Controlled Ellipse Dots

    This paper presents an image-based artistic rendering algorithm for the automatic Pointillism style. First, ellipse dot locations are randomly generated from a source image; then dot orientations are precalculated with the help of a direction map, and a saliency map of the source image determines the long and short radii of each ellipse dot. Finally, rendering proceeds layer by layer, from large dots to small dots, so as to preserve the detailed parts of the image. Although only the ellipse dot shape is adopted, the final Pointillism style performs well because of the variable characteristics of the dots.
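
    The per-dot parameter pipeline (random location, orientation from a direction map, radii from saliency) can be sketched as follows; the stand-in image, the gradient-based saliency map, and the radius scaling rule are all illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64
img = rng.random((h, w))          # stand-in source image

# direction map from image gradients (dot orientation)
gy, gx = np.gradient(img)
direction = np.arctan2(gy, gx)

# stand-in saliency map: gradient magnitude, normalized to [0, 1]
sal = np.hypot(gx, gy)
sal = sal / sal.max()

def sample_dots(n, r_long, r_short):
    """Sample n ellipse dots: random position, orientation from the
    direction map, radii shrunk in salient (detailed) regions."""
    ys = rng.integers(0, h, n)
    xs = rng.integers(0, w, n)
    s = sal[ys, xs]
    a = r_long * (1.0 - 0.5 * s)   # long radius, halved at peak saliency
    b = r_short * (1.0 - 0.5 * s)  # short radius
    return xs, ys, direction[ys, xs], a, b

# layer-by-layer rendering order: large dots first, then small
layers = [sample_dots(200, 8.0, 4.0), sample_dots(800, 3.0, 1.5)]
```

Rendering the later (smaller) layers on top of the earlier ones is what preserves fine detail, since small dots overwrite the coarse base layer only where they land.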

    A Simple and Robust Gray Image Encryption Scheme Using Chaotic Logistic Map and Artificial Neural Network

    A robust gray image encryption scheme using a chaotic logistic map and an artificial neural network (ANN) is introduced. In the proposed method, an external secret key is used to derive the initial conditions for the logistic chaotic maps, which are employed to generate the weight and bias matrices of a multilayer perceptron (MLP). During learning with the backpropagation algorithm, the ANN determines the weight matrix of the connections. The plain image is divided into four subimages, which are used in the first diffusion stage; these subimages are then divided into square subimage blocks. In the next stage, different initial conditions are employed to generate a key stream used for permutation and diffusion of the subimage blocks. Security analyses such as entropy analysis, statistical analysis, and key sensitivity analysis demonstrate that the key space of the proposed algorithm is large enough to make brute-force attacks infeasible. Computational validation with several gray images, supported by detailed numerical analysis, confirms the high security of the proposed encryption scheme.
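
    The key-stream generation step can be sketched with the classic logistic map; the control parameter, transient length, and byte quantization below are illustrative assumptions, not the paper's exact construction:

```python
def logistic_stream(x0, r, n, skip=100):
    """Iterate the logistic map x <- r*x*(1-x) from a key-derived
    seed x0, discard the transient, and quantize each value to a
    byte, giving a key stream for XOR diffusion."""
    x = x0
    for _ in range(skip):          # discard transient iterations
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_diffuse(block, key):
    """XOR a block of pixel bytes with the key stream (self-inverse)."""
    return [p ^ k for p, k in zip(block, key)]

key = logistic_stream(x0=0.7, r=3.99, n=8)
plain = [10, 20, 30, 40, 50, 60, 70, 80]
cipher = xor_diffuse(plain, key)
restored = xor_diffuse(cipher, key)    # same operation decrypts
```

Because XOR is its own inverse, applying the same key stream twice recovers the plaintext, which is why the key-stream seed must be derivable from the secret key at the receiver.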

    Reliability Model Construction for Complex System Based on Common Cause Failure Network

    A new method for constructing system reliability models, based on networks and dependent (common cause) failures, is proposed in this paper. Taking the component units as nodes and the interaction relationships between them as directed edges, a directional network reliability model with specific topological characteristics is constructed. The model captures the complex topology, the interaction mechanisms, and the failure-propagation mechanism between the mechanical and electrical parts of an integrated system. In contrast to traditional approaches, dependent failures are taken into account in this construction. Application to fault data from the bogie system of a high-speed train shows that the proposed method yields a network reliability model that accounts for dependent failures and produces more accurate results, especially for complex mechatronic systems.
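
    The node-and-edge construction can be sketched with a toy directed failure network; the component names and propagation probabilities below are purely hypothetical placeholders:

```python
# Components as nodes; directed edges carry a failure-propagation
# probability (all values illustrative, not fitted to real fault data).
edges = {
    "axle_box": [("frame", 0.6)],
    "frame":    [("damper", 0.4), ("bolster", 0.3)],
    "damper":   [],
    "bolster":  [],
}

def propagate(start, threshold=0.1):
    """Collect components reachable from a failed unit along paths
    whose cumulative propagation probability stays above the
    threshold, modeling common-cause failure spread."""
    affected = {}
    stack = [(start, 1.0)]
    while stack:
        node, p = stack.pop()
        for nxt, w in edges.get(node, []):
            q = p * w
            if q > threshold and q > affected.get(nxt, 0.0):
                affected[nxt] = q
                stack.append((nxt, q))
    return affected

spread = propagate("axle_box")   # downstream components and path products
```

Multiplying edge probabilities along a path is the simplest independence assumption; a full model would replace it with the dependent-failure statistics the abstract refers to.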

    Object Tracking with Adaptive Multicue Incremental Visual Tracker

    Subspace learning based methods such as the Incremental Visual Tracker (IVT) have been shown to be quite effective for the visual tracking problem. However, they may fail to follow the target when it undergoes drastic pose or illumination changes. In this work, we present a novel tracker that enhances the IVT algorithm with a multicue adaptive appearance model. First, we integrate the cues in both feature space and geometric space. Second, the integration depends directly on the dynamically changing reliabilities of the visual cues. These two aspects allow the tracker to adapt to changes in context and to improve tracking accuracy by resolving ambiguities. Experimental results demonstrate that subspace-based tracking is substantially improved by exploiting multiple cues through the proposed algorithm.
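
    Reliability-weighted cue fusion can be sketched as follows; the candidate scores, the agreement-driven update rule, and the learning rate are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def fuse_cues(cue_scores, reliabilities):
    """Combine per-cue candidate scores with weights proportional
    to the dynamically estimated cue reliabilities."""
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.asarray(cue_scores, dtype=float), axes=1)

def update_reliability(old, agreement, rate=0.2):
    """Leaky update: a cue that agrees with the fused result gains
    reliability, a disagreeing one loses it."""
    return (1.0 - rate) * np.asarray(old) + rate * np.asarray(agreement)

# two cues (e.g. color and edge) scoring three candidate locations
scores = [[0.9, 0.2, 0.1],
          [0.3, 0.8, 0.2]]
rel = [0.5, 0.5]
fused = fuse_cues(scores, rel)                    # weighted candidate scores
new_rel = update_reliability(rel, agreement=[1.0, 0.0])
```

The point of the update rule is that a cue corrupted by, say, an illumination change loses influence over the next frames instead of dragging the fused estimate off the target.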

    Video Pulses: User-Based Modeling of Interesting Video Segments

    We present a user-based method that detects regions of interest within a video in order to provide video skims and video summaries. Previous research in video retrieval has focused on content-based techniques, such as pattern recognition algorithms that attempt to understand the low-level features of a video. We propose a pulse modeling method, which makes sense of a web video by analyzing users' Replay interactions with the video player. In particular, we model the users' information-seeking behavior as a time series and the semantic regions as discrete pulses of fixed width. We then calculate the correlation coefficient between the pulses dynamically detected at the local maxima of the user-activity signal and the pulse of reference. We found that users' Replay activity matches the important segments of information-rich and visually complex videos, such as lectures, how-to videos, and documentaries, significantly well. The proposed signal processing of user activity is complementary to previous work in content-based video retrieval and provides an additional user-based dimension for modeling the semantics of a social video on the web.
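
    The pulse-matching step can be sketched as follows; the activity signal and pulse width are stand-ins, and the reference is a rectangular pulse inside a slightly wider window so that the correlation coefficient is well defined:

```python
import numpy as np

# Replay activity per second (stand-in user-activity signal)
activity = np.array([0, 1, 0, 2, 6, 7, 6, 2, 1, 0, 0, 1, 0, 0], dtype=float)

ref = np.array([0.0, 1.0, 1.0, 1.0, 0.0])   # width-3 pulse of reference
half = len(ref) // 2

def pulse_correlation(signal, center):
    """Correlation coefficient between the signal around a local
    maximum and the reference pulse."""
    seg = signal[center - half: center + half + 1]
    if seg.std() == 0:                        # flat segment: no pulse
        return 0.0
    return float(np.corrcoef(seg, ref)[0, 1])

center = int(np.argmax(activity))             # a local maximum of activity
score = pulse_correlation(activity, center)   # high score -> likely region of interest
```

A segment whose Replay activity rises and falls like the reference pulse scores near 1, flagging it as a candidate for the video skim.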

    Text Extraction from Historical Document Images by the Combination of Several Thresholding Techniques

    This paper presents a new technique for the binarization of historical document images, whose deteriorations and damage make automatic processing difficult at several levels. The proposed method is based on hybrid thresholding, combining the advantages of global and local methods, and on a mixture of several binarization techniques. It comprises two stages. In the first stage, global thresholding is applied to the entire image and two thresholds are determined, from which most of the image pixels are classified as foreground or background. In the second stage, the remaining pixels are assigned to the foreground or background class based on local analysis: several local thresholding methods are combined, and the final binary value of each remaining pixel is chosen as the most probable one. The proposed technique has been tested on a large collection of standard and synthetic documents, compared with well-known methods using standard measures, and shown to be more effective.
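
    The two-stage scheme can be sketched as follows; for brevity, a single local-mean decision stands in for the vote among several local thresholding methods, and the thresholds are placeholders:

```python
import numpy as np

def hybrid_binarize(img, t_low, t_high, win=1):
    """Stage 1 (global): pixels darker than t_low -> foreground (0),
    brighter than t_high -> background (1).  Stage 2 (local): each
    remaining pixel is decided against the mean of its neighborhood
    (standing in for the combination of several local methods)."""
    out = np.full(img.shape, -1, dtype=int)
    out[img <= t_low] = 0                    # confident foreground
    out[img >= t_high] = 1                   # confident background
    pad = np.pad(img.astype(float), win, mode="edge")
    for y, x in zip(*np.where(out == -1)):   # ambiguous pixels only
        patch = pad[y:y + 2 * win + 1, x:x + 2 * win + 1]
        out[y, x] = 1 if img[y, x] > patch.mean() else 0
    return out

img = np.array([[ 10,  20, 200],
                [ 30, 120, 210],
                [220, 130, 240]])
b = hybrid_binarize(img, t_low=50, t_high=180)
```

Only the pixels between the two global thresholds pay the cost of local analysis, which is what makes the hybrid scheme cheaper than a purely local method.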

    Cross-Layer Framework for Multiuser Real Time H.264/AVC Video Encoding and Transmission over Block Fading MIMO Channels Using Outage Probability

    We present a framework for cross-layer optimized, real-time multiuser video encoding using a single-layer H.264/AVC encoder and transmission over MIMO wireless channels. In the proposed cross-layer adaptation, each user's channel is characterized by the probability density function of its channel mutual information, and the performance of the H.264/AVC encoder is modeled by a rate-distortion model that accounts for channel errors. These models drive the allocation of the available slots in a TDMA MIMO communication system with capacity-achieving channel codes. The framework adapts to the statistics of the wireless channel and to the available system resources, and exploits the multiuser diversity of the transmitted video sequences. We show the effectiveness of the proposed framework for video transmission over Rayleigh MIMO block fading channels when channel distribution information is available at the transmitter.
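
    The slot-allocation idea can be sketched with a greedy scheme over a parametric rate-distortion model D(R) = d0 + theta / (R + r0); this model form is a common choice in the literature, and every parameter value below is a placeholder, not the paper's fitted value:

```python
# Per-user R-D model parameters (illustrative placeholders)
users = {
    "u1": {"theta": 3000.0, "r0": 10.0, "d0": 2.0},   # high-motion sequence
    "u2": {"theta": 1200.0, "r0": 10.0, "d0": 2.0},   # low-motion sequence
}
rate_per_slot = 50.0     # rate supported by one slot at the outage target

def distortion(u, slots):
    """Expected distortion of user u given an allocation of slots."""
    p = users[u]
    return p["d0"] + p["theta"] / (slots * rate_per_slot + p["r0"])

def allocate(total_slots):
    """Greedy TDMA slot allocation: each slot goes to the user whose
    distortion drops the most, exploiting multiuser diversity."""
    alloc = {u: 0 for u in users}
    for _ in range(total_slots):
        best = max(users,
                   key=lambda u: distortion(u, alloc[u])
                               - distortion(u, alloc[u] + 1))
        alloc[best] += 1
    return alloc

alloc = allocate(4)   # slots drift toward the user with steeper R-D gain
```

The diminishing marginal gain of extra slots (the 1/(R + r0) shape) is what keeps the greedy allocation from starving low-complexity users entirely.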

    A New One-Dimensional Chaotic Map and Its Use in a Novel Real-Time Image Encryption Scheme

    We present a new one-dimensional chaotic map suitable for real-time image encryption. Its theoretical analysis, performed with specific tools from chaos theory, shows that the proposed map has a chaotic regime and proves its ergodicity over a large range of values of the control parameter. In addition, to support the good cryptographic properties of the proposed map, we tested the randomness of the values generated by its orbit using the NIST statistical test suite. Moreover, we present a new image encryption scheme with a classic bimodular architecture, in which confusion and diffusion are performed by two maps of the proposed type. The very good cryptographic performance of the proposed scheme is demonstrated by an extensive analysis carried out according to the latest methodology in this field.
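
    The bimodular confusion-diffusion architecture can be sketched as follows, with the classic logistic map standing in for the proposed map (whose exact form is not given in this abstract); seeds and parameters are illustrative:

```python
def orbit(x0, r, n, skip=64):
    """Orbit of the logistic map x <- r*x*(1-x), used here as a
    stand-in for the paper's proposed 1-D chaotic map."""
    x = x0
    for _ in range(skip):
        x = r * x * (1.0 - x)
    vals = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        vals.append(x)
    return vals

def encrypt(pixels, k1=0.31, k2=0.62, r=3.99):
    n = len(pixels)
    o1 = orbit(k1, r, n)
    # confusion: permute positions by the ranking of one chaotic orbit
    perm = sorted(range(n), key=lambda i: o1[i])
    shuffled = [pixels[i] for i in perm]
    # diffusion: XOR with a byte stream from a second orbit
    ks = [int(v * 256) % 256 for v in orbit(k2, r, n)]
    return [p ^ k for p, k in zip(shuffled, ks)]

def decrypt(cipher, k1=0.31, k2=0.62, r=3.99):
    n = len(cipher)
    perm = sorted(range(n), key=lambda i: orbit(k1, r, n)[i])
    ks = [int(v * 256) % 256 for v in orbit(k2, r, n)]
    shuffled = [c ^ k for c, k in zip(cipher, ks)]
    out = [0] * n
    for j, i in enumerate(perm):   # invert the permutation
        out[i] = shuffled[j]
    return out

plain = [7, 77, 200, 13, 42, 99]
cipher = encrypt(plain)
restored = decrypt(cipher)
```

Keeping confusion and diffusion in two separate modules, each keyed by its own chaotic seed, is the "bimodular" structure the abstract refers to.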

    No-Reference Video Quality Assessment Model for Distortion Caused by Packet Loss in the Real-Time Mobile Video Services

    Packet loss causes severe errors due to the corruption of related video data. Because most video streams employ predictive coding structures, transmission errors in one frame not only cause decoding failure of that frame at the receiver, but also propagate to subsequent frames along the motion prediction path, bringing a significant degradation of end-to-end video quality. To quantify the effects of packet loss on video quality, a no-reference objective quality assessment model is presented in this paper. Since the degradation of video quality depends strongly on the video content, temporal complexity is estimated to capture the varying characteristics of the content, using the macroblocks with different motion activities in each frame. The quality of each frame affected by reference frame loss, by error propagation, or by both is then evaluated separately. Finally, the overall video quality is obtained with a two-level temporal pooling scheme. Extensive experimental results show that the video quality estimated by the proposed method matches the subjective quality well.
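
    The two-level temporal pooling step can be sketched as follows; the group size and the rank-based weighting are illustrative assumptions (the paper's exact pooling weights are not reproduced here):

```python
def pool_quality(frame_scores, group=4):
    """Two-level temporal pooling: first average frame scores within
    short groups, then combine the group means with heavier weights
    on the lower-quality groups (viewers remember severe
    degradations more than average quality)."""
    groups = [frame_scores[i:i + group]
              for i in range(0, len(frame_scores), group)]
    means = sorted(sum(g) / len(g) for g in groups)   # worst first
    weights = [1.0 / (k + 1) for k in range(len(means))]
    return sum(w * m for w, m in zip(weights, means)) / sum(weights)

scores = [4.5, 4.4, 4.6, 4.5,    # clean segment
          2.0, 1.8, 2.2, 2.0,    # loss-affected segment
          4.3, 4.4, 4.2, 4.3]    # recovered segment
q = pool_quality(scores)         # pulled below the plain mean by the bad segment
```

Weighting the worst groups more heavily pulls the pooled score below the plain average, matching the observation that a short burst of packet-loss artifacts dominates the subjective impression.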