
    Symbiotic Organisms Search Optimization to Predict Optimal Thread Count for Multi-threaded Applications

    Multicore systems have emerged as a cost-effective option for the growing demands for high-performance, low-energy computing. Thread management has long been a source of concern for developers, as its overheads reduce the overall throughput of multicore processor systems. One of the most complex problems with multicore processors is determining the optimal number of threads for executing multithreaded programs. To address this issue, this paper proposes a novel solution based on a modified symbiotic organism search (MSOS) algorithm, a bio-inspired algorithm used for optimization in various engineering domains. The technique uses the mutualism, commensalism and parasitism behaviours seen in organisms to search for optimal solutions in the available search space. The algorithm is simulated on an NVIDIA DGX server with Intel Xeon E5-2698 v4 processors using the PARSEC 3.0 benchmark suite. The results show that keeping the thread count equal to the number of processors available in the system is not necessarily the best strategy for maximum speedup when running multithreaded programs. It was also observed that when programs are run with the optimal thread count, the execution time is substantially decreased, resulting in energy savings due to the use of fewer processors than are available in the system.
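    The abstract names the three phases of the search but not their update rules. The sketch below is a minimal Python illustration of the commonly published SOS formulation, applied to a one-dimensional thread-count search; `measure_runtime` is a hypothetical stand-in for timing a benchmark at a given thread count, and the toy cost function is only there so the sketch runs.

```python
# Hedged sketch of a Symbiotic Organisms Search loop for choosing a thread
# count that minimizes execution time. `measure_runtime` is a hypothetical
# placeholder for actually timing the multithreaded program.
import random

def measure_runtime(threads: int) -> float:
    # Placeholder: replace with a real timing run of the target program.
    return (threads - 6) ** 2 + 1.0  # toy cost with a minimum at 6 threads

def sos_thread_count(max_threads=32, pop_size=10, iterations=50, seed=0):
    rng = random.Random(seed)
    clamp = lambda t: max(1, min(max_threads, int(round(t))))
    pop = [clamp(rng.uniform(1, max_threads)) for _ in range(pop_size)]
    cost = [measure_runtime(t) for t in pop]

    for _ in range(iterations):
        best = pop[min(range(pop_size), key=lambda k: cost[k])]
        for i in range(pop_size):
            j = rng.choice([k for k in range(pop_size) if k != i])

            # Mutualism: organisms i and j both move toward the best organism.
            mutual = (pop[i] + pop[j]) / 2.0
            bf1, bf2 = rng.choice([1, 2]), rng.choice([1, 2])
            cand_i = clamp(pop[i] + rng.random() * (best - mutual * bf1))
            cand_j = clamp(pop[j] + rng.random() * (best - mutual * bf2))
            for idx, cand in ((i, cand_i), (j, cand_j)):
                c = measure_runtime(cand)
                if c < cost[idx]:
                    pop[idx], cost[idx] = cand, c

            # Commensalism: i benefits from j without affecting it.
            cand = clamp(pop[i] + rng.uniform(-1, 1) * (best - pop[j]))
            c = measure_runtime(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c

            # Parasitism: a mutated organism tries to displace j.
            parasite = clamp(rng.uniform(1, max_threads))
            c = measure_runtime(parasite)
            if c < cost[j]:
                pop[j], cost[j] = parasite, c

    k = min(range(pop_size), key=lambda idx: cost[idx])
    return pop[k], cost[k]

print(sos_thread_count())  # (best thread count found, its measured cost)
```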

    Depth map compression via 3D region-based representation

    In 3D video, view synthesis is used to create new virtual views between encoded camera views. Errors in the coding of the depth maps introduce geometry inconsistencies in synthesized views. In this paper, a new 3D plane representation of the scene is presented which improves the performance of current standard video codecs in the view synthesis domain. Two image segmentation algorithms are proposed for generating a color and depth segmentation. Using both partitions, depth maps are segmented into regions without sharp discontinuities, without having to explicitly signal all depth edges. The resulting regions are represented using a planar model in the 3D world scene. This 3D representation allows an efficient encoding while preserving the 3D characteristics of the scene. The 3D planes also open up the possibility of coding multiview images with a unique representation.
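    The abstract does not spell out how a region's plane is estimated; the snippet below is a minimal sketch, assuming a simple least-squares fit of a plane depth(x, y) = a·x + b·y + c over the pixels of one segmented region. The paper's actual estimator and parameterization may differ.

```python
# Sketch: represent one segmented depth region with a planar model by
# least-squares fitting depth(x, y) ~ a*x + b*y + c (an assumed estimator).
import numpy as np

def fit_plane(xs, ys, depths):
    """Return (a, b, c); three coefficients encode the whole region."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, depths, rcond=None)
    return coeffs

# Toy region: a slanted planar surface plus a little noise.
ys, xs = np.mgrid[0:20, 0:30]
xs, ys = xs.ravel().astype(float), ys.ravel().astype(float)
depth = 0.5 * xs - 0.2 * ys + 40 + np.random.normal(0, 0.1, xs.size)
print(fit_plane(xs, ys, depth))  # approximately [0.5, -0.2, 40]
```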

    Maximum-Entropy-Model-Enabled Complexity Reduction Algorithm in Modern Video Coding Standards

    Symmetry considerations play a key role in modern science, and any differentiable symmetry of the action of a physical system has a corresponding conservation law. Symmetry may be regarded as a reduction of entropy. This work focuses on reducing the computational complexity of modern video coding standards by using the maximum entropy principle. The high computational complexity of the coding unit (CU) size decision in modern video coding standards is a critical challenge for real-time applications. This problem is addressed with a novel approach that treats the CU termination, skip, and normal decisions as a three-class decision problem. The maximum entropy model (MEM) is formulated for the CU size decision problem so as to optimize the conditional entropy, and the improved iterative scaling (IIS) algorithm is used to solve this optimization problem. The classification features consist of the spatio-temporal information of the CU, including the rate–distortion (RD) cost, coded block flag (CBF), and depth. As a case study, the proposed method is applied to the High Efficiency Video Coding (H.265/HEVC) standard. The experimental results demonstrate that the proposed method can reduce the computational complexity of the H.265/HEVC encoder significantly. Compared with the H.265/HEVC reference model, the proposed method reduces the average encoding time by 53.27% and 56.36% under the low delay and random access configurations, while the Bjontegaard Delta Bit Rates (BD-BRs) are 0.72% and 0.93% on average.
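    A conditional maximum-entropy classifier over real-valued features has the same functional form as multinomial logistic regression, so the sketch below uses scikit-learn's LogisticRegression instead of the paper's IIS training; the feature values and labels are synthetic placeholders, not HEVC data.

```python
# Hedged sketch of a three-class maximum-entropy-style CU decision
# (terminate / skip / normal) over RD cost, CBF and depth features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.exponential(1000, n),   # RD cost (synthetic)
    rng.integers(0, 2, n),      # coded block flag (synthetic)
    rng.integers(0, 4, n),      # CU depth (synthetic)
])
y = rng.integers(0, 3, n)       # 0 = terminate, 1 = skip, 2 = normal

# Multinomial softmax regression shares the maxent model form; training here
# uses scikit-learn's solver rather than improved iterative scaling.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

cu = np.array([[850.0, 1, 2]])
probs = model.predict_proba(cu)[0]
print(dict(zip(["terminate", "skip", "normal"], probs.round(3))))
```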

    Recursive Non-Local Means Filter for Video Denoising

    In this paper, we propose a computationally efficient algorithm for video denoising that exploits temporal and spatial redundancy. The proposed method is based on non-local means (NLM). NLM methods have been applied successfully in various image denoising applications. In the single-frame NLM method, each output pixel is formed as a weighted sum of the center pixels of neighboring patches, within a given search window. The weights are based on the patch intensity vector distances. The process requires computing vector distances for all of the patches in the search window. Direct extension of this method from 2D to 3D, for video processing, can be computationally demanding. Note that the size of a 3D search window is the size of the 2D search window multiplied by the number of frames being used to form the output. Exploiting a large number of frames in this manner can be prohibitive for real-time video processing. Here, we propose a novel recursive NLM (RNLM) algorithm for video processing. Our RNLM method takes advantage of recursion for computational savings, compared with the direct 3D NLM. However, like the 3D NLM, our method is still able to exploit both spatial and temporal redundancy for improved performance, compared with 2D NLM. In our approach, the first frame is processed with single-frame NLM. Subsequent frames are estimated using a weighted sum of pixels from the current frame and a pixel from the previous frame estimate. Only the single best matching patch from the previous estimate is incorporated into the current estimate. Several experimental results are presented here to demonstrate the efficacy of our proposed method in terms of quantitative and subjective image quality.
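    The following is a simplified per-pixel sketch of the recursive step just described: spatial NLM weights from the current frame are combined with the single best-matching patch from the previous frame's estimate. Patch radius, window size, filter strength h and the absence of boundary handling are all illustrative simplifications, not the paper's exact settings.

```python
# Sketch of a recursive NLM pixel estimate (interior pixels only).
import numpy as np

def patch(img, y, x, r):
    return img[y - r:y + r + 1, x - r:x + r + 1].astype(float)

def rnlm_pixel(cur, prev_est, y, x, r=2, win=5, h=10.0):
    """Estimate pixel (y, x) from the current frame and the previous estimate."""
    ref = patch(cur, y, x, r)
    num, den = 0.0, 0.0
    best_w, best_val = 0.0, 0.0
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            yy, xx = y + dy, x + dx
            # Spatial candidates from the current frame (standard 2D NLM).
            d = np.sum((ref - patch(cur, yy, xx, r)) ** 2)
            w = np.exp(-d / (h * h))
            num += w * cur[yy, xx]
            den += w
            # Track only the single best-matching patch in the previous estimate.
            if prev_est is not None:
                d_prev = np.sum((ref - patch(prev_est, yy, xx, r)) ** 2)
                w_prev = np.exp(-d_prev / (h * h))
                if w_prev > best_w:
                    best_w, best_val = w_prev, prev_est[yy, xx]
    if prev_est is not None:           # fold in one temporal contribution
        num += best_w * best_val
        den += best_w
    return num / den

# Usage (away from borders): out[y, x] = rnlm_pixel(noisy_frame, prev_estimate, y, x)
```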

    On the Complex Network Structure of Musical Pieces: Analysis of Some Use Cases from Different Music Genres

    This paper focuses on the modeling of musical melodies as networks. Notes of a melody can be treated as nodes of a network, and connections are created whenever notes are played in sequence. We analyze some main tracks from different music genres, with melodies played using different musical instruments. We find that the considered networks are, in general, scale-free networks and exhibit the small-world property. We measure the main metrics and assess whether these networks can be considered as formed by sub-communities. The outcomes confirm that peculiar features of the tracks can be extracted with this analysis methodology. This approach can have an impact on several multimedia applications such as music didactics, multimedia entertainment, and digital music generation. (Accepted to Multimedia Tools and Applications, Springer.)
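    The construction described above is simple to reproduce: each note is a node and an edge is added for every consecutive pair. The sketch below uses networkx on a made-up toy melody (not one of the analyzed tracks) and reports a few of the usual metrics.

```python
# Sketch: build a melody network (notes = nodes, successions = edges) and
# compute basic metrics on a toy note sequence.
import networkx as nx

melody = ["C4", "E4", "G4", "E4", "C4", "G4", "A4", "G4", "E4", "C4"]

G = nx.DiGraph()
for a, b in zip(melody, melody[1:]):
    if G.has_edge(a, b):
        G[a][b]["weight"] += 1          # repeated transitions get heavier edges
    else:
        G.add_edge(a, b, weight=1)

U = G.to_undirected()
print("degree distribution:", dict(G.degree()))
print("clustering coefficient:", nx.average_clustering(U))
if nx.is_connected(U):
    print("average shortest path:", nx.average_shortest_path_length(U))
```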

    Fast Search Approaches for Fractal Image Coding: Review of Contemporary Literature

    Fractal Image Compression (FIC) was conceptualized as a model in 1989, and numerous models have been developed since. Fractals were initially observed and depicted in the Iterated Function System (IFS), and IFS solutions were used for encoding images. The IFS representation of an image requires much less storage space than the actual image, which has led to the development of IFS-based image representations and has shaped how image compression systems have evolved. It is very important that the time consumed for encoding be addressed in order to achieve optimal compression conditions, and the solutions reviewed in this study indicate that, despite the developments that have taken place, there is still considerable scope for improvement. From the review of an exhaustive range of models, it is evident that numerous advancements in the FIC model have taken place over time and that it has been adapted for image compression at varied levels. This study focuses on the existing literature on FIC and presents insights into the various models.
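    To make the encoding-time problem concrete, the sketch below shows the brute-force range/domain block matching that dominates a basic fractal encoder, which is exactly the search the fast-search literature reviewed here tries to accelerate. Block sizes and the affine model (contrast s, brightness o) are illustrative choices, not a specific method from the review.

```python
# Sketch of the brute-force block search at the core of fractal (IFS) encoding.
import numpy as np

def encode_block(range_block, domain_blocks):
    """Find the domain block and (s, o) minimizing ||s*D + o - R||^2."""
    R = range_block.ravel().astype(float)
    best = None
    for idx, D in enumerate(domain_blocks):
        D = D.ravel().astype(float)
        var = D.var()
        s = 0.0 if var == 0 else np.cov(D, R, bias=True)[0, 1] / var
        o = R.mean() - s * D.mean()
        err = np.sum((s * D + o - R) ** 2)
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best  # (error, domain index, contrast, brightness)

img = np.random.rand(64, 64)
ranges = [img[y:y+8, x:x+8] for y in range(0, 64, 8) for x in range(0, 64, 8)]
# Domain pool: 16x16 regions downsampled to 8x8 by 2x2 averaging.
domains = [img[y:y+16, x:x+16].reshape(8, 2, 8, 2).mean(axis=(1, 3))
           for y in range(0, 49, 8) for x in range(0, 49, 8)]
print(encode_block(ranges[0], domains)[:2])
```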

    A new image size reduction model for an efficient visual sensor network

    Image size reduction for energy-efficient transmission without losing quality is critical in Visual Sensor Networks (VSNs). The proposed method finds overlapping regions using camera locations, which eliminates unfocused regions from the input images. The sharpness of the overlapped regions is estimated to find the Dominant Overlapping Region (DOR). The proposed model further partitions the DOR into sub-DORs according to the capacity of the cameras. To reduce noise effects in the sub-DORs, we propose to perform a median operation, which results in a Compressed Significant Region (CSR). For the non-DOR, we obtain Sobel edges, which reduce the size of the images down to a binary form. The CSR and the Sobel edges of the non-DORs are sent over the VSN. Experimental results and a comparative study with state-of-the-art methods show that the proposed model outperforms the existing methods in terms of quality, energy consumption and network lifetime.
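    The sketch below illustrates only the two reduction steps the abstract names: a median operation over the dominant overlapping region and a binary Sobel edge map for everything else. The DOR mask, the threshold, and the interpretation of the median as a median filter are all assumptions; the camera-geometry-based DOR detection is not shown.

```python
# Hedged sketch of the median (DOR) and Sobel-edge (non-DOR) reduction steps.
import numpy as np
from scipy.ndimage import median_filter, sobel

image = np.random.rand(120, 160)
dor_mask = np.zeros(image.shape, dtype=bool)
dor_mask[30:90, 40:120] = True                 # placeholder overlapping region

csr = median_filter(image, size=3)[dor_mask]   # Compressed Significant Region

grad = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
non_dor_edges = (grad > grad.mean()) & ~dor_mask   # binary edge map to transmit

print("CSR samples:", csr.size, "binary edge bits set:", int(non_dor_edges.sum()))
```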

    Modeling and predicting the popularity of online news based on temporal and content-related features

    As the market of globally available online news is large and still growing, there is strong competition between online publishers to reach the largest possible audience. Therefore, an intelligent online publishing strategy is of the highest importance to publishers. A prerequisite for being able to optimize any online strategy is to have trustworthy predictions of how popular new online content may become. This paper presents a novel methodology to model and predict the popularity of online news. We first introduce a new strategy and mathematical model to capture the view patterns of online news. After a thorough analysis of such view patterns, we show that well-chosen base functions lead to suitable models, and show how the influence of day versus night on the total view patterns can be taken into account to further increase the accuracy, without leading to more complex models. Second, we turn to the prediction of future popularity, given recently published content. By means of a new real-world dataset, we show that the combination of features related to content, metadata, and temporal behavior leads to significantly improved predictions compared with existing approaches, which only consider features based on the historical popularity of the considered articles. Whereas traditionally linear regression is used for the application under study, we show that the more expressive gradient tree boosting method proves beneficial for predicting news popularity.
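    As a minimal illustration of the prediction step, the sketch below trains a gradient tree boosting regressor on a mix of content, metadata and early temporal features. The features, the toy target, and the dataset are synthetic placeholders, not the paper's data or feature set.

```python
# Sketch: gradient tree boosting over content, metadata and temporal features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.integers(200, 2000, n),   # content: article length (synthetic)
    rng.integers(0, 24, n),       # metadata: publication hour (synthetic)
    rng.poisson(50, n),           # temporal: views in the first hour (synthetic)
])
y = 3.0 * X[:, 2] + 0.01 * X[:, 0] + rng.normal(0, 20, n)   # toy popularity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_tr, y_tr)
print("R^2 on held-out articles:", round(model.score(X_te, y_te), 3))
```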