
    Target Tracking Based on Virtual Grid in Wireless Sensor Networks

    Target tracking is one of the most important and typical applications of wireless sensor networks (WSNs). Although organizing a large-scale WSN into clusters benefits target tracking, tracking a moving target in a cluster-based WSN suffers from a boundary problem. The main goal of this paper is to introduce an efficient and novel mobility-management protocol, Target Tracking Based on Virtual Grid (TTBVG), which integrates on-demand dynamic clustering into a cluster-based WSN for target tracking. The protocol converts on-demand dynamic clusters into scalable cluster-based WSNs by using boundary nodes, and it facilitates collaboration among sensors around cluster boundaries. In this manner, each sensor node has a probability of becoming a cluster head and perceives the trade-off between energy consumption and local sensor collaboration in cluster-based sensor networks. The simulation results of this study demonstrate the efficiency of the proposed protocol in both one-hop and multi-hop cluster-based sensor networks.
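
    As a rough illustration of the virtual-grid idea, the sketch below maps a node's coordinates to a grid cell and flags nodes whose sensing range crosses a cell edge; the cell size, sensing range, and the is_boundary_node helper are illustrative assumptions, not the TTBVG protocol itself.

        # Hypothetical sketch of virtual-grid cell assignment and boundary-node detection.
        # Cell size and sensing range are illustrative assumptions, not TTBVG parameters.

        CELL_SIZE = 50.0  # metres per virtual grid cell (assumed)

        def grid_cell(x, y, cell_size=CELL_SIZE):
            """Map a node's (x, y) position to its virtual grid cell index."""
            return int(x // cell_size), int(y // cell_size)

        def is_boundary_node(x, y, sensing_range, cell_size=CELL_SIZE):
            """Treat a node as a boundary node if its sensing disc crosses a cell edge,
            so it can take part in handing the target over between neighbouring clusters."""
            dx = x % cell_size
            dy = y % cell_size
            return (dx < sensing_range or cell_size - dx < sensing_range or
                    dy < sensing_range or cell_size - dy < sensing_range)

        # Example: a node at (48.0, 120.0) with a 10 m sensing range lies in cell (0, 2)
        # and is a boundary node, so it would participate in the cluster hand-off.
        print(grid_cell(48.0, 120.0), is_boundary_node(48.0, 120.0, 10.0))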

    Data sanitization in association rule mining based on impact factor

    Data sanitization is a process used to promote the sharing of transactional databases among organizations and businesses; it alleviates the concerns of individuals and organizations regarding the disclosure of sensitive patterns. It transforms the source database into a released database so that counterparts cannot discover the sensitive patterns, and data confidentiality is thus preserved against association rule mining methods. This process relies strongly on minimizing the impact of sanitization on data utility, that is, on minimizing the number of lost patterns: non-sensitive patterns that can no longer be mined from the sanitized database. This study proposes a data sanitization algorithm that hides sensitive patterns, in the form of frequent itemsets, from the database while controlling the impact of sanitization on data utility by estimating the impact factor of each modification on the non-sensitive itemsets. The proposed algorithm is compared with the Sliding Window size Algorithm (SWA) and Max-Min1 in terms of execution time, data utility, and data accuracy. Data accuracy is defined as the ratio of deleted items to the total support values of the sensitive itemsets in the source dataset. Experimental results demonstrate that the proposed algorithm outperforms SWA and Max-Min1 in terms of maximizing data utility and data accuracy, and it provides better execution time than SWA and Max-Min1 at high scalability in the number of sensitive itemsets and transactions.
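
    As a minimal sketch of the general itemset-hiding idea (not the impact-factor heuristic of this paper), the code below lowers the support of each sensitive itemset below a threshold by deleting one victim item at a time from supporting transactions; the victim-selection rule, threshold, and toy database are illustrative assumptions.

        # Minimal itemset-hiding sketch: delete a victim item from transactions that
        # support a sensitive itemset until its support falls below min_support.
        # Victim selection (most frequent item of the itemset) is an illustrative
        # assumption, not the impact-factor heuristic proposed in the paper.

        def support(db, itemset):
            """Number of transactions containing every item of the itemset."""
            return sum(1 for t in db if itemset <= t)

        def sanitize(db, sensitive_itemsets, min_support):
            db = [set(t) for t in db]
            deleted = 0
            for s in sensitive_itemsets:
                while support(db, s) >= min_support:
                    victim = max(s, key=lambda i: sum(i in t for t in db))  # assumed rule
                    t = next(t for t in db if s <= t)   # a transaction supporting s
                    t.remove(victim)
                    deleted += 1
            return db, deleted

        source = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'b', 'd'}, {'b', 'c'}]
        released, deleted = sanitize(source, [frozenset({'a', 'b'})], min_support=2)
        print(released, deleted)   # support of {a, b} is now below 2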

    Avoiding conversion and rearrangement overhead in SIMD architectures

    In this dissertation, a novel SIMD extension called Modified MMX (MMMX) for multimedia computing is presented. Specifically, the MMX architecture is enhanced with the extended subwords and matrix register file techniques. The extended subwords technique uses SIMD registers that are wider than the packed format used to store the data. It avoids data type conversion overhead and increases parallelism in SIMD architectures, since otherwise the subwords of the source SIMD registers must be promoted to larger subwords before they can be processed and the results demoted again before they can be written back to memory. The matrix register file technique allows data that is stored consecutively in memory to be loaded into a column of the register file, where a column corresponds to the corresponding subwords of different registers. In other words, this technique provides both row-wise and column-wise access to the media register file, which is useful for the matrix operations that are common in multimedia processing. In addition, new and general SIMD instructions for the multimedia application domain are investigated; the work does not consider an application-specific ISA. Instead, special-purpose instructions are synthesized from a few general-purpose SIMD instructions. The performance of the MMMX architecture is compared to that of the MMX/SSE architecture for different multimedia applications and kernels using the sim-outorder simulator of the SimpleScalar toolset. Additionally, three issues related to the efficient implementation of the 2D Discrete Wavelet Transform (DWT) on general-purpose processors, in particular the Pentium 4, are discussed: 64K aliasing, cache conflict misses, and SIMD vectorization. 64K aliasing is a phenomenon that happens on the Pentium 4 and can degrade performance by an order of magnitude; it occurs if two or more data items whose addresses differ by a multiple of 64K need to be cached simultaneously. There are also many cache conflict misses in the implementation of vertical filtering of the DWT if the filter length exceeds the number of cache ways. In this dissertation, techniques are proposed to avoid 64K aliasing and to mitigate cache conflict misses. Furthermore, the performance of the 2D DWT is improved by exploiting data-level parallelism using the SIMD instructions supported by most general-purpose processors.
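
    As a rough software illustration of the conversion overhead that extended subwords are meant to avoid, the sketch below promotes packed 8-bit values to 16 bits before an add and demotes the result with saturation; the sizes and the operation are assumptions, and the NumPy code only mimics what MMX-style code does with explicit unpack/pack instructions.

        # Sketch of subword promotion/demotion overhead (NumPy stand-in for MMX unpack/pack).
        # Array sizes and the saturating-add example are illustrative assumptions.
        import numpy as np

        a = np.arange(16, dtype=np.uint8)          # packed 8-bit subwords
        b = np.full(16, 200, dtype=np.uint8)

        # Without wider subwords: promote to 16 bits (cf. punpcklbw/punpckhbw),
        # operate, then demote with saturation (cf. packuswb).
        wide = a.astype(np.uint16) + b.astype(np.uint16)   # promotion before the add
        packed = np.minimum(wide, 255).astype(np.uint8)    # demotion with saturation

        # With extended subwords (as in MMMX), the registers are wide enough that the
        # intermediate fits without the extra unpack/pack steps.
        print(packed)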

    A survey of evolutionary computation for association rule mining

    Association Rule Mining (ARM) is a significant task for discovering frequent patterns in data mining. It has achieved great success in a plethora of applications such as market basket analysis, computer networks, recommendation systems, and healthcare. In the past few years, evolutionary computation-based ARM has emerged as one of the most popular research areas for addressing the high computation time of traditional ARM. Although numerous papers have been published, there is no comprehensive analysis of existing evolutionary ARM methodologies. In this paper, we review emerging research on evolutionary computation for ARM. We discuss the applications of evolutionary computation to different types of ARM approaches, including numerical rules, fuzzy rules, high-utility itemsets, class association rules, and rare association rules. Evolutionary ARM algorithms are classified into four main groups according to the evolutionary approach: evolution-based, swarm intelligence-based, physics-inspired, and hybrid approaches. Furthermore, we discuss the remaining challenges of evolutionary ARM as well as its applications and future research topics.
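
    To make the evolutionary-ARM idea concrete, the sketch below scores a candidate rule by support and confidence, the kind of fitness an evolution-based ARM method might optimize; the weighting and the toy transactions are illustrative assumptions, not drawn from any surveyed algorithm.

        # Hypothetical fitness function for a candidate association rule X -> Y,
        # of the kind an evolution-based ARM algorithm could optimize.
        # The 0.5/0.5 weighting and the toy transactions are illustrative assumptions.

        def rule_fitness(db, antecedent, consequent, w_sup=0.5, w_conf=0.5):
            n = len(db)
            sup_x = sum(1 for t in db if antecedent <= t)
            sup_xy = sum(1 for t in db if antecedent | consequent <= t)
            support = sup_xy / n
            confidence = sup_xy / sup_x if sup_x else 0.0
            return w_sup * support + w_conf * confidence

        transactions = [{'bread', 'milk'}, {'bread', 'butter'}, {'bread', 'milk', 'butter'}]
        print(rule_fitness(transactions, {'bread'}, {'milk'}))  # candidate scored by a GA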

    Privacy-preserving in association rule mining using an improved discrete binary artificial bee colony

    Association Rule Hiding (ARH) is the process of protecting sensitive knowledge through data transformation. Although some evolutionary-based ARH algorithms exist, they mostly focus on itemset hiding rather than rule hiding. Moreover, their unstable convergence to the global optimum and their long solution encodings make them ineffective at reducing side effects. They use basic versions of evolutionary approaches, which perform poorly in the ARH domain, where the search space is large and the algorithms easily get trapped in local optima. To deal with these problems, we propose a new rule hiding algorithm based on a binary Artificial Bee Colony (ABC) approach, which has good exploration. We improve the binary ABC algorithm to remedy its poor exploitation by designing a new neighborhood generation mechanism that balances exploration and exploitation; we call the result Improved Binary ABC (IBABC). IBABC is coupled with our proposed rule hiding algorithm, ABC4ARH, to select the sensitive transactions to modify, and ABC4ARH uses a heuristic to choose the victim items. The performance of ABC4ARH with respect to side effects is demonstrated through extensive experiments on five real datasets. Furthermore, the effectiveness of IBABC is verified on the uncapacitated facility location problem and the 0–1 knapsack problem.
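
    The sketch below shows a generic binary-ABC-style neighbor step in which the number of flipped bits shrinks as a solution's fitness improves, one common way to trade exploration for exploitation; it is an illustrative assumption and not the neighborhood generation mechanism actually proposed for IBABC.

        # Generic binary-ABC-style neighbor generation: flip more bits for poor solutions
        # (exploration) and fewer for good ones (exploitation). Illustrative assumption,
        # not the IBABC mechanism from the paper.
        import random

        def neighbor(solution, fitness, max_flips=4):
            """Return a neighboring bit vector; the flip count decreases as fitness -> 1."""
            flips = max(1, round(max_flips * (1.0 - fitness)))
            candidate = solution[:]
            for i in random.sample(range(len(candidate)), flips):
                candidate[i] ^= 1
            return candidate

        current = [0, 1, 1, 0, 1, 0, 0, 1]          # e.g. which transactions to modify
        print(neighbor(current, fitness=0.2))        # weak solution: larger jump
        print(neighbor(current, fitness=0.9))        # strong solution: local tweak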

    Parallel implementation of Gray Level Co-occurrence Matrices and Haralick texture features on cell architecture

    Texture feature extraction algorithms are key functions in various image processing applications such as medical imaging, remote sensing, and content-based image retrieval. The most common way to extract texture features is to use Gray Level Co-occurrence Matrices (GLCMs). A GLCM contains second-order statistical information about the spatial relationships of the pixels in an image, and Haralick texture features are extracted from these GLCMs. However, the GLCM and Haralick texture feature extraction algorithms are computationally intensive. In this paper, we apply different parallelization techniques, such as task- and data-level parallelism, to exploit the available parallelism of these applications on the Cell multi-core processor. Experimental results show that our parallel implementations using 16 Synergistic Processor Elements significantly reduce the computation times of the GLCM and texture feature extraction algorithms, by a factor of 10× over non-parallel optimized implementations, for image sizes from 128×128 to 1024×1024.
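
    For readers unfamiliar with GLCMs, the sketch below builds a co-occurrence matrix for one displacement and derives the Haralick contrast feature from it; the displacement, grey-level count, and toy image are illustrative assumptions, and the code is sequential, unlike the Cell implementation described above.

        # Sequential GLCM + Haralick contrast sketch (illustrative; the paper's version
        # is parallelized on the Cell). Displacement (0, 1), 4 grey levels, and the toy
        # image are assumptions.
        import numpy as np

        def glcm(image, levels, dr=0, dc=1):
            """Count co-occurrences of grey levels at row/column displacement (dr, dc)."""
            m = np.zeros((levels, levels), dtype=np.float64)
            rows, cols = image.shape
            for r in range(rows - dr):
                for c in range(cols - dc):
                    m[image[r, c], image[r + dr, c + dc]] += 1
            return m / m.sum()                      # normalize to joint probabilities

        def contrast(p):
            """Haralick contrast: sum over (i - j)^2 * p(i, j)."""
            i, j = np.indices(p.shape)
            return float(((i - j) ** 2 * p).sum())

        img = np.array([[0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 2, 2, 2],
                        [2, 2, 3, 3]], dtype=np.intp)
        print(contrast(glcm(img, levels=4)))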

    Implementing the 2-D Wavelet Transform on SIMD-Enhanced General-Purpose Processors

    The 2-D Discrete Wavelet Transform (DWT) consumes up to 68% of the JPEG2000 encoding time. In this paper, we develop efficient implementations of this important kernel on general-purpose processors (GPPs), in particular the Pentium 4 (P4). Efficient implementations of the 2-D DWT on the P4 must address three issues. First, the P4 suffers from a problem known as 64K aliasing, which can degrade performance by an order of magnitude. We propose two techniques to avoid 64K aliasing, which improve performance by a factor of up to 4.20. Second, a straightforward implementation of vertical filtering incurs many cache misses. Cache performance can be improved by applying loop interchange, but there will still be many conflict misses if the filter length exceeds the cache associativity. Two methods are proposed to reduce the number of conflict misses, which provide an additional performance improvement of up to a factor of 1.24. To show that these methods are general, results for the P3 and Opteron are also provided. Third, efficient implementations of the 2-D DWT must exploit the SIMD instructions supported by most GPPs, including the P4, and we present MMX and SSE implementations of horizontal and vertical filtering which achieve maximum speedups of 3.39 and 6.72, respectively.
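
    To illustrate the loop interchange mentioned above, the sketch below contrasts a column-by-column vertical filter with an interchanged version that walks complete rows, so consecutive memory accesses stay within the same cache lines; the filter taps and image size are assumptions, and the Python/NumPy code only mirrors the access-pattern argument, not the MMX/SSE implementations.

        # Loop interchange sketch for vertical (column) filtering of a 2-D signal.
        # Filter taps and image size are illustrative assumptions; the point is the
        # memory access order, not the wavelet filter itself.
        import numpy as np

        taps = np.array([0.25, 0.5, 0.25])           # assumed low-pass filter
        img = np.random.rand(512, 512).astype(np.float32)
        out = np.zeros((512 - len(taps) + 1, 512), dtype=np.float32)

        # Straightforward version: the inner loop walks down a column, so successive
        # accesses are a full row apart in memory and keep evicting each other.
        for c in range(img.shape[1]):
            for r in range(out.shape[0]):
                out[r, c] = np.dot(taps, img[r:r + len(taps), c])

        # Interchanged version: the inner work runs along rows, so accesses are
        # contiguous; each filter tap contributes to a whole output row at a time.
        out2 = np.zeros_like(out)
        for r in range(out.shape[0]):
            for k, t in enumerate(taps):
                out2[r, :] += t * img[r + k, :]

        assert np.allclose(out, out2)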

    Threads Pipelining on the CellBE Systems
