248 research outputs found

    A Multi-Transformation Evolutionary Framework for Influence Maximization in Social Networks

    Full text link
    Influence maximization is a key problem in mining the deep information of social networks: it aims to select a seed set from the network that maximizes the number of influenced nodes. To evaluate the influence spread of a seed set efficiently, existing studies have proposed transformations with lower computational cost to replace the expensive Monte Carlo simulation process. These alternative transformations, based on network prior knowledge, induce different search behaviors that share similar characteristics while viewing the problem from different perspectives, so it is difficult for users to determine a suitable transformation a priori. This article proposes a multi-transformation evolutionary framework for influence maximization (MTEFIM), with convergence guarantees, that exploits the potential similarities and unique advantages of alternative transformations and spares users from manually determining the most suitable one. In MTEFIM, multiple transformations are optimized simultaneously as multiple tasks, and each transformation is assigned an evolutionary solver. MTEFIM has three major components: 1) estimating the potential relationship across transformations based on the degree of overlap between individuals of different populations; 2) transferring individuals across populations adaptively according to the inter-transformation relationship; and 3) selecting the final output seed set, which incorporates the knowledge of all transformations. The effectiveness of MTEFIM is validated on both benchmark and real-world social networks. The experimental results show that MTEFIM can efficiently exploit the transferable knowledge across multiple transformations to achieve highly competitive performance compared with several popular IM-specific methods. The implementation of MTEFIM is available at https://github.com/xiaofangxd/MTEFIM. Comment: This work has been submitted to the IEEE Computational Intelligence Magazine for publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
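    The overlap-based relationship estimation and adaptive individual transfer described above can be sketched in a few lines of Python. This is a minimal illustration under assumed data structures (populations as lists of frozensets of node ids, sorted best-first), not the authors' implementation; overlap_ratio and migrate are hypothetical names.

        import random

        def overlap_ratio(pop_a, pop_b):
            """Fraction of seed sets in pop_a that also appear in pop_b."""
            members_b = set(pop_b)
            return sum(1 for ind in pop_a if ind in members_b) / len(pop_a)

        def migrate(pop_src, pop_dst, rate):
            """Adaptively copy individuals across populations: the higher the
            estimated inter-transformation overlap (rate), the more transfer."""
            k = max(1, int(rate * len(pop_src)))
            transfers = random.sample(pop_src, k)
            # replace the worst tail of pop_dst (assumed sorted best-first)
            return pop_dst[:-k] + transfers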

    Lancaster Stem Sammon Projective Feature Selection based Stochastic eXtreme Gradient Boost Clustering for Web Page Ranking

    Get PDF
    Web content mining retrieves information from the web in more structured forms, and page ranking plays an essential part in this process: whenever a user searches for information on the web, the most relevant results are shown at the top of the list through page ranking. Many existing page-ranking algorithms fail to rank web pages accurately within minimal time. To address these issues, the Lancaster Stem Sammon Projective Feature Selection based Stochastic eXtreme Gradient Boost Clustering (LSSPFS-SXGBC) approach is introduced for page ranking based on user queries. The LSSPFS-SXGBC approach performs web page ranking in three stages, namely preprocessing, feature selection, and clustering, and takes a number of user queries as input. Lancaster stemming preprocessing removes noisy data from the input query, eradicating stem words, stop words, and incomplete data to minimize time and space consumption. The Sammon projective feature selection process then selects the relevant features (i.e., keywords) based on user needs for efficient page ranking; Sammon projection maps the high-dimensional space to a lower-dimensional space while preserving the inter-point distance structure. After feature selection, stochastic eXtreme gradient boost page-rank clustering groups web pages with similar keywords according to their rank. The gradient boost page-rank clusterer is an ensemble of several weak clusterers (i.e., X-means clusterers); X-means partitions the web pages into 'x' clusters, assigning each observation to the cluster with the nearest mean. For every weak clusterer, the selected features serve as training samples, and all weak clusterers are then combined into a strong clusterer that produces the web page ranking results. In this way, page ranking is carried out with higher accuracy and minimal time consumption. The LSSPFS-SXGBC approach is validated experimentally on factors such as ranking accuracy, false positive rate, ranking time, and space complexity with respect to the number of user queries.
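    The nearest-mean assignment step performed by each weak X-means clusterer can be illustrated as follows. This is a generic k-means-style sketch assuming NumPy feature vectors for the selected keywords, not the paper's code.

        import numpy as np

        def assign_to_nearest_mean(pages, means):
            """pages: (n, d) keyword feature vectors; means: (x, d) centroids.
            Returns the index of the nearest centroid for each page."""
            # pairwise squared Euclidean distances, shape (n, x)
            dists = ((pages[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
            return dists.argmin(axis=1)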

    Living analytics methods for the social web

    Get PDF
    [no abstract]

    A WOA-based optimization approach for task scheduling in cloud Computing systems

    Get PDF
    Task scheduling in cloud computing directly affects the resource usage and operational cost of a system. To improve the efficiency of task execution in a cloud, various metaheuristic algorithms, as well as their variants, have been proposed to optimize the scheduling. In this work, for the first time, we apply the recent metaheuristic WOA (the whale optimization algorithm) to cloud task scheduling with a multi-objective optimization model, aiming to improve the performance of a cloud system with given computing resources. On that basis, we propose an advanced approach called IWC (Improved WOA for Cloud task scheduling) to further improve the optimal-solution search capability of the WOA-based method. We present the detailed implementation of IWC, and our simulation-based experiments show that IWC has better convergence speed and accuracy in searching for optimal task-scheduling plans than current metaheuristic algorithms. Moreover, it also achieves better system resource utilization for both small- and large-scale task sets.
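    The core WOA position updates (encircling the current best solution, searching around a random whale, and the spiral bubble-net move) can be sketched for a continuous search space as below. Decoding positions into discrete task-to-VM assignments, and the IWC improvements themselves, are beyond this minimal sketch; the fitness function is assumed to be supplied by the caller.

        import numpy as np

        def woa_minimize(fitness, dim, n_whales=20, iters=100, lb=0.0, ub=1.0, b=1.0):
            """Minimal whale optimization algorithm (Mirjalili & Lewis, 2016)."""
            X = np.random.uniform(lb, ub, (n_whales, dim))
            best = min(X, key=fitness).copy()
            for t in range(iters):
                a = 2 - 2 * t / iters            # decreases linearly from 2 to 0
                for i in range(n_whales):
                    p, l = np.random.rand(), np.random.uniform(-1, 1)
                    A = 2 * a * np.random.rand() - a
                    C = 2 * np.random.rand()
                    if p < 0.5:
                        if abs(A) < 1:           # exploit: encircle the best whale
                            X[i] = best - A * np.abs(C * best - X[i])
                        else:                    # explore: move toward a random whale
                            rand = X[np.random.randint(n_whales)]
                            X[i] = rand - A * np.abs(C * rand - X[i])
                    else:                        # spiral (bubble-net) update
                        X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
                    X[i] = np.clip(X[i], lb, ub)
                    if fitness(X[i]) < fitness(best):
                        best = X[i].copy()
            return best

        # usage sketch: minimize a toy surrogate objective over 10 dimensions
        # best = woa_minimize(lambda x: ((x - 0.3) ** 2).sum(), dim=10)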

    An Efficient Web Usage Mining Approach Using Chaos Optimization and Particle Swarm Optimization Algorithm Based on Optimal Feedback Model

    Get PDF
    The dynamic nature of information resources, as well as continuous changes in the information demands of users, has made it very difficult to provide effective methods for data mining and document ranking. This paper proposes an efficient particle swarm chaos optimization mining algorithm, based on chaos optimization and particle swarm optimization, that uses a user feedback model to provide a list of best-matching web pages for the user. The proposed algorithm starts with an initial population of particles moving around in a D-dimensional search space, where each particle vector corresponds to a potential solution of the underlying problem, formed by subsets of web pages. Experimental results show that our approach significantly outperforms other algorithms in terms of response time, execution time, precision, and recall.
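    A common way to hybridize chaos with PSO, consistent with the description above, is to initialize the swarm with a chaotic logistic map and then apply standard PSO velocity and position updates. The sketch below is generic; the paper's exact feedback-model integration is not reproduced here.

        import numpy as np

        def logistic_map(n, d, x0=0.7, mu=4.0):
            """Generate an (n, d) chaotic sequence in (0, 1) via the logistic map."""
            seq = np.empty((n, d))
            x = np.full(d, x0) + np.linspace(0, 0.1, d)  # slightly varied seeds
            for i in range(n):
                x = mu * x * (1 - x)
                seq[i] = x
            return seq

        def pso_chaos(fitness, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            """Standard PSO with chaotic initialization of particle positions."""
            pos = logistic_map(n, dim)               # chaotic initialization
            vel = np.zeros((n, dim))
            pbest = pos.copy()
            pbest_f = np.array([fitness(p) for p in pos])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, 0.0, 1.0)
                f = np.array([fitness(p) for p in pos])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = pos[improved], f[improved]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest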

    An ant-colony based approach for real-time implicit collaborative information seeking

    Get PDF
    This document is an Accepted Manuscript of the following article: Alessio Malizia, Kai A. Olsen, Tommaso Turchi, and Pierluigi Crescenzi, 'An ant-colony based approach for real-time implicit collaborative information seeking', Information Processing & Management, Vol. 53 (3): 608-623, May 2017. Under embargo until 31 July 2018. The final, definitive version of this paper is available online at doi: https://doi.org/10.1016/j.ipm.2016.12.005, published by Elsevier Ltd.

    We propose an approach based on Swarm Intelligence, more specifically on Ant Colony Optimization (ACO), to improve search engines' performance and reduce information overload by exploiting collective user behavior. We designed and developed three algorithms that employ an ACO-inspired strategy to provide implicit collaborative-seeking features to search engines in real time. The three algorithms (NaïveRank, RandomRank, and SessionRank) leverage different principles of ACO to exploit users' interactions and provide them with more relevant results. We designed an evaluation experiment employing two widely used standard datasets of query-click logs issued to two major Web search engines. The results demonstrate that each algorithm is suitable for ranking the results of different types of queries, depending on user intent.
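    An ACO-inspired ranking signal of this kind can be pictured as pheromone trails over (query, result) pairs: user clicks deposit pheromone, evaporation decays stale trails, and results are ranked by trail strength. The class below is a generic illustration of that pattern, not the NaïveRank, RandomRank, or SessionRank algorithms themselves.

        from collections import defaultdict

        class PheromoneRanker:
            """Rank results for a query by pheromone deposited by past clicks."""
            def __init__(self, evaporation=0.1, deposit=1.0):
                self.evaporation = evaporation
                self.deposit = deposit
                self.trails = defaultdict(float)   # (query, url) -> pheromone

            def record_click(self, query, url):
                self.trails[(query, url)] += self.deposit

            def evaporate(self):
                """Periodically decay all trails so stale results fade away."""
                for key in self.trails:
                    self.trails[key] *= (1 - self.evaporation)

            def rank(self, query, urls):
                return sorted(urls, key=lambda u: self.trails[(query, u)], reverse=True)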

    Hybrid Software Reliability Model for Big Fault Data and Selection of Best Optimizer Using an Estimation Accuracy Function

    Get PDF
    Software reliability analysis has come to the forefront of academia as software applications have grown in size and complexity. Traditionally, methods have focused on minimizing coding errors to guarantee analytic tractability, which causes these models to produce overly optimistic estimates. However, to obtain reliable estimates it is important to account for non-software factors, such as human error and hardware failure, in addition to software faults. In this research, we examine how the peculiarities of big data systems and their need for specialized hardware motivate a hybrid model. We used statistical and soft-computing approaches to determine values for the model's parameters, and we explored five criterion values in an effort to identify the most useful method of parameter evaluation for big data systems. For this purpose, we conducted a case study of software failure data from four real projects, using an estimation accuracy function to compare the results. Particle swarm optimization proved to be the most effective optimization method for the hybrid model built from the large-scale fault data.
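    As an illustration of PSO-based parameter evaluation for a reliability growth model, the sketch below fits the classic Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) to cumulative failure counts by minimizing squared error. The paper's hybrid model is not specified here, so the model choice and the parameter bounds are assumptions.

        import numpy as np

        def goel_okumoto(t, a, b):
            """Mean cumulative failures m(t) = a * (1 - exp(-b t))."""
            return a * (1 - np.exp(-b * t))

        def fit_by_pso(t, observed, n=30, iters=300):
            """Fit (a, b) by minimizing squared error with a basic PSO."""
            lb, ub = np.array([1.0, 1e-4]), np.array([10 * observed.max(), 1.0])
            sse = lambda p: ((goel_okumoto(t, *p) - observed) ** 2).sum()
            pos = np.random.uniform(lb, ub, (n, 2))
            vel = np.zeros_like(pos)
            pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = np.random.rand(n, 2), np.random.rand(n, 2)
                vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lb, ub)
                f = np.array([sse(p) for p in pos])
                better = f < pbest_f
                pbest[better], pbest_f[better] = pos[better], f[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest  # fitted (a, b)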