15,416 research outputs found

    Exact Computation of Influence Spread by Binary Decision Diagrams

    Evaluating influence spread in social networks is a fundamental procedure for estimating the word-of-mouth effect in viral marketing. There have been numerous studies on this topic; however, under the standard stochastic cascade models, exact computation of influence spread is known to be #P-hard, so existing studies have used Monte Carlo simulation-based approximations to avoid exact computation. We propose the first algorithm to compute influence spread exactly under the independent cascade model. The algorithm first constructs binary decision diagrams (BDDs) for all possible realizations of influence spread, then computes influence spread by dynamic programming on the constructed BDDs. To construct the BDDs efficiently, we designed a new frontier-based search-type procedure. The constructed BDDs can also be used to solve other influence-spread-related problems, such as random sampling without rejection, conditional influence spread evaluation, dynamic probability update, and gradient computation for probability optimization problems. We conducted computational experiments to evaluate the proposed algorithm. The algorithm successfully computed influence spread on real-world networks with around a hundred edges in reasonable time, which is infeasible for the naive algorithm. We also evaluated the accuracy of the Monte Carlo simulation-based approximation by comparing it against the exact influence spread obtained by the proposed algorithm.
    Comment: WWW'1
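    The quantity being computed can be made concrete on toy instances: under the independent cascade model, influence spread is the expected number of nodes reachable from the seed set over all live-edge realizations. Below is a minimal brute-force sketch (hypothetical illustration, not the paper's BDD algorithm) that enumerates all 2^m realizations explicitly; this exponential enumeration is exactly what the BDD construction compresses.

```python
from itertools import product

def exact_influence_spread(edges, probs, seeds):
    """Exact influence spread under the independent cascade model,
    by enumerating every live-edge realization (2^m subsets).
    Feasible only for tiny graphs; shown for illustration."""
    total = 0.0
    for mask in product([0, 1], repeat=len(edges)):
        prob = 1.0
        live = []
        for keep, e, p in zip(mask, edges, probs):
            if keep:
                prob *= p
                live.append(e)
            else:
                prob *= 1.0 - p
        # count nodes reachable from the seeds over live edges
        reached = set(seeds)
        frontier = list(seeds)
        while frontier:
            u = frontier.pop()
            for a, b in live:
                if a == u and b not in reached:
                    reached.add(b)
                    frontier.append(b)
        total += prob * len(reached)
    return total
```

    For a single edge a→b with probability 0.5 and seed {a}, the spread is 0.5·2 + 0.5·1 = 1.5, matching the hand computation.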

    A contribution to the evaluation and optimization of networks reliability

    Assessing network reliability is a very complex combinatorial problem that requires powerful computing resources. Despite the increased efficiency of computers and the proliferation of algorithms, the problem of finding good solutions quickly for large systems remains open. Efficient computation techniques have recently been recognized as significant advances toward solving the problem in a reasonable amount of time; however, they apply only to a special category of networks, and more effort is still needed to generalize them into a unified method giving exact solutions. Several methods have been proposed in the literature: some have been implemented, notably minimal-set enumeration and factoring methods, while others remained purely theoretical. This thesis treats the evaluation and optimization of network reliability. Several issues are addressed, including the development of a methodology for modeling networks with a view to evaluating their reliability. This methodology was validated on a wide-area radio communication network recently deployed to cover the needs of the entire province of Quebec. Several algorithms were also developed to generate the minimal paths and minimal cuts of a given network; their generation is an important contribution to the reliability evaluation and optimization process. These algorithms handled several test networks, as well as the provincial radio communication network, quickly and efficiently, and were subsequently used to evaluate reliability with a method based on binary decision diagrams. Several theoretical contributions also made it possible to establish an exact solution for the reliability of imperfect stochastic networks, in which both edges and nodes are subject to failure, within the framework of factoring methods. From this research, several tools were implemented to evaluate and optimize network reliability; the results clearly show a significant gain in execution time and memory usage compared to many other implementations.
    Keywords: reliability, networks, optimization, binary decision diagrams, minimal path and cut sets, algorithms, Birnbaum importance index, radio-telecommunication systems, programs
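    The factoring (pivotal decomposition) approach the thesis builds on conditions on one edge at a time: R(G) = p_e · R(G | e up) + (1 − p_e) · R(G | e down). A minimal sketch of this recursion for two-terminal reliability with perfect nodes (a hypothetical illustration, without the contraction, pruning, and BDD sharing that make the thesis's implementations efficient):

```python
def two_terminal_reliability(edges, probs, s, t):
    """Two-terminal reliability by pivotal decomposition: condition
    on each undirected edge being up or down, and at the leaves check
    whether s and t are connected by the up edges. Exponential in the
    number of edges; BDD-based methods share repeated subproblems."""
    def connected(up_edges):
        reached = {s}
        frontier = [s]
        while frontier:
            u = frontier.pop()
            for a, b in up_edges:
                for x, y in ((a, b), (b, a)):  # undirected
                    if x == u and y not in reached:
                        reached.add(y)
                        frontier.append(y)
        return t in reached

    def recurse(i, up):
        if i == len(edges):
            return 1.0 if connected(up) else 0.0
        p = probs[i]
        # edge i up (weight p) or down (weight 1 - p)
        return p * recurse(i + 1, up + [edges[i]]) + \
               (1 - p) * recurse(i + 1, up)

    return recurse(0, [])
```

    Two edges of reliability 0.9 in series give 0.81, and in parallel 1 − 0.1² = 0.99, the classic sanity checks for any reliability code.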

    Getting the Most Out of Your Data: Multitask Bayesian Network Structure Learning, Predicting Good Probabilities and Ensemble Selection

    First, I consider the problem of simultaneously learning the structures of multiple Bayesian networks from multiple related datasets. I present a multitask Bayes net structure learning algorithm that is able to learn more accurate network structures by transferring useful information between the datasets. The algorithm extends the score-and-search techniques used in traditional structure learning to the multitask case by defining a scoring function for sets of structures (one structure for each task) and an efficient procedure for searching for a high-scoring set of structures. I also address the task selection problem in the context of multitask Bayes net structure learning. Unlike in other multitask learning scenarios, in the Bayes net structure learning setting there is a clear definition of task relatedness: two tasks are related if they have similar structures. This allows one to automatically select a set of related tasks to be used by multitask structure learning. Second, I examine the relationship between the predictions made by different supervised learning algorithms and true posterior probabilities. I show that quasi-maximum-margin methods such as boosted decision trees and SVMs push probability mass away from 0 and 1, yielding a characteristic sigmoid-shaped distortion in the predicted probabilities, while naive Bayes pushes probabilities toward 0 and 1. Other models, such as neural nets, logistic regression, and bagged trees, usually do not have these biases and predict well-calibrated probabilities. I experiment with two ways of correcting the biased probabilities predicted by some learning methods: Platt scaling and isotonic regression. I qualitatively examine which distortions these calibration methods are suitable for and quantitatively examine how much data they need to be effective. Third, I present a method for constructing ensembles from libraries of thousands of models. Model libraries are generated using different learning algorithms and parameter settings, and forward stepwise selection is used to add to the ensemble the models that maximize its performance. The main drawback of ensemble selection is that it builds models that are very large and slow at test time; this drawback, however, can be overcome with little or no loss in performance by using model compression.
    The work in this dissertation was supported by NSF grants 0347318, 0412930, 0427914, and 0612031
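    Platt scaling, one of the two calibration methods examined, fits a sigmoid that maps raw classifier scores to probabilities. A minimal sketch in plain Python (a hypothetical illustration: it fits the two sigmoid parameters by batch gradient descent on log loss, whereas Platt's original procedure uses a Newton-style solver and smoothed target labels):

```python
import math

def platt_scale(scores, labels, lr=0.01, iters=5000):
    """Fit a Platt-style sigmoid p(f) = 1 / (1 + exp(A*f + B)) to
    (score, 0/1 label) pairs by minimizing log loss with gradient
    descent. Returns the calibration function. lr and iters are
    illustrative defaults, not tuned values."""
    A, B = 0.0, 0.0
    n = len(scores)
    for _ in range(iters):
        gA = gB = 0.0
        for f, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(A * f + B))
            # gradient of log loss w.r.t. A and B at this example
            gA += (y - p) * f
            gB += (y - p)
        A -= lr * gA / n
        B -= lr * gB / n
    return lambda f: 1.0 / (1.0 + math.exp(A * f + B))
```

    On scores that separate the classes (negative scores for label 0, positive for label 1), the fitted map sends high scores to probabilities above 0.5 and low scores below it, undoing a sigmoid-shaped distortion of the kind described above.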

    Social media as a data gathering tool for international business qualitative research: opportunities and challenges

    Lusophone African (LA) multinational enterprises (MNEs) are becoming a significant pan-African and global economic force in terms of their international presence and influence. However, given the extreme poverty and lack of development in their home markets, many LA enterprises seeking to internationalize lack resources and legitimacy in international markets. Compared to higher-income emerging markets, Lusophone enterprises in Africa face more significant challenges in their internationalization efforts. Concomitantly, conducting significant international business (IB) research in these markets to understand these MNEs' internationalization strategies can be a very daunting task. The fast-growing rise of social media on the Internet, however, provides an opportunity for IB researchers to examine new phenomena in these markets in innovative ways. Unfortunately, for various reasons, qualitative researchers in IB have not fully embraced this opportunity. This article studies the use of social media in qualitative research in the field of IB. It offers an illustrative case based on qualitative research on internationalization modes of LA MNEs conducted by the authors in Angola and Mozambique, using social media to identify and qualify the population sample as well as to interact with subjects and collect data. It discusses some of the challenges of using social media in those regions of Africa and suggests how scholars can design their studies to capitalize on social media and the corresponding data as a tool for qualitative research.
    This article underscores the potential opportunities and challenges inherent in the use of social media in IB-oriented qualitative research, providing recommendations on how qualitative IB researchers can design their studies to capitalize on data generated by social media. https://doi.org/10.1080/15475778.2019.1634406

    Heuristic container placement algorithms

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2003. Includes bibliographical references (leaves: 56-58). Text in English; abstract in Turkish and English. viii, 72 leaves.
    With the growth of transportation over sea, defining transportation processes more precisely and finding ways to make them more effective has become one of the most important research areas of today. Especially in the last quarter of the previous decade, computers became much more powerful tools, with impressive data-processing capabilities, and it was inevitable that they would begin to take serious roles in system development studies. As a result, constructing models of the processes in container terminals and processing the data by computer creates opportunities for automating various terminal processes. The final step of these studies is the full automation of terminal activities with software packages that combine functions for various processes in a single system. This study grew out of a project carried out for a container terminal owned by a private company. During the study, the subject was discussed with experts, and the container handling processes in the terminal were analyzed in order to define the main structure of the yard management software to be created. The study focuses on container handling activities over the yard space, so as to create a basis for a computer system that will take part in decisions during container operations. Object-oriented analysis and design methods are used to define the system that will support decisions in yard operations. The optimization methodology at the core of the container placement decisions is based on using different placement patterns and placement algorithms for different conditions. These placement patterns and algorithms are constructed according to the container handling machinery in use at the terminal for which this study was carried out.
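    The thesis does not reproduce its placement rules here, but the general shape of a condition-dependent placement heuristic can be sketched. The following is entirely hypothetical code (the stack-height limit, the departure-order rule, and the new-stack cost are illustrative assumptions, not the thesis's algorithm): it prefers stacks where the new container will not bury containers that must leave earlier, which is the classic way such heuristics reduce rehandles.

```python
def place_container(yard, container, max_height=4):
    """Hypothetical yard-placement heuristic. 'yard' is a list of
    stacks (lists, bottom to top); 'container' is a dict with a
    'departure' rank (smaller = leaves earlier). Prefer stacks where
    the new container departs no later than the container beneath it
    (so it never blocks a retrieval), then prefer shorter stacks."""
    best, best_key = None, None
    for i, stack in enumerate(yard):
        if len(stack) >= max_height:
            continue  # stack full for the assumed handling equipment
        if stack:
            top = stack[-1]["departure"]
            # positive penalty if the new container would sit on top
            # of one that must leave earlier (a future rehandle)
            penalty = max(0, container["departure"] - top)
        else:
            penalty = 0.5  # assumed mild cost for opening a new stack
        key = (penalty, len(stack))
        if best_key is None or key < best_key:
            best, best_key = i, key
    if best is None:
        raise ValueError("yard full")
    yard[best].append(container)
    return best
```

    For example, a container departing at rank 1 is placed on top of one departing at rank 3 (it leaves first, so no rehandle), while one departing at rank 5 is sent to an empty stack instead.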