
    Probabilistic Graphical Models for Credibility Analysis in Evolving Online Communities

    One of the major hurdles preventing the full exploitation of information from online communities is the widespread concern regarding the quality and credibility of user-contributed content. Prior works in this domain operate on a static snapshot of the community, make strong assumptions about the structure of the data (e.g., relational tables), or consider only shallow features for text classification. To address these limitations, we propose probabilistic graphical models that can leverage the joint interplay between multiple factors in online communities (such as user interactions, community dynamics, and textual content) to automatically assess the credibility of user-contributed online content and the expertise of users, along with their evolution, with user-interpretable explanations. To this end, we devise new models based on Conditional Random Fields for different settings, such as incorporating partial expert knowledge for semi-supervised learning, and handling discrete labels as well as numeric ratings for fine-grained analysis. This enables applications such as extracting reliable side-effects of drugs from user-contributed posts in health forums, and identifying credible content in news communities. Online communities are dynamic, as users join and leave, adapt to evolving trends, and mature over time. To capture these dynamics, we propose generative models based on Hidden Markov Models, Latent Dirichlet Allocation, and Brownian Motion to trace the continuous evolution of user expertise and language models over time. This allows us to identify expert users and credible content jointly over time, improving state-of-the-art recommender systems by explicitly considering the maturity of users. It also enables applications such as identifying helpful product reviews, and detecting fake and anomalous reviews with limited information. Comment: PhD thesis, Mar 201
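To make the kind of temporal model described above concrete, here is a minimal sketch of a discrete Hidden Markov Model over a user's expertise level, with the posterior over the current state computed by the forward algorithm. The expertise states, transition and emission probabilities, and the post-quality observations are invented for illustration and are not the thesis's actual models or data.

```python
import numpy as np

# Hypothetical latent expertise levels and observed post-quality symbols (assumptions).
states = ["novice", "intermediate", "expert"]
obs_symbols = ["low-quality post", "average post", "high-quality post"]

start = np.array([0.7, 0.2, 0.1])            # most users start as novices (assumed)
trans = np.array([[0.8, 0.2, 0.0],           # users stay put or move up one level
                  [0.0, 0.8, 0.2],
                  [0.0, 0.0, 1.0]])
emit = np.array([[0.6, 0.3, 0.1],            # novices mostly write low-quality posts
                 [0.2, 0.6, 0.2],
                 [0.1, 0.3, 0.6]])

def current_expertise_posterior(observations):
    """Forward algorithm: P(current expertise level | observed post-quality sequence)."""
    alpha = start * emit[:, observations[0]]
    for o in observations[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha / alpha.sum()

# A user whose post quality improves over time shifts probability mass toward "expert".
posterior = current_expertise_posterior([0, 1, 1, 2, 2])
print(dict(zip(states, posterior.round(3))))
```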

    Artificial Intelligence: Art or Science?

    Computer programs are new kinds of machines with great potential for improving the quality of life. In particular, expert systems could improve the ability of the small, weak and poor members of society to access the information they need to solve their problems. However, like most areas of computing, expert systems design is currently practiced as an art. In order to realise its potential it must also become an engineering science, providing the kinds of assurances of reliability that are normal in other branches of engineering. The way to do this is to put the techniques used to build expert systems and other artificial intelligence programs onto a sound theoretical foundation. The tools of mathematical logic appear to be a good basis for doing this, but we need to be imaginative in their use: not restricting ourselves to the kind of deductive reasoning usually thought of as 'logical', but investigating other aspects of reasoning, including uncertain reasoning, making conjectures, and the guidance of inference.

    WISER: A Semantic Approach for Expert Finding in Academia based on Entity Linking

    We present WISER, a new semantic search engine for expert finding in academia. Our system is unsupervised and jointly combines classical language modeling techniques, based on textual evidence, with the Wikipedia Knowledge Graph, via entity linking. WISER indexes each academic author through a novel profiling technique which models her expertise with a small, labeled and weighted graph drawn from Wikipedia. Nodes in this graph are the Wikipedia entities mentioned in the author's publications, whereas the weighted edges express the semantic relatedness among these entities, computed via textual and graph-based relatedness functions. Every node is also labeled with a relevance score, which models the pertinence of the corresponding entity to the author's expertise and is computed by means of a random-walk calculation over that graph, and with a latent vector representation learned via entity and other kinds of structural embeddings derived from Wikipedia. At query time, experts are retrieved by combining classic document-centric approaches, which exploit the occurrences of query terms in the author's documents, with a novel set of profile-centric scoring strategies, which compute the semantic relatedness between the author's expertise and the query topic via the above graph-based profiles. The effectiveness of our system is established through a large-scale experimental test on a standard dataset for this task. We show that WISER achieves better performance than all the other competitors, thus proving the effectiveness of modelling the author's profile via our "semantic" graph of entities. Finally, we comment on the use of WISER for indexing and profiling the whole research community within the University of Pisa, and on its application to technology transfer in our University.
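As a rough illustration of the profile-centric scoring idea, the sketch below builds a small weighted entity graph for one hypothetical author and ranks its nodes with a random walk (PageRank via networkx). The entity names, edge weights, and the scoring function are assumptions made for this example, not WISER's actual implementation.

```python
import networkx as nx

# Hypothetical author profile: Wikipedia entities mentioned in her publications,
# with edges weighted by an assumed semantic-relatedness score.
profile = nx.Graph()
profile.add_weighted_edges_from([
    ("Information_retrieval", "Search_engine", 0.9),
    ("Information_retrieval", "Natural_language_processing", 0.7),
    ("Search_engine", "PageRank", 0.8),
    ("Natural_language_processing", "Entity_linking", 0.6),
    ("Entity_linking", "Wikipedia", 0.5),
])

# Random-walk relevance of each entity within the author's profile graph.
relevance = nx.pagerank(profile, weight="weight")

def profile_centric_score(query_entities, relevance):
    """Score an author for a query by summing the relevance of matching profile entities."""
    return sum(relevance.get(entity, 0.0) for entity in query_entities)

# Query assumed to be already linked to Wikipedia entities by an upstream entity-linking step.
print(profile_centric_score({"Entity_linking", "Wikipedia"}, relevance))
```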

    The construction of global management consulting - a study of consultancies’ web presentations

    Management consulting increasingly appears as a global endeavour, as reflected in the growing dominance of a few large, global management-consulting firms. However, features of the consulting service (e.g. its immaterial and interactional character) as well as aspects of management (e.g. its cultural anchoredness) highlight the locality of management consulting. In this paper we approach this tension between the global and the local by seeing consulting as involving the creation of generalised myths. More specifically, we ask the question: how do global consulting companies construct the viability and desirability of their services? Based on a view of management consultants as mythmakers, we study the argumentation on the corporate web sites of four leading global consultancies in five different countries. Applying a framework based on the sociology of translation, we analyze the translation strategies used in making the service of global consultancies both viable and indispensable. We find that the need for consultants is to a large extent constructed through defining management as an expert activity, thus creating a need for external advisors possessing globally applicable expert knowledge. In this effort, the consultants ally with three widely spread rationalized managerial myths: the rationality myth, the globalization myth and the universality myth. We conclude that global consulting firms are actively involved in creating and reinforcing the very same institutions that are the prerequisites for their future success. Keywords: management consulting; globalization; myth making

    Towards Automating the Construction & Maintenance of Attack Trees: a Feasibility Study

    Security risk management can be applied to well-defined or existing systems; in this case, the objective is to identify existing vulnerabilities, assess the risks and provide adequate countermeasures. Security risk management can also be applied very early in the system's development life-cycle, when its architecture is still poorly defined; in this case, the objective is to positively influence the design work so as to produce a secure architecture from the start. The latter work is made difficult by the uncertainties about the architecture and the multiple round-trips required to keep the risk assessment study and the system architecture aligned. This is particularly true for very large projects running over many years. This paper addresses the issues raised by those risk assessment studies performed early in the system's development life-cycle. Based on industrial experience, it asserts that attack trees can help solve the human cognitive scalability issue related to securing those large, continuously changing system designs. However, big attack trees are difficult to build, and even more difficult to maintain. This paper therefore proposes a systematic approach to automate the construction and maintenance of such big attack trees, based on the system's operational and logical architectures, the system's traditional risk assessment study, and a security knowledge database. Comment: In Proceedings GraMSec 2014, arXiv:1404.163
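To ground the discussion, here is a minimal sketch of the AND/OR structure that attack trees share, with feasibility propagated bottom-up from the leaves. The node layout and the example scenario are assumptions for illustration; the paper's approach derives such trees from architecture models, the risk-assessment study, and a security knowledge database rather than by hand.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AttackNode:
    goal: str
    gate: str = "OR"                       # "OR": any child suffices; "AND": all children needed
    children: List["AttackNode"] = field(default_factory=list)
    feasible: bool = False                 # leaf flag, e.g. set from a risk-assessment study

    def is_feasible(self) -> bool:
        if not self.children:
            return self.feasible
        results = [child.is_feasible() for child in self.children]
        return all(results) if self.gate == "AND" else any(results)

# Hypothetical fragment: compromising a server needs credentials AND network reachability.
tree = AttackNode("Compromise server", "AND", [
    AttackNode("Obtain credentials", "OR", [
        AttackNode("Phish administrator", feasible=True),
        AttackNode("Brute-force password", feasible=False),
    ]),
    AttackNode("Reach host on network", feasible=True),
])
print(tree.is_feasible())   # True: phishing plus network access form one viable path
```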

    BCAUS Project description and consideration of separation of data and control

    The commonly stated claims that data may be segregated from program control in generic expert system shells, and that such tools support straightforward knowledge representation, were examined. The ideal of separating data from program control in expert systems is difficult to realize for a variety of reasons. One approach to achieving this goal is to integrate hybrid collections of specialized shells and tools instead of producing custom systems built with a single all-purpose expert system tool. Aspects of these issues are examined in the context of a specific diagnostic expert system application, the Backup Control Mode Analysis and Utility System (BCAUS), being developed for the Gamma Ray Observatory (GRO) spacecraft. The project and the knowledge gained in working on it are described.
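As a toy illustration of the data/control separation at issue, the sketch below keeps diagnostic rules in a plain data structure while a generic forward-chaining loop supplies the control. The rules and facts are invented for this example and do not describe BCAUS itself.

```python
# Rules as data: (name, set of required facts, fact to conclude). Contents are hypothetical.
rules = [
    ("R1", {"telemetry_lost", "backup_mode_active"}, "check_antenna_pointing"),
    ("R2", {"check_antenna_pointing", "pointing_error_high"}, "recommend_reorientation"),
]

def forward_chain(facts, rules):
    """Generic control: fire any rule whose conditions hold, until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"telemetry_lost", "backup_mode_active", "pointing_error_high"}, rules))
```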