36 research outputs found

    Random Surfing Without Teleportation

    In the standard Random Surfer Model, the teleportation matrix is necessary to ensure that the final PageRank vector is well-defined. The introduction of this matrix, however, results in serious problems and imposes fundamental limitations on the quality of the ranking vectors. In this work, building on the recently proposed NCDawareRank framework, we exploit the decomposition of the underlying space into blocks and derive easy-to-check necessary and sufficient conditions for random surfing without teleportation. Comment: 13 pages. Published in the volume "Algorithms, Probability, Networks and Games", Springer-Verlag, 2015. (The updated version corrects small typos/errors.)
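    The conditions themselves are specific to the NCDawareRank block decomposition and are not reproduced here. For context, the sketch below shows the standard random-surfer iteration that the paper starts from, with the teleportation term made explicit; the damping factor, tolerance and toy transition matrix are illustrative choices, not values from the paper.

```python
import numpy as np

def pagerank(P, alpha=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for PageRank on a row-stochastic transition matrix P.

    The (1 - alpha) * v teleportation term is what the standard model adds to
    guarantee a well-defined ranking vector; the paper asks when it can be
    dropped, which this sketch does not attempt to decide.
    """
    n = P.shape[0]
    v = np.full(n, 1.0 / n)        # uniform teleportation distribution
    pi = np.full(n, 1.0 / n)       # initial rank vector
    for _ in range(max_iter):
        new_pi = alpha * (pi @ P) + (1.0 - alpha) * v
        if np.abs(new_pi - pi).sum() < tol:
            return new_pi
        pi = new_pi
    return pi

# Toy 3-node cycle: 0 -> 1 -> 2 -> 0
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
print(pagerank(P))
```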

    ReSpecTX: Programming Interaction Made Easy

    In this paper we present the ReSpecTX language, toolchain, and standard library as a first step along a path aimed at closing the gap between coordination languages (mostly a prerogative of the academic realm until now) and their industrial counterparts. Since the limited adoption of coordination languages within the industrial realm is also due to the lack of suitable toolchains and libraries of reusable mechanisms, ReSpecTX equips a core coordination language (ReSpecT) with tools and features commonly found in mainstream programming languages. In particular, ReSpecTX makes it possible to provide a reference library of reusable and composable interaction patterns.

    Argumentation for Knowledge Representation, Conflict Resolution, Defeasible Inference and Its Integration with Machine Learning

    Modern machine learning is devoted to the construction of algorithms and computational procedures that can automatically improve with experience and learn from data. Defeasible argumentation has emerged as a sub-topic of artificial intelligence aimed at formalising common-sense qualitative reasoning. The former is an inductive approach to inference while the latter is deductive, each having advantages and limitations. A great challenge for theoretical and applied research in AI is their integration. The first aim of this chapter is to provide readers, informally, with the basic notions of defeasible and non-monotonic reasoning. It then describes argumentation theory, a paradigm for implementing defeasible reasoning in practice, as well as the common multi-layer schema upon which argument-based systems are usually built. The second aim is to describe a selection of argument-based applications in the medical and health-care sectors, informed by the multi-layer schema. A summary of the features that emerge from the applications under review shows why defeasible argumentation is attractive for knowledge representation, conflict resolution and inference under uncertainty. Open problems and challenges in the field of argumentation are subsequently described, followed by a future outlook in which three points of integration with machine learning are proposed.
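    As a concrete, minimal illustration of the argumentation machinery the chapter introduces (not of any specific system it reviews), the sketch below computes the grounded extension of a Dung-style abstract argumentation framework; the argument names and attack relation are invented.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework.

    Computed as the least fixpoint of the characteristic function: start from
    the empty set and repeatedly add every argument all of whose attackers are
    themselves attacked by the current set (unattacked arguments enter first).
    """
    extension = set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in extension:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if all(any((z, x) in attacks for z in extension) for x in attackers):
                extension.add(a)
                changed = True
    return extension

# Toy framework: c attacks b, b attacks a; c is unattacked, so it defends a.
print(grounded_extension({"a", "b", "c"}, {("c", "b"), ("b", "a")}))  # {'c', 'a'}
```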

    Visual Overlay on OpenStreetMap Data to Support Spatial Exploration of Urban Environments

    Increasing volumes of spatial data about urban areas are captured and made available via volunteered geographic information (VGI) sources such as OpenStreetMap (OSM). Hence, new opportunities arise for regional exploration that can improve the lives of citizens through spatial decision support. We believe that, with the advent of web technologies, VGI data about the urban environment can be used to present a constructive overview of the regional infrastructure. Current location-based services provide general map-based information for end users with conventional local search functionality, so the presentation of rich urban information is limited. In this work, we analyze OSM data to classify geo-entities into meaningful categories covering facilities, landscape and land-use distribution. We employ a visual overlay of heat maps and interactive visualizations to present the regional characterization derived from the OSM data classification. In the proposed interface, users can express a variety of spatial queries to specify their geographic interests. They can compare the characterization of urban areas with respect to multiple spatial dimensions of interest and search for the most suitable region. The search experience is further enhanced via efficient optimization and interaction methods to support the decision making of end users. We report the end-user acceptability and efficiency of the proposed system via usability studies and a performance analysis comparison.
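    The paper's actual classification scheme and heat-map pipeline are not reproduced here; the sketch below only illustrates the general idea of mapping OSM tags to coarse facility categories for a region, using a hypothetical tag-to-category table.

```python
# Hypothetical mapping from OSM tag values to coarse categories; the paper's
# real classification scheme is richer and is not reproduced here.
CATEGORY_BY_TAG = {
    "school": "education", "university": "education",
    "hospital": "health", "pharmacy": "health",
    "restaurant": "food", "cafe": "food",
    "park": "landscape", "forest": "landscape",
}

def categorise(elements):
    """Count OSM elements per coarse category for one region.

    `elements` is an iterable of tag dictionaries, e.g. the "tags" field of
    elements returned by an Overpass API query.
    """
    counts = {}
    for tags in elements:
        tag_value = tags.get("amenity") or tags.get("landuse") or tags.get("leisure")
        category = CATEGORY_BY_TAG.get(tag_value)
        if category:
            counts[category] = counts.get(category, 0) + 1
    return counts

print(categorise([{"amenity": "school"}, {"leisure": "park"}, {"amenity": "cafe"}]))
```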

    Artificial Intelligence, Social Media and Supply Chain Management: The Way Forward

    Supply chain management (SCM) involves a complex network of multiple entities, ranging from business partners to end consumers. These stakeholders frequently use social media platforms, such as Twitter and Facebook, to voice their opinions and concerns. AI-based applications, such as sentiment analysis, allow us to extract relevant information from these deliberations. We argue that the context-specific application of AI, compared to generic approaches, is more effective in retrieving meaningful insights from social media data for SCM. We present a conceptual overview of prevalent techniques and available resources for information extraction. Subsequently, we identify specific areas of SCM where context-aware sentiment analysis can enhance overall efficiency.
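    As a rough illustration of the difference between generic and context-specific sentiment analysis (not the chapter's own method), the sketch below adapts NLTK's general-purpose VADER analyzer by adding a few hypothetical supply-chain terms to its lexicon; the terms and valence values are invented.

```python
# Requires: pip install nltk; then nltk.download("vader_lexicon")
from nltk.sentiment.vader import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Hypothetical SCM-specific adjustments: terms a generic lexicon scores
# neutrally but that carry clear polarity in a supply chain context.
# Values follow VADER's valence scale (roughly -4 to +4).
analyzer.lexicon.update({
    "backorder": -2.0,
    "stockout": -2.5,
    "recall": -2.0,
    "on-time": 1.5,
})

for post in ["Another stockout at my local store, third week in a row.",
             "Delivery arrived on-time and intact, great service."]:
    print(post, "->", analyzer.polarity_scores(post)["compound"])
```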

    Moving towards the semantic web: enabling new technologies through the semantic annotation of social contents.

    Social Web technologies have caused an exponential growth of the documents available through the Web, making enormous amounts of textual electronic resources available. Users may be overwhelmed by such an amount of content and, therefore, the automatic analysis and exploitation of all this information is of interest to the data mining community. Data mining algorithms exploit features of entities in order to characterise, group or classify them according to their resemblance. Data by itself does not carry any meaning; it needs to be interpreted to convey information. Classical data analysis methods did not aim to "understand" the content: the data were treated as meaningless numbers, and statistics were calculated on them to build models that were interpreted manually by human domain experts. Nowadays, motivated by the Semantic Web, many researchers have proposed semantic-grounded data classification and clustering methods that are able to exploit textual data at a conceptual level. However, they usually rely on pre-annotated inputs to be able to semantically interpret textual data such as the content of Web pages. The usability of all these methods is related to the linkage between data and its meaning. This work focuses on the development of a general methodology able to detect the most relevant features of a particular textual resource, finding out their semantics (associating them with concepts modelled in ontologies) and detecting its main topics. The proposed methods are unsupervised (avoiding the manual annotation bottleneck), domain-independent (applicable to any area of knowledge) and flexible (able to deal with heterogeneous resources: raw text documents, semi-structured user-generated documents such as Wikipedia articles, or short and noisy tweets). The methods have been evaluated in different fields (Tourism, Oncology). This work is a first step towards the automatic semantic annotation of documents, needed to pave the way towards the Semantic Web vision.
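    The thesis describes a far richer, unsupervised pipeline; purely as an illustration of the basic step of linking surface forms in text to ontology concepts, the sketch below matches n-grams against a hypothetical table of concept labels.

```python
# Minimal sketch: link surface forms in a text to ontology concepts by label
# matching. The ontology and its URIs are a hypothetical toy.
ONTOLOGY_LABELS = {
    "melanoma": "onto:Melanoma",
    "skin cancer": "onto:Melanoma",
    "biopsy": "onto:Biopsy",
    "sunscreen": "onto:SunProtection",
}

def annotate(text, max_ngram=2):
    """Return (surface form, concept URI) pairs found in the text."""
    tokens = text.lower().split()
    found = []
    for n in range(max_ngram, 0, -1):          # prefer longer matches first
        for i in range(len(tokens) - n + 1):
            candidate = " ".join(tokens[i:i + n])
            if candidate in ONTOLOGY_LABELS:
                found.append((candidate, ONTOLOGY_LABELS[candidate]))
    return found

print(annotate("The biopsy confirmed an early stage skin cancer"))
```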

    An empirical study on the various stock market prediction methods

    Investment in the stock market is one of the most popular investment activities. However, prediction of the stock market has remained a hard task because of the non-linearity it exhibits. This non-linearity arises from multiple interacting factors such as the global economy, political situations, sector performance, economic indicators, foreign institutional investment, domestic institutional investment, and so on. A proper set of such representative factors must be analyzed to build an efficient prediction model, and even a marginal improvement in prediction accuracy can be gainful for investors. This review provides a detailed analysis of research papers presenting stock market prediction techniques, assessed in separate sections on time series analysis and sentiment analysis. A detailed discussion of research gaps and issues is presented. The reviewed articles are analyzed based on the prediction techniques, optimization algorithms, feature selection methods, datasets, toolsets, evaluation metrics, and input parameters used. The techniques are further investigated to analyze how the prediction methods relate to feature selection algorithms, datasets, and input parameters. In addition, major problems with the present techniques are discussed. This survey will provide researchers with deeper insight into various aspects of current stock market prediction methods.
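    None of the surveyed prediction models is reproduced here; as a minimal illustration of the time-series branch of the literature, the sketch below fits a linear model on lagged returns of a synthetic price series using scikit-learn. The lag count, split ratio and data are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def lagged_features(prices, n_lags=5):
    """Turn a price series into (X, y): each row holds the previous
    n_lags daily returns; the target is the next day's return."""
    returns = np.diff(prices) / prices[:-1]
    X = np.array([returns[i:i + n_lags] for i in range(len(returns) - n_lags)])
    y = returns[n_lags:]
    return X, y

# Synthetic random-walk prices, purely to exercise the pipeline.
rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + 0.01 * rng.standard_normal(500))

X, y = lagged_features(prices)
split = int(0.8 * len(X))
model = LinearRegression().fit(X[:split], y[:split])
print("out-of-sample R^2:", model.score(X[split:], y[split:]))
```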

    Information theoretic causality detection between financial and sentiment data

    The interaction between the flow of sentiment expressed in blogs and media and the dynamics of stock market prices is analyzed through an information-theoretic measure, the transfer entropy, to quantify causality relations. We analyzed daily stock prices and daily social media sentiment for the top 50 companies in the Standard & Poor's (S&P) index during the period from November 2018 to November 2020. We also analyzed news mentioning these companies during the same period. We found that there is a causal flux of information that links those companies. The largest fraction of significant causal links is between prices and between sentiments, but there is also significant causal information flowing both ways, from sentiment to prices and from prices to sentiment. We observe that the strongest causal signal between sentiment and prices is associated with the Tech sector.
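    As an illustration of the measure used (though not of the paper's estimation procedure or data), the sketch below gives a simple plug-in estimate of transfer entropy between two discretised series with a lag of one step; the bin count and the synthetic example are illustrative choices.

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, bins=2):
    """Plug-in estimate of transfer entropy source -> target (in bits),
    with one lag and a simple equal-frequency discretisation."""
    def discretise(x):
        ranks = np.argsort(np.argsort(x))          # rank of each sample
        return (ranks * bins // len(x)).astype(int)

    s, t = discretise(np.asarray(source)), discretise(np.asarray(target))
    triples = Counter(zip(t[1:], t[:-1], s[:-1]))  # (target_next, target, source)
    pairs_ts = Counter(zip(t[:-1], s[:-1]))        # (target, source)
    pairs_tt = Counter(zip(t[1:], t[:-1]))         # (target_next, target)
    singles = Counter(t[:-1])                      # target
    n = len(t) - 1

    te = 0.0
    for (t1, t0, s0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_ts[(t0, s0)]             # p(t1 | t0, s0)
        p_cond_self = pairs_tt[(t1, t0)] / singles[t0]   # p(t1 | t0)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# Synthetic example: x is driven by the previous value of y, so TE(y -> x)
# should clearly exceed TE(x -> y).
rng = np.random.default_rng(1)
y = rng.standard_normal(5000)
x = np.roll(y, 1) + 0.5 * rng.standard_normal(5000)
print("TE y->x:", transfer_entropy(y, x))
print("TE x->y:", transfer_entropy(x, y))
```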

    Review article: Detection of actionable tweets in crisis events

    Messages on social media can be an important source of information during crisis situations. They can frequently provide details about developments much faster than traditional sources (e.g., official news) and can offer personal perspectives on events, such as opinions or specific needs. In the future, these messages can also serve to assess disaster risks. One challenge for utilizing social media in crisis situations is the reliable detection of relevant messages in a flood of data. Researchers have started to look into this problem in recent years, beginning with crowdsourced methods. Lately, approaches have shifted towards the automatic analysis of messages. A major stumbling block here is the question of exactly which messages are considered relevant or informative, as this depends on the specific usage scenario and the role of the user in that scenario. In this review article, we present methods for the automatic detection of crisis-related messages (tweets) on Twitter. We start by showing the varying definitions of importance and relevance relating to disasters, leading into the concept of use-case-dependent actionability that has recently become more popular and is the focal point of this review. This is followed by an overview of existing crisis-related social media data sets for evaluation and training purposes. We then compare approaches for solving the detection problem based (1) on filtering by characteristics like keywords and location, (2) on crowdsourcing, and (3) on machine learning techniques. We analyze the suitability and limitations of these approaches with regard to actionability. We then point out particular challenges, such as the linguistic issues concerning social media data. Finally, we suggest future avenues of research and show connections to related tasks, such as the subsequent semantic classification of tweets.
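    Purely as an illustration of approach (3), the sketch below trains a TF-IDF plus logistic-regression classifier with scikit-learn on a handful of invented tweets labelled for actionability; the systems covered in the review are trained on much larger corpora with richer features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = actionable for responders, 0 = not).
tweets = [
    "Family trapped on the roof at Elm Street, water still rising, please send a boat",
    "Bridge on route 9 collapsed, road impassable for ambulances",
    "Thoughts and prayers for everyone affected by the flood",
    "Can't believe how much it rained this weekend",
]
labels = [1, 1, 0, 0]

# Word and bigram TF-IDF features feeding a linear classifier.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(tweets, labels)

print(classifier.predict(["Need drinking water and insulin at the shelter on 5th Avenue"]))
```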