
    Smart communications network management through a synthesis of distributed intelligence and information

    Demands on communications networks to support bundled, interdependent communications services (data, voice, video) are increasing in complexity. Smart network management techniques are required to meet this demand. Such management techniques are envisioned to be based on two main technologies: (i) embedded intelligence; and (ii) up-to-the-millisecond delivery of performance information. This paper explores the idea of delivery of intelligent network management as a synthesis of distributed intelligence and information, obtained through information mining of network performance. © 2008 International Federation for Information Processing

    Eighth Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools, Aarhus, Denmark, October 22-24, 2007

    This booklet contains the proceedings of the Eighth Workshop on Practical Use of Coloured Petri Nets and the CPN Tools, October 22-24, 2007. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark. The papers are also available in electronic form via the web pages: http://www.daimi.au.dk/CPnets/workshop0

    Sentiment analysis in electronic negotiations

    The thesis analyzes the applicability of Sentiment Analysis and Predictive Analytics methods to textual communication in electronic negotiation transcripts. In particular, the thesis focuses on examining whether an automatic classifier can predict the outcome of ongoing, asynchronous electronic negotiations with sufficient accuracy. When combined with the influencing factors leading to the specific classification decision, such a classification model could be incorporated into a Negotiation Support System in order to proactively intervene in ongoing negotiations it judges as likely to fail and then give advice to the negotiators to prevent negotiation failure. To achieve this goal, an existing data set of electronic negotiations was used in a first study to create a Sentiment Lexicon, which collects verbal indicators of positive and negative polarity, respectively. In a second study, this lexicon was combined with a simplified, feature-based representation of electronic negotiation transcripts, which was then used as training data for various machine learning classifiers to determine the outcome of the negotiations from the transcripts. Here, both complete and partial negotiation transcripts were classified in order to assess classification quality in ongoing negotiations. The third study of the thesis sought to refine the classification model with respect to sentence-based granularity. To this end, human coders classified negotiation sentences regarding their subjectivity and polarity. The results of this content analysis approach were then used to train sentence-level subjectivity and polarity classifiers. The fourth and final study analyzed different aggregation methods for these sentence-level classification results in order to support the classifiers at the negotiation level of granularity. 
Different aggregation and classification models were discussed, applied to the negotiation data, and subsequently evaluated. The results of the studies show that it is possible, to a certain degree, to use a sentiment-based representation of negotiation data to automatically determine negotiation outcomes. In combination with the sentence-based classification models, negotiation classification quality increased further. However, this improvement was only found to be significant for complete negotiation transcripts. If only partial transcripts are used, specifically to simulate an ongoing negotiation scenario, the models tend to behave more erratically and classification quality degrades. This result suggests that polarized utterances (positive as well as negative) only carry unequivocal information (with respect to the outcome) towards the end of the negotiation. Earlier in the negotiation, the influence of these utterances is more ambiguous, hence decreasing classification accuracy for models using a sentiment-based representation. Regarding the original goal of the thesis, which is to provide a basic means of supporting ongoing negotiations, this means that the supporting mechanisms employed by a Negotiation Support System should focus on moderation techniques and the resolution of potential conflict situations. Approaches that could be used for further conflict diagnosis in interaction with the negotiators are given in the final chapter of the thesis, as well as a discussion of potential recommendations and advice the system could give and, lastly, approaches to visualize the classification data for the negotiators.
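The lexicon-plus-features pipeline described in the abstract can be sketched in a few lines. The word lists, the feature extraction, and the threshold rule below are invented for illustration; the thesis builds its Sentiment Lexicon empirically and trains proper machine learning classifiers on a much richer representation.

```python
import re

# Hypothetical polarity word lists standing in for the thesis's Sentiment Lexicon.
POSITIVE = {"agree", "good", "thanks", "accept", "happy"}
NEGATIVE = {"refuse", "bad", "unfair", "reject", "never"}

def features(transcript):
    """Map a transcript (list of utterances) to counts of polar words."""
    pos = neg = 0
    for utterance in transcript:
        for word in re.findall(r"[a-z]+", utterance.lower()):
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    return pos, neg

def predict_outcome(transcript):
    """Toy decision rule: more positive than negative cues -> predict success."""
    pos, neg = features(transcript)
    return "success" if pos >= neg else "failure"
```

On partial transcripts the same feature extraction applies, but, as the thesis finds, mid-negotiation polarity counts are far less informative than end-of-negotiation ones.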

    Towards Integration of Cognitive Models in Dialogue Management: Designing the Virtual Negotiation Coach Application

    This paper presents an approach to flexible and adaptive dialogue management driven by cognitive modelling of human dialogue behaviour. Artificial intelligent agents, based on the ACT-R cognitive architecture, participate together with human actors in a (meta)cognitive skills training within a negotiation scenario. The agent employs instance-based learning to decide on its own actions and to reflect on the behaviour of the opponent. We show that task-related actions can be handled by a cognitive agent that is a plausible dialogue partner. Separating task-related and dialogue control actions enables the application of sophisticated models along with a flexible architecture in which various alternative modelling methods can be combined. We evaluated the proposed approach with users assessing the relative contribution of various factors to the overall usability of a dialogue system. Subjective perceptions of effectiveness, efficiency, and satisfaction were correlated with various objective performance metrics, e.g. the number of (in)appropriate system responses, recovery strategies, and interaction pace. It was observed that dialogue system usability is determined most by the quality of the agreements reached in terms of estimated Pareto optimality, by the negotiation strategies the user selects, and by the quality of system recognition, interpretation, and responses. We compared human-human and human-agent performance with respect to the number and quality of agreements reached, the estimated cooperativeness level, and the frequency of accepted negative outcomes. Evaluation experiments showed promising, consistently positive results throughout the range of the relevant scales.
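For readers unfamiliar with instance-based learning, the agent's decision mechanism can be approximated as follows. This is a toy sketch, not the paper's ACT-R implementation: the similarity function, the blending rule, and all names are our assumptions for illustration.

```python
def similarity(s1, s2):
    """Similarity of two numeric states: 1 at identity, decaying with distance."""
    return 1.0 / (1.0 + abs(s1 - s2))

class IBLAgent:
    """Chooses actions by blending utilities of similar remembered instances."""

    def __init__(self):
        self.instances = []  # list of (state, action, observed utility)

    def record(self, state, action, utility):
        self.instances.append((state, action, utility))

    def choose(self, state, actions):
        """Pick the action with the highest similarity-weighted (blended) utility."""
        def blended(a):
            pairs = [(similarity(state, s), u)
                     for s, act, u in self.instances if act == a]
            if not pairs:
                return 0.0
            total = sum(w for w, _ in pairs)
            return sum(w * u for w, u in pairs) / total
        return max(actions, key=blended)
```

The blending step mirrors the general idea of instance-based learning theory: past episodes similar to the current situation dominate the utility estimate for each candidate action.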

    What to bid and when to stop

    Negotiation is an important activity in human society, and is studied by various disciplines, ranging from economics and game theory, to electronic commerce, social psychology, and artificial intelligence. Traditionally, negotiation is a necessary, but also time-consuming and expensive activity. Therefore, in the last decades there has been a large interest in the automation of negotiation, for example in the setting of e-commerce. This interest is fueled by the promise of automated agents eventually being able to negotiate on behalf of human negotiators. Every year, automated negotiation agents are improving in various ways, and there is now a large body of negotiation strategies available, all with their unique strengths and weaknesses. For example, some agents are able to predict the opponent's preferences very well, while others focus more on having a sophisticated bidding strategy. The problem, however, is that there is little incremental improvement in agent design, as the agents are tested in varying negotiation settings, using a diverse set of performance measures. This makes it very difficult to meaningfully compare the agents, let alone their underlying techniques. As a result, we lack a reliable way to pinpoint the most effective components in a negotiating agent. There are two major advantages of distinguishing between the different components of a negotiating agent's strategy: first, it allows the study of the behavior and performance of the components in isolation. For example, it becomes possible to compare the preference learning component of all agents, and to identify the best among them. Second, we can proceed to mix and match different components to create new negotiation strategies, e.g. replacing the preference learning technique of an agent and then examining whether this makes a difference. 
Such a procedure enables us to combine the individual components to systematically explore the space of possible negotiation strategies. To develop a compositional approach to evaluate and combine the components, we identify structure in most agent designs by introducing the BOA architecture, in which we can develop and integrate the different components of a negotiating agent. We identify three main components of a general negotiation strategy; namely a bidding strategy (B), possibly an opponent model (O), and an acceptance strategy (A). The bidding strategy considers what concessions it deems appropriate given its own preferences, and takes the opponent into account by using an opponent model. The acceptance strategy decides whether offers proposed by the opponent should be accepted. The BOA architecture is integrated into a generic negotiation environment called Genius, which is a software environment for designing and evaluating negotiation strategies. To explore the negotiation strategy space of the negotiation research community, we amend the Genius repository with various existing agents and scenarios from literature. Additionally, we organize a yearly international negotiation competition (ANAC) to harvest even more strategies and scenarios. ANAC also acts as an evaluation tool for negotiation strategies, and encourages the design of negotiation strategies and scenarios. We re-implement agents from literature and ANAC and decouple them to fit into the BOA architecture without introducing any changes in their behavior. For each of the three components, we manage to find and analyze the best ones for specific cases, as described below. 
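The B/O/A decomposition lends itself naturally to a component-based design. Below is a minimal sketch of that idea; the class names, the linear concession curve, and the toy opponent model are our assumptions, not the Genius APIs.

```python
class TimeConcedingBidder:
    """B: concede linearly from full demand toward a reservation utility."""
    def __init__(self, reservation=0.5):
        self.reservation = reservation

    def propose(self, t):
        # t in [0, 1] is normalized negotiation time
        return 1.0 - (1.0 - self.reservation) * t

class FrequencyOpponentModel:
    """O: a toy model that only tracks the opponent's best offer so far."""
    def __init__(self):
        self.offers = []

    def update(self, offer):
        self.offers.append(offer)

    def estimated_ceiling(self):
        return max(self.offers, default=0.0)

class ThresholdAcceptor:
    """A: accept when the opponent's offer meets our next planned bid."""
    def accept(self, offer, next_bid):
        return offer >= next_bid

class BOAAgent:
    """Composes the three components; swapping any one yields a new strategy."""
    def __init__(self, b, o, a):
        self.b, self.o, self.a = b, o, a

    def respond(self, offer, t):
        self.o.update(offer)
        next_bid = self.b.propose(t)
        return "accept" if self.a.accept(offer, next_bid) else next_bid
```

Because each slot is an independent object, the "mix and match" experiments described above amount to constructing `BOAAgent` with different component combinations.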
We show that the BOA framework leads to significant improvements in agent design by winning ANAC 2013, which had 19 participating teams from 8 international institutions, with an agent that is designed using the BOA framework and is informed by a preliminary analysis of the different components. In every negotiation, one of the negotiating parties must accept an offer to reach an agreement. Therefore, it is important that a negotiator employs a proficient mechanism to decide under which conditions to accept. When contemplating whether to accept an offer, the agent is faced with the acceptance dilemma: accepting the offer may be suboptimal, as better offers may still be presented before time runs out. On the other hand, accepting too late may prevent an agreement from being reached, resulting in a break off with no gain for either party. We classify and compare state-of-the-art generic acceptance conditions. We propose new acceptance strategies and we demonstrate that they outperform the other conditions. We also provide insight into why some conditions work better than others and investigate correlations between the properties of the negotiation scenario and the efficacy of acceptance conditions. Later, we adopt a more principled approach by applying optimal stopping theory to calculate the optimal decision on the acceptance of an offer. We approach the decision of whether to accept as a sequential decision problem, by modeling the bids received as a stochastic process. We determine the optimal acceptance policies for particular opponent classes and we present an approach to estimate the expected range of offers when the type of opponent is unknown. We show that the proposed approach is able to find the optimal time to accept, and improves upon all existing acceptance strategies. Another principal component of a negotiating agent's strategy is its ability to take the opponent's preferences into account. 
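The optimal-stopping view of acceptance can be made concrete with a standard textbook case: if incoming offers are modeled as i.i.d. Uniform(0, 1) draws, dynamic programming yields a per-round acceptance threshold. The uniform assumption is ours for illustration; the thesis estimates opponent-specific offer distributions.

```python
def uniform_thresholds(rounds):
    """thresholds[k] = expected value of still having k offers to come,
    assuming offers are i.i.d. Uniform(0, 1).
    Optimal policy: accept the current offer x iff x >= thresholds[k].
    Recurrence: V_k = E[max(X, V_{k-1})] = (1 + V_{k-1}**2) / 2, V_0 = 0.
    """
    v = [0.0]  # with no offers left, any offer beats walking away with nothing
    for _ in range(rounds):
        v.append((1.0 + v[-1] ** 2) / 2.0)
    return v
```

The thresholds rise with the number of remaining rounds, which matches the acceptance dilemma described above: early on, waiting for a better offer is worth more, so the agent should be pickier.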
The quality of an opponent model can be measured in two different ways. One is to use the agent's performance as a benchmark for the model's quality. We evaluate and compare the performance of a selection of state-of-the-art opponent modeling techniques in negotiation. We provide an overview of the factors influencing the quality of a model and we analyze how the performance of opponent models depends on the negotiation setting. We identify a class of simple and surprisingly effective opponent modeling techniques that did not receive much previous attention in literature. The other way to measure the quality of an opponent model is to directly evaluate its accuracy by using similarity measures. We review all methods to measure the accuracy of an opponent model and we then analyze how changes in accuracy translate into performance differences. Moreover, we pinpoint the best predictors for good performance. This leads to new insights concerning how to construct an opponent model, and what we need to measure when optimizing performance. Finally, we take two different approaches to gain more insight into effective bidding strategies. We present a new classification method for negotiation strategies, based on their pattern of concession making against different kinds of opponents. We apply this technique to classify some well-known negotiating strategies, and we formulate guidelines on how agents should bid in order to be successful, which gives insight into the bidding strategy space of negotiating agents. Furthermore, we apply optimal stopping theory again, this time to find the concessions that maximize utility for the bidder against particular opponents. We show there is an interesting connection between optimal bidding and optimal acceptance strategies, in the sense that they are mirrored versions of each other. Lastly, after analyzing all components separately, we put the pieces back together again. 
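One plausible similarity measure for the second evaluation route is a rank correlation between the true and the estimated preference ordering over outcomes. The sketch below uses Kendall's tau; the helper names are ours, and this is only one of the accuracy measures such a survey would cover.

```python
from itertools import combinations

def kendall_tau(true_utils, est_utils):
    """Kendall's tau between the orderings induced by two utility assignments.
    Both arguments map outcome -> utility over the same outcome set.
    Returns 1.0 for identical orderings, -1.0 for fully reversed ones."""
    outcomes = list(true_utils)
    concordant = discordant = 0
    for a, b in combinations(outcomes, 2):
        s_true = true_utils[a] - true_utils[b]
        s_est = est_utils[a] - est_utils[b]
        if s_true * s_est > 0:
            concordant += 1   # both orderings agree on this pair
        elif s_true * s_est < 0:
            discordant += 1   # the orderings disagree on this pair
    pairs = concordant + discordant
    return (concordant - discordant) / pairs if pairs else 0.0
```

A key question the text raises is then measurable: how well do improvements in this accuracy score predict improvements in negotiation performance?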
We take all BOA components accumulated so far, including the best ones, and combine them all together to explore the space of negotiation strategies. We compute the contribution of each component to the overall negotiation result, and we study the interaction between components. We find that combining the best agent components indeed makes the strongest agents. This shows that the component-based view of the BOA architecture not only provides a useful basis for developing negotiating agents but also provides a useful analytical tool. By varying the BOA components we are able to demonstrate the contribution of each component to the negotiation result, and thus analyze the significance of each. The bidding strategy is by far the most important to consider, followed by the acceptance conditions and finally followed by the opponent model. Our results validate the analytical approach of the BOA framework to first optimize the individual components, and then to recombine them into a negotiating agent.

    Context aware Q-Learning-based model for decision support in the negotiation of energy contracts

    Automated negotiation plays a crucial role in decision support for bilateral energy transactions. In fact, an adequate analysis of the past actions of opposing negotiators can improve the decision-making process of market players, allowing them to choose the most appropriate parties to negotiate with in order to increase their outcomes. This paper proposes a new model to estimate the expected prices that can be achieved in bilateral contracts under a specific context, enabling adequate risk management in the negotiation process. The proposed approach is based on an adaptation of the Q-Learning reinforcement learning algorithm to choose the best scenario (set of forecast contract prices) from a set of possible scenarios that are determined using several forecasting and estimation methods. The learning process assesses the probability of occurrence of each scenario by comparing each expected scenario with the real scenario. The final chosen scenario is the one that presents the highest expected utility value. Moreover, the learning method can determine the best scenario for each context, since the behaviour of players can change according to the negotiation environment. Consequently, these conditions influence the final contract price of negotiations. This approach allows the supported player to be prepared for the negotiation scenario that is most probable to represent a reliable approximation of the actual negotiation environment.
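A simplified sketch of the paper's idea: keep a value estimate per (context, scenario) pair and update it toward a reward reflecting how close the scenario's forecast price came to the realized price. The update rule, the reward shape, and all names below are our assumptions, not the paper's exact formulation.

```python
class ScenarioSelector:
    """Q-Learning-style selection of the most reliable price scenario per context."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha  # learning rate
        self.q = {}         # (context, scenario) -> value estimate

    def best(self, context, scenarios):
        """Pick the scenario with the highest learned value in this context."""
        return max(scenarios, key=lambda s: self.q.get((context, s), 0.0))

    def update(self, context, scenario, forecast_price, real_price):
        """Reward is 1 for a perfect forecast, shrinking with forecast error."""
        reward = 1.0 / (1.0 + abs(forecast_price - real_price))
        key = (context, scenario)
        old = self.q.get(key, 0.0)
        # Stateless Q-update: move the estimate toward the observed reward.
        self.q[key] = old + self.alpha * (reward - old)
```

Keying the table on context captures the paper's point that the best scenario depends on the negotiation environment, so scenarios can rank differently under, say, peak versus off-peak conditions.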

    Proceedings of RSEEM 2006 : 13th Research Symposium on Emerging Electronic Markets

    Electronic markets have been a prominent topic of research for the past decade. Moreover, we have seen the rise but also the disappearance of many electronic marketplaces in practice. Today, electronic markets are a firm component of inter-organisational exchanges and can be observed in many sectors. The Research Symposium on Emerging Electronic Markets is an annual conference bringing together researchers working on various topics concerning electronic markets in research and practice. The focus theme of the 13th Research Symposium on Emerging Electronic Markets (RSEEM 2006) was "Evolution in Electronic Markets". Looking back at more than 10 years of research activities in electronic markets, the evolution can be well observed. While electronic commerce activities were based largely on catalogue-based shopping, there are now many examples that go beyond pure catalogues. For example, dynamic and flexible electronic transactions such as electronic negotiations and electronic auctions are enabled. Negotiations and auctions are the basis for inter-organisational trade exchanges about services as well as products. Mass customisation opens up new opportunities for electronic markets. Multichannel electronic commerce reflects today's varied requirements posed on information and communication technology as well as on organisational structures. In recent years, service-oriented architectures of electronic markets have enabled ICT infrastructures for supporting flexible e-commerce and e-market solutions. RSEEM 2006 was held at the University of Hohenheim, Stuttgart, Germany in September 2006. The proceedings show a variety of approaches and include the 8 selected research papers. The contributions cover the focus theme through conceptual models and systems design, application scenarios as well as evaluation research approaches.

    Natural language processing and financial markets: semi-supervised modelling of coronavirus and economic news

    This paper investigates the reactions of US financial markets to press news from January 2019 to 1 May 2020. To this end, we deduce the content and sentiment of the news by developing apposite indices from the headlines and snippets of The New York Times, using unsupervised machine learning techniques. In particular, we use Latent Dirichlet Allocation to infer the content (topics) of the articles, and Word Embedding (implemented with the Skip-gram model) and K-Means to measure their sentiment (uncertainty). 
In this way, we arrive at the definition of a set of daily topic-specific uncertainty indices. These indices are then used to explain the behaviour of the US financial markets by implementing a batch of EGARCH models. In substance, we find that two topic-specific uncertainty indices, one related to COVID-19 news and the other to trade war news, explain the bulk of the movements in the financial markets from the beginning of 2019 to end-April 2020. Moreover, we find that the topic-specific uncertainty index related to the economy and the Federal Reserve is positively related to the financial markets, meaning that our index is able to capture actions of the Federal Reserve during periods of uncertainty.
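The K-Means step of the pipeline can be illustrated on toy data: cluster (pretend) word-embedding vectors so that words fall into an "uncertainty" or a "neutral" group. Real embeddings would come from the Skip-gram model; the 2-D vectors and the plain two-cluster implementation below are our own simplification.

```python
def dist(p, q):
    """Squared Euclidean distance between two 2-D points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def centroid(pts, fallback):
    """Mean of a list of 2-D points; keep the old centre if the cluster emptied."""
    if not pts:
        return fallback
    return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))

def kmeans_2(points, iters=10):
    """Plain 2-cluster K-Means on 2-D points; returns a cluster label per point."""
    c0, c1 = points[0], points[1]  # naive initialization from the first two points
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if dist(p, c0) <= dist(p, c1) else 1 for p in points]
        c0 = centroid([p for p, l in zip(points, labels) if l == 0], c0)
        c1 = centroid([p for p, l in zip(points, labels) if l == 1], c1)
    return labels
```

In the paper's setting, each point would be the embedding of a word from the headlines, and the daily uncertainty index would count, per topic, how many words fall in the uncertainty cluster.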