    Intelligent Business Process Optimization for the Service Industry

    Get PDF
    A company's sustainable competitive advantage derives from its capacity to create value for customers and to adapt its operational practices to changing situations. Business processes are at the heart of every company, so process excellence has become a key issue. This book introduces a novel approach to the autonomous optimization of business processes, applying sophisticated machine learning techniques such as Relational Reinforcement Learning and Particle Swarm Optimization.
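    The techniques are only named in this blurb, but Particle Swarm Optimization is simple enough to sketch. The following minimal Python sketch is illustrative only: the sphere objective, bounds, and hyperparameters are assumptions, and the book applies the technique to business process models rather than numeric vectors.

    import random

    def pso(objective, dim=2, n_particles=20, iters=100,
            w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
        # Minimal PSO sketch; all hyperparameters here are illustrative assumptions.
        # Initialize positions uniformly at random, velocities at zero.
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                   # per-particle best position
        pbest_val = [objective(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best

        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    # Velocity: inertia + pull toward own best + pull toward swarm best.
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # Example: minimize a sphere function as a stand-in for a process-cost metric.
    best, cost = pso(lambda x: sum(v * v for v in x))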

    Decision-Making with Multi-Step Expert Advice on the Web

    Get PDF
    This thesis deals with solving multi-step tasks by using advice from experts, which are algorithms that solve individual steps of such tasks. We contribute methods for maximizing the number of correct task solutions by selecting and combining experts for individual task instances, and methods for automating the process of solving tasks on the Web, where experts are available as Web services. Multi-step tasks frequently occur in Natural Language Processing (NLP) and Computer Vision, and as research progresses an increasing number of interchangeable experts for the same steps become available on the Web. Service provider platforms such as Algorithmia monetize expert access by making expert services available via their platform and having customers pay per execution. Such experts can be used to solve diverse tasks, which often consist of multiple steps and thus require pipelines of experts to generate hypotheses. We identify two distinct problems in solving multi-step tasks with expert services: (1) If the task is sufficiently complex, no single pipeline generates correct solutions for all possible task instances; one must therefore learn how to construct individual expert pipelines for individual task instances so as to maximize the number of correct solutions, while also taking into account the cost incurred by executing each expert. (2) To automatically solve multi-step tasks with expert services, we need to discover, execute and compose expert pipelines. Since complex functionalities and input parameters are mostly described only in text, Web automation entails integrating available expert services and data, interpreting user-specified task goals, and efficiently finding correct service configurations. In this thesis, we present solutions to both problems: (1) We learn well-performing expert pipelines from available reference data sets (comprising a number of task instances and their solutions), distinguishing between centralized and decentralized decision-making. We formalize the problem as a specialization of a Markov Decision Process (MDP), which we refer to as an Expert Process (EP), and integrate techniques from Statistical Relational Learning (SRL) and multiagent coordination. (2) We develop a framework for automatically discovering, executing and composing expert pipelines by exploiting methods developed for the Semantic Web. We lift the representations of experts with structured vocabularies modeled in the Resource Description Framework (RDF) and extend EPs to Semantic Expert Processes (SEPs) to enable the data-driven execution of experts in Web-based architectures. We evaluate our methods in different domains: Medical Assistance, with tasks in Image Processing and Surgical Phase Recognition, and NLP for textual data on the Web, where we address the task of Named Entity Recognition and Disambiguation (NERD).
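    The Expert Process described above is a specialization of an MDP for choosing experts per task instance. As intuition only, here is a hedged greedy sketch in Python; the Expert class, the step and expert names, the feature-based scores, and the score/(1+cost) heuristic are all hypothetical stand-ins for the learned SRL and multiagent policies the thesis actually uses.

    from dataclasses import dataclass

    @dataclass
    class Expert:
        name: str
        cost: float

        def score(self, features: dict) -> float:
            # Estimated probability that this expert solves its step correctly
            # for the given task instance (placeholder lookup; in the thesis,
            # such estimates are learned from reference data sets).
            return features.get(self.name, 0.5)

    def select_pipeline(steps, candidates, features, budget):
        # Greedily pick one expert per step, trading estimated correctness
        # against execution cost while respecting an overall budget.
        pipeline, spent, p_correct = [], 0.0, 1.0
        for step in steps:
            affordable = [e for e in candidates[step] if spent + e.cost <= budget]
            if not affordable:
                return None  # no feasible pipeline under this budget
            best = max(affordable, key=lambda e: e.score(features) / (1.0 + e.cost))
            pipeline.append(best)
            spent += best.cost
            p_correct *= best.score(features)
        return [e.name for e in pipeline], spent, p_correct

    # Hypothetical NERD-style task: a recognition step and a disambiguation step.
    candidates = {"recognize": [Expert("fast_ner", 0.2), Expert("deep_ner", 0.6)],
                  "disambiguate": [Expert("graph_linker", 0.3)]}
    features = {"fast_ner": 0.7, "deep_ner": 0.9, "graph_linker": 0.8}
    print(select_pipeline(["recognize", "disambiguate"], candidates, features, budget=1.0))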

    Multi-agent relational reinforcement learning

    No full text
    In this paper we study Relational Reinforcement Learning in a multi-agent setting. There is growing evidence in the Reinforcement Learning research community that a relational representation of the state space has many benefits over a propositional one. Complex tasks such as planning or information retrieval on the Web can be represented more naturally in relational form. Yet this relational structure has not been exploited for multi-agent reinforcement learning tasks and has so far been studied only in a single-agent context. This paper is a first attempt at bridging the gap between Relational Reinforcement Learning (RRL) and Multi-agent Systems (MAS). More precisely, we explore how a relational structure of the state space can be used in a multi-agent reinforcement learning context.
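    To make the benefit of a relational state space concrete, here is a hedged Python sketch in which concrete object names are replaced by canonical variables, so structurally identical states share Q-values. The name-sorting abstraction and the tiny fact vocabulary are assumptions for illustration; RRL systems typically achieve such generalization with relational regression trees rather than a lookup table.

    from collections import defaultdict

    def abstract_state(facts):
        # Replace concrete object names with canonical variables so that
        # structurally identical states map to the same key. This name-sorting
        # trick is a crude stand-in for real relational generalization.
        names = sorted({arg for _, *args in facts for arg in args})
        var = {n: f"X{i}" for i, n in enumerate(names)}
        return frozenset((pred, *(var[a] for a in args)) for pred, *args in facts)

    Q = defaultdict(float)

    def q_update(facts, action, reward, next_facts,
                 actions=("noop",), alpha=0.1, gamma=0.9):
        # Standard Q-learning update over the abstracted relational state.
        s, s_next = abstract_state(facts), abstract_state(next_facts)
        target = reward + gamma * max(Q[(s_next, a)] for a in actions)
        Q[(s, action)] += alpha * (target - Q[(s, action)])

    # Two states that differ only in object names share a single Q-entry:
    s1 = {("on", "a", "b"), ("clear", "a")}
    s2 = {("on", "x", "y"), ("clear", "x")}
    assert abstract_state(s1) == abstract_state(s2)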

    Multi-agent relational reinforcement learning: Explorations in multi-state coordination tasks

    No full text
    In this paper we report on using a relational state space in multi-agent reinforcement learning. There is growing evidence in the Reinforcement Learning research community that a relational representation of the state space has many benefits over a propositional one. Complex tasks such as planning or information retrieval on the Web can be represented more naturally in relational form. Yet this relational structure has not been exploited for multi-agent reinforcement learning tasks and has so far been studied only in a single-agent context. In this paper we explore the powerful possibilities of using Relational Reinforcement Learning (RRL) in complex multi-agent coordination tasks. More precisely, we consider an abstract multi-state coordination problem, which can be considered a variation and extension of repeated stateless Dispersion Games. Our approach shows that RRL allows a complex state space in a multi-agent environment to be represented more compactly and allows for fast convergence of learning agents. Moreover, with this technique, agents are able to build complex interactive models (in the sense of learning from an expert), to predict what other agents will do, and to generalize over this model. This enables solving complex multi-agent planning tasks, in which agents need to be adaptive and to learn, with more powerful tools.
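    As background for the Dispersion Games the paper extends, here is a hedged Python sketch of the repeated stateless base case with independent Q-learners. The n-agents/n-resources setup and the reward-for-sole-occupancy rule are assumptions chosen for illustration; the paper's multi-state relational variant is richer.

    import random
    from collections import Counter

    def play_dispersion(n_agents=4, episodes=2000, alpha=0.1, eps=0.1):
        # Q[i][a]: agent i's value estimate for choosing resource a
        # (as many resources as agents, so full dispersion is possible).
        Q = [[0.0] * n_agents for _ in range(n_agents)]
        for _ in range(episodes):
            # Epsilon-greedy action selection, independently per agent.
            acts = [random.randrange(n_agents) if random.random() < eps
                    else max(range(n_agents), key=lambda a: Q[i][a])
                    for i in range(n_agents)]
            counts = Counter(acts)
            for i, a in enumerate(acts):
                reward = 1.0 if counts[a] == 1 else 0.0  # alone on the resource
                Q[i][a] += alpha * (reward - Q[i][a])    # stateless (bandit) update
        return [max(range(n_agents), key=lambda a: Q[i][a]) for i in range(n_agents)]

    # After learning, the agents typically spread over distinct resources.
    print(play_dispersion())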
