
    Intelligent Data Mining Techniques for Automatic Service Management

    Today, as more and more industries enter the artificial intelligence era, business enterprises constantly explore innovative ways to expand their outreach and meet rising customer requirements, with the purpose of gaining a competitive advantage in the marketplace. However, the success of a business relies heavily on its IT services. Value-creating activities of a business cannot be accomplished without solid and continuous delivery of IT services, especially in an increasingly intricate and specialized world. Driven by both the growing complexity of IT environments and rapidly changing business needs, service providers are urgently seeking intelligent data mining and machine learning techniques to build a cognitive “brain” for IT service management, capable of automatically understanding, reasoning, and learning from operational data collected from human engineers and virtual engineers during IT service maintenance. The ultimate goal of IT service management optimization is to maximize the automation of routine IT procedures such as problem detection, determination, and resolution. However, fully automating the entire routine procedure without any human intervention remains a challenging task. In real IT systems, both step-wise resolution descriptions and scripted resolutions are often logged with their corresponding problematic incidents, and these logs typically contain abundant, valuable human domain knowledge. Hence, modeling, gathering, and utilizing the domain knowledge in IT system maintenance logs plays an extremely crucial role in IT service management optimization. To optimize IT service management with intelligent data mining techniques, three research directions are identified as greatly helpful for automatic service management: (1) efficiently extracting and organizing the domain knowledge from IT system maintenance logs; (2) collecting and updating the existing domain knowledge online by interactively recommending possible resolutions; and (3) automatically discovering the latent relations among scripted resolutions and intelligently suggesting proper scripted resolutions for IT problems. My dissertation addresses these challenges by designing and implementing a set of intelligent data-driven solutions, including (1) constructing a domain knowledge base for problem resolution inference; (2) recommending resolutions online in light of the explicit hierarchical resolution categories provided by domain experts; and (3) interactively recommending resolutions using the latent resolution relations learned through a collaborative filtering model.
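
    The third solution lends itself to a small illustration. Below is a minimal sketch of learning latent incident-resolution relations via matrix factorization, a common collaborative filtering technique; the toy usage matrix, factor dimensionality, and plain SGD training are illustrative assumptions, not the dissertation's actual model.

        # Minimal collaborative-filtering sketch: factorize an incident-by-resolution
        # usage matrix into latent factors, then recommend the top-scoring resolution.
        # Data and hyperparameters below are toy assumptions for illustration.
        import numpy as np

        rng = np.random.default_rng(0)
        R = rng.integers(0, 2, size=(8, 5)).astype(float)  # toy incident x resolution matrix

        n_factors, lr, reg, epochs = 3, 0.05, 0.01, 200
        P = 0.1 * rng.standard_normal((R.shape[0], n_factors))  # latent incident factors
        Q = 0.1 * rng.standard_normal((R.shape[1], n_factors))  # latent resolution factors

        for _ in range(epochs):
            for i in range(R.shape[0]):
                for j in range(R.shape[1]):
                    err = R[i, j] - P[i] @ Q[j]            # reconstruction error on one cell
                    P[i] += lr * (err * Q[j] - reg * P[i])
                    Q[j] += lr * (err * P[i] - reg * Q[j])

        scores = P @ Q.T  # predicted affinity between every incident and every resolution
        print("suggested resolution per incident:", scores.argmax(axis=1))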

    Safe Exploration for Optimizing Contextual Bandits

    Contextual bandit problems are a natural fit for many information retrieval tasks, such as learning to rank, text classification, and recommendation. However, existing learning methods for contextual bandit problems have one of two drawbacks: they either do not explore the space of all possible document rankings (i.e., actions) and, thus, may miss the optimal ranking, or they present suboptimal rankings to a user and, thus, may harm the user experience. We introduce a new learning method for contextual bandit problems, the Safe Exploration Algorithm (SEA), which overcomes both drawbacks. SEA starts by using a baseline (or production) ranking system (i.e., policy), which does not harm the user experience and, thus, is safe to execute, but has suboptimal performance and, thus, needs to be improved. Then SEA uses counterfactual learning to learn a new policy based on the behavior of the baseline policy. SEA also uses high-confidence off-policy evaluation to estimate the performance of the newly learned policy. Once the performance of the newly learned policy is at least as good as that of the baseline policy, SEA starts using the new policy to execute new actions, allowing it to actively explore favorable regions of the action space. This way, SEA never performs worse than the baseline policy and, thus, does not harm the user experience, while still exploring the action space and, thus, being able to find an optimal policy. Our experiments on text classification and document retrieval confirm the above by comparing SEA (and a boundless variant called BSEA) to online and offline learning methods for contextual bandit problems.
    Comment: 23 pages, 3 figures
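
    SEA's control loop can be sketched compactly: act with the baseline policy, learn counterfactually from its logs, and switch only once a high-confidence lower bound on the learned policy's value reaches the baseline's. The context-free two-action setup, IPS estimator, and Hoeffding-style bound below are simplifying assumptions, not the paper's exact construction.

        # Simplified sketch of the safe-exploration pattern described above.
        import numpy as np

        rng = np.random.default_rng(1)

        def baseline_policy():
            return np.array([0.7, 0.3])          # stochastic baseline: prefers action 0

        def reward(a):                           # hidden environment: action 1 is better
            return float(rng.random() < (0.3 if a == 0 else 0.6))

        logs = []                                # (propensity, action, reward) triples
        theta = np.zeros(2)                      # learned policy's action scores
        use_new_policy = False

        for t in range(5000):
            if use_new_policy:
                reward(int(np.argmax(theta)))    # deploy the learned policy
                continue
            probs = baseline_policy()
            a = int(rng.random() < probs[1])     # draw an action from the baseline
            r = reward(a)
            logs.append((probs[a], a, r))
            theta[a] += r / probs[a]             # crude IPS-weighted counterfactual update

            # high-confidence off-policy evaluation of the greedy learned policy
            a_new = int(np.argmax(theta))
            matched = [r_i / p_i for p_i, a_i, r_i in logs if a_i == a_new]
            if len(matched) > 100:
                v_new = sum(matched) / len(logs)                   # IPS value estimate
                lcb = v_new - np.sqrt(np.log(1 / 0.05) / (2 * len(logs)))
                v_baseline = np.mean([r_i for _, _, r_i in logs])
                use_new_policy = lcb >= v_baseline                 # switch only when safe

        print("switched to learned policy:", use_new_policy)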

    Dynamic allocation optimization in A/B tests using classification-based preprocessing

    An A/B test evaluates the impact of a new technology by running it in a real production environment and testing its performance on a set of items. Recently, promising new methods have optimized A/B tests with dynamic allocation. They allow a quicker decision about which variation (A or B) is best, saving money for the user. However, dynamic allocation with traditional methods requires certain assumptions that are not always verified in practice, mainly because the tested populations are not homogeneous. This document reports on a new reinforcement learning methodology that has been deployed by the commercial A/B testing platform AB Tasty. We provide a new method that not only builds homogeneous groups of users, but also finds the best variation for each of these groups in a short period of time. This paper provides numerical results on AB Tasty data, as well as on public data sets, to demonstrate an improvement over traditional A/B testing methods.
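
    The two-stage idea can be illustrated with a toy sketch: classification-based preprocessing first (here, a naive one-dimensional 2-means on a single visitor feature), then an independent Beta-Bernoulli Thompson sampler per group for dynamic allocation. The feature, conversion rates, and group count are illustrative assumptions, not AB Tasty's production method.

        # Toy sketch: cluster visitors into homogeneous groups, then allocate
        # traffic between variations A and B with one Thompson sampler per group.
        import numpy as np

        rng = np.random.default_rng(2)

        features = rng.uniform(0, 1, size=5000)  # one feature per visitor

        def true_rate(x, arm):                   # hidden segment-dependent conversion rates
            return (0.10, 0.05)[arm] if x < 0.5 else (0.04, 0.12)[arm]

        # classification-based preprocessing: 2-means clustering on the feature
        centers = np.array([0.2, 0.8])
        for _ in range(20):
            groups = np.abs(features[:, None] - centers).argmin(axis=1)
            centers = np.array([features[groups == g].mean() for g in (0, 1)])

        # dynamic allocation: one Thompson sampler per homogeneous group
        alpha = np.ones((2, 2))                  # [group, arm] Beta posterior successes
        beta = np.ones((2, 2))                   # [group, arm] Beta posterior failures
        for x, g in zip(features, groups):
            arm = int(np.argmax(rng.beta(alpha[g], beta[g])))  # sample, pick the best arm
            converted = rng.random() < true_rate(x, arm)
            alpha[g, arm] += converted
            beta[g, arm] += 1 - converted

        print("posterior mean conversion per group/arm:\n", alpha / (alpha + beta))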

    Thompson Sampling for Bandits with Clustered Arms

    We propose algorithms based on a multi-level Thompson sampling scheme for the stochastic multi-armed bandit and its contextual variant with linear expected rewards, in the setting where arms are clustered. We show, both theoretically and empirically, how exploiting a given cluster structure can significantly improve the regret and computational cost compared to using standard Thompson sampling. In the case of the stochastic multi-armed bandit, we give upper bounds on the expected cumulative regret showing how they depend on the quality of the clustering. Finally, we perform an empirical evaluation showing that our algorithms perform well compared to previously proposed algorithms for bandits with clustered arms.
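
    As an illustration of the multi-level scheme, here is a minimal two-level Thompson sampler for Bernoulli arms with a fixed clustering: level one samples a cluster from cluster-level Beta posteriors, level two samples an arm within the chosen cluster. The clustering, reward means, and posterior bookkeeping are toy assumptions, not the paper's exact algorithm.

        # Two-level Thompson sampling sketch for Bernoulli bandits with clustered arms.
        import numpy as np

        rng = np.random.default_rng(3)

        clusters = [[0, 1, 2], [3, 4], [5, 6, 7]]             # fixed arm clustering
        true_means = np.array([.1, .2, .15, .6, .55, .3, .25, .35])

        n_arms = len(true_means)
        arm_a, arm_b = np.ones(n_arms), np.ones(n_arms)       # per-arm Beta posteriors
        cl_a, cl_b = np.ones(len(clusters)), np.ones(len(clusters))  # per-cluster posteriors

        pulls = np.zeros(n_arms, dtype=int)
        for t in range(5000):
            c = int(np.argmax(rng.beta(cl_a, cl_b)))          # level 1: pick a cluster
            arms = clusters[c]
            a = arms[int(np.argmax(rng.beta(arm_a[arms], arm_b[arms])))]  # level 2: pick an arm
            r = float(rng.random() < true_means[a])
            arm_a[a] += r; arm_b[a] += 1 - r                  # update arm posterior
            cl_a[c] += r; cl_b[c] += 1 - r                    # update cluster posterior
            pulls[a] += 1

        print("pull counts:", pulls)  # should concentrate on arm 3 (mean 0.6)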

    Learning from interaction: models and applications

    A large proportion of Machine Learning (ML) research focuses on designing algorithms that require minimal input from the human. However, ML algorithms are now widely used in various areas of engineering to design and build systems that interact with the human user and thus need to “learn” from this interaction. In this work, we concentrate on algorithms that learn from user interaction. A significant part of the dissertation is devoted to learning in the bandit setting. We propose a general framework for handling dependencies across arms, based on the new assumption that the mean-reward function is drawn from a Gaussian Process. Additionally, we propose an alternative method for arm selection using Thompson sampling, and we apply the new algorithms to a grammar learning problem. In the remainder of the dissertation, we consider content-based image retrieval in the case when the user is unable to specify the required content through tags or other image properties, so the system must extract information from the user through limited feedback. We present a novel Bayesian approach that uses latent random variables to model the system's imperfect knowledge about the user's expected response to the images. An important aspect of the algorithm is the incorporation of an explicit exploration-exploitation strategy in the image sampling process. A second aspect of our algorithm is the way in which its knowledge of the target image is updated given user feedback. We consider several algorithms for this update: variational Bayes, Gibbs sampling, and a simple uniform update. We show in experiments that the simple uniform update performs best, because, unlike the uniform update, both variational Bayes and Gibbs sampling tend to aggressively focus on a small set of images.
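
    The bandit framework with Gaussian Process arm dependencies admits a compact sketch: maintain a GP posterior over arm means, draw one joint sample per round (Thompson sampling), and play the arm that maximizes the sample. The RBF kernel, noise level, and arm features below are illustrative assumptions, not the dissertation's exact model.

        # GP-based Thompson sampling sketch: arm rewards share structure through a
        # Gaussian Process prior over the mean-reward function.
        import numpy as np

        rng = np.random.default_rng(4)

        X = np.linspace(0, 1, 25)[:, None]       # arm features on a line
        true_f = np.sin(3 * X[:, 0])             # hidden mean-reward function
        noise = 0.1

        def rbf(A, B, ls=0.2):                   # squared-exponential kernel
            d = A[:, None, 0] - B[None, :, 0]
            return np.exp(-0.5 * (d / ls) ** 2)

        K = rbf(X, X)
        obs_idx, obs_y = [], []
        for t in range(100):
            if obs_idx:                          # GP posterior over all arm means
                Ko = K[np.ix_(obs_idx, obs_idx)] + noise**2 * np.eye(len(obs_idx))
                Ks = K[:, obs_idx]
                mu = Ks @ np.linalg.solve(Ko, np.array(obs_y))
                cov = K - Ks @ np.linalg.solve(Ko, Ks.T)
            else:
                mu, cov = np.zeros(len(X)), K
            sample = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(X)))  # TS draw
            a = int(np.argmax(sample))           # play arm with highest sampled mean
            obs_idx.append(a)
            obs_y.append(true_f[a] + noise * rng.standard_normal())

        print("best arm (true):", int(np.argmax(true_f)),
              " most played:", max(set(obs_idx), key=obs_idx.count))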