
    Cost-Effective Incentive Allocation via Structured Counterfactual Inference

    We address a practical problem ubiquitous in modern marketing campaigns, in which a central agent learns a policy for allocating strategic financial incentives to customers while observing only bandit feedback. In contrast to traditional policy optimization frameworks, we take into account the additional reward structure and budget constraints common in this setting, and develop a new two-step method for solving this constrained counterfactual policy optimization problem. Our method first casts the reward estimation problem as a domain adaptation problem with supplementary structure, and then uses the resulting estimators to optimize the policy under the constraints. We also establish theoretical error bounds for our estimation procedure and empirically show that the approach leads to significant improvements on both synthetic and real datasets.
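
    The second step described above (constrained allocation given reward estimates) can be sketched with a toy greedy knapsack heuristic. This is illustrative only, not the paper's method: the uplift and cost figures are hypothetical, and the first-stage estimator is assumed to have already produced the uplift estimates.

```python
# Toy sketch of the allocation step only: given per-customer uplift
# (incremental reward) estimates from some first-stage estimator,
# allocate incentives greedily under a budget. Illustrative, not the
# paper's method.

def allocate_incentives(uplift, cost, budget):
    """Greedy budget-constrained allocation by uplift-per-cost ratio."""
    order = sorted(range(len(uplift)),
                   key=lambda i: uplift[i] / cost[i], reverse=True)
    chosen, spent = [], 0.0
    for i in order:
        if uplift[i] <= 0:          # never pay for negative expected uplift
            break
        if spent + cost[i] <= budget:
            chosen.append(i)
            spent += cost[i]
    return chosen, spent

# Example: 4 customers, estimated incremental revenue vs. incentive cost.
uplift = [5.0, 1.0, 3.0, -0.5]
cost = [2.0, 1.0, 2.0, 1.0]
chosen, spent = allocate_incentives(uplift, cost, budget=4.0)
```

    A ratio-greedy rule like this is a standard baseline for budgeted selection; the paper's constrained policy optimization is more general.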

    RESEARCH ISSUES CONCERNING ALGORITHMS USED FOR OPTIMIZING THE DATA MINING PROCESS

    In this paper, we describe some of the most widely used data mining algorithms, which have considerable utility and influence in the research community. A data mining algorithm can be regarded as a tool that creates a data mining model: after analyzing a set of data, the algorithm searches for specific trends and patterns and defines the parameters of the mining model based on the results of this analysis. These parameters play a significant role in identifying and extracting actionable patterns and detailed statistics. The most important algorithms in this research concern topics such as clustering, classification, association analysis, statistical learning, and link mining. After a brief description of each algorithm, we analyze its application potential and the research issues concerning the optimization of the data mining process. We then describe the most important data mining algorithms included in Microsoft and Oracle software products, offer suggestions and criteria for choosing the most suitable algorithm for a given task, and discuss the advantages offered by these software products. Keywords: data mining optimization, data mining algorithms, software solutions.
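
    Of the algorithm families listed, clustering is the easiest to illustrate concretely. A minimal k-means sketch (not from the paper; the data and initialization are invented for illustration) shows how an algorithm "defines the parameters of the mining model" — here, the cluster centroids — from the data:

```python
import numpy as np

# Minimal k-means with farthest-point initialization: the learned
# "model parameters" are the centroids estimated from the data.
def kmeans(X, k, iters=50):
    # start from X[0], then repeatedly add the point farthest from
    # the centers chosen so far
    centers = X[[0]].copy()
    for _ in range(1, k):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2).min(axis=1)
        centers = np.vstack([centers, X[d.argmax()]])
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # re-estimate centroids as cluster means
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated blobs of 10 points each.
X = np.vstack([np.zeros((10, 2)), np.ones((10, 2)) * 5])
labels, centers = kmeans(X, k=2)
```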

    Efficient advert assignment

    We develop a framework for the analysis of large-scale ad auctions where adverts are assigned over a continuum of search types. For this pay-per-click market, we provide an efficient mechanism that maximizes social welfare. In particular, we show that the social welfare optimization can be solved in separate optimizations conducted on the time scales relevant to the search platform and advertisers. Here, on each search occurrence, the platform solves an assignment problem and, on a slower time scale, each advertiser submits a bid that matches its demand for click-throughs with supply. Importantly, knowledge of global parameters, such as the distribution of search terms, is not required when separating the problem in this way. Exploiting the information asymmetry between the platform and advertiser, we describe a simple mechanism that incentivizes truthful bidding, has a unique Nash equilibrium that is socially optimal, and thus implements our decomposition. Further, we consider models where advertisers adapt their bids smoothly over time and prove convergence to the solution that maximizes social welfare. Finally, we describe several extensions that illustrate the flexibility and tractability of our framework. The research of Neil Walton was funded by the VENI research programme, which is financed by the Netherlands Organisation for Scientific Research (NWO).
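
    The two time scales in the decomposition can be caricatured in a few lines. This is a toy sketch, not the paper's mechanism: the click probabilities, bids, and the simple proportional bid update are all invented for illustration.

```python
# Fast time scale: per search, the platform assigns the advert with the
# highest expected bid (bid times click probability). Slow time scale:
# each advertiser nudges its bid so realized click-throughs track its
# demand. Illustrative only.

def assign(bids, click_prob):
    """Platform's per-search assignment: highest expected bid wins."""
    scores = [b * q for b, q in zip(bids, click_prob)]
    return max(range(len(bids)), key=scores.__getitem__)

def update_bid(bid, clicks_received, clicks_demanded, step=0.1):
    """Advertiser's slow adjustment toward its click-through demand."""
    return max(0.0, bid + step * (clicks_demanded - clicks_received))

# Advertiser 1 wins this search: 2.0 * 0.3 = 0.6 beats 1.0 * 0.5 = 0.5.
winner = assign(bids=[1.0, 2.0], click_prob=[0.5, 0.3])
```

    Note that the assignment step needs no global information, mirroring the abstract's point that the distribution of search terms is not required for the decomposition.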

    An integrated model for warehouse and inventory planning

    We propose a tactical model that integrates the replenishment decision in inventory management with the allocation of products to warehousing systems and the assignment of products to storage locations in warehouse management. The purpose of this article is to analyse the value of integrating warehouse and inventory decisions. This is achieved by proposing two methods for solving the tactical integrated model, which differ in the level of integration of the inventory and warehousing decisions. A computational analysis is performed on a real-world database using multiple scenarios that differ in their warehouse capacity limits. We observe that the total cost of the inventory and warehousing systems can be reduced drastically by taking the warehouse capacity restrictions into account in the inventory planning decisions, in an aggregate way. Moreover, additional inventory and warehouse savings can be achieved by using more sophisticated integration methods for inventory and warehousing decisions.
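
    The idea of coupling replenishment to warehouse capacity "in an aggregate way" can be sketched with classic EOQ quantities that are scaled down when their combined storage need exceeds capacity. This is a crude illustrative stand-in, not the article's model; all product data are hypothetical.

```python
import math

# Classic economic order quantity per product, then a uniform
# scale-down when aggregate storage need exceeds warehouse capacity.
# Illustrative stand-in for the article's aggregate capacity coupling.

def eoq(demand, order_cost, holding_cost):
    return math.sqrt(2 * demand * order_cost / holding_cost)

def capped_order_quantities(products, capacity):
    """products: list of (demand, order_cost, holding_cost, unit_volume)."""
    q = [eoq(d, k, h) for d, k, h, _ in products]
    volume = sum(qi * v for qi, (_, _, _, v) in zip(q, products))
    if volume > capacity:                 # shrink all orders uniformly
        q = [qi * capacity / volume for qi in q]
    return q

# Two products; the warehouse is too small for both unconstrained EOQs
# (about 223.6 + 158.1 units of volume against a capacity of 200).
q = capped_order_quantities(
    [(1000, 50, 2, 1.0), (500, 50, 2, 1.0)], capacity=200)
```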

    Competitive and Cooperative Approaches to the Balancing Market in Distribution Grids

    The electrical grid has been changing in the last decade due to the presence, at the distribution level, of renewables, distributed generation, storage systems, microgrids, and electric vehicles. The introduction of new legislation and actors in the smart grid's system opens new challenges for the activities of companies and for the development of new energy management systems, models, and methods. To face this revolution, new market structures are being defined, as well as new technologies and optimization and control algorithms for the management of distributed resources and the coordination of local users contributing to active power reserve and ancillary services. One of the main problems for an electricity market operator that also owns the distribution grid is to avoid congestion and maximize the quality of the service provided. The thesis concerns the development and application of new methods for the optimization of network systems (with multiple decision makers), with particular attention to power distribution networks. This Ph.D. thesis aims to address the current lack of properly defined market structures for the determination of balancing services in distribution networks. As a first study, to handle the power flow equations in a computationally tractable way, a new convex relaxation has been proposed. Thereafter, two opposite types of market structure have been developed: competitive and cooperative. The first presents a two-tier mechanism in which the market operator is in a predominant position compared to other market players. Vice versa, in the cooperative mechanism (solved through distributed optimization techniques), all actors are on the same level and work together for social welfare. The main methodological novelty of the proposed work is to solve complex problems with formally correct and computationally efficient techniques.
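
    As a point of reference for what a balancing market clears, a textbook merit-order dispatch can be sketched in a few lines. This is illustrative only and is not the thesis's competitive or cooperative mechanism; the offer prices and quantities are invented.

```python
# Minimal merit-order clearing for a balancing market: the operator
# accepts the cheapest upward-reserve offers until the system
# imbalance is covered. Illustrative only.

def clear_balancing(offers, imbalance):
    """offers: list of (price, quantity); returns accepted (price, qty)."""
    accepted, remaining = [], imbalance
    for price, qty in sorted(offers):      # cheapest offers first
        if remaining <= 0:
            break
        take = min(qty, remaining)
        accepted.append((price, take))
        remaining -= take
    return accepted

# Three offers; a 12 MW imbalance takes all of the 20 EUR/MWh offer
# and part of the 25 EUR/MWh offer.
accepted = clear_balancing([(30.0, 10.0), (20.0, 5.0), (25.0, 8.0)],
                           imbalance=12.0)
```

    Both market structures in the thesis can be read as refinements of this basic clearing task, differing in who optimizes and how the decisions are coordinated.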

    Variational Fair Clustering

    We propose a general variational framework for fair clustering, which integrates an original Kullback-Leibler (KL) fairness term with a large class of clustering objectives, including prototype-based and graph-based ones. Fundamentally different from the existing combinatorial and spectral solutions, our variational multi-term approach enables control of the trade-off levels between the fairness and clustering objectives. We derive a general tight upper bound based on a concave-convex decomposition of our fairness term, its Lipschitz-gradient property, and Pinsker's inequality. Our tight upper bound can be jointly optimized with various clustering objectives while yielding a scalable solution with a convergence guarantee. Interestingly, at each iteration, it performs an independent update for each assignment variable; therefore, it can be easily distributed for large-scale datasets. This scalability is important as it enables the exploration of different trade-off levels between the fairness and clustering objectives. Unlike spectral relaxation, our formulation does not require computing an eigenvalue decomposition. We report comprehensive evaluations and comparisons with state-of-the-art methods over various fair-clustering benchmarks, which show that our variational formulation can yield highly competitive solutions in terms of fairness and clustering objectives. Comment: Accepted to be published in AAAI 2021. The code is available at: https://github.com/imtiazziko/Variational-Fair-Clusterin
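
    The flavour of a KL fairness term can be illustrated in isolation. This is a drastically simplified sketch, not the paper's bound optimization: it just scores a given hard clustering by how far each cluster's demographic mix diverges from a target mix, with invented data.

```python
import math

# Simplified illustration of a KL fairness term: for each cluster,
# the KL divergence of its demographic proportions from the overall
# target proportions. Smaller totals mean fairer clusters.

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def fairness_cost(labels, groups, k, target):
    """labels: cluster of each point; groups: demographic of each point."""
    cost = 0.0
    for c in range(k):
        members = [g for l, g in zip(labels, groups) if l == c]
        if not members:
            continue
        props = [members.count(g) / len(members) for g in range(len(target))]
        cost += kl(props, target)   # penalize clusters far from target mix
    return cost

# A demographically balanced clustering incurs zero cost; a fully
# segregated one is penalized.
balanced = fairness_cost([0, 0, 1, 1], [0, 1, 0, 1], k=2, target=[0.5, 0.5])
segregated = fairness_cost([0, 0, 1, 1], [0, 0, 1, 1], k=2, target=[0.5, 0.5])
```

    The paper goes much further, relaxing assignments to soft variables and optimizing a tight bound on this kind of term jointly with the clustering objective.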

    A submodular optimization framework for never-ending learning : semi-supervised, online, and active learning.

    The revolution in information technology and the explosion in the use of computing devices in people's everyday activities have forever changed the perspective of the data mining and machine learning fields. The enormous amount of easily accessible, information-rich data is pushing the data analysis community towards a paradigm shift. In the new paradigm, data comes in the form of a stream of billions of records received every day. The dynamic nature of the data and its sheer size make it impossible to use the traditional notion of offline learning, where the whole dataset is accessible at any point in time. Moreover, no amount of human resources is enough to get expert feedback on the data. In this work we have developed a unified optimization-based learning framework that addresses many of the challenges mentioned earlier. Specifically, we developed a Never-Ending Learning framework which combines incremental/online, semi-supervised, and active learning under a unified optimization framework. The established framework is based on the class of submodular optimization methods. At the core of this work we provide a novel formulation of Semi-Supervised Support Vector Machines (S3VM) in terms of submodular set functions. The new formulation overcomes the non-convexity issues of the S3VM and provides a state-of-the-art solution that is orders of magnitude faster than the cutting-edge algorithms in the literature. Next, we provide a stream summarization technique via exemplar selection. This technique makes it possible to keep a fixed-size exemplar representation of a data stream that can be used by any label-propagation-based semi-supervised learning technique. The compact data stream representation allows a wide range of algorithms to be extended to the incremental/online learning scenario. Under the same optimization framework, we provide an active learning algorithm that constitutes the feedback loop between the learning machine and an oracle.
Finally, the developed Never-Ending Learning framework is essentially transductive in nature. Therefore, our last contribution is an inductive incremental learning technique for incremental training of SVMs using the properties of local kernels. We demonstrate through this work the importance and wide applicability of the proposed methodologies.
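
    Exemplar selection under a submodular objective is commonly sketched via the greedy algorithm on a facility-location function. The sketch below is a standard textbook version under that assumption, not the dissertation's streaming formulation; the similarity matrix is invented.

```python
# Greedy maximization of a facility-location objective: pick exemplars
# that maximize the total best-similarity each point has to the chosen
# set. Standard submodular exemplar-selection sketch, illustrative of
# (but not identical to) the stream summarization described above.

def greedy_exemplars(sim, k):
    """sim[i][j]: similarity of point i to candidate j; pick k exemplars."""
    n = len(sim)
    chosen, best = [], [0.0] * n    # best[i]: coverage of point i so far
    for _ in range(k):
        def gain(j):
            # marginal gain of adding candidate j to the chosen set
            return sum(max(sim[i][j] - best[i], 0.0) for i in range(n))
        j_star = max((j for j in range(n) if j not in chosen), key=gain)
        chosen.append(j_star)
        best = [max(best[i], sim[i][j_star]) for i in range(n)]
    return chosen

# Three points; point 1 is similar to everything, so it is picked first.
sim = [[1.0, 0.8, 0.1],
       [0.8, 1.0, 0.8],
       [0.1, 0.8, 1.0]]
chosen = greedy_exemplars(sim, k=1)
```

    Because the objective is monotone submodular, this greedy rule carries the classic (1 - 1/e) approximation guarantee, which is part of what makes submodular formulations attractive for summarizing streams.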