
    Using Linear Programming in a Business-to-Business Auction Mechanism

    Business-to-business interactions are largely centered around contracts for procurement or for distribution. Negotiations and sealed-bid tendering are the most common techniques used for price discovery and for generating the terms and conditions of contracts. Sealed-bid tenders collect bids (which are private information between the two companies) and then pick one or more winning bids from among those submitted. The outcome of such interactions can be analyzed with the theory of sealed-bid auctions and has been studied extensively [7]. In contrast, negotiations tend to be more dynamic: a buyer (supplier) might be interacting with several suppliers (buyers) simultaneously, and the contractual terms being negotiated with one supplier might directly affect the negotiations with another.

    An approach that is often used for this setting is to design an interactive mechanism where, based on a "market signal" such as a price for each item, the agents can propose bids based on a decentralized private cost model. A general setting for decentralized allocation is one where there are multiple agents, each with a utility function over the different resources, and the allocation problem is to distribute the resources in an optimal way. A key difference from classical optimization is that the utility functions of the agents are private information and are not explicitly known to the decision maker. The key requirements for such a design to be practical are: (i) convergence to an "equilibrium solution" in a finite number of steps, and (ii) the "equilibrium solution" is optimal for each of the agents, given the market signal. One approach to implementing such mechanisms is the use of primal-dual methods, where the resource allocation problem is formulated as a linear program and the dual prices are used as market signals [2, 3, 8, 1, 4, 6]. Each agent can then use the dual price vector to propose a profit-maximizing bid for the next round, based on her private cost model. Here, the assumption is that the agents attempt to maximize their profits in each round; this assumption is referred to as the myopic best response [5]. In a procurement setting with a single buyer and multiple suppliers, the buyer uses a linear program to allocate her demand by choosing a set of cost-minimizing bids and then uses the dual price variables to signal the suppliers. To guarantee convergence, a large enough price decrement is applied to all non-zero dual prices in each iteration.

    In this paper we explore an alternate design where the market signal provided to each supplier is based on the current cost of procurement for the buyer. Each supplier is then required to submit new bid proposals that reduce the procurement cost (assuming the other suppliers keep their bids unchanged) by some large enough decrement d > 0. We show that, for each supplier, generating a profit-maximizing bid that decreases the procurement cost for the buyer by at least d can be done in polynomial time. This implies that, in designs where the bids are not common knowledge, each supplier and the buyer can engage in an "algorithmic conversation" to identify such proposals in a polynomial number of steps. In addition, we show that such a mechanism converges to an "equilibrium solution" where all the suppliers are at their profit-maximizing solution, given the cost and the required decrement d.

    At the heart of this design lies a fundamental sensitivity analysis problem of linear programming: given a linear program and its optimal solution, identify the set of new columns such that any one of these columns, when introduced into the linear program, reduces the optimal cost by at least d.
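    The column-sensitivity question at the core of this design can be illustrated on a toy procurement LP. The sketch below uses entirely hypothetical data and scipy's linprog; it simply re-solves the buyer's cost-minimizing LP with and without a candidate bid column and checks whether the candidate lowers the optimal cost by at least d, whereas the paper's point is to characterize all such columns via sensitivity analysis of the original optimum rather than by re-solving.

```python
# Toy illustration (hypothetical data): does adding one candidate bid column
# lower the buyer's optimal procurement cost by at least d?
import numpy as np
from scipy.optimize import linprog

demand = np.array([10.0, 6.0])            # units required of items 1 and 2
costs = np.array([4.0, 5.0, 7.0])         # unit price of each existing bid
supply = np.array([[1.0, 0.0, 1.0],       # item coverage of each bid column
                   [0.0, 1.0, 1.0]])

def min_cost(costs, supply):
    # minimize c'x  subject to  supply @ x >= demand,  x >= 0
    res = linprog(c=costs, A_ub=-supply, b_ub=-demand, bounds=(0, None))
    return res.fun

base = min_cost(costs, supply)

# Candidate new bid: covers one unit of each item at a unit price of 6.5.
new_cost, new_col = 6.5, np.array([[1.0], [1.0]])
with_new = min_cost(np.append(costs, new_cost), np.hstack([supply, new_col]))

d = 1.0
print(f"base={base:.2f}, with candidate={with_new:.2f}, "
      f"reduces cost by >= d: {base - with_new >= d}")
```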

    Land Cover/Land Use Change: Exploring the Impacts on the Sahariya Tribe of Rajasthan, India

    The present study explored the changes in forest cover in one tribal region, the village of Khanda Sharol in the state of Rajasthan, India, and examined how these changes have affected access to and use of Non-Timber Forest Products (NTFPs) by Sahariya tribal households. The study also examined the implications of changes in the access to and use of NTFPs for the livelihoods of tribal members, and the feasibility of continuing a community-based management system for the sustainable production of NTFPs. This was a descriptive study: historical as well as current data were collected through surveys and interviews. A family information report survey covering various dimensions was administered to each of the 365 households of Khanda Sharol village, and individual interviews and focus groups with tribal members were conducted to gather information on NTFP collection patterns (past and present) and on proximity to the forest. The study indicates that there was a decline in forest cover, which resulted in a decline in the collection of NTFPs. Furthermore, there was a decline in the livelihoods of the residents of the village, although a direct and unequivocal link between the change in forest cover and livelihood patterns cannot be established; these relationships are complex, and simple causal relationships cannot easily be drawn. Nonetheless, this research has been able to identify how changes in forest cover over the past 50 years have affected the access to and use of NTFPs by the tribal households in the village. In turn, these changes suggest shifts in household economic production that can be tied to the poverty, health, and education of tribal members.

    QoS-Aware Middleware for Web Services Composition

    The paradigmatic shift from a Web of manual interactions to a Web of programmatic interactions driven by Web services is creating unprecedented opportunities for the formation of online Business-to-Business (B2B) collaborations. In particular, the creation of value-added services by composition of existing ones is gaining significant momentum. Since many available Web services provide overlapping or identical functionality, albeit with different Quality of Service (QoS), a choice needs to be made to determine which services are to participate in a given composite service. This paper presents a middleware platform that addresses the issue of selecting Web services for composition in a way that maximizes user satisfaction, expressed as utility functions over QoS attributes, while satisfying the constraints set by the user and by the structure of the composite service. Two selection approaches are described and compared: one based on local (task-level) selection of services, and the other based on global allocation of tasks to services using integer programming.
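    As a rough illustration of the two selection strategies (with hypothetical tasks, candidate services, QoS values, and utility weights), the sketch below compares per-task (local) utility maximization against a global search over assignments that must also respect an end-to-end latency budget on a sequential composite service; the paper formulates the global variant as an integer program rather than the brute-force enumeration used here.

```python
# Hypothetical example: local vs. global service selection for a
# sequential composition of tasks under an end-to-end latency budget.
from itertools import product

# candidate services per task: (name, price, latency_ms)
candidates = {
    "t1": [("s1a", 2.0, 80), ("s1b", 5.0, 30)],
    "t2": [("s2a", 1.0, 90), ("s2b", 4.0, 40)],
}
LATENCY_BUDGET = 130  # ms, end-to-end constraint set by the user

def utility(price, latency):
    # toy utility: cheaper and faster is better (weights are arbitrary)
    return -0.6 * price - 0.4 * (latency / 100.0)

# Local selection: best utility per task, ignoring the global constraint.
local = {t: max(svcs, key=lambda s: utility(s[1], s[2]))
         for t, svcs in candidates.items()}
local_latency = sum(s[2] for s in local.values())

# Global selection: maximize total utility over feasible combinations only.
best, best_u = None, float("-inf")
for combo in product(*candidates.values()):
    if sum(s[2] for s in combo) <= LATENCY_BUDGET:
        u = sum(utility(s[1], s[2]) for s in combo)
        if u > best_u:
            best, best_u = combo, u

print("local pick:", [s[0] for s in local.values()],
      "meets budget:", local_latency <= LATENCY_BUDGET)
print("global pick:", [s[0] for s in best])
```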

    A Time Series is Worth 64 Words: Long-term Forecasting with Transformers

    We propose an efficient design of Transformer-based models for multivariate time series forecasting and self-supervised representation learning. It is based on two key components: (i) segmentation of time series into subseries-level patches which serve as input tokens to the Transformer; (ii) channel-independence, where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. The patching design naturally has a three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced for the same look-back window; and the model can attend to a longer history. Our channel-independent patch time series Transformer (PatchTST) can improve the long-term forecasting accuracy significantly when compared with state-of-the-art Transformer-based models. We also apply our model to self-supervised pre-training tasks and attain excellent fine-tuning performance, which outperforms supervised training on large datasets. Transferring masked pre-trained representations from one dataset to others also produces state-of-the-art forecasting accuracy. Code is available at: https://github.com/yuqinie98/PatchTST
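    The patching step can be sketched independently of the full model: the look-back window of each univariate channel is cut into (possibly overlapping) subseries-level patches, and each patch becomes one input token. The snippet below uses illustrative window, patch, and stride values, not the paper's exact configuration.

```python
# Illustrative patching step: turn a look-back window into patch tokens.
import numpy as np

def patchify(series: np.ndarray, patch_len: int, stride: int) -> np.ndarray:
    """Split a 1-D series into overlapping patches of length patch_len."""
    windows = np.lib.stride_tricks.sliding_window_view(series, patch_len)
    return windows[::stride]                     # shape: (num_patches, patch_len)

lookback = np.arange(336, dtype=float)           # one channel's look-back window
tokens = patchify(lookback, patch_len=16, stride=8)
print(tokens.shape)                              # (41, 16): 41 tokens of 16 steps each

# Channel-independence: each channel of a multivariate series is patched and
# passed through the same embedding and Transformer weights separately.
multivariate = np.random.randn(7, 336)           # (channels, time)
per_channel_tokens = np.stack([patchify(c, 16, 8) for c in multivariate])
print(per_channel_tokens.shape)                  # (7, 41, 16)
```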

    TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting

    Transformers have gained popularity in time series forecasting for their ability to capture long-sequence interactions. However, their high memory and computing requirements pose a critical bottleneck for long-term forecasting. To address this, we propose TSMixer, a lightweight neural architecture composed exclusively of multi-layer perceptron (MLP) modules. TSMixer is designed for multivariate forecasting and representation learning on patched time series, providing an efficient alternative to Transformers. Our model draws inspiration from the success of MLP-Mixer models in computer vision. We demonstrate the challenges involved in adapting the Vision MLP-Mixer to time series and introduce empirically validated components to enhance accuracy. These include a novel design paradigm of attaching online reconciliation heads to the MLP-Mixer backbone for explicitly modeling time-series properties such as hierarchy and channel correlations. We also propose a hybrid channel modeling approach to effectively handle noisy channel interactions and generalization across diverse datasets, a common challenge in existing patch channel-mixing methods. Additionally, a simple gated attention mechanism is introduced in the backbone to prioritize important features. By incorporating these lightweight components, we significantly enhance the learning capability of simple MLP structures, outperforming complex Transformer models with minimal computing usage. Moreover, TSMixer's modular design enables compatibility with both supervised and masked self-supervised learning methods, making it a promising building block for time-series Foundation Models. TSMixer outperforms state-of-the-art MLP and Transformer models in forecasting by a considerable margin of 8-60%. It also outperforms the latest strong benchmarks of Patch-Transformer models (by 1-2%) with a significant reduction in memory and runtime (2-3X). Comment: Accepted in the Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 23), Research Track. Delayed release on arXiv to comply with the conference policies on the double-blind review process. This paper was submitted to the KDD peer-review process on Feb 02, 202
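    The basic mixing structure referred to above can be sketched as two alternating residual MLPs, one mixing along the patch (time) axis and one along the feature axis; the reconciliation heads, hybrid channel modeling, and gated attention are omitted. This is a simplified structural sketch in PyTorch, not the released TSMixer code, and the layer sizes are arbitrary.

```python
# Simplified MLP-Mixer-style block for patched time series (structural sketch only).
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, num_patches: int, num_features: int, hidden: int = 64):
        super().__init__()
        self.norm1 = nn.LayerNorm(num_features)
        self.time_mlp = nn.Sequential(             # mixes information across patches
            nn.Linear(num_patches, hidden), nn.GELU(), nn.Linear(hidden, num_patches))
        self.norm2 = nn.LayerNorm(num_features)
        self.feat_mlp = nn.Sequential(             # mixes information across features
            nn.Linear(num_features, hidden), nn.GELU(), nn.Linear(hidden, num_features))

    def forward(self, x):                          # x: (batch, num_patches, num_features)
        y = self.norm1(x).transpose(1, 2)          # (batch, num_features, num_patches)
        x = x + self.time_mlp(y).transpose(1, 2)   # residual mixing along the time axis
        x = x + self.feat_mlp(self.norm2(x))       # residual mixing along the feature axis
        return x

x = torch.randn(32, 42, 16)                        # a batch of patched series
print(MixerBlock(num_patches=42, num_features=16)(x).shape)  # torch.Size([32, 42, 16])
```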

    A computational study of the Kemeny rule for preference aggregation

    We consider, from a computational perspective, the problem of how to aggregate the ranking preferences over a number of alternatives expressed by a number of different voters into a single consensus ranking, following the majority voting rule. Social welfare functions for aggregating preferences in this way have been widely studied since the time of Condorcet (1785). One drawback of majority voting procedures when three or more alternatives are being ranked is the presence of cycles in the majority preference relation. The Kemeny order is a social welfare function designed to tackle the presence of such cycles; however, computing a Kemeny order is known to be NP-hard. We develop a greedy heuristic and an exact branch-and-bound procedure for computing Kemeny orders, and we present results of a computational study of these procedures.
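    For intuition on the objective being optimized, the brute-force sketch below enumerates all rankings of a small, made-up set of votes and scores each candidate ranking by its total number of pairwise disagreements with the voters' rankings (the Kemeny distance). Exhaustive search of this kind is only viable for a handful of alternatives, which is exactly why heuristic and branch-and-bound procedures are needed.

```python
# Brute-force Kemeny consensus for a tiny instance (illustrative only;
# exhaustive search is exponential in the number of alternatives).
from itertools import combinations, permutations

# Each voter ranks the alternatives from most to least preferred.
votes = [("a", "b", "c", "d"),
         ("b", "a", "d", "c"),
         ("a", "c", "b", "d")]

def disagreements(ranking, vote):
    """Number of alternative pairs ordered differently by ranking and vote."""
    pos_r = {x: i for i, x in enumerate(ranking)}
    pos_v = {x: i for i, x in enumerate(vote)}
    return sum((pos_r[x] < pos_r[y]) != (pos_v[x] < pos_v[y])
               for x, y in combinations(ranking, 2))

def kemeny(votes):
    alternatives = votes[0]
    return min(permutations(alternatives),
               key=lambda r: sum(disagreements(r, v) for v in votes))

print(kemeny(votes))  # a consensus ranking minimizing total pairwise disagreement
```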