Trade & Cap: A Customer-Managed, Market-Based System for Trading Bandwidth Allowances at a Shared Link
We propose Trade & Cap (T&C), an economics-inspired mechanism that incentivizes users to voluntarily coordinate their consumption of the bandwidth of a shared resource (e.g., a DSLAM link) so as to converge on what they perceive to be an equitable allocation, while ensuring efficient resource utilization. Under T&C, rather than acting as an arbiter, an Internet Service Provider (ISP) acts as an enforcer of what the community of rational users sharing the resource decides is a fair allocation of that resource. Our T&C mechanism proceeds in two phases. In the first, software agents acting on behalf of users engage in a strategic trading game in which each user agent selfishly chooses bandwidth slots to reserve in support of primary, interactive network usage activities. In the second phase, each user is allowed to acquire additional bandwidth slots in support of a presumed open-ended need for fluid bandwidth, catering to secondary applications. The acquisition of this fluid bandwidth is subject to the remaining "buying power" of each user and to the prevalent "market prices", both of which are determined by the results of the trading phase and a desirable aggregate cap on link utilization. We present analytical results that establish the underpinnings of our T&C mechanism, including game-theoretic results pertaining to the trading phase, and pricing of fluid bandwidth allocation pertaining to the capping phase. Using real network traces, we present extensive experimental results that demonstrate the benefits of our scheme, which we also show to be practical by highlighting the salient features of an efficient implementation architecture.

National Science Foundation (CCF-0820138, CSR-0720604, EFRI-0735974, CNS-0524477, and CNS-0520166); Universidad Pontificia Bolivariana and COLCIENCIAS-Instituto Colombiano para el Desarrollo de la Ciencia y la Tecnología "Francisco José de Caldas"
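The capping phase lends itself to a small illustration. The Python sketch below uses hypothetical names and a deliberately simplified market-clearing rule, not the paper's actual pricing derivation: residual buying power from the trading phase and an aggregate cap jointly determine each user's fluid-bandwidth share.

```python
# Illustrative sketch of a T&C-style capping phase (hypothetical names
# and a simplified pricing rule, not the paper's own derivation).

def fluid_allocation(buying_power, cap):
    """Split a capped amount of fluid bandwidth among users in
    proportion to their remaining buying power: the price clears the
    market so that aggregate demand exactly meets the cap."""
    total = sum(buying_power.values())
    if total == 0:
        return {u: 0.0 for u in buying_power}
    price = total / cap          # market-clearing price per bandwidth unit
    return {u: b / price for u, b in buying_power.items()}

shares = fluid_allocation({"alice": 3.0, "bob": 1.0}, cap=100.0)
# alice receives 75 units and bob 25; together they meet the cap exactly
```

Because every user's share is budget divided by the same clearing price, the allocation always sums to the cap, mirroring the aggregate-utilization constraint the abstract describes.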
Digital Monitoring and its Effects on Organizational Performance
In every successful organization, the critical factors of time, cost, and quality preservation are paramount. Effectively managing and controlling these factors necessitates the implementation of strategic measures. Specifically, the reduction of wastage emerges as a key approach to conserving time, cost, and quality. Achieving this goal hinges on the optimal utilization of organizational resources, which entails precise planning, allocation, monitoring, and control.
While various methods for planning and resource tracking exist across organizations, this study focuses on strategies employed within the manufacturing industry. These strategies have demonstrated greater efficiency compared to traditional methods. Moreover, the study proposes the integration of Internet of Things (IoT) technology to address this challenge effectively.
The research recommends the use of IoT technology as a comprehensive solution. Prior studies have often applied the JIT method solely to resource utilization or the TPM method solely to resource management. In contrast, this research advocates applying each method to the resource it is best suited to plan: the JIT method for material utilization, the TPM method for equipment utilization, and the Kaizen method for labor allocation. Furthermore, it emphasizes the integration of IoT with these lean methods. While some researchers have explored IoT, they have not fully integrated it with lean methods and techniques. The synergy of lean production methods and IoT technology offers an ideal opportunity for optimizing the utilization of organizational resources.
Through these techniques, organizational resources can be efficiently planned and allocated to the production process. IoT provides valuable tools such as sensors, which can be installed at various resources, facilitating real-time data transmission to managers. This enables remote monitoring from office settings and timely data acquisition, thus effectively addressing the challenge of optimal organizational resource utilization.
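The sensor-to-manager monitoring loop described above can be sketched in miniature. The Python fragment below is a hypothetical illustration (the field names, metric, and threshold are assumptions, not from the study): each resource-mounted sensor emits timestamped readings, and a manager-side check flags under-utilized resources as candidates for wastage reduction.

```python
# Minimal sketch of IoT-based resource monitoring (hypothetical fields
# and threshold; a stand-in for the study's manager-side control logic).

from dataclasses import dataclass

@dataclass
class SensorReading:
    resource_id: str   # e.g. a machine, material batch, or work station
    metric: str        # what is measured (here: utilization in [0, 1])
    value: float
    timestamp: float

def flag_wastage(readings, idle_threshold=0.3):
    """Return resources whose reported utilization falls below the
    threshold, so a remote manager can act on them in time."""
    return sorted({r.resource_id for r in readings
                   if r.metric == "utilization" and r.value < idle_threshold})

readings = [SensorReading("press-1", "utilization", 0.12, 0.0),
            SensorReading("press-2", "utilization", 0.85, 0.0)]
flagged = flag_wastage(readings)   # ["press-1"]
```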
Customer Relationship Management: A Databased Approach
Customer Relationship Management: A Databased Approach offers the promise of maximized profits for today's highly competitive businesses. This innovative book provides readers with the tools and techniques to effectively use CRM. It emphasizes the utilization of database marketing in order to build strong and profitable customer relationships. Kumar first describes how to implement database marketing and then looks at recent advances in CRM applications. Critical marketing issues like optimum resource allocation, purchase sequence, and the link between acquisition, retention, and profitability are also examined on the basis of empirical findings.
NOMA based resource allocation and mobility enhancement framework for IoT in next generation cellular networks
With the unprecedented technological advances witnessed in the last two decades, more devices are connected to the internet, forming what is called the internet of things (IoT). IoT devices with heterogeneous characteristics and quality of experience (QoE) requirements may engage in a dynamic spectrum market due to the scarcity of radio resources. We propose a framework to efficiently quantify and supply radio resources to IoT devices by developing intelligent systems. The primary goal of the paper is to study the characteristics of the next generation of cellular networks with non-orthogonal multiple access (NOMA) to enable connectivity to clustered IoT devices. First, we demonstrate how the distribution and QoE requirements of IoT devices impact the required number of radio resources in real time. Second, we prove that an extended auction algorithm, implemented through a series of complementary functions, enhances radio resource utilization efficiency. The results show a substantial reduction in the number of sub-carriers required compared to conventional orthogonal multiple access (OMA), and that the intelligent clustering is scalable and adaptable to the cellular environment. The ability to move borrowed spectrum from one cluster to others when a cluster has fewer users or users move out of its boundary is another soft feature that contributes to the reported radio resource utilization efficiency. Moreover, the proposed framework provides IoT service providers with cost estimations to control their spectrum acquisition and achieve the required quality of service (QoS) with guaranteed bit rate (GBR) and non-guaranteed bit rate (Non-GBR).
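The auction-based allocation idea can be illustrated in miniature. The Python sketch below is a hypothetical simplification, a single-item second-price round rather than the paper's extended auction with complementary functions, showing how competing clusters bidding for one sub-carrier could be resolved.

```python
# Toy single-item auction round (hypothetical simplification of the
# paper's extended auction): the highest-bidding cluster wins the
# sub-carrier and pays the second-highest bid.

def auction_subcarrier(bids):
    """bids: {cluster_id: bid}. Returns (winning cluster, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # Second-price rule keeps truthful bidding a dominant strategy.
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = auction_subcarrier({"c1": 5.0, "c2": 3.0, "c3": 4.5})
# winner == "c1", price == 4.5
```

Charging the second-highest bid is the classic Vickrey rule; the paper's mechanism additionally handles clustering, borrowing, and mobility, which this sketch omits.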
Collocation Games and Their Application to Distributed Resource Management
We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. 
In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.

NSF (CCF-0820138, CSR-0720604, EFRI-0735974, CNS-0524477, CNS-052016, CCR-0635102); Universidad Pontificia Bolivariana; COLCIENCIAS-Instituto Colombiano para el Desarrollo de la Ciencia y la Tecnología "Francisco José de Caldas"
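The strategic dynamics behind such games can be sketched with best-response dynamics under a simplified cost model (an assumption for illustration, not the paper's exact game): each resource has a fixed price, collocated players split that price pro-rata by demand, and each player repeatedly moves to whichever feasible resource minimizes its own share.

```python
# Best-response sketch for a simplified collocation game (hypothetical
# proportional cost-sharing model; the paper analyzes richer variants).

def share(p, res, assignment, demand, price):
    """Cost p would pay on `res`: fixed price split pro-rata by demand
    among p and the players currently assigned to `res`."""
    load = demand[p] + sum(demand[q] for q, r in assignment.items()
                           if r == res and q != p)
    return price[res] * demand[p] / load

def best_response_dynamics(players, resources, demand, price, capacity,
                           max_rounds=100):
    # Start everyone on the first resource (assumed feasible here).
    assignment = {p: resources[0] for p in players}
    for _ in range(max_rounds):
        moved = False
        for p in players:
            feasible = [r for r in resources
                        if demand[p] + sum(demand[q]
                                           for q, rr in assignment.items()
                                           if rr == r and q != p)
                        <= capacity[r]]
            best = min(feasible,
                       key=lambda r: share(p, r, assignment, demand, price))
            if best != assignment[p]:
                assignment[p] = best
                moved = True
        if not moved:
            return assignment   # no player wants to deviate: a pure NE
    return assignment

result = best_response_dynamics(
    players=["p1", "p2"], resources=["A", "B"],
    demand={"p1": 1, "p2": 1},
    price={"A": 10.0, "B": 4.0},
    capacity={"A": 2, "B": 1})
# result == {"p1": "B", "p2": "A"}: p1 grabs the cheap resource,
# p2 is left alone on A because B's capacity is exhausted
```

Convergence of such dynamics is exactly what the abstract's equilibrium and price-of-anarchy results formalize; in general (as the paper notes) even deciding whether an equilibrium exists is NP-hard.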
Predictive Pre-allocation for Low-latency Uplink Access in Industrial Wireless Networks
Driven by mission-critical applications in modern industrial systems, the 5th
generation (5G) communication system is expected to provide ultra-reliable
low-latency communications (URLLC) services to meet the quality of service
(QoS) demands of industrial applications. However, these stringent requirements
cannot be guaranteed by its conventional dynamic access scheme due to the
complex signaling procedure. A promising solution to reduce the access delay is
the pre-allocation scheme based on the semi-persistent scheduling (SPS)
technique, which however may lead to low spectrum utilization if the allocated
resource blocks (RBs) are not used. In this paper, we aim to address this issue
by developing DPre, a predictive pre-allocation framework for uplink access
scheduling of delay-sensitive applications in industrial process automation.
The basic idea of DPre is to explore and exploit the correlation of data
acquisition and access behavior between nodes through static and dynamic
learning mechanisms in order to make judicious resource per-allocation
decisions. We evaluate the effectiveness of DPre based on several monitoring
applications in a steel rolling production process. Simulation results
demonstrate that DPre achieves better performance in terms of the prediction
accuracy, which can effectively increase the rewards of those reserved
resources.Comment: Full version (accepted by INFOCOM 2018
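The core trade-off, reserving an RB only when a node is likely to use it, can be shown with a toy predictor. The rule below (a simple empirical transmit-rate window, an assumption for illustration, not DPre's static/dynamic learning mechanisms) decides whether pre-allocating for a node is worthwhile.

```python
# Toy illustration of predictive pre-allocation: reserve an uplink RB
# for a node only when its recent history suggests it will transmit.
# (Hypothetical windowed-frequency rule, not DPre's actual learning.)

from collections import deque

class TxPredictor:
    def __init__(self, window=8, threshold=0.5):
        self.history = deque(maxlen=window)   # 1 = transmitted, 0 = idle
        self.threshold = threshold

    def observe(self, transmitted):
        self.history.append(1 if transmitted else 0)

    def should_preallocate(self):
        """Reserve an RB when the node's empirical transmit rate over
        the recent window exceeds the threshold; otherwise leave the RB
        free to avoid the wasted-reservation problem of plain SPS."""
        if not self.history:
            return False
        return sum(self.history) / len(self.history) > self.threshold

p = TxPredictor(window=4)
for tx in (1, 1, 0, 1):
    p.observe(tx)
# p.should_preallocate() -> True (transmit rate 3/4 exceeds 0.5)
```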
Resource allocation for massively multiplayer online games using fuzzy linear assignment technique
This paper investigates the possible use of a fuzzy system and the Linear Assignment Problem (LAP) for resource allocation in Massively Multiplayer Online Games (MMOGs). Due to the limited design capacity of such complex MMOGs, resources available in the game cannot be unlimited. Resources in this context refer to items used to support game play and activities in the MMOGs, also known as in-game resources. Network resources are also an important research area for MMOGs due to the increasing number of players. One of the main objectives is to ensure Quality of Service (QoS) in the MMOG environment for each player. Regardless of the context in which the resource is defined, the proposed method can still be used. Simulated results based on network resources to ensure QoS show that the proposed method could be a viable alternative.
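The LAP step is easy to demonstrate on a small instance. The sketch below brute-forces the optimal assignment (feasible only for small matrices; real systems would use the Hungarian algorithm). The cost matrix here is hypothetical; in the paper's pipeline it would come from the fuzzy scoring of each player-resource pair.

```python
# Brute-force Linear Assignment Problem solver (illustrative only;
# O(n!) - use the Hungarian algorithm for real instances). The cost
# matrix is a hypothetical stand-in for fuzzy-derived scores.

from itertools import permutations

def solve_lap(cost):
    """cost[i][j]: cost of assigning resource j to player i.
    Returns (assignment, total) minimizing the summed cost, where
    assignment[i] is the resource given to player i."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda perm: sum(cost[i][perm[i]] for i in range(n)))
    return best, sum(cost[i][best[i]] for i in range(n))

assignment, total = solve_lap([[4, 1, 3],
                               [2, 0, 5],
                               [3, 2, 2]])
# assignment == (1, 0, 2): player 0 gets resource 1, etc.; total == 5
```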
Wireless Data Acquisition for Edge Learning: Data-Importance Aware Retransmission
By deploying machine-learning algorithms at the network edge, edge learning
can leverage the enormous real-time data generated by billions of mobile
devices to train AI models, which enable intelligent mobile applications. In
this emerging research area, one key direction is to efficiently utilize radio
resources for wireless data acquisition to minimize the latency of executing a
learning task at an edge server. Along this direction, we consider the specific
problem of retransmission decision in each communication round to ensure both
reliability and quantity of those training data for accelerating model
convergence. To solve the problem, a new retransmission protocol called
data-importance aware automatic-repeat-request (importance ARQ) is proposed.
Unlike the classic ARQ focusing merely on reliability, importance ARQ
selectively retransmits a data sample based on its uncertainty which helps
learning and can be measured using the model under training. Underpinning the
proposed protocol is a derived elegant communication-learning relation between
two corresponding metrics, i.e., signal-to-noise ratio (SNR) and data
uncertainty. This relation facilitates the design of a simple threshold based
policy for importance ARQ. The policy is first derived based on the classic
classifier model of support vector machine (SVM), where the uncertainty of a
data sample is measured by its distance to the decision boundary. The policy is
then extended to the more complex model of convolutional neural networks (CNN)
where data uncertainty is measured by entropy. Extensive experiments have been
conducted for both the SVM and CNN using real datasets with balanced and
imbalanced distributions. Experimental results demonstrate that importance ARQ
effectively copes with channel fading and noise in wireless data acquisition to
achieve faster model convergence than the conventional channel-aware ARQ.Comment: This is an updated version: 1) extension to general classifiers; 2)
consideration of imbalanced classification in the experiments. Submitted to
IEEE Journal for possible publicatio
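A threshold policy of this flavor can be sketched in a few lines. The functional form below (an SNR threshold that grows linearly with uncertainty) is a hypothetical stand-in for the paper's derived communication-learning relation, but it captures the qualitative behavior: informative samples get a higher reliability bar.

```python
# Sketch of an importance-ARQ style decision rule (hypothetical linear
# threshold form, not the paper's derived relation): retransmit a sample
# until its received SNR clears a bar that rises with its uncertainty.

def needs_retransmission(snr, uncertainty, base_threshold=1.0, weight=2.0):
    """Classic ARQ would compare snr against base_threshold alone;
    importance ARQ raises the bar for uncertain (informative) samples,
    e.g. samples near an SVM decision boundary or with high entropy."""
    return snr < base_threshold + weight * uncertainty

# A confident sample clears the bar at modest SNR (1.5 >= 1.0 + 2.0*0.1),
# while an uncertain one at the same SNR does not (1.5 < 1.0 + 2.0*0.5).
low = needs_retransmission(snr=1.5, uncertainty=0.1)   # False
high = needs_retransmission(snr=1.5, uncertainty=0.5)  # True
```

Spending retransmissions only where they buy the most learning progress is what lets importance ARQ beat purely channel-aware ARQ under the same radio-resource budget.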