
    Micro-billing framework for IoT: Research & Technological foundations

    In traditional product companies, creating value meant identifying enduring customer needs and manufacturing well-engineered solutions. Two hundred and fifty years after the start of the Industrial Revolution, this pattern of activity plays out every day in a connected world where products are no longer one-and-done. Making money is no longer limited to physical product sales; other downstream revenue streams become possible (e.g., service-based information, apps). Nonetheless, it is still challenging to stimulate the IoT market by enabling IoT stakeholders (from organizations to individual persons) to make money out of the information that surrounds them. Generally speaking, there is a lack of micro-billing frameworks and platforms that enable IoT stakeholders to publish/discover, and potentially sell/buy, relevant and useful IoT information items. This paper discusses important aspects that need to be considered when investigating and developing such a framework/platform. A high-level requirement analysis is then carried out to identify key technological and scientific building blocks for laying the foundation of an innovative micro-billing framework named IoTBnB (IoT puBlication aNd Billing).
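    The abstract stays at the requirements level, but the publish/discover/sell-buy flow it mentions can be made concrete with a toy sketch. The class and method names below (IoTItem, Marketplace, publish, discover, purchase) and the per-access pricing model are illustrative assumptions, not part of IoTBnB.

```python
# Minimal sketch of a publish/discover/purchase flow for an IoT data
# marketplace, loosely inspired by the IoTBnB description above.
# All names and the pricing model are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class IoTItem:
    item_id: str
    owner: str
    description: str
    price_per_access: float  # micro-billing: charge per individual access

@dataclass
class Marketplace:
    catalogue: Dict[str, IoTItem] = field(default_factory=dict)
    ledger: List[Tuple[str, str, float]] = field(default_factory=list)  # (buyer, seller, amount)

    def publish(self, item: IoTItem) -> None:
        """An IoT stakeholder lists an information item for sale."""
        self.catalogue[item.item_id] = item

    def discover(self, keyword: str) -> List[IoTItem]:
        """Keyword search over published item descriptions."""
        return [i for i in self.catalogue.values()
                if keyword.lower() in i.description.lower()]

    def purchase(self, buyer: str, item_id: str) -> IoTItem:
        """Record a micro-payment from the buyer to the item owner."""
        item = self.catalogue[item_id]
        self.ledger.append((buyer, item.owner, item.price_per_access))
        return item

# Example usage
market = Marketplace()
market.publish(IoTItem("t1", "alice", "rooftop temperature stream", 0.002))
hits = market.discover("temperature")
market.purchase("bob", hits[0].item_id)
print(market.ledger)  # [('bob', 'alice', 0.002)]
```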

    Stochastic Optimization For Multi-Agent Statistical Learning And Control

    The goal of this thesis is to develop a mathematical framework for optimal, accurate, and affordable-complexity statistical learning among networks of autonomous agents. We begin by noting the connection between statistical inference and stochastic programming, and consider extensions of this setup to settings in which a network of agents each observes a local data stream and would like to make decisions that are good with respect to information aggregated across the entire network. There is an open-ended degree of freedom in this problem formulation, however: the selection of the estimator function class, which defines the feasible set of the stochastic program. Our central contribution is the design of stochastic optimization tools in reproducing kernel Hilbert spaces that yield optimal, accurate, and affordable-complexity statistical learning for a multi-agent network. To obtain this result, we first explore the relative merits and drawbacks of different function class selections.

    In Part I, we consider multi-agent expected risk minimization for the case in which each agent seeks to learn a common, globally optimal generalized linear model (GLM) by developing a stochastic variant of the Arrow-Hurwicz primal-dual method. We establish convergence to the primal-dual optimal pair when either consensus or proximity constraints are imposed: consensus constraints encode the requirement that all agents agree, while proximity constraints require nearby agents to make decisions that are close to one another. Empirically, we observe that these convergence results are substantiated, but that convergence may not translate into statistical accuracy. More broadly, an estimator that is optimal within a given function class is not necessarily one that makes minimal inference errors. The optimality-accuracy tradeoff of GLMs motivates subsequent efforts to learn more sophisticated estimators based upon learned feature encodings of the data fed into the statistical model.

    The specific tool we turn to in Part II is dictionary learning, where we optimize both over regression weights and an encoding of the data, which yields a non-convex problem. We investigate the use of stochastic methods for online task-driven dictionary learning, and obtain promising performance for the task of a ground robot learning to anticipate control uncertainty based on its past experience. Heartened by this implementation, we then consider extensions of this framework in which the agents of a network each learn globally optimal task-driven dictionaries based on stochastic primal-dual methods. Here, however, the non-convexity of the optimization problem causes difficulties: stringent conditions on the stochastic errors and the duality gap limit the applicability of the convergence guarantees, and impractically small learning rates are required for convergence in practice.

    Thus, we seek to learn nonlinear statistical models while preserving convexity, which is possible through kernel methods (Part III). However, the increased descriptive power of nonparametric estimation comes at the cost of infinite complexity. We therefore develop a stochastic approximation algorithm in reproducing kernel Hilbert spaces (RKHS) that ameliorates this complexity issue while preserving optimality: we combine the functional generalization of the stochastic gradient method (FSGD) with greedily constructed low-dimensional subspace projections based on matching pursuit.
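    As a rough illustration of the Part III machinery, the sketch below runs functional stochastic gradient descent in an RKHS and keeps the model parsimonious by enforcing a hard budget on the kernel dictionary, dropping the least influential center whenever the budget is exceeded. The Gaussian kernel, squared loss, and this crude pruning rule are simplifying assumptions standing in for the greedy matching-pursuit projections described above, not the thesis's exact algorithm.

```python
# Sketch: functional SGD in an RKHS with a hard budget on the kernel
# dictionary. Gaussian kernel, squared loss, and the "drop the
# smallest-weight center" pruning rule are illustrative assumptions.
import numpy as np

def gaussian_kernel(x, z, bw=0.2):
    return np.exp(-np.sum((x - z) ** 2, axis=-1) / (2 * bw ** 2))

def fsgd_budgeted(stream, eta=0.1, lam=1e-3, budget=50):
    centers, weights = [], []          # kernel dictionary and coefficients
    for x, y in stream:
        # current prediction f(x) = sum_i w_i k(c_i, x)
        pred = sum(w * gaussian_kernel(c, x) for c, w in zip(centers, weights))
        err = pred - y
        # regularization shrinks all existing weights
        weights = [(1 - eta * lam) * w for w in weights]
        # stochastic functional gradient step adds a new center
        centers.append(x)
        weights.append(-eta * err)
        # crude projection: if over budget, drop the least influential center
        if len(centers) > budget:
            j = int(np.argmin(np.abs(weights)))
            centers.pop(j)
            weights.pop(j)
    return centers, weights

# Example: learn f(x) = sin(2*pi*x) from a noisy stream
rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, size=(2000, 1))
stream = [(x, np.sin(2 * np.pi * x[0]) + 0.1 * rng.standard_normal()) for x in xs]
centers, weights = fsgd_budgeted(stream)
x_test = np.array([0.25])
f_hat = sum(w * gaussian_kernel(c, x_test) for c, w in zip(centers, weights))
print(float(f_hat))  # ideally close to sin(pi/2) = 1
```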
    We establish that the proposed method yields a controllable trade-off between optimality and memory, and produces highly accurate, parsimonious statistical models in practice. Then, we develop a multi-agent extension of this method by proposing a new node-separable penalty function and applying FSGD together with low-dimensional subspace projections. This extension allows a network of autonomous agents to learn a memory-efficient approximation to the globally optimal regression function based only on their local data streams and message passing with neighbors. In practice, we observe that agents are able to stably learn highly accurate and memory-efficient nonlinear statistical models from streaming data.

    From here, we shift focus to a more challenging class of problems, motivated by the fact that true learning is not just revising predictions based upon data but augmenting behavior over time based on temporal incentives. This goal may be described by Markov decision processes (MDPs): at each point, an agent is in some state of the world, takes an action, and then receives a reward while randomly transitioning to a new state. The goal of the agent is to select the action sequence that maximizes its long-term sum of rewards, but determining how to select this action sequence when both the state and action spaces are infinite has eluded researchers for decades. As a precursor to this feat, we consider the problem of policy evaluation in infinite MDPs, in which we seek to determine the long-term sum of rewards when starting in a given state and actions are chosen according to a fixed distribution called a policy. We reformulate this problem as an RKHS-valued compositional stochastic program and develop a functional extension of the stochastic quasi-gradient algorithm operating in tandem with the greedy subspace projections mentioned above. We prove convergence with probability 1 to the Bellman fixed point restricted to this function class, and we observe a state-of-the-art trade-off between memory and Bellman error for the proposed method on the Mountain Car driving task. This bodes well for incorporating policy evaluation into more sophisticated, provably stable reinforcement learning techniques and, in time, developing optimal collaborative multi-agent learning-based control systems.
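    To make the policy-evaluation setting concrete, the following sketch estimates a value function with a kernelized TD(0) update combined with the same budgeted-dictionary trick as above. It is a deliberately simplified surrogate for the compositional stochastic quasi-gradient method described in the abstract, run on a hypothetical one-dimensional random walk rather than Mountain Car; the kernel, step size, and pruning rule are assumptions.

```python
# Sketch: kernelized TD(0) policy evaluation with a budgeted dictionary.
# A simplified surrogate for the compositional stochastic quasi-gradient
# method described above; the chain MDP, kernel, and pruning rule are
# illustrative assumptions.
import numpy as np

def k(x, z, bw=0.3):
    return np.exp(-((x - z) ** 2) / (2 * bw ** 2))

def value(centers, weights, x):
    return sum(w * k(c, x) for c, w in zip(centers, weights))

def kernel_td0(transitions, gamma=0.9, eta=0.05, budget=30):
    centers, weights = [], []
    for s, r, s_next in transitions:
        # TD error at the sampled transition under the fixed policy
        delta = r + gamma * value(centers, weights, s_next) - value(centers, weights, s)
        # functional TD(0) step adds a kernel bump at the visited state
        centers.append(s)
        weights.append(eta * delta)
        if len(centers) > budget:                  # crude subspace projection
            j = int(np.argmin(np.abs(weights)))
            centers.pop(j)
            weights.pop(j)
    return centers, weights

# Hypothetical MDP: random walk on [0, 1], reward equal to the current state,
# with the fixed policy baked into the transition kernel.
rng = np.random.default_rng(1)
s, transitions = 0.5, []
for _ in range(5000):
    s_next = float(np.clip(s + rng.normal(0, 0.1), 0.0, 1.0))
    transitions.append((s, s, s_next))             # reward r = current state
    s = s_next
centers, weights = kernel_td0(transitions)
print(float(value(centers, weights, 0.9)))         # estimated discounted return from s = 0.9
```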

    Evolution of the Secondary Spectrum Market

    The secondary spectrum market, in which primaries (license holders) lease spectrum to secondaries (unlicensed users) in return for financial remuneration, can eliminate the inefficiencies of the static spectrum allocation policy. We address some of the challenges that have inhibited the wide-scale deployment of the secondary spectrum market.

    We first consider a secondary spectrum market where the primaries quote prices for their available channels at a single location. The transmission rates offered by the primaries' channels evolve randomly because of fading and noise. The secondaries decide which channels to buy based on the transmission rates and the prices. We formulate the problem as a non-cooperative game with the primaries as players. Each primary selects a price based on its own channel state only, as it is unaware of the channel states of the other primaries. We show that under the unique Nash equilibrium (NE) strategy profile, a primary prices its channel so as to render a channel that provides a high transmission rate more preferable; this negates the perception that prices ought to be selected to render channels equally preferable to the secondary regardless of their transmission rates.

    Next, we consider the setting where the secondary spectrum market operates over multiple locations. Each primary needs to select an independent set in a conflict graph and a price at each location. We consider two scenarios: (i) the number of locations is small, and (ii) the number of locations is large. We show that when the number of locations is small, in a symmetric NE strategy, each primary sells its channel in an independent set whose cardinality exceeds a certain threshold; the threshold decreases as the transmission rate offered by the channel decreases. The symmetric NE is unique in a widely seen conflict graph, the linear conflict graph. In contrast, when the number of locations is large, a primary only sells its channel in the maximum independent set, and the symmetric NE is not unique in the linear conflict graph.

    Subsequently, we consider the setting where a primary owns a channel at a single location and can acquire the competitor's channel state information (C-CSI) by incurring a cost. Each primary now needs to decide whether to acquire the C-CSI and what price to quote based on the information it has. We formulate the problem as a non-cooperative game with the two primaries as players and characterize the NE strategies. We first characterize the NE of this game for a symmetric model where the C-CSI is perfect, and show that the payoff of a primary is independent of the C-CSI acquisition cost. We then generalize our analysis to allow for imperfect estimation and for cases where the two primaries have different C-CSI costs or different channel availabilities. Interestingly, our results show that the payoff of a primary increases when there is estimation error. We also show that, surprisingly, the expected payoff of a primary may decrease as the C-CSI acquisition cost decreases when primaries have different availabilities.

    Finally, we consider the setting where a primary allows multiple secondaries to use its channel at a location. The interference must be limited at each primary user terminal (primary-UT) in order to maintain a quality of service for each primary-UT. The secondary base stations (secondary-BSs) are self-interested entities that maximize only their own utilities, which makes it difficult to obtain a simple interference mitigation policy.
    We formulate the problem as a non-cooperative coupled-constraint concave game. We use the concept of the normalized Nash equilibrium (NNE), since it caters to the distributed setting. We develop a distributed algorithm which converges to the unique NNE for a large class of utility functions. In the distributed algorithm, the secondary-BSs do not need to exchange information among themselves, and only minimal cooperation from the primary-UTs is required. When the NNE is not unique or is difficult to compute, we introduce the concept of the WNNE, which retains most of the properties of the NNE but can be computed more easily.
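    To illustrate why a normalized Nash equilibrium suits this distributed setting, the sketch below runs a primal-dual iteration for a toy coupled-constraint power game: each secondary-BS updates only its own transmit power against a single interference price, and the primary-UT updates that common price from the aggregate interference it measures. The logarithmic utilities, channel gains, and step sizes are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: computing a normalized Nash equilibrium (NNE) of a toy coupled-
# constraint power-control game with a distributed primal-dual iteration.
# Each secondary-BS i maximizes log(1 + p_i) - lambda * g_i * p_i using only
# the common interference price lambda broadcast by the primary-UT; the
# primary-UT adjusts that single price from the interference it measures.
# Utilities, gains, and step sizes are illustrative assumptions.
import numpy as np

g = np.array([1.0, 0.6, 0.3])      # interference gains toward the primary-UT
p_max = 2.0                        # per-BS power cap
I_max = 1.0                        # interference budget at the primary-UT
step = 0.05

p = np.zeros_like(g)               # secondary-BS transmit powers
lam = 0.0                          # common price (same multiplier for all players -> NNE)

for _ in range(5000):
    # each BS takes a projected gradient step on its own priced utility
    grad = 1.0 / (1.0 + p) - lam * g
    p = np.clip(p + step * grad, 0.0, p_max)
    # primary-UT measures total interference and adjusts the single price
    lam = max(0.0, float(lam + step * (g @ p - I_max)))

print("powers:", np.round(p, 3), "price:", round(lam, 3),
      "interference:", round(float(g @ p), 3))  # interference should settle near I_max
```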