
    Új módszerek az adattömörítésben = New methods in data compression

    We designed limited-delay data compression methods for the lossy coding of individual sequences that perform asymptotically as well as the best time-varying code from a reference family, matched to the source sequence in hindsight, that may change the employed base code several times. We provided efficient, low-complexity implementations for the cases where the base reference class is the set of traditional or certain network scalar quantizers. We developed routing algorithms for communication networks that provide asymptotically as good QoS parameters (such as packet loss ratio or delay) as the best fixed path in the network matched to the varying conditions in hindsight; notably, the performance and complexity of these methods scale with the size of the network (rather than with the number of paths) even when the rate of convergence in time is optimal. Experiments indicate that data for which bytes are not the natural choice of symbols compress poorly under standard byte-based implementations of lossless data compression algorithms, while bit-level algorithms perform reasonably on byte-based data (and, thanks to the smaller alphabet, often have substantially lower complexity). We explained this phenomenon theoretically by analyzing how block Markov sources can be approximated by higher-order symbol-based Markov models. Finally, we solved a sequential, on-line version of the bin packing problem, which can be applied to schedule transmissions for certain resource-limited sensors.
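
    The exponential-weighting idea behind such reference-class-competitive schemes can be sketched in a few lines. The sketch below is illustrative only and makes simplifying assumptions that are not taken from the project itself: a full-information setting (every quantizer's distortion is observable at each step), squared-error distortion, and a tiny reference class of uniform scalar quantizers; all names and parameters are hypothetical.

        import numpy as np

        def exp_weighted_quantizer_choice(samples, quantizers, eta=0.1, rng=None):
            """Pick a quantizer per sample by exponential weighting of past
            squared-error distortions (full-information sketch, not the
            project's actual limited-delay construction)."""
            rng = rng or np.random.default_rng()
            cum_loss = np.zeros(len(quantizers))   # cumulative distortion of each quantizer
            outputs = []
            for x in samples:
                w = np.exp(-eta * (cum_loss - cum_loss.min()))   # shift for numerical stability
                k = rng.choice(len(quantizers), p=w / w.sum())
                outputs.append(quantizers[k](x))
                cum_loss += np.array([(q(x) - x) ** 2 for q in quantizers])
            return outputs

        # toy reference class: uniform scalar quantizers with different step sizes
        quantizers = [lambda v, d=d: d * round(v / d) for d in (0.1, 0.5, 1.0)]
        reconstruction = exp_weighted_quantizer_choice(np.random.randn(1000), quantizers)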

    Distributed Learning in Wireless Sensor Networks

    The problem of distributed or decentralized detection and estimation in applications such as wireless sensor networks has often been considered in the framework of parametric models, in which strong assumptions are made about a statistical description of nature. In certain applications such assumptions are warranted, and systems designed from these models show promise. In other scenarios, however, prior knowledge is at best vague, and translating such knowledge into a statistical model is undesirable. Applications such as these pave the way for a nonparametric study of distributed detection and estimation. In this paper, we review recent work of the authors in which some elementary models for distributed learning are considered. These models are in the spirit of classical work in nonparametric statistics and are applicable to wireless sensor networks. (Published in the Proceedings of the 42nd Annual Allerton Conference on Communication, Control and Computing, University of Illinois, 2004.)

    Consistency in Models for Distributed Learning under Communication Constraints

    Motivated by sensor networks and other distributed settings, several models for distributed learning are presented. The models differ from classical works in statistical pattern recognition by allocating observations of an independent and identically distributed (i.i.d.) sampling process amongst members of a network of simple learning agents. The agents are limited in their ability to communicate with a central fusion center, and thus the amount of information available for use in classification or regression is constrained. For several basic communication models in both the binary classification and regression frameworks, we question the existence of agent decision rules and fusion rules that result in a universally consistent ensemble. The answers to this question raise new issues to consider with regard to universal consistency. Insofar as these models present a useful picture of distributed scenarios, this paper addresses whether the guarantees provided by Stone's Theorem in centralized environments hold in distributed settings. (To appear in the IEEE Transactions on Information Theory.)
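
    One concrete instance of such a communication-constrained model (a hedged illustration, not necessarily one of the paper's exact rules): each agent holds a single labelled observation, responds with one bit, namely its label, only when the query falls within a fixed window of its observation, and the fusion center takes a majority vote over the received bits. The window radius, the tie-breaking default, and all names below are illustrative assumptions.

        import numpy as np

        def fused_classify(query, agent_x, agent_y, radius=0.5):
            """Each agent sends at most one bit: its label, and only if its single
            observation lies within `radius` of the query; otherwise it abstains.
            The fusion center takes a majority vote over the received bits."""
            votes = [y for x, y in zip(agent_x, agent_y) if abs(x - query) <= radius]
            if not votes:
                return 0                       # arbitrary default when nobody responds
            return int(np.mean(votes) >= 0.5)  # majority vote over {0, 1} labels

        # toy data: 200 agents, one scalar observation and binary label each
        rng = np.random.default_rng(0)
        agent_x = rng.uniform(-3, 3, size=200)
        agent_y = (agent_x > 0).astype(int)    # true concept: sign of x
        print(fused_classify(1.2, agent_x, agent_y))   # -> 1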

    The on-line shortest path problem under partial monitoring

    The on-line shortest path problem is considered under various models of partial monitoring. Given a weighted directed acyclic graph whose edge weights can change in an arbitrary (adversarial) way, a decision maker has to choose in each round of a game a path between two distinguished vertices such that the loss of the chosen path (defined as the sum of the weights of its composing edges) is as small as possible. In a setting that generalizes the multi-armed bandit problem, after choosing a path the decision maker learns only the weights of those edges that belong to the chosen path. For this problem, an algorithm is given whose average cumulative loss over n rounds exceeds that of the best path, matched off-line to the entire sequence of edge weights, by a quantity proportional to 1/\sqrt{n} that depends only polynomially on the number of edges of the graph. The algorithm can be implemented with complexity that is linear in the number of rounds n and in the number of edges. An extension to the so-called label-efficient setting is also given, in which the decision maker is informed of the weights of the edges on the chosen path at only m << n time instances in total. Another extension is shown in which the decision maker competes against a time-varying path, a generalization of the problem of tracking the best expert. A version of the multi-armed bandit setting for shortest paths is also discussed, in which the decision maker learns only the total weight of the chosen path but not the weights of the individual edges on the path. Applications to routing in packet-switched networks, along with simulation results, are also presented. (35 pages.)
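
    The paper's algorithm works at the level of edges precisely so that its complexity stays polynomial in the graph size; the sketch below is only the naive path-level variant, an Exp3-style forecaster with importance-weighted loss estimates over an explicitly enumerated path set, intended to illustrate the bandit feedback model rather than the paper's efficient implementation (enumerating all paths is exponential in general, which is exactly what the paper avoids). Parameter values and names are arbitrary.

        import numpy as np

        def exp3_paths(paths, edge_loss_rounds, eta=0.05, gamma=0.05, rng=None):
            """Naive path-level Exp3 sketch.  `paths` is a list of edge-index tuples and
            `edge_loss_rounds[t]` holds that round's (adversarial) loss of every edge;
            only the losses along the chosen path are revealed to the learner."""
            rng = rng or np.random.default_rng()
            weights = np.ones(len(paths))
            total = 0.0
            for losses in edge_loss_rounds:
                p = (1 - gamma) * weights / weights.sum() + gamma / len(paths)
                k = rng.choice(len(paths), p=p)
                path_loss = sum(losses[e] for e in paths[k])      # bandit feedback
                total += path_loss
                estimate = np.zeros(len(paths))
                estimate[k] = path_loss / p[k]                    # importance-weighted estimate
                weights *= np.exp(-eta * estimate)
                weights /= weights.max()                          # guard against underflow
            return total

        # two edge-disjoint paths in a toy graph with four edges
        paths = [(0, 1), (2, 3)]
        losses = np.random.rand(1000, 4)                          # per-round edge losses
        print(exp3_paths(paths, losses))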

    Efficient Learning of Sparse Conditional Random Fields for Supervised Sequence Labelling

    Conditional Random Fields (CRFs) constitute a popular and efficient approach to supervised sequence labelling. CRFs can cope with large description spaces and can integrate some form of structural dependency between labels. In this contribution, we address the issue of efficient feature selection for CRFs, based on imposing sparsity through an L1 penalty. We first show how sparsity of the parameter set can be exploited to significantly speed up training and labelling. We then introduce coordinate descent parameter update schemes for CRFs with L1 regularization. We finally provide empirical comparisons of the proposed approach with state-of-the-art CRF training strategies. In particular, the proposed approach is shown to take advantage of the sparsity to speed up processing and hence potentially handle higher-dimensional models.
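
    The coordinate-wise update that an L1 penalty induces is a soft-thresholding step. The sketch below shows it on a plain least-squares objective as a stand-in; the actual CRF objective requires forward-backward computations over label sequences to evaluate gradients, which are not reproduced here, and the penalty weight and sweep count are illustrative. Coordinates driven exactly to zero drop out of subsequent computation, which is the source of the speed-up mentioned in the abstract.

        import numpy as np

        def soft_threshold(z, lam):
            return np.sign(z) * max(abs(z) - lam, 0.0)

        def lasso_coordinate_descent(X, y, lam, n_sweeps=50):
            """Cyclic coordinate descent for (1/2)||y - Xw||^2 + lam * ||w||_1,
            used here as a stand-in for the CRF objective."""
            n, d = X.shape
            w = np.zeros(d)
            residual = y.copy()                    # residual = y - X @ w
            col_sq = (X ** 2).sum(axis=0)
            for _ in range(n_sweeps):
                for j in range(d):
                    if col_sq[j] == 0.0:
                        continue
                    residual += X[:, j] * w[j]     # remove j-th contribution
                    rho = X[:, j] @ residual
                    w[j] = soft_threshold(rho, lam) / col_sq[j]
                    residual -= X[:, j] * w[j]     # add it back
            return w

        # toy problem: only the first three features carry signal
        X = np.random.randn(100, 50)
        w_true = np.zeros(50); w_true[:3] = [2.0, -1.0, 0.5]
        y = X @ w_true + 0.1 * np.random.randn(100)
        print(np.nonzero(lasso_coordinate_descent(X, y, lam=5.0))[0])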

    A cost-sensitive decision tree learning algorithm based on a multi-armed bandit framework

    This paper develops a new algorithm for inducing cost-sensitive decision trees that is inspired by the multi-armed bandit problem, in which a player in a casino has to decide which slot machine (bandit) from a selection of slot machines is likely to pay out the most. Game theory proposes a solution to this multi-armed bandit problem through a process of exploration and exploitation in which reward is maximized. This paper utilizes these concepts to develop a new algorithm by viewing the rewards as a reduction in costs and using exploration and exploitation so that a compromise between decisions based on accuracy and decisions based on costs can be found. The algorithm employs the notion of lever pulls from the multi-armed bandit game to select attributes during decision tree induction, using a look-ahead methodology to explore potential attributes and exploit the attribute that maximizes the reward. The new algorithm is evaluated on fifteen datasets and compared to six well-known algorithms: J48, EG2, MetaCost, AdaCostM1, ICET and ACT. The results show that the new multi-armed bandit based algorithm can produce more cost-effective trees without compromising accuracy. The paper also includes a critical appraisal of the limitations of the new algorithm and proposes avenues for further research.
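
    A hedged illustration of the attribute-as-arm idea (not the paper's exact lever-pull or look-ahead procedure): each candidate attribute is an arm, a "pull" runs a noisy look-ahead that returns an estimated cost reduction, and a UCB rule trades off exploring untried attributes against exploiting the best one seen so far. The look-ahead is stubbed out as an arbitrary function, and all names and pull budgets are assumptions.

        import math
        import random

        def ucb_select_attribute(attributes, lookahead_reward, n_pulls=30):
            """Treat each candidate attribute as a bandit arm.  A pull runs a
            (noisy) look-ahead estimating the cost reduction of splitting on
            that attribute; UCB balances exploration and exploitation."""
            counts = {a: 0 for a in attributes}
            means = {a: 0.0 for a in attributes}
            for t in range(1, n_pulls + 1):
                untried = [a for a in attributes if counts[a] == 0]
                if untried:
                    a = random.choice(untried)
                else:
                    a = max(attributes,
                            key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
                r = lookahead_reward(a)            # estimated cost reduction
                counts[a] += 1
                means[a] += (r - means[a]) / counts[a]
            return max(attributes, key=lambda a: means[a])

        # hypothetical look-ahead: attribute "b" tends to reduce cost the most
        rewards = {"a": 0.2, "b": 0.6, "c": 0.4}
        print(ucb_select_attribute(list(rewards),
                                   lambda attr: rewards[attr] + random.gauss(0, 0.1)))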

    Pure exploration in multi-armed bandits with low rank structure using oblivious sampler

    In this paper, we consider the low rank structure of the reward sequence in pure exploration problems. We first propose a separated setting for the pure exploration problem, in which the exploration strategy cannot receive feedback from its explorations; this separation requires the exploration strategy to sample the arms obliviously. By exploiting the kernel information of the reward vectors, we provide efficient algorithms for both the time-varying and fixed cases with regret bound O(d\sqrt{(\ln N)/n}). We then show a lower bound for pure exploration in multi-armed bandits with a low rank reward sequence; there is an O(\sqrt{\ln N}) gap between our upper bound and the lower bound. (15 pages.)
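
    One way to read the separated, oblivious setting (an interpretation for illustration, not the paper's algorithm): the N arm means lie in a known d-dimensional subspace, arms are sampled on a schedule fixed in advance so that it never depends on observed rewards, and the best arm is recommended from a least-squares fit of the d coefficients. The uniform round-robin schedule and all names below are assumptions.

        import numpy as np

        def oblivious_best_arm(pull_reward, basis, n_samples):
            """`basis` is an N x d matrix whose columns span the subspace containing
            the mean reward vector.  The sampling schedule is fixed in advance
            (oblivious); the d coefficients are then fit by least squares."""
            N, d = basis.shape
            arms = [t % N for t in range(n_samples)]        # fixed round-robin schedule
            rewards = np.array([pull_reward(a) for a in arms])
            A = basis[arms, :]                              # design matrix of the pulls
            theta, *_ = np.linalg.lstsq(A, rewards, rcond=None)
            return int(np.argmax(basis @ theta))            # recommended arm

        # toy instance: 8 arms whose means live in a 2-dimensional subspace
        basis = np.random.randn(8, 2)
        theta_true = np.array([1.0, -0.5])
        pull = lambda a: basis[a] @ theta_true + 0.1 * np.random.randn()
        print(oblivious_best_arm(pull, basis, n_samples=400))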

    Modeling Driver Behavior From Demonstrations in Dynamic Environments Using Spatiotemporal Lattices

    One of the most challenging tasks in the development of path planners for intelligent vehicles is the design of the cost function that models the desired behavior of the vehicle. While this task has traditionally been accomplished by hand-tuning the model parameters, recent approaches propose to learn the model automatically from demonstrated driving data using Inverse Reinforcement Learning (IRL). To determine whether the model has correctly captured the demonstrated behavior, most IRL methods require obtaining a policy by repeatedly solving the forward control problem. Calculating the full policy is costly in continuous or large domains, and it is therefore often approximated by finding a single trajectory using traditional path-planning techniques. In this work, we propose to find such a trajectory using a conformal spatiotemporal state lattice, which offers two main advantages. First, by conforming the lattice to the environment, the search is focused only on motions that are feasible for the robot, saving computational power. Second, by considering time as part of the state, the trajectory is optimized with respect to the motion of the dynamic obstacles in the scene. As a consequence, the resulting trajectory can be used for the model assessment. We show how the proposed IRL framework can successfully handle highly dynamic environments by modeling the highway tactical driving task from demonstrated driving data gathered with an instrumented vehicle.
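
    A toy version of the forward (trajectory-search) step only, under assumptions that are ours rather than the paper's: lattice nodes are (longitudinal step, lateral offset) pairs with time advancing along the longitudinal axis, the edge cost is a learned weight vector dotted with two hand-picked features, and dynamic programming returns the minimum-cost trajectory. The real framework conforms the lattice to road geometry and estimates the cost weights by IRL.

        import numpy as np

        def min_cost_trajectory(w, n_steps=20, offsets=(-2, -1, 0, 1, 2),
                                obstacle=lambda t: 0):
            """Dynamic programming over a tiny spatiotemporal lattice: at step t the
            vehicle sits at one of a few lateral offsets, and because time advances
            with the longitudinal step, the cost can react to a moving obstacle."""
            offsets = list(offsets)
            cost = {o: 0.0 for o in offsets}          # cost-to-reach each offset so far
            parents = []
            for t in range(1, n_steps):
                new_cost, parent = {}, {}
                for o in offsets:
                    candidates = []
                    for prev in offsets:
                        if abs(prev - o) > 1:         # limit lateral motion per step
                            continue
                        feats = np.array([abs(o),                                 # stay near lane centre
                                          1.0 / (1.0 + abs(o - obstacle(t)))])    # proximity to moving obstacle
                        candidates.append((cost[prev] + float(w @ feats), prev))
                    new_cost[o], parent[o] = min(candidates)
                cost, parents = new_cost, parents + [parent]
            o = min(cost, key=cost.get)               # cheapest final offset
            trajectory = [o]
            for parent in reversed(parents):
                o = parent[o]
                trajectory.append(o)
            return list(reversed(trajectory))

        # hypothetical weights: w[0] penalizes lateral offset, w[1] penalizes obstacle proximity
        print(min_cost_trajectory(np.array([0.3, 1.0]),
                                  obstacle=lambda t: 1 if t < 10 else -2))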