
    Új módszerek az adattömörítésben = New methods in data compression

    We designed limited-delay codes for the lossy compression of individual sequences that perform asymptotically as well as the best time-varying code from a reference family (matched to the source sequence in hindsight) that can change the employed base code several times. We provided efficient, low-complexity implementations for the cases where the base reference class is the set of traditional or certain network scalar quantizers. We developed routing algorithms for communication networks that provide asymptotically as good QoS parameters (such as packet loss ratio or delay) as the best fixed path in the network matched to the varying conditions in hindsight; notably, the performance and complexity of these methods scale with the size of the network (rather than with the number of paths) even when the rate of convergence in time is optimal. Experiments indicate that data for which bytes are not the natural choice of symbols compress poorly under standard byte-based implementations of lossless data compression algorithms, while algorithms working at the bit level perform reasonably on byte-based data (in addition to having computational advantages from operating on a small alphabet). We explained this phenomenon theoretically by analyzing how block Markov sources can be approximated by higher-order symbol-based Markov models. Finally, we provided a solution to a sequential, on-line version of the bin packing problem, which can be applied to schedule transmissions for certain sensors with limited resources.
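
    As a rough, self-contained illustration of the byte-alignment observation (not the project's actual experimental setup), the following Python sketch compresses the same data twice with the byte-oriented DEFLATE implementation in the standard zlib module: once with 3-bit source symbols packed into bytes, so that symbol and byte boundaries do not align, and once with one symbol per byte. Both representations carry the same information, so the gap between the compressed sizes isolates the alignment effect; the alphabet, the weights, and the sequence length are arbitrary choices.

```python
import random
import zlib

random.seed(0)
# An i.i.d. source over a skewed 3-bit alphabet: bytes are not its
# natural symbols, since 8 is not a multiple of 3.
syms = random.choices(range(8), weights=[32, 8, 2, 1, 1, 1, 1, 1], k=60000)

# Representation 1: symbols packed into a bit stream, so symbol
# boundaries drift relative to byte boundaries.
bitstr = "".join(f"{s:03b}" for s in syms)
packed = bytes(int(bitstr[i:i + 8], 2) for i in range(0, len(bitstr) - 7, 8))

# Representation 2: one symbol per byte (byte-aligned, same information).
aligned = bytes(syms)

print("packed :", len(packed), "->", len(zlib.compress(packed, 9)))
print("aligned:", len(aligned), "->", len(zlib.compress(aligned, 9)))
```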

    Online Multi-task Learning with Hard Constraints

    We discuss multi-task online learning when a decision maker has to deal simultaneously with M tasks. The tasks are related, which we model by imposing that the M-tuple of actions taken by the decision maker needs to satisfy certain constraints. We give natural examples of such restrictions and then discuss a general class of tractable constraints, for which we introduce computationally efficient ways of selecting actions, essentially by reducing the problem to an on-line shortest path problem. We briefly discuss "tracking" and "bandit" versions of the problem and extend the model in various ways, including non-additive global losses and uncountably infinite sets of tasks.
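
    As a deliberately brute-force illustration of the setting (not the paper's method), the sketch below runs an exponentially weighted average forecaster directly over the feasible set of joint actions, using the toy constraint that at most C of the M binary actions may equal 1. The paper's contribution is precisely to avoid this enumeration for tractable constraint classes by reducing action selection to an on-line shortest path problem. All names and parameter values are illustrative.

```python
import itertools
import math
import random

random.seed(1)
M, C, eta, T = 4, 2, 0.3, 500  # tasks, constraint, learning rate, rounds

# Feasible joint actions: binary M-tuples with at most C ones.
feasible = [a for a in itertools.product((0, 1), repeat=M) if sum(a) <= C]
cum = {a: 0.0 for a in feasible}  # cumulative loss of each joint action
total = 0.0

for t in range(T):
    # Exponentially weighted average forecaster over the feasible set;
    # subtracting the minimum before exponentiating avoids underflow.
    m0 = min(cum.values())
    w = [math.exp(-eta * (cum[a] - m0)) for a in feasible]
    choice = random.choices(feasible, weights=w)[0]

    # Arbitrary (here random) per-task losses; full-information feedback.
    loss = [[random.random(), random.random()] for _ in range(M)]
    total += sum(loss[m][choice[m]] for m in range(M))
    for a in feasible:
        cum[a] += sum(loss[m][a[m]] for m in range(M))

print(f"forecaster loss {total:.1f} vs best feasible tuple {min(cum.values()):.1f}")
```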

    The on-line shortest path problem under partial monitoring

    The on-line shortest path problem is considered under various models of partial monitoring. Given a weighted directed acyclic graph whose edge weights can change in an arbitrary (adversarial) way, a decision maker has to choose in each round of a game a path between two distinguished vertices so that the loss of the chosen path (defined as the sum of the weights of its composing edges) is as small as possible. In a setting generalizing the multi-armed bandit problem, after choosing a path, the decision maker learns only the weights of those edges that belong to the chosen path. For this problem, an algorithm is given whose average cumulative loss in n rounds exceeds that of the best path, matched off-line to the entire sequence of the edge weights, by a quantity that is proportional to 1/√n and depends only polynomially on the number of edges of the graph. The algorithm can be implemented with complexity that is linear in the number of rounds n and in the number of edges. An extension to the so-called label efficient setting is also given, in which the decision maker is informed of the weights of the edges corresponding to the chosen path at a total of m << n time instances. Another extension is shown in which the decision maker competes against a time-varying path, a generalization of the problem of tracking the best expert. A version of the multi-armed bandit setting for shortest path is also discussed, where the decision maker learns only the total weight of the chosen path but not the weights of the individual edges on the path. Applications to routing in packet switched networks, along with simulation results, are also presented.
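
    As a toy illustration of the bandit setting described above, the sketch below runs an exponentially weighted forecaster with importance-weighted per-edge loss estimates on a small DAG. For simplicity it enumerates the three paths explicitly, so its complexity grows with the number of paths; the point of the paper's algorithm is to obtain the same kind of estimates with complexity polynomial in the number of edges. The graph, the loss process, and all parameter values are illustrative.

```python
import math
import random

random.seed(2)
# A tiny DAG from vertex 0 to vertex 3; its three paths are listed
# explicitly here, whereas the paper's algorithm never enumerates paths.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
paths = [((0, 1), (1, 3)),
         ((0, 2), (2, 3)),
         ((0, 1), (1, 2), (2, 3))]
K = len(paths)

eta, gamma, T = 0.03, 0.05, 3000  # learning rate, exploration, rounds
L = [0.0] * K     # cumulative *estimated* path losses
true = [0.0] * K  # cumulative true path losses (for evaluation only)
total = 0.0

for t in range(T):
    m0 = min(L)  # shift before exponentiating to avoid underflow
    w = [math.exp(-eta * (l - m0)) for l in L]
    z = sum(w)
    p = [(1 - gamma) * wi / z + gamma / K for wi in w]
    i = random.choices(range(K), weights=p)[0]

    # Adversarial edge weights in [0, 1]; only the chosen path's
    # edges are revealed to the decision maker.
    wt = {e: random.random() for e in edges}
    total += sum(wt[e] for e in paths[i])
    for j in range(K):
        true[j] += sum(wt[e] for e in paths[j])

    for e in paths[i]:
        # Importance-weighted estimate: divide the observed weight by
        # the probability that edge e was observed this round.
        q = sum(p[j] for j in range(K) if e in paths[j])
        for j in range(K):
            if e in paths[j]:
                L[j] += wt[e] / q

print(f"average regret per round: {(total - min(true)) / T:.3f}")
```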

    Discrete Denoising with Shifts

    We introduce S-DUDE, a new algorithm for denoising DMC-corrupted data. The algorithm, which generalizes the recently introduced DUDE (Discrete Universal DEnoiser) of Weissman et al., aims to compete with a genie that has access, in addition to the noisy data, also to the underlying clean data, and that can choose to switch, up to m times, between sliding-window denoisers in a way that minimizes the overall loss. When the underlying data form an individual sequence, we show that the S-DUDE performs essentially as well as this genie, provided that m is sub-linear in the size of the data. When the clean data are emitted by a piecewise stationary process, we show that the S-DUDE achieves the optimum distribution-dependent performance, provided that the same sub-linearity condition is imposed on the number of switches. To further substantiate the universal optimality of the S-DUDE, we show that when the number of switches is allowed to grow linearly with the size of the data, any (sequence of) scheme(s) fails to compete in the above senses. Using dynamic programming, we derive an efficient implementation of the S-DUDE whose time and memory complexity grow only linearly with the data size and the number of switches m. Preliminary experimental results suggest that the S-DUDE can significantly improve on the performance attained by the original DUDE in applications where the nature of the data changes abruptly in time (or space), as is often the case in practice.
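
    For orientation, here is a minimal sketch of the original (non-shifting) DUDE of Weissman et al. on a binary symmetric channel with Hamming loss; the S-DUDE generalizes this single rule by allowing up to m switches between such sliding-window denoisers along the sequence. The second pass applies the two-pass DUDE decision rule, argmin over xhat of m(c)^T Pi^{-1} [pi_z * lambda_xhat], where m(c) counts the noisy center symbols seen in context c. The channel, the toy data model, and all parameter values are illustrative assumptions, not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(3)
delta, k, n = 0.1, 2, 50000  # BSC crossover, context radius, length

PI = np.array([[1 - delta, delta], [delta, 1 - delta]])  # channel P(z|x)
LAM = np.array([[0.0, 1.0], [1.0, 0.0]])                 # Hamming loss
PI_inv = np.linalg.inv(PI)

# Clean data: a sticky two-state Markov chain, corrupted by the BSC.
flips = rng.random(n) < 0.02
x = np.cumsum(flips) % 2
z = x ^ (rng.random(n) < delta)

# Pass 1: per two-sided context, count the noisy center symbols.
counts = {}
for t in range(k, n - k):
    c = (tuple(z[t - k:t]), tuple(z[t + 1:t + k + 1]))
    counts.setdefault(c, np.zeros(2))[z[t]] += 1

# Pass 2: DUDE rule  argmin_xhat  m(c)^T PI^{-1} [pi_z * lam_xhat].
xhat = z.copy()
for t in range(k, n - k):
    c = (tuple(z[t - k:t]), tuple(z[t + 1:t + k + 1]))
    q = counts[c] @ PI_inv  # estimated (unnormalized) clean-symbol counts
    scores = [q @ (PI[:, z[t]] * LAM[:, xh]) for xh in (0, 1)]
    xhat[t] = int(np.argmin(scores))

print("fraction of errors, noisy :", np.mean(z != x))
print("fraction of errors, DUDE  :", np.mean(xhat != x))
```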

    Adaptive Routing Using Expert Advice


    Efficient algorithms and minimax bounds for zero-delay lossy source coding
