A Universal Parallel Two-Pass MDL Context Tree Compression Algorithm
Computing problems that handle large amounts of data necessitate the use of lossless data compression for efficient storage and transmission. We present a novel lossless universal data compression algorithm that uses parallel computational units to increase the throughput. The length-N input sequence is partitioned into B blocks. Processing each block independently of the other blocks can accelerate the computation by a factor of B, but degrades the compression quality. Instead, our approach is to first estimate the minimum description length (MDL) context tree source underlying the entire input, and then encode each of the B blocks in parallel based on the MDL source. With this two-pass approach, the compression loss incurred by using more parallel units is insignificant. Our algorithm is work-efficient, i.e., its overall computational complexity is O(N) (O(N/B) per computational unit). Its redundancy is approximately B log(N/B) bits above Rissanen's lower bound on universal compression performance, with respect to any context tree source whose maximal depth is at most O(log(N/B)). We improve the compression by using different quantizers for states of the context tree based on the number of symbols corresponding to those states. Numerical results from a prototype implementation suggest that our algorithm offers a better trade-off between compression and throughput than competing universal data compression algorithms.
Comment: Accepted to Journal of Selected Topics in Signal Processing special issue on Signal Processing for Big Data (expected publication date June 2015). 10 pages double column, 6 figures, and 2 tables. arXiv admin note: substantial text overlap with arXiv:1405.6322. Version: Mar 2015: Corrected a typo.
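The two-pass structure described above can be made concrete with a small sketch. This is our own illustration, not the authors' implementation: it replaces the MDL context tree with a trivial order-0 model and uses ideal code lengths (-log2 p), and the function name is ours. It shows why coding blocks against one shared model estimated from the whole input leaves the total code length independent of the number of parallel blocks.

```python
# Minimal sketch of the two-pass idea (our illustration; an order-0 model
# stands in for the MDL context tree, and ideal code lengths stand in for
# an actual entropy coder).
from collections import Counter
from math import log2

def shared_model_code_length(data: bytes, num_blocks: int) -> float:
    n = len(data)
    # Pass 1: estimate one model over the entire input (Laplace smoothing).
    counts = Counter(data)
    probs = {s: (c + 1) / (n + 256) for s, c in counts.items()}
    # Pass 2: each block is coded against the same frozen model, so the
    # blocks could be handled by independent parallel workers.
    block = n // num_blocks
    total_bits = 0.0
    for b in range(num_blocks):
        chunk = data[b * block : n if b == num_blocks - 1 else (b + 1) * block]
        total_bits += sum(-log2(probs[s]) for s in chunk)
    return total_bits
```

Because the model is frozen after pass 1, splitting the input into more blocks does not change the total code length; with independently learned per-block models, each block would instead pay its own model-learning cost.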
A Parallel Two-Pass MDL Context Tree Algorithm for Universal Source Coding
We present a novel lossless universal source coding algorithm that uses parallel computational units to increase the throughput. The length-N input sequence is partitioned into B blocks. Processing each block independently of the other blocks can accelerate the computation by a factor of B, but degrades the compression quality. Instead, our approach is to first estimate the minimum description length (MDL) source underlying the entire input, and then encode each of the B blocks in parallel based on the MDL source. With this two-pass approach, the compression loss incurred by using more parallel units is insignificant. Our algorithm is work-efficient, i.e., its overall computational complexity is O(N) (O(N/B) per computational unit). Its redundancy is approximately B log(N/B) bits above Rissanen's lower bound on universal coding performance, with respect to any tree source whose maximal depth is at most O(log(N/B)).
Empirical Bayes and Full Bayes for Signal Estimation
We consider signals that follow a parametric distribution where the parameter
values are unknown. To estimate such signals from noisy measurements in scalar
channels, we study the empirical performance of an empirical Bayes (EB)
approach and a full Bayes (FB) approach. We then apply EB and FB to solve
compressed sensing (CS) signal estimation problems by successively denoising a
scalar Gaussian channel within an approximate message passing (AMP) framework.
Our numerical results show that FB achieves better performance than EB in
scalar channel denoising problems when the signal dimension is small. In the CS
setting, the signal dimension must be large enough for AMP to work well; for
large signal dimensions, AMP performs similarly whether it uses FB or EB.
Comment: This work was presented at the Information Theory and Applications Workshop (ITA), San Diego, CA, Feb. 201
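The EB step in a scalar Gaussian channel y = x + w, w ~ N(0, sigma_w^2), can be sketched in a few lines. This toy example is our own, not the paper's algorithm: it assumes a zero-mean Gaussian prior x ~ N(0, sigma_x^2) with unknown sigma_x, estimates sigma_x^2 from the observations by moment matching, and plugs the estimate into the MMSE (Wiener) shrinkage; the function name is ours.

```python
# Toy empirical-Bayes denoiser for a scalar Gaussian channel (our sketch).
def eb_denoise(y, sigma_w):
    # Step 1 (EB): moment matching, since E[y^2] = sigma_x^2 + sigma_w^2.
    second_moment = sum(v * v for v in y) / len(y)
    sigma_x2 = max(second_moment - sigma_w ** 2, 0.0)
    # Step 2: plug-in MMSE (Wiener) shrinkage for a Gaussian prior.
    gain = sigma_x2 / (sigma_x2 + sigma_w ** 2)  # assumes sigma_w > 0
    return [gain * v for v in y]
```

A full-Bayes treatment would instead place a prior on sigma_x and integrate it out, rather than plugging in a point estimate; the abstract's comparison is between these two philosophies.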
Single-stage revision in the management of prosthetic joint infections after total knee arthroplasty - A review of current concepts
Prosthetic joint infection (PJI) is a devastating complication following total knee arthroplasty (TKA), and the gold-standard surgical approach involves a two-stage revision TKA (TSR). Owing to newer, emerging evidence on this subject, there has been a gradual shift towards a single-stage revision approach (SSR), with the purported benefits of mitigated patient morbidity, decreased complications and reduced costs. However, there is still a substantial lacuna in the evidence regarding the safety and outcomes of the two approaches in chronic PJI. This study aimed to comprehensively review the literature on SSR and evaluate its role within revision TKA after PJI. The narrative review involved a comprehensive search of the databases (Embase, Medline and Pubmed), conducted on 20 January 2024 using specific keywords. All manuscripts discussing the use of SSR for the management of PJI after TKA were considered for the review. Among the screened manuscripts, opinion articles, letters to the editor and non-English manuscripts were excluded. The literature search yielded a total of 232 studies. Following detailed scrutiny of these manuscripts, 26 articles were finally selected. The overall success rate following SSR is reported to range from 73% to 100%, and is comparable to TSR. SSR is performed in PJI patients with bacteriologically proven infection, adequate soft-tissue cover, an immunocompetent host and excellent tolerance to antibiotics. The main difference between SSR and TSR is that the interval between the two stages is only a few minutes instead of 6 weeks. Appropriate topical, intraoperative antibiotic therapy, followed by adequate postoperative systemic antibiotic cover, is necessary to ensure a good outcome.
Some of the major benefits of SSR over TSR include reduced morbidity, decreased complications (such as arthrofibrosis or anesthesia-associated adverse events), improved extremity function, earlier return to activities, fewer mechanical (prosthesis-associated) complications and enhanced patient satisfaction. SSR is a reliable approach for the management of chronic PJI. Based on our comprehensive review of the literature, it may be concluded that the right selection of patients, extensive debridement, a sophisticated reconstruction strategy, identification of the pathogenic organism, initiation of appropriate antibiotic therapy and adequate follow-up are the key determinants of a successful outcome. Achieving this will undoubtedly require a multidisciplinary team (MDT) approach on a case-by-case basis. [Abstract copyright: © 2024 The Authors.]
A Study on the Impact of Locality in the Decoding of Binary Cyclic Codes
In this paper, we study the impact of locality on the decoding of binary
cyclic codes under two approaches, namely ordered statistics decoding (OSD) and
trellis decoding. Given a binary cyclic code having locality or availability,
we suitably modify the OSD to obtain gains in terms of the signal-to-noise ratio (SNR), for a given reliability and essentially the same level of decoder
complexity. With regard to trellis decoding, we show that careful introduction
of locality results in the creation of cyclic subcodes having lower maximum
state complexity. We also present a simple upper-bounding technique on the
state complexity profile, based on the zeros of the code. Finally, it is shown
how the decoding speed can be significantly increased in the presence of
locality, in the moderate-to-high SNR regime, by making use of a quick-look
decoder that often returns the ML codeword.
Comment: Extended version of a paper submitted to ISIT 201
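The order-0 flavor of OSD can be sketched briefly. This is our own illustration, not the paper's locality-aware modification: for the cyclic [7,4] Hamming code (generator polynomial 1 + x + x^3), it hard-decides the k most reliable linearly independent received positions and re-encodes them through a generator basis reduced onto those positions.

```python
# Order-0 OSD sketch (our illustration) for the cyclic [7,4] Hamming code.
G7 = [[1, 1, 0, 1, 0, 0, 0],   # rows are cyclic shifts of g(x) = 1 + x + x^3
      [0, 1, 1, 0, 1, 0, 0],
      [0, 0, 1, 1, 0, 1, 0],
      [0, 0, 0, 1, 1, 0, 1]]

def osd0(llr, G):
    G = [row[:] for row in G]
    k, n = len(G), len(G[0])
    order = sorted(range(n), key=lambda i: -abs(llr[i]))  # most reliable first
    pivots, row = [], 0
    for col in order:                  # greedy basis of reliable positions
        if row == k:
            break
        pr = next((r for r in range(row, k) if G[r][col]), None)
        if pr is None:
            continue                   # column depends on earlier pivots
        G[row], G[pr] = G[pr], G[row]
        for r in range(k):             # reduce so pivot columns are unit vectors
            if r != row and G[r][col]:
                G[r] = [a ^ b for a, b in zip(G[r], G[row])]
        pivots.append(col)
        row += 1
    hard = [1 if v < 0 else 0 for v in llr]  # hard decisions (negative LLR -> 1)
    # Re-encode: the codeword agreeing with the hard decisions on the pivots.
    cw = [0] * n
    for r, col in enumerate(pivots):
        if hard[col]:
            cw = [a ^ b for a, b in zip(cw, G[r])]
    return cw
```

The decoder recovers a codeword even when the least reliable position is received with the wrong sign, since that position is never used as an information position.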
Rate-Optimal Streaming Codes for Channels with Burst and Isolated Erasures
Recovery of data packets from packet erasures in a timely manner is critical for many streaming applications. An early paper by Martinian and Sundberg introduced a framework for streaming codes and designed rate-optimal codes that permit delay-constrained recovery from an erasure burst of length up to B. A recent work by Badr et al. extended this result and introduced a sliding-window channel model C(N, B, W). Under this model, in a sliding window of width W, one of the following erasure patterns is possible: (i) a burst of length at most B, or (ii) at most N (possibly non-contiguous) arbitrary erasures. Badr et al. obtained a rate upper bound for streaming codes that can recover, with a time delay T, from any erasure pattern permissible under the C(N, B, W) model. However, constructions matching the bound were absent, except for a few parameter sets. In this paper, we present an explicit family of codes that achieves the rate upper bound for all feasible parameters N, B, W and T.
Comment: shorter version submitted to ISIT 201
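The sliding-window channel model can be captured with a small checker. This sketch is our own illustration of the model described above (symbols N, B and W as in the abstract; the function name is ours): a binary erasure pattern is admissible iff every width-W window contains either at most N arbitrary erasures or a single burst of length at most B.

```python
# Admissibility checker for the sliding-window erasure model (our sketch).
def admissible(pattern, N, B, W):
    for start in range(len(pattern) - W + 1):
        win = pattern[start : start + W]
        erased = [i for i, e in enumerate(win) if e]
        if len(erased) <= N:
            continue  # few enough isolated erasures in this window
        # Otherwise the erasures must form one contiguous burst of length <= B.
        if erased and erased[-1] - erased[0] + 1 == len(erased) <= B:
            continue
        return False
    return True
```

The checker only encodes the channel model; the rate-optimal code constructions themselves require considerably more machinery.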
Foraging in Stochastic Environments
For many organisms, foraging for food and resources is integral to survival. Mathematical models of foraging can provide insight into the benefits and drawbacks of different foraging strategies. We begin by considering the movement of a memoryless starving forager on a one-dimensional periodic lattice, where each location contains one unit of food. As the forager lands on sites with food, it consumes the food, leaving those sites empty. If the forager lands consecutively on a certain number of empty sites, then it starves. The forager has two modes of movement: it can either diffuse by moving with equal probability to adjacent lattice sites, or it can jump uniformly randomly amongst the lattice sites. The lifetime of the forager can be approximated in either paradigm by the sum of the cover time plus the number of empty sites it can visit before starving. The lifetime of the forager varies nonmonotonically according to the probability of jumping. The tradeoff between jumps and diffusion is explored using simpler systems as well as numerical simulation, and we demonstrate that the best strategy is one that incorporates both jumps and diffusion. Counterintuitively, when long-range jumps are time-penalized, the optimal strategy shifts to pure jumping. We next consider optimal strategies for a group of foragers to search for a target (such as food in an environment where it is sparsely located). There is a single target in one of several patches, with a greater penalty if the foragers decide to switch their positions among the patches. Both in the case of a single searcher, and in the case of a group of searchers, efficient deterministic strategies can be found to locate the target.
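The first model above lends itself to a short Monte Carlo sketch. This is our own illustration (parameter names `p_jump` and `s_max` are ours): each site of a periodic lattice starts with one unit of food; at each step the forager either jumps to a uniformly random site with probability `p_jump` or takes a diffusive step to a neighbor, and it starves after `s_max` consecutive visits to empty sites. Sweeping `p_jump` over averaged runs exhibits the lifetime's dependence on the mix of jumps and diffusion.

```python
# Monte Carlo sketch of the starving forager on a periodic lattice (ours).
import random

def forager_lifetime(size, p_jump, s_max, rng):
    food = [True] * size
    pos, hunger, steps = 0, 0, 0
    food[0] = False                               # forager eats its start site
    while hunger < s_max:
        if rng.random() < p_jump:
            pos = rng.randrange(size)             # long-range jump
        else:
            pos = (pos + rng.choice((-1, 1))) % size  # diffusive step
        if food[pos]:
            food[pos], hunger = False, 0          # eat and reset hunger
        else:
            hunger += 1                           # one more empty site visited
        steps += 1
    return steps
```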