Higher-order CIS codes
We introduce {\bf complementary information set codes} of higher order. A
binary linear code of length $tk$ and dimension $k$ is called a complementary
information set code of order $t$ ($t$-CIS code for short) if it has $t$
pairwise disjoint information sets. The duals of such codes make it possible to
reduce the cost of masking cryptographic algorithms against side-channel
attacks. As in the case of codes for error correction, given the length and the
dimension of a $t$-CIS code, we look for the highest possible minimum distance.
In this paper, this new class of codes is investigated. The existence of good
long CIS codes of a given order is derived by a counting argument. General
constructions based on cyclic and quasi-cyclic codes and on the building-up
construction are given. A formula similar to a mass formula is given. A
classification of 3-CIS codes of short length is given. Nonlinear codes better
than linear codes are derived by taking binary images of $\mathbb{Z}_4$-codes.
A general algorithm based on Edmonds' basis packing algorithm from matroid
theory is developed with the following property: given a binary linear code of
rate $1/t$, it either provides $t$ disjoint information sets or proves that the
code is not $t$-CIS. Using this algorithm, all optimal or best known codes in
the parameter range considered are shown to be $t$-CIS, apart from two
exceptional parameter pairs.
Comment: 13 pages; 1 figure
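As a concrete illustration of the definition above (not code from the paper; the generator matrix and index sets below are made-up toy values), note that a coordinate set of size $k$ is an information set exactly when the corresponding $k \times k$ column submatrix of a generator matrix is invertible over GF(2), so a candidate $t$-CIS witness can be checked as follows:

```python
# Sketch: verify that candidate coordinate sets are pairwise disjoint
# information sets of a binary [n, k] code given by a generator matrix G.
# The example matrix and index sets below are illustrative only.

def rank_gf2(rows):
    """Rank of a binary matrix (list of row bitmasks) over GF(2)."""
    rank = 0
    rows = list(rows)
    width = max((r.bit_length() for r in rows), default=0)
    for col in range(width):
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

def is_information_set(G, idx):
    """G: k x n binary matrix (list of lists); idx: candidate coordinate set."""
    k = len(G)
    if len(idx) != k:
        return False
    # Restrict G to the columns in idx and check full rank over GF(2).
    sub_rows = [sum((row[j] & 1) << pos for pos, j in enumerate(idx)) for row in G]
    return rank_gf2(sub_rows) == k

def is_t_cis_witness(G, index_sets):
    """True if the given coordinate sets are pairwise disjoint information sets."""
    flat = [j for s in index_sets for j in s]
    return len(flat) == len(set(flat)) and all(is_information_set(G, s) for s in index_sets)

# Toy example: the [4, 2] code generated by (1,0,1,1) and (0,1,0,1),
# with candidate sets {0,1} and {2,3} as a 2-CIS witness.
G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]
print(is_t_cis_witness(G, [(0, 1), (2, 3)]))  # True for this toy example
```

The Edmonds-style basis packing algorithm referenced in the abstract searches for such a family of disjoint information sets far more efficiently than testing candidate sets by hand.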
A linear lower bound for incrementing a space-optimal integer representation in the bit-probe model
We present the first linear lower bound on the number of bits that must be
accessed in the worst case to increment an integer in an arbitrary
space-optimal binary representation. The best previously known lower bound was
logarithmic. It is known that a logarithmic number of bits read in the worst
case suffices to increment some of the integer representations that use one bit
of redundancy; we therefore show an exponential gap between space-optimal and
redundant counters.
Our proof is based on viewing the increment procedure of a space-optimal
counter as a permutation and calculating its parity. For every space-optimal
counter, this permutation must be odd, and implementing an odd permutation
requires reading at least half of the bits in the worst case. The combination of
these two observations explains why the worst-case space-optimal problem is
substantially different from both the average-case setting with a constant
expected number of reads and almost space-optimal representations with a
logarithmic number of reads in the worst case.
Comment: 12 pages, 4 figures
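The parity observation can be checked directly for small word sizes; the sketch below (illustrative only, not taken from the paper) verifies that increment modulo $2^n$ is a single cycle of even length $2^n$ and therefore an odd permutation:

```python
# Sketch: the increment map x -> (x + 1) mod 2^n, viewed as a permutation of
# all n-bit states, is a single cycle of length 2^n and therefore odd
# (an l-cycle has parity (-1)^(l-1)). Brute-force check for small n.

def permutation_parity(perm):
    """Return +1 for an even permutation, -1 for an odd one."""
    seen = [False] * len(perm)
    parity = 1
    for start in range(len(perm)):
        if seen[start]:
            continue
        # Walk the cycle containing `start` and count its length.
        length, x = 0, start
        while not seen[x]:
            seen[x] = True
            x = perm[x]
            length += 1
        if length % 2 == 0:  # an even-length cycle is an odd permutation
            parity = -parity
    return parity

for n in range(1, 11):
    increment = [(x + 1) % (2 ** n) for x in range(2 ** n)]
    assert permutation_parity(increment) == -1  # always odd for space-optimal counters
print("increment mod 2^n is an odd permutation for n = 1..10")
```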
Space-Optimal Quasi-Gray Codes with Logarithmic Read Complexity
A quasi-Gray code of dimension n and length l over an alphabet Sigma is a sequence of distinct words w_1,w_2,...,w_l from Sigma^n such that any two consecutive words differ in at most c coordinates, for some fixed constant c>0. In this paper we are interested in the read and write complexity of quasi-Gray codes in the bit-probe model, where we measure the number of symbols read and written in order to transform any word w_i into its successor w_{i+1}.
We present a construction of quasi-Gray codes of dimension n and length 3^n over the ternary alphabet {0,1,2} with worst-case read complexity O(log n) and write complexity 2. This generalizes to arbitrary odd-size alphabets. For the binary alphabet, we present quasi-Gray codes of dimension n and length at least 2^n - 20n with worst-case read complexity 6 + log n and write complexity 2. This complements a recent result by Raskin [Raskin '17], who shows that any quasi-Gray code over the binary alphabet of length 2^n has read complexity Omega(n).
Our results significantly improve on previously known constructions, and for odd-size alphabets we break the Omega(n) worst-case barrier for space-optimal (non-redundant) quasi-Gray codes with a constant number of writes. We obtain our results via a novel application of algebraic tools together with the principles of catalytic computation [Buhrman et al. '14, Ben-Or and Cleve '92, Barrington '89, Coppersmith and Grossman '75].
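For contrast, the sketch below (standard textbook material, not this paper's construction) shows why the classical binary reflected Gray code already achieves write complexity 1 while its natural successor rule reads the whole word to compute a parity, i.e. up to n bits in the worst case:

```python
# Sketch (standard binary reflected Gray code, not the paper's construction):
# the successor flips exactly one bit (write complexity 1), but choosing the
# bit to flip uses the parity of the whole word, i.e. up to n reads.

def gray_successor(bits):
    """bits: list of 0/1, index 0 = least significant. Returns (#reads, #writes)."""
    n = len(bits)
    reads = n                      # read every bit to compute the parity
    if sum(bits) % 2 == 0:
        bits[0] ^= 1               # even parity: flip the lowest bit
    else:
        i = bits.index(1)          # position of lowest set bit (already read)
        if i + 1 < n:
            bits[i + 1] ^= 1       # flip the bit just above it
        else:
            bits[i] ^= 1           # wrap around at the last codeword
    return reads, 1                # exactly one bit written

# Walk the full cycle for n = 4 and confirm all 2^n words are distinct
# while each step writes a single bit.
n = 4
word, seen = [0] * n, set()
for _ in range(2 ** n):
    seen.add(tuple(word))
    gray_successor(word)
assert len(seen) == 2 ** n
print("visited all", len(seen), "codewords with one bit written per step")
```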
Towards the Efficient Generation of Gray Codes in the Bitprobe Model
We examine the problem of representing integers modulo L so that both increment and decrement operations can be performed efficiently. This problem is studied in the bitprobe model, where the complexity of the underlying problem is measured by the number of bit operations performed on the data structure. In this thesis, we are primarily interested in constructing space-optimal data structures. That is, we would like to use exactly n bits to represent integers modulo 2^n. Brodal et al. gave such a data structure, which requires n-1 bit reads and 3 bit writes, in the worst case, to perform increment and decrement operations. We provide several improvements to their data structure. First, we give a data structure that requires n-1 bit reads and 2 bit writes, in the worst case, to perform increment and decrement operations. Then, we refine this result to obtain a data structure that requires n-1 bit reads and a single bit write to perform both operations. This disproves the conjecture that, when a space-optimal data structure uses only 1 bit write to perform these operations, every bit in the data structure must be inspected in the worst case.
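To make the bit-probe cost measure concrete, here is a small illustrative harness (not the thesis's data structure) that counts the probes spent by the ordinary binary counter, whose increment and decrement both touch up to n bits in the worst case; this is the baseline that the results above improve on:

```python
# Sketch of the bit-probe cost model: wrap an n-bit array and count how many
# bit reads/writes the ordinary binary counter spends on increment/decrement.
# Illustrative baseline only; not the space-optimal structure from the thesis.

class ProbeCounter:
    def __init__(self, n):
        self.bits = [0] * n   # index 0 = least significant bit
        self.reads = 0
        self.writes = 0

    def _read(self, i):
        self.reads += 1
        return self.bits[i]

    def _write(self, i, b):
        self.writes += 1
        self.bits[i] = b

    def increment(self):
        # Standard ripple-carry: flip trailing 1s to 0, then the first 0 to 1.
        for i in range(len(self.bits)):
            if self._read(i) == 0:
                self._write(i, 1)
                return
            self._write(i, 0)

    def decrement(self):
        # Mirror image: flip trailing 0s to 1, then the first 1 to 0.
        for i in range(len(self.bits)):
            if self._read(i) == 1:
                self._write(i, 0)
                return
            self._write(i, 1)

n = 8
c = ProbeCounter(n)
for _ in range(2 ** n - 1):
    c.increment()              # counter now holds 2^n - 1 (all ones)
c.reads = c.writes = 0
c.increment()                  # wraps to zero: the worst-case step
print(c.reads, c.writes)       # n reads and n writes for this step
```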
petitRADTRANS: a Python radiative transfer package for exoplanet characterization and retrieval
We present the easy-to-use, publicly available, Python package petitRADTRANS,
built for the spectral characterization of exoplanet atmospheres. The code is
fast, accurate, and versatile; it can calculate both transmission and emission
spectra within a few seconds at low resolution ($\lambda/\Delta\lambda = 1000$;
correlated-k method) and high resolution ($\lambda/\Delta\lambda = 10^6$;
line-by-line method), using only a few lines of input instruction. The somewhat
slower correlated-k method is used at low resolution because it is more
accurate than methods such as opacity sampling. Clouds can be included and
treated using wavelength-dependent power law opacities, or by using optical
constants of real condensates, specifying either the cloud particle size, or
the atmospheric mixing and particle settling strength. Opacities of amorphous
or crystalline, spherical or irregularly-shaped cloud particles are available.
The line opacity database spans temperatures between 80 and 3000 K, allowing
fluxes of objects such as terrestrial planets, super-Earths, Neptunes, or hot
Jupiters to be modelled, if their atmospheres are hydrogen-dominated. Higher
temperature points and species will be added in the future, also allowing the
class of ultra-hot Jupiters, with equilibrium temperatures $\gtrsim 2000$ K, to
be modelled. Radiative transfer results were tested by cross-verifying the low- and
high-resolution implementation of petitRADTRANS, and benchmarked with the
petitCODE, which itself is also benchmarked to the ATMO and Exo-REM codes. We
successfully carried out test retrievals of synthetic JWST emission and
transmission spectra (for the hot Jupiter TrES-4b, which has a $T_{\rm eq}$ of
1800 K). The code is publicly available at
http://gitlab.com/mauricemolli/petitRADTRANS, and its documentation can be
found at https://petitradtrans.readthedocs.io.
Comment: 17 pages, 7 figures, published in A&A
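For orientation, the sketch below loosely follows the getting-started pattern in the petitRADTRANS documentation linked above (a version-2-style interface); the exact class names, keyword arguments, and opacity species labels depend on the installed version and on which opacity tables have been downloaded, so treat it as an assumption-laden outline rather than verified usage:

```python
# Sketch of a transmission-spectrum calculation, loosely following the
# getting-started example in the petitRADTRANS documentation (v2-style
# interface). Species labels and keyword names vary between versions, and
# the required opacity tables must be downloaded separately.
import numpy as np
from petitRADTRANS import Radtrans

# Low-resolution (correlated-k) opacities for a hydrogen-dominated atmosphere.
atmosphere = Radtrans(line_species=['H2O', 'CO', 'CH4'],
                      rayleigh_species=['H2', 'He'],
                      continuum_opacities=['H2-H2', 'H2-He'],
                      wlen_bords_micron=[0.5, 15.0])

pressures = np.logspace(-6, 2, 100)              # bar, top of atmosphere downward
atmosphere.setup_opa_structure(pressures)

temperature = 1200.0 * np.ones_like(pressures)   # isothermal toy profile, K
mmw = 2.33 * np.ones_like(pressures)             # mean molecular weight
abundances = {s: 1e-3 * np.ones_like(pressures) for s in ['H2O', 'CO', 'CH4']}
abundances['H2'] = 0.74 * np.ones_like(pressures)
abundances['He'] = 0.24 * np.ones_like(pressures)

# Planet parameters (illustrative values, cgs units as in the documentation).
gravity = 10.0 ** 3.0          # surface gravity in cm s^-2
R_pl = 1.3 * 7.1492e9          # planet radius in cm (~1.3 Jupiter radii)
P0 = 0.01                      # reference pressure in bar

atmosphere.calc_transm(temperature, abundances, gravity, mmw,
                       R_pl=R_pl, P0_bar=P0)
# atmosphere.transm_rad then holds the wavelength-dependent transit radius.
```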
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Confidence limits of evolutionary synthesis models. IV. Moving forward to a probabilistic formulation
Synthesis models predict the integrated properties of stellar populations.
Several problems exist in this field, mostly related to the fact that
integrated properties are distributed. To date, this aspect has been either
ignored (as in standard synthesis models, which are inherently deterministic)
or interpreted phenomenologically (as in Monte Carlo simulations, which
describe distributed properties rather than explain them). We approach
population synthesis as a problem in probability theory, in which stellar
luminosities are random variables extracted from the stellar luminosity
distribution function (sLDF). We derive the population LDF (pLDF) for clusters
of any size from the sLDF, obtaining the scale relations that link the sLDF to
the pLDF. We recover the predictions of standard synthesis models, which are
shown to compute the mean of the sLDF. We provide diagnostic diagrams and a
simplified recipe for testing the statistical richness of observed clusters,
thereby assessing whether standard synthesis models can be safely used or a
statistical treatment is mandatory. We also recover the predictions of Monte
Carlo simulations, with the additional bonus of being able to interpret them in
mathematical and physical terms. We give examples of problems that can be
addressed through our probabilistic formalism. Though still under development,
ours is a powerful approach to population synthesis. In an era of resolved
observations and pipelined analyses of large surveys, this paper is offered as
a signpost in the field of stellar populations.
Comment: Accepted by A&A. Substantially modified with respect to the 1st
draft. 26 pages, 14 figures
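As a toy numerical illustration of the sLDF-to-pLDF scale relation described above (using an arbitrary heavy-tailed stand-in distribution, not the physical sLDF of the paper), the sketch below shows how the total-luminosity distribution of an N-star cluster concentrates around N times the sLDF mean as N grows, which is the regime in which deterministic synthesis models are safe:

```python
# Toy Monte Carlo illustration of the sLDF -> pLDF relation.
# The stellar luminosity distribution function (sLDF) used here is an
# arbitrary heavy-tailed stand-in (a Pareto law), NOT the paper's sLDF.
import numpy as np

rng = np.random.default_rng(0)

def sample_sldf(size):
    """Draw stellar luminosities from a toy power-law sLDF (arbitrary units)."""
    return 1.0 + rng.pareto(a=2.5, size=size)

# Monte Carlo estimate of the mean of the toy sLDF.
mean_l = sample_sldf(1_000_000).mean()

for n_stars in (10, 100, 1000):
    # pLDF: distribution of the total luminosity of clusters with n_stars stars.
    totals = sample_sldf((10_000, n_stars)).sum(axis=1)
    # A deterministic synthesis model would return n_stars * <L>; the relative
    # scatter of the pLDF around that value shrinks roughly as 1/sqrt(n_stars).
    rel_scatter = totals.std() / totals.mean()
    ratio = totals.mean() / (n_stars * mean_l)
    print(f"N = {n_stars:5d}: mean / (N * <L>) = {ratio:.3f}, "
          f"relative scatter = {rel_scatter:.3f}")
```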