Tests of fit for the logarithmic distribution
Smooth tests for the logarithmic distribution are compared with three other tests: the first, due to Epps, is based on the probability generating function; the second is the Anderson-Darling test; and the third, due to Klar, is based on the empirical integrated distribution function. All of these tests have substantially better power than the traditional Pearson-Fisher X2 test of fit for the logarithmic distribution, yet that traditional chi-squared test is the only logarithmic test of fit commonly applied by ecologists and other scientists.
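As a concrete illustration of the baseline the abstract criticizes, the following sketch applies the Pearson-Fisher chi-squared test of fit to log-series data; the sample size, number of cells, and true parameter are hypothetical choices, not taken from the paper.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

rng = np.random.default_rng(0)
p_true = 0.6
sample = stats.logser.rvs(p_true, size=500, random_state=rng)

# Fit p by maximum likelihood: the log-series mean is -p / ((1-p) * log(1-p)),
# so solve for the p whose theoretical mean matches the sample mean.
xbar = sample.mean()
p_hat = brentq(lambda p: -p / ((1 - p) * np.log(1 - p)) - xbar, 1e-6, 1 - 1e-6)

# Pool observations into cells {1}, ..., {K-1} and a tail cell {>= K}.
K = 6
observed = np.array([(sample == k).sum() for k in range(1, K)] + [(sample >= K).sum()])
probs = np.array([stats.logser.pmf(k, p_hat) for k in range(1, K)])
probs = np.append(probs, 1.0 - probs.sum())
expected = len(sample) * probs

# Pearson-Fisher X^2: degrees of freedom = cells - 1 - (parameters estimated).
x2 = ((observed - expected) ** 2 / expected).sum()
pval = stats.chi2.sf(x2, df=K - 1 - 1)
```

The power loss the abstract refers to comes largely from the pooling step: information about where in each cell the observations fall is discarded, which the smooth, Epps, Anderson-Darling, and Klar tests avoid.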
Tracking Information Flow through the Environment: Simple Cases of Stigmergy
Recent work in sensor evolution aims at studying the perception-action loop in a formalized information-theoretic manner. By treating sensors as extracting information and actuators as having the capability to "imprint" information on the environment, we can view agents as creating, maintaining and making use of various information flows. In our paper we study the perception-action loop of agents using Shannon information flows. We use information theory to track and reveal the important relationships between agents and their environment. For example, we provide an information-theoretic characterization of stigmergy and evolve finite-state automata as agent controllers to engage in stigmergic communication. Our analysis of the evolved automata and the information flow provides insight into how evolution organizes sensory information acquisition, implicit internal and external memory, processing and action selection.
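A minimal sketch of the kind of measurement involved: the mutual information between what an agent writes into the environment and what its sensor later reads back, estimated from discrete sequences with a plug-in estimator. The toy "stigmergic channel" (a stored mark read back with 10% corruption) is an assumption for illustration, not the paper's automata setup.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits from two discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    xs, ys = np.unique(x), np.unique(y)
    joint = np.zeros((len(xs), len(ys)))
    for i, xv in enumerate(xs):
        for j, yv in enumerate(ys):
            joint[i, j] = np.mean((x == xv) & (y == yv))
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Toy stigmergic channel: the agent "imprints" a mark on the environment,
# the environment stores it, and a later sensor reading is a noisy copy.
rng = np.random.default_rng(1)
marks = rng.integers(0, 2, size=10_000)        # actions imprinted on the environment
noise = rng.random(10_000) < 0.1               # 10% of stored marks get corrupted
readings = np.where(noise, 1 - marks, marks)   # later sensor readings

mi = mutual_information(marks, readings)       # roughly 1 - H(0.1), i.e. ~0.53 bits
```

A nonzero value of this action-to-future-sensor information flow, routed through the environment rather than through the agent's internal state, is the kind of signature of stigmergy the abstract describes.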
An Opportunistic Error Correction Layer for OFDM Systems
In this paper, we propose a novel cross-layer scheme to lower the power consumption of ADCs in OFDM systems, based on resolution-adaptive ADCs and Fountain codes. The ADC in a receiver can consume up to 50% of the total baseband energy. The key idea of the proposed system is that the dynamic range of the ADCs can be reduced by discarding the packets transmitted over 'bad' subcarriers; correspondingly, the power consumption in the ADCs can be reduced. In addition, the new system does not process all packets but only the surviving ones. This new error correction layer does not require perfect channel knowledge, so it can be used in a realistic system where the channel is estimated. With this approach, more than 70% of the energy consumption in the ADC can be saved compared with a conventional IEEE 802.11a WLAN system under the same channel conditions and throughput. Moreover, to reduce the overhead of Fountain codes, we apply message passing and Gaussian elimination in the decoder; in this way, the overhead is 3% for a small block size (i.e. 500 packets). Together, both methods result in an efficient system with low delay.
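To make the Gaussian-elimination side of the decoder concrete, here is a sketch of a fountain-style erasure decoder over GF(2). It uses dense random combining vectors rather than the sparse degree distributions of practical Fountain codes (which enable the cheaper message-passing/peeling stage the abstract mentions); packet size and block length are illustrative assumptions.

```python
import numpy as np

def gf2_solve(G, coded, k):
    """Recover k source packets by Gaussian elimination over GF(2),
    or return None if the received combining matrix G is rank deficient."""
    A = np.hstack([G % 2, coded % 2]).astype(np.uint8)
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            return None                      # need more coded packets
        A[[row, pivot]] = A[[pivot, row]]    # swap pivot row into place
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]               # eliminate col from other rows
        row += 1
    return A[:k, k:]

rng = np.random.default_rng(2)
k = 20                                       # source packets per block
source = rng.integers(0, 2, size=(k, 8))     # toy 8-bit packets

# Receive coded packets (random XOR combinations) until decoding succeeds.
G_rows, coded_rows, received = [], [], 0
out = None
while out is None:
    g = rng.integers(0, 2, size=k)
    G_rows.append(g)
    coded_rows.append(g @ source % 2)
    received += 1
    out = gf2_solve(np.array(G_rows), np.array(coded_rows), k)

overhead = received / k - 1                  # extra packets beyond k, as a fraction
```

For dense random codes the expected overhead is small and independent of block length, which is consistent with the low overhead the abstract reports once elimination is added to message passing.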
A guided tour of asynchronous cellular automata
Research on asynchronous cellular automata has received a great deal of attention in recent years and has grown into a thriving field. We survey the recent research carried out on this topic and present a broad state of the art in which computing and modelling issues are both represented. Comment: To appear in the Journal of Cellular Automata
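For readers unfamiliar with the object of study, a minimal sketch of a fully asynchronous elementary cellular automaton: instead of updating every cell simultaneously, one uniformly chosen cell is updated per step. The rule number, lattice size, and step count are arbitrary choices for illustration.

```python
import numpy as np

def eca_step_async(state, rule, rng):
    """Update one uniformly chosen cell of an elementary CA (periodic boundary)."""
    i = rng.integers(len(state))
    left, right = state[i - 1], state[(i + 1) % len(state)]
    neighborhood = 4 * left + 2 * state[i] + right   # 3-bit neighborhood index
    new = state.copy()
    new[i] = (rule >> neighborhood) & 1              # look up the rule table bit
    return new

rng = np.random.default_rng(3)
state = rng.integers(0, 2, size=64)
for _ in range(1000):            # fully asynchronous: one random cell per step
    state = eca_step_async(state, rule=110, rng=rng)
```

Many rules behave qualitatively differently under this update scheme than under the classical synchronous one, which is one of the central themes of the surveyed literature.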
Deep Reinforcement Learning for Resource Management in Network Slicing
Network slicing has emerged as a new business opportunity for operators, allowing them to sell customized slices to various tenants at different prices. In order to provide better-performing and cost-efficient services, network slicing involves challenging technical issues and urgently calls for intelligent innovations that make resource management consistent with users' activities per slice. In that regard, deep reinforcement learning (DRL), which learns to interact with the environment by trying alternative actions and reinforcing those that produce more rewarding consequences, is regarded as a promising solution. In this paper, after briefly reviewing the fundamental concepts of DRL, we investigate its application to some typical resource management scenarios for network slicing, including radio resource slicing and priority-based core network slicing, and demonstrate the advantage of DRL over several competing schemes through extensive simulations. Finally, we also discuss the possible challenges of applying DRL to network slicing from a general perspective. Comment: The manuscript has been accepted by IEEE Access in Nov. 201
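The try-and-reinforce loop the abstract describes can be sketched with tabular Q-learning on a toy slicing problem: split a fixed bandwidth budget between two slices with random demand, rewarding served traffic. This is a single-state, table-based stand-in for the paper's deep RL agents; the demand model and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
B = 4                                   # bandwidth units to split between two slices
Q = np.zeros(B + 1)                     # Q[a]: value of giving a units to slice 1

def reward(a, rng):
    # Hypothetical traffic model: slice 1 usually needs ~3 units, slice 2 ~1.
    d1, d2 = rng.poisson(3), rng.poisson(1)
    return min(a, d1) + min(B - a, d2)  # total traffic served across both slices

alpha, eps = 0.1, 0.1                   # learning rate, exploration probability
for _ in range(5000):
    # Epsilon-greedy: mostly exploit the best-known split, sometimes explore.
    a = rng.integers(B + 1) if rng.random() < eps else int(Q.argmax())
    Q[a] += alpha * (reward(a, rng) - Q[a])

best_split = int(Q.argmax())            # learned allocation for slice 1
```

Even this bandit-sized example shows the mechanism: allocations that track the per-slice demand pattern accumulate higher value estimates, which is what the paper scales up with deep networks and richer state.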
Facticity as the amount of self-descriptive information in a data set
Using the theory of Kolmogorov complexity, the notion of the facticity φ(x) of a string is defined as the amount of self-descriptive information it contains. It is proved that (under reasonable assumptions: the existence of an empty machine and the availability of a faithful index) facticity is definite, i.e. random strings have facticity 0 and compressible strings satisfy 0 < φ(x) < 1/2 |x| + O(1). Consequently, facticity objectively measures the tension in a data set between structural and ad-hoc information. For binary strings there is a so-called facticity threshold that depends on their entropy. Strings with facticity above this threshold have no optimal stochastic model and are essentially computational. The shape of the facticity-versus-entropy plot coincides with the well-known sawtooth curves observed in complex systems. The notion of factic processes is also discussed. This approach overcomes problems with earlier proposals that use two-part codes to define the meaningfulness or usefulness of a data set. Comment: 10 pages, 2 figures
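Kolmogorov complexity is uncomputable, but the two extremes of the facticity picture can be illustrated with a crude, standard stand-in: compressed length under a practical compressor. This is only an analogy to the paper's definitions, not an implementation of φ(x).

```python
import os
import zlib

def compressed_len(data: bytes) -> int:
    """Length of data under zlib, a rough upper-bound proxy for complexity."""
    return len(zlib.compress(data, level=9))

random_part = os.urandom(1000)   # incompressible: no structural part, facticity ~ 0
structured = b"ab" * 500         # fully regular: almost all "model", no ad-hoc part

c_random = compressed_len(random_part)     # close to (or above) 1000 bytes
c_structured = compressed_len(structured)  # a few dozen bytes at most
```

Interesting strings sit between these extremes, with a nontrivial split between the structural (model) part and the ad-hoc (noise) part of their shortest description, and that split is what facticity quantifies.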
Problem-driven scenario generation: an analytical approach for stochastic programs with tail risk measure
Scenario generation is the construction of a discrete random vector to represent parameters of uncertain value in a stochastic program. Most approaches to scenario generation are distribution-driven, that is, they attempt to construct a random vector which captures the uncertainty well in a probabilistic sense. On the other hand, a problem-driven approach may be able to exploit the structure of a problem to provide a more concise representation of the uncertainty.
In this paper we propose an analytic approach to problem-driven scenario generation. This approach applies to stochastic programs where a tail risk measure, such as conditional value-at-risk, is applied to a loss function. Since tail risk measures only depend on the upper tail of a distribution, standard methods of scenario generation, which typically spread their scenarios evenly across the support of the random vector, struggle to adequately represent tail risk. Our scenario generation approach works by targeting the construction of scenarios in the regions of the distribution corresponding to the tails of the loss distribution. We provide conditions under which our approach is consistent with sampling, and as a proof of concept demonstrate how our approach could be applied to two classes of problem, namely network design and portfolio selection. Numerical tests on the portfolio selection problem demonstrate that our approach yields better and more stable solutions compared to standard Monte Carlo sampling.
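The weakness of evenly spread scenarios is easy to see numerically: estimating conditional value-at-risk from Monte Carlo scenarios only uses the worst (1 - α) fraction of them. The two-asset return model and portfolio weights below are hypothetical, chosen just to illustrate the computation.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha = 0.95
# Hypothetical joint return model for two assets (means, covariance are made up).
returns = rng.multivariate_normal(mean=[0.05, 0.02],
                                  cov=[[0.04, 0.01], [0.01, 0.02]],
                                  size=10_000)
w = np.array([0.6, 0.4])               # fixed portfolio weights
loss = -(returns @ w)                  # loss = negative portfolio return

# CVaR_alpha: mean loss over the worst (1 - alpha) fraction of scenarios.
var = np.quantile(loss, alpha)         # value-at-risk (the tail cutoff)
cvar = loss[loss >= var].mean()

# Only about (1 - alpha) * N scenarios actually inform the CVaR estimate --
# the motivation for concentrating scenario generation in the tail.
tail_fraction = float((loss >= var).mean())
```

With α = 0.95, roughly 95% of uniformly sampled scenarios contribute nothing to the tail risk measure, which is the inefficiency the paper's problem-driven construction is designed to avoid.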