A Computational Study Of The Role Of Spatial Receptive Field Structure In Processing Natural And Non-Natural Scenes
The center-surround receptive field structure, ubiquitous in the visual system, is hypothesized to be evolutionarily advantageous in image processing tasks. We address the potential functional benefits and shortcomings of spatial localization and center-surround antagonism in the context of an integrate-and-fire neuronal network model with image-based forcing. Utilizing the sparsity of natural scenes, we derive a compressive-sensing framework for reconstructing input images from evoked neuronal firing rates. We investigate how the accuracy of input encoding depends on the receptive field architecture, and demonstrate that spatial localization in visual stimulus sampling facilitates marked improvements in natural scene processing beyond uniformly-random excitatory connectivity. However, for specific classes of images, we show that the spatial localization inherent in physiological receptive fields, combined with information loss through nonlinear neuronal network dynamics, may underlie common optical illusions, giving a novel explanation for their manifestation. In the context of signal processing, we expect this work to suggest new sampling protocols useful for extending conventional compressive sensing theory.
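The sparse-reconstruction step underlying such a compressive-sensing framework can be sketched with a generic solver. The following minimal illustration (not the paper's actual method) recovers a synthetic sparse signal from random linear measurements via orthogonal matching pursuit; all sizes and parameters are assumptions for the demo.

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse vector x from measurements y = A @ x
    via orthogonal matching pursuit (greedy atom selection)."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                 # toy sizes: signal, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x_true, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With enough random measurements relative to the sparsity level, the greedy solver typically recovers the signal exactly, which is the property the abstract's firing-rate-based reconstruction exploits.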
Improved Compressive Sensing Of Natural Scenes Using Localized Random Sampling
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the optimal parameter choice for localized random sampling is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging.
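The sampling procedure described above — pick a random center pixel, then measure nearby pixels with a distance-dependent probability — can be sketched as a measurement-matrix builder. The Gaussian falloff and all parameters below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def localized_random_samples(shape, n_measurements, sigma, rng):
    """Build a CS measurement matrix in which each row sums a localized
    random cluster of pixels around a uniformly chosen center (sketch)."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    rows = []
    for _ in range(n_measurements):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        # include each nearby pixel with a distance-decaying probability
        # (Gaussian falloff is an assumed, illustrative choice)
        mask = rng.random((h, w)) < np.exp(-d2 / (2 * sigma ** 2))
        mask[cy, cx] = True            # always keep the center pixel
        rows.append(mask.ravel().astype(float))
    return np.array(rows)

rng = np.random.default_rng(1)
Phi = localized_random_samples((16, 16), 40, sigma=2.0, rng=rng)
print(Phi.shape)   # (40, 256)
```

Each row of `Phi` then plays the role of one localized measurement in a standard CS reconstruction; sweeping `sigma` corresponds to the sampling-parameter space the abstract describes.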
Statistical Searches for Microlensing Events in Large, Non-Uniformly Sampled Time-Domain Surveys: A Test Using Palomar Transient Factory Data
Many photometric time-domain surveys are driven by specific goals, such as
searches for supernovae or transiting exoplanets, which set the cadence with
which fields are re-imaged. In the case of the Palomar Transient Factory (PTF),
several sub-surveys are conducted in parallel, leading to non-uniform sampling
over its footprint. While the median PTF field has been imaged 40 times in \textit{R}-band,
some fields have been observed more than 100 times. We use PTF data to
study the trade-off between searching for microlensing events in a survey whose
footprint is much larger than that of typical microlensing searches, but with
far-from-optimal time sampling. To examine the probability that microlensing
events can be recovered in these data, we test statistics used on uniformly
sampled data to identify variables and transients. We find that the von Neumann
ratio performs best for identifying simulated microlensing events in our data.
We develop a selection method using this statistic and apply it to data from
fields with at least 10 \textit{R}-band observations, searching their light curves and
uncovering three candidate microlensing events. We lack simultaneous,
multi-color photometry to confirm these as microlensing events. However, their
number is consistent with predictions for the event rate in the PTF footprint
over the survey's three years of operations, as estimated from near-field
microlensing models. This work can help constrain all-sky event rate
predictions and tests microlensing signal recovery in large data sets, which
will be useful to future time-domain surveys, such as that planned with the
Large Synoptic Survey Telescope.
Comment: 13 pages, 14 figures; accepted for publication in ApJ; fixed author list
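The von Neumann ratio mentioned above (mean squared successive difference divided by the variance) is straightforward to compute. A minimal sketch on toy light curves, with an illustrative Gaussian bump standing in for a microlensing magnification (not a fitted Paczynski model):

```python
import numpy as np

def von_neumann_ratio(mag):
    """Von Neumann ratio: mean squared successive difference over variance.
    Smoothly varying light curves (e.g. microlensing events) give small
    values; pure white noise gives values near 2."""
    diffs = np.diff(mag)
    return np.mean(diffs ** 2) / np.var(mag)

rng = np.random.default_rng(2)
t = np.linspace(-3, 3, 200)
noise = rng.normal(0, 0.05, t.size)
flat = 15.0 + noise                               # constant-brightness star
event = 15.0 - 1.5 * np.exp(-t ** 2) + noise      # toy smooth brightening
print(von_neumann_ratio(flat) > von_neumann_ratio(event))   # True
```

Thresholding on this statistic is one way to flag candidate events in sparsely, non-uniformly sampled light curves, which is the selection idea the abstract tests.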
High-resolution distributed sampling of bandlimited fields with low-precision sensors
The problem of sampling a discrete-time sequence of spatially bandlimited
fields with a bounded dynamic range, in a distributed,
communication-constrained, processing environment is addressed. A central unit,
having access to the data gathered by a dense network of fixed-precision
sensors, operating under stringent inter-node communication constraints, is
required to reconstruct the field snapshots to maximum accuracy. Both
deterministic and stochastic field models are considered. For stochastic
fields, results are established in the almost-sure sense. The feasibility of
having a flexible tradeoff between the oversampling rate (sensor density) and
the analog-to-digital converter (ADC) precision, while achieving an exponential
accuracy in the number of bits per Nyquist-interval per snapshot is
demonstrated. This exposes an underlying "conservation of bits" principle:
the bit-budget per Nyquist-interval per snapshot (the rate) can be distributed
along the amplitude axis (sensor-precision) and space (sensor density) in an
almost arbitrary discrete-valued manner, while retaining the same (exponential)
distortion-rate characteristics. Achievable information scaling laws for field
reconstruction over a bounded region are also derived: with N one-bit sensors
per Nyquist-interval and suitably scaled numbers of Nyquist-intervals, total
network bitrate, and per-sensor bitrate, the maximum pointwise distortion goes
to zero as N grows. This is shown to be possible
with only nearest-neighbor communication, distributed coding, and appropriate
interpolation algorithms. For a fixed, nonzero target distortion, the number of
fixed-precision sensors and the network rate needed are always finite.
Comment: 17 pages, 6 figures; paper withdrawn from IEEE Transactions on Signal Processing and re-submitted to the IEEE Transactions on Information Theory
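The precision-versus-density tradeoff can be illustrated with a toy subtractively dithered quantizer: averaging many low-precision sensors reduces distortion as density grows. This naive averaging only yields polynomial decay, not the exponential distortion-rate behavior the paper obtains with distributed coding; all parameters below are illustrative assumptions.

```python
import numpy as np

def dithered_quantize(x, bits, rng):
    """Subtractively dithered uniform quantizer on [0, 1): the
    reconstruction error is signal-independent and zero-mean."""
    step = 1.0 / 2 ** bits
    dither = rng.uniform(-step / 2, step / 2, size=np.shape(x))
    q = np.floor((x + dither) / step) * step + step / 2
    return q - dither

rng = np.random.default_rng(3)
x = 0.3718   # toy field amplitude seen by co-located sensors

def avg_err(n_sensors, bits, trials=4000):
    """Mean absolute error of averaging n_sensors quantized readings."""
    errs = [abs(np.mean(dithered_quantize(np.full(n_sensors, x), bits, rng)) - x)
            for _ in range(trials)]
    return float(np.mean(errs))

# dense low-precision sensing vs a single low-precision sensor
print(avg_err(16, 1), avg_err(1, 1))
```

The 16-sensor one-bit array is markedly more accurate than a single one-bit sensor, illustrating how bit budget can be moved from the amplitude axis (ADC precision) to space (sensor density).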
Designing and testing inflationary models with Bayesian networks
Even simple inflationary scenarios have many free parameters. Beyond the
variables appearing in the inflationary action, these include dynamical initial
conditions, the number of fields, and couplings to other sectors. These
quantities are often ignored but cosmological observables can depend on the
unknown parameters. We use Bayesian networks to account for a large set of
inflationary parameters, deriving generative models for the primordial spectra
that are conditioned on a hierarchical set of prior probabilities describing
the initial conditions, reheating physics, and other free parameters. We use
$N_f$-quadratic inflation as an illustrative example, finding that the number
of e-folds between horizon exit for the pivot scale and the end of
inflation is typically the most important parameter, even when the number of
fields, their masses and initial conditions are unknown, along with possible
conditional dependencies between these parameters.
Comment: 24 pages, 9 figures, 1 table; discussion updated
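A hierarchical prior of the kind described — unknown number of fields, masses, initial conditions, and e-folds — can be sketched as a toy generative model. The prior ranges and the quadratic-slow-roll estimate of the spectral index below are illustrative assumptions, not the paper's actual priors or pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_prior():
    """Draw one realization from a toy hierarchical inflationary prior
    (all ranges are illustrative assumptions)."""
    n_fields = int(rng.integers(1, 20))            # number of inflaton fields
    log_masses = rng.uniform(-12, -8, n_fields)    # toy log10 mass range
    n_efolds = rng.uniform(40, 70)                 # e-folds after pivot exit
    # toy observable: quadratic-potential slow-roll estimate of the
    # spectral index, which depends on the e-folds, not the field content
    n_s = 1.0 - 2.0 / n_efolds
    return {"n_fields": n_fields, "log_masses": log_masses,
            "n_efolds": n_efolds, "n_s": n_s}

draws = [sample_prior() for _ in range(5000)]
ns = np.array([d["n_s"] for d in draws])
print(ns.min(), ns.max())
```

Even in this toy model, the spread of the predicted spectral index is controlled entirely by the e-folds prior, echoing the abstract's finding that the number of e-folds is typically the most important parameter.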
Security considerations for Galois non-dual RLWE families
We explore further the hardness of the non-dual discrete variant of the
Ring-LWE problem for various number rings, give improved attacks for certain
rings satisfying some additional assumptions, construct a new family of
vulnerable Galois number fields, and apply some number theoretic results on
Gauss sums to deduce the likely failure of these attacks for 2-power cyclotomic
rings and unramified moduli.
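For context, a non-dual RLWE sample in a power-of-two cyclotomic ring can be sketched in a few lines. The parameters below (n = 16, q = 257, Gaussian width 3.2) are toy values for illustration, far below cryptographic size.

```python
import numpy as np

def polymul_mod(a, b, q):
    """Multiply polynomials modulo x^n + 1 and q (negacyclic convolution)."""
    n = len(a)
    full = np.convolve(a, b)
    res = full[:n].copy()
    res[: len(full) - n] -= full[n:]   # x^n = -1 wraps high terms negatively
    return res % q

def rlwe_sample(s, q, rng, sigma=3.2):
    """One non-dual RLWE sample (a, b = a*s + e mod q) in Z_q[x]/(x^n + 1),
    with a rounded-Gaussian error; illustrative toy parameters only."""
    n = len(s)
    a = rng.integers(0, q, n)
    e = np.rint(rng.normal(0, sigma, n)).astype(int)
    b = (polymul_mod(a, s, q) + e) % q
    return a, b

rng = np.random.default_rng(5)
n, q = 16, 257
s = rng.integers(0, q, n)
a, b = rlwe_sample(s, q, rng)
# sanity check: b - a*s mod q recovers the small, centered error
err = (b - polymul_mod(a, s, q)) % q
err = np.where(err > q // 2, err - q, err)
print(np.all(np.abs(err) < 30))
```

The attacks the abstract discusses exploit ring structure (e.g. homomorphisms to small quotient rings) to distinguish such `(a, b)` pairs from uniform; the Gauss-sum results indicate why that distinguishing advantage likely vanishes for 2-power cyclotomic rings with unramified moduli.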