Automated Discharging Arguments for Density Problems in Grids
Discharging arguments demonstrate a connection between local structure and
global averages, which makes them an effective tool for proving lower bounds on
the density of special sets in infinite grids. However, the minimum density of
an identifying code in the hexagonal grid remains open, with a gap between the
best known upper and lower bounds. We present a new, experimental framework for producing discharging
arguments using an algorithm. This algorithm replaces the lengthy case analysis
of human-written discharging arguments with a linear program that produces the
best possible lower bound using the specified set of discharging rules. We use
this framework to present an improved lower bound on
the density of an identifying code in the hexagonal grid, and also find several
sharp lower bounds for variations on identifying codes in the hexagonal,
square, and triangular grids.
Comment: This is an extended abstract, with 10 pages, 2 appendices, 5 tables, and 2 figures.
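The LP step described above can be illustrated on a much simpler density problem. The sketch below is a toy example of our own, not the paper's framework: it bounds the density of a dominating set in the infinite path. Each code vertex starts with charge 1 and sends an amount y to each non-code neighbour; the LP chooses y to maximize the guaranteed final charge lam at every vertex, which is exactly the best lower bound this single rule can prove (here the sharp value 1/3).

```python
from scipy.optimize import linprog

# Variables x = [lam, y]: lam is the guaranteed final charge at every vertex,
# y is the amount a code vertex sends to each non-code neighbour.
# Local configurations on the infinite path give the constraints:
#   non-code vertex, 1 code neighbour:  receives y        -> lam <= y
#   non-code vertex, 2 code neighbours: receives 2y       -> lam <= 2y
#   code vertex, 1 non-code neighbour:  keeps 1 - y       -> lam <= 1 - y
#   code vertex, 2 non-code neighbours: keeps 1 - 2y      -> lam <= 1 - 2y
res = linprog(c=[-1, 0],                       # maximize lam
              A_ub=[[1, -1],
                    [1, -2],
                    [1, 1],
                    [1, 2]],
              b_ub=[0, 0, 1, 1],
              bounds=[(0, None), (0, None)])
print(round(-res.fun, 6))  # 0.333333 -> density of a dominating set >= 1/3
```

The optimum pairs the binding constraints lam <= y and lam <= 1 - 2y, recovering the known density 1/3 of an optimal dominating set in the infinite path without any hand-written case analysis.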
Experimental study of energy-minimizing point configurations on spheres
In this paper we report on massive computer experiments aimed at finding
spherical point configurations that minimize potential energy. We present
experimental evidence for two new universal optima (consisting of 40 points in
10 dimensions and 64 points in 14 dimensions), as well as evidence that there
are no others with at most 64 points. We also describe several other new
polytopes, and we present new geometrical descriptions of some of the known
universal optima.
Comment: 41 pages, 12 figures, to appear in Experimental Mathematics.
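A miniature version of the experiments described above can be run in a few lines: minimize the Coulomb (1/r) energy of n points on the unit sphere by projected gradient descent. The point count, step size, and iteration count below are illustrative choices of our own, not the paper's setup.

```python
import numpy as np

def energy(X):
    """Coulomb energy sum over pairs of 1/|x_i - x_j|."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)
    return (1.0 / D[iu]).sum()

def grad(X):
    """Gradient of the energy: dE/dx_i = -sum_j (x_i - x_j)/|x_i - x_j|^3."""
    diff = X[:, None, :] - X[None, :, :]
    D = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(D, np.inf)                 # ignore self-interaction
    return -(diff / D[:, :, None] ** 3).sum(axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(12, 3))                    # 12 random points...
X /= np.linalg.norm(X, axis=1, keepdims=True)   # ...projected onto the sphere

e0 = energy(X)
for _ in range(1500):
    X -= 0.002 * grad(X)                        # descend...
    X /= np.linalg.norm(X, axis=1, keepdims=True)  # ...and project back
print(energy(X) < e0)  # True: the configuration has relaxed
```

For n = 12 the descent drifts toward the icosahedral configuration, one of the known universal optima; the large-scale experiments in the paper use far more careful local optimization and many random restarts.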
Voronoi-Like grid systems for tall buildings
In the context of innovative patterns for tall buildings, Voronoi tessellation is certainly worthy of interest. It is an irregular biomimetic pattern based on the Voronoi diagram, which derives from the direct observation of natural structures. The paper focuses on the application of this nature-inspired typology to load-resisting systems for tall buildings, investigating the effect of non-regular grids on the global mechanical response of the structure. In particular, the study concentrates on periodic and non-periodic Voronoi tessellations, describing the procedure for generating irregular patterns through parametric modeling and illustrating the homogenization-based approach proposed in the literature for dealing with unconventional patterns. To assess the consistency of preliminary design equations, numerical and analytical results are compared.
Moreover, since the mechanical response of the building strongly depends on the parameters of the microstructure, the paper examines the influence of the grid arrangement on the global lateral stiffness, and therefore on the displacement constraint, which is an essential requirement in the design of tall buildings. To this end, five case studies, accounting for different levels of irregularity and relative density, are generated and analyzed through static and modal analysis in the elastic field. In addition, the paper considers the mechanical response of a pattern with gradually rarefying density to evaluate its applicability to tall buildings. Displacement-based optimizations are carried out to determine the member cross-sections that provide the maximum contribution to restraining deflection with the minimum material weight. The results obtained for all the generated models are compared and discussed to outline a final evaluation of the Voronoi structures.
In addition to the wind loading scenario, the efficiency of the building model with the varying-density Voronoi pattern is tested under seismic ground motion through a response spectrum analysis. The potential applications of Voronoi tessellation to tall buildings are demonstrated both for regions with high wind load conditions and for areas of high seismicity.
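The parametric-generation step described above can be sketched in a few lines: jitter a regular grid of seed points by an irregularity parameter and build the Voronoi diagram from the result. This is only a minimal illustration of the idea; the function name, the jitter model, and the parameter alpha are our assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_panel(nx, ny, alpha, seed=0):
    """Voronoi tessellation of an nx-by-ny panel of seeds, where alpha
    controls irregularity: small alpha stays near-regular, alpha close to 1
    approaches a fully random pattern."""
    rng = np.random.default_rng(seed)
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    pts += alpha * rng.uniform(-0.5, 0.5, size=pts.shape)  # jitter each seed
    return Voronoi(pts)

vor = voronoi_panel(6, 10, alpha=0.3)
print(len(vor.point_region))  # 60 cells, one per seed
```

Each ridge of the resulting diagram would become a structural member of the facade grid; sweeping alpha and the seed density produces the families of case studies the abstract refers to.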
A single-photon sampling architecture for solid-state imaging
Advances in solid-state technology have enabled the development of silicon
photomultiplier sensor arrays capable of sensing individual photons. Combined
with high-frequency time-to-digital converters (TDCs), this technology opens up
the prospect of sensors capable of recording with high accuracy both the time
and location of each detected photon. Such a capability could lead to
significant improvements in imaging accuracy, especially for applications
operating with low photon fluxes such as LiDAR and positron emission
tomography.
The demands placed on on-chip readout circuitry impose stringent trade-offs
between fill factor and spatio-temporal resolution, causing many contemporary
designs to severely underutilize the technology's full potential. Concentrating
on the low photon flux setting, this paper leverages results from group testing
and proposes an architecture for a highly efficient readout of pixels using
only a small number of TDCs, thereby also reducing both cost and power
consumption. The design relies on a multiplexing technique based on binary
interconnection matrices. We provide optimized instances of these matrices for
various sensor parameters and give explicit upper and lower bounds on the
number of TDCs required to uniquely decode a given maximum number of
simultaneous photon arrivals.
To illustrate the strength of the proposed architecture, we note a typical
digitization result of a 120x120 photodiode sensor on a 30 µm × 30 µm pitch with
a 40 ps time resolution and an estimated fill factor of approximately 70%, using
only 161 TDCs. The design guarantees registration and unique recovery of up to
4 simultaneous photon arrivals using a fast decoding algorithm. In a series of
realistic simulations of scintillation events in clinical positron emission
tomography the design was able to recover the spatio-temporal location of 98.6%
of all photons that caused pixel firings.
Comment: 24 pages, 3 figures, 5 tables.
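The group-testing idea behind the readout can be sketched with a toy binary interconnection matrix; the sizes and the random construction here are illustrative stand-ins, not the paper's optimized designs. Each pixel is wired to a subset of TDCs, a burst of simultaneous firings triggers the OR of the corresponding columns, and the classic cover decoder recovers the firing pixels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tdcs, n_pixels = 60, 100                     # toy sizes, not the 161-TDC design
# Interconnection matrix: M[i, j] = 1 means pixel j is wired to TDC i.
M = (rng.random((n_tdcs, n_pixels)) < 0.15).astype(int)

def decode(triggered):
    """Cover decoder from group testing: a pixel is a candidate iff every
    TDC it is wired to was triggered."""
    return [j for j in range(n_pixels)
            if np.all(triggered[M[:, j] == 1] == 1)]

fired = [17, 42]                               # simultaneous photon arrivals
triggered = (M[:, fired].sum(axis=1) > 0).astype(int)  # OR of the fired columns
decoded = decode(triggered)
print(decoded)  # a short candidate list containing pixels 17 and 42
```

If M is d-disjunct (no column is covered by the union of any d others), the decoded list is exactly the set of firing pixels for up to d simultaneous arrivals; the paper's contribution includes optimized matrices and explicit bounds on how few TDC rows such a matrix needs.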
Towards a Mathematical Theory of Super-Resolution
This paper develops a mathematical theory of super-resolution. Broadly
speaking, super-resolution is the problem of recovering the fine details of an
object---the high end of its spectrum---from coarse scale information
only---from samples at the low end of the spectrum. Suppose we have many point
sources at unknown locations in [0,1] and with unknown complex-valued
amplitudes. We only observe Fourier samples of this object up until a frequency
cut-off f_c. We show that one can super-resolve these point sources with
infinite precision---i.e. recover the exact locations and amplitudes---by
solving a simple convex optimization problem, which can essentially be
reformulated as a semidefinite program. This holds provided that the distance
between sources is at least 2/f_c. This result extends to higher dimensions
and other models. In one dimension for instance, it is possible to recover a
piecewise smooth function by resolving the discontinuity points with infinite
precision as well. We also show that the theory and methods are robust to
noise. In particular, in the discrete setting we develop some theoretical
results explaining how the accuracy of the super-resolved signal is expected to
degrade when both the noise level and the {\em super-resolution factor} vary.
Comment: 48 pages, 12 figures.
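For intuition, the measurement model in the abstract is easy to simulate: low-pass Fourier samples of a spike train determine the spikes when they are separated by well more than 2/f_c. The sketch below recovers two well-separated spikes by greedy matching pursuit on a fine grid, which is our own toy substitute for the paper's convex program.

```python
import numpy as np

# Model: x(t) = sum_j a_j delta(t - t_j) on [0, 1), observed only through
# y_k = sum_j a_j exp(-2i*pi*k*t_j) for |k| <= f_c.
f_c = 20
ks = np.arange(-f_c, f_c + 1)
t_true = np.array([0.23, 0.61])          # separation 0.38 > 2/f_c = 0.1
a_true = np.array([1.0, -0.5])
y = (a_true * np.exp(-2j * np.pi * ks[:, None] * t_true)).sum(axis=1)

# Greedy recovery: pick the fine-grid atom most correlated with the residual,
# then least-squares refit the amplitudes on the current support.
grid = np.linspace(0, 1, 2001, endpoint=False)
atoms = np.exp(-2j * np.pi * ks[:, None] * grid[None, :])  # (2*f_c+1, G)

support, resid = [], y.copy()
for _ in range(2):
    support.append(int(np.argmax(np.abs(atoms.conj().T @ resid))))
    A = atoms[:, support]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef

t_hat = np.sort(grid[support])
print(np.round(t_hat, 3))  # close to [0.23, 0.61]
```

The grid-based greedy step only locates spikes to grid precision; the point of the paper is that total-variation minimization, solved as a semidefinite program, recovers the locations exactly and off the grid.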
On location, domination and information retrieval
The thesis is divided into two main branches: identifying and locating-dominating codes, and information retrieval. The former topics are motivated by the aim to locate objects in sensor networks (or other similar applications) and the latter one by the need to retrieve information from memories such as DNA data storage systems. Although motivated by these applications, the study of these topics belongs mainly to discrete mathematics; more specifically, to the fields of coding and graph theory.
The sensor networks are usually represented by graphs where vertices represent the monitored locations and edges the connections between the locations. Moreover, the locations of the sensors are determined by a code. Furthermore, the desired properties of the sensor network are deeply linked with the properties of the underlying code.
The number of errors in reading the data is abundant in DNA data storage systems. In particular, there can occur more errors than a reasonable error-correcting code can handle. However, this problem is somewhat offset by the possibility to obtain multiple approximations of the same information from the data storage. Hence, the information retrieval process can be modelled by Levenshtein's channel model, where a message is sent through multiple noisy channels and multiple outputs are received.
In the first two papers of the thesis, we introduce and study the new concepts of self- and solid-locating-dominating codes as a natural analogy to self-identifying codes with respect to locating-dominating codes. The first paper introduces these new codes and considers them in some graphs such as the Hamming graphs. Then, in the second paper, we broaden our view on the topic by considering graph-theoretical questions. We give optimal codes in multiple different graph classes and some more general results using concepts such as the Dilworth number and graph complements. The third paper focuses on the q-ary Hamming spaces. In particular, we disprove a conjecture proposed by Goddard and Wash related to identifying codes. In the fourth paper, we return to self- and solid-locating-dominating codes and give optimal codes in some graph classes and consider their densities in infinite graphs.
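The identifying-code condition underlying these papers can be checked mechanically on a small graph. In the toy example below, which is our own illustration rather than an example from the thesis, a code C identifies the vertices of a graph if every closed neighbourhood meets C in a nonempty set and no two vertices get the same such set; on the 4-cycle, three code vertices suffice.

```python
# Closed neighbourhoods N[v] of the 4-cycle 0-1-2-3-0 (each vertex plus its
# two neighbours), and a candidate identifying code C.
adj = {0: {3, 0, 1}, 1: {0, 1, 2}, 2: {1, 2, 3}, 3: {2, 3, 0}}
C = {0, 1, 2}

# Identifying-code condition: every vertex's identifier N[v] & C must be
# nonempty, and all identifiers must be pairwise distinct.
ids = {v: frozenset(adj[v] & C) for v in adj}
ok = all(ids[v] for v in adj) and len(set(ids.values())) == len(adj)
print(ok)  # True: {0,1,2} is an identifying code of the 4-cycle
```

Dropping any vertex from C makes two identifiers collide, so three vertices are also necessary here; the density questions in the thesis ask how small such codes can be made in infinite grids and general graph classes.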
In the fifth paper, we consider information retrieval in memories; in particular, Levenshtein's channel model. In the channel model, we transmit some codeword belonging to the binary Hamming space through multiple identical channels. With the help of multiple different outputs, we give a list of codewords which may have been sent. In the paper, we study the number of channels required to have a rather small (constant) list size when the properties of the channels, the code and the dimension of the Hamming space are fixed. In particular, we give an exact relation between the number of channels and the asymptotic value of the maximum list size.
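A minimal sketch of the list-decoding step in Levenshtein's channel model described above, with a hypothetical toy code, channel outputs, and error bound t of our own choosing: the receiver keeps exactly those codewords within Hamming distance t of every channel output.

```python
def hamming(u, v):
    """Hamming distance between two equal-length binary strings."""
    return sum(a != b for a, b in zip(u, v))

code = ["000000", "111000", "000111", "111111"]  # toy code, minimum distance 3
sent = "111000"
outputs = ["110000", "111100", "101000"]         # three channels, <= 1 flip each
t = 1                                            # per-channel error bound

# List decoder: a codeword survives iff it is consistent with every output.
candidates = [c for c in code
              if all(hamming(c, y) <= t for y in outputs)]
print(candidates)  # ['111000'] -- here the list collapses to the sent word
```

With a single output the list could be larger; the thesis quantifies how many channels are needed before the worst-case list size drops to a small constant for a fixed code and dimension.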