Dynamics of trimming the content of face representations for categorization in the brain
To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion, and we report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions and dynamically converges onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, leaving only the detailed information important for perceptual decisions over the P300.
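As a rough illustration of the classification image technique mentioned above, the following sketch simulates a template-matching observer in pixel noise and recovers its diagnostic feature by reverse correlation. All sizes, templates and decision rules are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a simulated observer is sensitive to a small bright
# region in noise, standing in for a diagnostic facial feature.
size = 16
template = np.zeros((size, size))
template[4:7, 4:7] = 1.0            # the feature driving the observer's decisions

n_trials = 5000
noise = rng.normal(size=(n_trials, size, size))
# Observer says "present" when the noise correlates positively with the template.
responses = (noise * template).sum(axis=(1, 2)) > 0

# Classification image: mean noise on "yes" trials minus mean noise on "no" trials.
ci = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)

# The recovered image peaks where the template is non-zero.
print(ci[4:7, 4:7].mean() > ci[10:, 10:].mean())
```

With enough trials, the difference image recovers the template: reverse correlation reveals which pixels the (simulated) observer actually used.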
A linear-time benchmarking tool for generalized surface codes
Quantum information processors need to be protected against errors and faults. One of the most widely considered fault-tolerant architectures is based on surface codes. While the general principles of these codes are well understood and basic code properties such as minimum distance and rate are easy to characterize, a code's average performance depends on the detailed geometric layout of the qubits. To date, optimizing a surface code architecture and comparing different geometric layouts has relied on costly numerical simulations. Here, we propose a benchmarking algorithm for simulating the performance of surface codes, and generalizations thereof, that runs in linear time. We implemented this algorithm in software that generates performance reports and allows different architectures to be compared quickly.
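The paper's linear-time algorithm itself is not reproduced in the abstract. As a hint of what "comparing layouts" means in practice, the following sketch uses the standard empirical scaling ansatz for surface-code logical error rates; the prefactor and threshold are placeholder values, not results from the paper:

```python
# Illustrative only: a common back-of-envelope comparison of surface-code
# layouts uses the empirical ansatz
#     p_L ~ A * (p / p_th)^((d+1)/2),
# where d is the code distance, p the physical error rate, p_th the threshold.
# A and p_th below are assumed placeholders, not values from the paper.
def logical_error_rate(p, d, p_th=0.01, A=0.1):
    return A * (p / p_th) ** ((d + 1) // 2)

# Compare hypothetical layouts achieving different distances at fixed p.
for d in (3, 5, 7):
    print(d, logical_error_rate(1e-3, d))
```

The point of a fast benchmarking tool is to replace this kind of crude ansatz with layout-specific estimates, without paying for full Monte Carlo simulation.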
Qudit Colour Codes and Gauge Colour Codes in All Spatial Dimensions
Two-level quantum systems, qubits, are not the only basis for quantum
computation. Advantages exist in using qudits, d-level quantum systems, as the
basic carrier of quantum information. We show that color codes, a class of
topological quantum codes with remarkable transversality properties, can be
generalized to the qudit paradigm. In recent developments it was found that in
three spatial dimensions a qubit color code can support a transversal
non-Clifford gate, and that in higher spatial dimensions additional
non-Clifford gates can be found, saturating Bravyi and K\"onig's bound [Phys.
Rev. Lett. 110, 170503 (2013)]. Furthermore, by using gauge fixing techniques,
an effective set of Clifford gates can be achieved, removing the need for state
distillation. We show that the qudit color code can support the qudit analogues
of these gates, and show that in higher spatial dimensions a color code can
support a phase gate from higher levels of the Clifford hierarchy which can be
proven to saturate Bravyi and K\"onig's bound in all but a finite number of
special cases. The methodology used is a generalisation of Bravyi and Haah's
method of triorthogonal matrices [Phys. Rev. A 86 052329 (2012)], which may be
of independent interest. For completeness, we show explicitly that the qudit
color codes generalize to gauge color codes, and share many of the favorable
properties of their qubit counterparts.
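The qudit generalization rests on the standard d-level Pauli algebra: the shift operator X and clock operator Z satisfy ZX = ωXZ with ω = exp(2πi/d). A minimal numerical check of this textbook algebra (not code from the paper):

```python
import numpy as np

# Generalized qudit Pauli operators for d = 3.
d = 3
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)        # shift: |j> -> |j+1 mod d>
Z = np.diag(omega ** np.arange(d))       # clock: |j> -> omega^j |j>

# The defining commutation relation and the order-d property.
assert np.allclose(Z @ X, omega * X @ Z)
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))
assert np.allclose(np.linalg.matrix_power(Z, d), np.eye(d))
print("qudit Pauli algebra verified for d =", d)
```

For d = 2 these reduce to the familiar qubit Pauli X and Z, which is why qubit code constructions often lift to qudits once the algebra is generalized this way.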
Topological Order, Quantum Codes and Quantum Computation on Fractal Geometries
We investigate topological order on fractal geometries embedded in $n$ spatial
dimensions. In particular, we diagnose the existence of the topological order
through the lens of quantum information and geometry, i.e., via its equivalence
to a quantum error-correcting code with a macroscopic code distance or the
presence of macroscopic systoles in systolic geometry. We first prove a no-go
theorem that topological order cannot survive on any fractal embedded in 2D.
For fractal lattice models embedded in 3D or higher spatial dimensions,
topological order survives if the boundaries of the interior holes condense
only loop or membrane excitations. Moreover, for a class of models that contain
only loop or membrane excitations, and are hence self-correcting on an
$n$-dimensional manifold, we prove that topological order survives on a large
class of fractal geometries independent of the type of hole boundaries. We
further construct fault-tolerant logical gates using their connection to global
and higher-form topological symmetries. In particular, we have discovered a
logical CCZ gate corresponding to a global symmetry in a class of fractal codes
embedded in 3D with Hausdorff dimension asymptotically approaching
$D_H = 2+\epsilon$ for arbitrarily small $\epsilon$, which hence only requires
a space overhead $O(d^{2+\epsilon})$ with $d$ being the code distance. This in
turn leads to the surprising discovery of certain exotic gapped boundaries that
only condense the combination of loop excitations attached to gapped domain
walls. We further obtain logical $\mathrm{C}^{p}Z$ gates with $p \le n-1$ on
fractal codes embedded in $n$D. In particular, for the logical
$\mathrm{C}^{n-1}Z$ in the $n^{\mathrm{th}}$ level of the Clifford hierarchy,
we can reduce the space overhead to $O(d^{n-1+\epsilon})$. Mathematically, our
findings correspond to macroscopic relative systoles in fractals.
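The Hausdorff dimensions discussed above come from the usual self-similarity formula: a fractal made of N copies of itself, each scaled by 1/s, has dimension log N / log s. A tiny illustration (the specific fractals here are standard examples, not the paper's constructions):

```python
import math

# Self-similarity dimension: N copies at scale 1/s gives log N / log s.
def hausdorff_dimension(n_copies, scale):
    return math.log(n_copies) / math.log(scale)

# A Menger-sponge-like solid in 3D that keeps 26 of 27 sub-cubes has
# dimension just below 3; punching larger holes pushes it toward 2.
print(hausdorff_dimension(26, 3))   # ~ 2.97
# The Sierpinski carpet (8 of 9 sub-squares) sits between 1 and 2.
print(hausdorff_dimension(8, 3))    # ~ 1.89
```

Tuning the hole sizes is what lets a family of fractals approach an integer dimension like 2 from above, which is where the space-overhead savings in the abstract come from.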
Storage and Retrieval Codes in PIR Schemes with Colluding Servers
Private information retrieval (PIR) schemes (with or without colluding
servers) have been proposed for realistic coded distributed data storage
systems. Star product PIR schemes with colluding servers for general coded
distributed storage system were constructed over general finite fields by R.
Freij-Hollanti, O. W. Gnilke, C. Hollanti and A. Karpuk in 2017. These star
product PIR schemes with colluding servers are suitable for the storage of
files over small fields and can be constructed for coded distributed storage
system with large number of servers. In this paper for an efficient storage
code, the problem to find good retrieval codes is considered. In general if the
storage code is a binary Reed-Muller code the retrieval code needs not to be a
binary Reed-Muller code in general. It is proved that when the storage code
contains some special codewords, nonzero retrieval rate star product PIR
schemes with colluding servers can only protect against small number of
colluding servers. We also give examples to show that when the storage code is
a good cyclic code, the best choice of the retrieval code is not cyclic in
general. Therefore in the design of star product PIR schemes with colluding
servers, the scheme with the storage code and the retrieval code in the same
family of algebraic codes is not always efficient.Comment: 25 pages,PIR schemes with the storage code and the retrieval code in
the same family of algebraic codes seem not always efficient. arXiv admin
note: text overlap with arXiv:2207.0316
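The key quantity behind these schemes is the star (Schur) product of the storage code C and retrieval code D: the span of all component-wise products c*d. A toy computation with binary Reed-Muller codes, using the standard fact that RM(1,3) ⋆ RM(1,3) = RM(2,3) (an illustration of the concept, not the paper's construction):

```python
import itertools

# Codewords are stored as Python ints (bitmasks of length 8).

def gf2_rank(rows):
    """Rank over GF(2), eliminating on the lowest set bit of each pivot row."""
    rank, rows = 0, list(rows)
    while rows:
        r = rows.pop()
        if r == 0:
            continue
        rank += 1
        pivot = r & -r
        rows = [x ^ r if x & pivot else x for x in rows]
    return rank

def codewords(gen_rows):
    """All GF(2) linear combinations of the generator rows."""
    words = set()
    for mask in itertools.product([0, 1], repeat=len(gen_rows)):
        w = 0
        for bit, row in zip(mask, gen_rows):
            if bit:
                w ^= row
        words.add(w)
    return words

# RM(1,3): the all-ones vector and the three coordinate functions on 8 points.
rm13 = [0b11111111, 0b01010101, 0b00110011, 0b00001111]
# Star product: span of all component-wise (bitwise AND) products.
prods = {c & d for c in codewords(rm13) for d in codewords(rm13)}
print(gf2_rank(prods))   # 7 = dim RM(2,3), so RM(1,3) * RM(1,3) = RM(2,3)
```

The growth of dim(C ⋆ D) is exactly the tension the abstract describes: a retrieval code D large enough for a good rate can blow up the star product, which limits how many colluding servers the scheme tolerates.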
Lightweight Architectures for Reliable and Fault Detection Simon and Speck Cryptographic Algorithms on FPGA
The widespread use of sensitive and constrained applications necessitates lightweight (low-power and low-area) algorithms developed for constrained nano-devices. However, nearly all such algorithms are optimized for platform-based performance and may not be useful for diverse and flexible applications. The National Security Agency (NSA) has proposed two relatively recent families of lightweight ciphers, i.e., Simon and Speck, designed as efficient ciphers on both hardware and software platforms. This paper proposes concurrent error detection schemes to provide reliable architectures for these two families of lightweight block ciphers. To the best of our knowledge, research on analyzing the reliability of these algorithms and providing fault diagnosis approaches has not been undertaken to date. The main aim of the proposed reliable architectures is to provide high error coverage while maintaining acceptable area and power consumption overheads. To achieve this, we propose a variant of recomputing with encoded operands. These low-complexity schemes are suited for low-resource applications such as sensitive, constrained implantable and wearable medical devices. We perform fault simulations for the proposed architectures by developing a fault model framework. The architectures are simulated and analyzed on recent field-programmable gate array (FPGA) platforms, and it is shown that the proposed schemes provide high error coverage. The proposed low-complexity concurrent error detection schemes are a step forward towards more reliable architectures for Simon and Speck algorithms in lightweight, secure applications.
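A sketch of the recomputation idea, assuming a rotation-based encoding (the paper's exact encoding may differ): Simon's round mixing function is built only from rotations, AND and XOR, all of which commute with word rotation, so recomputing on a rotated operand and rotating back gives a cheap consistency check on the datapath:

```python
# Recomputing-with-rotated-operands error detection for Simon's round
# function.  Since f uses only rotations, AND and XOR, it satisfies
# f(rot(x, r)) == rot(f(x), r) on fault-free hardware; a mismatch after
# undoing the rotation flags a transient fault in one of the two runs.

WORD = 16
MASK = (1 << WORD) - 1

def rol(x, r):
    """Rotate a WORD-bit value left by r positions."""
    return ((x << r) | (x >> (WORD - r))) & MASK

def simon_f(x):
    """Simon's round mixing function: (S^1 x & S^8 x) ^ S^2 x."""
    return (rol(x, 1) & rol(x, 8)) ^ rol(x, 2)

def check(x, r=3):
    """Compute f twice, the second time on a rotated operand, and compare."""
    normal = simon_f(x)
    recomputed = rol(simon_f(rol(x, r)), WORD - r)   # rotate the result back
    return normal == recomputed                      # False would flag a fault

print(all(check(x) for x in range(0, 1 << 16, 257)))
```

A stuck-at fault on a single wire generally breaks the rotational symmetry between the two runs, which is why this family of schemes achieves high error coverage at low area cost.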
Solutions for New Terrestrial Broadcasting Systems Offering Simultaneously Stationary and Mobile Services
Since the first broadcast TV signal was transmitted in the early decades of
the past century, the television broadcasting industry has experienced a series of
dramatic changes. Most recently, following the evolution from analogue to digital
systems, the digital dividend has become one of the main concerns of the
broadcasting industry. In fact, there are many international spectrum authorities
reclaiming part of the broadcasting spectrum to satisfy the growing demand for
other services, such as broadband wireless services, arguing that TV services
are not very spectrum-efficient.
Apart from that, it must be taken into account that, even though mobile
broadcasting has not been considered a major requirement up to now, this will
probably change in the near future. In fact, it is expected that the global mobile
data traffic will increase 11-fold between 2014 and 2018, and what is more, over
two thirds of the data traffic will be video stream by the end of that period.
Therefore, the capability to receive HD services anywhere with a mobile device is
going to be a mandatory requirement for any new generation broadcasting system.
The main objective of this work is to present several technical solutions that
address these challenges. In particular, the main questions to be solved are the
spectrum efficiency issue and the increasing user expectations of receiving high
quality mobile services. In other words, the main objective is to provide technical
solutions for an efficient and flexible usage of the terrestrial broadcasting spectrum
for both stationary and mobile services.
The first contributions of this scientific work are closely related to the study of
the mobile broadcast reception. Firstly, a comprehensive mathematical analysis of
the OFDM signal behaviour over time-varying channels is presented. In order to
maximize the channel capacity in mobile environments, channel estimation and
equalization are studied in depth. First, the most widely implemented equalization
solutions in time-varying scenarios are analyzed, and then, based on these existing
techniques, a new equalization algorithm is proposed for enhancing the receivers’
performance.
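For intuition on why the static case is the easy baseline, here is a minimal sketch of one-tap per-subcarrier equalization (standard OFDM material, not the thesis' proposed algorithm): with a time-invariant channel and a cyclic prefix, each subcarrier sees a single complex gain, so equalization is one division per subcarrier.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N)   # QPSK on N subcarriers
tx = np.fft.ifft(symbols)                                  # OFDM modulation

h = np.array([0.9, 0.3 + 0.2j, 0.1])      # illustrative 3-tap static channel
# Circular convolution models the channel after cyclic-prefix removal.
rx = np.fft.ifft(np.fft.fft(tx) * np.fft.fft(h, N))

H = np.fft.fft(h, N)                      # per-subcarrier channel gains
eq = np.fft.fft(rx) / H                   # one-tap zero-forcing equalizer
print(np.allclose(eq, symbols))           # symbols recovered exactly (noiseless)
```

Once the channel varies within a symbol this diagonal structure breaks down, which is what motivates the more elaborate equalizers the thesis studies.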
An alternative solution for improving efficiency under mobile channel
conditions is to treat Inter-Carrier Interference (ICI) as another noise source.
Specifically, after analyzing the ICI impact and the existing solutions for reducing
the ICI penalty, a new approach based on the robustness of FEC codes is
presented. This new approach employs one-dimensional algorithms at the receiver
and entrusts the ICI-removal task to robust forward error correction codes.
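A small numerical illustration of the ICI referred to above: a channel gain that varies within one OFDM symbol acts, in the frequency domain, as the matrix G = F diag(g) Fᴴ, whose off-diagonal entries leak energy between subcarriers, whereas a static gain leaves G perfectly diagonal.

```python
import numpy as np

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)           # unitary DFT matrix

def ici_matrix(g):
    """Frequency-domain effect of a per-sample gain g within one symbol."""
    return F @ np.diag(g) @ F.conj().T

def offdiag_energy(G):
    return np.abs(G - np.diag(np.diag(G))).sum()

static = ici_matrix(np.ones(N))                              # time-invariant gain
doppler = ici_matrix(1 + 0.2 * np.cos(2 * np.pi * np.arange(N) / N))

print(offdiag_energy(static) < 1e-9, offdiag_energy(doppler) > 1.0)
```

The off-diagonal leakage behaves like extra noise at each subcarrier, which is exactly why a sufficiently robust FEC code can absorb it instead of an explicit ICI canceller.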
Finally, another major contribution of this work is the presentation of the
Layer Division Multiplexing (LDM) as a spectrum-efficient and flexible solution
for offering stationary and mobile services simultaneously. The comprehensive
theoretical study developed here verifies the improved spectrum efficiency,
whereas the included practical validation confirms the feasibility of the system and
presents it as a very promising multiplexing technique, which will surely be a strong
candidate for next-generation broadcasting services.
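A schematic of the LDM idea described in the abstract, with an invented injection level and a noiseless channel purely for illustration: a robust mobile layer and a high-capacity fixed layer share the same channel at different power levels, and the fixed-service receiver decodes and cancels the strong layer before slicing the weak one.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
upper = rng.choice([-1.0, 1.0], size=n)   # robust mobile layer (BPSK here)
lower = rng.choice([-1.0, 1.0], size=n)   # high-capacity fixed layer

injection_db = -10                        # lower layer injected 10 dB below upper
g = 10 ** (injection_db / 20)
tx = upper + g * lower                    # both services in one transmitted signal

# Mobile receiver: treats the weak layer as noise and slices the strong one.
upper_hat = np.sign(tx)
# Fixed receiver: cancels the decoded upper layer, then slices the remainder.
lower_hat = np.sign(tx - upper_hat)

print((upper_hat == upper).all(), (lower_hat == lower).all())
```

In a real system the cancellation is imperfect and each layer carries its own FEC, but the sketch shows why LDM is more spectrum-efficient than time- or frequency-dividing the two services: both occupy the full channel all the time.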