409 research outputs found
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Local Editing in Lempel-Ziv Compressed Data
This thesis explores the problem of editing data while compressed by a variant of Lempel-Ziv compression. We show that the random-access properties of LZ-End compression support random edits, and present the first algorithm to achieve this. The thesis goes on to adapt the LZ-End parsing so that the random-access properties become local-access properties, which have tighter memory bounds. Furthermore, the new parsing allows a much improved algorithm to edit the compressed data.
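As a rough illustration of the parsing this thesis builds on (not the thesis's own algorithms, which are far more involved), here is a naive LZ-End-style parser and decoder: each phrase copies a source substring that ends exactly at an earlier phrase boundary, then appends one literal character. The boundary restriction is what makes decoding a single phrase — and hence random access — cheap. Function names and the cubic-time search are illustrative only.

```python
def lz_end_parse(text):
    """Naive LZ-End parse (cubic time, for illustration only).
    Each phrase copies a source substring ending exactly at a previous
    phrase boundary, then appends one literal character."""
    phrases = []     # (boundary_index or None, copy_length, literal_char)
    boundaries = []  # exclusive end position of each phrase parsed so far
    i, n = 0, len(text)
    while i < n:
        best_l, best_j = 0, None
        for j, end in enumerate(boundaries):
            max_l = min(end, n - 1 - i)  # leave room for the literal
            for l in range(max_l, 0, -1):
                if text[end - l:end] == text[i:i + l]:
                    if l > best_l:
                        best_l, best_j = l, j
                    break  # longest match for this boundary found
        phrases.append((best_j, best_l, text[i + best_l]))
        i += best_l + 1
        boundaries.append(i)
    return phrases

def lz_end_decode(phrases):
    """Invert lz_end_parse by replaying copies from phrase boundaries."""
    out, boundaries = "", []
    for j, l, c in phrases:
        if l:
            end = boundaries[j]
            out += out[end - l:end]
        out += c
        boundaries.append(len(out))
    return out
```

Because every copy source ends at a phrase boundary, decoding phrase `k` only needs the boundary list and earlier phrases, which is the property the thesis exploits for local edits.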
New Protocols for Conference Key and Multipartite Entanglement Distillation
We approach two interconnected problems of quantum information processing in
networks: Conference key agreement and entanglement distillation, both in the
so-called source model where the given resource is a multipartite quantum state
and the players interact over public classical channels to generate the desired
correlation. The first problem is the distillation of a conference key when the
source state is shared between a number of legal players and an eavesdropper;
the eavesdropper, apart from starting off with this quantum side information,
also observes the public communication between the players. The second is the
distillation of Greenberger-Horne-Zeilinger (GHZ) states by means of local
operations and classical communication (LOCC) from the given mixed state. These
problem settings extend our previous paper [IEEE Trans. Inf. Theory
68(2):976-988, 2022], and we generalise its results: using a quantum version of
the task of communication for omniscience, we derive novel lower bounds on the
distillable conference key from any multipartite quantum state by means of
non-interacting communication protocols. Secondly, we establish novel lower
bounds on the yield of GHZ states from multipartite mixed states. Namely, we
present two methods to produce bipartite entanglement between sufficiently many
nodes so as to produce GHZ states. Next, we show that the conference key
agreement protocol can be made coherent under certain conditions, enabling the
direct generation of multipartite GHZ states.
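The target resource of the second problem can be made concrete with a few lines of NumPy (a sketch of the state itself, not of the distillation protocols): the n-party GHZ state places all amplitude on the two perfectly correlated computational-basis outcomes, which is what makes it directly convertible into a shared conference key.

```python
import numpy as np

def ghz_state(n):
    """State vector of the n-party GHZ state (|0...0> + |1...1>)/sqrt(2)."""
    psi = np.zeros(2 ** n)
    psi[0] = psi[-1] = 1.0 / np.sqrt(2.0)
    return psi

psi = ghz_state(3)
probs = psi ** 2  # Born rule (amplitudes are real here)
# All probability mass sits on outcomes 000 and 111: measuring every
# party in the computational basis yields one perfectly correlated,
# uniformly random bit -- the essence of a conference key.
```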
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Designing a New Tactile Display Technology and its Disability Interactions
People with visual impairments have a strong desire for a refreshable tactile interface that can provide immediate access to a full page of Braille and tactile graphics. Regrettably, existing devices come at considerable expense and remain out of reach for many. The exorbitant costs associated with current tactile displays stem from their intricate design and the multitude of components needed for their construction. This underscores the pressing need for technological innovation that can enhance tactile displays, making them more accessible and available to individuals with visual impairments. This research thesis delves into the development of a novel tactile display technology known as Tacilia. This technology's necessity and prerequisites are informed by in-depth qualitative engagements with students who have visual impairments, alongside a systematic analysis of the architectures underpinning existing tactile display technologies. The evolution of Tacilia unfolds through iterative processes encompassing conceptualisation, prototyping, and evaluation. With Tacilia, three distinct products and interactive experiences are explored, empowering individuals to manually draw tactile graphics, generate digitally designed media through printing, and display these creations on a dynamic pin array display. This innovation underscores Tacilia's capability to streamline the creation of refreshable tactile displays, rendering them more fitting, usable, and economically viable for people with visual impairments.
Brain Computations and Connectivity [2nd edition]
This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations.
Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems; and how the computations are performed.
The aim of this book is to elucidate what is computed in different brain systems; and to describe current biologically plausible computational approaches and models of how each of these brain systems computes.
Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease; and to artificial intelligence which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions.
This book is pioneering in taking this approach to brain function: considering what is computed by many of our brain systems, and how it is computed. It updates the earlier book, Rolls (2021) Brain Computations: What and How, Oxford University Press, with much new evidence, including the connectivity of the human brain.
Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience, from medical sciences including neurology and psychiatry, from computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.
LIPIcs, Volume 274, ESA 2023, Complete Volume
Computer Aided Verification
This open access two-volume set LNCS 13371 and 13372 constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: Invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: Probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.
From Weakly Supervised Learning to Online Cataloguing (De l'apprentissage faiblement supervisé au catalogage en ligne)
Applied mathematics and machine computations have raised a lot of hope since the recent success of supervised learning. Many practitioners in industry have been trying to switch from their old paradigms to machine learning. Interestingly, those data scientists spend more time scraping, annotating and cleaning data than fine-tuning models. This thesis is motivated by the following question: can we derive a more generic framework than that of supervised learning in order to learn from cluttered data? This question is approached through the lens of weakly supervised learning, assuming that the bottleneck of data collection lies in annotation. We model weak supervision as giving, rather than a unique target, a set of target candidates. We argue that one should look for an "optimistic" function that matches most of the observations. This allows us to derive a principle to disambiguate partial labels. We also discuss the advantage of incorporating unsupervised learning techniques into our framework, in particular manifold regularization approached through diffusion techniques, for which we derive a new algorithm that scales better with input dimension than the baseline method. Finally, we switch from passive to active weakly supervised learning, introducing the "active labeling" framework, in which a practitioner can query weak information about chosen data. Among other things, we leverage the fact that one does not need full information to access stochastic gradients and perform stochastic gradient descent.
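The "optimistic" principle described above can be sketched in a few lines: instead of a single target label per example, each example carries a set of candidate labels, and the loss charges only the best (lowest-loss) candidate. The function name and the cross-entropy choice here are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def optimistic_loss(scores, candidate_sets):
    """Partial-label ("optimistic") loss: for each example, take the
    smallest cross-entropy over its set of candidate labels.

    scores: (n, k) array of class scores.
    candidate_sets: list of n sets of candidate class indices."""
    z = scores - scores.max(axis=1, keepdims=True)  # stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    losses = [min(-logp[i, y] for y in S)
              for i, S in enumerate(candidate_sets)]
    return float(np.mean(losses))
```

When the candidate set contains the label the model already prefers, the optimistic loss coincides with the fully supervised loss; ambiguity only hurts when every candidate is implausible, which is what lets the learner disambiguate partial labels.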
Play Among Books
How does coding change the way we think about architecture? Miro Roman and his AI Alice_ch3n81 develop a playful scenario in which they propose coding as the new literacy of information. They convey knowledge in the form of a project model that links the fields of architecture and information through two interwoven narrative strands in an "infinite flow" of real books.
- …