
    Rank-Modulation Rewrite Coding for Flash Memories

    The current flash memory technology focuses on minimizing the cost of its static storage capacity. However, the resulting approach supports a relatively small number of program-erase cycles. This is adequate for consumer devices (e.g., smartphones and cameras), where the number of program-erase cycles is small, but it is not economical for enterprise storage systems that require a large number of lifetime writes. The approach proposed in this paper for alleviating this problem consists of the efficient integration of two key ideas: 1) improving reliability and endurance by representing the information using relative values via the rank modulation scheme, and 2) increasing the overall (lifetime) capacity of the flash device via rewriting codes, namely, performing multiple writes per cell before erasure. This paper presents a new coding scheme that combines rank modulation with rewriting. The key benefits of the new scheme include: 1) the ability to store close to 2 bits per cell on each write with minimal impact on the lifetime of the memory, and 2) efficient encoding and decoding algorithms that make use of recently proposed capacity-achieving write-once-memory (WOM) codes.
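    To make the rank-modulation idea concrete, here is a minimal Python sketch (an illustration of the general principle, not the coding scheme constructed in the paper): information is read off the relative order of the cell charge levels, and a rewrite only pushes one cell above the current maximum instead of erasing the block.

```python
# Minimal illustration of rank modulation (not the paper's code construction):
# information is carried by the relative order of cell charge levels, and a
# rewrite only pushes one cell above the current maximum.

def read_permutation(levels):
    """Return cell indices ranked from highest to lowest charge."""
    return sorted(range(len(levels)), key=lambda i: levels[i], reverse=True)

def push_to_top(levels, cell, delta=1.0):
    """Rewrite by raising `cell` above the current maximum (no erase needed)."""
    levels[cell] = max(levels) + delta
    return levels

levels = [3.1, 0.7, 2.4, 1.9]                     # analog charge levels of 4 cells
print(read_permutation(levels))                   # [0, 2, 3, 1]
print(read_permutation(push_to_top(levels, 1)))   # [1, 0, 2, 3]
```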

    Graph Deep Learning: Methods and Applications

    The past few years have seen the growing prevalence of deep neural networks in various application domains, including image processing, computer vision, speech recognition, machine translation, self-driving cars, game playing, social networks, bioinformatics, and healthcare. Due to its broad applications and strong performance, deep learning, a subfield of machine learning and artificial intelligence, is changing everyone's life.
    Graph learning has been another hot field in the machine learning and data mining communities; it learns knowledge from graph-structured data. Examples of graph learning range from social network analysis, such as community detection and link prediction, to relational machine learning, such as knowledge graph completion and recommender systems, to multi-graph tasks such as graph classification and graph generation.
    An emerging field, graph deep learning, aims at applying deep learning to graphs. To deal with graph-structured data, graph neural networks (GNNs) have been invented in recent years; they directly take graphs as input and output graph/node representations. Although GNNs have shown superior performance to traditional methods in tasks such as semi-supervised node classification, there remains a wide range of other important graph learning problems where either GNNs' applicability has not been explored or GNNs achieve less satisfying performance.
    In this dissertation, we dive deeper into the field of graph deep learning. By developing new algorithms, architectures, and theories, we push graph neural networks' boundaries to a much wider range of graph learning problems. The problems we explore include: 1) graph classification; 2) medical ontology embedding; 3) link prediction; 4) recommender systems; 5) graph generation; and 6) graph structure optimization.
    We first focus on two graph representation learning problems: graph classification and medical ontology embedding. For graph classification, we develop a novel deep GNN architecture which aggregates node features through a novel SortPooling layer that replaces the simple summing used in previous works. We demonstrate its state-of-the-art graph classification performance on benchmark datasets. For medical ontology embedding, we propose a novel hierarchical attention propagation model, which uses an attention mechanism to learn embeddings of medical concepts from hierarchically structured medical ontologies such as ICD-9 and CCS. We validate the learned embeddings on sequential procedure/diagnosis prediction tasks with real patient data.
    Then we investigate GNNs' potential for predicting relations, specifically link prediction and recommender systems. For link prediction, we first develop a theory unifying various traditional link prediction heuristics, and then design a framework to automatically learn suitable heuristics from a given network based on GNNs. Our model shows unprecedentedly strong link prediction performance, significantly outperforming all traditional methods. For recommender systems, we propose a novel graph-based matrix completion model, which uses a GNN to learn graph structure features from the bipartite graph formed by user and item interactions. Our model not only outperforms various matrix completion baselines, but also demonstrates excellent transfer learning ability: a model trained on MovieLens can be directly used to predict Douban movie ratings with high performance.
    Finally, we explore GNNs' applicability to graph generation and graph structure optimization. We focus on a specific type of graph that usually carries computations, namely directed acyclic graphs (DAGs). We develop a variational autoencoder (VAE) for DAGs and prove that it can injectively map computations into a latent space. This injectivity allows us to perform optimization in the continuous latent space instead of the original discrete structure space. We then apply our VAE to two types of DAGs, neural network architectures and Bayesian networks. Experiments show that our model not only generates novel and valid DAGs, but also finds high-quality neural architectures and Bayesian networks by performing Bayesian optimization in its latent space.
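    A rough sketch of the SortPooling-style readout described above (our simplification; the dissertation's layer may differ in details such as tie-breaking across channels): node feature rows are sorted by a designated channel and truncated or zero-padded to a fixed number of rows, so graphs of different sizes produce a fixed-size tensor that a 1D convolution can consume.

```python
import numpy as np

def sort_pooling(node_feats, k):
    """Sketch of a SortPooling-style readout: sort node feature rows by the
    last feature channel, then truncate or zero-pad to exactly k rows so that
    every graph yields a fixed-size representation."""
    order = np.argsort(-node_feats[:, -1])   # sort nodes by last channel, descending
    pooled = node_feats[order][:k]           # keep at most k nodes
    if pooled.shape[0] < k:                  # pad small graphs with zero rows
        pad = np.zeros((k - pooled.shape[0], node_feats.shape[1]))
        pooled = np.vstack([pooled, pad])
    return pooled                            # shape (k, d), ready for a 1D conv

feats = np.random.rand(7, 4)                 # 7 nodes, 4-dimensional GNN outputs
print(sort_pooling(feats, k=10).shape)       # (10, 4)
```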

    Quantum Computing: Automata, Games, and Complexity (Computação quântica: autômatos, jogos e complexidade)

    Advisor: Arnaldo Vieira Moura. Master's dissertation (Mestre em Ciência da Computação), Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Since its inception, Theoretical Computer Science has dealt with models of computation primarily in a very abstract and mathematical way. The notion of efficient computation was investigated using these models mainly without seeking to understand the inherent capabilities and limitations of the actual physical world. In this regard, Quantum Computing represents a rupture with this paradigm. Rooted in the postulates of Quantum Mechanics, it is able to attribute a precise physical meaning to computation, as far as our understanding of nature goes. These postulates give rise to fundamentally different properties, one of which, namely entanglement, is of central importance to computation and information-processing tasks. Entanglement captures a notion of correlation unique to quantum models. This quantum correlation can be stronger than any classical one, thus being at the heart of some quantum capabilities that go beyond the classical. In this thesis, we investigate entanglement from the perspective of quantum computational complexity. More precisely, we study a well-known complexity class, defined in terms of proof verification, in which a verifier has access to multiple unentangled quantum proofs (QMA(k)). Assuming that the proofs do not exhibit quantum correlations seems to be a non-trivial hypothesis, potentially making this class larger than the one in which only a single proof is given. Notwithstanding, finding tight complexity bounds for QMA(k) has been a central open question in quantum complexity for over a decade. In this context, our contributions are threefold. First, we study closely related classes, showing how computational resources may affect their power, in order to shed some light on QMA(k) itself. Second, we establish a relationship between classical Probabilistically Checkable Proofs (PCPs) and QMA(k), allowing us to recover known results in a unified and simplified way, besides exposing the interplay between them. Third, we show that some paths to settling this open question are obstructed by computational hardness. We then turn our attention to restricted models of quantum computation, more specifically, quantum finite automata. A model known as the Two-way Quantum Classical Finite Automaton (2QCFA) is the main object of our inquiry. Its study is intended to reveal the computational power provided by finite-dimensional quantum memory. We extend this automaton with the capability of placing a finite number of markers on the input tape. For any number of markers, we show that this extension is more powerful than its classical deterministic and probabilistic analogues. Besides bringing advances to these two complementary lines of inquiry, this thesis also provides an extensive exposition of both subjects: computational complexity and automata theory.
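    For reference, a hedged sketch of the standard acceptance conditions usually taken to define QMA(k), with the conventional 2/3 and 1/3 thresholds (the thesis may use a different but equivalent parametrization): a language L is in QMA(k) if there is a polynomial-time quantum verifier V such that

```latex
\begin{align*}
  x \in L &\;\Longrightarrow\; \exists\, |\psi_1\rangle,\ldots,|\psi_k\rangle:\;
     \Pr\big[V(x,\,|\psi_1\rangle\otimes\cdots\otimes|\psi_k\rangle)\ \text{accepts}\big] \;\ge\; \tfrac{2}{3},\\
  x \notin L &\;\Longrightarrow\; \forall\, |\psi_1\rangle,\ldots,|\psi_k\rangle:\;
     \Pr\big[V(x,\,|\psi_1\rangle\otimes\cdots\otimes|\psi_k\rangle)\ \text{accepts}\big] \;\le\; \tfrac{1}{3},
\end{align*}
% each proof |\psi_i> lives on poly(|x|) qubits, and the tensor-product form
% encodes the promise that the k proofs are unentangled.
```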

    Generalized List Decoding

    This paper concerns itself with the question of list decoding for general adversarial channels, e.g., bit-flip (XOR) channels, erasure channels, AND (Z-) channels, OR channels, real adder channels, noisy typewriter channels, etc. We precisely characterize when exponential-sized (or positive-rate) (L-1)-list decodable codes (where the list size L is a universal constant) exist for such channels. Our criterion asserts that: "For any given general adversarial channel, it is possible to construct positive-rate (L-1)-list decodable codes if and only if the set of completely positive tensors of order L with admissible marginals is not entirely contained in the order-L confusability set associated to the channel." The sufficiency is shown via a random code construction (combined with expurgation or time-sharing). The necessity is shown by 1. extracting equicoupled subcodes (a generalization of equidistant codes) from any large code sequence using the hypergraph Ramsey theorem, and 2. significantly extending the classic Plotkin bound in coding theory to list decoding for general channels using duality between the completely positive tensor cone and the copositive tensor cone. In the proof, we also obtain a new fact regarding asymmetry of joint distributions, which may be of independent interest. Other results include: 1. the list decoding capacity with asymptotically large L for general adversarial channels; 2. a tight list size bound for most constant composition codes (a generalization of constant weight codes); 3. a rederivation and demystification of Blinovsky's [Bli86] characterization of the list decoding Plotkin points (the threshold at which large codes are impossible); 4. an evaluation of general bounds ([WBBJ]) for unique decoding in the error correction code setting.
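    For orientation, one common way to phrase (L-1)-list decodability for a general adversarial channel (our notation, not necessarily the paper's): if the adversary may turn a transmitted word x into any output y in an admissible set A(x), then a code C is (L-1)-list decodable when no output is consistent with L or more codewords,

```latex
\forall\, y:\quad \big|\{\, x \in \mathcal{C} \;:\; y \in A(x) \,\}\big| \;\le\; L-1 .
```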

    Space programs summary 37-64. Volume 3 - Supporting research and advanced development, 1 June - 31 July 1970

    Interplanetary flight missions and systems development for thermoelectric outer-planet spacecraft.

    Residue number system coded differential space-time-frequency coding.

    Thesis (Ph.D.), University of KwaZulu-Natal, Durban, 2007.
    The rapidly growing need for fast and reliable transmission over a wireless channel motivates the development of communication systems that can support high data rates at low complexity. Achieving reliable communication over a wireless channel is a challenging task, largely due to multipath propagation, which may lead to intersymbol interference (ISI). Diversity techniques such as time, frequency, and space diversity are commonly used to combat multipath fading. Classical diversity techniques use repetition codes, such that the information is replicated and transmitted over several channels that are sufficiently spaced. In fading channels, the performance across some diversity branches may be excessively attenuated, making throughput unacceptably small. In principle, more powerful coding techniques can be used to maximize the diversity order, but this leads to bandwidth expansion or increased transmission power to accommodate the redundant bits. Hence there is a need for coding and modulation schemes that provide low error rate performance in a bandwidth-efficient manner. If diversity schemes are combined, more independent dimensions become available for information transfer.
    The first part of the thesis addresses achieving temporal diversity by employing error-correcting coding schemes combined with interleaving. Noncoherent differential modulation does not require explicit knowledge or an estimate of the channel; instead, the information is encoded in the transitions between symbols. This lends itself to a turbo-like serial concatenation of a standard outer channel encoder with an inner modulation code amenable to noncoherent detection, connected through an interleaver. An iterative approach to joint decoding and demodulation can be realized by exchanging soft information between the decoder and the demodulator. This has been shown to be effective and holds promise for approaching capacity over fast fading channels. However, most of these schemes employ low-rate convolutional codes as their channel encoders. In this thesis we propose the use of redundant residue number system codes instead, and show that these codes can achieve comparable performance at minimal complexity and high data rates.
    The second part deals with the possibility of combining several diversity dimensions into a reliable, bandwidth-efficient communication scheme. Orthogonal frequency division multiplexing (OFDM) has been used to combat multipath. Combining OFDM with multiple-input multiple-output (MIMO) systems to form MIMO-OFDM not only reduces complexity by eliminating the need for equalization, but also provides large channel capacity and high diversity potential. Space-time coded OFDM was proposed and shown to be an effective transmission technique for MIMO systems. Space-frequency coding and space-time-frequency coding were developed out of the need to exploit the frequency diversity due to multipath. Most of the schemes proposed in the literature obtain frequency diversity predominantly from the frequency-selective nature of the fading channel. In this thesis we propose the use of a residue number system code as the frequency encoder, and show that the proposed space-time-frequency coding scheme can maximize the diversity gains over the space, time, and frequency domains. The gain of MIMO-OFDM comes at the expense of increased receiver complexity. Furthermore, most of the proposed space-time-frequency coding schemes assume frequency-selective block-fading channels, which is not an ideal assumption for broadband wireless communications. Relatively high mobility in broadband wireless communication systems may result in a high Doppler frequency and hence time-selective (rapid) fading. Rapidly changing channel characteristics impede the channel estimation process and may result in incorrect estimates of the channel coefficients. The last part of the thesis therefore examines the performance of differential space-time-frequency coding in fast fading channels.
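    To make the residue number system building block concrete, here is a minimal Python sketch of a redundant RNS code (illustrative moduli chosen by us, not the thesis's parameters): an integer is represented by its residues modulo pairwise-coprime moduli, and the extra redundant residues give the receiver room to detect or correct corrupted components before reconstruction.

```python
# Sketch of the redundant residue number system (RRNS) idea: an integer is
# represented by its residues modulo pairwise-coprime moduli; the extra
# ("redundant") moduli let the receiver detect/correct corrupted residues.
from math import prod

MODULI = [7, 9, 11, 13, 16]           # first 3 are information moduli, last 2 redundant

def rns_encode(x):
    """Map integer x to its residue vector."""
    return [x % m for m in MODULI]

def crt_decode(residues, moduli):
    """Recover x from residues via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(.., -1, m) gives the modular inverse
    return x % M

x = 500                               # must lie below prod([7, 9, 11]) = 693
codeword = rns_encode(x)              # [3, 5, 5, 6, 4]
print(crt_decode(codeword, MODULI))   # 500 (any 3 correct residues already determine x)
```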

    Secure and Efficient Comparisons between Untrusted Parties

    A vast number of online services are based on users contributing their personal information. Examples are manifold, including social networks, electronic commerce, sharing websites, lodging platforms, and genealogy. In all cases user privacy depends on a collective trust in all involved intermediaries, such as service providers, operators, administrators, or even help desk staff. A single adversarial party in the whole chain of trust voids user privacy, and the number of intermediaries is ever growing. Thus, user privacy must be preserved at all times and stages, independent of the intrinsic goals of any involved party. Furthermore, next to these new services, traditional offline analytic systems are being replaced by online services run in large data centers. Centralized processing of electronic medical records, genomic data, and other health-related information is anticipated due to advances in medical research, better analytic results based on large amounts of medical information, and lowered costs. In these scenarios privacy is of utmost concern due to the large amount of personal information contained within the centralized data.
    We focus on the challenge of privacy-preserving processing of genomic data, specifically comparing genomic sequences. The problem that arises is how to efficiently compare private sequences of two parties while preserving the confidentiality of the compared data. It follows that the privacy of the data owner must be preserved, which means that as little information as possible must be leaked to any party participating in the comparison. Leakage can happen at several points during a comparison: the secured inputs for the comparing party might leak some information about the original input, or the output might leak information about the inputs. In the latter case, results of several comparisons can be combined to infer information about the confidential input of the party under observation. Genomic sequences serve as a use case, but the proposed solutions are more general and apply to the generic field of privacy-preserving comparison of sequences. The solution should be efficient, such that performing a comparison yields runtimes linear in the length of the input sequences and thus produces acceptable costs for a typical use case.
    To tackle the problem of efficient, privacy-preserving sequence comparisons, we propose a framework consisting of three main parts. a) The basic protocol presents an efficient sequence comparison algorithm, which transforms a sequence into a set representation, allowing distance measures over input sequences to be approximated by distance measures over sets. The sets are then represented by an efficient data structure, the Bloom filter, which allows the evaluation of certain set operations without storing the actual elements of the possibly large set. This representation yields low distortion when comparing similar sequences. Operations on the set representation are carried out using efficient, partially homomorphic cryptographic systems to protect the confidentiality of the inputs. The output can be adjusted to return either the actual approximated distance or the result of an in-range check of the approximated distance. b) Building upon this efficient basic protocol, we introduce the first mechanism to reduce the success of inference attacks by detecting and rejecting similar queries in a privacy-preserving way. This is achieved by generating generalized commitments for inputs. The generalization is done by treating inputs as messages received from a noisy channel, to which error correction from coding theory is applied. In this way, similar inputs are defined as inputs whose generalizations lie within a predefined Hamming distance of each other. We present a protocol to perform a zero-knowledge proof assessing whether the generalized input is indeed a generalization of the actual input. Furthermore, we generalize a very efficient inference attack on privacy-preserving sequence comparison protocols and use it to evaluate our inference-control mechanism. c) The third part of the framework lightens the computational load of the client taking part in the comparison protocol by presenting a compression mechanism for partially homomorphic cryptographic schemes. It reduces the transmission and storage overhead induced by semantically secure homomorphic encryption schemes, as well as the encryption latency. The compression is achieved by constructing an asymmetric stream cipher such that the generated ciphertext can be converted into a ciphertext of an associated homomorphic encryption scheme without revealing any information about the plaintext. This is the first compression scheme available for partially homomorphic encryption schemes; compression schemes for fully homomorphic encryption are several orders of magnitude slower at converting the transmission ciphertext into the homomorphically encrypted ciphertext, whereas our scheme achieves optimal conversion performance. It further allows keystreams to be generated offline and thus supports offloading to trusted devices, improving transmission, storage, and power efficiency.
    We give security proofs for all relevant parts of the proposed protocols and algorithms. A performance evaluation of the core components, comprising a theoretical analysis and practical experiments, demonstrates the practicability of the proposed solutions and shows the accuracy and efficiency of the approximations and probabilistic algorithms. Several variations and configurations for detecting similar inputs are studied in an in-depth discussion of the inference-control mechanism. A human mitochondrial genome database is used for the practical evaluation to compare genomic sequences and detect similar inputs as described by the use case. In summary, we show that it is indeed possible to construct an efficient and privacy-preserving (genomic) sequence comparison while controlling the amount of information that leaves the comparison. To the best of our knowledge, we also contribute the first efficient privacy-preserving inference detection and control mechanism, as well as the first ciphertext compression system for partially homomorphic cryptographic systems.
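    A minimal sketch of the set-based comparison idea behind the basic protocol in part a), leaving out the homomorphic-encryption layer and assuming (for illustration only) that sequences are decomposed into q-grams and compared with a Dice-style set similarity read off the Bloom filter bits:

```python
# Set-based sequence comparison sketch (no encryption layer): decompose each
# sequence into q-grams, insert them into a Bloom filter, and approximate the
# set similarity of two sequences from the overlap of their bit vectors.
import hashlib

M, K, Q = 256, 4, 3                                  # filter bits, hash count, q-gram size

def bloom(seq):
    bits = [0] * M
    grams = {seq[i:i + Q] for i in range(len(seq) - Q + 1)}
    for g in grams:
        for k in range(K):                           # K hash positions per q-gram
            h = hashlib.sha256(f"{k}:{g}".encode()).digest()
            bits[int.from_bytes(h, "big") % M] = 1
    return bits

def dice(a, b):
    """Dice coefficient over set bits approximates the q-gram set similarity."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

print(dice(bloom("GATTACAGGT"), bloom("GATTACAGCT")))  # close to 1 for similar sequences
print(dice(bloom("GATTACAGGT"), bloom("CCCCCCCCCC")))  # near 0 for dissimilar ones
```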

    Subject index volumes 1–92
