780 research outputs found

    Cloud-based homomorphic encryption for privacy-preserving machine learning in clinical decision support

    While privacy and security concerns dominate public cloud services, Homomorphic Encryption (HE) is seen as an emerging solution that ensures secure processing of sensitive data over untrusted networks in the public cloud or by third-party cloud vendors. It relies on the fact that some encryption algorithms exhibit the property of homomorphism, which allows data to be manipulated meaningfully while still in encrypted form, although major stumbling blocks remain before the technology can be considered mature for production cloud environments. Such a framework would find particular relevance in Clinical Decision Support (CDS) applications deployed in the public cloud. CDS applications perform an important computational and analytical role over confidential healthcare information with the aim of supporting decision-making in clinical practice. Machine Learning (ML) is employed in CDS applications that typically learn and can personalise actions based on individual behaviour. A relatively simple-to-implement, common and consistent framework is sought that can overcome most limitations of Fully Homomorphic Encryption (FHE) in order to offer an expanded and flexible set of HE capabilities. In the absence of a significant breakthrough in FHE efficiency and practical use, a solution relying on client interactions appears to be the best available approach for meeting the requirements of private CDS-based computation, so long as security is not significantly compromised. A hybrid solution is introduced that intersperses limited two-party interactions amongst the main homomorphic computations, allowing exchange of both numerical and logical cryptographic contexts in addition to resolving other major FHE limitations. Interactions involve client-based ciphertext decryptions blinded by data obfuscation techniques, to maintain privacy.
This thesis explores the middle ground whereby HE schemes can provide improved and efficient arbitrary computational functionality over a significantly reduced two-party network interaction model involving data obfuscation techniques. This compromise allows the powerful capabilities of HE to be leveraged, providing a more uniform, flexible and general approach to privacy-preserving system integration that is suitable for cloud deployment. The proposed platform is uniquely designed to make HE more practical for mainstream clinical application use, equipped with a rich set of capabilities and a potentially very complex depth of HE operations. Such a solution would be suitable for the long-term privacy-preserving processing requirements of a cloud-based CDS system, which would typically require complex combinatorial logic, workflow and ML capabilities.
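The core idea of computing on ciphertexts can be made concrete with a minimal sketch. The scheme below (E(m) = m + k mod N with one-time keys) is a toy stand-in chosen for readability, not the scheme used in the thesis; a production CDS system would use Paillier, BGV/BFV or similar. It only illustrates the pattern: an untrusted cloud aggregates encrypted patient readings without ever decrypting them.

```python
import random

# Toy additively homomorphic scheme: E(m) = (m + k) mod N.
# Hypothetical illustration only -- not a real HE scheme.
N = 2**61 - 1  # public modulus (hypothetical parameter)

def keygen(n_values):
    return [random.randrange(N) for _ in range(n_values)]

def encrypt(m, k):
    return (m + k) % N

def decrypt(c, k):
    return (c - k) % N

# Client encrypts readings; only the client holds the keys.
readings = [120, 135, 128]   # e.g. systolic blood pressure samples
keys = keygen(len(readings))
ciphertexts = [encrypt(m, k) for m, k in zip(readings, keys)]

# Untrusted cloud: adds ciphertexts without decrypting anything.
encrypted_sum = sum(ciphertexts) % N

# Client decrypts the aggregate under the summed key.
total = decrypt(encrypted_sum, sum(keys) % N)
assert total == sum(readings)  # 383
```

The blinded two-party interactions described above follow the same shape: the server sends a ciphertext masked with a random value, the client decrypts and operates on the masked plaintext, and the server removes the mask homomorphically afterwards.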

    A Primer on MIMO Detection Algorithms for 5G Communication Network

    In the recent past, demand for mobile data has increased tremendously due to the proliferation of hand-held devices that give millions of people access to video streaming, VoIP and other internet-based usage, including machine-to-machine (M2M) communication. One of the anticipated attributes of the fifth-generation (5G) network is its ability to meet this enormous data-rate requirement, on the order of tens of Gbps. A particularly promising technology that can provide this desired performance in the 5G network is massive multiple-input, multiple-output, otherwise known as massive MIMO. The use of massive MIMO in 5G cellular networks, where data rates on the order of 100x those of the current state-of-the-art LTE-A are expected alongside high spectral efficiency, very low latency and low energy consumption, presents a challenge in symbol/signal detection and parameter estimation as a result of the high dimension of the antenna arrays required. One of the major bottlenecks in realising the benefits of such massive MIMO systems is achieving detectors with realistically low complexity for such huge systems. We therefore review various MIMO detection algorithms, aiming for low computational complexity with high performance that scales well with increasing numbers of transmit antennas, suitable for massive MIMO systems. We evaluate detection algorithms for small- and medium-dimension MIMO, as well as combinations of them, in order to achieve these objectives. The review shows that no single detector can be said to be ideal for massive MIMO, and that a low-complexity detector with optimal performance suitable for 5G massive MIMO systems remains an open research issue. A comprehensive review of such detection algorithms for massive MIMO had not previously been presented in the literature, which this work provides.
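The simplest family of detectors such a review compares against is linear detection. A minimal sketch of zero-forcing (ZF) detection for a real-valued 2x2 channel with BPSK symbols is shown below; the channel matrix and symbols are illustrative values, not drawn from the work, and real systems operate on complex-valued channels of much higher dimension.

```python
# Minimal zero-forcing detector: invert the (known) channel matrix H,
# then slice each entry of H^{-1} y to the nearest BPSK symbol.
# Hypothetical 2x2 real-valued example for illustration only.

def zf_detect(H, y):
    (a, b), (c, d) = H
    det = a * d - b * c
    if det == 0:
        raise ValueError("channel matrix is singular")
    # x_hat = H^{-1} y, written out for the 2x2 case
    x0 = (d * y[0] - b * y[1]) / det
    x1 = (-c * y[0] + a * y[1]) / det
    slice_bpsk = lambda v: 1 if v >= 0 else -1
    return [slice_bpsk(x0), slice_bpsk(x1)]

H = [[1.0, 0.3],
     [0.2, 0.9]]
x = [1, -1]                                  # transmitted BPSK symbols
y = [H[0][0]*x[0] + H[0][1]*x[1] + 0.05,     # received, with small noise
     H[1][0]*x[0] + H[1][1]*x[1] - 0.04]
assert zf_detect(H, y) == x
```

ZF has the low complexity sought for massive MIMO but amplifies noise on ill-conditioned channels, which is precisely the performance gap the reviewed near-optimal detectors aim to close.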

    Design and implementation of multi-user MIMO precoding algorithms

    The demand for high-speed communications required by cutting-edge applications has put a strain on the already saturated wireless spectrum. The incorporation of antenna arrays at both ends of the communication link has provided improved spectral efficiency and link reliability to the inherently complex wireless environment, thus allowing high data-rate applications to thrive without the cost of extra bandwidth consumption. As a consequence, multiple-input multiple-output (MIMO) systems have become the key technology for wideband communication standards in both single-user and multi-user setups. The main difficulty in single-user MIMO systems stems from the signal detection stage at the receiver, whereas multi-user downlink systems struggle with the challenge of enabling non-cooperative signal acquisition at the user terminals. In this respect, precoding techniques perform a pre-equalization stage at the base station so that the signal at each receiver can be interpreted independently and without knowledge of the overall channel state. Vector precoding (VP) has recently been proposed for non-cooperative signal acquisition in the multi-user broadcast channel. Its performance advantage over the more straightforward linear precoding algorithms is the result of an added perturbation vector which enhances the properties of the precoded signal. Nevertheless, the computation of the perturbation signal entails a search for the closest point in an infinite lattice, which is known to be in the class of non-deterministic polynomial-time hard (NP-hard) problems. This thesis addresses the difficulties that stem from the perturbation process in VP systems from both theoretical and practical perspectives. On one hand, the asymptotic performance of VP is analyzed assuming optimal decoding.
Since the perturbation process hinders the analytical assessment of VP performance, lower and upper bounds on the expected data rate are reviewed and proposed. Based on these bounds, VP is compared to linear precoding with respect to the performance after a weighted sum-rate optimization, the power resulting from a quality-of-service (QoS) formulation, and the performance when balancing the user rates. On the other hand, the intricacies of performing an efficient computation of the perturbation vector are analyzed. This study focuses on tree-search techniques that, by means of a strategic node-pruning policy, reduce the complexity derived from an exhaustive search and yield close-to-optimum performance. In that respect, three tree-search algorithms are proposed. The fixed-sphere encoder (FSE) features a constant data path and a non-iterative architecture that enable the parallel processing of the set of vector hypotheses and thus allow for high data-processing rates. The sequential best-node expansion (SBE) algorithm applies a distance-control policy to reduce the number of metric computations performed during the tree traversal. Finally, the low-complexity SBE (LC-SBE) aims at reducing the complexity and latency of the aforementioned algorithm by combining an approximate distance-computation model and a novel approach of variable run-time constraints. Furthermore, the hardware implementation of non-recursive tree-search algorithms for the precoding scenario is also addressed in this thesis. More specifically, the hardware architecture design and resource occupation of the FSE and K-Best fixed-complexity tree-search techniques are presented. The determination of the ordered sequence of complex-valued nodes, also known as the Schnorr-Euchner enumeration, is required in order to select the nodes to be evaluated during the tree traversal.
With the aim of minimizing the hardware resource demand of such a computationally expensive task, a novel non-sequential and low-complexity enumeration algorithm is presented, which enables the independent selection of the nodes within the ordered sequence. The incorporation of the proposed enumeration technique, along with a fully-pipelined architecture of the FSE and K-Best approaches, allows for data-processing throughputs of up to 5 Gbps in a 4x4 antenna setup.
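The perturbation search that FSE, SBE and K-Best approximate can be stated in a few lines as a brute-force closest-point problem: pick the integer perturbation t that minimises the transmit energy of P(d + tau*t). The matrix P, data d and modulo base tau below are small hypothetical values chosen so the effect is visible; the exponential cost of this exhaustive enumeration is exactly why the thesis resorts to pruned tree searches.

```python
from itertools import product

def transmit_energy(P, s):
    """||P s||^2 for a real matrix P and vector s."""
    return sum(sum(P[i][j] * s[j] for j in range(len(s))) ** 2
               for i in range(len(P)))

def best_perturbation(P, d, tau, radius=2):
    """Exhaustive search over t in {-radius..radius}^n -- exponential
    in n, which is why NP-hardness forces approximate tree searches."""
    best_t, best_e = None, float("inf")
    for t in product(range(-radius, radius + 1), repeat=len(d)):
        s = [d[k] + tau * t[k] for k in range(len(d))]
        e = transmit_energy(P, s)
        if e < best_e:
            best_t, best_e = list(t), e
    return best_t, best_e

P = [[1.0, -0.95],
     [-0.95, 1.0]]          # hypothetical precoding matrix (strong coupling)
d = [1, -1]                 # user data symbols
t, e = best_perturbation(P, d, tau=2.0)
# The perturbed vector needs far less transmit power than plain d.
assert e < transmit_energy(P, d)
```

On this ill-conditioned P, the unperturbed energy is about 7.6 while the perturbed one is about 0.005, which is the whole point of adding the perturbation vector before linear precoding.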

    Mobile and Wireless Communications

    Mobile and wireless communications have been one of the major revolutions of the late twentieth century. We are witnessing very fast growth in these technologies, and mobile and wireless communications have become ubiquitous in our society and indispensable for our daily lives. The relentless demand for higher data rates with better quality of service to support state-of-the-art applications has revolutionized the wireless communication field and led to the emergence of new technologies such as Bluetooth, WiFi, WiMAX, ultra-wideband and OFDMA. Moreover, the market trend confirms that this revolution will not stop in the foreseeable future. Mobile and wireless communications applications cover diverse areas including entertainment, industry, biomedicine and medicine, safety and security, and others, which are definitely improving our daily life. Wireless communication networking is a multidisciplinary field addressing different aspects ranging from theoretical analysis through system architecture design to hardware and software implementation. As new applications demand higher data rates, better quality of service and prolonged mobile battery life, new developments, advanced research and new system and circuit designs are necessary to keep pace with market requirements. This book covers the most advanced research and development topics in mobile and wireless communication networks. It is divided into two parts with a total of thirty-four stand-alone chapters covering various areas of wireless communications, including: physical layer and network layer, access methods and scheduling, techniques and technologies, antenna and amplifier design, integrated circuit design, and applications and systems.
These chapters present novel, cutting-edge results and developments related to wireless communication, offering readers the opportunity to enrich their knowledge of specific topics as well as to explore the whole field of rapidly emerging mobile and wireless networks. We hope that this book will be useful for students, researchers and practitioners in their research studies.

    A suite of quantum algorithms for the shortest vector problem

    Cryptography has come to be an essential part of the cybersecurity infrastructure that provides a safe environment for communications in an increasingly connected world. The advent of quantum computing poses a threat to the foundations of the current widely used cryptographic model, since it breaks most of the cryptographic algorithms used to provide confidentiality, authenticity, and more. Consequently, a new set of cryptographic protocols has been designed to be secure against quantum computers, collectively known as post-quantum cryptography (PQC). A forerunner among PQC is lattice-based cryptography, whose security relies upon the hardness of a number of closely related mathematical problems, one of which is known as the shortest vector problem (SVP). In this thesis I describe a suite of quantum algorithms that utilize the energy-minimization principle to attack the shortest vector problem. The algorithms outlined span gate-model and continuous-time quantum computing, and explore methods of parameter optimization via variational methods, which are thought to be effective on near-term quantum computers. The performance of the algorithms is analyzed numerically, analytically, and on quantum hardware where possible. I explain how the results obtained in the pursuit of solving SVP apply more broadly to quantum algorithms seeking to solve general real-world problems, minimize the effect of noise on imperfect hardware, and improve the efficiency of parameter optimization.
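To make the problem concrete, here is a classical brute-force baseline for SVP on a small 2-D lattice: enumerate integer combinations of the basis vectors and keep the shortest nonzero one. The skewed basis is an arbitrary illustrative choice, and the coefficient bound limits the search (the true shortest vector may need larger coefficients); this exponential enumeration is the kind of cost the thesis's quantum energy-minimisation methods aim to beat.

```python
from itertools import product
from math import hypot

def shortest_vector(basis, bound=10):
    """Enumerate c0*b0 + c1*b1 for |c0|,|c1| <= bound and return the
    shortest nonzero lattice vector found (exponential in dimension)."""
    best, best_len = None, float("inf")
    for c0, c1 in product(range(-bound, bound + 1), repeat=2):
        if c0 == 0 and c1 == 0:
            continue
        v = (c0 * basis[0][0] + c1 * basis[1][0],
             c0 * basis[0][1] + c1 * basis[1][1])
        length = hypot(*v)
        if length < best_len:
            best, best_len = v, length
    return best, best_len

basis = [(201, 37), (1648, 297)]   # a deliberately skewed basis
v, norm = shortest_vector(basis)
# The search finds a lattice vector far shorter than either basis vector.
assert norm < hypot(*basis[0]) and norm < hypot(*basis[1])
```

The hardness assumption underlying lattice-based PQC is that no efficient algorithm, classical or quantum, finds such short vectors once the dimension grows to the hundreds.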

    Semantic-Preserving Transformations for Stream Program Orchestration on Multicore Architectures

    As the demand for high performance in big data processing and distributed computing increases, the stream programming paradigm has been revisited for its abundance of parallelism, by virtue of independent actors that communicate via data channels. The synchronous data-flow (SDF) programming model is frequently adopted with stream programming languages for its convenience in expressing stream programs as a set of nodes connected by data channels. The static data rates of the SDF programming model enable program transformations that greatly improve the performance of SDF programs on multicore architectures. The major application domains for SDF programs are digital signal processing, audio, video, graphics kernels, networking, and security. This thesis makes the following three contributions that improve the performance of SDF programs. First, a new intermediate representation (IR) called LaminarIR is introduced. LaminarIR replaces FIFO queues with direct memory accesses to reduce the data communication overhead and explicates data dependencies between producer and consumer nodes. We provide transformations and their formal semantics to convert conventional, FIFO-queue-based program representations to LaminarIR. Second, a compiler framework performs sound, semantics-preserving program transformations from FIFO semantics to LaminarIR. We employ static program analysis to resolve token positions in FIFO queues and replace them with direct memory accesses. Third, a communication-cost-aware program orchestration method establishes a foundation for LaminarIR parallelization on multicore architectures. The LaminarIR framework, which consists of the aforementioned contributions together with the benchmarks used in the experimental evaluation, has been open-sourced to encourage further research on improving the performance of stream programming languages.
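The FIFO-to-direct-access transformation can be sketched in miniature: the same producer/consumer pipeline written first with an explicit FIFO channel, then with the queue resolved away because the token positions are statically known. The actors and rates below are illustrative, not taken from the thesis or its benchmarks.

```python
from collections import deque

def pipeline_fifo(xs):
    """Producer pushes tokens into a FIFO; consumer pops them.
    With rate-1 actors the queue never holds more than one token."""
    channel = deque()
    out = []
    for x in xs:
        channel.append(x * 2)              # producer actor: push 1 token
        out.append(channel.popleft() + 1)  # consumer actor: pop 1 token
    return out

def pipeline_direct(xs):
    """With static SDF rates, the position of every token is known at
    compile time, so the FIFO collapses into a direct value hand-off."""
    return [x * 2 + 1 for x in xs]

# The transformation is semantics-preserving: both pipelines agree.
assert pipeline_fifo([1, 2, 3]) == pipeline_direct([1, 2, 3]) == [3, 5, 7]
```

The direct form eliminates the queue bookkeeping (enqueue, dequeue, index management) and makes the producer-to-consumer data dependency explicit, which is the property the orchestration step exploits.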

    On multiple-antenna communications: signal detection, error exponent and quality of service

    Motivated by the demand for increasing data rates in wireless communication, multiple-antenna communication is becoming a key technology in next-generation wireless systems. This dissertation considers three different aspects of multiple-antenna communication. The first part is signal detection in multiple-input multiple-output (MIMO) communication. Some low-complexity, near-optimal detectors are designed based on an improved version of the Bell Laboratories Layered Space-Time (BLAST) architecture detection and an iterative space-alternating generalized expectation-maximization (SAGE) algorithm. The proposed algorithms can almost achieve the performance of optimal maximum-likelihood detection. Signal detection without channel knowledge (noncoherent) and with co-channel interference is also investigated, and novel solutions with near-optimal performance are proposed. Secondly, the error exponent of distributed multiple-antenna (relay) communication in the wideband regime is computed. Optimal power allocation between the source and relay nodes, as well as geometric relay node placement, is investigated based on the error exponent analysis. Lastly, the quality of service (QoS) of MIMO/single-input single-output (SISO) communication is studied. The tradeoff between end-to-end distortion and transmission buffer delay is derived. Also, the SNR exponent of the distortion is computed for MIMO communication, which provides some insight into the interplay among time diversity, space diversity and the spatial multiplexing gain.
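The layered idea behind BLAST-style detection is ordered successive interference cancellation (SIC): detect the strongest layer, subtract its contribution, then detect the next. The sketch below is a minimal real-valued 2x2 BPSK illustration of that idea using a matched-filter estimate per layer, not the dissertation's improved detectors; channel values are hypothetical.

```python
# Ordered successive interference cancellation (SIC) sketch, in the
# spirit of BLAST-style layered detection. Real 2x2 channel, BPSK.

def sic_detect(H, y):
    # Detect the stronger column first (simple norm-based ordering).
    n0 = H[0][0] ** 2 + H[1][0] ** 2
    n1 = H[0][1] ** 2 + H[1][1] ** 2
    first = 0 if n0 >= n1 else 1
    second = 1 - first
    slice_bpsk = lambda v: 1 if v >= 0 else -1
    # Matched-filter estimate of the first layer.
    s_first = slice_bpsk(H[0][first] * y[0] + H[1][first] * y[1])
    # Cancel its contribution, then detect the remaining layer.
    r = [y[0] - H[0][first] * s_first,
         y[1] - H[1][first] * s_first]
    s = [0, 0]
    s[first] = s_first
    s[second] = slice_bpsk(H[0][second] * r[0] + H[1][second] * r[1])
    return s

H = [[1.0, 0.4],
     [0.1, 0.9]]
x = [-1, 1]                           # transmitted BPSK symbols
y = [H[0][0]*x[0] + H[0][1]*x[1],     # noiseless received signal
     H[1][0]*x[0] + H[1][1]*x[1]]
assert sic_detect(H, y) == x
```

Ordering matters because an error in the first detected layer propagates into the cancellation step, which is why improved BLAST variants invest in better ordering and per-layer filtering.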

    5th EUROMECH nonlinear dynamics conference, August 7-12, 2005 Eindhoven : book of abstracts

