5,103 research outputs found

    An accurate analysis for guaranteed performance of multiprocessor streaming applications

    For more than a decade, consumer electronics devices have supported entertainment, educational, and telecommunication tasks built on multimedia streaming applications, i.e., applications that process streams of audio and video samples in digital form. Multimedia capabilities are expected to become more and more commonplace in portable devices. This leads to challenges with respect to cost efficiency and quality. This thesis contributes models and analysis techniques for improving the cost efficiency, and therefore also the quality, of multimedia devices. Portable consumer electronics devices should offer flexible functionality on the one hand and low power consumption on the other. These two requirements conflict. Therefore, we focus on a class of hardware that represents a good trade-off between them, namely domain-specific multiprocessor systems-on-chip (MP-SoCs). Our research contributes to dynamic (i.e., run-time) optimization of MP-SoC system metrics. The central question in this area is how to ensure that real-time constraints are satisfied while a metric of interest, such as perceived multimedia quality or power consumption, is optimized. In these cases, we speak of quality-of-service (QoS) and power management, respectively. In this thesis, we pursue real-time constraint satisfaction that is guaranteed by the system by construction and proven mainly through analytical reasoning, an approach often taken in real-time systems to ensure reliable performance. The performance analysis therefore has to be conservative, i.e., it has to use pessimistic assumptions about the unknown conditions that can negatively influence system performance. We adopt this hypothesis as the foundation of this work. The subject of this thesis is thus the analysis of guaranteed performance for multimedia applications running on multiprocessors. It is important to note that our conservative approach differs essentially from considering only the worst-case state of the system. Unlike the worst-case approach, our approach is dynamic, i.e., it makes use of run-time characteristics of the input data and the environment of the application. The main purpose of our performance analysis method is to guide run-time optimization. Typically, a resource or quality manager predicts the execution time, i.e., the time it takes the system to process a certain number of input data samples. When the execution times get smaller, because the execution time depends on the input data, the manager can adjust the control parameter for the metric of interest such that the metric improves while the system gets slower; for power optimization, that means switching to a low-power mode. If execution times grow, the manager can set parameters so that the system gets faster; for QoS management, for example, the application can be switched to a different quality mode with some degradation in perceived quality. The real-time constraints are then never violated while the metrics of interest are kept as good as possible. Unfortunately, maintaining system metrics such as power and quality at an optimal level conflicts with our main requirement of providing performance guarantees, because guarantees require giving up some quality or power consumption. Therefore, the performance analysis approach developed in this thesis is not only conservative but also accurate, so that the optimization of the metric of interest does not suffer too much from this conservatism.
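To make the management loop concrete, the sketch below shows one way such a run-time manager could pick an operating point: choose the cheapest mode whose conservative execution-time bound, scaled by the predicted input-dependent load, still meets the deadline. The mode table, load factor, and numbers are hypothetical illustrations, not values from the thesis.

```python
# Hypothetical run-time manager: pick the cheapest power mode whose
# *conservative* execution-time bound still meets the real-time deadline.

# (mode name, conservative bound in ms to process one batch of samples),
# ordered from cheapest (slowest) to most expensive (fastest)
MODES = [
    ("low_power",  9.0),
    ("mid_power",  6.0),
    ("high_power", 4.0),
]

def pick_mode(predicted_load: float, deadline_ms: float) -> str:
    """Return the cheapest mode whose bound, scaled by the predicted
    input-dependent load factor, provably meets the deadline."""
    for mode, bound in MODES:
        if bound * predicted_load <= deadline_ms:
            return mode
    return MODES[-1][0]  # nothing provably fits: fall back to the fastest mode

print(pick_mode(predicted_load=1.2, deadline_ms=8.0))  # -> mid_power
```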
This is not trivial to realize when two factors are combined: parallel execution on multiple processors and dynamic variation of the data-dependent execution delays. We achieve the goal of conservative and accurate performance estimation for an important class of multiprocessor platforms and multimedia applications, and our performance analysis technique is realizable in practice in QoS or power management setups. We consider a generic MP-SoC platform that runs a dynamic set of applications, each application possibly using multiple processors. We assume that the applications are independent, although it is possible to relax this requirement in the future. To support real-time constraints, we require that the platform can provide guaranteed computation, communication, and memory budgets for applications. Following important trends in system-on-chip communication, we support both global buses and networks-on-chip. We represent every application as a homogeneous synchronous dataflow (HSDF) graph, where the application tasks are modeled as graph nodes, called actors. We allow dynamic data-dependent actor execution delays, which makes HSDF graphs very useful for expressing modern streaming applications. Our reason to consider HSDF graphs is that they provide a good basic foundation for analytical performance estimation. In this setup, this thesis provides three major contributions:
1. Given an application mapped to an MP-SoC platform, given the performance guarantees for the individual computation units (the processors) and the communication unit (the network-on-chip), and given constant actor execution delays, we derive the throughput and the execution time of the system as a whole.
2. Given a mapped application and platform performance guarantees as in the previous item, we extend our approach from constant actor execution delays to dynamic data-dependent actor delays.
3. We propose a global implementation trajectory that starts from the application specification and goes through design-time and run-time phases. It uses an extension of the HSDF model of computation to reflect the design decisions made along the trajectory. We present our model and trajectory not only to put the first two contributions into the right context, but also to present our vision on the different parts of the trajectory, making a complete and consistent story.
Our first contribution uses the idea of so-called IPC (inter-processor communication) graphs known from the literature, whereby a single model of computation (i.e., HSDF graphs) is used to model not only the computation units, but also the communication unit (the global bus or the network-on-chip) and the FIFO (first-in-first-out) buffers that form a 'glue' between the computation and communication units. We were the first to propose HSDF graph structures for modeling bounded FIFO buffers and guaranteed-throughput network connections for network-on-chip communication in MP-SoCs. As a result, our HSDF models enable the formalization of the on-chip FIFO buffer capacity minimization problem under a throughput constraint as a graph-theoretic problem. Using HSDF graphs to formalize that problem helps to find the performance bottlenecks in a given solution and to improve that solution. To demonstrate this, we use a JPEG decoder application case study. We also show that, assuming constant actor delays (worst-case for the given JPEG image), we can predict execution times of JPEG decoding on two processors with an accuracy of 21%.
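The throughput derivation in the first contribution builds on a classical property of HSDF graphs: with constant actor delays, steady-state throughput is bounded by the inverse of the maximum cycle ratio, i.e., the maximum over all cycles of the total actor delay on the cycle divided by the number of initial tokens on it. The brute-force sketch below computes that bound for a made-up three-actor graph; it illustrates the principle only and is not the thesis's analysis tool.

```python
# Toy maximum-cycle-ratio computation for a small HSDF graph.
# Graph, actor delays and token counts are illustrative.

from itertools import permutations

delay = {"src": 2.0, "dct": 5.0, "vlc": 3.0}          # actor delays
edges = {                                             # (u, v) -> initial tokens
    ("src", "dct"): 0, ("dct", "vlc"): 0,
    ("vlc", "src"): 2,                                # back-edge models a FIFO of capacity 2
    ("dct", "dct"): 1,                                # self-edge: actor executions are sequential
}

def max_cycle_ratio():
    actors = list(delay)
    best = 0.0
    # brute force over all simple cycles; fine for small example graphs
    for r in range(1, len(actors) + 1):
        for cycle in permutations(actors, r):
            hops = list(zip(cycle, cycle[1:] + cycle[:1]))
            if all(h in edges for h in hops):
                tokens = sum(edges[h] for h in hops)
                if tokens > 0:
                    best = max(best, sum(delay[a] for a in cycle) / tokens)
    return best

mcr = max_cycle_ratio()
print(f"max cycle ratio = {mcr}, throughput bound = {1 / mcr} iterations/time unit")
```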
Our second contribution is based on an extension of the scenario approach. This approach is based on the observation that the dynamic behavior of an application is typically composed of a limited number of sub-behaviors, i.e., scenarios, that have similar resource requirements, i.e., similar actor execution delays in the context of this thesis. Previous work on scenarios treats only single-processor applications or multiprocessor applications that do not exploit all the flexibility of the HSDF model of computation. We develop new scenario-based techniques in the context of HSDF graphs to derive the timing overlap between different scenarios, which is very important for achieving good accuracy for general HSDF graphs executing on multiprocessors. We exploit this idea in an application case study, the MPEG-4 arbitrarily-shaped video decoder, and demonstrate execution time prediction with an average accuracy of 11%. To the best of our knowledge, for the given setup, no other existing performance analysis technique can provide comparable accuracy together with performance guarantees.
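As a toy illustration of the scenario idea (deliberately omitting the inter-scenario timing overlap that the thesis's technique exploits), each frame could be classified into a scenario with a precomputed conservative bound, and the per-frame bounds summed into a naive conservative estimate. The scenario names and numbers below are invented.

```python
# Naive scenario-based conservative bound: sum precomputed per-scenario
# worst-case frame times. (The thesis tightens this by accounting for
# pipelined overlap between consecutive scenarios, omitted here.)

SCENARIO_BOUND_MS = {"I_frame": 40.0, "P_frame": 25.0, "skipped": 5.0}

def conservative_execution_time(frame_scenarios):
    """Upper bound on total execution time for a sequence of frames,
    given the detected scenario of each frame."""
    return sum(SCENARIO_BOUND_MS[s] for s in frame_scenarios)

frames = ["I_frame", "P_frame", "P_frame", "skipped", "P_frame"]
print(conservative_execution_time(frames), "ms")  # -> 120.0 ms
```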

    Towards Data-Driven Autonomics in Data Centers

    Continued reliance on human operators for managing data centers is a major impediment to their ever reaching extreme dimensions. Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using generated data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster, with the goal of building and evaluating a predictive model for node failures. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing machine state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers, each trained on these features, to predict whether machines will fail in a future 24-hour window. Our evaluation reveals that if we limit false positive rates to 5%, we can achieve true positive rates between 27% and 88%, with precision varying between 50% and 72%. We discuss the practicality of including our predictive model as the central component of a data-driven autonomic manager and operating it on-line with live data streams (rather than off-line on data logs). All of the scripts used for the BigQuery and classification analyses are publicly available from the authors' website.
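A sketch of this kind of pipeline, using scikit-learn on synthetic stand-in data (the paper's actual features come from BigQuery over the Google traces), might look as follows; the averaged forest scores are thresholded so the false positive rate stays at or below 5%.

```python
# Ensemble of Random Forests with a decision threshold chosen for FPR <= 5%.
# Features and labels here are synthetic placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                       # stand-in machine features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# ensemble: each forest trained on a bootstrap resample of the training set
forests = []
for seed in range(5):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    rf = RandomForestClassifier(n_estimators=100, random_state=seed)
    rf.fit(X_tr[idx], y_tr[idx])
    forests.append(rf)

scores = np.mean([rf.predict_proba(X_te)[:, 1] for rf in forests], axis=0)

# pick the largest threshold index whose false positive rate is still <= 5%
fpr, tpr, thr = roc_curve(y_te, scores)
i = np.searchsorted(fpr, 0.05, side="right") - 1
print(f"threshold={thr[i]:.3f}  FPR={fpr[i]:.3f}  TPR={tpr[i]:.3f}")
```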

    VirtFogSim: A parallel toolbox for dynamic energy-delay performance testing and optimization of 5G Mobile-Fog-Cloud virtualized platforms

    It is expected that the pervasive deployment of multi-tier 5G-supported Mobile-Fog-Cloud computing platforms will constitute an effective means to support the real-time execution of future Internet applications by resource- and energy-limited mobile devices. Increasing interest in this emerging networking-computing technology demands the optimization and performance evaluation of several parts of the underlying infrastructures. However, field trials are challenging due to their operational costs, and in any case, the obtained results could be difficult to repeat and customize. Indeed, these emerging Mobile-Fog-Cloud ecosystems still lack customizable software tools for the performance simulation of their computing-networking building blocks. Motivated by these considerations, in this contribution we present VirtFogSim, a MATLAB-supported software toolbox that allows the dynamic joint optimization and tracking of the energy and delay performance of Mobile-Fog-Cloud systems for the execution of applications described by general Directed Application Graphs (DAGs). In a nutshell, the main distinctive features of the proposed VirtFogSim toolbox are that: (i) it allows the joint dynamic energy-aware optimization of the placement of application tasks and the allocation of the needed computing-networking resources under hard constraints on acceptable overall execution times; (ii) it allows the repeatable and customizable simulation of the resulting energy-delay performance of the overall system; (iii) it allows the dynamic tracking of the performed resource allocation under time-varying operational environments, such as those typically featured by mobile applications; (iv) it is equipped with a user-friendly Graphical User Interface (GUI) that supports a number of graphic formats for data rendering; and (v) its MATLAB code is optimized for running atop multi-core parallel execution platforms. To check both the actual optimization and scalability capabilities of the VirtFogSim toolbox, a number of experimental setups featuring different use cases and operational environments are simulated and their performances compared.
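VirtFogSim itself is a MATLAB toolbox; the fragment below is only a language-neutral Python sketch of the core bookkeeping such a simulator performs: given a placement of each DAG task onto the Mobile, Fog, or Cloud tier, accumulate execution energy and delay, add network costs on every edge whose endpoints sit on different tiers, and check the hard deadline. All names and numbers are illustrative.

```python
# Toy energy-delay evaluation of one task placement over a three-task chain.

# per-tier cost of executing one unit of work: (energy J/unit, delay s/unit)
TIER = {"mobile": (0.8, 1.0), "fog": (0.3, 0.4), "cloud": (0.1, 0.2)}
NET  = {"energy_per_mb": 0.05, "delay_per_mb": 0.02}   # inter-tier link costs

tasks = {"sense": 1.0, "filter": 3.0, "infer": 8.0}    # work units per task
dag   = [("sense", "filter", 2.0), ("filter", "infer", 5.0)]  # (u, v, MB)

def evaluate(placement, deadline_s):
    energy = delay = 0.0
    for task, work in tasks.items():
        e, d = TIER[placement[task]]
        energy += e * work
        delay  += d * work               # tasks form a serial chain: delays add
    for u, v, mb in dag:
        if placement[u] != placement[v]:  # edge crosses tiers: pay network cost
            energy += NET["energy_per_mb"] * mb
            delay  += NET["delay_per_mb"] * mb
    return energy, delay, delay <= deadline_s

print(evaluate({"sense": "mobile", "filter": "fog", "infer": "cloud"}, 6.0))
```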

    Proposal of an adaptive infotainment system depending on driving scenario complexity

    The PhD research project is framed within the plan of industrial doctorates of the “Generalitat de Catalunya”. During the investigation, most of the work was carried out at the facilities of the vehicle manufacturer SEAT, specifically in the information and entertainment (infotainment) department, in continuous cooperation with the telematics department of the UPC. The main objective of the project was the design and validation of an adaptive infotainment system that depends on driving complexity. The system was created with the purpose of improving the driver experience while guaranteeing a proper level of road safety. Given the increasing number of applications and services available in current infotainment systems, it becomes necessary to devise a system capable of balancing these two counterparts. The most relevant parameters that can be used to balance them while driving are: the type of services offered, the interfaces available for interacting with the services, the complexity of driving, and the profile of the driver. The study can be divided into two main development phases, each of which produced a real, physical building block that became part of the final system; the final system was integrated in a vehicle and validated in real driving conditions. The first phase consisted of the creation of a model capable of estimating driving complexity based on a set of driving-related variables. The model was built using machine learning methods, and the dataset necessary to create it was collected from several driving routes carried out by different participants. This phase yielded a model capable of estimating, with satisfactory accuracy, the complexity of the road using variables that are easily extractable in any modern vehicle, which simplifies the implementation of this algorithm in current vehicles. The second phase consisted of the classification of a set of principles that allow the design of an adaptive infotainment system based on road complexity. These principles are defined based on previous research in the field of usability and user experience of graphical interfaces. According to these principles, a real adaptive infotainment system with the most commonly used functionalities (navigation, radio, and media) was designed and integrated in a real vehicle. The developed system was able to adapt the presentation of its content according to the estimate of driving complexity given by the block developed in phase one. The adaptive system was validated in real driving scenarios by several participants, and the results showed a high level of acceptance of and satisfaction with this adaptive infotainment. As a starting point for future research, a proof of concept was carried out to integrate new interfaces into a vehicle. The interface used as reference was a head-mounted display that offered information redundant with the instrument cluster. Tests with participants served to understand how users perceive the introduction of new technologies and how objective benefits can be blurred by initial biases.
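As a rough sketch of phase one, a classifier could map easily extractable vehicle signals to a complexity label that phase two's interface policy consumes; the feature set, labels, and model choice below are hypothetical, not the thesis's exact design.

```python
# Hypothetical driving-complexity estimator feeding an adaptive UI policy.

from sklearn.ensemble import RandomForestClassifier

# each row: [speed_kmh, steering_rate, brake_events_per_min, wipers_on]
X_train = [
    [120, 0.1, 0.0, 0],   # calm highway driving
    [ 35, 2.5, 4.0, 0],   # dense urban traffic
    [ 60, 1.0, 1.0, 1],   # rain, moderate load
    [ 90, 0.3, 0.5, 0],
]
y_train = ["low", "high", "medium", "low"]   # driving-complexity labels

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
complexity = clf.predict([[40, 2.0, 3.0, 0]])[0]

# phase two: the infotainment HMI adapts its content to the estimate
UI_POLICY = {"low": "full UI", "medium": "simplified UI", "high": "minimal UI"}
print(complexity, "->", UI_POLICY[complexity])
```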

    Learning workload behaviour models from monitored time-series for resource estimation towards data center optimization

    In recent years there has been extraordinary growth in the demand for Cloud Computing resources executed in Data Centers. Modern Data Centers are complex systems that need management. As distributed computing systems grow, and workloads benefit from such computing environments, the management of such systems increases in complexity. The complexity of resource usage and power consumption of cloud-based applications makes understanding application behavior through expert examination difficult. The difficulty increases when applications are seen as "black boxes", where only external monitoring can be retrieved. Furthermore, given the wide variety of scenarios and applications, automation is required. To deal with such complexity, Machine Learning methods become crucial to facilitate tasks that can be automatically learned from data. Firstly, this thesis proposes an unsupervised learning technique to learn high-level representations from workload traces. This technique provides a fast methodology to characterize workloads as sequences of abstract phases. The learned phase representation is validated on a variety of datasets and used in an auto-scaling task, where we show that it can be applied in a production environment, achieving better performance than other state-of-the-art techniques. Secondly, this thesis proposes a neural architecture, based on Sequence-to-Sequence models, that provides the expected resource usage of applications sharing hardware resources. The proposed technique gives resource managers the ability to predict resource usage over time as well as the completion time of the running applications, and yields lower prediction error than other popular Machine Learning methods. Thirdly, this thesis proposes a technique for auto-tuning Big Data workloads from the available tunable parameters. The proposed technique gathers information from the logs of an application, generating a feature descriptor that captures relevant information about the application to be tuned. Using this information, we demonstrate that performance models can generalize up to 34% better than other state-of-the-art solutions. Moreover, the search time to find a suitable solution can be drastically reduced, with up to a 12x speedup and almost equal quality results compared with modern solutions. These results prove that modern learning algorithms, with the right feature information, provide powerful techniques to manage resource allocation for applications running in cloud environments. This thesis demonstrates that learning algorithms enable relevant optimizations in Data Center environments, where applications are externally monitored and careful resource management is paramount to using computing resources efficiently. We demonstrate this thesis in three areas that orbit around resource management in server environments.
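A minimal sketch of the first contribution's idea, clustering fixed-size windows of a monitored time-series so a trace becomes a sequence of abstract phase labels, could look like this (the thesis's actual representation-learning method is not reproduced here):

```python
# Toy phase characterization: cluster windowed trace statistics with k-means.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# synthetic CPU-usage trace alternating between two regimes
trace = np.concatenate([rng.normal(20, 2, 300),
                        rng.normal(80, 5, 300),
                        rng.normal(20, 2, 300)])

WIN = 30
windows = trace[: len(trace) // WIN * WIN].reshape(-1, WIN)
# summarize each window by simple statistics before clustering
feats = np.column_stack([windows.mean(axis=1), windows.std(axis=1)])

phases = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
print("phase sequence:", phases)   # e.g. [0 0 ... 1 1 ... 0 0]
```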


    DeepSecure: Scalable Provably-Secure Deep Learning

    This paper proposes DeepSecure, a novel framework that enables scalable execution of state-of-the-art Deep Learning (DL) models in a privacy-preserving setting. DeepSecure targets scenarios in which none of the involved parties, neither the cloud servers that hold the DL model parameters nor the delegating clients who own the data, is willing to reveal its information. Our framework is the first to enable accurate and scalable DL analysis of data generated by distributed clients without sacrificing security for efficiency. The secure DL computation in DeepSecure is performed using Yao's Garbled Circuit (GC) protocol. We devise GC-optimized realizations of various components used in DL. Our optimized implementation achieves more than 58-fold higher throughput per sample compared with the best-known prior solution. In addition to our optimized GC realization, we introduce a set of novel low-overhead pre-processing techniques that further reduce the overall GC runtime in the context of deep learning. Extensive evaluations of various DL applications demonstrate up to two orders of magnitude of additional runtime improvement achieved as a result of our pre-processing methodology. The paper also provides mechanisms to securely delegate GC computations to a third party in constrained embedded settings.
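DeepSecure's GC engine is far more elaborate, but the essence of Yao's protocol can be shown on a single garbled AND gate: the garbler encrypts the output-wire label under each pair of input-wire labels, and the evaluator, holding exactly one label per input wire, can decrypt exactly one row. The toy sketch below (using a hash as the encryption primitive and omitting standard optimizations such as point-and-permute) is illustrative only.

```python
# Toy garbled AND gate: one garbled table row per input combination.

import hashlib
import os
import random

def H(*keys):
    h = hashlib.sha256()
    for k in keys:
        h.update(k)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def wire():
    # one random 32-byte label per possible bit value on the wire
    return [os.urandom(32), os.urandom(32)]

a, b, c = wire(), wire(), wire()

# garbler: encrypt the correct output label under each pair of input labels
table = [xor(H(a[bit_a], b[bit_b]), c[bit_a & bit_b])
         for bit_a in (0, 1) for bit_b in (0, 1)]
random.shuffle(table)  # hide which row corresponds to which inputs

def evaluate(label_a, label_b):
    """Evaluator: holds one label per input wire, decrypts exactly one row."""
    pad = H(label_a, label_b)
    for row in table:
        candidate = xor(pad, row)
        if candidate in c:  # toy validity check; real GC uses point-and-permute
            return candidate
    raise ValueError("no row decrypted")

assert evaluate(a[1], b[1]) == c[1]  # AND(1, 1) = 1
assert evaluate(a[0], b[1]) == c[0]  # AND(0, 1) = 0
```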

    Worst-case temporal analysis of real-time dynamic streaming applications
