
    On privacy in home automation systems

    Home Automation Systems (HASs) are becoming increasingly popular in newly built as well as existing properties. While offering increased living comfort, resource-saving features and other commodities, most current commercial systems do not protect sufficiently against passive attacks. In this thesis we investigate privacy aspects of Home Automation Systems. We analyse the threats of eavesdropping and traffic analysis attacks, demonstrating the risks of virtually undetectable privacy violations. By taking aspects of criminal and data protection law into account, we give an interdisciplinary overview of privacy risks and challenges in the context of HASs. We present the first framework to formally model privacy guarantees of Home Automation Systems and apply it to two different dummy traffic generation schemes. In a qualitative and quantitative study of these two algorithms, we show how provable privacy protection can be achieved and how privacy and energy efficiency are interdependent. This allows manufacturers to design and build secure Home Automation Systems which protect the users' privacy and which can be arbitrarily tuned to strike a compromise between privacy protection and energy efficiency.
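
    The thesis's two dummy-traffic generation schemes are not reproduced in the abstract; as a minimal sketch of the privacy/energy trade-off they expose, the following generic constant-rate cover-traffic scheduler illustrates the idea. The function name, the period parameter, and the event format are illustrative assumptions, not the thesis's algorithms.

```python
from collections import deque

def constant_rate_schedule(real_events, period, horizon):
    """Emit one message every `period` seconds: a queued real event if one
    is due, otherwise a dummy. An eavesdropper who only sees timing and
    message counts observes the same stream whatever the household does."""
    queue = deque(sorted(real_events))   # timestamps of genuine HAS events
    out, t = [], 0.0
    while t <= horizon:
        if queue and queue[0] <= t:
            out.append((t, "real"))
            queue.popleft()
        else:
            out.append((t, "dummy"))
        t += period
    return out

# A smaller period means lower event latency and stronger cover traffic,
# but more transmissions and therefore higher energy consumption.
stream = constant_rate_schedule([3.2, 7.9, 8.1], period=1.0, horizon=10.0)
print(sum(1 for _, kind in stream if kind == "dummy"), "dummy messages sent")
```

    Tuning the period is precisely the kind of free parametrisation the abstract describes: shorter periods buy stronger cover traffic at the cost of more radio transmissions.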

    Towards Optimal Pre-processing in Leakage Detection

    An attacker or evaluator can detect more information leakage by improving the Signal-to-Noise Ratio (SNR) of the power traces used in the tests. For this purpose, pre-processing techniques such as de-noising and distribution-based trace biasing are used. However, the existing trace biasing schemes cannot accurately express the characteristics of power traces with high SNR, making them unsuitable for leakage detection. Moreover, if the SNR of the power traces is very low, existing de-noising and trace biasing schemes do little to enhance leakage detection. In this paper, a known-key pre-processing tool named Traces Linear Optimal Biasing (TLOB) is proposed, which performs well even on power traces with very low SNR. It accurately evaluates the noise of time samples and provides reliable optimal trace biasing. Experimental results show that TLOB significantly reduces the number of traces needed for detection; correlation coefficients in ρ-tests using TLOB approach 1.00, significantly improving the confidence of the tests. As far as we know, no pre-processing tool is more efficient than TLOB. TLOB is very simple and introduces only limited time and memory overhead. We strongly recommend using it to pre-process traces in side-channel evaluations.
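
    TLOB itself is not specified in the abstract; the sketch below only illustrates the known-key setting it builds on, computing the standard first-order SNR per time sample on synthetic traces. The function name and the toy leakage model are assumptions.

```python
import numpy as np

def snr_per_sample(traces, labels):
    """Classic first-order SNR: variance of the class means (signal) over
    the mean within-class variance (noise), per time sample. `traces` is
    (n_traces, n_samples); `labels` groups traces by a known intermediate
    value, which is available in the known-key evaluation setting."""
    classes = np.unique(labels)
    means = np.array([traces[labels == c].mean(axis=0) for c in classes])
    var_within = np.array([traces[labels == c].var(axis=0) for c in classes])
    return means.var(axis=0) / var_within.mean(axis=0)

# Toy data: a weak data-dependent signal at sample 40, noise elsewhere.
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=2000)
traces = rng.normal(0.0, 1.0, size=(2000, 100))
traces[:, 40] += 0.5 * labels
snr = snr_per_sample(traces, labels)
print(snr.argmax(), snr.max())   # should flag sample 40
```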

    Profiling Good Leakage Models For Masked Implementations

    The leakage model plays a very important role in side-channel attacks: an accurate leakage model greatly improves the efficiency of an attack. However, how to profile a good enough leakage model, or how to measure the accuracy of a leakage model, is seldom studied. Durvaux et al. proposed leakage certification tests to profile good enough leakage models for unmasked implementations, but left leakage model profiling for protected implementations as an open problem. To solve this problem, we propose the first practical higher-order leakage model certification tests for masked implementations. First- and second-order attacks are performed on simulations of serial and parallel implementations of a first-order fixed masking, and a third-order attack is performed on a further simulation of a second-order random masked implementation. The experimental results show that our new tests profile the leakage models accurately.
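
    The paper's certification tests are not reproduced here; as a hedged sketch of the kind of higher-order attack they are evaluated with, the following code mounts a second-order CPA on simulated traces of a first-order Boolean masking, assuming Hamming-weight leakage and a toy S-box (both assumptions, not the paper's setup).

```python
import numpy as np

HW = np.array([bin(x).count("1") for x in range(256)])   # Hamming weights
rng = np.random.default_rng(1)
SBOX_TOY = rng.permutation(256)          # stand-in for a real S-box

def simulate_masked(n, key, sigma=1.0):
    """First-order Boolean masking: the device leaks HW(Sbox(p^key)^m) and
    HW(m) at two time samples, so no single sample depends on the key."""
    p = rng.integers(0, 256, n)
    m = rng.integers(0, 256, n)
    l1 = HW[SBOX_TOY[p ^ key] ^ m] + rng.normal(0, sigma, n)
    l2 = HW[m] + rng.normal(0, sigma, n)
    return p, l1, l2

def second_order_cpa(p, l1, l2):
    """Centered-product combining fuses the two shares into one sample that
    correlates with HW(Sbox(p^key)); rank all 256 key guesses by |rho|."""
    comb = (l1 - l1.mean()) * (l2 - l2.mean())
    scores = [abs(np.corrcoef(HW[SBOX_TOY[p ^ k]], comb)[0, 1])
              for k in range(256)]
    return int(np.argmax(scores))

p, l1, l2 = simulate_masked(20000, key=0x3C)
print(hex(second_order_cpa(p, l1, l2)))  # should print 0x3c given enough traces
```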

    An Unsupervised Cluster: Learning Water Customer Behavior Using Variation of Information on a Reconstructed Phase Space

    The unsupervised clustering algorithm described in this dissertation addresses the need to divide a population of water utility customers into groups based on their similarities and differences, using only the measured flow data collected by water meters. After clustering, the groups represent customers with similar consumption behavior patterns and provide insight into 'normal' and 'unusual' customer behavior patterns. This research focuses on individually metered water utility customers and includes both residential and commercial customer accounts serviced by utilities within North America. The contributions of this dissertation not only represent a novel academic work, but also solve a practical problem for the utility industry. This dissertation introduces a method of agglomerative clustering using information-theoretic distance measures on Gaussian mixture models within a reconstructed phase space. The clustering method accommodates a utility's limited human, financial, computational, and environmental resources. The proposed weighted variation of information distance measure for comparing Gaussian mixture models emphasizes behaviors whose statistical distributions are compact over those with large variation, and contributes a novel addition to existing comparison options.
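
    The dissertation's weighted variation-of-information measure is not given in the abstract; the sketch below only illustrates the shape of the pipeline with generic stand-ins: a time-delay embedding for the reconstructed phase space, scikit-learn Gaussian mixtures, and a Monte Carlo symmetrized KL divergence substituted for the proposed distance. All names and parameters are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def delay_embed(series, dim=2, lag=1):
    """Reconstructed phase space: map a scalar flow series to points
    (x_t, x_{t+lag}, ...) so that consumption dynamics become geometry."""
    n = len(series) - (dim - 1) * lag
    return np.column_stack([series[i * lag:i * lag + n] for i in range(dim)])

def mc_kl(gm_p, gm_q, n=5000):
    """Monte Carlo estimate of KL(p||q) between two fitted GMMs (no closed
    form exists); used here as a generic information-theoretic distance."""
    x, _ = gm_p.sample(n)
    return float(np.mean(gm_p.score_samples(x) - gm_q.score_samples(x)))

rng = np.random.default_rng(0)
cust_a = np.sin(np.linspace(0, 40, 2000)) + 0.1 * rng.normal(size=2000)
cust_b = rng.gamma(2.0, 1.0, size=2000)            # a very different pattern

gms = [GaussianMixture(3, random_state=0).fit(delay_embed(c))
       for c in (cust_a, cust_b)]
dist = mc_kl(gms[0], gms[1]) + mc_kl(gms[1], gms[0])   # symmetrized
print("dissimilarity:", round(dist, 2))
# Pairwise distances like this one feed the agglomerative clustering step.
```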

    Formal software methods for the security of cryptographic implementations

    Implementations of cryptosystems are vulnerable to physical attacks and thus need to be protected against them. Of course, malfunctioning protections are useless. Formal methods help to develop systems while assessing their conformity to a rigorous specification. The first goal of my thesis, and its innovative aspect, is to show that formal methods can be used to prove not only the principle of the countermeasures according to a model, but also their implementations, since that is where the physical vulnerabilities are exploited. My second goal is the proof and the automation of the protection techniques themselves, because handwritten security code is error-prone.
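
    As a toy illustration of what proving a countermeasure's implementation can mean, the sketch below brute-forces a first-order probing check on a small Boolean-masked AND gadget: every single wire's distribution over the uniform masks must be independent of the secrets. The exhaustive enumeration is a stand-in assumption; the thesis works with formal proofs rather than enumeration, and the gadget shown is a textbook example, not one from the thesis.

```python
from itertools import product

def wires(a, b, ma, mb, r):
    """All intermediate wires of a small first-order masked AND gadget
    operating on Boolean shares (a^ma, ma) and (b^mb, mb), refreshed by r.
    The output shares are (c0, r), since c0 ^ r == a & b."""
    ax, bx = a ^ ma, b ^ mb
    t0, t1, t2, t3 = ax & bx, ax & mb, ma & bx, ma & mb
    s0 = r ^ t0          # partial sums matter: each must stay masked by r
    s1 = s0 ^ t1
    s2 = s1 ^ t2
    c0 = s2 ^ t3
    return [ax, bx, t0, t1, t2, t3, s0, s1, s2, c0]

def first_order_probing_secure():
    """Exhaustive check: for every wire, the distribution over the uniform
    randomness (ma, mb, r) must be identical for all secret values (a, b)."""
    for w in range(len(wires(0, 0, 0, 0, 0))):
        seen = set()
        for a, b in product((0, 1), repeat=2):
            hist = [0, 0]
            for ma, mb, r in product((0, 1), repeat=3):
                hist[wires(a, b, ma, mb, r)[w]] += 1
            seen.add(tuple(hist))
        if len(seen) != 1:
            return False, w   # wire w leaks information about (a, b)
    return True, None

print(first_order_probing_secure())   # expected: (True, None)
```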

    Robustness Analysis of Controllable-Polarity Silicon Nanowire Devices and Circuits

    Substantial downscaling of the feature size in current CMOS technology has confronted digital designers with serious challenges, including the short-channel effect and high leakage power. To address these problems, emerging nano-devices such as the Silicon NanoWire FET (SiNWFET) are being introduced by the research community. These devices keep pursuing Moore's Law by improving channel electrostatic controllability, thereby reducing the off-state leakage current. In addition to these improvements, recent developments have introduced devices with enhanced capabilities, such as Controllable-Polarity (CP) SiNWFETs, which make them very interesting for compact logic cells and arithmetic circuits. At advanced technology nodes, the physical controls during the fabrication process of nanometer devices cannot be precisely determined because of technology fluctuations. Consequently, the structural parameters of fabricated circuits can differ significantly from their nominal values. Moreover, giving an a priori conclusion on the variability of advanced technologies for emerging nanoscale devices is a difficult task, and novel estimation methodologies are required. This is a necessity to guarantee the performance and reliability of future integrated circuits. Statistical analysis of process variation requires a great amount of numerical data for nanoscale devices. This introduces a serious challenge for variability analysis of emerging technologies due to the lack of fast simulation models. On the one hand, the development of accurate compact models entails numerous tests and costly measurements on fabricated devices. On the other hand, Technology Computer-Aided Design (TCAD) simulations, which can provide precise information about device behavior, are too slow to generate large enough data sets in a timely manner. In this research, a fast methodology for generating data sets for variability analysis is introduced. This methodology combines TCAD simulations with a learning algorithm to alleviate the time complexity of data set generation. Another formidable challenge for variability analysis of large circuits is the growing number of process variation sources. Utilizing parameterized models is becoming a necessity for chip design and verification, but the high dimensionality of the parameter space imposes a serious problem. Unfortunately, the available dimensionality reduction techniques cannot be employed, for three main reasons: lack of accuracy, dependence on the distribution of the data points, and incompatibility with device and circuit simulators. We propose a novel parameter selection technique for modeling process and performance variation that efficiently addresses these problems. Appropriate testing to capture manufacturing defects plays an important role in the quality of integrated circuits. Compared to conventional CMOS, emerging nano-devices such as CP-SiNWFETs involve different fabrication process steps, so current fault models must be extended for defect detection. In this research, we extract the possible fabrication defects and then propose a fault model for this technology. We also provide test methods for detecting manufacturing defects in various types of CP-SiNWFET logic gates. Finally, we use the obtained fault model to build fault-tolerant arithmetic circuits with several properties superior to those of their competitors.
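
    A minimal sketch of the data set generation idea, under stated assumptions: a cheap toy function stands in for an expensive TCAD run, a scikit-learn random forest plays the role of the unnamed learning algorithm, and its feature importances hint at the proposed parameter selection. None of this is the thesis's actual methodology.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Stand-in for an expensive TCAD run: device delay as an unknown function
# of process parameters (e.g., nanowire diameter, gate length, work function).
def tcad_run(x):
    return 1.0 + 0.8 * x[0] ** 2 + 0.3 * x[1] + 0.01 * x[2] + 0.05 * np.sin(5 * x[0])

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, size=(200, 3))        # a few hundred slow simulations
y_train = np.array([tcad_run(x) for x in X_train])

surrogate = RandomForestRegressor(400, random_state=0).fit(X_train, y_train)

# Cheap Monte Carlo over process variation using the learned surrogate.
X_mc = rng.normal(0, 1, size=(100_000, 3))
delays = surrogate.predict(X_mc)
print(f"mean={delays.mean():.3f}  sigma={delays.std():.3f}")

# Feature importances suggest which parameters to keep when reducing the
# dimensionality of the variation space.
print(dict(zip(["d_nw", "L_g", "phi_m"], surrogate.feature_importances_.round(3))))
```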

    Privacy-preserving machine learning system at the edge

    Data privacy in machine learning has become an urgent problem, as machine learning develops rapidly and its large attack surface is increasingly explored. Pre-trained deep neural networks are increasingly deployed in smartphones and other edge devices for a variety of applications, leading to potential disclosures of private information. In collaborative learning, participants keep private data locally and communicate only deep neural networks updated on their local data, but the private information encoded in the networks' gradients can still be extracted by adversaries. This dissertation investigates privacy leakage from neural networks and proposes privacy-preserving machine learning systems for edge devices. Firstly, a systematization of knowledge is conducted to identify the key challenges and the existing and adaptable solutions. Then a framework is proposed to measure, based on the generalization error, the amount of sensitive information memorized in each layer's weights of a neural network. Results show that, when considered individually, the last layers encode more information from the training data than the first layers. To protect this sensitive information in weights, DarkneTZ is proposed: a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against neural networks. The performance of DarkneTZ, including CPU execution time, memory usage, and power consumption, is evaluated using two small and six large image classification models. Because of the limited memory of the edge device's TEE, model layers are partitioned into the more sensitive layers (executed inside the device's TEE) and a set of layers executed in the untrusted part of the operating system. Results show that hiding even a single layer provides reliable model privacy and defends against state-of-the-art membership inference attacks, with only 3% performance overhead. This thesis then extends the investigation from neural network weights (in on-device machine learning deployment) to gradients (in collaborative learning). An information-theoretical framework is proposed, by adapting usable information theory and treating the attack outcome as a probability measure, to quantify private information leakage from network gradients. The private original information and latent information are localized in a layer-wise manner. This work then performs sensitivity analysis of the gradients with respect to private information to further explore the underlying cause of information leakage. Numerical evaluations are conducted on six benchmark datasets and four well-known networks, and the impact of training hyper-parameters and defense mechanisms is also measured. Last but not least, to limit the privacy leakage in gradients, I propose and implement a Privacy-preserving Federated Learning (PPFL) framework for mobile systems. TEEs are utilized on clients for local training, and on servers for secure aggregation, so that model/gradient updates are hidden from adversaries. This work leverages greedy layer-wise training to train each model layer inside the trusted area until it converges. The performance evaluation of the implementation shows that PPFL significantly improves privacy by defending against data reconstruction, property inference, and membership inference attacks, while incurring small communication and client-side system overheads.
    This thesis offers a better understanding of the sources of private information in machine learning and provides frameworks that guarantee privacy while achieving ML model utility and system overhead comparable to regular machine learning frameworks.
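
    A minimal sketch of the model-partitioning idea behind DarkneTZ, with toy numpy layers and both worlds simulated as ordinary functions; in a real deployment the trusted half would execute inside a TEE such as ARM TrustZone, and the class and function names here are illustrative.

```python
import numpy as np

class Layer:
    """Toy dense layer with ReLU; stands in for one layer of a real network."""
    def __init__(self, n_in, n_out, rng):
        self.w = rng.normal(0, 0.1, (n_in, n_out))
    def __call__(self, x):
        return np.maximum(x @ self.w, 0.0)

def partitioned_inference(layers, x, split):
    """Split inference (sketch): the first `split` layers run in the normal
    world; the remaining, more privacy-sensitive layers would run inside the
    TEE. Both halves are plain functions here - the point is that only
    `hidden` ever crosses the trust boundary."""
    for layer in layers[:split]:         # untrusted world (REE)
        x = layer(x)
    hidden = x                           # the only datum the TEE receives
    for layer in layers[split:]:         # trusted world (TEE)
        hidden = layer(hidden)
    return hidden

rng = np.random.default_rng(0)
layers = [Layer(16, 16, rng) for _ in range(6)]
y = partitioned_inference(layers, rng.normal(size=(1, 16)), split=5)
print(y.shape)   # hiding even the final layer blunts membership inference
```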

    NASA Tech Briefs, November/December 1986

    Get PDF
    Topics include: NASA TU Services; New Product Ideas; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Fabrication Technology; Machinery; Mathematics and Information Sciences; Life Sciences