13 research outputs found

    NoPeek: Information leakage reduction to share activations in distributed deep learning

    For distributed machine learning with sensitive data, we demonstrate how minimizing the distance correlation between raw data and intermediary representations reduces leakage of sensitive raw-data patterns across client communications while maintaining model accuracy. Leakage, measured as the distance correlation between input and intermediate representations, is the risk that raw data can be inverted from those intermediary representations; this risk can prevent client entities that hold sensitive data from using distributed deep learning services. Our method, based on reducing the distance correlation between raw data and learned representations during training and inference on image datasets, is resilient to such reconstruction attacks: it prevents reconstruction of raw data while retaining the information required to sustain good classification accuracy.
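    The leakage measure in this abstract is the empirical distance correlation between raw inputs and intermediate representations. A minimal numerical sketch (not the authors' implementation) using the standard empirical estimator, with `dist_corr` and the weighting factor `alpha` as illustrative names:

    ```python
    import numpy as np

    def pairwise_dist(x):
        # Euclidean distance matrix between rows of x
        sq = np.sum(x ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
        return np.sqrt(np.maximum(d2, 0.0))

    def dist_corr(x, z):
        # empirical distance correlation between raw inputs x
        # and intermediate representations z (rows = samples)
        a, b = pairwise_dist(x), pairwise_dist(z)
        # double-center each distance matrix
        A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
        B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
        dcov2 = (A * B).mean()    # squared distance covariance
        dvar_x = (A * A).mean()   # squared distance variances
        dvar_z = (B * B).mean()
        return np.sqrt(max(dcov2, 0.0) / np.sqrt(dvar_x * dvar_z + 1e-12))

    # a NoPeek-style combined objective would add this term to the task loss:
    #   total_loss = task_loss + alpha * dist_corr(x, z)
    ```

    Distance correlation is 0 only for independent variables and 1 for exact (affine) dependence, which is why driving it down during training limits invertibility of the representations.
    
    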

    LocFedMix-SL: localize, federate, and mix for improved scalability, convergence, and latency in split learning

    Split learning (SL) is a promising distributed learning framework that makes it possible to utilize the huge data and parallel computing resources of mobile devices. SL is built upon a model-split architecture, wherein a server stores an upper model segment that is shared by different mobile clients, each storing its own lower model segment. Without exchanging raw data, SL achieves high accuracy and fast convergence by only uploading smashed data from clients and downloading global gradients from the server. Nonetheless, the original implementation of SL serves multiple clients sequentially, incurring high latency when there are many clients. A parallel implementation of SL has great potential to reduce latency, yet existing parallel SL algorithms compromise scalability and/or convergence speed. Motivated by this, the goal of this article is to develop a scalable parallel SL algorithm with fast convergence and low latency. As a first step, we identify that the fundamental bottleneck of existing parallel SL lies in the model-split and parallel computing architectures, under which server-client model updates are often imbalanced and the client models are prone to detaching from the server's model. To fix this problem, by carefully integrating local parallelism, federated learning, and mixup augmentation techniques, we propose a novel parallel SL framework, coined LocFedMix-SL. Simulation results corroborate that LocFedMix-SL achieves improved scalability, convergence speed, and latency compared to sequential SL as well as state-of-the-art parallel SL algorithms such as SplitFed and LocSplitFed.
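    The two key ingredients described above, mixup applied to smashed data at the cut layer and FL-style averaging of client lower segments, can be illustrated with a toy sketch. This is not the LocFedMix-SL implementation; `client_lower`, `mixup_smashed`, `fedavg`, and `alpha` are hypothetical names, and the lower segment is a stand-in single layer:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def client_lower(x, w):
        # toy client-side lower segment: one linear layer + ReLU
        return np.maximum(x @ w, 0.0)

    def mixup_smashed(h1, y1, h2, y2, alpha=0.2):
        # mix two clients' smashed data (and labels) before the
        # shared server-side upper segment processes them
        lam = rng.beta(alpha, alpha)
        return lam * h1 + (1.0 - lam) * h2, lam * y1 + (1.0 - lam) * y2

    def fedavg(client_weights):
        # periodically average client lower segments, FL-style,
        # to keep them from drifting apart from one another
        return sum(client_weights) / len(client_weights)

    # two clients with local data and (initially identical) lower weights
    w1 = rng.normal(size=(8, 4)); w2 = w1.copy()
    x1, x2 = rng.normal(size=(16, 8)), rng.normal(size=(16, 8))
    y1, y2 = rng.integers(0, 2, size=(16, 1)), rng.integers(0, 2, size=(16, 1))

    h_mix, y_mix = mixup_smashed(client_lower(x1, w1), y1,
                                 client_lower(x2, w2), y2)
    w_avg = fedavg([w1, w2])
    ```

    The mixed smashed data is a convex combination of the two clients' activations, which is one way to couple client updates without exchanging raw data.
    
    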

    Advances and open problems in federated learning

    Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and data minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this monograph discusses recent advances and presents an extensive collection of open problems and challenges.

    Liquid crystal display and organic light-emitting diode display: present status and future perspectives

    Recently, ‘Liquid crystal display (LCD) vs. organic light-emitting diode (OLED) display: who wins?’ has become a topic of heated debate. In this review, we perform a systematic and comparative study of these two flat panel display technologies. First, we review recent advances in LCDs and OLEDs, including material development, device configuration, and system integration. Next, we analyze and compare their performance by six key display metrics: response time, contrast ratio, color gamut, lifetime, power efficiency, and panel flexibility. In this section, we focus on two key parameters: motion picture response time (MPRT) and ambient contrast ratio (ACR), which dramatically affect image quality in practical application scenarios. MPRT determines the image blur of a moving picture, and ACR governs the perceived image contrast under ambient lighting conditions. It is intriguing that LCD can achieve comparable or even slightly better MPRT and ACR than OLED, although its response time and contrast ratio are generally perceived to be much inferior to those of OLED. Finally, three future trends are highlighted: high dynamic range, virtual reality/augmented reality, and smart displays with versatile functions.