
    A Robust Fault-Tolerant and Scalable Cluster-wide Deduplication for Shared-Nothing Storage Systems

    Deduplication is widely employed in distributed storage systems to improve space efficiency. However, traditional deduplication research ignores the design requirements of shared-nothing distributed storage systems, such as the absence of a central metadata bottleneck, scalability, and storage rebalancing. Further, deduplication introduces transactional changes that are prone to errors in the event of a system failure, leading to inconsistencies between data and deduplication metadata. In this paper, we propose a robust, fault-tolerant, and scalable cluster-wide deduplication scheme that eliminates duplicate copies across the cluster. We design a distributed deduplication metadata shard that guarantees performance scalability while preserving the design constraints of shared-nothing storage systems. Chunks and deduplication metadata are placed cluster-wide based on the content fingerprint of each chunk. To ensure transactional consistency and garbage identification, we employ a flag-based asynchronous consistency mechanism. We implement the proposed deduplication on Ceph. The evaluation shows high disk-space savings with minimal performance degradation, as well as high robustness in the event of sudden server failure.
    Comment: 6 pages including references
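The fingerprint-based placement described above can be sketched as follows. The shard count, the SHA-256 hashing scheme, and the in-memory shard maps are illustrative assumptions for the sketch, not the paper's actual Ceph implementation:

```python
# Minimal sketch of content-based placement: a chunk and its deduplication
# metadata map to the same shard, so any node can locate them from the
# fingerprint alone, with no central metadata server.
import hashlib

NUM_SHARDS = 16  # hypothetical cluster size


def fingerprint(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()


def shard_for(fp: str, num_shards: int = NUM_SHARDS) -> int:
    # Deterministic fingerprint-to-shard mapping.
    return int(fp, 16) % num_shards


def store(chunk: bytes, shards: list) -> str:
    fp = fingerprint(chunk)
    shard = shards[shard_for(fp)]
    if fp in shard:                 # duplicate: bump the refcount only
        shard[fp]["refs"] += 1
    else:                           # first copy: store the data
        shard[fp] = {"data": chunk, "refs": 1}
    return fp


shards = [dict() for _ in range(NUM_SHARDS)]
a = store(b"hello world", shards)
b = store(b"hello world", shards)   # deduplicated against the first copy
assert a == b
```

Because placement is a pure function of the fingerprint, adding shards only requires remapping fingerprints, which fits the storage-rebalancing constraint the abstract mentions.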

    VehicleSense: Transportation Mode Detection Using Sound Data with an Accelerometer-based Trigger System

    Department of Computer Engineering
    We propose VehicleSense, a new transportation mode recognition system for smartphones that is widely applicable to mobile context-aware services. VehicleSense aims to achieve three performance objectives at once: high accuracy, low latency, and low power consumption, by exploiting sound characteristics captured while riding candidate transportation modes. To attain high energy efficiency, VehicleSense adopts hierarchical accelerometer-based triggers that minimize activation of the smartphone's built-in microphone. Further, to attain high accuracy and low latency, VehicleSense processes the sampled sound with non-linear filters that are shown to yield substantial performance improvement. Our 186-hour log of sound and accelerometer data, collected with seven different Android smartphone models, confirms that VehicleSense achieves 98.2% recognition accuracy with only 0.6 seconds of latency, while consuming only about 26.1 mW on average for all-day monitoring.
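A minimal sketch of the accelerometer-gated trigger idea: the cheap, always-on accelerometer decides when the power-hungry microphone is worth waking. The variance threshold, window size, and synthetic signals below are made-up illustrations, not the paper's actual trigger hierarchy:

```python
# Stage-1 trigger: wake the microphone only on sustained motion, measured
# as the variance of an accelerometer window exceeding a threshold.
import statistics

MOTION_THRESHOLD = 0.1    # assumed variance threshold (g^2), illustrative
WINDOW = 50               # accelerometer samples per decision window


def should_activate_mic(accel_window):
    """Return True when the window shows enough motion to justify audio sampling."""
    if len(accel_window) < WINDOW:
        return False
    return statistics.variance(accel_window) > MOTION_THRESHOLD


# Synthetic signals: a phone at rest vs. vehicle-like vibration.
still = [1.0 + 0.001 * (i % 2) for i in range(WINDOW)]
moving = [1.0 + 0.5 * (i % 3) for i in range(WINDOW)]

assert not should_activate_mic(still)    # mic stays off at rest
assert should_activate_mic(moving)       # mic wakes under vibration
```

The energy saving comes from the asymmetry: the accelerometer runs continuously at microwatt cost, while the microphone and sound classifier run only when this gate opens.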

    Enabling Deep Neural Network Inferences on Resource-Constrained Devices

    Department of Computer Science and Engineering
    While deep neural networks (DNNs) are widely used on various devices, including resource-constrained devices such as IoT, AR/VR, and mobile devices, running DNNs on such devices remains challenging. There are three approaches to DNN inference on resource-constrained devices: 1) lightweight DNNs for on-device computing, 2) offloading DNN inference to a cloud server, and 3) split computing, which utilizes computation and network resources efficiently. Designing a lightweight DNN without compromising accuracy is difficult due to the trade-off between latency and accuracy: more computation is required to achieve higher accuracy. One way to overcome this challenge is pre-processing, which extracts and transfers the information most helpful for accurate DNN inference. Our pre-processing consists of three steps. The first is selecting the best input source. The second is input processing, which extracts the information important for DNN inference from everything obtained from the input source. The last is choosing or designing a lightweight DNN suited to the processed input. As an instance of this pre-processing, in Sec. 2 we present DeepVehicleSense, a new transportation mode recognition system for smartphones that aims to achieve three performance objectives at once: high accuracy, low latency, and low power consumption, by exploiting sound characteristics captured from the built-in microphone while riding candidate transportation modes. To achieve high accuracy and low latency, DeepVehicleSense makes use of non-linear filters that best extract transportation sound samples. For recognition of five different transportation modes, we design a deep-learning-based sound classifier with a novel multi-branch deep neural network architecture.
Our staged inference technique significantly reduces runtime and energy consumption while maintaining high accuracy for the majority of samples. Offloading DNN inference to a server is another solution for resource-constrained devices, but data transmission adds latency. To reduce transmission latency, recent studies have made offloading more efficient by compressing the data to be offloaded. However, conventional compression techniques are designed for human perception: they compress data so that the reconstruction looks like the original to human eyes. As a result, the compressed data contains redundancy beyond the information necessary for DNN inference. In other words, the fundamental question of extracting and offloading the minimal amount of information that does not degrade inference accuracy has remained unanswered. To answer it, in Sec. 3 we name such ideal offloading semantic offloading and propose N-epitomizer, a new offloading framework that enables semantic offloading, thus achieving more reliable and timely inference over highly fluctuating or even low-bandwidth wireless networks. To realize N-epitomizer, we design an autoencoder-based scalable encoder trained to extract the most informative data and to scale its output size to meet the latency and accuracy requirements of inference over a network. Even though our lightweight DNN and our offloading framework with the essential-information extractor achieve low latency while preserving DNN performance, they alone cannot realize latency-guaranteed DNN inference.
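The scalable-output idea can be illustrated with a toy sketch, assuming a latent vector ordered from most to least informative features. The budgeting arithmetic and names are hypothetical, not N-epitomizer's actual encoder:

```python
# Pick how much of an importance-ordered latent to transmit so that the
# transfer fits the latency budget at the current bandwidth estimate.
def prefix_to_send(latent, bandwidth_bytes_per_s, latency_budget_s,
                   bytes_per_element=4):
    """Return the largest latent prefix that fits the transmission budget."""
    budget = int(bandwidth_bytes_per_s * latency_budget_s // bytes_per_element)
    # Always send at least one element; never more than the full latent.
    return latent[:max(1, min(len(latent), budget))]


latent = [0.9, 0.7, 0.4, 0.2, 0.1, 0.05]   # toy, importance-ordered

# 8 B/s for 1 s at 4 B/element => 2 elements fit.
assert prefix_to_send(latent, 8, 1.0) == [0.9, 0.7]
# With ample bandwidth, the whole latent is sent.
assert len(prefix_to_send(latent, 1000, 1.0)) == 6
```

Under low bandwidth the prefix shrinks, trading some accuracy for timeliness; under good bandwidth the full latent is transmitted.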
To realize latency-guaranteed DNN inference, the computational complexity of the lightweight DNN and the compression level of the offloading encoder should be selected adaptively according to current computation resources and network conditions, exploiting the DNN's trade-off between computational complexity and performance and the encoder's trade-off between compression and performance. To this end, we propose LG-DI, a new framework for latency-guaranteed DNN inference that predicts, in advance, the performance degradation under a given latency budget and chooses the better option between the lightweight DNN and offloading with compression. As a result, our framework can guarantee inference latency regardless of changes in computation and network resources while maintaining DNN performance as much as possible.
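The selection step can be sketched as follows. The latency/accuracy profiles are fabricated for illustration, and the rule (pick the most accurate path that meets the budget) is an assumption about LG-DI's behavior, not its published algorithm:

```python
# Given predicted latency and accuracy for each execution path, choose
# the feasible path with the least accuracy loss under a latency budget.
def choose_path(paths, latency_budget_ms):
    """Return the path meeting the budget with the highest predicted accuracy."""
    feasible = [p for p in paths if p["latency_ms"] <= latency_budget_ms]
    if not feasible:
        # Nothing meets the budget: fall back to the fastest path.
        return min(paths, key=lambda p: p["latency_ms"])
    return max(feasible, key=lambda p: p["accuracy"])


# Hypothetical profiles: on-device lightweight DNNs vs. offloading at
# two encoder compression levels, refreshed as conditions change.
paths = [
    {"name": "on-device-small",  "latency_ms": 40,  "accuracy": 0.86},
    {"name": "on-device-large",  "latency_ms": 120, "accuracy": 0.91},
    {"name": "offload-high-cmp", "latency_ms": 60,  "accuracy": 0.89},
    {"name": "offload-low-cmp",  "latency_ms": 150, "accuracy": 0.93},
]

assert choose_path(paths, 100)["name"] == "offload-high-cmp"
assert choose_path(paths, 200)["name"] == "offload-low-cmp"
```

As the network degrades, the offloading paths' predicted latencies rise and the selector naturally shifts back to on-device execution, which is the adaptivity the abstract describes.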

    Learning to Forget for Meta-Learning

    Few-shot learning is a challenging problem where the goal is to generalize from only a few examples. Model-agnostic meta-learning (MAML) tackles the problem by formulating prior knowledge as a common initialization across tasks, which is then used to quickly adapt to unseen tasks. However, forcibly sharing an initialization can lead to conflicts among tasks and to a compromised (undesired by individual tasks) location on the optimization landscape, thereby hindering task adaptation. Further, we observe that the degree of conflict differs not only among tasks but also among the layers of a neural network. Thus, we propose task-and-layer-wise attenuation of the compromised initialization to reduce its influence. Because the attenuation dynamically controls (or selectively forgets) the influence of prior knowledge for a given task and each layer, we name our method L2F (Learn to Forget). Experimental results demonstrate that the proposed method provides faster adaptation and greatly improves performance. Furthermore, L2F can easily be applied to other state-of-the-art MAML-based frameworks and improve them, illustrating its simplicity and generalizability.
    Comment: CVPR 2020. Code at https://github.com/baiksung/L2
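A toy numerical sketch of the layer-wise attenuation idea, assuming sigmoid gates over per-layer logits; the task-conditioned network that produces these logits in L2F is omitted, and all numbers are invented:

```python
# Scale each layer's shared initialization by a gate in (0, 1) before
# inner-loop adaptation, so conflicting prior knowledge in a layer can
# be selectively forgotten while useful layers are kept.
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def attenuate(init_weights, gate_logits):
    """Return per-layer initializations scaled by task-conditioned gates."""
    return [
        [w * sigmoid(g) for w in layer]
        for layer, g in zip(init_weights, gate_logits)
    ]


init = [[0.5, -0.2], [1.0, 0.8]]   # two layers of a toy network
logits = [4.0, -4.0]               # keep layer 0, mostly forget layer 1

adapted_init = attenuate(init, logits)
# Layer 0 is nearly unchanged; layer 1 is strongly attenuated.
assert abs(adapted_init[0][0] - 0.5) < 0.02
assert abs(adapted_init[1][0]) < 0.02
```

With the gate near 1 a layer starts adaptation from the shared prior; with the gate near 0 it starts almost from scratch, which is the selective forgetting the abstract describes.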

    When Do Firms Add Digital Platforms? Organizational Status as an Enabler to Incumbents’ Platformization

    Prior research has expanded our understanding of the platform business and its success factors, but scant attention has been paid to the launch of digital platforms by “pipeline” firms. Our study examines the effect of a firm’s status on the strategic decision to launch a digital platform and its consequences. Analyzing panel data on Fortune China 500 companies, we found that high-status incumbents are more likely to add a digital platform than their low-status counterparts, indicating that status can be seen as a promoter of launching digital platforms. However, once a digital platform is added, high-status firms are slower to improve performance than their low-status counterparts. Thus, status may serve as an inhibitor of a firm’s dedication to the new platform business. This research contributes to our understanding of the social contingency of digital transformation and the important constraints that must be overcome for incumbent firms to successfully transition.

    Scene-Adaptive Video Frame Interpolation via Meta-Learning

    Video frame interpolation is a challenging problem because each video presents a different scenario depending on the variety of foreground and background motion, frame rate, and occlusion. It is therefore difficult for a single network with fixed parameters to generalize across different videos. Ideally, one would have a different network for each scenario, but this is computationally infeasible for practical applications. In this work, we propose adapting the model to each video using additional information that is readily available at test time yet has not been exploited in previous work. We first show the benefits of 'test-time adaptation' through simple fine-tuning of a network, then greatly improve its efficiency by incorporating meta-learning. We obtain significant performance gains with only a single gradient update and no additional parameters. Finally, we show that our meta-learning framework can easily be applied to any video frame interpolation network and consistently improves its performance on multiple benchmark datasets.
    Comment: CVPR 202
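The single-gradient-update idea can be illustrated with a toy linear blender: frames already present in the input video supply self-supervision (predict a known middle frame from its neighbors), and one gradient step specializes the model to the scene. The model, frames, and learning rate are invented for illustration:

```python
# Toy test-time adaptation: one gradient step on a scene-specific
# self-supervised loss improves a blend-weight "interpolator".
def interpolate(w, prev_frame, next_frame):
    # Toy interpolator: weighted blend of the two neighboring frames.
    return [w * a + (1 - w) * b for a, b in zip(prev_frame, next_frame)]


def adapt_one_step(w, prev_frame, mid_frame, next_frame, lr=0.1):
    """One gradient step on the squared error against the known middle frame."""
    pred = interpolate(w, prev_frame, next_frame)
    # d/dw sum((pred - mid)^2) = sum(2 * (pred - mid) * (prev - next))
    grad = sum(2 * (p - m) * (a - b)
               for p, m, a, b in zip(pred, mid_frame, prev_frame, next_frame))
    return w - lr * grad


# A scene where the middle frame sits closer to the next frame.
prev, mid, nxt = [1.0, 2.0], [1.8, 2.8], [2.0, 3.0]
w0 = 0.5                                  # generic (meta-learned) init
w1 = adapt_one_step(w0, prev, mid, nxt)   # single scene-specific update


def err(w):
    return sum((p - m) ** 2 for p, m in zip(interpolate(w, prev, nxt), mid))


assert err(w1) < err(w0)   # the single update reduces scene-specific error
```

Meta-learning's role in the paper is to find an initialization (here, `w0`) from which that single update is maximally effective; this sketch only shows the update itself.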

    Development of a Compact and Highly Efficient Scroll Compressor with a Novel Bearing Structure

    High-side shell (HSS) scroll compressors have been widely used in Variable Refrigerant Flow (VRF) systems, a powerful solution for the cooling and heating of commercial buildings. To improve the characteristics of the VRF system, a new HSS scroll compressor has been developed with a novel bearing structure. The core elements of the novel bearing structure are an outer-type bearing mounted on an orbiting scroll and a female-type eccentric journal inside the shaft. The outer-type bush bearing, made of engineering plastic without a back steel layer, has been newly developed. The new HSS scroll compressor employing the novel bearing structure has a compact size, high efficiency, and a low noise level compared to a conventional HSS scroll compressor. To confirm the advantages of the new HSS scroll compressor, basic tests and theoretical analysis have been performed in this study.