100 research outputs found

    Snevily's Conjecture about $\mathcal{L}$-intersecting Families on Set Systems and its Analogue on Vector Spaces

    The classical Erdős-Ko-Rado theorem on the size of an intersecting family of $k$-subsets of the set $[n] = \{1, 2, \dots, n\}$ is one of the fundamental intersection theorems for set systems. After the establishment of the EKR theorem, many intersection theorems on set systems have appeared in the literature, such as the well-known Frankl-Wilson theorem, the Alon-Babai-Suzuki theorem, and the Grolmusz-Sudakov theorem. In 1995, Snevily conjectured that the size of an $\mathcal{L}$-intersecting family of subsets of $[n]$ is at most $\binom{n}{s}$ under the condition $\max\{l_i\} < \min\{k_j\}$, where $\mathcal{L} = \{l_1, \dots, l_s\}$ with $0 \le l_1 < \cdots < l_s$ and the $k_j$ are the subset sizes occurring in the family. In this paper, we prove that Snevily's conjecture holds for $n \ge \binom{k^2}{l_1+1} s + l_1$, where $k$ is the maximum subset size in the family. We then derive an analogous result for $\mathcal{L}$-intersecting families of subspaces of an $n$-dimensional vector space over a finite field $\mathbb{F}_q$.
    Comment: arXiv admin note: text overlap with arXiv:1701.00585 by other authors
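    For reference, the conjecture and the bound proved here, restated in display form directly from the abstract's notation (no content beyond the abstract):

        % Snevily's conjecture (1995): an L-intersecting family F of subsets
        % of [n], with L = {l_1, ..., l_s}, 0 <= l_1 < ... < l_s, and every
        % member size k_j satisfying max{l_i} < min{k_j}, obeys
        \[
            |\mathcal{F}| \le \binom{n}{s}.
        \]
        % The bound proved in this paper: the conjecture holds whenever
        \[
            n \ge \binom{k^{2}}{l_{1}+1}\, s + l_{1},
        \]
        % where k is the maximum subset size occurring in the family.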

    Energy Efficient TDMA Sleep Scheduling in Wireless Sensor Networks

    Sleep scheduling is a widely used mechanism in wireless sensor networks (WSNs) to reduce energy consumption, since it saves the energy otherwise wasted in the idle listening state. Under traditional sleep scheduling, however, sensors have to start up numerous times in a period and thus consume extra energy on state transitions. The objective of this paper is to design an energy-efficient sleep scheduling scheme for low-data-rate WSNs, where sensors not only consume different amounts of energy in different states (transmit, receive, idle, and sleep) but also consume energy for state transitions. We use TDMA as the MAC layer protocol because it avoids collisions, idle listening, and overhearing. We first propose a novel interference-free TDMA sleep scheduling problem called contiguous link scheduling, which assigns each sensor consecutive time slots to reduce the frequency of state transitions. To tackle this problem, we then present efficient centralized and distributed algorithms that use at most a constant factor more time slots than the optimum. Simulation studies corroborate the theoretical results and show the efficiency of our proposed algorithms.
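    To make the contiguous-slot idea concrete, here is a minimal greedy sketch (illustrative only, not the paper's centralized or distributed algorithm): each sensor requests one consecutive block of slots, and blocks of sensors that interfere with each other must not overlap. The demand-first ordering and the gap scan below are assumptions made for this toy example.

        # Toy greedy sketch of contiguous link scheduling (illustrative only,
        # not the paper's algorithm). Each sensor v needs demand[v] slots and
        # must receive them as one consecutive block, so it wakes up once per
        # period; sensors joined by an edge in the conflict graph must not
        # have overlapping blocks.
        def contiguous_schedule(demand, conflicts):
            """demand: {node: #slots}; conflicts: {node: set of conflicting nodes}.
            Returns {node: (start, end)} with end exclusive."""
            blocks = {}
            # Schedule high-demand sensors first (a common greedy heuristic).
            for v in sorted(demand, key=demand.get, reverse=True):
                need = demand[v]
                busy = sorted(blocks[u] for u in conflicts.get(v, ()) if u in blocks)
                start = 0
                for s, e in busy:              # scan gaps between neighbors' blocks
                    if start + need <= s:      # the block fits before this interval
                        break
                    start = max(start, e)      # otherwise skip past it
                blocks[v] = (start, start + need)
            return blocks

        # Example: a 4-sensor line network a-b-c-d (adjacent sensors interfere).
        demand = {"a": 1, "b": 2, "c": 2, "d": 1}
        conflicts = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
        print(contiguous_schedule(demand, conflicts))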

    ODE: A Data Sampling Method for Practical Federated Learning with Streaming Data and Limited Buffer

    Machine learning models have been deployed in mobile networks to deal with data from different layers, enabling automated network management and intelligence on devices. To overcome the high communication cost and severe privacy concerns of centralized machine learning, Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices. While computation and communication limitations have been widely studied in FL, the impact of on-device storage on FL performance remains unexplored. Without an efficient and effective data selection policy to filter the abundant streaming data on devices, classical FL can suffer from much longer model training time (more than $4\times$) and significant inference accuracy reduction (more than $7\%$), as observed in our experiments. In this work, we take the first step toward online data selection for FL with limited on-device storage. We first define a new data valuation metric for data selection in FL: the projection of the local gradient over an on-device data sample onto the global gradient over the data from all devices. We further design ODE, a framework of Online Data sElection for FL, to coordinate networked devices to store valuable data samples collaboratively, with theoretical guarantees for simultaneously speeding up model convergence and enhancing final model accuracy. Experimental results on one industrial task (mobile network traffic classification) and three public tasks (synthetic task, image classification, human activity recognition) show the remarkable advantages of ODE over state-of-the-art approaches. In particular, on the industrial dataset, ODE achieves up to a $2.5\times$ speedup in training time and a $6\%$ increase in final inference accuracy, and is robust to various factors in the practical environment.
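    A minimal sketch of the valuation metric and buffer policy described above (an assumed reading of the abstract, not the authors' implementation): score each arriving sample by projecting its per-sample gradient onto an estimate of the global gradient, and keep only the highest-scoring samples in the bounded on-device buffer.

        # Sketch of ODE's data valuation idea under the assumptions stated in
        # the lead-in; gradient computation itself is left abstract.
        import heapq
        import numpy as np

        def sample_value(local_grad, global_grad):
            """Projection of a per-sample gradient onto the global gradient."""
            g = np.asarray(global_grad)
            return float(np.dot(local_grad, g) / (np.linalg.norm(g) + 1e-12))

        class StreamingBuffer:
            """Keeps the `capacity` highest-valued samples seen so far (min-heap)."""
            def __init__(self, capacity):
                self.capacity = capacity
                self.heap = []                   # (value, arrival_id, sample)
                self._arrivals = 0

            def offer(self, sample, value):
                self._arrivals += 1              # arrival id breaks value ties
                item = (value, self._arrivals, sample)
                if len(self.heap) < self.capacity:
                    heapq.heappush(self.heap, item)
                elif value > self.heap[0][0]:    # evict the least valuable sample
                    heapq.heapreplace(self.heap, item)

            def samples(self):
                return [s for _, _, s in self.heap]

        # Usage (per_sample_grad and global_grad_estimate are placeholders for
        # whatever gradient oracle the device has):
        # buf = StreamingBuffer(capacity=256)
        # buf.offer(x, sample_value(per_sample_grad(x), global_grad_estimate))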

    On-Device Model Fine-Tuning with Label Correction in Recommender Systems

    To meet the practical requirements of low latency, low cost, and good privacy in online intelligent services, more and more deep learning models are offloaded from the cloud to mobile devices. To further deal with cross-device data heterogeneity, the offloaded models normally need to be fine-tuned with each individual user's local samples before being put into real-time inference. In this work, we focus on the fundamental click-through rate (CTR) prediction task in recommender systems and study how to perform on-device fine-tuning effectively and efficiently. We first identify the bottleneck issue that each individual user's local CTR (i.e., the ratio of positive samples in the local dataset for fine-tuning) tends to deviate from the global CTR (i.e., the ratio of positive samples in all users' mixed datasets on the cloud used for training the initial model). We further demonstrate that such CTR drift makes on-device fine-tuning even harmful to item ranking. We thus propose a novel label correction method, which requires each user only to change the labels of the local samples ahead of on-device fine-tuning, and which can well align the local prior CTR with the global CTR. Offline evaluation results over three datasets and five CTR prediction models, as well as online A/B testing results in Mobile Taobao, demonstrate the necessity of label correction in on-device fine-tuning and also reveal the improvement over cloud-based learning without fine-tuning.
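    A toy sketch of the alignment idea (the flipping rule below is a naive stand-in chosen for illustration; the paper's actual correction method may differ): relabel just enough local samples so that the local positive ratio matches the global CTR before fine-tuning begins.

        # Toy illustration of aligning the local prior CTR with the global CTR
        # by relabeling local samples before on-device fine-tuning. Random
        # flipping is an assumption made here, not the paper's rule.
        import random

        def correct_labels(labels, global_ctr, seed=0):
            """labels: list of 0/1 clicks. Returns a copy whose positive
            ratio approximately matches global_ctr."""
            rng = random.Random(seed)
            labels = list(labels)
            target_pos = round(global_ctr * len(labels))
            pos_idx = [i for i, y in enumerate(labels) if y == 1]
            neg_idx = [i for i, y in enumerate(labels) if y == 0]
            if len(pos_idx) > target_pos:        # local CTR too high: demote
                for i in rng.sample(pos_idx, len(pos_idx) - target_pos):
                    labels[i] = 0
            else:                                # local CTR too low: promote
                for i in rng.sample(neg_idx, target_pos - len(pos_idx)):
                    labels[i] = 1
            return labels

        local = [1, 1, 1, 0, 0, 0, 0, 0]         # local CTR = 0.375
        print(correct_labels(local, global_ctr=0.125))  # one positive remains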