491 research outputs found

    Bitsliced Implementations of the PRINCE, LED and RECTANGLE Block Ciphers on AVR 8-bit Microcontrollers

    Driven by industry demand for low-cost cryptosystems, many lightweight block ciphers have emerged, each excelling at different implementation criteria. One innovative design is the block cipher PRINCE: to meet the requirement for low-latency, instantaneous encryption, NXP Semiconductors and its academic partners jointly designed this low-latency cipher. Another good example is the block cipher LED, which is very compact in hardware and whose designers also aim to maintain reasonable software performance. In this paper, we demonstrate how to achieve high software performance for these two ciphers on AVR 8-bit microcontrollers using the bitslicing technique. Our bitsliced implementations speed up the execution of both ciphers several times over while using less memory than previous work. In addition to these two nibble-oriented ciphers, we also evaluate the software performance of a newly proposed lightweight block cipher, RECTANGLE, whose design takes bitslicing into consideration. Our results show that RECTANGLE ranks among the fastest existing block ciphers on 8-bit microcontrollers in real-world usage scenarios.
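    As an illustration of the technique the paper relies on, the sketch below (a toy example, not the paper's PRINCE/LED/RECTANGLE code) transposes eight 4-bit S-box inputs into four bit-planes so that one sequence of byte-wide boolean operations evaluates a hypothetical S-box on all eight nibbles at once, which is the essence of bitslicing on an 8-bit target.

```python
# Minimal sketch of bitslicing with a made-up 4-bit nonlinear layer.
# Eight 4-bit inputs become four 8-bit "bit-planes"; bitwise ops then
# process all eight lanes in parallel.

def to_bitplanes(nibbles):
    """Pack bit k of each of the 8 nibbles into byte plane[k]."""
    planes = [0, 0, 0, 0]
    for lane, nib in enumerate(nibbles):
        for k in range(4):
            planes[k] |= ((nib >> k) & 1) << lane
    return planes

def from_bitplanes(planes):
    """Inverse transform: recover the 8 nibbles from the 4 bit-planes."""
    return [sum(((planes[k] >> lane) & 1) << k for k in range(4)) for lane in range(8)]

def toy_sbox_bitsliced(planes):
    """Hypothetical 4-bit mapping expressed as boolean equations,
    applied to all 8 lanes at once via byte-wide AND/OR/XOR."""
    x0, x1, x2, x3 = planes
    y0 = x0 ^ (x1 & x2)
    y1 = x1 ^ (x2 | x3)
    y2 = x2 ^ x3
    y3 = x3 ^ (x0 & x1)
    return [y0 & 0xFF, y1 & 0xFF, y2 & 0xFF, y3 & 0xFF]

def toy_sbox_lookup(nib):
    """Same toy mapping, evaluated one nibble at a time for cross-checking."""
    x = [(nib >> k) & 1 for k in range(4)]
    y = [x[0] ^ (x[1] & x[2]), x[1] ^ (x[2] | x[3]), x[2] ^ x[3], x[3] ^ (x[0] & x[1])]
    return sum(b << k for k, b in enumerate(y))

if __name__ == "__main__":
    state = [0x3, 0xA, 0xF, 0x0, 0x7, 0xC, 0x1, 0x9]   # eight parallel nibbles
    sliced = from_bitplanes(toy_sbox_bitsliced(to_bitplanes(state)))
    assert sliced == [toy_sbox_lookup(n) for n in state]
    print(sliced)
```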

    Wave-graphene: a full-auxetic carbon semiconductor with high flexibility and optical UV absorption

    The abundant bonding possibilities of carbon stimulate the design of numerous carbon allotropes, providing a foundation for exploring structure-functionality relationships. Herein, utilizing a space-bending strategy, we engineered a two-dimensional carbon allotrope with pure sp2 hybridization, named "Wave-graphene" after its unique wave-like ripple structure. Wave-graphene exhibits full-auxetic behavior due to its anisotropic mechanical response, possessing both negative and zero Poisson's ratios. The underlying mechanism is that the highly buckled out-of-plane structure leads to anisotropic in-plane nonlinear interactions, which in turn lead to anisotropic lattice vibrations. In addition, Wave-graphene is found to have a quasi-direct wide bandgap of 2.01 eV, excellent optical transparency, and high flexibility. The successful design of Wave-graphene with outstanding multifunctional properties shows that space-bending strategies can provide additional degrees of freedom for designing novel materials, further enriching the carbon material family and extending its versatility.

    Rho GTPase Signaling Activates Microtubule Severing to Promote Microtubule Ordering in Arabidopsis

    Background: Ordered cortical microtubule (MT) arrays play a critical role in the spatial control of cell division and expansion and are essential for plant growth, morphogenesis, and development. Various developmental, hormonal, and mechanical signals and a large number of MT-associated proteins are known to impact cortical MT organization, but the underlying mechanisms remain poorly understood. Our previous studies showed that auxin signaling, which is mediated by the ROP6 Rho GTPase and its effector RIC1, promotes the ordering of cortical MTs in pavement cells, but it was unknown how RIC1 controls the organization of cortical MTs into well-ordered arrays. Results: Our genetic screens identified the conserved MT-severing protein katanin (KTN1) as a downstream component of the ROP6-RIC1 signaling pathway leading to the well-ordered arrangement of cortical MTs. KTN1 and RIC1 proteins displayed overlapping localization. In vivo and in vitro studies showed that RIC1 physically interacts with KTN1 and promotes its MT-severing activity. Live-cell imaging revealed a role for RIC1 in promoting the detachment of branched MTs, a process known to rely on KTN1. Conclusion: We have demonstrated that a Rho GTPase signaling pathway regulates katanin-mediated MT severing in plant cells and uncovered an explicit regulatory mechanism underpinning the alignment and ordering of cortical MTs in plants. Our findings provide new insights into the regulatory mechanisms by which growth stimuli such as auxin promote the organization of cortical MTs into parallel arrays in plants.

    An Algorithm for Counting the Number of 2^n-Periodic Binary Sequences with Fixed k-Error Linear Complexity

    The linear complexity and k-error linear complexity of sequences are important measures of the strength of key-streams generated by stream ciphers. The counting function of a sequence complexity measure gives the number of sequences with a given value of that measure and is useful for determining the expected value and variance of the measure over a family of sequences. Fu et al. studied the distribution of 2^n-periodic binary sequences with 1-error linear complexity in their SETA 2006 paper, and subsequent work has extended the solution step by step from k=2 to k=4. Unfortunately, it remains difficult to obtain solutions for larger k, and the counting functions become extremely complex as k grows. In this paper, we define an equivalence relation on error sequences. We use the concept of a "cube fragment" as a basic module to construct classes of error sequences with specific structures; error sequences with the same structure can be represented by a single "symbolic representation". We introduce the concepts of "trace", "weight trace", and "orbit" of sets to build quantitative relations between different classes. Based on these relations, we propose an algorithm that automatically generates symbolic representations of classes of error sequences, calculates the "coefficients" from one class to another, and computes the "multiplicity" of classes defined by the equivalence on error sequences. This algorithm efficiently computes the number of sequences with a given k-error linear complexity. Its worst-case time complexity is O(2^{k log k}), which does not depend on the period 2^n.
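    For context, the following sketch (not the paper's algorithm) computes the quantities being counted by brute force for tiny periods: the Games-Chan algorithm gives the linear complexity of a 2^n-periodic binary sequence, the k-error linear complexity is the minimum over all error patterns of weight at most k, and the counting function tallies how many sequences attain a given value. The exhaustive enumeration is only feasible for very small n and is meant purely to illustrate the definitions.

```python
from itertools import combinations

def games_chan(seq):
    """Linear complexity of a 2^n-periodic binary sequence given one period
    (Games-Chan algorithm); seq is a list of 0/1 of length 2^n."""
    n = len(seq)
    if all(b == 0 for b in seq):
        return 0
    if n == 1:
        return 1
    left, right = seq[:n // 2], seq[n // 2:]
    if left == right:
        return games_chan(left)
    return n // 2 + games_chan([a ^ b for a, b in zip(left, right)])

def k_error_linear_complexity(seq, k):
    """Brute-force k-error linear complexity: minimum linear complexity over
    all sequences obtained by flipping at most k positions in one period."""
    best = games_chan(seq)
    for weight in range(1, k + 1):
        for positions in combinations(range(len(seq)), weight):
            flipped = list(seq)
            for p in positions:
                flipped[p] ^= 1
            best = min(best, games_chan(flipped))
    return best

def counting_function(n, k, value):
    """Number of 2^n-periodic binary sequences whose k-error linear
    complexity equals `value` (exhaustive, only feasible for tiny n)."""
    period = 1 << n
    return sum(
        1
        for x in range(1 << period)
        if k_error_linear_complexity([(x >> i) & 1 for i in range(period)], k) == value
    )

if __name__ == "__main__":
    # Count period-8 sequences whose 1-error linear complexity equals 4.
    print(counting_function(3, 1, 4))
```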

    Leveraging Foundation Models to Improve Lightweight Clients in Federated Learning

    Federated Learning (FL) is a distributed training paradigm that enables clients scattered across the world to cooperatively learn a global model without divulging confidential data. However, FL faces a significant challenge in the form of heterogeneous data distributions among clients, which leads to reduced performance and robustness. A recent approach to mitigating the impact of heterogeneous data distributions is the use of foundation models, which offer better performance at the cost of larger computational overhead and slower inference. We introduce foundation-model distillation to assist in the federated training of lightweight client models and to increase their performance under heterogeneous data settings while keeping inference costs low. Our results show improved global model performance on a balanced test set, which contains rarely observed samples, even under extreme non-IID client data distributions. We conduct a thorough evaluation of our framework with different foundation model backbones on CIFAR10, with degrees of data heterogeneity ranging from class-specific data partitions across clients to Dirichlet data sampling parameterized by values between 0.01 and 1.0.
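    A minimal sketch of the client-side idea, assuming a PyTorch setup and a generic knowledge-distillation loss (the paper's exact loss and training schedule may differ): each lightweight client fits its skewed local labels while being pulled toward the softened predictions of a frozen foundation-model teacher.

```python
import torch
import torch.nn.functional as F

def client_distillation_step(client_model, foundation_model, batch, optimizer,
                             temperature=2.0, alpha=0.5):
    """One hypothetical local training step: hard-label loss on local data
    plus distillation toward a frozen foundation-model teacher."""
    inputs, labels = batch
    optimizer.zero_grad()

    student_logits = client_model(inputs)
    with torch.no_grad():                       # teacher stays frozen on the client
        teacher_logits = foundation_model(inputs)

    # Hard-label loss on the client's (possibly skewed, non-IID) local data.
    ce_loss = F.cross_entropy(student_logits, labels)

    # Soft-label loss toward the teacher's temperature-scaled distribution.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    loss = alpha * ce_loss + (1.0 - alpha) * kd_loss
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    import torch.nn as nn
    # Tiny stand-in models just to exercise the step on random data.
    student = nn.Linear(32, 10)
    teacher = nn.Linear(32, 10)
    opt = torch.optim.SGD(student.parameters(), lr=0.1)
    batch = (torch.randn(8, 32), torch.randint(0, 10, (8,)))
    print(client_distillation_step(student, teacher, batch, opt))
```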

    Text-driven Prompt Generation for Vision-Language Models in Federated Learning

    Prompt learning for vision-language models, e.g., CoOp, has shown great success in adapting CLIP to different downstream tasks, making it a promising solution for federated learning thanks to its low computational cost. Existing prompt learning techniques replace hand-crafted text prompts with learned vectors that offer improvements on seen classes but struggle to generalize to unseen classes. Our work addresses this challenge by proposing Federated Text-driven Prompt Generation (FedTPG), which learns a unified prompt generation network across multiple remote clients in a scalable manner. The prompt generation network is conditioned on task-related text input and is therefore context-aware, making it suitable for generalizing to both seen and unseen classes. Our comprehensive empirical evaluations on nine diverse image classification datasets show that our method is superior to existing federated prompt learning methods, achieving overall better generalization on both seen and unseen classes, and also generalizes to unseen datasets.
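    The sketch below illustrates one plausible shape of a text-conditioned prompt generator; the module name, dimensions, and architecture are assumptions for illustration, not the FedTPG implementation. A small network maps a task-related text embedding to a sequence of soft prompt vectors, so the prompts adapt to context instead of being fixed learned vectors.

```python
import torch
import torch.nn as nn

class PromptGenerator(nn.Module):
    """Hypothetical text-conditioned prompt generator: maps a task-related
    text embedding (e.g., from a CLIP-style text encoder) to a sequence of
    soft prompt vectors."""

    def __init__(self, text_dim=512, prompt_len=8, prompt_dim=512, hidden=256):
        super().__init__()
        self.prompt_len = prompt_len
        self.prompt_dim = prompt_dim
        self.net = nn.Sequential(
            nn.Linear(text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, prompt_len * prompt_dim),
        )

    def forward(self, task_text_embedding):
        # (batch, text_dim) -> soft prompts of shape (batch, prompt_len, prompt_dim)
        out = self.net(task_text_embedding)
        return out.view(-1, self.prompt_len, self.prompt_dim)

if __name__ == "__main__":
    gen = PromptGenerator()
    prompts = gen(torch.randn(4, 512))   # 4 task/class-name embeddings
    print(prompts.shape)                 # torch.Size([4, 8, 512])
```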

    Peptidome workflow of serum and urine samples for biomarker discovery

    Peptidomics plays an important role in clinical proteomics and disease-associated biomarker discovery. It has shown mounting potential for early noninvasive diagnosis, prognosis, and treatment evaluation of diseases. This article presents an introduction to peptidomics, complete peptidomic workflows for serum and urine samples, and a brief overview of recent work in this area. The review is designed to enable researchers to find the best-suited strategy for their peptidome studies.