
    An Alternative Approach for Computing Discrete Logarithms in Compressed SIDH

    Get PDF
    Currently, public-key compression in supersingular isogeny Diffie-Hellman (SIDH) and its variant, supersingular isogeny key encapsulation (SIKE), involves pairing computations and discrete logarithm computations. Both require large storage for precomputed tables to accelerate the performance. In this paper, we propose a novel method to compute only three discrete logarithms instead of four, in exchange for computing a lookup table efficiently. We also suggest an alternative method to compute discrete logarithms with small storage. Our implementation shows that the efficiency of our first method is close to that of the previous work, and our algorithms perform better in some special cases. Although the implementation of the second method is not as efficient as the state of the art, the storage is reduced by a factor of about 3.77 to about 22.86. In particular, the storage requirement for discrete logarithms in the multiplicative group of order 3^{e_3} decreases from 390.00 KiB to 17.06 KiB when using the 751-bit prime. We believe that the latter method will be highly attractive in memory-constrained environments.
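    The storage/speed trade-off described above comes from the lookup tables used in Pohlig-Hellman-style discrete-logarithm computation in groups of smooth order. The sketch below is only a generic illustration of that trade-off over a toy prime field (not the paper's algorithm, and not the F_{p^2} setting used in SIDH compression): a windowed Pohlig-Hellman solver in a subgroup of order 3^e, where the window width w sets the precomputed table size (3^w entries) against the number of digit-recovery steps.

    ```python
    # Illustrative only: windowed Pohlig-Hellman discrete logarithm in a
    # multiplicative subgroup of order 3^e, showing how a larger precomputed
    # table (3^w entries) trades memory for fewer digit-recovery steps.

    def build_table(g, e, w, p):
        """Precompute delta^d for all w-digit base-3 values d, delta = g^(3^(e-w))."""
        delta = pow(g, 3 ** (e - w), p)
        return {pow(delta, d, p): d for d in range(3 ** w)}

    def dlog_pohlig_hellman(h, g, e, w, p, table):
        """Recover x with h = g^x (mod p), 0 <= x < 3^e, w base-3 digits per step."""
        assert e % w == 0
        x = 0
        for j in range(e // w):
            # Strip the digits already known, then push the next w digits into
            # the order-3^w subgroup covered by the lookup table.
            t = (h * pow(g, -x, p)) % p
            t = pow(t, 3 ** (e - w * (j + 1)), p)
            x += table[t] * (3 ** (w * j))
        return x

    if __name__ == "__main__":
        # Toy parameters: p - 1 = 2 * 3^5, so Z_p^* contains a subgroup of order 3^5.
        p, e = 487, 5
        a = next(a for a in range(2, p)
                 if pow(pow(a, (p - 1) // 3 ** e, p), 3 ** (e - 1), p) != 1)
        g = pow(a, (p - 1) // 3 ** e, p)      # generator of the order-3^e subgroup
        secret = 200
        h = pow(g, secret, p)
        for w in (1, 5):                      # window 1: tiny table; window 5: 243 entries
            table = build_table(g, e, w, p)
            x = dlog_pohlig_hellman(h, g, e, w, p, table)
            assert x == secret
            print(f"w={w}: table size {len(table)}, recovered x={x}")
    ```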

    Stochastic Bridges as Effective Regularizers for Parameter-Efficient Tuning

    Full text link
    Parameter-efficient tuning methods (PETs) have achieved promising results in tuning large pre-trained language models (PLMs). By formalizing frozen PLMs and additional tunable parameters as systems and controls respectively, PETs can be theoretically grounded in optimal control and further viewed as optimizing both the terminal cost and the running cost in the optimal-control literature. Despite the elegance of this theoretical grounding, in practice, existing PETs often ignore the running cost and only optimize the terminal cost, i.e., they focus on optimizing the loss function of the output state regardless of the running cost that depends on the intermediate states. Since it is non-trivial to directly model the intermediate states and design a running cost function, we propose to use latent stochastic bridges to regularize the intermediate states and use the regularization as the running cost of PETs. As the first work to propose regularized PETs that use stochastic bridges as the regularizers (running costs) for the intermediate states, we show the effectiveness and generality of this regularization across different tasks, PLMs and PETs. In view of its great potential and capacity, we believe more sophisticated regularizers can be designed for PETs and better performance can be achieved in the future. The code is released at \url{https://github.com/thunlp/stochastic-bridge-pet/tree/main}. Comment: ACL 2023 Findings
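    A minimal sketch of the bridge-as-running-cost idea (the module name, latent projection, and noise scale are assumptions here, not the released implementation): intermediate hidden states are projected into a latent space and penalized for deviating from the mean path of a Brownian bridge pinned at the projected first and last states; the penalty is added to the task loss as a running cost.

    ```python
    import torch
    import torch.nn as nn

    class BridgeRegularizer(nn.Module):
        """Penalize intermediate states for straying from a Brownian bridge
        between the (projected) initial and final hidden states."""

        def __init__(self, hidden_dim, latent_dim=32, sigma=1.0):
            super().__init__()
            self.proj = nn.Linear(hidden_dim, latent_dim)  # assumed latent projection
            self.sigma = sigma

        def forward(self, hidden_states):
            # hidden_states: list of [batch, hidden_dim] tensors, one per layer (0..T)
            z = torch.stack([self.proj(h) for h in hidden_states], dim=1)  # [batch, T+1, latent]
            T = z.size(1) - 1
            t = torch.arange(1, T, device=z.device, dtype=z.dtype) / T     # interior times
            z0, zT = z[:, :1], z[:, -1:]
            mean = z0 + t.view(1, -1, 1) * (zT - z0)                # bridge mean at each time
            var = self.sigma ** 2 * (t * (1 - t)).view(1, -1, 1)    # bridge variance at each time
            # Gaussian negative log-likelihood, up to terms constant in the hidden states.
            return ((z[:, 1:-1] - mean) ** 2 / (2 * var)).mean()

    if __name__ == "__main__":
        reg = BridgeRegularizer(hidden_dim=768)
        states = [torch.randn(4, 768) for _ in range(13)]   # e.g. embeddings + 12 layer outputs
        running_cost = reg(states)
        # total_loss = task_loss + lambda_reg * running_cost   (lambda_reg is a tuning weight)
        print(running_cost.item())
    ```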

    Faster Public-key Compression of SIDH with Less Memory

    Get PDF
    In recent years, the isogeny-based protocol supersingular isogeny Diffie-Hellman (SIDH) has become highly attractive for its small public key size. In addition, public-key compression makes the supersingular isogeny key encapsulation scheme (SIKE) more competitive in the NIST post-quantum cryptography standardization effort. However, compared to other post-quantum protocols, the computational cost of SIDH is relatively high, and so is that of public-key compression. On the other hand, the storage for the pairing computation and discrete logarithms used to speed up the current implementation of key compression is somewhat large. In this paper, we mainly improve the performance of public-key compression of SIDH, especially the efficiency and the storage of the pairing computation involved. Our experimental results show that the memory requirement for pairing computation is reduced by a factor of about 1.5, and meanwhile, key generation of SIDH is 4.06%∼7.23% faster than the current state of the art.

    Public-key Compression in M-SIDH

    Get PDF
    Recently, SIKE was broken by the Castryck-Decru attack in polynomial time. To avoid this attack, Fouotsa et al. proposed a SIDH-like scheme called M-SIDH, which hides the information of the auxiliary points. The countermeasure also leads to huge parameter sizes, and correspondingly the public key size is relatively large. In this paper, we propose compressed M-SIDH, which is reminiscent of compressed SIDH. Compared with SIDH, the isogeny degrees in M-SIDH consist of many prime factors, and thus most of the techniques used in compressed SIDH cannot be applied to compressed M-SIDH directly. To overcome this issue, we apply several novel techniques to compress the public key of M-SIDH. We show that our approach to compressing the public key of M-SIDH is valid and prove that compressed M-SIDH is secure as long as M-SIDH is secure. In addition, we present new algorithms to accelerate the performance of public-key compression in M-SIDH. We provide a proof-of-concept implementation of compressed M-SIDH in SageMath. Experimental results show that our approach fits well with compressed M-SIDH. It should be noted that most techniques proposed in this work could also be applied to other SIDH-like protocols.

    Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models

    Full text link
    Parameter-shared pre-trained language models (PLMs) have emerged as a successful approach in resource-constrained environments, enabling substantial reductions in model storage and memory costs without significant performance compromise. However, it is important to note that parameter sharing does not alleviate the computational burden associated with inference, which impedes its practicality in situations characterized by stringent latency requirements or limited computational resources. Building upon neural ordinary differential equations (ODEs), we introduce a straightforward technique to enhance the inference efficiency of parameter-shared PLMs. Additionally, we propose a simple pre-training technique that leads to fully or partially shared models capable of achieving even greater inference acceleration. The experimental results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs, providing novel insights into more efficient utilization of parameter-shared models in resource-constrained settings. Comment: EMNLP 2023 Findings
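    One way to read the ODE connection, sketched under assumed module names rather than as the paper's actual method: a weight-shared block applied L times can be viewed as L Euler steps of dh/dt = f(h), so at inference the same f can be re-discretized with fewer, larger steps to save compute at some cost in fidelity.

    ```python
    import torch
    import torch.nn as nn

    class SharedBlock(nn.Module):
        """One residual block whose parameters are reused across all depths."""
        def __init__(self, dim):
            super().__init__()
            self.ff = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                                    nn.GELU(), nn.Linear(4 * dim, dim))

        def forward(self, h):
            return self.ff(h)              # the vector field f(h) in the ODE view

    class ODESharedEncoder(nn.Module):
        """Parameter-shared encoder read as an Euler discretization of dh/dt = f(h)."""
        def __init__(self, dim, train_steps=12):
            super().__init__()
            self.f = SharedBlock(dim)
            self.train_steps = train_steps

        def forward(self, h, steps=None):
            steps = steps or self.train_steps
            dt = self.train_steps / steps  # larger step size when fewer steps are taken
            for _ in range(steps):
                h = h + dt * self.f(h)     # Euler update with shared weights
            return h

    if __name__ == "__main__":
        enc = ODESharedEncoder(dim=256, train_steps=12)
        x = torch.randn(2, 16, 256)
        full = enc(x)              # 12 shared-block evaluations (training-time depth)
        fast = enc(x, steps=6)     # half the evaluations at inference, coarser steps
        print((full - fast).abs().mean().item())
    ```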

    Exploring Universal Intrinsic Task Subspace via Prompt Tuning

    Full text link
    Why can pre-trained language models (PLMs) learn universal representations and effectively adapt to a broad range of NLP tasks that differ greatly on the surface? In this work, we empirically find evidence indicating that the adaptations of PLMs to various few-shot tasks can be reparameterized as optimizing only a few free parameters in a unified low-dimensional intrinsic task subspace, which may help explain why PLMs can easily adapt to various NLP tasks with small-scale data. To find such a subspace and examine its universality, we propose an analysis pipeline called intrinsic prompt tuning (IPT). Specifically, we build on the recent success of prompt tuning and decompose the soft prompts of multiple NLP tasks into the same low-dimensional nonlinear subspace; we then learn to adapt the PLM to unseen data or tasks by tuning only the parameters in this subspace. In the experiments, we study diverse few-shot NLP tasks and surprisingly find that in a 250-dimensional subspace found with 100 tasks, by tuning only 250 free parameters, we can recover 97% and 83% of the full prompt tuning performance for 100 seen tasks (using different training data) and 20 unseen tasks, respectively, showing the strong generalization ability of the found intrinsic task subspace. Besides being an analysis tool, IPT could further bring practical benefits, such as improving the stability of prompt tuning. Comment: Withdrawn from Findings of ACL 202
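    A minimal sketch of the reparameterization (class name, decoder architecture, and dimensions are illustrative assumptions, not the released IPT code): soft prompts are generated by a shared decoder from a low-dimensional task vector; once the decoder has been learned on many tasks and frozen, adapting to a new task tunes only that vector.

    ```python
    import torch
    import torch.nn as nn

    class IntrinsicPrompt(nn.Module):
        """Reparameterize a soft prompt as decoder(z) with a low-dimensional z.
        The decoder is shared across tasks and frozen; only z is tuned per task."""

        def __init__(self, intrinsic_dim=250, prompt_len=100, hidden_dim=768):
            super().__init__()
            self.z = nn.Parameter(torch.zeros(intrinsic_dim))      # per-task parameters
            self.decoder = nn.Sequential(                          # shared nonlinear subspace
                nn.Linear(intrinsic_dim, 512), nn.Tanh(),
                nn.Linear(512, prompt_len * hidden_dim))
            for p in self.decoder.parameters():                    # frozen after multi-task training
                p.requires_grad = False
            self.prompt_len, self.hidden_dim = prompt_len, hidden_dim

        def forward(self):
            return self.decoder(self.z).view(self.prompt_len, self.hidden_dim)

    if __name__ == "__main__":
        prompt_gen = IntrinsicPrompt()
        soft_prompt = prompt_gen()              # [100, 768], prepended to the input embeddings
        trainable = sum(p.numel() for p in prompt_gen.parameters() if p.requires_grad)
        print(soft_prompt.shape, trainable)     # only 250 parameters are tuned per task
    ```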

    An acquired phosphatidylinositol 4-phosphate transport initiates T-cell deterioration and leukemogenesis

    Get PDF
    Lipid remodeling is crucial for malignant cell transformation and tumorigenesis, but the precise molecular processes involved, and direct evidence for them in vivo, remain elusive. Here, we report that oxysterol-binding protein (OSBP)-related protein 4L (ORP4L) is expressed in adult T-cell leukemia (ATL) cells but not in normal T-cells. In ORP4L knock-in T-cells, ORP4L dimerizes with OSBP to control the shuttling of OSBP between the Golgi apparatus and the plasma membrane (PM) as an exchanger of phosphatidylinositol 4-phosphate [PI(4)P]/cholesterol. The PI(4)P arriving at the PM via this transport machinery replenishes phosphatidylinositol 4,5-bisphosphate [PI(4,5)P2] and phosphatidylinositol 3,4,5-trisphosphate [PI(3,4,5)P3] biosynthesis, thus contributing to PI3K/AKT hyperactivation and T-cell deterioration in vitro and in vivo. Disruption of ORP4L and OSBP dimerization disables PI(4)P transport and T-cell leukemogenesis. In summary, we identify a non-vesicular lipid transport machinery between the Golgi and the PM that maintains the oncogenic signaling competence initiating T-cell deterioration and leukemogenesis. The oxysterol-binding protein-related protein 4L (ORP4L) is expressed in T-cell acute lymphoblastic leukemia and is required for leukemogenesis. Here the authors show that ORP4L orchestrates the transport of the phospholipid PI(4)P from the Golgi to the plasma membrane, contributing to PI3K/AKT hyperactivation and T-cell leukemogenesis.

    ELK4 exerts opposite roles in cytokine/chemokine production and degranulation in activated mast cells

    Get PDF
    The proliferative potential of mast cells after activation for 3–4 h was found to be decreased, which suggests that mast cell degranulation and cell proliferation are differentially regulated. ELK4, a member of the ternary complex factor (TCF) subfamily of Ets transcription factors, is one of the downstream effectors of MAPK signaling that is critical for cell proliferation. Elk4 has also been identified as vital for macrophage activation in response to zymosan and for the transcriptional response to 12-O-tetradecanoylphorbol-13-acetate (TPA) stimulation in fibroblasts. However, the effect of ELK4 on the mast cell transcriptional response to FcϵRI- and GPCR-mediated activation and its potential functional significance in mast cells remain unclear. Here, we showed that ELK4 expression is downregulated in activated mast cells. Elk4 knockout suppresses cell proliferation and impedes the cell cycle in bone marrow-derived mast cells (BMMCs), which is associated with decreased transcription of cell cycle genes. Additionally, the transcriptional activation of cytokines and chemokines is diminished while mast cell degranulation is enhanced in Elk4 knockout BMMCs. Mechanistically, ELK4 might positively modulate Hdc, Ccl3 and Ccl4 transcription by interacting with MITF and negatively regulate the transcription of degranulation-related genes by complexing with SIRT6. Overall, our study identifies a new physiological role of the transcription factor ELK4 in mast cell proliferation and activation.

    3D genome architecture coordinates trans and cis regulation of differentially expressed ear and tassel genes in maize.

    Get PDF
    BACKGROUND: Maize ears and tassels are two separate types of inflorescence that are initiated by similar developmental processes but gradually develop distinct architectures. However, the coordinated trans and cis regulation of differentially expressed genes determining ear and tassel architecture within the 3D genome context is largely unknown. RESULTS: We identify 56,055 and 52,633 open chromatin regions (OCRs) in developing maize ear and tassel primordia using ATAC-seq and characterize combinatorial epigenome features around these OCRs using ChIP-seq, Bisulfite-seq, and RNA-seq datasets. Our integrative analysis of coordinated epigenetic modification and transcription factor binding to OCRs highlights the cis and trans regulation of differentially expressed genes in ear and tassel controlling inflorescence architecture. We further systematically map chromatin interactions at high resolution in the corresponding tissues using in situ digestion-ligation-only Hi-C (DLO Hi-C). The extensive chromatin loops connecting OCRs and genes provide a 3D view of the cis- and trans-regulatory modules responsible for ear- and tassel-specific gene expression. We find that intergenic SNPs tend to be located in distal OCRs, and our chromatin interaction maps suggest a potential mechanism by which trait-associated intergenic SNPs contribute to phenotypic variation: influencing target gene expression through chromatin loops. CONCLUSIONS: Our comprehensive epigenome annotations and 3D genome maps serve as a valuable resource and provide a deeper understanding of the complex regulatory mechanisms of genes underlying the developmental and morphological differences between maize ear and tassel.
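    The SNP-to-gene reasoning above can be illustrated with a small interval-overlap sketch (toy coordinates, record layout, and names are assumptions, not the paper's pipeline): a trait-associated intergenic SNP is first assigned to the distal OCR containing it, and the OCR is then linked to a candidate target gene when the two fall in opposite anchors of a chromatin loop.

    ```python
    from collections import namedtuple

    # Toy records; a real analysis would use genome-wide BED and loop files.
    Interval = namedtuple("Interval", "chrom start end name")

    def contains(iv, chrom, pos):
        return iv.chrom == chrom and iv.start <= pos < iv.end

    def overlap(a, b):
        return a.chrom == b.chrom and a.start < b.end and b.start < a.end

    def snp_to_genes(snp_chrom, snp_pos, ocrs, loops, genes):
        """Assign a SNP to distal OCRs, then follow chromatin loops to genes
        whose promoters overlap the other loop anchor."""
        hits = []
        for ocr in (o for o in ocrs if contains(o, snp_chrom, snp_pos)):
            for a, b in loops:                         # each loop is a pair of anchor Intervals
                for near, far in ((a, b), (b, a)):
                    if overlap(near, ocr):
                        hits += [(ocr.name, g.name) for g in genes if overlap(far, g)]
        return hits

    if __name__ == "__main__":
        ocrs = [Interval("chr1", 120_000, 121_000, "distal_OCR_1")]
        genes = [Interval("chr1", 450_000, 452_000, "ZmGene_A_promoter")]
        loops = [(Interval("chr1", 119_500, 121_500, "anchor_L"),
                  Interval("chr1", 449_000, 453_000, "anchor_R"))]
        print(snp_to_genes("chr1", 120_500, ocrs, loops, genes))
        # -> [('distal_OCR_1', 'ZmGene_A_promoter')]
    ```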